File list structure in memory

  Rather than one big array, perhaps have a tree in memory mirroring
  the directory tree.

  This might make sorting much faster! (I'm not sure it's a big CPU
  problem, mind you.)

  It might also reduce memory use in storing repeated directory names
  -- again I'm not sure this is a problem.
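
  As a minimal sketch of the idea (the struct and field names here are
  hypothetical, not rsync's actual file list structures), each
  directory would store its name once and children inherit the prefix
  from their parent:

      /* Hypothetical sketch only: a tree-shaped file list. */
      struct flist_node {
          char *name;                   /* basename; prefix comes from parent */
          struct flist_node *parent;    /* NULL at the transfer root */
          struct flist_node **children; /* per-directory array; NULL for leaves */
          int num_children;
          /* ... mode, size, mtime, etc., as in the flat list ... */
      };

  Sorting would then be one small qsort() per directory rather than
  one big one over the whole list, and each directory name would be
  stored only once.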

Performance

  Traverse just one directory at a time. Tridge says it's possible.

  At the moment rsync reads the whole file list into memory at the
  start, which makes us use a lot of memory and also keeps us from
  pipelining network access as much as we could.
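
  A rough sketch of what per-directory traversal might look like (this
  just prints names; in rsync it would emit entries over the wire as
  they are found):

      /* Hypothetical sketch: handle one directory's worth of entries
       * at a time, so only one directory need be held in memory. */
      #include <dirent.h>
      #include <stdio.h>
      #include <string.h>

      static void send_one_dir(const char *path)
      {
          DIR *d = opendir(path);
          struct dirent *di;

          if (!d)
              return;
          while ((di = readdir(d)) != NULL) {
              if (!strcmp(di->d_name, ".") || !strcmp(di->d_name, ".."))
                  continue;
              /* emit the entry now; the receiver can start work on it
               * while we are still reading the rest of the directory */
              printf("%s/%s\n", path, di->d_name);
          }
          closedir(d);
          /* subdirectories would be queued and sent later, one at a
           * time, rather than recursed into up front */
      }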


Handling duplicate names

  We need to be careful of duplicate names getting into the file list.
  See clean_flist(). This could happen if multiple arguments include
  the same file. Bad.

  I think duplicates are only a problem if they're both flowing
  through the pipeline at the same time. For example we might have
  updated the first occurrence after reading the checksums for the
  second. So possibly we just need to make sure that we don't have
  both in the pipeline at the same time.

  Possibly if we did one directory at a time that would be sufficient.

  Alternatively we could pre-process the arguments to make sure no
  duplicates will ever be inserted. There could be some bad cases
  when we're collapsing symlinks.

  We could have a hash table.
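
  A minimal sketch of that approach (the hash function and table size
  are arbitrary; a real version would use rsync's own allocation and
  error reporting):

      #include <stdlib.h>
      #include <string.h>

      #define TABLE_SIZE 4093

      struct name_entry {
          char *name;
          struct name_entry *next;
      };

      static struct name_entry *table[TABLE_SIZE];

      static unsigned hash_name(const char *s)
      {
          unsigned h = 0;

          while (*s)
              h = h * 31 + (unsigned char)*s++;
          return h % TABLE_SIZE;
      }

      /* Returns 1 if name was already in the list, 0 if newly added. */
      static int seen_before(const char *name)
      {
          unsigned h = hash_name(name);
          struct name_entry *e;

          for (e = table[h]; e; e = e->next) {
              if (!strcmp(e->name, name))
                  return 1;
          }
          e = malloc(sizeof *e);
          if (!e || !(e->name = strdup(name)))
              abort();      /* a real version would report the error */
          e->next = table[h];
          table[h] = e;
          return 0;
      }

  Checking at insertion time would let us reject duplicates without
  holding a sorted copy of the whole list.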

  The root of the problem is that we do not want more than one file
  list entry referring to the same file. At first glance there are
  several ways this could happen: symlinks, hardlinks, and repeated
  names on the command line.

  If names are repeated on the command line, they may be present in
  different forms: reached by traversing directory paths in different
  ways, or through paths that include symlinks. We also need to allow
  for expansion of globs by rsync.

  At the moment, clean_flist() requires having the entire file list in
  memory. Duplicate names are detected just by a string comparison.

  We don't need to worry about hard links causing duplicates, because
  files are never updated in place. Similarly for symlinks.

  I think even if we're using a different symlink mode we don't need
  to worry.

  Unless we're really clever this will introduce a protocol
  incompatibility, so we need to be able to accept the old format as
  well.


Memory accounting

  At exit, show how much memory was used for the file list, etc.

  Also we do a weird exponential-growth allocation in flist.c. I'm
  not sure this makes sense with modern mallocs. At any rate it will
  make us allocate a huge amount of memory for large file lists.
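
  For reference, the pattern in question looks roughly like this
  (names are illustrative, not the exact code in flist.c):

      #include <stdlib.h>

      struct file_entry;            /* stands in for the per-file struct */

      struct file_list {
          struct file_entry **files;
          int count;
          int malloced;             /* slots currently allocated */
      };

      /* Double the array whenever it fills, so a list of N entries can
       * end up reserving nearly 2*N slots. */
      static void flist_grow(struct file_list *flist)
      {
          if (flist->count < flist->malloced)
              return;
          flist->malloced = flist->malloced ? flist->malloced * 2 : 1000;
          flist->files = realloc(flist->files,
                                 flist->malloced * sizeof *flist->files);
          if (!flist->files)
              abort();              /* a real version would report it */
      }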


Hard-link handling

  At the moment hardlink handling is very expensive, so it's off by
  default. It does not need to be so.

  Since most of the solutions are rather intertwined with the file
  list it is probably better to fix that first, although fixing
  hardlinks is possibly simpler.

  We can rule out hardlinked directories, since they will probably
  screw us up in all kinds of ways. They simply should not be used.

  At the moment rsync only cares about hardlinks to regular files. I
  guess you could also use them for sockets, devices and other beasts,
  but I have not seen them.

  When trying to reproduce hard links, we only need to worry about
  files that have more than one name (nlinks>1 && !S_ISDIR).
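
  In code that check is tiny; given a stat() result for the entry:

      #include <sys/stat.h>

      /* Only multiply-linked non-directories need hard-link handling. */
      static int needs_hardlink_handling(const struct stat *st)
      {
          return st->st_nlink > 1 && !S_ISDIR(st->st_mode);
      }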

  The basic point of this is to discover alternate names that refer to
  the same file. All operations, including creating the file and
  writing modifications to it, need be done only for the first name.
  For all later names, we just create the link and then leave it
  alone.

  If hard links are to be preserved:

    Before the generator/receiver fork, the list of files is received
    from the sender (recv_file_list), and a table for detecting hard
    links is built (see the sketch after this list).

    The generator looks for hard links within the file list and does
    not send checksums for them, though it does send other metadata.

    The sender sends the device number and inode with file entries, so
    that files are uniquely identified.

    The receiver goes through and creates hard links (do_hard_links)
    after all data has been written, but before directory permissions
    are set.
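
  The detection table mentioned above might be keyed on the (device,
  inode) pair, something like this (a hypothetical sketch; the actual
  code currently sorts a second copy of the list instead):

      #include <sys/types.h>
      #include <stdlib.h>
      #include <string.h>

      #define HLINK_BUCKETS 1024

      struct hlink_entry {
          dev_t dev;
          ino_t ino;
          char *first_name;         /* first name seen for this file */
          struct hlink_entry *next;
      };

      static struct hlink_entry *hlink_table[HLINK_BUCKETS];

      /* Return the first name recorded for (dev,ino), or NULL if this
       * is the first occurrence, which is then recorded. */
      static const char *hlink_first_name(dev_t dev, ino_t ino,
                                          const char *name)
      {
          unsigned h = (unsigned)(dev ^ ino) % HLINK_BUCKETS;
          struct hlink_entry *e;

          for (e = hlink_table[h]; e; e = e->next) {
              if (e->dev == dev && e->ino == ino)
                  return e->first_name;
          }
          e = malloc(sizeof *e);
          if (!e || !(e->first_name = strdup(name)))
              abort();
          e->dev = dev;
          e->ino = ino;
          e->next = hlink_table[h];
          hlink_table[h] = e;
          return NULL;
      }

  A later name whose lookup returns non-NULL just becomes a link to
  the first name.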

  At the moment device and inum are sent as 4-byte integers, which
  will probably cause problems on large filesystems. On Linux the
  kernel uses 64-bit ino_t's internally, and people will soon have
  filesystems big enough to use them. We ought to follow NFS4 in
  using 64-bit device and inode identification, perhaps with a
  protocol version bump.
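
  The encoding itself is straightforward; for example (a sketch using
  plain write(), where rsync would use its own buffered I/O layer, and
  the byte order here is an arbitrary choice):

      #include <stdint.h>
      #include <unistd.h>

      /* Send a 64-bit device or inode number, least significant byte
       * first, so both ends agree regardless of host byte order.
       * Error handling is omitted for brevity. */
      static void write_u64(int fd, uint64_t x)
      {
          unsigned char buf[8];
          int i;

          for (i = 0; i < 8; i++)
              buf[i] = (unsigned char)(x >> (8 * i));
          (void)write(fd, buf, sizeof buf);
      }

  The interesting part is negotiating the protocol version, not the
  encoding.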

  Once we've seen all the names for a particular file, we no longer
  need to think about it and we can deallocate the memory.

  We can also have the case where there are links to a file that are
  not in the tree being transferred. There's nothing we can do about
  that. Because we rename the destination into place after writing,
  any hardlinks to the old file are always going to be orphaned. In
  fact that is almost necessary, because otherwise we'd get really
  confused if we were generating checksums for one name of a file and
  modifying another.

  At the moment the code seems to make a whole second copy of the file
  list, which seems unnecessary.

  We should have a test case that exercises hard links. Since it
  might be hard to compare ./tls output when the inodes change, we
  might need a little program to check whether several names refer to
  the same file.
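
  Such a checker could be very small; a sketch (exit 0 if every name
  given refers to the same file, 1 otherwise):

      #include <stdio.h>
      #include <sys/stat.h>

      int main(int argc, char *argv[])
      {
          struct stat first, st;
          int i;

          if (argc < 3) {
              fprintf(stderr, "usage: %s file file...\n", argv[0]);
              return 2;
          }
          if (stat(argv[1], &first) != 0) {
              perror(argv[1]);
              return 2;
          }
          for (i = 2; i < argc; i++) {
              if (stat(argv[i], &st) != 0) {
                  perror(argv[i]);
                  return 2;
              }
              /* same file iff device and inode both match */
              if (st.st_dev != first.st_dev || st.st_ino != first.st_ino)
                  return 1;
          }
          return 0;
      }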