for people who want to generate the file list using a find(1)
command or a script.
+
Performance
Traverse just one directory at a time.  Tridge says it's possible.
At the moment we read the whole file list into memory at the
start, which makes us use a lot of memory and also not pipeline
network access as much as we could.
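The one-directory-at-a-time idea might look roughly like this (a sketch only; the names here are hypothetical, not rsync's): read entries with opendir()/readdir() and hand each one to a callback as soon as it is seen, so entries could be pipelined onto the network while the traversal is still running.

```c
#include <dirent.h>
#include <string.h>

/* Hypothetical sketch (not rsync code): walk a single directory and
 * hand each entry to a callback as soon as it is read, instead of
 * accumulating the whole file list first.  The callback stands in
 * for whatever would send the entry over the wire.  Returns the
 * number of entries, or -1 if the directory could not be opened. */
long walk_one_dir(const char *path, void (*visit)(const char *name))
{
    DIR *d = opendir(path);
    struct dirent *de;
    long n = 0;

    if (!d)
        return -1;
    while ((de = readdir(d)) != NULL) {
        if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
            continue;
        if (visit)
            visit(de->d_name);  /* entry can be pipelined immediately */
        n++;
    }
    closedir(d);
    return n;
}
```

Recursion would then descend into subdirectories only after the current directory's entries have been emitted, bounding memory by directory size rather than tree size.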
+
+Handling duplicate names
+
+ We need to be careful of duplicate names getting into the file list.
+ See clean_flist(). This could happen if multiple arguments include
+ the same file. Bad.
+
+ I think duplicates are only a problem if they're both flowing
+ through the pipeline at the same time. For example we might have
+ updated the first occurrence after reading the checksums for the
+ second. So possibly we just need to make sure that we don't have
+ both in the pipeline at the same time.
+
+ Possibly if we did one directory at a time that would be sufficient.
+
+ Alternatively we could pre-process the arguments to make sure no
+ duplicates will ever be inserted. There could be some bad cases
+ when we're collapsing symlinks.
+
+ We could keep a hash table of names already inserted, and check
+ each new name against it before adding it to the list.
+
+ The root of the problem is that we do not want more than one file
+ list entry referring to the same file. At first glance there are
+ several ways this could happen: symlinks, hardlinks, and repeated
+ names on the command line.
+
+ If names are repeated on the command line, they may appear in
+ different forms: reached through different directory paths, or
+ through paths that include symlinks.  We also need to allow for
+ globs being expanded by rsync itself.
+
+ At the moment, clean_flist() requires having the entire file list in
+ memory. Duplicate names are detected just by a string comparison.
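What clean_flist() does can be modelled roughly like this (simplified to an array of strings rather than rsync's file list entries): sort the whole list, then drop entries that string-compare equal to their predecessor. The sort is the step that forces the entire list into memory at once.

```c
#include <stdlib.h>
#include <string.h>

/* Simplified model of clean_flist(): sort the names, then keep only
 * the first of each run of string-equal entries.  Returns the new
 * length; survivors are packed at the front of the array. */
static int cmp_name(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

size_t clean_list(const char **names, size_t n)
{
    size_t in, out;

    if (n < 2)
        return n;
    qsort(names, n, sizeof *names, cmp_name);
    for (in = out = 1; in < n; in++)
        if (strcmp(names[in], names[out - 1]) != 0)
            names[out++] = names[in];
    return out;
}
```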
+
+ We don't need to worry about hard links causing duplicates because
+ files are never updated in place. Similarly for symlinks.
+
+ I think even if we're using a different symlink mode we don't need
+ to worry.
+
+ Unless we're really clever this will introduce a protocol
+ incompatibility, so we need to be able to accept the old format as
+ well.
+
+
Memory accounting
At exit, show how much memory was used for the file list, etc.
+ Also we do a weird exponential-growth allocation in flist.c.  I'm
+ not sure this makes sense with modern mallocs. At any rate it will
+ make us allocate a huge amount of memory for large file lists.
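The growth policy in question is (roughly) capacity doubling; a sketch of why it over-allocates — for n entries it can reserve almost 2n slots:

```c
#include <stddef.h>

/* Model of the exponential growth in flist.c: when the array is
 * full, double its capacity.  For 600,000 entries starting from a
 * capacity of 1000 this reserves 1,024,000 slots, i.e. up to
 * nearly twice what is actually needed. */
size_t grown_capacity(size_t cap, size_t want)
{
    while (cap < want)
        cap *= 2;
    return cap;
}
```

A smaller growth factor (3/2, say), or sizing the array from a first pass over the arguments, would waste less.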
+
+ We can try using the GNU/SVID/XPG mallinfo() function to get some
+ heap statistics.
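On glibc that looks like the following (glibc-specific; newer glibc also offers mallinfo2(), whose fields are size_t rather than int and so don't overflow on large heaps):

```c
#include <malloc.h>     /* glibc-specific header */

/* Report bytes currently allocated on the heap, using the
 * GNU/SVID/XPG mallinfo() call mentioned above.  uordblks is the
 * total size of in-use blocks; fordblks would give the free ones. */
long heap_in_use(void)
{
    struct mallinfo mi = mallinfo();
    return (long)mi.uordblks;
}
```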
+
+
Hard-link handling
At the moment hardlink handling is very expensive, so it's off by
default. It does not need to be so.
+ Since most of the solutions are rather intertwined with the file
+ list it is probably better to fix that first, although fixing
+ hardlinks is possibly simpler.
+
We can rule out hardlinked directories since they will probably
screw us up in all kinds of ways. They simply should not be used.
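For reference, the per-file hard-link test itself is cheap: two names refer to the same file exactly when their (st_dev, st_ino) pairs match. What is expensive today is doing this across the whole file list at once. A sketch of the pairwise check:

```c
#include <sys/stat.h>

/* Two stat results describe the same file iff device and inode
 * numbers both match; they are hard links to each other if, in
 * addition, the link count is above one. */
int same_inode(const struct stat *a, const struct stat *b)
{
    return a->st_dev == b->st_dev && a->st_ino == b->st_ino;
}

int is_hardlink_pair(const struct stat *a, const struct stat *b)
{
    return a->st_nlink > 1 && same_inode(a, b);
}
```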
hang/timeout friendliness
- On
-
verbose output
Indicate whether files are new, updated, or deleted
current host, directory and so on. We can probably even do
completion of remote filenames.
+%K%