- We need to be careful of duplicate names getting into the file list.
- See clean_flist(). This could happen if multiple arguments include
- the same file. Bad.
-
- I think duplicates are only a problem if they're both flowing
- through the pipeline at the same time. For example we might have
- updated the first occurrence after reading the checksums for the
- second. So possibly we just need to make sure that we don't have
- both in the pipeline at the same time.
-
- Possibly if we did one directory at a time that would be sufficient.
-
- Alternatively we could pre-process the arguments to make sure no
- duplicates will ever be inserted. There could be some bad cases
- when we're collapsing symlinks.
-
- We could have a hash table.
-
- The root of the problem is that we do not want more than one file
- list entry referring to the same file. At first glance there are
- several ways this could happen: symlinks, hardlinks, and repeated
- names on the command line.
-
- If names are repeated on the command line, they may be present in
- different forms, perhaps by traversing directory paths in different
- ways, traversing paths including symlinks. Also we need to allow
- for expansion of globs by rsync.
-
- At the moment, clean_flist() requires having the entire file list in
- memory. Duplicate names are detected just by a string comparison.
-
- We don't need to worry about hard links causing duplicates because
- files are never updated in place. Similarly for symlinks.
-
- I think even if we're using a different symlink mode we don't need
- to worry.
-
- Unless we're really clever this will introduce a protocol
- incompatibility, so we need to be able to accept the old format as
- well.
+ Some folks would like rsync to be deterministic in how it handles
+ duplicate names that come from merging multiple source directories
+ into a single destination directory; e.g. the last name wins. We
+ could do this by switching our sort algorithm to one that will
+ guarantee that the names won't be reordered. Alternatively, we could
+ assign an ever-increasing number to each item as we insert it into
+ the list and then make sure that we leave the largest number when
+ cleaning the file list (see clean_flist()). Another solution would
+ be to add a hash table, and thus never put any duplicate names into
+ the file list (and bump the protocol to handle this).
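+
+ A rough sketch of the "ever-increasing number" idea follows. The
+ struct and function names here are hypothetical, not the real flist
+ code: each entry records the order it was added in, the list is
+ sorted by name, and clean-up keeps the later of any two entries that
+ share a name, so the last name specified wins.
+
+     /* Hypothetical sketch, not rsync's actual flist structures. */
+     #include <stdlib.h>
+     #include <string.h>
+
+     struct entry {
+         char *name;     /* path relative to the transfer root */
+         long order;     /* ever-increasing insertion counter */
+     };
+
+     static int cmp_name(const void *a, const void *b)
+     {
+         return strcmp(((const struct entry *)a)->name,
+                       ((const struct entry *)b)->name);
+     }
+
+     /* Remove duplicate names in place; returns the new count.
+      * When two entries share a name, the one with the larger
+      * order number (i.e. added later) survives. */
+     static int clean_entries(struct entry *list, int count)
+     {
+         int i, kept = 0;
+
+         qsort(list, count, sizeof list[0], cmp_name);
+
+         for (i = 0; i < count; i++) {
+             if (kept && strcmp(list[kept-1].name, list[i].name) == 0) {
+                 if (list[i].order > list[kept-1].order)
+                     list[kept-1] = list[i];
+             } else
+                 list[kept++] = list[i];
+         }
+         return kept;
+     }
+
+ Because qsort() is not guaranteed to be stable, the order field is
+ what makes the result deterministic; a stable sort by name would let
+ us drop the counter and simply keep the last entry of each run of
+ equal names.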