+
+Handling duplicate names
+
+ We need to be careful of duplicate names getting into the file list
+ (see clean_flist()). This can happen if multiple command-line
+ arguments include the same file. Bad.
+
+ I think duplicates are only a problem while both copies are flowing
+ through the pipeline at the same time. For example, we might update
+ the first occurrence after having already read the checksums for
+ the second. So possibly we just need to make sure that the two are
+ never in the pipeline simultaneously.
+
+ Possibly handling one directory at a time would be sufficient.
+
+ Alternatively, we could pre-process the arguments to make sure no
+ duplicates are ever inserted. There could be some bad cases when
+ we're collapsing symlinks.
+
+ We could keep a hash table of the names already added, and check
+ each new entry against it.
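+
+ A rough sketch of that idea in self-contained C (the names and
+ structures here are invented for illustration; they are not
+ rsync's): check each name against a chained hash table before
+ adding it to the list.
+
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include <string.h>
+
+    #define TABLE_SIZE 4096
+
+    struct name_entry {
+        char *name;
+        struct name_entry *next;
+    };
+
+    static struct name_entry *table[TABLE_SIZE];
+
+    /* Simple string hash (djb2). */
+    static unsigned hash_name(const char *s)
+    {
+        unsigned h = 5381;
+        while (*s)
+            h = h * 33 + (unsigned char)*s++;
+        return h % TABLE_SIZE;
+    }
+
+    /* Return 1 if name was seen before; otherwise remember it
+     * and return 0. */
+    static int seen_before(const char *name)
+    {
+        unsigned h = hash_name(name);
+        struct name_entry *e;
+
+        for (e = table[h]; e; e = e->next) {
+            if (strcmp(e->name, name) == 0)
+                return 1;
+        }
+        e = malloc(sizeof *e);
+        e->name = strdup(name);
+        e->next = table[h];
+        table[h] = e;
+        return 0;
+    }
+
+    int main(void)
+    {
+        const char *args[] = { "dir/file", "other", "dir/file" };
+        int i;
+
+        for (i = 0; i < 3; i++) {
+            if (seen_before(args[i]))
+                printf("duplicate, skipping: %s\n", args[i]);
+            else
+                printf("added: %s\n", args[i]);
+        }
+        return 0;
+    }
+
+ For this to work the table would have to be keyed on the cleaned-up
+ form of each name, otherwise two spellings of the same path would
+ slip past it.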
+
+ The root of the problem is that we do not want more than one file
+ list entry referring to the same file. At first glance there are
+ several ways this could happen: symlinks, hard links, and repeated
+ names on the command line.
+
+ If names are repeated on the command line, they may be present in
+ different forms: the same file can be reached by traversing
+ directory paths in different ways, or through paths that include
+ symlinks. We also need to allow for rsync's expansion of globs.
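+
+ To make the "different forms" problem concrete, here is a toy
+ name-cleaning pass in C (illustrative only, not rsync's code). It
+ collapses "//" and "/./" so that different spellings of a path
+ compare equal, but it ignores ".." and symlinks, which is exactly
+ where the hard cases live.
+
+    #include <stdio.h>
+    #include <string.h>
+
+    /* Collapse "//" and "/./" sequences in place. */
+    static void clean_name(char *p)
+    {
+        char *out = p;
+
+        while (*p) {
+            if (p[0] == '/' && p[1] == '/') {
+                p++;                        /* "//" -> "/" */
+            } else if (p[0] == '/' && p[1] == '.'
+                       && (p[2] == '/' || p[2] == '\0')) {
+                p += 2;                     /* "/./" -> "/" */
+            } else {
+                *out++ = *p++;
+            }
+        }
+        *out = '\0';
+    }
+
+    int main(void)
+    {
+        char a[] = "dir//sub/./file";
+        char b[] = "dir/sub/file";
+
+        clean_name(a);
+        clean_name(b);
+        printf("%s %s %s\n", a,
+               strcmp(a, b) == 0 ? "==" : "!=", b);
+        return 0;
+    }
+
+ Resolving symlinks as well would need lstat()/readlink() on each
+ path component, which is far more expensive.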
+
+ At the moment, clean_flist() requires having the entire file list in
+ memory. Duplicate names are detected just by a string comparison.
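+
+ A simplified sketch of that whole-list-in-memory approach (the real
+ list holds rsync's file entries rather than bare strings; this is
+ only illustrative): sort the names, then drop any entry that equals
+ its predecessor.
+
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include <string.h>
+
+    static int cmp_name(const void *a, const void *b)
+    {
+        return strcmp(*(const char *const *)a,
+                      *(const char *const *)b);
+    }
+
+    /* Sort the list and squeeze out adjacent duplicates.
+     * Returns the new length. */
+    static int clean_list(const char **names, int n)
+    {
+        int i, out = 0;
+
+        qsort(names, n, sizeof *names, cmp_name);
+        for (i = 0; i < n; i++) {
+            if (out > 0 && strcmp(names[i], names[out - 1]) == 0)
+                continue;                   /* duplicate: skip it */
+            names[out++] = names[i];
+        }
+        return out;
+    }
+
+    int main(void)
+    {
+        const char *names[] = { "b", "a", "b", "c", "a" };
+        int i, n = clean_list(names, 5);
+
+        for (i = 0; i < n; i++)
+            printf("%s\n", names[i]);       /* a, b, c */
+        return 0;
+    }
+
+ The drawback is that nothing can be cleaned until the whole list has
+ been built, which is what the hash-table and one-directory-at-a-time
+ ideas above try to avoid.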
+
+ We don't need to worry about hard links causing duplicates, because
+ files are never updated in place: a new copy is written and then
+ renamed over the destination. Similarly for symlinks.
+
+ I think we don't need to worry even if we're using a different
+ symlink-handling mode.
+
+ Unless we're really clever this will introduce a protocol
+ incompatibility, so we need to be able to accept the old format as
+ well.
+
+