+ At the moment rsync reads the whole file list into memory at the
+ start, which makes us use a lot of memory and also prevents us from
+ pipelining network access as much as we could.
+
+
+Handling duplicate names
+
+ We need to be careful of duplicate names getting into the file list.
+ See clean_flist(). This could happen if multiple arguments include
+ the same file, which would be bad.
+
+ I think duplicates are only a problem if they're both flowing
+ through the pipeline at the same time. For example we might have
+ updated the first occurrence after reading the checksums for the
+ second. So possibly we just need to make sure that we don't have
+ both in the pipeline at the same time.
+
+ Possibly if we did one directory at a time that would be sufficient.
+
+ Alternatively we could pre-process the arguments to make sure no
+ duplicates will ever be inserted. There could be some bad cases
+ when we're collapsing symlinks.
+
+ We could have a hash table.
+
+ The root of the problem is that we do not want more than one file
+ list entry referring to the same file. At first glance there are
+ several ways this could happen: symlinks, hardlinks, and repeated
+ names on the command line.
+
+ If names are repeated on the command line, they may be present in
+ different forms, perhaps arrived at by traversing directory paths in
+ different ways, or through paths that include symlinks. We also
+ need to allow for rsync expanding globs itself.
+
+ At the moment, clean_flist() requires having the entire file list in
+ memory. Duplicate names are detected just by a string comparison.
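+ A minimal sketch of that string-comparison approach (the helper name
+ and the sample names are invented for illustration, not rsync's real
+ code): sort the list, then drop adjacent entries that compare equal.
+ This is O(n log n) and needs no hash table, but it still wants the
+ whole list in memory at once.

```c
/* Sketch: remove duplicate names by sorting and comparing neighbours. */
#include <stdlib.h>
#include <string.h>

static int cmp_name(const void *a, const void *b)
{
	return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* Sort names[0..n-1], squeeze out duplicates in place, return the
 * new count. */
static int clean_names(const char **names, int n)
{
	int i, out;

	if (n == 0)
		return 0;
	qsort(names, n, sizeof *names, cmp_name);
	for (i = 1, out = 1; i < n; i++) {
		if (strcmp(names[i], names[out - 1]) != 0)
			names[out++] = names[i];
	}
	return out;
}
```

+ For example, {"a/b", "a", "a/b", "c"} is reduced to {"a", "a/b",
+ "c"}. Note this only catches textually identical names; two
+ different spellings of the same path would still slip through.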
+
+ We don't need to worry about hard links causing duplicates because
+ files are never updated in place. Similarly for symlinks.
+
+ I think even if we're using a different symlink mode we don't need
+ to worry.
+
+ Unless we're really clever this will introduce a protocol
+ incompatibility, so we need to be able to accept the old format as
+ well.
+
+
+Memory accounting
+
+ At exit, show how much memory was used for the file list, etc.
+
+ Also we do a weird exponential-growth allocation in flist.c. I'm
+ not sure this makes sense with modern mallocs. At any rate it will
+ make us allocate a huge amount of memory for large file lists.
+
+
+Hard-link handling
+
+ At the moment hardlink handling is very expensive, so it's off by
+ default. It does not need to be so.
+
+ Since most of the solutions are rather intertwined with the file
+ list it is probably better to fix that first, although fixing
+ hardlinks is possibly simpler.
+
+ We can rule out hardlinked directories since they will probably
+ screw us up in all kinds of ways. They simply should not be used.
+
+ At the moment rsync only cares about hardlinks to regular files. I
+ guess you could also use them for sockets, devices and other beasts,
+ but I have not seen them.
+
+ When trying to reproduce hard links, we only need to worry about
+ files that have more than one name (st_nlink > 1 && !S_ISDIR(st_mode)).
+
+ The basic point of this is to discover alternate names that refer to
+ the same file. All operations, including creating the file and
+ writing modifications to it need only to be done for the first name.
+ For all later names, we just create the link and then leave it
+ alone.
+
+ If hard links are to be preserved:
+
+ Before the generator/receiver fork, the list of files is received
+ from the sender (recv_file_list), and a table for detecting hard
+ links is built.
+
+ The generator looks for hard links within the file list and does
+ not send checksums for them, though it does send other metadata.
+
+ The sender sends the device number and inode with file entries, so
+ that files are uniquely identified.
+
+ The receiver goes through and creates hard links (do_hard_links)
+ after all data has been written, but before directory permissions
+ are set.
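+ The table for detecting hard links might look something like this
+ (struct file_entry and its fields are invented for illustration;
+ rsync's real file list entries differ): sort the entries by
+ (dev, ino), then mark every name after the first in a group as a
+ link back to the group head.

```c
/* Sketch of hard-link detection over a received file list. */
#include <stdlib.h>

struct file_entry {
	long dev, ino;
	int link_to;	/* index of the first name, or -1 if this is it */
};

static int cmp_dev_ino(const void *a, const void *b)
{
	const struct file_entry *fa = a, *fb = b;
	if (fa->dev != fb->dev)
		return fa->dev < fb->dev ? -1 : 1;
	if (fa->ino != fb->ino)
		return fa->ino < fb->ino ? -1 : 1;
	return 0;
}

static void find_hard_links(struct file_entry *list, int n)
{
	int i, head = 0;

	qsort(list, n, sizeof *list, cmp_dev_ino);
	for (i = 0; i < n; i++) {
		if (i > 0 && cmp_dev_ino(&list[i], &list[head]) == 0) {
			list[i].link_to = head;	/* later name: just link it */
		} else {
			head = i;
			list[i].link_to = -1;	/* first name: transfer data */
		}
	}
}
```

+ Only the head of each group gets checksums and data; every other
+ member is created with link() afterwards.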
+
+ At the moment the device and inode numbers are sent as 4-byte
+ integers, which will probably cause problems on large filesystems.
+ On Linux the kernel uses 64-bit ino_t's internally, and people will
+ soon have filesystems big enough to use them. We ought to follow
+ NFSv4 in using 64-bit device and inode identification, perhaps with
+ a protocol version bump.
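+ A fixed-width wire encoding for those identifiers could be as simple
+ as eight little-endian bytes each way. (This is only a sketch of
+ the idea, not rsync's actual encoding, which would have to stay
+ compatible with old peers.)

```c
/* Sketch: portable 64-bit encode/decode, little-endian on the wire. */
#include <stdint.h>

static void put_u64(unsigned char *buf, uint64_t v)
{
	int i;
	for (i = 0; i < 8; i++)
		buf[i] = (unsigned char)(v >> (8 * i));
}

static uint64_t get_u64(const unsigned char *buf)
{
	uint64_t v = 0;
	int i;
	for (i = 0; i < 8; i++)
		v |= (uint64_t)buf[i] << (8 * i);
	return v;
}
```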
+
+ Once we've seen all the names for a particular file, we no longer
+ need to think about it and we can deallocate the memory.
+
+ We can also have the case where there are links to a file that are
+ not in the tree being transferred. There's nothing we can do about
+ that. Because we rename the destination into place after writing,
+ any hardlinks to the old file are always going to be orphaned. In
+ fact that is almost necessary because otherwise we'd get really
+ confused if we were generating checksums for one name of a file and
+ modifying another.
+
+ At the moment the code seems to make a whole second copy of the file
+ list, which seems unnecessary.
+
+ We should have a test case that exercises hard links. Since it
+ might be hard to compare ./tls output when the inodes change, we
+ might need a little program to check whether several names refer to
+ the same file.
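+ The little program could be little indeed: just stat() each name
+ and compare st_dev and st_ino. (The function name here is ours,
+ not an existing rsync helper.)

```c
/* Sketch: do two names refer to the same file? */
#include <sys/stat.h>

/* Returns 1 if a and b name the same file, 0 if not, -1 on error. */
static int same_file(const char *a, const char *b)
{
	struct stat sa, sb;

	if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
		return -1;
	return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}
```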
+
+
+
+Handling IPv6 on old machines
+
+ The KAME IPv6 patch is nice in theory but has proved a bit of a
+ nightmare in practice. The basic idea of their patch is that rsync
+ is rewritten to use the new getaddrinfo()/getnameinfo() interface,
+ rather than gethostbyname()/gethostbyaddr() as in rsync 2.4.6.
+ Systems that don't have the new interface are handled by providing
+ our own implementation in lib/, which is selectively linked in.
+
+ The problem with this is that it is really hard to get right on
+ platforms that have a half-working implementation, where redefining
+ these functions clashes with system headers, but leaving them out
+ breaks the build. This affects at least OSF/1, RedHat 5, and
+ Cobalt, which are moderately important.
+
+ Perhaps the simplest solution would be to have two different files
+ implementing the same interface, and choose either the new or the
+ old API. This is probably necessary for systems that e.g. have
+ IPv6, but gethostbyaddr() can't handle it. The Linux manpage claims
+ this is currently the case.
+
+ In fact, our internal sockets interface (things like
+ open_socket_out(), etc) is much narrower than the getaddrinfo()
+ interface, and so probably simpler to get right. In addition, the
+ old code is known to work well on old machines.
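+ To illustrate how narrow that interface can be: something like the
+ following is all open_socket_out() really needs from the resolver.
+ (The function name is ours, not an existing rsync one; error
+ reporting is trimmed.)

```c
/* Sketch: resolve host/port to a list of candidate addresses. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <stddef.h>

/* Returns an addrinfo list for a stream connection, or NULL on
 * failure; the caller walks the list and frees it with freeaddrinfo(). */
static struct addrinfo *resolve_host(const char *host, const char *port)
{
	struct addrinfo hints, *res;

	memset(&hints, 0, sizeof hints);
	hints.ai_family = AF_UNSPEC;	/* IPv4 or IPv6, whatever works */
	hints.ai_socktype = SOCK_STREAM;
	if (getaddrinfo(host, port, &hints, &res) != 0)
		return NULL;
	return res;
}
```

+ On old systems the same signature could be backed by
+ gethostbyname(), so the rest of rsync never sees the difference.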
+
+ We could drop the rather large lib/getaddrinfo files.
+
+
+Other IPv6 stuff:
+
+ Implement suggestions from http://www.kame.net/newsletter/19980604/
+ and ftp://ftp.iij.ad.jp/pub/RFC/rfc2553.txt
+
+ If a host has multiple addresses, then try to connect to each in
+ order until one gets through. (getaddrinfo() may return multiple
+ addresses.) This is kind of implemented already.
+
+ Possibly also when starting as a server we may need to listen on
+ multiple passive addresses. This might be a bit harder, because we
+ may need to select on all of them. Hm.