+Traverse just one directory at a time
+
+ Traverse just one directory at a time. Tridge says it's possible.
+
+ At the moment rsync reads the whole file list into memory at the
+ start, which consumes a lot of memory and also prevents us from
+ pipelining network access as much as we could.
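+ A minimal sketch of what one-directory-at-a-time traversal could
+ look like, using a queue of pending directories so that only one
+ directory's entries are in memory at once. walk() and its entry
+ counting are illustrative stand-ins, not rsync's actual flist code.

```c
/* Sketch: breadth-first walk that reads one directory's entries at a
 * time instead of building the whole file list up front.  Each entry
 * could be sent down the wire as soon as it is seen, letting network
 * traffic overlap with the walk.  Returns the number of entries seen. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

struct dirq { char path[1024]; struct dirq *next; };

static long walk(const char *root)
{
    struct dirq *head = malloc(sizeof *head), *tail = head;
    long count = 0;

    snprintf(head->path, sizeof head->path, "%s", root);
    head->next = NULL;

    while (head) {
        struct dirq *d = head;
        DIR *dp = opendir(d->path);
        struct dirent *e;

        if (dp) {
            while ((e = readdir(dp)) != NULL) {
                char full[2048];
                struct stat st;

                if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                    continue;
                snprintf(full, sizeof full, "%s/%s", d->path, e->d_name);
                count++;    /* a real sender would emit the entry here */
                if (lstat(full, &st) == 0 && S_ISDIR(st.st_mode)) {
                    struct dirq *nd = malloc(sizeof *nd);
                    snprintf(nd->path, sizeof nd->path, "%s", full);
                    nd->next = NULL;
                    tail->next = nd;   /* queue the subdir for later */
                    tail = nd;
                }
            }
            closedir(dp);
        }
        head = d->next;
        free(d);
    }
    return count;
}
```

+ The point of the queue is that memory use is bounded by the widest
+ directory, not by the size of the whole tree.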
+
+ -- --
+
+
+Hard-link handling
+
+ At the moment hardlink handling is very expensive, so it's off by
+ default. It does not need to be so.
+
+ Since most of the solutions are rather intertwined with the file
+ list, it is probably better to fix that first, although fixing
+ hardlinks is possibly simpler.
+
+ We can rule out hardlinked directories since they will probably
+ screw us up in all kinds of ways. They simply should not be used.
+
+ At the moment rsync only cares about hardlinks to regular files. I
+ guess you could also use them for sockets, devices and other beasts,
+ but I have not seen them.
+
+ When trying to reproduce hard links, we only need to worry about
+ files that have more than one name (nlinks>1 && !S_ISDIR).
+
+ The basic point of this is to discover alternate names that refer
+ to the same file. All operations, including creating the file and
+ writing modifications to it, need only be done for the first name.
+ For all later names, we just create the link and then leave it
+ alone.
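+ As a sketch, the first-name bookkeeping could be keyed on the
+ (st_dev, st_ino) pair. hlink_check() below is a hypothetical
+ helper, using a linear list for clarity where a real implementation
+ would want a hash table or the sorted file list.

```c
/* Sketch: spotting alternate names for the same file by keying on
 * (st_dev, st_ino).  The first name seen becomes the "master"; later
 * names only need a link() back to it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

struct hlink {
    dev_t dev;
    ino_t ino;
    char master[1024];          /* first name seen for this inode */
    struct hlink *next;
};

static struct hlink *hlink_list;

/* Returns the first-seen name for this (dev,ino), or NULL if this is
 * the first time we have seen it (in which case it is recorded). */
const char *hlink_check(const struct stat *st, const char *name)
{
    struct hlink *h;

    if (S_ISDIR(st->st_mode) || st->st_nlink <= 1)
        return NULL;            /* not a candidate: nlink>1 && !S_ISDIR */

    for (h = hlink_list; h; h = h->next)
        if (h->dev == st->st_dev && h->ino == st->st_ino)
            return h->master;   /* later name: just link back to this */

    h = malloc(sizeof *h);
    h->dev = st->st_dev;
    h->ino = st->st_ino;
    snprintf(h->master, sizeof h->master, "%s", name);
    h->next = hlink_list;
    hlink_list = h;
    return NULL;                /* first name: transfer normally */
}
```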
+
+ If hard links are to be preserved:
+
+ Before the generator/receiver fork, the list of files is received
+ from the sender (recv_file_list), and a table for detecting hard
+ links is built.
+
+ The generator looks for hard links within the file list and does
+ not send checksums for them, though it does send other metadata.
+
+ The sender sends the device number and inode with file entries, so
+ that files are uniquely identified.
+
+ The receiver goes through and creates hard links (do_hard_links)
+ after all data has been written, but before directory permissions
+ are set.
+
+ At the moment device and inum are sent as 4-byte integers, which
+ will probably cause problems on large filesystems. On Linux the
+ kernel uses 64-bit ino_t's internally, and people will soon have
+ filesystems big enough to use them. We ought to follow NFS4 in
+ using 64-bit device and inode identification, perhaps with a
+ protocol version bump.
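+ A sketch of what the widened wire encoding could look like.
+ put_u64()/get_u64() are illustrative names, not existing protocol
+ routines, and the change would still need the version bump so that
+ old peers keep exchanging 4-byte values.

```c
/* Sketch: 8-byte little-endian encoding for device and inode numbers
 * on the wire, replacing the current 4-byte integers. */
#include <stdint.h>

void put_u64(unsigned char *buf, uint64_t v)
{
    int i;
    for (i = 0; i < 8; i++)
        buf[i] = (unsigned char)(v >> (8 * i));   /* little-endian */
}

uint64_t get_u64(const unsigned char *buf)
{
    uint64_t v = 0;
    int i;
    for (i = 0; i < 8; i++)
        v |= (uint64_t)buf[i] << (8 * i);
    return v;
}
```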
+
+ Once we've seen all the names for a particular file, we no longer
+ need to think about it and we can deallocate the memory.
+
+ We can also have the case where there are links to a file that are
+ not in the tree being transferred. There's nothing we can do about
+ that. Because we rename the destination into place after writing,
+ any hardlinks to the old file are always going to be orphaned. In
+ fact that is almost necessary because otherwise we'd get really
+ confused if we were generating checksums for one name of a file and
+ modifying another.
+
+ At the moment the code seems to make a whole second copy of the file
+ list, which seems unnecessary.
+
+ We should have a test case that exercises hard links. Since it
+ might be hard to compare ./tls output where the inodes change, we
+ might need a little program to check whether several names refer to
+ the same file.
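+ The core of that little program could be as simple as comparing
+ (st_dev, st_ino) pairs; same_file() is a hypothetical name for it.

```c
/* Sketch of the "little program" suggested above: report whether two
 * names refer to the same file by comparing (st_dev, st_ino), rather
 * than comparing raw inode numbers, which change between test runs. */
#include <stdlib.h>
#include <sys/stat.h>

/* Returns 1 if both names resolve to the same inode, 0 if not,
 * -1 if either name cannot be stat'd. */
int same_file(const char *a, const char *b)
{
    struct stat sa, sb;

    if (lstat(a, &sa) != 0 || lstat(b, &sb) != 0)
        return -1;
    return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}
```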
+
+ -- --
+
+
+Allow skipping MD4 file_sum 2002/04/08
+
+ If we're doing a local transfer, or using -W, then perhaps don't
+ send the file checksum. For a local transfer, calculating MD4
+ checksums uses 90% of the CPU and is unlikely to be useful.
+
+ Indeed, for transfers over zlib or ssh, we can also rely on the
+ transport to have quite strong protection against corruption.
+
+ Perhaps we should have an option to disable this,
+ analogous to --whole-file, although it would default to
+ disabled. The file checksum takes up a definite space in
+ the protocol -- we can either set it to 0, or perhaps just
+ leave it out.
+
+ -- --
+
+
+Accelerate MD4
+
+ Perhaps borrow an assembler MD4 from someone?
+
+ Make sure we call MD4 with properly-sized blocks whenever possible
+ to avoid copying into the residue region?
+
+ -- --
+
+
+String area code
+
+ Test whether this is actually faster than just using malloc(). If
+ it's not (anymore), throw it out.
+
+ -- --
+
+TESTING --------------------------------------------------------------
+
+Torture test
+
+ Something that just keeps running rsync continuously over a data set
+ likely to generate problems.
+
+ -- --
+
+
+Cross-test versions 2001/08/22
+
+ Part of the regression suite should be making sure that we
+ don't break backwards compatibility: old clients vs new
+ servers and so on. Ideally we would test both up and down
+ from the current release to all old versions.
+
+ Run current rsync versions against significant past releases.
+
+ We might need to omit broken old versions, or versions in which
+ particular functionality is broken.
+
+ It might be sufficient to test downloads from well-known public
+ rsync servers running different versions of rsync. This will give
+ some testing and also be the most common case for having different
+ versions and not being able to upgrade.
+
+ The new --protocol option may help in this.
+
+ -- --
+
+
+Test on kernel source
+
+ Download all versions of kernel; unpack, sync between them. Also
+ sync between uncompressed tarballs. Compare directories after
+ transfer.
+
+ Use local mode; ssh; daemon; --whole-file and --no-whole-file.
+
+ Use awk to pull out the 'speedup' number for each transfer. Make
+ sure it is >= x.
+
+ -- --
+
+
+Test large files
+
+ Sparse and non-sparse
+
+ -- --
+
+
+Create mutator program for testing
+
+ Insert bytes, delete bytes, swap blocks, ...
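+ The heart of such a mutator could be a single function that applies
+ one random byte-level mutation per call; mutate() below is a
+ hypothetical sketch covering insert, delete, and overwrite.

```c
/* Sketch of a mutator: applies one random byte-level mutation
 * (insert, delete, or overwrite) to a buffer in place.  A real test
 * driver would loop this over copies of a data set and rsync them. */
#include <stdlib.h>
#include <string.h>

/* Mutates buf (capacity cap, current length *len).
 * Returns the mutation applied: 0 insert, 1 delete, 2 overwrite. */
int mutate(unsigned char *buf, size_t cap, size_t *len)
{
    size_t pos = *len ? (size_t)rand() % *len : 0;
    int op = rand() % 3;

    switch (op) {
    case 0:                              /* insert a random byte */
        if (*len < cap) {
            memmove(buf + pos + 1, buf + pos, *len - pos);
            buf[pos] = (unsigned char)rand();
            (*len)++;
        }
        break;
    case 1:                              /* delete one byte */
        if (*len > 0) {
            memmove(buf + pos, buf + pos + 1, *len - pos - 1);
            (*len)--;
        }
        break;
    default:                             /* overwrite one byte */
        if (*len > 0)
            buf[pos] ^= 0xff;
        break;
    }
    return op;
}
```

+ Block swapping would follow the same pattern with two random
+ offsets and a temporary buffer.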
+
+ -- --
+
+
+Create configure option to enable dangerous tests
+
+ -- --
+
+
+If tests are skipped, say why.
+
+ -- --
+
+
+Test daemon feature to disallow particular options.
+
+ -- --
+
+
+Create pipe program for testing
+
+ Create a pipe program that makes slow/jerky connections for
+ testing. Also create versions of read() and write() that corrupt
+ the stream or abruptly fail.
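+ The slow/jerky part could be a small copy loop with random chunk
+ sizes and short stalls; jerky_copy() is an illustrative sketch, and
+ the corrupting/failing variants would wrap read() and write() the
+ same way.

```c
/* Sketch: copy in_fd to out_fd in small random chunks with short
 * pauses, to simulate a slow, jerky connection between two rsync
 * processes under test.  Returns the total bytes copied. */
#include <stdlib.h>
#include <unistd.h>

long jerky_copy(int in_fd, int out_fd)
{
    unsigned char buf[64];
    long total = 0;

    for (;;) {
        size_t want = 1 + (size_t)rand() % sizeof buf;
        ssize_t n = read(in_fd, buf, want);

        if (n <= 0)
            break;
        usleep(1000 * (rand() % 20));   /* 0-19 ms stall */
        if (write(out_fd, buf, (size_t)n) != n)
            break;
        total += n;
    }
    return total;
}
```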
+
+ -- --
+
+
+Create test makefile target for some tests
+
+ Separate makefile target to run rough tests -- or perhaps
+ just run them every time?
+
+ -- --
+
+
+Test "refuse options" works
+
+ What about for --recursive?
+
+ If you specify an unrecognized option here, you should get an error.
+
+ We need a test case for this...
+
+ Was this broken when we changed to popt?
+
+ -- --
+
+RELATED PROJECTS -----------------------------------------------------
+
+rsyncsh