+ -- --
+
+DEVELOPMENT --------------------------------------------------------
+
+Handling duplicate names
+
+ We need to be careful of duplicate names getting into the file list.
+ See clean_flist(). This could happen if multiple arguments include
+ the same file. Bad.
+
+ I think duplicates are only a problem if they're both flowing
+ through the pipeline at the same time. For example we might have
+ updated the first occurrence after reading the checksums for the
+ second. So possibly we just need to make sure that we don't have
+ both in the pipeline at the same time.
+
+ Possibly if we did one directory at a time that would be sufficient.
+
+ Alternatively we could pre-process the arguments to make sure no
+ duplicates will ever be inserted. There could be some bad cases
+ when we're collapsing symlinks.
+
+ We could have a hash table.
+
+ The root of the problem is that we do not want more than one file
+ list entry referring to the same file. At first glance there are
+ several ways this could happen: symlinks, hardlinks, and repeated
+ names on the command line.
+
+ If names are repeated on the command line, they may be present in
+ different forms, perhaps because the same directory was reached by
+ different paths, or by a path passing through symlinks. We also
+ need to allow for globs expanded by rsync itself.
+
+ At the moment, clean_flist() requires having the entire file list in
+ memory. Duplicate names are detected just by a string comparison.
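+
+ As a sketch, that duplicate removal boils down to a sort followed
+ by a comparison of neighbours; a hash table keyed on the cleaned-up
+ name would find duplicates without the sort, at the cost of extra
+ memory. (The entry type here is invented; rsync's real file_struct
+ is different.)
+
+     #include <stdlib.h>
+     #include <string.h>
+
+     struct entry { char *name; };
+
+     static int cmp_name(const void *a, const void *b)
+     {
+         return strcmp(((const struct entry *)a)->name,
+                       ((const struct entry *)b)->name);
+     }
+
+     /* Remove duplicate names from a fully built list, in the
+      * style of clean_flist(): sort, then keep the first of each
+      * run of equal names.  Returns the new length. */
+     static int dedup(struct entry *list, int n)
+     {
+         int i, out = 0;
+         qsort(list, n, sizeof *list, cmp_name);
+         for (i = 0; i < n; i++) {
+             if (out && strcmp(list[out-1].name, list[i].name) == 0)
+                 continue;        /* duplicate: drop it */
+             list[out++] = list[i];
+         }
+         return out;
+     }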
+
+ We don't need to worry about hard links causing duplicates because
+ files are never updated in place. Similarly for symlinks.
+
+ I think even if we're using a different symlink mode we don't need
+ to worry.
+
+ Unless we're really clever, any change here will introduce a
+ protocol incompatibility, so we need to be able to accept the old
+ format as well.
+
+ -- --
+
+
+Use generic zlib 2002/02/25
+
+ Perhaps don't use our own zlib.
+
+ Advantages:
+
+ - will automatically be up to date with bugfixes in zlib
+
+ - can leave it out for small rsync on e.g. recovery disks
+
+ - can use a shared library
+
+ - avoids people breaking rsync by trying to do this themselves and
+ messing up
+
+ Should we ship zlib for systems that don't have it, or require
+ people to install it separately?
+
+ Apparently this will make us incompatible with versions of rsync
+ that use the patched version of zlib. Probably the simplest way to
+ handle this is to just disable compression (with a warning) when
+ talking to old versions.
+
+ -- --
+
+
+TDB 2002/03/12
+
+ Rather than storing the file list in memory, store it in a TDB.
+
+ This *might* make memory usage lower while building the file list.
+
+ Hashtable lookup will mean files are not transmitted in order,
+ though... hm.
+
+ This would neatly eliminate one of the major post-fork shared data
+ structures.
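+
+ A sketch of what the storage might look like with the classic tdb
+ API (tdb_open("flist.tdb", 0, TDB_CLEAR_IF_FIRST, O_RDWR|O_CREAT,
+ 0600) and tdb_store()); keying on the file-list index rather than
+ the name means a sequential fetch of 0, 1, 2, ... still visits
+ files in the sorted order, so transmission order need not suffer:
+
+     #include <fcntl.h>
+     #include <tdb.h>
+
+     /* Store one serialized file-list entry under its index. */
+     static int flist_store(TDB_CONTEXT *tdb, unsigned int idx,
+                            void *entry, size_t len)
+     {
+         TDB_DATA key, val;
+
+         key.dptr = (void *)&idx;
+         key.dsize = sizeof idx;
+         val.dptr = entry;
+         val.dsize = len;
+         return tdb_store(tdb, key, val, TDB_REPLACE);
+     }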
+
+ -- --
+
+
+Splint 2002/03/12
+
+ Build rsync with SPLINT to try to find security holes. Add
+ annotations as necessary. Keep track of the number of warnings
+ found initially, and see how many of them are real bugs, or real
+ security bugs. Knowing the percentage of likely hits would be
+ really interesting for other projects.
+
+ -- --
+
+
+Memory debugger
+
+ jra recommends Valgrind:
+
+ http://devel-home.kde.org/~sewardj/
+
+ -- --
+
+
+Create release script
+
+ Script would:
+
+ Update spec files
+
+ Build tar file; upload
+
+ Send announcement to mailing list and c.o.l.a.
+
+ Make freshmeat announcement
+
+ Update web site
+
+ -- --
+
+
+Add machines to build farm
+
+ Cygwin (on different versions of Win32?)
+
+ HP-UX variants (via HP?)
+
+ SCO
+
+ -- --
+
+PERFORMANCE ----------------------------------------------------------
+
+File list structure in memory
+
+ Rather than one big array, perhaps have a tree in memory mirroring
+ the directory tree.
+
+ This might make sorting much faster! (I'm not sure it's a big CPU
+ problem, mind you.)
+
+ It might also reduce memory use in storing repeated directory names
+ -- again I'm not sure this is a problem.
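+
+ A sketch of the idea (the node layout is invented; rsync's real
+ file_struct is a flat array entry): each directory name is stored
+ once, children keep only their last path component, and sorting can
+ be done per directory instead of over one huge array:
+
+     struct tree_node {
+         char *component;           /* last path element only */
+         struct tree_node *parent;  /* NULL at the transfer root */
+         struct tree_node **child;  /* kept sorted per directory,
+                                     * so the whole tree is in
+                                     * sorted order without one
+                                     * big qsort() */
+         int nchild;
+     };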
+
+ -- --
+
+
+Traverse just one directory at a time
+
+ Traverse just one directory at a time. Tridge says it's possible.
+
+ At the moment rsync reads the whole file list into memory at the
+ start, which uses a lot of memory and also stops us pipelining
+ network access as much as we could.
+
+ -- --
+
+
+Hard-link handling
+
+ At the moment hardlink handling is very expensive, so it's off by
+ default. It does not need to be so.
+
+ Since most of the solutions are rather intertwined with the file
+ list it is probably better to fix that first, although fixing
+ hardlinks is possibly simpler.
+
+ We can rule out hardlinked directories since they will probably
+ screw us up in all kinds of ways. They simply should not be used.
+
+ At the moment rsync only cares about hardlinks to regular files. I
+ guess you could also use them for sockets, devices and other beasts,
+ but I have not seen them.
+
+ When trying to reproduce hard links, we only need to worry about
+ files that have more than one name (st_nlink > 1 &&
+ !S_ISDIR(st_mode)).
+
+ The basic point is to discover alternate names that refer to the
+ same file. All operations, including creating the file and writing
+ modifications to it, need be done only for the first name; for all
+ later names we just create the link and then leave it alone.
+
+ If hard links are to be preserved:
+
+ Before the generator/receiver fork, the list of files is received
+ from the sender (recv_file_list), and a table for detecting hard
+ links is built (a toy version is sketched after this list).
+
+ The generator looks for hard links within the file list and does
+ not send checksums for them, though it does send other metadata.
+
+ The sender sends the device number and inode with file entries, so
+ that files are uniquely identified.
+
+ The receiver goes through and creates hard links (do_hard_links)
+ after all data has been written, but before directory permissions
+ are set.
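+
+ A toy version of the detect-and-link step (the table here is a
+ linear scan with an arbitrary fixed size; the real thing would be a
+ sorted or hashed table over the file list):
+
+     #include <string.h>
+     #include <sys/stat.h>
+     #include <unistd.h>
+
+     #define MAX_LINKS 4096
+
+     static struct { dev_t dev; ino_t ino; char name[1024]; }
+         seen[MAX_LINKS];
+     static int nseen;
+
+     /* If (dev,ino) was met before, make 'name' a hard link to the
+      * first name and return 1; otherwise remember it, return 0. */
+     static int reproduce_link(const struct stat *st, const char *name)
+     {
+         int i;
+
+         if (st->st_nlink <= 1 || S_ISDIR(st->st_mode))
+             return 0;              /* not a candidate */
+         for (i = 0; i < nseen; i++) {
+             if (seen[i].dev == st->st_dev && seen[i].ino == st->st_ino)
+                 return link(seen[i].name, name) == 0;
+         }
+         if (nseen < MAX_LINKS) {
+             seen[nseen].dev = st->st_dev;
+             seen[nseen].ino = st->st_ino;
+             strncpy(seen[nseen].name, name, sizeof(seen[0].name) - 1);
+             nseen++;
+         }
+         return 0;
+     }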
+
+ At the moment device and inum are sent as 4-byte integers, which
+ will probably cause problems on large filesystems. On Linux the
+ kernel uses 64-bit ino_t's internally, and people will soon have
+ filesystems big enough to use them. We ought to follow NFS4 in
+ using 64-bit device and inode identification, perhaps with a
+ protocol version bump.
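+
+ One hedged sketch of the wire change, assuming rsync's existing
+ write_int()/write_longint() helpers and the remote_version global;
+ the version number for the bump is invented:
+
+     /* dev and ino used to be written with write_int() (4 bytes).
+      * write_longint() stays 4 bytes for small values and expands
+      * to 8 only when needed, so the cost is low. */
+     if (remote_version >= 27) {          /* hypothetical bump */
+         write_longint(f, (int64)st->st_dev);
+         write_longint(f, (int64)st->st_ino);
+     } else {
+         write_int(f, (int)st->st_dev);   /* old, lossy format */
+         write_int(f, (int)st->st_ino);
+     }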
+
+ Once we've seen all the names for a particular file, we no longer
+ need to think about it and we can deallocate the memory.
+
+ We can also have the case where there are links to a file that are
+ not in the tree being transferred. There's nothing we can do about
+ that. Because we rename the destination into place after writing,
+ any hardlinks to the old file are always going to be orphaned. In
+ fact that is almost necessary because otherwise we'd get really
+ confused if we were generating checksums for one name of a file and
+ modifying another.
+
+ At the moment the code seems to make a whole second copy of the
+ file list, which is probably unnecessary.
+
+ We should have a test case that exercises hard links. Since it
+ might be hard to compare ./tls output where the inodes change we
+ might need a little program to check whether several names refer to
+ the same file.
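+
+ That little program is mostly just stat(2): several names refer to
+ the same file iff their (st_dev, st_ino) pairs match. A standalone
+ sketch:
+
+     /* samefile.c -- exit 0 if all named files are one file. */
+     #include <sys/stat.h>
+     #include <stdio.h>
+
+     int main(int argc, char **argv)
+     {
+         struct stat first, st;
+         int i;
+
+         if (argc < 3 || stat(argv[1], &first) != 0) {
+             fprintf(stderr, "usage: samefile NAME NAME...\n");
+             return 2;
+         }
+         for (i = 2; i < argc; i++) {
+             if (stat(argv[i], &st) != 0
+                 || st.st_dev != first.st_dev
+                 || st.st_ino != first.st_ino)
+                 return 1;
+         }
+         return 0;
+     }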
+
+ -- --
+
+
+Allow skipping MD4 file_sum 2002/04/08
+
+ If we're doing a local transfer, or using -W, then perhaps don't
+ send the file checksum: for a local transfer, calculating MD4
+ checksums uses 90% of the CPU and is unlikely to be useful.
+
+ Indeed, for transfers compressed with zlib or carried over ssh we
+ can also rely on the transport to give quite strong protection
+ against corruption.
+
+ Perhaps we should have an option to disable the file checksum,
+ analogous to --whole-file, though off by default. The checksum
+ occupies a fixed slot in the protocol, so we can either send it as
+ zeros or, with a protocol change, leave it out entirely.
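+
+ A fragment of how the "send zeros" variant might look on the sender
+ side (skip_file_sum is an invented flag; file_checksum(),
+ write_buf() and MD4_SUM_LENGTH are rsync's existing helpers, so
+ treat the exact signatures as assumptions):
+
+     static void send_file_sum(int f_out, char *fname, OFF_T size,
+                               int skip_file_sum)
+     {
+         char sum[MD4_SUM_LENGTH];
+
+         if (skip_file_sum) {
+             /* Keep the checksum's slot in the stream so the
+              * framing is unchanged, but skip the computation. */
+             memset(sum, 0, MD4_SUM_LENGTH);
+         } else
+             file_checksum(fname, sum, size);
+         write_buf(f_out, sum, MD4_SUM_LENGTH);
+     }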
+
+ -- --
+
+
+Accelerate MD4
+
+ Perhaps borrow an assembler MD4 from someone?
+
+ Make sure we call MD4 with properly-sized blocks whenever possible
+ to avoid copying into the residue region?
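+
+ MD4 consumes 64-byte blocks, so any non-multiple tail gets copied
+ into a residue buffer inside the hash state. If we always read a
+ multiple of 64 bytes, only the final short read can hit the residue
+ path. A sketch against rsync's lib/mdfour.c interface (verify the
+ exact signatures before relying on this):
+
+     #include <unistd.h>
+     #include "lib/mdfour.h"
+
+     #define CSUM_CHUNK (64 * 512)   /* multiple of the block size */
+
+     static void checksum_fd(int fd, unsigned char sum[16])
+     {
+         static unsigned char buf[CSUM_CHUNK];
+         struct mdfour md;
+         int n;
+
+         mdfour_begin(&md);
+         while ((n = read(fd, buf, CSUM_CHUNK)) > 0)
+             mdfour_update(&md, buf, n);
+         mdfour_result(&md, sum);
+     }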
+
+ -- --
+
+
+String area code
+
+ Test whether this is actually faster than just using malloc(). If
+ it's not (anymore), throw it out.
+
+ -- --
+
+TESTING --------------------------------------------------------------
+
+Torture test
+
+ Something that just keeps running rsync continuously over a data set
+ likely to generate problems.
+
+ -- --
+
+
+Cross-test versions 2001/08/22
+
+ Part of the regression suite should be making sure that we
+ don't break backwards compatibility: old clients vs new
+ servers and so on. Ideally we would test both up and down
+ from the current release to all old versions.
+
+ Run current rsync versions against significant past releases.
+
+ We might need to omit broken old versions, or versions in which
+ particular functionality is broken.
+
+ It might be sufficient to test downloads from well-known public
+ rsync servers running different versions of rsync. That gives some
+ coverage and also matches the most common real-world case:
+ different versions on each end, with no way to upgrade one of them.
+
+ The new --protocol option may help with this.
+
+ -- --
+
+
+Test on kernel source
+
+ Download all versions of the kernel; unpack them and sync between
+ versions. Also sync between the uncompressed tarballs. Compare the
+ directories after each transfer.
+
+ Use local mode; ssh; daemon; --whole-file and --no-whole-file.
+
+ Use awk to pull out the 'speedup' number for each transfer. Make
+ sure it is >= x.
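+
+ For example (sh+awk, since the note above calls for awk; "min"
+ stands in for the unspecified x):
+
+     min=1.0   # choose a real threshold here
+     rsync -av src/ dest/ | awk -v min="$min" \
+         '/speedup is/ { s = $NF }
+          END { exit !(s + 0 >= min) }'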
+
+ -- --
+
+
+Test large files
+
+ Sparse and non-sparse
+
+ -- --
+
+
+Create mutator program for testing
+
+ Insert bytes, delete bytes, swap blocks, ...
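+
+ A sketch of such a mutator (block swapping would need seeking or a
+ second pass, so this one only flips, inserts, or deletes a short
+ run of bytes; the offset and run-length bounds are arbitrary):
+
+     /* mutate.c -- copy stdin to stdout with one random mutation. */
+     #include <stdio.h>
+     #include <stdlib.h>
+     #include <time.h>
+
+     int main(void)
+     {
+         long off;
+         int c, n;
+
+         srand(time(NULL));
+         off = rand() % (1024 * 1024);  /* where the damage starts */
+         n = 1 + rand() % 64;           /* how many bytes to damage */
+
+         while ((c = getchar()) != EOF) {
+             if (off-- > 0 || n <= 0) {
+                 putchar(c);
+                 continue;
+             }
+             n--;
+             switch (rand() % 3) {
+             case 0: putchar(c ^ 0xff); break;    /* flip bits   */
+             case 1: putchar(rand() & 0xff);      /* insert byte */
+                     putchar(c); break;
+             case 2: break;                       /* delete byte */
+             }
+         }
+         return 0;
+     }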
+
+ -- --
+
+
+Create configure option to enable dangerous tests