 Perhaps flush stdout after each filename, so that people trying to
 monitor progress in a log file can do so more easily. See
 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=48108

 -- --


Log daemon sessions that just list modules

 At the moment, connections that just get a list of modules are not
 logged, but they should be.

 -- --

Log child death on signal

 If a child of the rsync daemon dies with a signal, we should notice
 that when we reap it and log a message.

 -- --


Keep stderr and stdout properly separated (Debian #23626)

 -- --


Log errors with function that reports process of origin

 Use a separate function for reporting errors; prefix it with
 "rsync:" or "rsync(remote):", or perhaps even "rsync(local
 generator): ".

 -- --


verbose output David Stein 2001/12/20

 Indicate whether files are new, updated, or deleted.

 At the end of the transfer, show how many files were or were not
 transferred correctly.

 -- --


Add reason for transfer to file logging

 Explain *why* every file is or is not transferred (e.g. "local mtime
 123123 newer than 1283198").

 -- --


debugging of daemon 2002/04/08

 Add an rsyncd.conf parameter to turn on debugging on the server.
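
 A hypothetical rsyncd.conf fragment; the "debug level" parameter
 name is invented here, only "log file" and the module syntax are
 existing rsyncd.conf conventions:

```
# Invented "debug level" parameter the daemon would consult before
# writing extra trace output to its log.
log file = /var/log/rsyncd.log
debug level = 2

[pub]
        path = /srv/ftp/pub
        comment = public archive
```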

 -- --


internationalization

 Change to using gettext(). We would probably need to ship it for
 platforms that don't have it.

 Solicit translations.

 Does anyone care? Before we bother modifying the code, we ought to
 get the manual translated first, because that's possibly more useful
 and at any rate demonstrates desire.

 -- --

DEVELOPMENT --------------------------------------------------------

Handling duplicate names

 We need to be careful of duplicate names getting into the file list.
 See clean_flist(). This could happen if multiple arguments include
 the same file. Bad.

 I think duplicates are only a problem if they're both flowing
 through the pipeline at the same time. For example we might have
 updated the first occurrence after reading the checksums for the
 second. So possibly we just need to make sure that we don't have
 both in the pipeline at the same time.

 Possibly if we did one directory at a time that would be sufficient.

 Alternatively we could pre-process the arguments to make sure no
 duplicates will ever be inserted. There could be some bad cases
 when we're collapsing symlinks.

 We could have a hash table.

 The root of the problem is that we do not want more than one file
 list entry referring to the same file. At first glance there are
 several ways this could happen: symlinks, hardlinks, and repeated
 names on the command line.

 If names are repeated on the command line, they may be present in
 different forms, perhaps from traversing directory paths in
 different ways or through paths that include symlinks. Also we need
 to allow for expansion of globs by rsync.

 At the moment, clean_flist() requires having the entire file list in
 memory. Duplicate names are detected just by a string comparison.

 We don't need to worry about hard links causing duplicates because
 files are never updated in place. Similarly for symlinks.

 I think even if we're using a different symlink mode we don't need
 to worry.

 Unless we're really clever this will introduce a protocol
 incompatibility, so we need to be able to accept the old format as
 well.

 -- --


Use generic zlib 2002/02/25

 Perhaps link against the system's zlib rather than shipping our own
 copy.