X-Git-Url: https://mattmccutchen.net/rsync/rsync.git/blobdiff_plain/bde47ca7c52ac6b3e9f5b9bb0823c7b2ceb42768..16a3fec02d3306c95c4484dc263bd80bf5d0bb06:/TODO

diff --git a/TODO b/TODO
index 79840312..f634e728 100644
--- a/TODO
+++ b/TODO
@@ -1,15 +1,95 @@
 -*- indented-text -*-
 BUGS ---------------------------------------------------------------
+Fix hardlink reporting 2002/03/25
+Fix progress indicator to not corrupt log
+lchmod question
+Do not rely on having a group called "nobody"
+Incorrect timestamps (Debian #100295)
+Win32
+
+FEATURES ------------------------------------------------------------
+server-imposed bandwidth limits
+rsyncd over ssh
+Use chroot only if supported
+Allow supplementary groups in rsyncd.conf 2002/04/09
+Handling IPv6 on old machines
+Other IPv6 stuff:
+Add ACL support 2001/12/02
+Lazy directory creation
+Conditional -z for old protocols
+proxy authentication 2002/01/23
+SOCKS 2002/01/23
+FAT support
+Allow forcing arbitrary permissions 2002/03/12
+--diff david.e.sewell 2002/03/15
+Add daemon --no-detach and --no-fork options
+Create more granular verbosity jw 2003/05/15
+
+DOCUMENTATION --------------------------------------------------------
+Update README
+Keep list of open issues and todos on the web site
+Update web site from CVS
+Perhaps redo manual as SGML
+
+LOGGING --------------------------------------------------------------
+Make dry run list all updates 2002/04/03
+Memory accounting
+Improve error messages
+Better statistics: Rasmus 2002/03/08
+Perhaps flush stdout like syslog
+Log daemon sessions that just list modules
+Log child death on signal
+Keep stderr and stdout properly separated (Debian #23626)
+Log errors with function that reports process of origin
+verbose output David Stein 2001/12/20
+Add reason for transfer to file logging
+debugging of daemon 2002/04/08
+internationalization
+
+DEVELOPMENT --------------------------------------------------------
+Handling duplicate names
+Use generic zlib 2002/02/25
+TDB: 2002/03/12
+Splint 2002/03/12
+Memory debugger
+Create release script
+Add machines to build farm
+
+PERFORMANCE ----------------------------------------------------------
+File list structure in memory
+Traverse just one directory at a time
+Hard-link handling
+Allow skipping MD4 file_sum 2002/04/08
+Accelerate MD4
+String area code
+
+TESTING --------------------------------------------------------------
+Torture test
+Cross-test versions 2001/08/22
+Test on kernel source
+Test large files
+Create mutator program for testing
+Create configure option to enable dangerous tests
+If tests are skipped, say why.
+Test daemon feature to disallow particular options.
+Create pipe program for testing
+Create test makefile target for some tests
+Test "refuse options" works
-rsync-url barfs on upload
+RELATED PROJECTS -----------------------------------------------------
+rsyncsh
+http://rsync.samba.org/rsync-and-debian/
+rsyncable gzip patch
+rsyncsplit as alternative to real integration with gzip?
+reverse rsync over HTTP Range
- rsync foo rsync://localhost/transfer/
- Fix the parser.
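
For that last deleted note (the rsync-url upload bug), the fix is presumably to
teach the argument parser to accept an rsync:// URL as the destination the same
way it already does as a source. A rough, self-contained sketch of the split
that is wanted; the function name and buffer handling here are made up for
illustration and are not the existing parser code:

    #include <stdlib.h>
    #include <string.h>

    /* Split "rsync://host[:port]/module/path" into its parts.
     * Returns 1 if the argument looks like a daemon URL, else 0.
     * Purely illustrative; the real parser must also handle host::module. */
    static int split_rsync_url(const char *arg, char *host, size_t hostlen,
                               int *port, const char **path)
    {
        const char *p, *slash;
        char *colon;
        size_t n;

        if (strncmp(arg, "rsync://", 8) != 0)
            return 0;
        p = arg + 8;
        slash = strchr(p, '/');
        n = slash ? (size_t)(slash - p) : strlen(p);
        if (n == 0 || n >= hostlen)
            return 0;
        memcpy(host, p, n);
        host[n] = '\0';
        *port = 873;                /* default rsyncd port */
        colon = strchr(host, ':');
        if (colon) {
            *colon = '\0';
            *port = atoi(colon + 1);
        }
        *path = slash ? slash + 1 : "";
        return 1;
    }

With that, "rsync foo rsync://localhost/transfer/" would come apart as host
"localhost", port 873, path "transfer/", whether the URL is the source or the
destination.
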
+BUGS --------------------------------------------------------------- -There seems to be a bug with hardlinks +Fix hardlink reporting 2002/03/25 + (was: There seems to be a bug with hardlinks) mbp/2 build$ ls -l /tmp/a /tmp/b -i /tmp/a: @@ -79,8 +159,12 @@ There seems to be a bug with hardlinks -rw-rw-r-- 5 mbp mbp 29 Mar 25 17:30 b2 -rw-rw-r-- 5 mbp mbp 29 Mar 25 17:30 b3 + -- -- -Progress indicator can produce corrupt output when transferring directories: + +Fix progress indicator to not corrupt log + + Progress indicator can produce corrupt output when transferring directories: main/binary-arm/ main/binary-arm/admin/ @@ -99,23 +183,16 @@ Progress indicator can produce corrupt output when transferring directories: main/binary-arm/math/ main/binary-arm/misc/ -lchmod - I don't think we handle this properly on systems that don't have the - call. + -- -- -Cross-test versions - Part of the regression suite should be making sure that we don't - break backwards compatibility: old clients vs new servers and so - on. Ideally we would test the cross product of versions. - It might be sufficient to test downloads from well-known public - rsync servers running different versions of rsync. This will give - some testing and also be the most common case for having different - versions and not being able to upgrade. +lchmod question + + I don't think we handle this properly on systems that don't have the + call. Are there any such? ---no-blocking-io might be broken + -- -- - in the same way as --no-whole-file; somebody needs to check. Do not rely on having a group called "nobody" @@ -123,30 +200,44 @@ Do not rely on having a group called "nobody" On Debian it's "nogroup" -DAEMON -------------------------------------------------------------- + -- -- -server-imposed bandwidth limits -rsyncd over ssh +Incorrect timestamps (Debian #100295) - There are already some patches to do this. + A bit hard to believe, but apparently it happens. + + -- -- + + +Win32 + + Don't detach, because this messes up --srvany. + + http://sources.redhat.com/ml/cygwin/2001-08/msg00234.html - BitKeeper uses a server whose login shell is set to bkd. That's - probably a reasonable approach. + -- -- + FEATURES ------------------------------------------------------------ +server-imposed bandwidth limits ---dry-run is insufficiently dry + -- -- - Mark Santcroos points out that -n fails to list files which have - only metadata changes, though it probably should. - There may be a Debian bug about this as well. +rsyncd over ssh + + There are already some patches to do this. + + BitKeeper uses a server whose login shell is set to bkd. That's + probably a reasonable approach. + -- -- -use chroot + +Use chroot only if supported If the platform doesn't support it, then don't even try. @@ -156,327 +247,452 @@ use chroot http://lists.samba.org/pipermail/rsync/2001-August/thread.html http://lists.samba.org/pipermail/rsync/2001-September/thread.html + -- -- ---files-from - - Avoids traversal. Better option than a pile of --include statements - for people who want to generate the file list using a find(1) - command or a script. - -supplementary groups +Allow supplementary groups in rsyncd.conf 2002/04/09 Perhaps allow supplementary groups to be specified in rsyncd.conf; then make the first one the primary gid and all the rest be supplementary gids. + -- -- -File list structure in memory - Rather than one big array, perhaps have a tree in memory mirroring - the directory tree. +Handling IPv6 on old machines - This might make sorting much faster! 
(I'm not sure it's a big CPU - problem, mind you.) + The KAME IPv6 patch is nice in theory but has proved a bit of a + nightmare in practice. The basic idea of their patch is that rsync + is rewritten to use the new getaddrinfo()/getnameinfo() interface, + rather than gethostbyname()/gethostbyaddr() as in rsync 2.4.6. + Systems that don't have the new interface are handled by providing + our own implementation in lib/, which is selectively linked in. - It might also reduce memory use in storing repeated directory names - -- again I'm not sure this is a problem. + The problem with this is that it is really hard to get right on + platforms that have a half-working implementation, so redefining + these functions clashes with system headers, and leaving them out + breaks. This affects at least OSF/1, RedHat 5, and Cobalt, which + are moderately improtant. -Performance + Perhaps the simplest solution would be to have two different files + implementing the same interface, and choose either the new or the + old API. This is probably necessary for systems that e.g. have + IPv6, but gethostbyaddr() can't handle it. The Linux manpage claims + this is currently the case. - Traverse just one directory at a time. Tridge says it's possible. + In fact, our internal sockets interface (things like + open_socket_out(), etc) is much narrower than the getaddrinfo() + interface, and so probably simpler to get right. In addition, the + old code is known to work well on old machines. - At the moment rsync reads the whole file list into memory at the - start, which makes us use a lot of memory and also not pipeline - network access as much as we could. + We could drop the rather large lib/getaddrinfo files. + -- -- -Handling duplicate names - We need to be careful of duplicate names getting into the file list. - See clean_flist(). This could happen if multiple arguments include - the same file. Bad. +Other IPv6 stuff: + + Implement suggestions from http://www.kame.net/newsletter/19980604/ + and ftp://ftp.iij.ad.jp/pub/RFC/rfc2553.txt - I think duplicates are only a problem if they're both flowing - through the pipeline at the same time. For example we might have - updated the first occurrence after reading the checksums for the - second. So possibly we just need to make sure that we don't have - both in the pipeline at the same time. + If a host has multiple addresses, then listen try to connect to all + in order until we get through. (getaddrinfo may return multiple + addresses.) This is kind of implemented already. - Possibly if we did one directory at a time that would be sufficient. + Possibly also when starting as a server we may need to listen on + multiple passive addresses. This might be a bit harder, because we + may need to select on all of them. Hm. - Alternatively we could pre-process the arguments to make sure no - duplicates will ever be inserted. There could be some bad cases - when we're collapsing symlinks. + Define a syntax for IPv6 literal addresses. Since they include + colons, they tend to break most naming systems, including ours. + Based on the HTTP IPv6 syntax, I think we should use + + rsync://[::1]/foo/bar [::1]::bar - We could have a hash table. + which should just take a small change to the parser code. - The root of the problem is that we do not want more than one file - list entry referring to the same file. At first glance there are - several ways this could happen: symlinks, hardlinks, and repeated - names on the command line. 
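
As a concrete reference for this duplicate-names discussion: what clean_flist()
does today boils down to sorting the list by name and dropping entries that
match their neighbour, and a hash table would mostly replace the sort. A
minimal sketch, using a simplified entry type rather than the real file list
structures:

    #include <stdlib.h>
    #include <string.h>

    struct flist_entry {
        char *name;                 /* full relative path */
        /* ... other per-file fields ... */
    };

    static int cmp_name(const void *a, const void *b)
    {
        const struct flist_entry *fa = a, *fb = b;
        return strcmp(fa->name, fb->name);
    }

    /* Sort by name and drop entries whose name equals the previous one,
     * so the same file can never be in the pipeline twice. */
    static size_t clean_duplicates(struct flist_entry *list, size_t count)
    {
        size_t in, out;

        if (count == 0)
            return 0;
        qsort(list, count, sizeof list[0], cmp_name);
        for (in = 1, out = 1; in < count; in++) {
            if (strcmp(list[in].name, list[out - 1].name) == 0)
                continue;           /* duplicate: skip it */
            list[out++] = list[in];
        }
        return out;
    }
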
+ -- -- - If names are repeated on the command line, they may be present in - different forms, perhaps by traversing directory paths in different - ways, traversing paths including symlinks. Also we need to allow - for expansion of globs by rsync. - At the moment, clean_flist() requires having the entire file list in - memory. Duplicate names are detected just by a string comparison. +Add ACL support 2001/12/02 - We don't need to worry about hard links causing duplicates because - files are never updated in place. Similarly for symlinks. + Transfer ACLs. Need to think of a standard representation. + Probably better not to even try to convert between NT and POSIX. + Possibly can share some code with Samba. - I think even if we're using a different symlink mode we don't need - to worry. + -- -- - Unless we're really clever this will introduce a protocol - incompatibility, so we need to be able to accept the old format as - well. +Lazy directory creation -Memory accounting + With the current common --include '*/' --exclude '*' pattern, people + can end up with many empty directories. We might avoid this by + lazily creating such directories. - At exit, show how much memory was used for the file list, etc. + -- -- - Also we do a wierd exponential-growth allocation in flist.c. I'm - not sure this makes sense with modern mallocs. At any rate it will - make us allocate a huge amount of memory for large file lists. +Conditional -z for old protocols -Hard-link handling + After we get the @RSYNCD greeting from the server, we know it's + version but we have not yet sent the command line, so we could just + remove the -z option if the server is too old. - At the moment hardlink handling is very expensive, so it's off by - default. It does not need to be so. + For ssh invocation it's not so simple, because we actually use the + command line to start the remote process. However, we only actually + do compression in token.c, and we could therefore once we discover + the remote version emit an error if it's too old. I'm not sure if + that's a good tradeoff or not. - Since most of the solutions are rather intertwined with the file - list it is probably better to fix that first, although fixing - hardlinks is possibly simpler. + -- -- - We can rule out hardlinked directories since they will probably - screw us up in all kinds of ways. They simply should not be used. - At the moment rsync only cares about hardlinks to regular files. I - guess you could also use them for sockets, devices and other beasts, - but I have not seen them. +proxy authentication 2002/01/23 - When trying to reproduce hard links, we only need to worry about - files that have more than one name (nlinks>1 && !S_ISDIR). + Allow RSYNC_PROXY to be http://user:pass@proxy.foo:3128/, and do + HTTP Basic Proxy-Authentication. - The basic point of this is to discover alternate names that refer to - the same file. All operations, including creating the file and - writing modifications to it need only to be done for the first name. - For all later names, we just create the link and then leave it - alone. + Multiple schemes are possible, up to and including the insanity that + is NTLM, but Basic probably covers most cases. - If hard links are to be preserved: + -- -- - Before the generator/receiver fork, the list of files is received - from the sender (recv_file_list), and a table for detecting hard - links is built. - The generator looks for hard links within the file list and does - not send checksums for them, though it does send other metadata. 
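
Back on the proxy authentication item: HTTP Basic Proxy-Authentication is just
base64 of "user:pass" added as one extra header to the CONNECT request that the
existing RSYNC_PROXY support sends. A hedged sketch; the function names and
buffer sizes are invented and this is not the existing proxy code:

    #include <stdio.h>
    #include <string.h>

    /* Encode "user:pass" as base64 for a Proxy-Authorization header. */
    static void base64_encode(const char *in, size_t len, char *out)
    {
        static const char b64[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        size_t i;

        for (i = 0; i < len; i += 3) {
            unsigned long v = (unsigned long)(unsigned char)in[i] << 16;
            if (i + 1 < len) v |= (unsigned char)in[i + 1] << 8;
            if (i + 2 < len) v |= (unsigned char)in[i + 2];
            *out++ = b64[(v >> 18) & 63];
            *out++ = b64[(v >> 12) & 63];
            *out++ = i + 1 < len ? b64[(v >> 6) & 63] : '=';
            *out++ = i + 2 < len ? b64[v & 63] : '=';
        }
        *out = '\0';
    }

    /* Emit the CONNECT request we would push through the proxy. */
    static void send_connect(FILE *proxy, const char *host, int port,
                             const char *userpass)  /* "user:pass" or NULL */
    {
        char auth[512];             /* plenty for a sketch */

        fprintf(proxy, "CONNECT %s:%d HTTP/1.0\r\n", host, port);
        if (userpass && strlen(userpass) < 128) {
            base64_encode(userpass, strlen(userpass), auth);
            fprintf(proxy, "Proxy-Authorization: Basic %s\r\n", auth);
        }
        fprintf(proxy, "\r\n");
    }

An RSYNC_PROXY of http://user:pass@proxy.foo:3128/ would supply the userpass
string "user:pass" plus the usual host and port.
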
+SOCKS 2002/01/23 - The sender sends the device number and inode with file entries, so - that files are uniquely identified. + Add --with-socks, and then perhaps a command-line option to put them + on or off. This might be more reliable than LD_PRELOAD hacks. - The receiver goes through and creates hard links (do_hard_links) - after all data has been written, but before directory permissions - are set. + -- -- - At the moment device and inum are sent as 4-byte integers, which - will probably cause problems on large filesystems. On Linux the - kernel uses 64-bit ino_t's internally, and people will soon have - filesystems big enough to use them. We ought to follow NFS4 in - using 64-bit device and inode identification, perhaps with a - protocol version bump. - Once we've seen all the names for a particular file, we no longer - need to think about it and we can deallocate the memory. +FAT support - We can also have the case where there are links to a file that are - not in the tree being transferred. There's nothing we can do about - that. Because we rename the destination into place after writing, - any hardlinks to the old file are always going to be orphaned. In - fact that is almost necessary because otherwise we'd get really - confused if we were generating checksums for one name of a file and - modifying another. + rsync to a FAT partition on a Unix machine doesn't work very well at + the moment. I think we get errors about invalid filenames and + perhaps also trying to do atomic renames. - At the moment the code seems to make a whole second copy of the file - list, which seems unnecessary. + I guess the code to do this is currently #ifdef'd on Windows; + perhaps we ought to intelligently fall back to it on Unix too. - We should have a test case that exercises hard links. Since it - might be hard to compare ./tls output where the inodes change we - might need a little program to check whether several names refer to - the same file. + -- -- -IPv6 - Perhaps put back the old socket code; if on a machine that does not - properly support the getaddrinfo API, then use it. This is probably - much simpler than reimplementing it. +Allow forcing arbitrary permissions 2002/03/12 - Alternatively, have two different files implementing the same - interface, and choose either the new or the old API. This is - probably necessary for systems that e.g. have IPv6, but - gethostbyaddr() can't handle it. The Linux manpage claims this is - currently the case. + On 12 Mar 2002, Dave Dykstra wrote: + > If we would add an option to do that functionality, I + > would vote for one that was more general which could mask + > off any set of permission bits and possibly add any set of + > bits. Perhaps a chmod-like syntax if it could be + > implemented simply. - This might get us working again on RedHat 5 and similar systems. - Although the Kame patch seems like a good idea, in fact it is a much - broader interface than the relatively narrow "open by name", "accept - and log" interface that rsync uses internally, and it has the - disadvantage of clashing with half-arsed implementations of the API. + I think that would be good too. For example, people uploading files + to a web server might like to say - Implement suggestions from http://www.kame.net/newsletter/19980604/ - and ftp://ftp.iij.ad.jp/pub/RFC/rfc2553.txt + rsync -avzP --chmod a+rX ./ sourcefrog.net:/home/www/sourcefrog/ - If a host has multiple addresses, then listen try to connect to all - in order until we get through. 
(getaddrinfo may return multiple - addresses.) This is kind of implemented already. + Ideally the patch would implement as many of the gnu chmod semantics + as possible. I think the mode parser should be a separate function + that passes back something like (mask,set) description to the rest + of the program. For bonus points there would be a test case for the + parser. - Possibly also when starting as a server we may need to listen on - multiple passive addresses. This might be a bit harder, because we - may need to select on all of them. Hm. + Possibly also --chown - Define a syntax for IPv6 literal addresses. Since they include - colons, they tend to break most naming systems, including ours. - Based on the HTTP IPv6 syntax, I think we should use - - rsync://[::1]/foo/bar - [::1]::bar + (Debian #23628) - which should just take a small change to the parser code. + -- -- -Errors +--diff david.e.sewell 2002/03/15 - If we hang or get SIGINT, then explain where we were up to. Perhaps - have a static buffer that contains the current function name, or - some kind of description of what we were trying to do. This is a - little easier on people than needing to run strace/truss. + Allow people to specify the diff command. (Might want to use wdiff, + gnudiff, etc.) - "The dungeon collapses! You are killed." Rather than "unexpected - eof" give a message that is more detailed if possible and also more - helpful. + Just diff the temporary file with the destination file, and delete + the tmp file rather than moving it into place. - If we get an error writing to a socket, then we should perhaps - continue trying to read to see if an error message comes across - explaining why the socket is closed. I'm not sure if this would - work, but it would certainly make our messages more helpful. + Interaction with --partial. - What happens if a directory is missing -x attributes. Do we lose - our load? (Debian #28416) Probably fixed now, but a test case - would be good. + Security interactions with daemon mode? + -- -- -File attributes - Device major/minor numbers should be at least 32 bits each. See - http://lists.samba.org/pipermail/rsync/2001-November/005357.html +Add daemon --no-detach and --no-fork options - Transfer ACLs. Need to think of a standard representation. - Probably better not to even try to convert between NT and POSIX. - Possibly can share some code with Samba. + Very useful for debugging. Also good when running under a + daemon-monitoring process that tries to restart the service when the + parent exits. -Empty directories + -- -- - With the current common --include '*/' --exclude '*' pattern, people - can end up with many empty directories. We might avoid this by - lazily creating such directories. +Create more granular verbosity jw 2003/05/15 -zlib + Control output with the --report option. - Perhaps don't use our own zlib. + The option takes as a single argument (no whitespace) a + comma delimited lists of keywords. - Advantages: - - - will automatically be up to date with bugfixes in zlib + This would separate debugging from "logging" as well as + fine grained selection of statistical reporting and what + actions are logged. - - can leave it out for small rsync on e.g. 
recovery disks
+ http://lists.samba.org/archive/rsync/2003-May/006059.html
- - can use a shared library
+ -- --
- - avoids people breaking rsync by trying to do this themselves and
- messing up
+DOCUMENTATION --------------------------------------------------------
- Should we ship zlib for systems that don't have it, or require
- people to install it separately?
+Update README
- Apparently this will make us incompatible with versions of rsync
- that use the patched version of rsync. Probably the simplest way to
- do this is to just disable gzip (with a warning) when talking to old
- versions.
+ -- --
+
+
+Keep list of open issues and todos on the web site
+
+ -- --
+
+
+Update web site from CVS
+
+ -- --
+
+
+Perhaps redo manual as SGML
+
+ The man page is getting rather large, and there is more information
+ that ought to be added.
+
+ TexInfo source is probably a dying format.
+
+ Linuxdoc looks like the most likely contender. I know DocBook is
+ favoured by some people, but it's so bloody verbose, even with emacs
+ support.
+
+ -- --
+
+LOGGING --------------------------------------------------------------
+
+Make dry run list all updates 2002/04/03
+ --dry-run is too dry
-logging
+ Mark Santcroos points out that -n fails to list files which have
+ only metadata changes, though it probably should.
+
+ There may be a Debian bug about this as well.
+
+ -- --
+
+
+Memory accounting
+
+ At exit, show how much memory was used for the file list, etc.
+
+ Also we do a weird exponential-growth allocation in flist.c. I'm
+ not sure this makes sense with modern mallocs. At any rate it will
+ make us allocate a huge amount of memory for large file lists.
+
+ -- --
+
+
+Improve error messages
+
+ If we hang or get SIGINT, then explain where we were up to. Perhaps
+ have a static buffer that contains the current function name, or
+ some kind of description of what we were trying to do. This is a
+ little easier on people than needing to run strace/truss.
+
+ "The dungeon collapses! You are killed." Rather than "unexpected
+ eof" give a message that is more detailed if possible and also more
+ helpful.
+
+ If we get an error writing to a socket, then we should perhaps
+ continue trying to read to see if an error message comes across
+ explaining why the socket is closed. I'm not sure if this would
+ work, but it would certainly make our messages more helpful.
+
+ What happens if a directory is missing -x attributes. Do we lose
+ our load? (Debian #28416) Probably fixed now, but a test case would
+ be good.
+
+
+
+ -- --
+
+
+Better statistics: Rasmus 2002/03/08
+
+
+ hey, how about an rsync option that just gives you the
+ summary without the list of files? And perhaps gives
+ more information like the number of new files, number
+ of changed, deleted, etc. ?
+
+
+ nice idea there is --stats but at the moment it's very
+ tridge-oriented rather than user-friendly it would be
+ nice to improve it that would also work well with
+ --dryrun
+
+ -- --
+
+
+Perhaps flush stdout like syslog
  Perhaps flush stdout after each filename, so that people trying to monitor progress in a log file can do so more easily. See http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=48108
+ -- --
+
+
+Log daemon sessions that just list modules
+ At the moment, connections that just get a list of modules are not logged, but they should be.
+ -- --
+
+
+Log child death on signal
+ If a child of the rsync daemon dies with a signal, we should notice that when we reap it and log a message.
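
A minimal sketch of that reaping check; the real daemon would send the message
through its own logging (rprintf()/syslog) rather than stderr, so treat this as
illustration only:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Reap any finished children; complain if one was killed by a signal. */
    static void reap_children(void)
    {
        int status;
        pid_t pid;

        while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
            if (WIFSIGNALED(status))
                fprintf(stderr,
                    "rsync daemon: child %ld died with signal %d\n",
                    (long)pid, WTERMSIG(status));
        }
    }
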
- Keep stderr and stdout properly separated (Debian #23626) + -- -- - After we get the @RSYNCD greeting from the server, we know it's - version but we have not yet sent the command line, so we could just - remove the -z option if the server is too old. - For ssh invocation it's not so simple, because we actually use the - command line to start the remote process. However, we only actually - do compression in token.c, and we could therefore once we discover - the remote version emit an error if it's too old. I'm not sure if - that's a good tradeoff or not. +Keep stderr and stdout properly separated (Debian #23626) + -- -- -rsyncd over ssh - There are already some patches to do this. +Log errors with function that reports process of origin -proxy authentication + Use a separate function for reporting errors; prefix it with + "rsync:" or "rsync(remote)", or perhaps even "rsync(local + generator): ". - Allow RSYNC_PROXY to be http://user:pass@proxy.foo:3128/, and do - HTTP Basic Proxy-Authentication. + -- -- - Multiple schemes are possible, up to and including the insanity that - is NTLM, but Basic probably covers most cases. -SOCKS +verbose output David Stein 2001/12/20 + + Indicate whether files are new, updated, or deleted - Add --with-socks, and then perhaps a command-line option to put them - on or off. This might be more reliable than LD_PRELOAD hacks. + At end of transfer, show how many files were or were not transferred + correctly. -FAT support + -- -- - rsync to a FAT partition on a Unix machine doesn't work very well - at the moment. I think we get errors about invalid filenames and - perhaps also trying to do atomic renames. - I guess the code to do this is currently #ifdef'd on Windows; perhaps - we ought to intelligently fall back to it on Unix too. +Add reason for transfer to file logging + Explain *why* every file is transferred or not (e.g. "local mtime + 123123 newer than 1283198") -Better statistics: + -- -- - mbp: hey, how about an rsync option that just gives you the - summary without the list of files? And perhaps gives more - information like the number of new files, number of changed, - deleted, etc. ? - Rasmus: nice idea - there is --stats - but at the moment it's very tridge-oriented - rather than user-friendly - it would be nice to improve it - that would also work well with --dryrun -TDB: +debugging of daemon 2002/04/08 + + Add an rsyncd.conf parameter to turn on debugging on the server. + + -- -- + + +internationalization + + Change to using gettext(). Probably need to ship this for platforms + that don't have it. + + Solicit translations. + + Does anyone care? Before we bother modifying the code, we ought to + get the manual translated first, because that's possibly more useful + and at any rate demonstrates desire. + + -- -- + +DEVELOPMENT -------------------------------------------------------- + +Handling duplicate names + + We need to be careful of duplicate names getting into the file list. + See clean_flist(). This could happen if multiple arguments include + the same file. Bad. + + I think duplicates are only a problem if they're both flowing + through the pipeline at the same time. For example we might have + updated the first occurrence after reading the checksums for the + second. So possibly we just need to make sure that we don't have + both in the pipeline at the same time. + + Possibly if we did one directory at a time that would be sufficient. + + Alternatively we could pre-process the arguments to make sure no + duplicates will ever be inserted. 
There could be some bad cases + when we're collapsing symlinks. + + We could have a hash table. + + The root of the problem is that we do not want more than one file + list entry referring to the same file. At first glance there are + several ways this could happen: symlinks, hardlinks, and repeated + names on the command line. + + If names are repeated on the command line, they may be present in + different forms, perhaps by traversing directory paths in different + ways, traversing paths including symlinks. Also we need to allow + for expansion of globs by rsync. + + At the moment, clean_flist() requires having the entire file list in + memory. Duplicate names are detected just by a string comparison. + + We don't need to worry about hard links causing duplicates because + files are never updated in place. Similarly for symlinks. + + I think even if we're using a different symlink mode we don't need + to worry. + + Unless we're really clever this will introduce a protocol + incompatibility, so we need to be able to accept the old format as + well. + + -- -- + + +Use generic zlib 2002/02/25 + + Perhaps don't use our own zlib. + + Advantages: + + - will automatically be up to date with bugfixes in zlib + + - can leave it out for small rsync on e.g. recovery disks + + - can use a shared library + + - avoids people breaking rsync by trying to do this themselves and + messing up + + Should we ship zlib for systems that don't have it, or require + people to install it separately? + + Apparently this will make us incompatible with versions of rsync + that use the patched version of rsync. Probably the simplest way to + do this is to just disable gzip (with a warning) when talking to old + versions. + + -- -- + + +TDB: 2002/03/12 Rather than storing the file list in memory, store it in a TDB. @@ -488,294 +704,298 @@ TDB: This would neatly eliminate one of the major post-fork shared data structures. + -- -- -chmod: - On 12 Mar 2002, Dave Dykstra wrote: - > If we would add an option to do that functionality, I would vote for one - > that was more general which could mask off any set of permission bits and - > possibly add any set of bits. Perhaps a chmod-like syntax if it could be - > implemented simply. +Splint 2002/03/12 - I think that would be good too. For example, people uploading files - to a web server might like to say + Build rsync with SPLINT to try to find security holes. Add + annotations as necessary. Keep track of the number of warnings + found initially, and see how many of them are real bugs, or real + security bugs. Knowing the percentage of likely hits would be + really interesting for other projects. - rsync -avzP --chmod a+rX ./ sourcefrog.net:/home/www/sourcefrog/ + -- -- - Ideally the patch would implement as many of the gnu chmod semantics - as possible. I think the mode parser should be a separate function - that passes back something like (mask,set) description to the rest of - the program. For bonus points there would be a test case for the - parser. - Possibly also --chown +Memory debugger - (Debian #23628) + jra recommends Valgrind: + http://devel-home.kde.org/~sewardj/ ---diff + -- -- - Allow people to specify the diff command. (Might want to use wdiff, - gnudiff, etc.) - Just diff the temporary file with the destination file, and delete - the tmp file rather than moving it into place. +Create release script + + Script would: - Interaction with --partial. + Update spec files - Security interactions with daemon mode? 
+ Build tar file; upload - (Suggestion from david.e.sewell) + Send announcement to mailing list and c.o.l.a. + + Make freshmeat announcement + Update web site -Incorrect timestamps (Debian #100295) + -- -- - A bit hard to believe, but apparently it happens. +Add machines to build farm -Check "refuse options works" + Cygwin (on different versions of Win32?) - We need a test case for this... + HP-UX variants (via HP?) - Was this broken when we changed to popt? + SCO -PERFORMANCE ---------------------------------------------------------- -MD4 file_sum + -- -- - If we're doing a local transfer, or using -W, then perhaps don't - send the file checksum. If we're doing a local transfer, then - calculating MD4 checksums uses 90% of CPU and is unlikely to be - useful. +PERFORMANCE ---------------------------------------------------------- - Indeed for transfers over zlib or ssh we can also rely on the - transport to have quite strong protection against corruption. +File list structure in memory - Perhaps we should have an option to disable this, analogous to - --whole-file, although it would default to disabled. The file - checksum takes up a definite space in the protocol -- we can either - set it to 0, or perhaps just leave it out. + Rather than one big array, perhaps have a tree in memory mirroring + the directory tree. -MD4 + This might make sorting much faster! (I'm not sure it's a big CPU + problem, mind you.) - Perhaps borrow an assembler MD4 from someone? + It might also reduce memory use in storing repeated directory names + -- again I'm not sure this is a problem. - Make sure we call MD4 with properly-sized blocks whenever possible - to avoid copying into the residue region? + -- -- -String area code - Test whether this is actually faster than just using malloc(). If - it's not (anymore), throw it out. - +Traverse just one directory at a time -PLATFORMS ------------------------------------------------------------ + Traverse just one directory at a time. Tridge says it's possible. -Win32 + At the moment rsync reads the whole file list into memory at the + start, which makes us use a lot of memory and also not pipeline + network access as much as we could. - Don't detach, because this messes up --srvany. + -- -- - http://sources.redhat.com/ml/cygwin/2001-08/msg00234.html - According to "Effective TCP/IP Programming" (??) close() on a socket - has incorrect behaviour on Windows -- it sends a RST packet to the - other side, which gives a "connection reset by peer" error. On that - platform we should probably do shutdown() instead. However, on Unix - we are correct to call close(), because shutdown() discards - untransmitted data. +Hard-link handling + At the moment hardlink handling is very expensive, so it's off by + default. It does not need to be so. -DEVELOPMENT ---------------------------------------------------------- + Since most of the solutions are rather intertwined with the file + list it is probably better to fix that first, although fixing + hardlinks is possibly simpler. -Splint + We can rule out hardlinked directories since they will probably + screw us up in all kinds of ways. They simply should not be used. - Build rsync with SPLINT to try to find security holes. Add - annotations as necessary. Keep track of the number of warnings - found initially, and see how many of them are real bugs, or real - security bugs. Knowing the percentage of likely hits would be - really interesting for other projects. + At the moment rsync only cares about hardlinks to regular files. 
I + guess you could also use them for sockets, devices and other beasts, + but I have not seen them. -Torture test + When trying to reproduce hard links, we only need to worry about + files that have more than one name (nlinks>1 && !S_ISDIR). - Something that just keeps running rsync continuously over a data set - likely to generate problems. + The basic point of this is to discover alternate names that refer to + the same file. All operations, including creating the file and + writing modifications to it need only to be done for the first name. + For all later names, we just create the link and then leave it + alone. -Cross-testing + If hard links are to be preserved: - Run current rsync versions against significant past releases. + Before the generator/receiver fork, the list of files is received + from the sender (recv_file_list), and a table for detecting hard + links is built. -Memory debugger + The generator looks for hard links within the file list and does + not send checksums for them, though it does send other metadata. - jra recommends Valgrind: + The sender sends the device number and inode with file entries, so + that files are uniquely identified. - http://devel-home.kde.org/~sewardj/ + The receiver goes through and creates hard links (do_hard_links) + after all data has been written, but before directory permissions + are set. -Release script - - Update spec files + At the moment device and inum are sent as 4-byte integers, which + will probably cause problems on large filesystems. On Linux the + kernel uses 64-bit ino_t's internally, and people will soon have + filesystems big enough to use them. We ought to follow NFS4 in + using 64-bit device and inode identification, perhaps with a + protocol version bump. - Build tar file; upload + Once we've seen all the names for a particular file, we no longer + need to think about it and we can deallocate the memory. - Send announcement to mailing list and c.o.l.a. - - Make freshmeat announcement + We can also have the case where there are links to a file that are + not in the tree being transferred. There's nothing we can do about + that. Because we rename the destination into place after writing, + any hardlinks to the old file are always going to be orphaned. In + fact that is almost necessary because otherwise we'd get really + confused if we were generating checksums for one name of a file and + modifying another. - Update web site + At the moment the code seems to make a whole second copy of the file + list, which seems unnecessary. + We should have a test case that exercises hard links. Since it + might be hard to compare ./tls output where the inodes change we + might need a little program to check whether several names refer to + the same file. + -- -- -TESTING -------------------------------------------------------------- -Cross-test versions +Allow skipping MD4 file_sum 2002/04/08 - Part of the regression suite should be making sure that we don't - break backwards compatibility: old clients vs new servers and so - on. Ideally we would test both up and down from the current release - to all old versions. + If we're doing a local transfer, or using -W, then perhaps don't + send the file checksum. If we're doing a local transfer, then + calculating MD4 checksums uses 90% of CPU and is unlikely to be + useful. 
- We might need to omit broken old versions, or versions in which - particular functionality is broken + Indeed for transfers over zlib or ssh we can also rely on the + transport to have quite strong protection against corruption. - It might be sufficient to test downloads from well-known public - rsync servers running different versions of rsync. This will give - some testing and also be the most common case for having different - versions and not being able to upgrade. + Perhaps we should have an option to disable this, + analogous to --whole-file, although it would default to + disabled. The file checksum takes up a definite space in + the protocol -- we can either set it to 0, or perhaps just + leave it out. + -- -- -Test on kernel source - Download all versions of kernel; unpack, sync between them. Also - sync between uncompressed tarballs. Compare directories after - transfer. +Accelerate MD4 - Use local mode; ssh; daemon; --whole-file and --no-whole-file. + Perhaps borrow an assembler MD4 from someone? - Use awk to pull out the 'speedup' number for each transfer. Make - sure it is >= x. + Make sure we call MD4 with properly-sized blocks whenever possible + to avoid copying into the residue region? + -- -- -Test large files - Sparse and non-sparse +String area code -Mutator program + Test whether this is actually faster than just using malloc(). If + it's not (anymore), throw it out. - Insert bytes, delete bytes, swap blocks, ... + -- -- -configure option to enable dangerous tests +TESTING -------------------------------------------------------------- -If tests are skipped, say why. +Torture test -Test daemon feature to disallow particular options. + Something that just keeps running rsync continuously over a data set + likely to generate problems. -Pipe program that makes slow/jerky connections. + -- -- -Versions of read() and write() that corrupt the stream, or abruptly fail -Separate makefile target to run rough tests -- or perhaps just run -them every time? +Cross-test versions 2001/08/22 -Test "refuse options" works + Part of the regression suite should be making sure that we + don't break backwards compatibility: old clients vs new + servers and so on. Ideally we would test both up and down + from the current release to all old versions. - What about for --recursive? + Run current rsync versions against significant past releases. - If you specify an unrecognized option here, you should get an error. + We might need to omit broken old versions, or versions in which + particular functionality is broken + It might be sufficient to test downloads from well-known public + rsync servers running different versions of rsync. This will give + some testing and also be the most common case for having different + versions and not being able to upgrade. -DOCUMENTATION -------------------------------------------------------- + The new --protocol option may help in this. -Update README + -- -- -Keep list of open issues and todos on the web site -Update web site from CVS +Test on kernel source + Download all versions of kernel; unpack, sync between them. Also + sync between uncompressed tarballs. Compare directories after + transfer. -Perhaps redo manual as SGML + Use local mode; ssh; daemon; --whole-file and --no-whole-file. - The man page is getting rather large, and there is more information - that ought to be added. + Use awk to pull out the 'speedup' number for each transfer. Make + sure it is >= x. - TexInfo source is probably a dying format. 
+ -- -- - Linuxdoc looks like the most likely contender. I know DocBook is - favoured by some people, but it's so bloody verbose, even with emacs - support. +Test large files -BUILD FARM ----------------------------------------------------------- + Sparse and non-sparse -Add machines + -- -- - AMDAHL UTS (Dave Dykstra) - Cygwin (on different versions of Win32?) +Create mutator program for testing - HP-UX variants (via HP?) + Insert bytes, delete bytes, swap blocks, ... - SCO + -- -- -LOGGING -------------------------------------------------------------- +Create configure option to enable dangerous tests - Perhaps flush stdout after each filename, so that people trying to - monitor progress in a log file can do so more easily. See - http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=48108 + -- -- - At the connections that just get a list of modules are not logged, - but they should be. - If a child of the rsync daemon dies with a signal, we should notice - that when we reap it and log a message. +If tests are skipped, say why. - Keep stderr and stdout properly separated (Debian #23626) + -- -- - Use a separate function for reporting errors; prefix it with - "rsync:" or "rsync(remote)", or perhaps even "rsync(local - generator): ". -verbose output - - Indicate whether files are new, updated, or deleted +Test daemon feature to disallow particular options. - At end of transfer, show how many files were or were not transferred - correctly. + -- -- --vv - Explain *why* every file is transferred or not (e.g. "local mtime - 123123 newer than 1283198") +Create pipe program for testing + Create pipe program that makes slow/jerky connections for + testing Versions of read() and write() that corrupt the + stream, or abruptly fail -debugging of daemon + -- -- - Add an rsyncd.conf parameter to turn on debugging on the server. +Create test makefile target for some tests + Separate makefile target to run rough tests -- or perhaps + just run them every time? -NICE ----------------------------------------------------------------- + -- -- ---no-detach and --no-fork options - Very useful for debugging. Also good when running under a - daemon-monitoring process that tries to restart the service when the - parent exits. +Test "refuse options" works -hang/timeout friendliness + What about for --recursive? -internationalization + If you specify an unrecognized option here, you should get an error. - Change to using gettext(). Probably need to ship this for platforms - that don't have it. + We need a test case for this... - Solicit translations. + Was this broken when we changed to popt? - Does anyone care? Before we bother modifying the code, we ought to - get the manual translated first, because that's possibly more useful - and at any rate demonstrates desire. + -- -- + +RELATED PROJECTS ----------------------------------------------------- -rsyncsh +rsyncsh Write a small emulation of interactive ftp as a Pythonn program that calls rsync. Commands such as "cd", "ls", "ls *.c" etc map @@ -783,20 +1003,33 @@ rsyncsh current host, directory and so on. We can probably even do completion of remote filenames. + -- -- -RELATED PROJECTS ----------------------------------------------------- http://rsync.samba.org/rsync-and-debian/ + + -- -- + + rsyncable gzip patch Exhaustive, tortuous testing Cleanups? + -- -- + + rsyncsplit as alternative to real integration with gzip? + -- -- + + reverse rsync over HTTP Range Goswin Brederlow suggested this on Debian; I think tridge and I talked about it previous in relation to rproxy. 
+ + -- -- +
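
Following up the "Create pipe program for testing" item in the TESTING section,
a rough sketch of the relay it describes: copy stdin to stdout in small bursts
with random pauses, so a test harness can wedge a slow, jerky transport between
the two rsync processes. The corrupting and abruptly-failing read()/write()
variants would hang off the same loop; the timings below are arbitrary:

    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[512];
        ssize_t n;

        /* Relay in 512-byte bursts with a 10-60 ms hiccup after each one. */
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
            if (write(STDOUT_FILENO, buf, n) != n)
                return 1;
            usleep(10000 + rand() % 50000);
        }
        return n < 0 ? 1 : 0;
    }
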