URGENT ---------------------------------------------------------------

IMPORTANT ------------------------------------------------------------
Part of the regression suite should be making sure that we don't
break backwards compatibility: old clients vs new servers and so
on. Ideally we would test the cross product of versions.

It might be sufficient to test downloads from well-known public
rsync servers running different versions of rsync. This will give
some testing and also be the most common case for having different
versions and not being able to upgrade.
If the platform doesn't support it, then don't even try.

If running as non-root, then don't fail, just give a warning.
(There was a thread about this a while ago?)

http://lists.samba.org/pipermail/rsync/2001-August/thread.html
http://lists.samba.org/pipermail/rsync/2001-September/thread.html
Avoids traversal. A better option than a pile of --include statements
for people who want to generate the file list using a find(1)
command.
File list structure in memory
Rather than one big array, perhaps have a tree in memory mirroring
the directory tree.

This might make sorting much faster! (I'm not sure it's a big CPU
problem, though.)

It might also reduce memory use in storing repeated directory names
-- again I'm not sure this is a problem.
Traverse just one directory at a time. Tridge says it's possible.

At the moment rsync reads the whole file list into memory at the
start, which makes us use a lot of memory and also not pipeline
network access as much as we could.
Handling duplicate names
We need to be careful of duplicate names getting into the file list.
See clean_flist(). This could happen if multiple arguments include
the same file.
I think duplicates are only a problem if they're both flowing
through the pipeline at the same time. For example we might have
updated the first occurrence after reading the checksums for the
second. So possibly we just need to make sure that we don't have
both in the pipeline at the same time.

Possibly if we did one directory at a time that would be sufficient.
Alternatively we could pre-process the arguments to make sure no
duplicates will ever be inserted. There could be some bad cases
when we're collapsing symlinks.

We could have a hash table.
The root of the problem is that we do not want more than one file
list entry referring to the same file. At first glance there are
several ways this could happen: symlinks, hardlinks, and repeated
names on the command line.

If names are repeated on the command line, they may be present in
different forms, perhaps reached by traversing directory paths in
different ways or through paths that include symlinks. We also need
to allow for expansion of globs by rsync.
At the moment, clean_flist() requires having the entire file list in
memory. Duplicate names are detected just by a string comparison.
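A minimal sketch of that approach, assuming (as clean_flist() does)
that the list is first sorted so duplicates become adjacent; the
function and field names here are hypothetical, not rsync's:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static int cmp_name(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* Sort the names, then drop any entry equal to its neighbour by
 * NULLing the earlier slot (keeping the last occurrence).  Returns
 * the number of duplicates found. */
static int clean_names(const char **names, size_t n)
{
    int dups = 0;
    qsort(names, n, sizeof(*names), cmp_name);
    for (size_t i = 1; i < n; i++) {
        if (strcmp(names[i - 1], names[i]) == 0) {
            names[i - 1] = NULL;
            dups++;
        }
    }
    return dups;
}
```

Sorting first is what makes a plain string comparison sufficient:
duplicates can only ever be adjacent.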
We don't need to worry about hard links causing duplicates because
files are never updated in place. Similarly for symlinks.

I think even if we're using a different symlink mode we don't need
to worry about this.

Unless we're really clever this will introduce a protocol
incompatibility, so we need to be able to accept the old format as
well.
At exit, show how much memory was used for the file list, etc.

Also we do a weird exponential-growth allocation in flist.c. I'm
not sure this makes sense with modern mallocs. At any rate it will
make us allocate a huge amount of memory for large file lists.

We can try using the GNU/SVID/XPG mallinfo() function to get some
statistics on memory use.
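For reference, the growth pattern in question looks roughly like the
sketch below (hypothetical names, not flist.c's): capacity doubles
each time the list fills, so a list of N entries can hold up to 2N
slots of allocated but unused memory.

```c
#include <stdlib.h>

struct name_list {
    char **names;
    size_t used, alloc;
};

/* Append one entry, doubling capacity on overflow.  Amortised O(1)
 * per append, but the allocation can overshoot by up to 2x. */
static int list_push(struct name_list *l, char *name)
{
    if (l->used == l->alloc) {
        size_t newalloc = l->alloc ? l->alloc * 2 : 16;
        char **p = realloc(l->names, newalloc * sizeof(*p));
        if (!p)
            return -1;
        l->names = p;
        l->alloc = newalloc;
    }
    l->names[l->used++] = name;
    return 0;
}
```

A modern malloc already does cheap in-place growth for many sizes, so
a smaller growth factor (or exact sizing from a pre-count) might cut
peak memory for very large file lists.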
At the moment hardlink handling is very expensive, so it's off by
default. It does not need to be so.

Since most of the solutions are rather intertwined with the file
list it is probably better to fix that first, although fixing
hardlinks is possibly simpler.

We can rule out hardlinked directories since they will probably
screw us up in all kinds of ways. They simply should not be used.

At the moment rsync only cares about hardlinks to regular files. I
guess you could also use them for sockets, devices and other beasts,
but I have not seen them.
When trying to reproduce hard links, we only need to worry about
files that have more than one name (nlinks>1 && !S_ISDIR).
The basic point of this is to discover alternate names that refer to
the same file. All operations, including creating the file and
writing modifications to it, need only be done for the first name.
For all later names, we just create the link and then leave it
alone.

If hard links are to be preserved:
Before the generator/receiver fork, the list of files is received
from the sender (recv_file_list), and a table for detecting hard
links is built.

The generator looks for hard links within the file list and does
not send checksums for them, though it does send other metadata.

The sender sends the device number and inode with file entries, so
that files are uniquely identified.

The receiver goes through and creates hard links (do_hard_links)
after all data has been written, but before directory permissions
are set.
At the moment device and inum are sent as 4-byte integers, which
will probably cause problems on large filesystems. On Linux the
kernel uses 64-bit ino_t's internally, and people will soon have
filesystems big enough to use them. We ought to follow NFS4 in
using 64-bit device and inode identification, perhaps with a
protocol version bump.
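The identity key would then be a pair of 64-bit values, something
like the following sketch (names hypothetical); equality and a total
order are all the hard-link table needs:

```c
#include <stdint.h>

/* 64-bit file identity: two names refer to the same file iff both
 * the device and the inode match. */
struct file_id {
    uint64_t dev;
    uint64_t ino;
};

static int file_id_eq(struct file_id a, struct file_id b)
{
    return a.dev == b.dev && a.ino == b.ino;
}

/* Total order for sorting/searching the hard-link table. */
static int file_id_cmp(struct file_id a, struct file_id b)
{
    if (a.dev != b.dev)
        return a.dev < b.dev ? -1 : 1;
    if (a.ino != b.ino)
        return a.ino < b.ino ? -1 : 1;
    return 0;
}
```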
Once we've seen all the names for a particular file, we no longer
need to think about it and we can deallocate the memory.
We can also have the case where there are links to a file that are
not in the tree being transferred. There's nothing we can do about
that. Because we rename the destination into place after writing,
any hardlinks to the old file are always going to be orphaned. In
fact that is almost necessary, because otherwise we'd get really
confused if we were generating checksums for one name of a file
while writing to it through another name.

At the moment the code seems to make a whole second copy of the file
list, which seems unnecessary.
We should have a test case that exercises hard links. Since it
might be hard to compare ./tls output where the inodes change we
might need a little program to check whether several names refer to
the same file.
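The core of that little program could be as small as this sketch:
stat both names and compare (st_dev, st_ino), which works even though
the inode numbers themselves change between test runs.

```c
#include <sys/stat.h>

/* Returns 1 if both names refer to the same file, 0 if not,
 * -1 if either name cannot be stat'd. */
static int same_file(const char *a, const char *b)
{
    struct stat sa, sb;
    if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
        return -1;
    return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}
```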
Implement suggestions from http://www.kame.net/newsletter/19980604/
and ftp://ftp.iij.ad.jp/pub/RFC/rfc2553.txt
If a host has multiple addresses, then try to connect to each of
them in order until we get through. (getaddrinfo may return multiple
addresses.) This is kind of implemented already.
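The canonical getaddrinfo() loop for this looks roughly like the
sketch below (the function name is hypothetical): walk the returned
list, and stop at the first address that accepts the connection.

```c
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try every address getaddrinfo() returns for host:port, in order.
 * Returns a connected socket, or -1 if every address failed. */
static int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    for (ai = res; ai; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                    /* success */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}
```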
Possibly also when starting as a server we may need to listen on
multiple passive addresses. This might be a bit harder, because we
may need to select on all of them. Hm.
Define a syntax for IPv6 literal addresses. Since they include
colons, they tend to break most naming systems, including ours.
Based on the HTTP IPv6 syntax, I think we should use

rsync://[::1]/foo/bar

which should just take a small change to the parser code.
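The parser change amounts to this: if the host part starts with '[',
scan to the matching ']' instead of stopping at the first colon. A
sketch (hypothetical function, not rsync's actual parser):

```c
#include <stddef.h>
#include <string.h>

/* Copy the host part of a URL remainder (everything after the
 * scheme) into `host`.  A bracketed IPv6 literal runs to ']', so its
 * colons are not mistaken for a port separator.  Returns a pointer
 * to the rest of the string, or NULL on malformed input. */
static const char *parse_host(const char *s, char *host, size_t len)
{
    const char *end;
    size_t n;

    if (*s == '[') {
        end = strchr(++s, ']');
        if (!end)
            return NULL;        /* unterminated literal */
        n = (size_t)(end - s);
        end++;                  /* skip ']' */
    } else {
        n = strcspn(s, ":/");   /* host ends at port or path */
        end = s + n;
    }
    if (n >= len)
        return NULL;
    memcpy(host, s, n);
    host[n] = '\0';
    return end;
}
```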
If we hang or get SIGINT, then explain where we were up to. Perhaps
have a static buffer that contains the current function name, or
some kind of description of what we were trying to do. This is a
little easier on people than needing to run strace/truss.
"The dungeon collapses! You are killed." Rather than "unexpected
eof" give a message that is more detailed if possible and also more
helpful.
Device major/minor numbers should be at least 32 bits each. See
http://lists.samba.org/pipermail/rsync/2001-November/005357.html

Transfer ACLs. Need to think of a standard representation.
Probably better not to even try to convert between NT and POSIX.
Possibly can share some code with Samba.
With the current common --include '*/' --exclude '*' pattern, people
can end up with many empty directories. We might avoid this by
lazily creating such directories.
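One way to do the lazy creation, sketched below with a hypothetical
helper: instead of making every directory in the file list up front,
create a file's parent directories only when the first file inside
them arrives, so directories that would stay empty are never made.

```c
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

/* Create every parent directory of `path` (but not the final
 * component itself), tolerating ones that already exist. */
static int mkdir_parents(const char *path)
{
    char buf[1024];
    size_t len = strlen(path);

    if (len >= sizeof(buf))
        return -1;
    memcpy(buf, path, len + 1);
    for (char *p = buf + 1; *p; p++) {
        if (*p != '/')
            continue;
        *p = '\0';
        if (mkdir(buf, 0755) != 0 && errno != EEXIST)
            return -1;
        *p = '/';
    }
    return 0;
}
```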
Perhaps don't use our own zlib. Will we actually be incompatible,
or just be slightly less efficient?

Perhaps flush stdout after each filename, so that people trying to
monitor progress in a log file can do so more easily. See
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=48108
At the moment, connections that just get a list of modules are not
logged, but they should be.

There are already some patches to do this.
Allow RSYNC_PROXY to be http://user:pass@proxy.foo:3128/, and do
HTTP Basic Proxy-Authentication.

Multiple schemes are possible, up to and including the insanity that
is NTLM, but Basic probably covers most cases.

Add --with-socks, and then perhaps a command-line option to turn it
on or off. This might be more reliable than LD_PRELOAD hacks.
PLATFORMS ------------------------------------------------------------

Don't detach, because this messes up --srvany.

http://sources.redhat.com/ml/cygwin/2001-08/msg00234.html
According to "Effective TCP/IP Programming" (??) close() on a socket
has incorrect behaviour on Windows -- it sends a RST packet to the
other side, which gives a "connection reset by peer" error. On that
platform we should probably do shutdown() instead. However, on Unix
we are correct to call close(), because shutdown() does not close
the file descriptor.
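A portability wrapper along these lines could hide the difference
(the function name is hypothetical, and the _WIN32 branch is a
sketch, untested here):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Close a socket without resetting the connection.  On Windows,
 * shut the send side down first so the peer sees a normal EOF (FIN)
 * rather than a RST; on Unix a plain close() is already correct. */
static int sock_close(int fd)
{
#ifdef _WIN32
    shutdown(fd, SD_SEND);
    return closesocket(fd);
#else
    return close(fd);
#endif
}
```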
DOCUMENTATION --------------------------------------------------------

BUILD FARM -----------------------------------------------------------

AMDAHL UTS (Dave Dykstra)

Cygwin (on different versions of Win32?)

HP-UX variants (via HP?)
NICE -----------------------------------------------------------------

--no-detach and --no-fork options

Very useful for debugging. Also good when running under a
daemon-monitoring process that tries to restart the service when the
daemon exits.

hang/timeout friendliness
Indicate whether files are new, updated, or deleted

Change to using gettext(). Probably need to ship this for platforms
that don't have it.

Solicit translations.
Write a small emulation of interactive ftp as a Python program
that calls rsync. Commands such as "cd", "ls", "ls *.c" etc map
fairly directly into rsync commands: it just needs to remember the
current host, directory and so on. We can probably even do
completion of remote filenames.