The Crazy Hacker's Crazy Guide to Bup Craziness
===============================================

Despite what you might have heard, bup is not that crazy, and neither are
you if you're trying to figure out how it works. But it's also (as of this
writing) rather new and the source code doesn't have a lot of comments, so
it can be a little confusing at first glance. This document is designed to
make it easier for you to get started if you want to add a new feature, fix
a bug, or just understand how it all works.


Bup Source Code Layout
----------------------

As you're reading this, you might want to look at different parts of the bup
source code to follow along and see what we're talking about. bup's code is
written primarily in python with a bit of C code in speed-sensitive places.
Here are the most important things to know:

 - bup (symlinked to main.py) is the main program that runs when you type
   'bup'.

 - cmd/bup-* (mostly symlinked to cmd/*-cmd.py) are the individual
   subcommands, in a way similar to how git breaks all its subcommands into
   separate programs. Not all the programs have to be written in python;
   they could be in any language, as long as they end up named cmd/bup-*.
   We might end up re-coding large parts of bup in C eventually so that it
   can be even faster and (perhaps) more portable.

 - lib/bup/*.py are python library files used by the cmd/*.py commands.
   That directory name seems a little silly (and worse, redundant) but there
   seemed to be no better way to let programs write "from bup import
   index" and have it work. Putting bup in the top level conflicted with
   the 'bup' command; calling it anything other than 'bup' was fundamentally
   wrong, and doesn't work when you install bup on your system in /usr/lib
   somewhere. So we get the annoyingly long paths.


Repository Structure
====================

Before we can talk about how bup works, we first need to address what it
does. The purpose of bup is essentially to let you "replicate" data between
two main data structures:

1. Your computer's filesystem;

2. A bup repository. (Yes, we know, that part also resides in your
   filesystem. Stop trying to confuse yourself. Don't worry, we'll be
   plenty confusing enough as it is.)

Essentially, copying data from the filesystem to your repository is called
"backing stuff up," which is what bup specializes in. Normally you initiate
a backup using the 'bup save' command, but that's getting ahead of
ourselves.

For the inverse operation, ie. copying from the repository to your
filesystem, you have several choices; the main ones are 'bup restore', 'bup
ftp', 'bup fuse', and 'bup web'.

Now, those are the basics of backups. In other words, we just spent about
half a page telling you that bup backs up and restores data. Are we having
fun yet?

The next thing you'll want to know is the format of the bup repository,
because hacking on bup is rather impossible unless you understand that part.
In short, a bup repository is a git repository. If you don't know about
git, you'll want to read about it now. A really good article to read is
"Git for Computer Scientists" - you can find it in Google. Go read it now.
We'll wait.

Got it? Okay, so now you're an expert in blobs, trees, commits, and refs,
the four building blocks of a git repository. bup uses these four things,
and they're formatted in exactly the same way as git does it, so you can use
git to manipulate the bup repository if you want, and you probably won't
break anything. It's also a comfort to know you can squeeze data out using
git, just in case bup fails you, and as a developer, git offers some nice
tools (like 'git rev-list' and 'git log' and 'git diff' and 'git show' and
so on) that allow you to explore your repository and help debug when things
go wrong.

Now, bup does use these tools a little bit differently than plain git. We
need to do this in order to address three deficiencies in git when used for
large backups, namely a) git bogs down and crashes if you give it really
large files; b) git is too slow when you give it too many files; and c) git
doesn't store detailed filesystem metadata.

Let's talk about each of those problems in turn.


Handling large files (cmd/split, hashsplit.split_to_blob_or_tree)
--------------------

The primary reason git can't handle huge files is that it runs them through
xdelta, which generally means it tries to load the entire contents of a file
into memory at once. If it didn't do this, it would have to store the
entire contents of every single revision of every single file, even if you
only changed a few bytes of that file. That would be a terribly inefficient
use of disk space, and git is well known for its amazingly efficient
repository format.

Unfortunately, xdelta works great for small files and gets amazingly slow
and memory-hungry for large files. For git's main purpose, ie. managing
your source code, this isn't a problem. But when backing up your
filesystem, you're going to have at least a few large files, and so it's a
non-starter. bup has to do something totally different.

What bup does instead of xdelta is what we call "hashsplitting." We wanted
a general-purpose way to efficiently back up *any* large file that might
change in small ways, without storing the entire file every time. In fact,
the original versions of bup could only store a single file at a time;
surprisingly enough, this was enough to give us a large part of bup's
functionality. If you just take your entire filesystem and put it in a
giant tarball each day, then send that tarball to bup, bup will be able to
efficiently store only the changes to that tarball from one day to the next.
For small files, bup's compression won't be as good as xdelta's, but for
anything over a few megabytes in size, bup's compression will actually
*work*, which is a big advantage over xdelta.

How does hashsplitting work? It's deceptively simple. We read through the
file one byte at a time, calculating a rolling checksum of the last 128
bytes. (Why 128? No reason. Literally. We picked it out of the air.
Probably some other number is better. Feel free to join the mailing list
and tell us which one and why.) (The rolling checksum idea is actually
stolen from rsync and xdelta, although we use it differently. And they use
some kind of variable window size based on a formula we don't totally
understand.)

The original rolling checksum algorithm we used was called "stupidsum,"
because it was based on the only checksum Avery remembered how to calculate at
the time. He also remembered that it was the introductory checksum
algorithm in a whole article about how to make good checksums that he read
about 15 years ago, and it was thoroughly discredited in that article for
being very stupid. But, as so often happens, Avery couldn't remember any
better algorithms from the article. So what we got is stupidsum.

Since then, we have replaced the stupidsum algorithm with what we call
"rollsum," based on code in librsync. It's essentially the same as what
rsync does, except we use a fixed window size.

(If you're a computer scientist and can demonstrate that some other rolling
08faabb @apenwarr First draft of a DESIGN document.
apenwarr authored
checksum would be faster and/or better and/or have fewer screwy edge cases,
we need your help! Avery's out of control! Join our mailing list! Please!
Save us! ... oh boy, I sure hope he doesn't read this)

In any case, rollsum seems to do pretty well at its job.
You can find it in bupsplit.c. Basically, it converts the last 128 bytes
read into a 32-bit integer. What we then do is take the lowest 13
bits of the rollsum, and if they're all 1's, we consider that to be the end
of a chunk. This happens on average once every 2^13 = 8192 bytes, so the
average chunk size is 8192 bytes.

(Why 13 bits? Well, we picked the number at random and... eugh. You're
getting the idea, right? Join the mailing list and tell us why we're
wrong.)
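
To make the boundary rule concrete, here's a tiny sketch of the chunking
loop in python. It uses a trivial additive rolling sum instead of bup's
real rollsum from bupsplit.c (so the exact boundaries it picks won't match
bup's), but the 13-bit test and the resulting ~8192-byte average chunk size
work the same way:

    WINDOW = 128          # rolling checksum window, as described above
    BOUNDARY_BITS = 13    # a chunk ends when the lowest 13 bits are all 1's
    BOUNDARY_MASK = (1 << BOUNDARY_BITS) - 1

    def chunks(data):
        """Yield hashsplit-style chunks of 'data' (a bytes object).
        Toy additive checksum for illustration only, not bup's rollsum."""
        start = 0
        rolling = 0
        for i, byte in enumerate(data):
            rolling += byte                     # newest byte enters the window
            if i >= WINDOW:
                rolling -= data[i - WINDOW]     # oldest byte leaves the window
            if (rolling & BOUNDARY_MASK) == BOUNDARY_MASK:
                yield data[start:i + 1]         # lowest 13 bits all 1's: chunk ends
                start = i + 1
        if start < len(data):
            yield data[start:]                  # leftover bytes at end of file

Feed the same file through that twice and you get identical boundaries;
insert a few bytes in the middle and only the chunk containing them (and
maybe its immediate neighbour) comes out different.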

(Incidentally, even though the average chunk size is 8192 bytes, the actual
probability distribution of block sizes ends up being non-uniform; if we
remember our stats classes correctly, which we probably don't, it's probably
an "exponential distribution." The idea is that for each byte in the block,
the probability that it's the last byte of the block is one in 8192. Thus,
the block sizes end up being skewed toward the smaller end. That's not
necessarily for the best, but maybe it is. Computer science to the rescue?
You know the drill.)

Anyway, so we're dividing up those files into chunks based on the rolling
checksum. Then we store each chunk separately (indexed by its sha1sum) as a
git blob. Why do we split this way? Well, because the results are actually
really nice. Let's imagine you have a big mysql database dump (produced by
mysqldump) and it's basically 100 megs of SQL text. Tomorrow's database
dump adds 100 rows to the middle of the file somewhere, so it's 100.01 megs
of text.

A naive block splitting algorithm - for example, just dividing the file into
8192-byte blocks - would be a disaster. After the first bit of text has
changed, every block after that would have a different boundary, so most of
the blocks in the new backup would be different from the previous ones, and
you'd have to store the same data all over again. But with hashsplitting,
no matter how much data you add, modify, or remove in the middle of the
file, all the chunks *before* and *after* the affected chunk are absolutely
the same. All that matters to the hashsplitting algorithm is the 128-byte
"separator" sequence, and a single change can only affect, at most, one
separator sequence or the bytes between two separator sequences. And
because of rollsum, about one in 8192 possible 128-byte sequences is a
separator sequence. Like magic, the hashsplit chunking algorithm will chunk
your file the same way every time, even without knowing how it had chunked
it previously.

The next problem is less obvious: after you store your series of chunks as
git blobs, how do you store their sequence? Each blob has a 20-byte sha1
identifier, which means the simple list of blobs is going to be 20/8192 =
0.25% of the file length. For a 200GB file, that's 488 megs of just
sequence data.
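
(If you want to check that arithmetic, here's the back-of-the-envelope
version; the 200 GB figure and the 8192-byte average chunk size are just the
same assumptions we've been using above:

    chunk_size = 8192                # average chunk size from hashsplitting
    sha1_bytes = 20                  # one blob id per chunk
    file_size = 200 * 10**9          # a 200 GB file

    overhead = file_size * sha1_bytes / chunk_size
    print(overhead / 10**6)                  # ~488 megs of raw sha1s
    print(100.0 * sha1_bytes / chunk_size)   # ~0.24%, ie. the 0.25% above

Nothing deep, but it's nice to see where the numbers come from.)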

As an overhead percentage, 0.25% basically doesn't matter. 488 megs sounds
like a lot, but compared to the 200GB you have to store anyway, it's
irrelevant. What *is* relevant is that 488 megs is a lot of memory you have
to use in order to keep track of the list. Worse, if you back up an
almost-identical file tomorrow, you'll have *another* 488 meg blob to keep
track of, and it'll be almost but not quite the same as last time.

Hmm, big files, each one almost the same as the last... you know where this
is going, right?

Actually no! Ha! We didn't split this list in the same way. We could
have, in fact, but it wouldn't have been very "git-like", since we'd like to
store the list as a git 'tree' object in order to make sure git's
refcounting and reachability analysis doesn't get confused. Never mind the
fact that we want you to be able to 'git checkout' your data without any
special tools.

What we do instead is extend the hashsplit algorithm a little further
using what we call "fanout." Instead of checking just the last 13 bits of
the checksum, we use additional checksum bits to produce additional splits.
For example, let's say we use a 4-bit fanout. That means we'll break a
series of chunks into its own tree object whenever the last 13+4 = 17 bits
of the rolling checksum are 1. Naturally, whenever the lowest 17 bits are
1, the lowest 13 bits are *also* 1, so the boundary of a chunk group is
always also the boundary of a particular chunk.

And so on. Eventually you'll have too many chunk groups, but you can group
them into supergroups by using another 4 bits, and continue from there.
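
Here's one way to picture it - a sketch of the idea, not bup's actual
hashsplit code: at every chunk boundary, count how many extra 4-bit groups
above the first 13 bits are also all 1's, and call that the boundary's
"level." A level-0 boundary just ends a blob; a level-1 boundary also
closes the current chunk group; a level-2 boundary closes the supergroup
above that; and so on. On average one boundary in 16 reaches level 1, one
in 256 reaches level 2, etc.

    BOUNDARY_BITS = 13   # ends an ordinary chunk
    FANOUT_BITS = 4      # each further group of 4 one-bits closes a tree level

    def boundary_level(rollsum):
        """Return -1 if this isn't a boundary, else how many tree levels it closes."""
        if (rollsum & ((1 << BOUNDARY_BITS) - 1)) != (1 << BOUNDARY_BITS) - 1:
            return -1
        rollsum >>= BOUNDARY_BITS
        level = 0
        while (rollsum & ((1 << FANOUT_BITS) - 1)) == (1 << FANOUT_BITS) - 1:
            level += 1
            rollsum >>= FANOUT_BITS
        return level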

What you end up with is an actual tree of blobs - which git 'tree' objects
are ideal to represent. And if you think about it, just like the original
list of chunks, the tree itself is pretty stable across file modifications.
Any one modification will only affect the chunks actually containing the
modifications, thus only the groups containing those chunks, and so on up
the tree. Essentially, the number of changed git objects is O(log n)
where n is the number of chunks. Since log 200 GB, using a base of 16 or
so, is not a very big number, this is pretty awesome. Remember, any git
object we *don't* change in a new backup is one we can reuse from last time,
so the deduplication effect is pretty awesome.

Better still, the hashsplit-tree format is good for a) random instead of
sequential access to data (which you can see in action with 'bup fuse'); and
b) quickly showing the differences between huge files (which we haven't
really implemented because we don't need it, but you can try 'git diff -M -C
-C backup1 backup2 -- filename' for a good start).

So now we've split our 200 GB file into about 24 million pieces. That
brings us to git limitation number 2.


Handling huge numbers of files (git.PackWriter)
------------------------------

git is designed for handling reasonably-sized repositories that change
relatively infrequently. (You might think you change your source code
"frequently" and that git handles much more frequent changes than, say, svn
can handle. But that's not the same kind of "frequently" we're talking
about. Imagine you're backing up all the files on your disk, and one of
those files is a 100 GB database file with hundreds of daily users. Your
disk changes so frequently you can't even back up all the revisions even if
you were backing stuff up 24 hours a day. That's "frequently.")

git's way of doing things works really nicely for the way software
developers write software, but it doesn't really work so well for everything
else. The #1 killer is the way it adds new objects to the repository: it
creates one file per blob. Then you later run 'git gc' and combine those
files into a single file (using highly efficient xdelta compression, and
ignoring any files that are no longer relevant).

'git gc' is slow, but for source code repositories, the resulting
super-efficient storage (and associated really fast access to the stored
files) is worth it. For backups, it's not; you almost never access your
backed-up data, so storage time is paramount, and retrieval time is mostly
unimportant.

To back up that 200 GB file with git and hashsplitting, you'd have to create
24 million little 8k files, then copy them into a 200 GB packfile, then
delete the 24 million files again. That would take about 400 GB of disk
space to run, require lots of random disk seeks, and require you to go
through your data twice.

So bup doesn't do that. It just writes packfiles directly. Luckily, these
packfiles are still git-formatted, so git can happily access them once
they're written.
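
If you've never looked at the pack format, the idea is roughly this. The
sketch below is a deliberately simplified illustration, not bup's actual
git.PackWriter, and it skips the .idx generation entirely:

    import hashlib, struct, zlib

    def obj_header(obj_type, size):
        # git pack object header: type in bits 4-6 of the first byte, size in
        # little-endian base-128 with a continuation bit in the high bit.
        byte = (obj_type << 4) | (size & 0x0f)
        size >>= 4
        out = bytearray()
        while size:
            out.append(byte | 0x80)
            byte = size & 0x7f
            size >>= 7
        out.append(byte)
        return bytes(out)

    def write_pack(path, blobs):
        """Write the given blobs straight into a (not yet indexed) packfile."""
        sha = hashlib.sha1()
        with open(path, 'wb') as f:
            def out(data):
                sha.update(data)
                f.write(data)
            out(b'PACK' + struct.pack('>II', 2, len(blobs)))  # magic, version, count
            for blob in blobs:
                out(obj_header(3, len(blob)))      # type 3 = blob
                out(zlib.compress(blob))
            f.write(sha.digest())                  # pack ends with its own sha1

You'd still need to produce the matching .idx before git (or bup) could find
anything in the result; 'git index-pack' can do that after the fact.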

But that leads us to our next problem.


Huge numbers of huge packfiles (git.PackMidx, cmd/midx)
------------------------------

Git isn't actually designed to handle super-huge repositories. Most git
repositories are small enough that it's reasonable to merge them all into a
single packfile, which 'git gc' usually does eventually.

The problematic part of large packfiles isn't the packfiles themselves - git
is designed to expect the total size of all packs to be larger than
available memory, and once it can handle that, it can handle virtually any
amount of data about equally efficiently. The problem is the packfile
index (.idx) files. In bup we call these idx (pronounced "idix") files
instead of using the word "index," because the word index is already used
for something totally different in git (and thus bup) and we'll become
hopelessly confused otherwise.

Anyway, each packfile (*.pack) in git has an associated idx (*.idx) that's a
sorted list of git object hashes and file offsets. If you're looking for a
particular object based on its sha1, you open the idx, binary search it to
find the right hash, then take the associated file offset, seek to that
offset in the packfile, and read the object contents.

The performance of the binary search is about O(log n) with the number of
hashes in the pack, with an optimized first step (you can read about it
elsewhere) that somewhat improves it to O(log(n)-7).
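
In python-ish terms, an idx-style lookup goes roughly like this. This is a
sketch of the idea, not bup's git.py; in a real idx the fanout table,
hashes, and offsets are packed binary data, but here we pretend they're
plain lists:

    import bisect

    def idx_lookup(fanout, hashes, offsets, want):
        """Find 'want' (a 20-byte sha1) in one idx; return its pack offset or None.

        fanout[b] is how many hashes start with a first byte <= b, so the
        hashes beginning with byte b live in hashes[fanout[b-1]:fanout[b]].
        'hashes' is the sorted sha1 list, 'offsets' the matching pack offsets."""
        b = want[0]
        lo = fanout[b - 1] if b else 0
        hi = fanout[b]
        i = bisect.bisect_left(hashes, want, lo, hi)   # binary search one slice
        if i < hi and hashes[i] == want:
            return offsets[i]
        return None

The 256-entry fanout table is the "optimized first step": it narrows the
binary search to the objects sharing the sha1's first byte before you start
bisecting.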

Unfortunately, this breaks down a bit when you have *lots* of packs. Say
you have 24 million objects (containing around 200 GB of data) spread across
200 packfiles of 1GB each. To look for an object requires you to search
through about 122000 objects per pack; ceil(log2(122000)-7) = 10, so you'll
have to search 10 times. About 7 of those searches will be confined to a
single 4k memory page, so you'll probably have to page in about 3-4 pages
per file, times 200 files, which makes 600-800 4k pages (2.4-3.6 megs)...
every single time you want to look for an object.

This brings us to another difference between git's and bup's normal use
case. With git, there's a simple optimization possible here: when looking
for an object, always search the packfiles in MRU (most recently used)
order. Related objects are usually clustered together in a single pack, so
you'll usually end up searching around 3 pages instead of 600, which is a
tremendous improvement. (And since you'll quickly end up swapping in all
the pages in a particular idx file this way, it isn't long before searching
for a nearby object doesn't involve any swapping at all.)

bup isn't so lucky. git users spend most of their time examining existing
objects (looking at logs, generating diffs, checking out branches), which
lends itself to the above optimization. bup, on the other hand, spends most
of its time looking for *nonexistent* objects in the repository so that it
can back them up. When you're looking for objects that aren't in the
repository, there's no good way to optimize; you have to exhaustively check
all the packs, one by one, to ensure that none of them contain the data you
want.

To improve performance of this sort of operation, bup introduces midx
(pronounced "midix" and short for "multi-idx") files. As the name implies,
they index multiple packs at a time.

Imagine you had a midx file for your 200 packs. midx files are a lot like
idx files; they have a lookup table at the beginning that narrows down the
initial search, followed by a binary search. Then, unlike idx files (which
have a fixed-size 256-entry lookup table), midx tables have a variably-sized
table that makes sure the entire binary search can be contained to a single
page of the midx file. Basically, the lookup table tells you which page to
load, and then you binary search inside that page. A typical search thus
only requires the kernel to swap in two pages, which is better than results
with even a single large idx file. And if you have lots of RAM, eventually
the midx lookup table (at least) will end up cached in memory, so only a
single page should be needed for each lookup.
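
To make "variably-sized" concrete, the kind of calculation involved looks
something like this. It's only a sketch of the idea - the real layout and
sizing live in git.PackMidx and cmd/midx - but it shows why a bigger
repository wants a bigger lookup table:

    import math

    PAGE_SIZE = 4096      # one memory page
    ENTRY_SIZE = 20       # bytes per sha1 in the sorted hash list

    def midx_table_bits(total_objects):
        """How many leading sha1 bits the lookup table should use so that
        each bucket's slice of the hash list fits in roughly one page."""
        per_page = PAGE_SIZE // ENTRY_SIZE          # ~204 hashes per page
        buckets = max(1, total_objects // per_page) # want <= per_page per bucket
        return max(1, math.ceil(math.log2(buckets)))

    # For 24 million objects this suggests a 17-bit table (131072 buckets of
    # ~183 hashes each), versus the fixed 8-bit, 256-entry table in an idx.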

You generate midx files with 'bup midx'. The downside of midx files is that
generating one takes a while, and you have to regenerate it every time you
add a few packs.

(Computer Sciency observers will note that there are some interesting data
structures out there that could help make things even better. A very
promising sounding one is called a "bloom filter." Look it up in Wikipedia.)

midx files are a bup-specific optimization and git doesn't know what to do
with them. However, since they're stored as separate files, they don't
interfere with git's ability to read the repository.


Detailed Metadata
-----------------

So that's the basic structure of a bup repository, which is also a git
repository. There's one more thing we have to deal with in bup: filesystem
metadata. git repositories are really only intended to store file contents
with a small bit of extra information, like symlink support and
differentiating between executable and non-executable files. For the rest,
we'll have to store it some other way.

As of this writing, bup's support for metadata is... pretty much
nonexistent. People are working on it. But the plan goes like this:

 - Each git tree will contain a file called .bupmeta.

 - .bupmeta contains an entry for every entry in the tree object, sorted in
   the same order as in the tree.

 - the .bupmeta entry lists information like modification times, attributes,
   file ownership, and so on for each file in the tree.

 - for backward compatibility with pre-metadata versions of bup (and git,
   for that matter) the .bupmeta file for each tree is optional, and if it's
   missing, files will be assumed to have default permissions.

The nice thing about this design is that you can walk through each file in
a tree just by opening the tree and the .bupmeta contents, and iterating
through both at the same time.
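
In python-ish pseudocode - a sketch of the *plan*, not code that exists in
bup today, and the names are made up - the walk would look something like:

    def walk(tree_entries, bupmeta_entries):
        """Pair each tree entry with its line from .bupmeta.

        Both lists are sorted in the same order, so one parallel pass is all
        it takes; a missing .bupmeta just means default metadata."""
        meta = iter(bupmeta_entries)
        for entry in tree_entries:
            yield entry, next(meta, None)   # None -> assume default permissions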

Trust us, it'll be awesome.


Filesystem Interaction
======================

Storing data is just half of the problem of making a backup; figuring out
what to store is the other half.

At the most basic level, piping the output of 'tar' into 'bup split' is an
easy way to offload that decision; just let tar do all the hard stuff. And
if you like tar files, that's a perfectly acceptable way to do it. But we
can do better.

Backing up with tarballs would totally be the way to go, except for two
serious problems:

1. The result isn't easily "seekable." Tar files have no index, so if (as
   commonly happens) you only want to restore one file in a 200 GB backup,
   you'll have to read up to 200 GB before you can get to the beginning of
   that file. tar is short for "tape archive"; on a tape, there was no
   better way to do it anyway, so they didn't try. But on a disk, random
   file access is much, much better when you can figure out how.

2. tar doesn't remember which files it backed up last time, so it has to
   read through the entire file contents again in order to generate the
   tarball, large parts of which will then be skipped by bup since they've
   already been stored. This is much slower than necessary.

(The second point isn't entirely true for all versions of tar. For example,
GNU tar has an "incremental" mode that can somewhat mitigate this problem,
if you're smart enough to know how to use it without hurting yourself. But
you still have to decide which backups are "incremental" and which ones will
be "full" and so on, so even when it works, it's more error-prone than bup.)

bup divides the backup process into two major steps: a) indexing the
filesystem, and b) saving file contents into the repository. Let's look at
those steps in detail.


Indexing the filesystem (cmd/drecurse, cmd/index, index.py)
-----------------------

Splitting the filesystem indexing phase into its own program is
nontraditional, but it gives us several advantages.

The first advantage is trivial, but might be the most important: you can
index files a lot faster than you can back them up. That means we can
generate the index (.bup/bupindex) first, then have a nice, reliable,
non-lying completion bar that tells you how much of your filesystem remains
to be backed up. The alternative would be annoying failures like counting
the number of *files* remaining (as rsync does), even though one of the
files is a virtual machine image of 80 GB, and the 1000 other files are each
under 10k. With bup, the percentage complete is the *real* percentage
complete, which is very pleasant.

Secondly, it makes it easier to debug and test; you can play with the index
without actually backing up any files.

Thirdly, you can replace the 'bup index' command with something else and not
have to change anything about the 'bup save' command. The current 'bup
index' implementation just blindly walks the whole filesystem looking for
files that have changed since the last time it was indexed; this works fine,
but something using inotify instead would be orders of magnitude faster.
Windows and MacOS both have inotify-like services too, but they're totally
different; if we want to support them, we can simply write new bup commands
that do the job, and they'll never interfere with each other.

And fourthly, git does it that way, and git is awesome, so who are we to
argue?

So let's look at how the index file works.

First of all, note that the ".bup/bupindex" file is not the same as git's
".git/index" file. The latter isn't used in bup; as far as git is
concerned, your bup repository is a "bare" git repository and doesn't have a
working tree, and thus it doesn't have an index either.

However, the bupindex file actually serves exactly the same purpose as git's
index file, which is why we still call it "the index." We just had to
redesign it for the usual bup-vs-git reasons, mostly that git just isn't
designed to handle millions of files in a single repository. (The only way
to find a file in git's index is to search it linearly; that's very fast in
git-sized repositories, but very slow in bup-sized ones.)

Let's not worry about the exact format of the bupindex file; it's still not
optimal, and will probably change again. The most important things to know
about bupindex are:

 - You can iterate through it much faster than you can iterate through the
   "real" filesystem (using something like the 'find' command).

 - If you delete it, you can get it back just by reindexing your filesystem
   (although that can be annoying to wait for); it's not critical to the
   repository itself.

 - You can iterate through only particular subtrees if you want.

 - There is no need to have more than one index for a particular filesystem,
   since it doesn't store anything about backups; it just stores file
   metadata. It's really just a cache (or 'index') of your filesystem's
   existing metadata. You could share the bupindex between repositories, or
   between multiple users on the same computer. If you back up your
   filesystem to multiple remote repositories to be extra safe, you can
   still use the same bupindex file across all of them, because it's the
   same filesystem every time.

 - Filenames in the bupindex are absolute paths, because that's the best way
   to ensure that you only need one bupindex file and that they're
   interchangeable.


A note on file "dirtiness"
--------------------------

The concept on which 'bup save' operates is simple enough; it reads through
the index and backs up any file that is "dirty," that is, doesn't already
exist in the repository.

Determination of dirtiness is a little more complicated than it sounds. The
most dirtiness-relevant flag in the bupindex is IX_HASHVALID; if
this flag is reset, the file *definitely* is dirty and needs to be backed
up. But a file may be dirty even if IX_HASHVALID is set, and that's the
confusing part.

The index stores a listing of files, their attributes, and
their git object ids (sha1 hashes), if known. The "if known" is what
IX_HASHVALID is about. When 'bup save' backs up a file, it sets
the sha1 and sets IX_HASHVALID; when 'bup index' sees that a file has
changed, it leaves the sha1 alone and resets IX_HASHVALID.

Remember that the index can be shared between users, repositories, and
backups. So IX_HASHVALID doesn't mean your repository *has* that sha1 in
it; it only means that if you *do* have it, that you don't need to back up
the file. Thus, 'bup save' needs to check every file in the index to make
sure its hash exists, not just that it's valid.

There's an optimization possible, however: if you know a particular tree's
hash is valid and exists (say /usr), then you don't need to check the
validity of all its children; because of the way git trees and blobs work,
if your repository is valid and you have a tree object, then you have all
the blobs it points to. You won't back up a tree object without backing up
its blobs first, so you don't need to double check it next time. (If you
really want to double check this, it belongs in a tool like 'bup fsck' or
'git fsck'.) So in short, 'bup save' on a "clean" index (all files are
marked IX_HASHVALID) can be very fast; we just check our repository and see
if the top level IX_HASHVALID sha1 exists. If it does, then we're done.

Similarly, even if the entire index isn't valid, you can still avoid
recursing into subtrees if those particular subtrees are IX_HASHVALID and
their sha1s are in the repository. The net result is that, as long as you
never lose your index, 'bup save' can always run very fast.
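
In pseudocode, the save-side decision looks roughly like this. It's a
sketch of the logic described above, not the actual code in cmd/save or
index.py; 'repo.exists', 'store_blob', 'store_tree' and the entry
attributes are made-up names:

    def save_entry(entry, repo):
        """Return the entry's sha1, storing as little as possible."""
        if entry.hashvalid and repo.exists(entry.sha1):
            return entry.sha1                     # clean and present: skip entirely
        if entry.is_file:
            return repo.store_blob(entry.read())  # dirty or missing: store the file
        children = [(c.name, save_entry(c, repo)) for c in entry.children]
        return repo.store_tree(children)          # only changed trees get rebuilt

The key point is the first test: a valid hash alone isn't enough, because
the index may have been built against some other repository, so we also ask
whether the object is really stored here before skipping it.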

Another interesting trick is that you can skip backing up files even if
IX_HASHVALID *isn't* set, as long as you have that file's sha1 in the
repository. What that means is you've chosen not to back up the latest
version of that file; instead, your new backup set just contains the
most-recently-known valid version of that file. This is a good trick if you
want to do frequent backups of smallish files and infrequent backups of
large ones (as in 'bup save --smaller'). Each of your backups will be
"complete," in that they contain all the small files and the large ones, but
intermediate ones will just contain out-of-date copies of the large files.

A final game we can play with the bupindex involves restoring: when you
restore a directory from a previous backup, you can update the bupindex
right away. Then, if you want to restore a different backup on top, you can
compare the files in the index against the ones in the backup set, and
update only the ones that have changed. (Even more interesting things
happen if people are using the files on the restored system and you haven't
updated the index yet; the net result would be an automated merge of all
non-conflicting files.) This would be a poor man's distributed filesystem.
The only catch is that nobody has written this feature for 'bup restore'
yet. Someday!


How 'bup save' works (cmd/save)
--------------------

This section is too boring and has been omitted. Once you understand the
index, there's nothing special about bup save.


Retrieving backups: the bup vfs layer (vfs.py, cmd/ls, cmd/ftp, cmd/fuse)
=====================================

One of the neat things about bup's storage format, at least compared to most
backup tools, is that it's easy to read a particular file, or even part of a
file. That means a read-only virtual filesystem is easy to generate and
it'll have good performance characteristics. Because of git's commit
structure, you could even use branching and merging to make a transactional
read-write filesystem... but that's probably getting a little out of bup's
scope. Who knows what the future might bring, though?

Read-only filesystems are well within our reach today, however. The 'bup
ls', 'bup ftp', and 'bup fuse' commands all use a "VFS" (virtual filesystem)
layer to let you access your repositories. Feel free to explore the source
code for these tools and vfs.py - they're pretty straightforward. Some
things to note:

 - None of these use the bupindex for anything.

 - For user-friendliness, they present your refs/commits/trees as a single
   hierarchy (ie. a filesystem), which isn't really how git repositories
   are formatted. So don't get confused!


We hope you'll enjoy bup. Looking forward to your patches!

-- apenwarr and the rest of the bup team