Test cases for a couple of repozo bugs in the ZODB3 package.

commit 0c5f53e4dd201a3212b2d9fc630784aea6bfeffa (0 parents)
@mgedmin authored
Showing with 107 additions and 0 deletions.
  1. +3 −0  .gitignore
  2. +47 −0 Makefile
  3. +28 −0 README.txt
  4. +29 −0 testcase.py
3  .gitignore
@@ -0,0 +1,3 @@
+Data.fs*
+backup
+sandbox
47 Makefile
@@ -0,0 +1,47 @@
+build: sandbox/bin/repozo
+
+test1: build
+ rm -fr Data.fs* backup
+ mkdir backup
+ # Test Case 1: call repozo -B twice in the same second
+ # It's likely that you'll end up with backup/YYYY-MM-DD-hh-mm-ss.fs and
+ # backup/YYYY-MM-DD-hh-mm-ss.deltafs (same YYYY-MM-DD-hh-mm-ss!). Then
+ # cmp will show that repozo -R failed to reconstruct the DB correctly.
+ sandbox/bin/python testcase.py --set foo bar
+ sandbox/bin/repozo -BQ -r backup -f Data.fs
+ sandbox/bin/python testcase.py --set x y
+ sandbox/bin/repozo -BQ -r backup -f Data.fs
+ sandbox/bin/repozo -R -r backup -o Data.fs.recovered
+ cmp Data.fs Data.fs.recovered
+
+test2: build
+ rm -fr Data.fs* backup
+ mkdir backup
+ # Test Case 2: call repozo -R with a truncated .deltafs in the middle
+ # repozo ought to notice and complain; instead it goes ahead and
+ # produces a corrupted Data.fs
+ sandbox/bin/python testcase.py --set foo bar
+ sandbox/bin/repozo -BQ -r backup -f Data.fs
+ sleep 1 # baaad idea to run repozo twice during the same second
+ sandbox/bin/python testcase.py --set x y
+ sandbox/bin/repozo -BQ -r backup -f Data.fs
+ sleep 1 # baaad idea to run repozo twice during the same second
+ sandbox/bin/python testcase.py --set u v
+ sandbox/bin/repozo -BQ -r backup -f Data.fs
+ # truncate the first deltafs file
+ first_delta=$$(ls backup/*.deltafs | head -n 1); : > $$first_delta
+ # this should fail:
+ sandbox/bin/repozo -R -r backup -o Data.fs.recovered
+ # instead it succeeds silently, and this cmp is what fails:
+ cmp Data.fs Data.fs.recovered
+
+
+sandbox:
+ virtualenv --no-site-packages sandbox
+
+sandbox/bin/repozo: sandbox
+ sandbox/bin/pip install ZODB3
+ touch -c $@
+
+clean:
+ rm -fr Data.fs* backup sandbox
28 README.txt
@@ -0,0 +1,28 @@
+Background:
+
+ * there's this server with a Data.fs and a cron script that makes incremental
+ backups using repozo
+ * the backups are gpg-encrypted and transferred to a remote storage server
+ * recovery procedure involves rsyncing the backups to a local machine,
+ decrypting them, and reassembling them into Data.fs with repozo
+ * I've done that twice: it worked the first time, but I got a corrupted
+ Data.fs the second time
+ * after a few hours of investigation I discovered that three of the 5000-odd
+ incremental deltafs files were 0-length
+ * turns out the gpg-decrypting script did not resume its work at the right
+   place when interrupted with ^C, leaving empty files in the middle
+
+So, the bug: repozo does not notice when some of the deltafs files are
+truncated. It should notice, complain loudly, and abort, instead of silently
+producing a corrupted Data.fs. To reproduce, run ::
+
+ make test2
+
+While writing this test case I discovered another bug: if you run repozo
+twice in quick succession, you end up with a backup repository that fails to
+be restored correctly. To reproduce, run ::
+
+ make test1
+
+
+-- Marius Gedminas <marius@gedmin.as>, 2011-12-07
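The zero-length deltafs problem described in the README could be caught before a restore with a small pre-flight check. This is a hypothetical sketch, not part of this commit; the function name `find_empty_backups` and the `backup` directory default are my own assumptions:

```python
# Hypothetical pre-flight check (not part of this commit): refuse to run
# repozo -R when any backup file in the repository is zero bytes long,
# which is exactly the corruption repozo itself fails to notice.
import os
import sys


def find_empty_backups(backup_dir):
    """Return paths of .fs/.deltafs files in backup_dir that are empty."""
    empty = []
    for name in sorted(os.listdir(backup_dir)):
        if name.endswith(('.fs', '.deltafs')):
            path = os.path.join(backup_dir, name)
            if os.path.getsize(path) == 0:
                empty.append(path)
    return empty


if __name__ == '__main__':
    bad = find_empty_backups(sys.argv[1] if len(sys.argv) > 1 else 'backup')
    if bad:
        sys.exit('refusing to restore; empty backup files: %s'
                 % ', '.join(bad))
```

Running this before `repozo -R` would have turned a silently corrupted Data.fs into a loud, early failure.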
29 testcase.py
@@ -0,0 +1,29 @@
+#!sandbox/bin/python
+import optparse
+import transaction
+from ZODB.DB import DB
+from ZODB.FileStorage import FileStorage
+
+
+def main():
+ parser = optparse.OptionParser()
+ parser.add_option('--set', '-s', nargs=2, dest='set_', metavar='KEY VALUE',
+ help='write a key/value pair into the root dict of Data.fs')
+ opts, args = parser.parse_args()
+
+ if not opts.set_:
+ parser.error('Nothing to do!')
+
+ key, value = opts.set_
+
+ db = DB(FileStorage('Data.fs'))
+ conn = db.open()
+ root = conn.root()
+ root[key] = value
+ transaction.commit()
+ conn.close()
+ db.close()
+
+
+if __name__ == '__main__':
+ main()
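The same-second collision that `make test1` provokes (a full backup `.fs` and an incremental `.deltafs` sharing one YYYY-MM-DD-hh-mm-ss basename) could be detected after the fact with a helper along these lines. This is a hypothetical sketch, not part of this commit; the function name `colliding_basenames` is my own:

```python
# Hypothetical helper (not part of this commit): report backup files that
# share a timestamp basename, i.e. a .fs and a .deltafs written by two
# repozo -B runs in the same second -- the situation test1 exercises.
import os
from collections import defaultdict


def colliding_basenames(backup_dir):
    """Map timestamp basename -> sorted backup filenames sharing it.

    Only basenames with more than one file are returned.
    """
    seen = defaultdict(list)
    for name in os.listdir(backup_dir):
        base, ext = os.path.splitext(name)
        if ext in ('.fs', '.deltafs'):
            seen[base].append(name)
    return dict((base, sorted(names))
                for base, names in seen.items() if len(names) > 1)
```

An empty result would suggest the backup repository is safe from this particular failure mode; a non-empty one means a restore may come out corrupted.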