Commits on Nov 1, 2012
  1. file.h: cleanup constness of buffer pointer

    Signed-off-by: Sergei Trofimovich <>
  2. libbdelta.cpp: amend types for format strings (gcc's -Wformat)

    Use %lu instead of %zu (mingw32-gcc and MSVC do not understand %zu,
    as their libc is not C99-compatible).
    Signed-off-by: Sergei Trofimovich <>
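    The portability point can be sketched with a tiny example (a hypothetical helper, not code from the bdelta sources): casting a size_t to unsigned long and printing with %lu works even on pre-C99 runtimes such as old MSVC and mingw32, where %zu is not understood.

    ```cpp
    #include <cstdio>
    #include <string>

    // Hypothetical helper: format a size_t portably by casting to
    // unsigned long and using %lu, since pre-C99 C runtimes (old MSVC,
    // mingw32) do not understand the %zu specifier.
    std::string format_size(size_t n) {
        char buf[32];
        std::sprintf(buf, "%lu", (unsigned long)n);
        return std::string(buf);
    }

    int main() {
        std::printf("%s\n", format_size(12345).c_str());
        return 0;
    }
    ```

    One caveat: on 64-bit Windows, unsigned long is 32 bits while size_t is 64, so the cast can truncate very large values; the commit targets 32-bit mingw32/MSVC, where the two types have the same width.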
  3. file.h: limit maximum amount of file I/O by 1MB

    The problem was observed when I tried to run
    bdelta.exe --all-in-ram on 32-bit Windows on
    network-mounted files:
    fread(size=170MB) failed with 'out of memory' there.
    I think this is because network-attached drives are
    implemented in userspace (or via a calling process),
    which leads to massive memory overhead when reading
    or writing large chunks of data.
    Fixed it by limiting each I/O operation to 1MB. That should
    also make I/O patterns slightly better for FUSE-mounted
    Linux filesystems.
    Signed-off-by: Sergei Trofimovich <>
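    The capping strategy the commit describes can be sketched as a loop (a hypothetical function, not the actual file.h code): one large fread() is split into pieces of at most 1MB.

    ```cpp
    #include <cstdio>
    #include <cstddef>

    // Sketch of 1MB-capped I/O (names are hypothetical): never hand
    // fread() more than 1MB at a time, so userspace-backed filesystems
    // don't have to buffer one huge request.
    size_t read_capped(FILE *f, char *buf, size_t size) {
        const size_t MAX_IO = 1 << 20; // 1MB per fread() call
        size_t total = 0;
        while (total < size) {
            size_t want = size - total;
            if (want > MAX_IO) want = MAX_IO;
            size_t got = std::fread(buf + total, 1, want, f);
            total += got;
            if (got < want) break; // short read: EOF or error
        }
        return total;
    }
    ```

    Each individual fread() now asks the underlying filesystem for at most 1MB, which is gentler on network-mounted and FUSE-backed drives while leaving the total amount read unchanged.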
Commits on Oct 31, 2012
  1. @jjwhitney

    Merge pull request #2 from trofi/master

    jjwhitney authored
    Achieve 25% speedup in '--all-in-ram' mode
  2. bdelta: optimize --all-in-ram case by avoiding memcpy()

    libbdelta allows temporary buffers to be avoided if the
    caller guarantees persistence of the read data.
    The --all-in-ram case is exactly this kind of workload!
    Adjust the memory-reading function to just return a pointer to the data.
    On my workload it speeds things up by about 25%:
        time ./bdelta --all-in-ram win32.udb.{old,new,old-new.bdt}
        real    0m33.888s
        user    0m28.790s
        sys     0m2.316s

        time bdelta --all-in-ram win32.udb.{old,new,old-new.bdt.orig}
        real    0m39.990s
        user    0m35.116s
        sys     0m2.189s
    win32.udb.old and win32.udb.new are 171MB files;
    the patch is 27MB with 1 million chunks.
    Signed-off-by: Sergei Trofimovich <>
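    The idea can be sketched with two toy reader types (hypothetical names, not bdelta's real callback signature): a persistent in-RAM source can hand back a pointer into its own buffer, while a transient source must memcpy into the caller's scratch space.

    ```cpp
    #include <cstring>

    struct RamReader {
        const char *data; // whole input kept in RAM for the process lifetime

        // Copy-free read: the caller's scratch buffer is ignored and a
        // pointer into the persistent in-RAM copy is returned directly.
        const char *read(char * /*scratch*/, size_t offset, size_t /*len*/) const {
            return data + offset;
        }
    };

    struct CopyReader {
        const char *data;

        // Copying read: what a non-persistent source must do, since the
        // data may be gone by the time the caller dereferences it.
        const char *read(char *scratch, size_t offset, size_t len) const {
            std::memcpy(scratch, data + offset, len);
            return scratch;
        }
    };
    ```

    With millions of chunk reads over a 171MB input, skipping that memcpy per read is where the ~25% saving plausibly comes from.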
  3. match_backward(): don't overrun buffer when user supplies block size of more than 4096 bytes

    The user can (and I did) pass large block sizes for initial passes,
    but the code is not ready for it:
        match_backward() {
            if (numtoread > blocksize) numtoread = blocksize;
            Token buf1[4096], buf2[4096];
            const Token *read1 = b->read1(buf1, p1, numtoread),
    Signed-off-by: Sergei Trofimovich <>
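    The overrun in the quoted code happens when blocksize exceeds 4096: numtoread is clamped only to blocksize, yet buf1/buf2 hold 4096 Tokens. A sketch of one way to fix it (a hypothetical helper, not necessarily the commit's exact change):

    ```cpp
    #include <cstddef>

    typedef unsigned int Token;     // stand-in for bdelta's Token type
    const size_t BUF_TOKENS = 4096; // capacity of buf1/buf2 in the snippet

    // Clamp to blocksize (the existing check) AND to the stack buffer
    // capacity, so read1() can never be asked for more Tokens than
    // buf1/buf2 can hold.
    size_t clamp_numtoread(size_t numtoread, size_t blocksize) {
        if (numtoread > blocksize) numtoread = blocksize;
        if (numtoread > BUF_TOKENS) numtoread = BUF_TOKENS;
        return numtoread;
    }
    ```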
  4. Makefile: support for static library (handy to make a better optimized static binary)

    Usage example:
        make libbdelta.a bdelta LDFLAGS=-static
    Signed-off-by: Sergei Trofimovich <>
  5. constify return value of 'read' callback.

    Make sure we don't modify data supplied by user.
    Signed-off-by: Sergei Trofimovich <>
Commits on Sep 28, 2012
  1. @jjwhitney

    Merge pull request #1 from trofi/master

    jjwhitney authored
    bdelta: add '--all-in-ram' commandline option
Commits on Sep 26, 2012
  1. bdelta: add '--all-in-ram' commandline option

    $ time ./bdelta /tmp/foo.{old,new} foo-old-to-new.bdt
    real    3m19.176s
    user    2m1.324s
    sys     1m17.076s

    $ time ./bdelta --all-in-ram /tmp/foo.{old,new} foo-old-to-new.bdt
    real    1m46.074s
    user    1m41.454s
    sys     0m3.669s
    File sizes are ~80 megabytes each.
    The option greatly reduces I/O overhead (sys time) and speeds up delta creation.
    Signed-off-by: Sergei Trofimovich <>
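    What --all-in-ram boils down to can be sketched as a slurp helper (hypothetical, not bdelta's actual code): load each input file fully into memory once, so the delta passes do pointer arithmetic instead of repeated syscalls — which is exactly what the large sys-time drop above reflects.

    ```cpp
    #include <cstdio>
    #include <vector>

    // Hypothetical helper: read an entire open file into memory, assuming
    // it fits in the address space (ftell()-based, so limited to files
    // under ~2GB on 32-bit platforms).
    std::vector<char> slurp(FILE *f) {
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::rewind(f);
        std::vector<char> buf(size > 0 ? (size_t)size : 0);
        if (!buf.empty() && std::fread(buf.data(), 1, buf.size(), f) != buf.size())
            buf.clear(); // read error: signal by returning empty
        return buf;
    }
    ```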
Commits on Feb 17, 2012
  1. @jjwhitney

    Merge branch 'experimental'

    jjwhitney authored
Commits on Feb 16, 2012
  1. @jjwhitney
Commits on Feb 15, 2012
  1. @jjwhitney

    Fix MSVC++ compile errors.

    jjwhitney authored
Commits on Feb 7, 2012
  1. @jjwhitney
  2. @jjwhitney

    Use BDELTA_GLOBAL as a flag, instead of BDELTA_LOCAL. Also, fix BDelta's Python wrapper for flag handling.

    jjwhitney authored
  3. @jjwhitney

    Add ability to require that the hole sides be ordered or the hole side be under a specified maximum.

    jjwhitney authored
  4. @jjwhitney
  5. @jjwhitney
  6. @jjwhitney
  7. @jjwhitney

    Checkpoint 5

    jjwhitney authored
  8. @jjwhitney
  9. @jjwhitney

    Small optimization.

    jjwhitney authored
  10. @jjwhitney
  11. @jjwhitney
  12. @jjwhitney
  13. @jjwhitney
  14. @jjwhitney

    More Cleanups.

    jjwhitney authored
  15. @jjwhitney


    jjwhitney authored
  16. @jjwhitney
  17. @jjwhitney
  18. @jjwhitney
  19. @jjwhitney
  20. @jjwhitney
  21. @jjwhitney