Update snappy to 1.1.3 release
COUCHDB-2873
kxepal committed Nov 9, 2015
1 parent 0ab2796 commit 4f05d845a4657d5a8ef340c9bb1348197c9ef3f5
Showing 9 changed files with 784 additions and 208 deletions.
@@ -1,3 +1,97 @@
Snappy v1.1.3, July 6th 2015:

This is the first release to be done from GitHub, which means that
some minor things like the ChangeLog format have changed (git log
format instead of svn log).

* Add support for Uncompress() from a Source to a Sink.

* Various minor changes to improve MSVC support; in particular,
the unit tests now compile and run under MSVC.


Snappy v1.1.2, February 28th 2014:

This is a maintenance release with no changes to the actual library
source code.

* Stop distributing benchmark data files that have unclear
or unsuitable licensing.

* Add support for padding chunks in the framing format.


Snappy v1.1.1, October 15th 2013:

* Add support for uncompressing to iovecs (scatter I/O).
The bulk of this patch was contributed by Mohit Aron.

* Speed up decompression by ~2%; much more so (~13-20%) on
a few benchmarks on given compilers and CPUs.

* Fix a few issues with MSVC compilation.

* Support truncated test data in the benchmark.


Snappy v1.1.0, January 18th 2013:

* Snappy now uses 64 kB block size instead of 32 kB. On average,
this means it compresses about 3% denser (more so for some
inputs), at the same or better speeds.

* libsnappy no longer depends on iostream.

* Some small performance improvements in compression on x86
(0.5–1%).

* Various portability fixes for ARM-based platforms, for MSVC,
and for GNU/Hurd.


Snappy v1.0.5, February 24th 2012:

* More speed improvements. Exactly how big will depend on
the architecture:

- 3–10% faster decompression for the base case (x86-64).

- ARMv7 and higher can now use unaligned accesses,
and will see about 30% faster decompression and
20–40% faster compression.

- 32-bit platforms (ARM and 32-bit x86) will see 2–5%
faster compression.

These are all cumulative (e.g., ARM gets all three speedups).

* Fixed an issue where the unit test would crash on systems
with less than 256 MB of address space available,
e.g. some embedded platforms.

* Added a framing format description, for use over e.g. HTTP,
or for a command-line compressor. We do not currently have any
implementations of it, but there seems to be enough general
interest in the topic. Also made the format description slightly
clearer.

* Remove some compile-time warnings in -Wall
(mostly signed/unsigned comparisons), for easier embedding
into projects that use -Wall -Werror.


Snappy v1.0.4, September 15th 2011:

* Speeded up the decompressor somewhat; typically about 2–8%
for Core i7, in 64-bit mode (comparable for Opteron).
Somewhat more for some tests, almost no gain for others.

* Make Snappy compile on certain platforms it didn't before
(Solaris with SunPro C++, HP-UX, AIX).

* Correct some minor errors in the format description.


Snappy v1.0.3, June 2nd 2011:

* Speeded up the decompressor somewhat; about 3-6% for Core 2,
@@ -76,11 +76,11 @@ your calling file, and link against the compiled library.

There are many ways to call Snappy, but the simplest possible is

-snappy::Compress(input, &output);
+snappy::Compress(input.data(), input.size(), &output);

and similarly

-snappy::Uncompress(input, &output);
+snappy::Uncompress(input.data(), input.size(), &output);

where "input" and "output" are both instances of std::string.

@@ -28,8 +28,8 @@
//
// Internals shared between the Snappy implementation and its unittest.

-#ifndef UTIL_SNAPPY_SNAPPY_INTERNAL_H_
-#define UTIL_SNAPPY_SNAPPY_INTERNAL_H_
+#ifndef THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_
+#define THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_

#include "snappy-stubs-internal.h"

@@ -85,15 +85,15 @@ char* CompressFragment(const char* input,
static inline int FindMatchLength(const char* s1,
const char* s2,
const char* s2_limit) {
-DCHECK_GE(s2_limit, s2);
+assert(s2_limit >= s2);
int matched = 0;

// Find out how long the match is. We loop over the data 64 bits at a
// time until we find a 64-bit block that doesn't match; then we find
// the first non-matching bit and use that to calculate the total
// length of the match.
while (PREDICT_TRUE(s2 <= s2_limit - 8)) {
-if (PREDICT_FALSE(UNALIGNED_LOAD64(s2) == UNALIGNED_LOAD64(s1 + matched))) {
+if (UNALIGNED_LOAD64(s2) == UNALIGNED_LOAD64(s1 + matched)) {
s2 += 8;
matched += 8;
} else {
@@ -108,7 +108,7 @@ static inline int FindMatchLength(const char* s1,
}
}
while (PREDICT_TRUE(s2 < s2_limit)) {
-if (PREDICT_TRUE(s1[matched] == *s2)) {
+if (s1[matched] == *s2) {
++s2;
++matched;
} else {
@@ -122,7 +122,7 @@ static inline int FindMatchLength(const char* s1,
const char* s2,
const char* s2_limit) {
// Implementation based on the x86-64 version, above.
-DCHECK_GE(s2_limit, s2);
+assert(s2_limit >= s2);
int matched = 0;

while (s2 <= s2_limit - 4 &&
@@ -147,4 +147,4 @@ static inline int FindMatchLength(const char* s1,
} // end namespace internal
} // end namespace snappy

-#endif  // UTIL_SNAPPY_SNAPPY_INTERNAL_H_
+#endif  // THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_
@@ -40,6 +40,21 @@ char* Sink::GetAppendBuffer(size_t length, char* scratch) {
return scratch;
}

char* Sink::GetAppendBufferVariable(
size_t min_size, size_t desired_size_hint, char* scratch,
size_t scratch_size, size_t* allocated_size) {
*allocated_size = scratch_size;
return scratch;
}

void Sink::AppendAndTakeOwnership(
char* bytes, size_t n,
void (*deleter)(void*, const char*, size_t),
void *deleter_arg) {
Append(bytes, n);
(*deleter)(deleter_arg, bytes, n);
}

ByteArraySource::~ByteArraySource() { }

size_t ByteArraySource::Available() const { return left_; }
@@ -68,4 +83,22 @@ char* UncheckedByteArraySink::GetAppendBuffer(size_t len, char* scratch) {
return dest_;
}

void UncheckedByteArraySink::AppendAndTakeOwnership(
char* data, size_t n,
void (*deleter)(void*, const char*, size_t),
void *deleter_arg) {
if (data != dest_) {
memcpy(dest_, data, n);
(*deleter)(deleter_arg, data, n);
}
dest_ += n;
}

char* UncheckedByteArraySink::GetAppendBufferVariable(
size_t min_size, size_t desired_size_hint, char* scratch,
size_t scratch_size, size_t* allocated_size) {
*allocated_size = desired_size_hint;
return dest_;
}

} // namespace snappy
@@ -26,12 +26,11 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-#ifndef UTIL_SNAPPY_SNAPPY_SINKSOURCE_H_
-#define UTIL_SNAPPY_SNAPPY_SINKSOURCE_H_
+#ifndef THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_
+#define THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_

#include <stddef.h>


namespace snappy {

// A Sink is an interface that consumes a sequence of bytes.
@@ -60,6 +59,47 @@ class Sink {
// The default implementation always returns the scratch buffer.
virtual char* GetAppendBuffer(size_t length, char* scratch);

// For higher performance, Sink implementations can provide custom
// AppendAndTakeOwnership() and GetAppendBufferVariable() methods.
// These methods can reduce the number of copies done during
// compression/decompression.

// Append "bytes[0,n-1]" to the sink. Takes ownership of "bytes"
// and calls the deleter function as (*deleter)(deleter_arg, bytes, n)
// to free the buffer. The deleter function must be non-NULL.
//
// The default implementation just calls Append and frees "bytes".
// Other implementations may avoid a copy while appending the buffer.
virtual void AppendAndTakeOwnership(
char* bytes, size_t n, void (*deleter)(void*, const char*, size_t),
void *deleter_arg);

// Returns a writable buffer for appending and writes the buffer's capacity to
// *allocated_size. Guarantees *allocated_size >= min_size.
// May return a pointer to the caller-owned scratch buffer which must have
// scratch_size >= min_size.
//
// The returned buffer is only valid until the next operation
// on this ByteSink.
//
// After writing at most *allocated_size bytes, call Append() with the
// pointer returned from this function and the number of bytes written.
// Many Append() implementations will avoid copying bytes if this function
// returned an internal buffer.
//
// If the sink implementation allocates or reallocates an internal buffer,
// it should use the desired_size_hint if appropriate. If a caller cannot
// provide a reasonable guess at the desired capacity, it should set
// desired_size_hint = 0.
//
// If a non-scratch buffer is returned, the caller may only pass
// a prefix to it to Append(). That is, it is not correct to pass an
// interior pointer to Append().
//
// The default implementation always returns the scratch buffer.
virtual char* GetAppendBufferVariable(
size_t min_size, size_t desired_size_hint, char* scratch,
size_t scratch_size, size_t* allocated_size);

private:
// No copying
@@ -122,6 +162,12 @@ class UncheckedByteArraySink : public Sink {
virtual ~UncheckedByteArraySink();
virtual void Append(const char* data, size_t n);
virtual char* GetAppendBuffer(size_t len, char* scratch);
virtual char* GetAppendBufferVariable(
size_t min_size, size_t desired_size_hint, char* scratch,
size_t scratch_size, size_t* allocated_size);
virtual void AppendAndTakeOwnership(
char* bytes, size_t n, void (*deleter)(void*, const char*, size_t),
void *deleter_arg);

// Return the current output pointer so that a caller can see how
// many bytes were produced.
@@ -131,7 +177,6 @@ class UncheckedByteArraySink : public Sink {
char* dest_;
};

-}
+}  // namespace snappy

-#endif  // UTIL_SNAPPY_SNAPPY_SINKSOURCE_H_
+#endif  // THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_
