BigCache

Fast, concurrent, evicting in-memory cache written to keep a large number of entries without impact on performance. BigCache keeps entries on the heap but omits GC for them. To achieve that, it operates on byte arrays, so entries must be (de)serialized in front of the cache in most use cases.

Usage

Simple initialization

import (
	"fmt"
	"time"

	"github.com/allegro/bigcache"
)

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))

Custom initialization

When cache load can be predicted in advance then it is better to use custom initialization because additional memory allocation can be avoided in that way.

import (
	"fmt"
	"log"
	"time"

	"github.com/allegro/bigcache"
)

config := bigcache.Config {
		// number of shards (must be a power of 2)
		Shards: 1024,
		// time after which entry can be evicted
		LifeWindow: 10 * time.Minute,
		// rps * lifeWindow, used only in initial memory allocation
		MaxEntriesInWindow: 1000 * 10 * 60,
		// max entry size in bytes, used only in initial memory allocation
		MaxEntrySize: 500,
		// prints information about additional memory allocation
		Verbose: true,
		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overwritten by new ones
		// 0 value means no size limit
		HardMaxCacheSize: 8192,
		// OnRemove is a callback fired when the oldest entry is removed because of its expiration time, because there is
		// no space left for a new entry, or because Delete was called.
		// Default value is nil which means no callback; it also prevents unwrapping the oldest entry.
		OnRemove: nil,
		// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time, because there is
		// no space left for a new entry, or because Delete was called. A constant representing the reason is passed through.
		// Default value is nil which means no callback; it also prevents unwrapping the oldest entry.
		// Ignored if OnRemove is specified.
		OnRemoveWithReason: nil,
	}
	}

cache, initErr := bigcache.NewBigCache(config)
if initErr != nil {
	log.Fatal(initErr)
}

cache.Set("my-unique-key", []byte("value"))

if entry, err := cache.Get("my-unique-key"); err == nil {
	fmt.Println(string(entry))
}
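To observe why entries leave the cache, the config above can wire up the removal callback. A sketch of such a fragment, assuming the OnRemoveWithReason signature (key, entry bytes, and a RemoveReason constant) described in the config comments; the logging body is illustrative:

```go
config := bigcache.Config{
	Shards:     1024,
	LifeWindow: 10 * time.Minute,
	// Hypothetical logging callback; fired on expiration, eviction
	// to make room for a new entry, or an explicit Delete.
	OnRemoveWithReason: func(key string, entry []byte, reason bigcache.RemoveReason) {
		log.Printf("entry %q removed (%d bytes), reason: %d", key, len(entry), reason)
	},
}
```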

Benchmarks

Three caches were compared: bigcache, freecache and map. Benchmark tests were made using an i7-6700K with 32GB of RAM on Windows 10.

Writes and reads

cd caches_bench; go test -bench=. -benchtime=10s ./... -timeout 30m

BenchmarkMapSet-8                     	 2000000	       716 ns/op	     336 B/op	       3 allocs/op
BenchmarkConcurrentMapSet-8           	 1000000	      1292 ns/op	     347 B/op	       8 allocs/op
BenchmarkFreeCacheSet-8               	 3000000	       501 ns/op	     371 B/op	       3 allocs/op
BenchmarkBigCacheSet-8                	 3000000	       482 ns/op	     303 B/op	       2 allocs/op
BenchmarkMapGet-8                     	 5000000	       309 ns/op	      24 B/op	       1 allocs/op
BenchmarkConcurrentMapGet-8           	 2000000	       659 ns/op	      24 B/op	       2 allocs/op
BenchmarkFreeCacheGet-8               	 3000000	       541 ns/op	     152 B/op	       3 allocs/op
BenchmarkBigCacheGet-8                	 3000000	       420 ns/op	     152 B/op	       3 allocs/op
BenchmarkBigCacheSetParallel-8        	10000000	       184 ns/op	     313 B/op	       3 allocs/op
BenchmarkFreeCacheSetParallel-8       	10000000	       195 ns/op	     357 B/op	       4 allocs/op
BenchmarkConcurrentMapSetParallel-8   	 5000000	       242 ns/op	     200 B/op	       6 allocs/op
BenchmarkBigCacheGetParallel-8        	20000000	       100 ns/op	     152 B/op	       4 allocs/op
BenchmarkFreeCacheGetParallel-8       	10000000	       133 ns/op	     152 B/op	       4 allocs/op
BenchmarkConcurrentMapGetParallel-8   	10000000	       202 ns/op	      24 B/op	       2 allocs/op

Writes and reads in bigcache are faster than in freecache. Writes to map are the slowest.

GC pause time

cd caches_bench; go run caches_gc_overhead_comparison.go

Number of entries:  20000000
GC pause for bigcache:  5.8658ms
GC pause for freecache:  32.4341ms
GC pause for map:  52.9661ms

The test shows how long the GC pauses are for caches filled with 20 million entries. Bigcache and freecache have very similar GC pause times. It is clear that both reduce GC overhead in contrast to map, whose GC pause was roughly nine times longer than bigcache's in this run.

How it works

BigCache relies on an optimization introduced in Go 1.5 (issue-9477): if a map contains no pointers in its keys or values, the GC will omit scanning its content. Therefore BigCache uses a map[uint64]uint32, where keys are hashed and values are offsets of entries.

Entries are kept in a byte array, again to avoid GC scanning. The byte array can grow to gigabytes without impacting performance, because the GC sees only a single pointer to it.
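The combination of a pointer-free index map and a single byte arena can be sketched as follows. This is a simplified illustration of the pattern, not BigCache's actual internals; the type and method names are hypothetical, and real BigCache adds sharding, eviction, and collision handling on top:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// miniCache demonstrates the GC-friendly layout: the index map holds no
// pointers (uint64 keys, uint32 values), so the GC skips its content, and
// the GC sees only one pointer to the whole arena slice.
type miniCache struct {
	index map[uint64]uint32 // hashed key -> offset into arena
	arena []byte            // length-prefixed entries, packed back to back
}

// hashKey hashes a string key with FNV-64a, as BigCache's default hasher does.
func hashKey(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

func (c *miniCache) set(key string, value []byte) {
	offset := uint32(len(c.arena))
	// Store a 4-byte length prefix followed by the value bytes.
	var lenBuf [4]byte
	binary.LittleEndian.PutUint32(lenBuf[:], uint32(len(value)))
	c.arena = append(c.arena, lenBuf[:]...)
	c.arena = append(c.arena, value...)
	c.index[hashKey(key)] = offset
}

func (c *miniCache) get(key string) ([]byte, bool) {
	offset, ok := c.index[hashKey(key)]
	if !ok {
		return nil, false
	}
	n := binary.LittleEndian.Uint32(c.arena[offset : offset+4])
	start := offset + 4
	return c.arena[start : start+n], true
}

func main() {
	c := &miniCache{index: make(map[uint64]uint32)}
	c.set("my-unique-key", []byte("value"))
	if v, ok := c.get("my-unique-key"); ok {
		fmt.Println(string(v)) // prints "value"
	}
}
```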

Bigcache vs Freecache

Both caches provide the same core features, but they reduce GC overhead in different ways: bigcache relies on map[uint64]uint32, while freecache implements its own mapping built on slices to reduce the number of pointers.

Results from the benchmark tests are presented above. One advantage of bigcache over freecache is that you don't need to know the size of the cache in advance: when bigcache is full, it can allocate additional memory for new entries instead of overwriting existing ones, as freecache currently does. However, a hard max size can also be set in bigcache; check HardMaxCacheSize.

HTTP Server

This package also includes an easily deployable HTTP implementation of BigCache, which can be found in the server package.

More

The genesis of bigcache is described in the allegro.tech blog post: writing a very fast cache service in Go

License

BigCache is released under the Apache 2.0 license (see LICENSE).