
Wrap entry optimization #7

Merged — merged 1 commit into master from wrap_entry_optimization on Mar 31, 2016
Conversation

druminski (Contributor)

An alternative implementation of the wrap-entry optimization proposed by @wendigo in #5.

Instead of using a global sync.Pool with locks, the entry byte buffer is allocated per shard.
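
A minimal sketch of the per-shard buffer idea, assuming the header layout implied by the constants in the diff below (8-byte timestamp, 8-byte hash, 2-byte key length); the cacheShard shape here is illustrative, not the full bigcache implementation:

package bigcache

import "encoding/binary"

const (
    timestampSizeInBytes = 8 // uint64 timestamp
    hashSizeInBytes      = 8 // uint64 hash
    keySizeInBytes       = 2 // uint16 key length
    headersSizeInBytes   = timestampSizeInBytes + hashSizeInBytes + keySizeInBytes
)

// Each shard owns its own scratch buffer. The shard lock that already
// serializes writes also serializes access to the buffer, so no
// sync.Pool or extra locking is needed.
type cacheShard struct {
    entryBuffer []byte
    // other shard fields elided
}

// wrapEntry serializes timestamp, hash, key length, key and entry into
// the shard-local buffer, growing it only when the blob does not fit.
func wrapEntry(timestamp uint64, hash uint64, key string, entry []byte, buffer *[]byte) []byte {
    keyLength := len(key)
    blobLength := len(entry) + headersSizeInBytes + keyLength
    if blobLength > len(*buffer) {
        *buffer = make([]byte, blobLength)
    }
    blob := *buffer

    binary.LittleEndian.PutUint64(blob, timestamp)
    binary.LittleEndian.PutUint64(blob[timestampSizeInBytes:], hash)
    binary.LittleEndian.PutUint16(blob[timestampSizeInBytes+hashSizeInBytes:], uint16(keyLength))
    copy(blob[headersSizeInBytes:], key)
    copy(blob[headersSizeInBytes+keyLength:], entry)

    return blob[:blobLength]
}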

Benchmark results before and after the optimization:

go test -bench=WriteToCacheWith -benchtime=5s

// before optimization
BenchmarkWriteToCacheWith1Shard-4                            5000000    1520 ns/op
BenchmarkWriteToCacheWith500Shards-4                        10000000     647 ns/op
BenchmarkWriteToCacheWith1kShards-4                         10000000     627 ns/op
BenchmarkWriteToCacheWith10kShards-4                        10000000     646 ns/op
BenchmarkWriteToCacheWith1kShardsAndSmallShardInitSize-4    10000000     789 ns/op

// after optimization
BenchmarkWriteToCacheWith1Shard-4                            5000000    1320 ns/op
BenchmarkWriteToCacheWith500Shards-4                        20000000     594 ns/op
BenchmarkWriteToCacheWith1kShards-4                         20000000     616 ns/op
BenchmarkWriteToCacheWith10kShards-4                        10000000     629 ns/op
BenchmarkWriteToCacheWith1kShardsAndSmallShardInitSize-4    10000000     717 ns/op

Total time spent wrapping entries during the benchmark runs (flat time, cumulative time, source line):

// before optimization
1.13s    15.34s    104:    w := wrapEntry(currentTimestamp, hashedKey, key, entry)

// after optimization
2.50s     3.84s    106:    w := wrapEntry(currentTimestamp, hashedKey, key, entry, &shard.entryBuffer)
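
(The two time columns are the flat and cumulative times that go tool pprof's list command reports per source line. A profile like this can be collected with, e.g., go test -bench=WriteToCacheWith -benchtime=5s -cpuprofile=cpu.out, then inspected with go tool pprof and list wrapEntry; the exact pprof invocation varies by Go version.)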

wendigo (Contributor) commented Mar 31, 2016

Wow, nice one :) It seems that using sync.Pool makes no sense here, and this kind of buffer works fine in wrapEntry.

druminski (Contributor, Author)

I am glad you like it, although credit for this PR goes to you: you discovered the optimization opportunity and proposed a solution with comparable results. Thanks :)

wendigo (Contributor) commented Mar 31, 2016

👍

@@ -11,17 +11,22 @@ const (
 	headersSizeInBytes = timestampSizeInBytes + hashSizeInBytes + keySizeInBytes // Number of bytes used for all headers
 )

-func wrapEntry(timestamp uint64, hash uint64, key string, entry []byte) []byte {
-	var blob []byte
+func wrapEntry(timestamp uint64, hash uint64, key string, entry []byte, buffer *[]byte) []byte {
janisz (Collaborator) commented on the diff:

Five arguments is quite a lot. How about introducing

type Entry struct {
    timestamp uint64
    hash      uint64
    key       string
    entry     []byte
}
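
With such a struct, the call site would read roughly like this (a sketch based on the profiled line above, not code from this PR):

    w := wrapEntry(Entry{timestamp: currentTimestamp, hash: hashedKey, key: key, entry: entry}, &shard.entryBuffer)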

druminski (Contributor, Author) replied:

I think it's a little bit too soon for this structure; I will happily use it if it's used in more than one place (rather than only to pass values to a single func).

janisz (Collaborator) commented Mar 31, 2016

I left one comment but it looks good. 👍

janisz mentioned this pull request Mar 31, 2016
druminski merged commit e70ba5d into master Mar 31, 2016
druminski deleted the wrap_entry_optimization branch March 31, 2016 09:58