Reduce memory allocation in bulk op #56
Conversation
bulk.go (outdated diff)

```diff
@@ -294,6 +306,9 @@ func (b *Bulk) Run() (*BulkResult, error) {
 			break
 		}
 	}
+	action.idxs = action.idxs[0:0]
+	action.docs = action.docs[0:0]
+	actionPool.Put(action)
```
This needs to be above the `if !ok` check - if an operation fails, the actions leak.
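For context, here is a minimal, self-contained sketch of the pooling pattern the diff and the review comment describe. The `bulkAction` fields, pool setup, and `run` function below are simplified stand-ins, not mgo's actual internals:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// bulkAction is a simplified stand-in for mgo's internal bulk action
// struct; the real type carries more fields.
type bulkAction struct {
	docs []interface{}
	idxs []int
}

// actionPool reuses bulkAction values across bulk runs so the backing
// slices are not re-allocated for every operation.
var actionPool = sync.Pool{
	New: func() interface{} {
		return &bulkAction{
			docs: make([]interface{}, 0, 16),
			idxs: make([]int, 0, 16),
		}
	},
}

var errOpFailed = errors.New("bulk operation failed")

// run mimics the shape of Bulk.Run: the action is reset and returned to
// the pool before the failure check, so a failed operation cannot leak it.
func run(ok bool) error {
	action := actionPool.Get().(*bulkAction)
	action.docs = append(action.docs, "some document")
	action.idxs = append(action.idxs, 0)

	// ... the bulk operation itself would run here ...

	// Truncate to zero length (keeping capacity) and hand the action back.
	action.idxs = action.idxs[:0]
	action.docs = action.docs[:0]
	actionPool.Put(action)

	if !ok {
		return errOpFailed
	}
	return nil
}

func main() {
	fmt.Println(run(true), run(false))
}
```

Truncating with `[:0]` keeps the slices' capacity, so the next caller that gets the action from the pool appends into already-allocated memory.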
Hey @feliixx, Good idea and an easy win! Just need to return the Dom
use memory pooling to reuse bulkActions and avoid some allocations
Hi @domodwyer, thanks for the review, it should be fine now
Fantastic, cheers @feliixx!
Includes:

* Reduced memory in bulk operations (#56)
* Native x509 authentication (#55)
* Better connection recovery (#69)
* Example usage (#75 and #78)

Thanks to:

* @bachue
* @csucu
* @feliixx

---

[Throughput overview](https://user-images.githubusercontent.com/9275968/34954403-3d3253dc-fa18-11e7-8eef-0f2b0f21edc3.png)

Select throughput has increased by ~600 requests/second with slightly increased variance:

```
x => r2017.11.06-select-zipfian-throughput.log
y => 9acbd68-select-zipfian-throughput.log

       n    min    max  median   average    stddev       p99
x   3600  49246  71368   66542  66517.26  2327.675  70927.01
y   3600  53304  72005   67151  67145.36  2448.534  71630.00

[box plot over the 62000-72000 requests/second range omitted; Legend: 1=data$x, 2=data$y]

At 95% probability:
===> average is statistically significant (p=0.000000, diff ~628.094444)
===> variance is statistically significant (p=0.002398)
```

* [insert-latency.txt](https://github.com/globalsign/mgo/files/1632474/insert-latency.txt)
* [insert-throughput.txt](https://github.com/globalsign/mgo/files/1632475/insert-throughput.txt)
* [select-zipfian-latency.txt](https://github.com/globalsign/mgo/files/1632476/select-zipfian-latency.txt)
* [select-zipfian-throughput.txt](https://github.com/globalsign/mgo/files/1632477/select-zipfian-throughput.txt)
* [update-zipfian-latency.txt](https://github.com/globalsign/mgo/files/1632478/update-zipfian-latency.txt)
* [update-zipfian-throughput.txt](https://github.com/globalsign/mgo/files/1632479/update-zipfian-throughput.txt)

Note: latencies are approximations calculated from grouped data.
Use memory pooling to reuse `bulkActions` and avoid some allocations.

Here are some benchmark results:

and here is the benchmark code:
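The PR's original benchmark results and code are not reproduced here. As an illustration only, a stand-alone benchmark along these lines could compare the pooled and non-pooled paths; the `benchAction` type and pool below are assumptions modelled on the simplified sketch above, not the author's actual benchmark:

```go
package bulkpool

import (
	"sync"
	"testing"
)

// benchAction mirrors the simplified bulkAction sketch above.
type benchAction struct {
	docs []interface{}
	idxs []int
}

var benchPool = sync.Pool{
	New: func() interface{} {
		return &benchAction{
			docs: make([]interface{}, 0, 1000),
			idxs: make([]int, 0, 1000),
		}
	},
}

// BenchmarkNoPool allocates a fresh action (and grows its slices) per run.
func BenchmarkNoPool(b *testing.B) {
	for i := 0; i < b.N; i++ {
		a := &benchAction{}
		for j := 0; j < 1000; j++ {
			a.docs = append(a.docs, j)
			a.idxs = append(a.idxs, j)
		}
	}
}

// BenchmarkPool reuses a pooled action, resetting it after each run.
func BenchmarkPool(b *testing.B) {
	for i := 0; i < b.N; i++ {
		a := benchPool.Get().(*benchAction)
		for j := 0; j < 1000; j++ {
			a.docs = append(a.docs, j)
			a.idxs = append(a.idxs, j)
		}
		a.docs, a.idxs = a.docs[:0], a.idxs[:0]
		benchPool.Put(a)
	}
}
```

Running `go test -bench . -benchmem` on such a file reports the per-operation bytes and allocation counts for the two approaches, which is the kind of difference the PR aims to reduce.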