Conversation
```ruby
def run_gc
  GC.enable
  GC.start
  GC.disable
end
```
Is the GCSuite mostly used to restart the GC, so that each benchmark run gets a clean GC state before the next one runs?
Yeah, this attempts to remove GC overhead and random skew from the benchmarks by running GC between each iteration and not allowing GC to run during the measured code execution.
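For readers wondering where `run_gc` plugs in: benchmark-ips accepts a custom suite object with `warming`/`running` hooks. A minimal sketch of that wiring (the report body is a placeholder):

```ruby
require "benchmark/ips"

# Custom suite for benchmark-ips: run a full GC before each warmup and
# measurement phase, then leave GC disabled while the timed block runs.
class GCSuite
  def warming(*)
    run_gc
  end

  def running(*)
    run_gc
  end

  # No-op hooks required by the suite interface.
  def warmup_stats(*); end
  def add_report(*); end

  private

  def run_gc
    GC.enable
    GC.start
    GC.disable
  end
end

Benchmark.ips do |x|
  x.config(suite: GCSuite.new)
  x.report("example") { "a" * 1_024 }
end
```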
In a network-intensive scenario, will async add more load to the network and make the situation worse? For example, in the multi-get calls we chunk the keys into smaller batches (100 keys per batch). Issuing those calls asynchronously means more requests in flight at once, which may cause extra load.
Yes, it can certainly increase load on the server when apps increase concurrent calls to memcached. In this example with the connection pool, if you max out the pool so all 10 workers are actively busy, that is the equivalent of 10x the clients compared to all of them waiting serially to do work. So leveraging this type of concurrency has additional costs for the overall infrastructure and performance of the platform.
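As a rough illustration of the pattern under discussion (not the PR's actual code; the pool size, batch size, and server address are assumptions), here is how dalli, connection_pool, and async can fan batched `get_multi` calls out across tasks:

```ruby
require "async"
require "connection_pool"
require "dalli"

# Assumed setup: 10 pooled clients against a local memcached.
POOL = ConnectionPool.new(size: 10, timeout: 5) do
  Dalli::Client.new("localhost:11211")
end

# Fan batched multi-gets out across async tasks. Each task checks a
# client out of the pool, so at most 10 requests are in flight at once.
def fetch_all(keys, batch_size: 100)
  Async do
    keys.each_slice(batch_size).map do |batch|
      Async { POOL.with { |client| client.get_multi(*batch) } }
    end.map(&:wait).reduce({}, :merge)
  end.wait
end

results = fetch_all((1..500).map { |i| "key-#{i}" })
```

Under Ruby 3.x, async installs a fiber scheduler, so dalli's plain socket IO yields to other tasks instead of blocking; that is the behavior this proof of concept is exercising.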
We want to verify that dalli works well with the async library and check some basic performance impacts. This is a small proof of concept showing how dalli can be used with async and a connection pool to increase performance across multiple cache requests, whether they run in threads or fibers.
There is some overhead, and a normal non-async set of calls will win on very small and very fast calls... but as the IO grows, either through payload size or through remote network calls with higher latency, async shows how it can more efficiently balance the IO.
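A minimal shape for that comparison, reusing the `POOL` and `fetch_all` sketch above (the payload size and key counts are placeholders):

```ruby
require "benchmark"

PAYLOAD = "x" * 50_000
KEYS = (1..200).map { |i| "bench-#{i}" }

# Seed the cache so both runs fetch identical data.
POOL.with { |c| KEYS.each { |k| c.set(k, PAYLOAD) } }

# Serial baseline: one batch at a time on a single checked-out client.
serial = Benchmark.realtime do
  POOL.with { |c| KEYS.each_slice(100) { |b| c.get_multi(*b) } }
end

# Concurrent version: batches fanned out across async tasks.
concurrent = Benchmark.realtime { fetch_all(KEYS) }

puts format("serial: %.4fs async: %.4fs", serial, concurrent)
```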
Running through Toxiproxy with around 2x the latency of plain localhost, async easily outperforms in this case:
50k payload on fast localhost, holding nearly equal with a plain loop, which shows some of the overhead involved in async: