--big-value-split-threshold Details #26
Comments
Every value over the size of N will be split into smaller chunks of size N (the last chunk might be less than N). The original key becomes the 'index' key, which stores a random suffix and the total number of chunks; the chunks are stored at modified keys that include the same random suffix and the chunk id.

The random suffix is needed for consistency - you can simply remove the original key and the chunks will be 'deleted', since there's no way to access them without knowing the random suffix. It also takes care of simultaneous sets: only one key will win the race, and only its random suffix will be valid.

The way we deployed it on a live system was in stages. First we deployed reads only - if you set N to some large value (like 1000000000), the logic is still enabled on the read path but will not actually split any values. This makes sure that all clients can understand split values once we start writing them.

Note that all chunks will be sent to the same memcache box as the original key would be, so you're still transferring the same amount of data from a single memcache box to the client. If you want to transfer huge values this way, you still have to wait for individual chunks to arrive serially, so that might explain the timeouts you see - can you share the size of the values you're setting/fetching and the value of N you tried?
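To make the scheme above concrete, here is a minimal in-memory sketch of the index-key/chunk layout. A plain dict stands in for memcache, and the key names and index-value format (`INDEX <suffix> <nchunks>`) are purely illustrative - this is not mcrouter's actual wire format.

```python
import math
import secrets

cache = {}  # stands in for a memcache box

N = 5  # split threshold in bytes (hypothetical, matches the example below)

def big_set(key, value):
    if len(value) <= N:
        cache[key] = value
        return
    suffix = secrets.token_hex(4)           # random suffix for consistency
    nchunks = math.ceil(len(value) / N)
    for i in range(nchunks):
        chunk = value[i * N:(i + 1) * N]    # last chunk may be < N
        cache[f"{key}:{suffix}:{i}"] = chunk
    # the original key becomes the 'index' key
    cache[key] = f"INDEX {suffix} {nchunks}"

def big_get(key):
    v = cache.get(key)
    if v is None or not v.startswith("INDEX "):
        return v                            # small value, or a miss
    _, suffix, nchunks = v.split()
    # chunks are fetched and concatenated serially, as described above
    parts = [cache[f"{key}:{suffix}:{i}"] for i in range(int(nchunks))]
    return "".join(parts)
```

Deleting the index key alone is enough to 'delete' the value: the chunk keys still exist, but without the random suffix they are unreachable and will eventually be evicted.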
Thanks, this clears it all up. My timeouts turned out to be unrelated.
If anyone wants some fun: it turned out that the cached value for the view context of our blog posts was weighing in at 6.5 MB. Each. This got rejected, but I think once we turned on big value splitting it overloaded our cache servers. ;-)
I have a few questions about the big value split threshold option:

mcrouter is version 1.0 (built using the Dockerfile). Memcached version is 1.4.13.

```sh
# 5 bytes, just for test purposes
mcrouter --big-value-split-threshold=5 \
  --config-str='{"pools":{"A":{"servers":["127.0.0.1:5001"]}},"route":"PoolRoute|A"}' \
  -p 5000
```
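With a threshold this small, almost every value gets split. A quick back-of-the-envelope, assuming the index-plus-chunks scheme described in this thread (the numbers here are an illustration, not measured mcrouter behavior):

```python
import math

N = 5            # --big-value-split-threshold, in bytes
value_size = 12  # hypothetical value size, in bytes

chunks = math.ceil(value_size / N)  # last chunk may be smaller than N
keys_stored = chunks + 1            # the chunks plus the index key
print(chunks, keys_stored)          # -> 3 4
```

So a 12-byte value stored through this config would turn into four memcache items, which is why a 5-byte threshold is only useful for testing.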
I was looking for some more details on the `--big-value-split-threshold` option. What exactly is N? The size of a value in bytes?