fix RESTORE command size limits #4568
Conversation
- protocol parsing (processMultibulkBuffer) was limited to 32-bit positions in the buffer
- readQueryFromClient: potential overflow
- rioWriteBulkCount used int, although rioWriteBulkString gave it size_t
- several places in sds.c used int for string length or index
- bugfix in RM_SaveAuxField (return value was 1 or -1, not the length)
- RM_SaveStringBuffer was limited to a 32-bit length
…pport RESTORE of large keys
Hi @oranagra. We're currently evaluating several pull requests addressing the issue of 32-bit/64-bit values for large databases. I think we should approach this issue with care and from a global point of view, so that it is handled consistently in the different places it appears. I've already talked with @antirez about it and we're analysing the problem. Thank you.
Thanks Oran. It's worth noting that this is kind of critical for several reasons:
The second problem is a non-trivial issue, because a protocol error/desync would cause a mass allocation. In theory the way to go would be to allow the client to be served by "overcommitting", and later actually allocating the memory as needed. A much better alternative: if we are sure to call the allocator the proper way, we could just have the kernel itself allocate zero pages at almost no cost, which are later really allocated by copy-on-write. This should be as simple as making sure we call calloc() instead of malloc()+memset() or the like. I need to look at the core more closely. I hope to merge tomorrow morning and continue with the other type-size fixes (pushed by @soloestoy and a few others).
Hello @oranagra, I just reviewed this and it looks great. I'm quite surprised by the number of wrong types there were in the code, even reaching sds itself. When most of that code was written, strings over 4GB seemed unrealistic for Redis in any context, which is obviously no longer true at all. Thanks to these fixes, the recent ones about dict and quicklist by @soloestoy, the adoption of the 64-bit hash function, and other changes, we are heading towards a true 64-bit Redis. I'm merging the configuration options as well; however, I think I'll change the names of the parameters a bit, because currently things like max or min are never used as a prefix, while the prefix is most of the time what you are going to change. Thanks for your work.
Fixes applied to antirez/sds as well.
We already had client buffer limits exported as configuration options. Stick with the naming scheme already used. See #4568.
Changed the name of one of the options again: because we had closely related options, I kept the original naming scheme.
Excellent change, and I look forward to a true 64-bit Redis (after merging some PRs and other fixes).
Equivalent changes introduced to redis sds.c via: redis/redis#4568
Reasoning:
Redis limits individual strings to 512MB, but it is obviously possible to create a hash object containing many fields of 512MB each. Such a key can serialize to well over 4GB, and while saving it to and loading it from RDB works perfectly well, trying to DUMP / RESTORE such a key will fail.
This PR has two commits: