Any way to make the reshard process faster? It's really slow for use in production! #2710
Comments
My application uses Lua scripts to wrap a bunch of operations into one call to get an atomic-transaction effect, because my business really requires a high level of data consistency. I really hope my app can benefit from the reshard feature.
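For illustration, here is a minimal sketch of the pattern described in the comment above: several multi-key operations wrapped in one Lua script so Redis runs them atomically. The redis-py client, key names, and transfer logic are assumptions for the example, not taken from the issue.

```python
# A minimal sketch (not from the issue) of wrapping multi-key operations in a
# Lua script so Redis executes them as one atomic unit. Key names, amounts,
# and the redis-py client are illustrative assumptions.
import redis

r = redis.Redis(host="localhost", port=6379)

# Move an amount between two counters: either both writes happen, or (when the
# balance is insufficient) neither does.
TRANSFER_SCRIPT = """
local balance = tonumber(redis.call('GET', KEYS[1]) or '0')
local amount  = tonumber(ARGV[1])
if balance < amount then
    return 0
end
redis.call('DECRBY', KEYS[1], amount)
redis.call('INCRBY', KEYS[2], amount)
return 1
"""

r.set("account:a", 100)
r.set("account:b", 0)

# EVAL runs the whole script server-side as a single atomic operation.
ok = r.eval(TRANSFER_SCRIPT, 2, "account:a", "account:b", 25)
print("transfer succeeded:", bool(ok))
```

Note that in Redis Cluster all keys touched by one script must hash to the same slot (typically forced with hash tags), and while that slot is being resharded such multi-key operations may be rejected or redirected until the migration completes, which matches the behavior the reporter describes below.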
We use the new variadic/pipelined MIGRATE for faster migration. Testing is not easy because seeing the time it takes for a slot to be migrated requires a very large data set, but even with all the overhead of migrating multiple slots and setting them up properly, what used to take 4 seconds (1 million keys, 200 slots migrated) now takes 1.6 seconds, which is a good improvement. However, the improvement can be a lot larger if: 1. We use large datasets where a single slot has many keys. 2. We move more than 10 keys per iteration, making this configurable, which is planned. Close #2710 Close #2711
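As a rough sketch of what the variadic MIGRATE form mentioned above looks like when issued directly (the redis-py client, node addresses, key names, and timeout are placeholders, not taken from redis-trib), one command can now move a whole batch of keys to the target node:

```python
# Rough sketch of the variadic MIGRATE form: one command moves a batch of keys
# instead of one key per round trip. Addresses, ports, key names, and the
# redis-py client are placeholder assumptions, not taken from redis-trib.
import redis

src = redis.Redis(host="127.0.0.1", port=7000)

batch = ["user:1001", "user:1002", "user:1003"]  # keys from the slot being moved

# MIGRATE host port "" destination-db timeout KEYS key [key ...]
# The empty string in the key position signals that the keys follow the KEYS token.
src.execute_command("MIGRATE", "127.0.0.1", 7001, "", 0, 5000, "KEYS", *batch)
```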
Resharding with redis-trib.rb migrates keys at a rate of about 60,000 keys per minute. Although the resharding can be done on the fly without affecting incoming queries, it does affect multi-key queries. In my business scenario, I use a lot of Lua scripts to wrap series of operations involving multiple keys; almost all of my business logic involves multi-key operations. So, during a so-called "resharding at runtime", my application has no "runtime" at all: it cannot do anything while Redis is resharding. That would still be acceptable, except I found the "down time" caused by resharding is so long it can consume a whole day, and I don't think my data in Redis is that big for a production scenario. So I really think the reshard function provided by redis-trib.rb should be much, much faster to be of practical use.
I believe Redis can do this much faster; after all, it's just moving a few hundred MB of data over the network. I think the bottleneck may lie somewhere in redis-trib.rb. Does printing a dot for every key moved slow it down? Could sending the MIGRATE commands in a pipeline, rather than one by one, improve the speed a lot?
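As a rough sketch of the pipelining idea asked about here (again assuming the redis-py client and placeholder addresses, keys, and timeout), several single-key MIGRATE commands can be queued and flushed in one round trip:

```python
# Rough sketch of the pipelining idea: queue several single-key MIGRATE
# commands and flush them in one round trip instead of waiting for each reply.
# Addresses, keys, and the redis-py client are placeholder assumptions.
import redis

src = redis.Redis(host="127.0.0.1", port=7000)
keys_to_move = ["order:1", "order:2", "order:3"]

# MIGRATE still executes serially and synchronously on the source node, so
# pipelining mainly removes the client<->source round trip for every key.
pipe = src.pipeline(transaction=False)
for key in keys_to_move:
    pipe.execute_command("MIGRATE", "127.0.0.1", 7001, key, 0, 5000)
print(pipe.execute())  # one reply per queued MIGRATE
```

Because MIGRATE is synchronous on the source node, pipelining mainly saves the per-key round trip between redis-trib and the source, which is roughly why the batched KEYS form shown earlier gives the larger win.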