Any way to make the reshard process faster? It's really slow for use in production! #2710

Closed
Calabor-Hoc opened this issue Aug 3, 2015 · 3 comments

Comments

@Calabor-Hoc

Resharding via redis-trib.rb migrates keys at a rate of about 60,000 keys per minute. Although resharding can be done on the fly without affecting incoming single-key queries, it does affect multi-key queries. In my business scenario, I use a lot of Lua scripts to wrap a series of operations involving multiple keys; almost all of my business logic involves multi-key operations. So during a so-called "resharding at runtime", my application has no "runtime" at all: it cannot do anything while Redis is resharding. Even that would be acceptable, except I found the "down time" caused by a reshard is so long it can consume a whole day. I don't think my data in Redis is that big for a production scenario, so I really think the reshard function provided by redis-trib.rb needs to be much, much faster to be of practical use.

I believe Redis can do this much faster; after all, it's just moving hundreds of MB of data over the network. I suspect the bottleneck lies somewhere in redis-trib.rb itself. Does printing a dot for every key moved slow it down? Could sending the MIGRATE commands in a pipeline, rather than one by one, improve the speed a lot?
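To make the idea concrete, here is a rough sketch of the kind of pipelining I mean (illustrative only, not redis-trib's actual code: it assumes the redis-py client, and the hosts, ports, slot number, and batch size are made up):

```python
import redis

# Source cluster node that currently owns the slot (example address).
src = redis.Redis(host="localhost", port=7000)
target_host, target_port = "localhost", 7001  # destination node (example)
slot = 1234                                   # hypothetical slot being moved

# Fetch a batch of keys living in the slot being resharded.
keys = src.execute_command("CLUSTER", "GETKEYSINSLOT", slot, 100)

# Instead of one network round trip per key, buffer one MIGRATE per key
# and flush them all in a single pipeline. (A real reshard would also
# issue CLUSTER SETSLOT IMPORTING/MIGRATING first; omitted here.)
pipe = src.pipeline(transaction=False)
for key in keys:
    # MIGRATE host port key destination-db timeout
    pipe.execute_command("MIGRATE", target_host, target_port, key, 0, 5000)
pipe.execute()
```

Even though each MIGRATE still moves a single key, batching them in a pipeline should remove most of the per-key round-trip latency.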

@Calabor-Hoc
Author

My application uses Lua scripts to wrap a bunch of operations into one in order to achieve an atomic transaction effect, because my business really requires a high level of data consistency. I really hope my app can benefit from the reshard feature.
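For example, the pattern looks roughly like this (a toy sketch, again assuming redis-py; the key names, amounts, and transfer logic are made up, and in a cluster the keys would need hash tags such as {acct}:a so they land in the same slot):

```python
import redis

r = redis.Redis()

# Debit one counter and credit another as a single atomic step: the
# whole script runs without any other command interleaving.
TRANSFER = """
local from, to = KEYS[1], KEYS[2]
local amount = tonumber(ARGV[1])
if tonumber(redis.call('GET', from) or '0') < amount then
  return redis.error_reply('insufficient funds')
end
redis.call('DECRBY', from, amount)
redis.call('INCRBY', to, amount)
return 'OK'
"""

r.mset({"acct:a": 100, "acct:b": 0})
r.eval(TRANSFER, 2, "acct:a", "acct:b", 25)
```

While a slot is being migrated, multi-key calls like this one can be rejected with -TRYAGAIN errors until the slot settles, which is exactly the "down time" I described.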

@HeartSaVioR
Contributor

@Calabor-Hoc
I modified redis-trib to use a pipeline while moving the keys in a slot.
Please refer to #2711 and check it out to see if it helps you.

@Calabor-Hoc
Author

@HeartSaVioR
We meet again!
Thanks a lot!

antirez added a commit that referenced this issue Dec 11, 2015
We use the new variadic/pipelined MIGRATE for faster migration.
Testing is not easy, because seeing the time it takes for a slot to be
migrated requires a very large data set; but even with all the overhead
of migrating multiple slots and setting them up properly, what used to
take 4 seconds (1 million keys, 200 slots migrated) now takes 1.6, which
is a good improvement. However, the improvement can be a lot larger if:

1. We use large datasets where a single slot has many keys.
2. We move more than 10 keys per iteration (making this configurable is
   planned).

Close #2710
Close #2711
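To make the new form concrete, here is a minimal sketch of the variadic call (assuming the redis-py client; the addresses and slot number are examples, and the batch of 10 matches the current per-iteration count mentioned above):

```python
import redis

src = redis.Redis(host="localhost", port=7000)

# Grab up to 10 keys from the slot being migrated (example slot number).
keys = src.execute_command("CLUSTER", "GETKEYSINSLOT", 1234, 10)

if keys:
    # One MIGRATE moves all of them: the key argument is the empty
    # string and the actual keys follow the KEYS option.
    # MIGRATE host port "" destination-db timeout KEYS key1 key2 ...
    src.execute_command("MIGRATE", "localhost", 7001, "", 0, 5000,
                        "KEYS", *keys)
```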
antirez added a commit that referenced this issue Dec 13, 2015
antirez added a commit that referenced this issue Dec 13, 2015
JackieXie168 pushed a commit to JackieXie168/redis that referenced this issue Aug 29, 2016
JackieXie168 pushed a commit to JackieXie168/redis that referenced this issue Aug 29, 2016