
[leo_storage] Possibility to become imbalance of a total mq's msgs during a rebalance (detach-node) #472

Closed
yosukehara opened this issue Apr 7, 2016 · 6 comments

Comments

@yosukehara
Member

Description

In the current version, v1.2.21, after executing the detach and rebalance commands, LeoStorage assigns the rebalance consumption messages to the primary node of each vnode in the RING. As a result, the total number of messages can become imbalanced across storage nodes.
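For illustration, here is a minimal, hypothetical sketch of the v1.2.21 behaviour described above (invented module and function names, not the actual leo_storage code): every rebalance message for a vnode is enqueued on that vnode's primary node, so a node that is primary for many vnodes accumulates a disproportionately large queue.

```erlang
%% Hypothetical sketch of the v1.2.21 assignment; NOT leo_storage's API.
-module(rebalance_primary_only).
-export([assign/1]).

%% VNodeOwners :: [{VNodeId, Nodes}] where the head of Nodes is the primary.
%% Returns a map of Node => number of rebalance messages queued on it.
assign(VNodeOwners) ->
    lists:foldl(
      fun({_VNodeId, [Primary | _Replicas]}, Counts) ->
              %% Every message goes to the primary, regardless of its load.
              Counts#{Primary => maps:get(Primary, Counts, 0) + 1}
      end, #{}, VNodeOwners).
```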

Solution

LeoStorage should adjust how messages are assigned across storage nodes so that every storage node in a LeoFS cluster holds roughly the same number of consumption messages.
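A minimal sketch of the proposed direction, under the assumption that a message may be assigned to any node in the vnode's redundancy set (again, invented names, not the actual implementation): pick the currently least-loaded candidate node, which evens out the per-node totals.

```erlang
%% Hypothetical sketch of a balanced assignment; NOT leo_storage's API.
-module(rebalance_balanced).
-export([assign/1]).

assign(VNodeOwners) ->
    lists:foldl(
      fun({_VNodeId, Nodes}, Counts) ->
              %% Assign the message to whichever candidate node currently
              %% holds the fewest messages, keeping the totals even.
              Node = least_loaded(Nodes, Counts),
              Counts#{Node => maps:get(Node, Counts, 0) + 1}
      end, #{}, VNodeOwners).

least_loaded(Nodes, Counts) ->
    hd(lists:sort(
         fun(A, B) -> maps:get(A, Counts, 0) =< maps:get(B, Counts, 0) end,
         Nodes)).
```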

@windkit
Contributor

windkit commented Apr 7, 2016

[Screenshot: Grafana dashboard]

@yosukehara yosukehara changed the title [leo_storage] Possibility to become imbalance of a total mq's msgs during a rebalance [leo_storage] Possibility to become imbalance of a total mq's msgs during a rebalance (detach-node) Apr 8, 2016
@yosukehara yosukehara modified the milestones: 1.2.22, 1.4.0-RC2 Apr 8, 2016
yosukehara added a commit to leo-project/leo_storage that referenced this issue May 12, 2016
@yosukehara
Member Author

yosukehara commented May 13, 2016

This issue seems to have been improved by yesterday's update. We're going to test the detach feature several more times.

[Result]

  • 1M objects rebalance (detach)
  • Duration: approx 50 min

[Screenshot: LeoFS dashboard (Grafana) during the 1M-object rebalance]

@yosukehara
Member Author

To be honest, I've found one remaining issue: remote access is imbalanced during the data rebalance, as shown below:

[Screenshot: imbalanced remote access during the data rebalance, 2016-05-13]

As the next action, I need to fix the imbalanced remote access during the data rebalance.
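One plausible direction for that fix, sketched here purely as an assumption (the actual change may differ): instead of always reading each object from the same replica while rebalancing, pick the source node at random from the vnode's redundant nodes so remote reads spread evenly.

```erlang
%% Hypothetical helper; NOT the actual leo_storage change. Spreads remote
%% reads during rebalance across the redundant nodes instead of always
%% hitting the same one.
-module(rebalance_read_spread).
-export([pick_source/1]).

%% RedundantNodes :: [node()] holding a copy of the object.
pick_source(RedundantNodes) ->
    %% rand:uniform/1 returns an integer in 1..N, so this picks a
    %% uniformly random replica as the read source.
    lists:nth(rand:uniform(length(RedundantNodes)), RedundantNodes).
```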

@yosukehara
Member Author

yosukehara commented May 14, 2016

We've checked the data-rebalance performance after detaching a node, shrinking the cluster from 8 nodes to 7, as below:

[Result]

  • 10M objects rebalance (detach)
  • Duration: approx 11 hours

[Screenshot: LeoFS dashboard (Grafana) during the 10M-object rebalance]

As I mentioned in the previous comment, I'm going to improve the remote-access processing as the next action.

yosukehara added a commit to leo-project/leo_storage that referenced this issue May 16, 2016
@yosukehara
Member Author

yosukehara commented May 16, 2016

We've checked the data-rebalance performance again, under the same conditions as the previous test. The performance has increased dramatically: the duration dropped from 11 hours to 8.5 hours.

[Screenshot: LeoFS dashboard (Grafana) during the repeated rebalance test]

@yosukehara
Member Author

Today, we've confirmed that this issue is fixed.
