
Don't set any keepalived master on first run #50

Closed
wants to merge 1 commit into from

Conversation

jbonjean (Contributor) commented Jul 2, 2015

During the first run, the fact gluster_vrrp_fqdns is empty, causing all
nodes to be set as keepalived master. This can create race conditions on
the second run, when the gluster volumes are created, because every node
holds the VIP; it is an invalid state in any case.
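To make the failure mode concrete, here is a minimal Puppet sketch of how such a selection could go wrong. This is a hypothetical reconstruction, not the module's actual code: only the fact name gluster_vrrp_fqdns comes from this thread, and the fact is assumed to be a comma-separated string of peer FQDNs that is empty on the very first run.

```puppet
# Hypothetical reconstruction of the pre-patch selection (illustrative;
# only the fact name gluster_vrrp_fqdns is taken from this thread).
# Assumption: the fact is a comma-separated string of peer FQDNs,
# empty on the very first Puppet run.
$fqdns = split($::gluster_vrrp_fqdns, ',')

if empty($fqdns) or $::fqdn == $fqdns[0] {
  $state = 'MASTER'  # empty list read as "we're alone" => every node becomes MASTER
} else {
  $state = 'BACKUP'
}
```

With that reading, a simultaneous first run on all nodes hits the empty-list branch everywhere, so every node is rendered into its keepalived config as MASTER and claims the VIP.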

purpleidea (Owner) commented:
@jbonjean This would take a bit of convincing... When the list is empty, we're basically alone, and should be MASTER, no? I'll admit I haven't looked at this part of the code recently...

Let's look at this another way: can you provide your configuration and let me know where the bug/issue is? This all works for me.

jbonjean (Contributor, Author) commented Jul 2, 2015

@purpleidea During the first run the list is empty for every node (I start Puppet simultaneously on all nodes), so every node becomes master and takes the VIP. This is fixed on the second run, when the list is populated from the fact and the first node is selected; at least that is how I understand it.
I would say that when the list is empty, the node is in an "unknown" state rather than actually knowing it is alone.

I will provide my configuration as soon as I can, but it's very standard: 4 nodes, replica 2, with a hardcoded hosts list (no exported resources).
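In the same illustrative style as above, a hedged sketch of the behaviour this patch argues for (not the actual diff): an empty peer list means "unknown", so no node claims MASTER until the fact is populated on a later run.

```puppet
# Hypothetical sketch of the proposed behaviour (illustrative, not the
# actual patch): an empty peer list means "unknown", not "alone".
$fqdns = split($::gluster_vrrp_fqdns, ',')

if empty($fqdns) {
  $state = 'BACKUP'  # first run: claim nothing, wait for the fact to populate
} elsif $::fqdn == $fqdns[0] {
  $state = 'MASTER'  # once the list exists, the first FQDN wins the VIP
} else {
  $state = 'BACKUP'
}
```

Under this reading, the worst case on the first run is that no node holds the VIP briefly, which is recoverable, whereas all nodes holding it simultaneously is the invalid state the commit message describes.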

purpleidea (Owner) commented:

Does anything break or not work correctly because of this?

jbonjean closed this Jan 6, 2021