I have set up a three-server Consul cluster that works well for both service discovery and DNS forwarding.
What confuses me is that, in the web UI, one or two of the three Consul server nodes randomly and occasionally turn yellow, which indicates a failed serf health check.
And there are a lot of logs like the ones below:
```
2015/06/12 08:04:26 [INFO] memberlist: Marking consul-server3 as failed, suspect timeout reached
2015/06/12 08:04:26 [INFO] serf: EventMemberFailed: consul-server3 10.170.201.26
2015/06/12 08:04:26 [INFO] consul: removing server consul-server3 (Addr: 10.170.201.26:8300) (DC: dc1)
2015/06/12 08:04:26 [INFO] consul: member 'consul-server3' failed, marking health critical
```
I'm not sure why it behaves this way.
I start the server nodes with:
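(The exact command did not survive here; the following is a representative Consul server invocation of this era, with the data directory and bind address as placeholders.)

```shell
# Hypothetical reconstruction of the consul-server1 startup command.
# -data-dir and -bind values are placeholders, not the original ones.
consul agent -server -bootstrap \
  -data-dir /var/consul \
  -node consul-server1 \
  -bind 10.170.201.24
```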
`consul-server2` and `consul-server3` are started with almost the same command, except for the IP address and using `-join` instead of `-bootstrap`.

Where have I gone wrong?