nats-streaming-cluster pod was not able to restore the snapshot #1182

Open
pycaster opened this issue Apr 19, 2021 · 2 comments

@pycaster commented Apr 19, 2021

We have a 3-node nats-streaming cluster set up with a FILE backend and a separate nats pod. The entire setup is TLS-backed.

- args:
          - --cluster_id=nats-streaming
          - -m=8222
          - -store=FILE
          - --dir=/nats-datastore
          - --max_age=86400s
          - -sc=/nats/nats-streaming.conf

We have a failure scenario where one of the cluster nodes is failing to restore messages from the snapshot. I have attached logs from all 3 pods. This has happened only once across the many setups we have tried to date in our staging environment.

restore-failure.txt

We do not know how the cluster got into this situation. The next steps I am looking for, in this order, are:

  1. An explanation of how the cluster could have ended up in this situation, and how we should interpret the attached logs.
  2. How to successfully restore from here?

Please advise. I can provide whatever information you need about the environment to help investigate this.

@pycaster (Author) commented:

  1. It is also not clear from these logs who is trying to become the new leader. Is the peer that's trying to restore attempting to become the new leader? I see nats-streaming-cluster-2, which was the leader, stepping down.
  2. Regarding this line: [1] 2021/02/23 20:04:00.066723 [ERR] STREAM: channel "naas.object.meta.store" - unable to restore messages (snapshot 858577/861234, store 0/0, cfs 858577): nats: timeout. Does this mean this peer was trying to restore messages (with other peers) but timed out on NATS? We checked for network issues but didn't find any. Could it be related to the size of the messages, or a configuration issue?

@kozlovic (Member) commented:

@pycaster Yes, such a message:

channel "naas.object.meta.store" - unable to restore messages (snapshot 858577/861234, store 0/0, cfs 858577): nats: timeout

indicates that this node started, realized that it should have messages 858577 to 861234 in channel naas.object.meta.store, but has none, so it tried to get them from the leader. The nats: timeout means that the requests to fetch those messages were not answered, indicating either that there is no NATS communication happening or that no leader is currently running.
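If you want to check whether any node currently holds leadership, you can query the monitoring endpoint you already enabled with -m=8222. This is only a sketch (the pod name is taken from your logs, and you can reach port 8222 any way that is convenient); the /streaming/serverz output should include the node's role when clustering is enabled, and /streaming/channelsz shows what the local store actually has for the failing channel:

    # Forward the monitoring port of one streaming pod at a time and inspect it
    kubectl port-forward pod/nats-streaming-cluster-2 8222:8222 &
    curl http://localhost:8222/streaming/serverz
    # Compare the local store's first/last sequence for the failing channel
    # against the 858577/861234 range reported in the snapshot
    curl 'http://localhost:8222/streaming/channelsz?channel=naas.object.meta.store'

Exactly one node should report itself as leader when the cluster is healthy.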

Looking at the logs from the 3 nodes, it's difficult to say what's going on, but clearly the cluster does not seem to be running. As you said, nats-streaming-cluster-2 appears to have been the leader at 2021/02/19 15:13:33.927999 but lost leadership about 6 minutes later (2021/02/19 15:19:32.168896 [INF] STREAM: finished leader stepdown actions); there is no new entry after that, and no other log shows leadership being acquired. This log also mentions reconnections to the NATS server: reconnected to NATS Server at "nats://nats.nginx-controller.svc.cluster.local.:4222", so it does look like there are some connectivity issues.

The log for nats-streaming-cluster-1 shows only reconnection messages on 02/23 (about 4 of them between 07:43:18 and 07:55:59) and nothing else.
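To rule out basic connectivity problems between the streaming pods and the NATS server, a quick check from inside the cluster could look like this (just a sketch; the test pod name and busybox image are arbitrary, while the service name and port come from the reconnect log line above):

    # Verify that the NATS client port is reachable from within the cluster
    kubectl run nats-conn-test --rm -it --restart=Never --image=busybox -- \
      nc -zv nats.nginx-controller.svc.cluster.local 4222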

Also, you are running 0.19.0, so you could try the latest version (0.21.2), but this is a bit strange:

[1] 2021/02/23 20:03:30.263973 [INF] STREAM: Git commit: [{{.ShortCommit}}]

Not sure how you built your image.
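{{.ShortCommit}} is a goreleaser template placeholder; the official release builds substitute it into the binary through -ldflags, so seeing the literal placeholder suggests the build copied those flags without the substitution actually happening. If you are building the image yourself, something along these lines should bake in the real commit (the exact -X variable path is my assumption and may differ between versions):

    # Hypothetical manual build that injects the short commit hash
    git clone https://github.com/nats-io/nats-streaming-server.git
    cd nats-streaming-server
    git checkout v0.21.2
    go build -ldflags "-X github.com/nats-io/nats-streaming-server/server.gitCommit=$(git rev-parse --short HEAD)" \
      -o nats-streaming-server .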
