
Failed to start: discovered another streaming server with cluster ID "example-stan" #61

Open · veerapatyok opened this issue Dec 26, 2019 · 25 comments


@veerapatyok

I get the following error when I deploy a NatsStreamingCluster:

[1] 2019/12/26 07:16:45.762521 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"

I am using GKE.

Full log output:

[1] 2019/12/26 07:16:45.747712 [INF] STREAM: ServerID: JTmPHIR4BFp2ZuAWkekcIl
[1] 2019/12/26 07:16:45.747715 [INF] STREAM: Go version: go1.11.13
[1] 2019/12/26 07:16:45.747717 [INF] STREAM: Git commit: [910d6e1]
[1] 2019/12/26 07:16:45.760913 [INF] STREAM: Recovering the state...
[1] 2019/12/26 07:16:45.761073 [INF] STREAM: No recovered state
[1] 2019/12/26 07:16:45.762399 [INF] STREAM: Shutting down.
[1] 2019/12/26 07:16:45.762521 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"
@makkus

makkus commented Jan 8, 2020

I'm getting the same error, on a dev k3d cluster.

@timjkelly

I'm also seeing this error when installing via the instructions here: https://github.com/nats-io/nats-streaming-operator#deploying-a-nats-streaming-cluster

@bfalese-navent

It's not working any more. Same error.

@dannylesnik

Stuck with the same problem: only one replica of the NATS Streaming pod is working. All the others exit with the same error.

[FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"

@kelvin-yue-scmp

Having the same issue

@maertu

maertu commented Feb 4, 2020

Same

@veerapatyok
Author

I have a temporary solution: I created nats-streaming-cluster.yaml and added the following inside the file:

config:
    debug: true

nats-streaming-cluster.yaml

---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  # Number of nodes in the cluster
  size: 3

  # NATS Streaming Server image to use, by default
  # the operator will use a stable version
  #
  image: "nats-streaming:latest"

  # Service to which NATS Streaming Cluster nodes will connect.
  #
  natsSvc: "example-nats"

  config:
    debug: true

@veerapatyok
Author

I switched to KubeMQ.

@hasanovkhalid

hasanovkhalid commented Apr 20, 2020

Any update on this issue? The same behaviour on EKS. If I keep retrying, it eventually works; however, once a pod restarts it starts happening again.

@sneerin

sneerin commented May 19, 2020

The same issue for me.

@lundbird

After trying the config above, I get the error [FTL] STREAM: Failed to start: failed to join Raft group example-stan. I am able to create a working NATS + STAN configuration by using the StatefulSets here: https://docs.nats.io/nats-on-kubernetes/minimal-setup#ha-setup-using-statefulsets

@drshade

drshade commented Jun 17, 2020

Same problem. Adding "debug: true" worked for me once, but only unpredictably on the next few attempts (I had to delete and re-apply the cluster a few times).

For my configuration I suspect it may be a timing issue, with NATS Streaming racing my Envoy proxy sidecar (I have Istio installed in my cluster), and that adding "debug: true" makes NATS Streaming take a bit longer to boot, giving Envoy enough time to be ready. It's a tricky one to debug, as the images are based on scratch, with no real ability to inject a sleep as part of the image command.

Am I the only one using Istio, or is this a common theme?

@lanox

lanox commented Jul 23, 2020

I have the same issue. Can anyone help?

[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: ZfhJYXPEJEzpUKNLHWlD0F
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Recovering the state...
[1] [INF] STREAM: No recovered state
[1] [INF] STREAM: Shutting down.
[1] [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "stan-service"

@hbobenicio
Contributor

hbobenicio commented Jul 23, 2020

Same issue here.

Describing the pods created by the nats-streaming-operator, I see the CLI arguments setting the cluster ID as follows:

$ kubectl describe -n mynamespace stan-cluster-2

Name:         stan-cluster-2
Containers:
  stan:
    Image:         nats-streaming:0.18.0
    Command:
      /nats-streaming-server
      -cluster_id
      stan-cluster
      -nats_server
      nats://nats-cluster:4222
      -m
      8222
      -store
      file
      -dir
      store
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1

Pod 1 runs OK. Then pods 2 and 3 try to run with the same cluster ID and fail because it's already in use (by pod 1).

What is the correct way for the nats-streaming-operator to assign cluster IDs to the cluster servers? Is there some config I'm missing here?

PS: I'm not mounting any volumes in the pod spec yet.

@hbobenicio
Contributor

hbobenicio commented Jul 23, 2020

Maybe this line can be a clue to what's happening:

Isn't it supposed to be pod.Name or something?

@hbobenicio
Contributor

hbobenicio commented Jul 23, 2020

I downloaded the code, changed o.Name to pod.Name, and added some logs to compare both values.
I built the Docker image and redeployed the operator in my minikube... this is what follows:

$ kubectl logs -n poc nats-streaming-operator-5d4777f476-2wf7n

time="2020-07-23T20:25:22Z" level=info msg="cluster name: stan-cluster" # this is the o.Name
time="2020-07-23T20:25:22Z" level=info msg="pod name: stan-cluster-2" # this is the pod.Name

Now the cluster ID is correctly set for the pods:

$ kubectl logs -n poc stan-cluster-2 # stan-cluster-2 is the correct cluster-id!

[1] 2020/07/23 20:27:22.126726 [INF] STREAM: Starting nats-streaming-server[stan-cluster-2] version 0.18.0 

and all servers are ready.

@lanox

lanox commented Jul 23, 2020

@hbobenicio that is what I have done, and it seems to work, although I am not sure how to validate that all 3 nodes are functioning correctly.

I can see the 3 nodes being connected, but that is about it.

Is there a way to check which node is receiving?

@hbobenicio
Contributor

hbobenicio commented Jul 24, 2020

@lanox there are some ways to test it... a quick test would be running nats-box on the cluster and sending/receiving some messages, or maybe writing a small test app and running it on your cluster (a rough sketch of one is below). Also try checking the logs and, lastly, some chaos testing.
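For the test-app route, this is roughly the kind of thing I mean, using the stan.go client. It's only a sketch: the cluster ID stan-cluster and URL nats://nats-cluster:4222 are the values from my pod spec above, and the subject/client names ("smoke-test", "smoke-test-client") are made up, so adjust everything to your own deployment.

package main

import (
	"log"

	"github.com/nats-io/stan.go"
)

func main() {
	// Cluster ID and NATS URL taken from my pod spec above; change as needed.
	sc, err := stan.Connect("stan-cluster", "smoke-test-client",
		stan.NatsURL("nats://nats-cluster:4222"))
	if err != nil {
		log.Fatal(err)
	}
	defer sc.Close()

	received := make(chan *stan.Msg, 1)

	// Subscribe and replay everything available on the test channel.
	sub, err := sc.Subscribe("smoke-test", func(m *stan.Msg) {
		received <- m
	}, stan.DeliverAllAvailable())
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	// Publish one message and wait until it comes back through the subscription.
	if err := sc.Publish("smoke-test", []byte("hello")); err != nil {
		log.Fatal(err)
	}
	log.Printf("got back: %s", (<-received).Data)
}

If it round-trips, delete the pod that is currently active and run it again; a healthy Raft group should elect a new leader and the test should still pass.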

Good to know that it worked for you too

@hbobenicio
Contributor

hbobenicio commented Jul 24, 2020

My bad... I mixed up the concept of cluster_id (o.Name is actually correct) with cluster_node_id. The bug is somewhere else, below here:

isClustered := o.Spec.Config != nil && (o.Spec.Size > 1 || o.Spec.Config.Clustered)

if isClustered && !ftModeEnabled {
    storeArgs = append(storeArgs, fmt.Sprintf("--cluster_node_id=%q", pod.Name))

My YAML describing the NatsStreamingCluster doesn't have a config entry, so isClustered evaluates to false and the pods never get cluster_node_id set.
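To make the failure mode concrete, here is a tiny standalone reproduction of that predicate. The structs are just stand-ins I wrote for this example, not the operator's real types; only the boolean expressions are the point.

package main

import "fmt"

// Stand-in types for this example only; the real structs live in the
// operator's v1alpha1 package.
type ServerConfig struct{ Clustered bool }

type Spec struct {
	Size   int32
	Config *ServerConfig
}

func main() {
	// My case: size 3 in the spec, no config entry in the YAML.
	spec := Spec{Size: 3, Config: nil}

	// The check as it is today: Config must be non-nil before size is even considered.
	current := spec.Config != nil && (spec.Size > 1 || spec.Config.Clustered)

	// What I would expect: size > 1 alone should already mean "clustered".
	expected := spec.Size > 1 || (spec.Config != nil && spec.Config.Clustered)

	fmt.Println(current, expected) // prints: false true
}

So with size: 3 and no config entry, the operator skips the cluster_node_id flag entirely, which is why adding an empty config entry (see my later comment) works around it.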

@lanox

lanox commented Jul 24, 2020

Thanks for looking into this. I think it only looked like it was working because each pod was running as an individual node rather than as part of a cluster, hence my saying I'm not sure it worked as it's supposed to. However, I could be wrong.

@lanox

lanox commented Jul 24, 2020

@hbobenicio so this is what fixed the problem for me: I added ft_group_name: "production-cluster" to my config section, which told the streaming operator that it is running in fault-tolerance mode, so a single node is active while the other 2 are in standby mode.

This is what I did to test it:

 kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
nats-operator-58644766bf-hpx9p             1/1     Running   1          24h
nats-service-1                             1/1     Running   0          15m
nats-service-2                             1/1     Running   0          15m
nats-service-3                             1/1     Running   0          15m
nats-streaming-operator-56d59c9846-l6qlm   1/1     Running   0          52m
stan-service-1                             1/1     Running   1          15m
stan-service-2                             1/1     Running   0          15m
stan-service-3                             1/1     Running   0          15m

then

kubectl logs stan-service-1 -c stan
[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: ZwZuUeXKPK3Y7OjI7R1hLd
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Starting in standby mode
[1] [INF] STREAM: Server is active
[1] [INF] STREAM: Recovering the state...
[1] [INF] STREAM: No recovered state
[1] [INF] STREAM: Message store is FILE
[1] [INF] STREAM: Store location: store
[1] [INF] STREAM: ---------- Store Limits ----------
[1] [INF] STREAM: Channels:            unlimited
[1] [INF] STREAM: --------- Channels Limits --------
[1] [INF] STREAM:   Subscriptions:     unlimited
[1] [INF] STREAM:   Messages     :     unlimited
[1] [INF] STREAM:   Bytes        :     unlimited
[1] [INF] STREAM:   Age          :        1h0m0s
[1] [INF] STREAM:   Inactivity   :     unlimited *
[1] [INF] STREAM: ----------------------------------
[1] [INF] STREAM: Streaming Server is ready

Then I deleted stan-service-1

Then I checked which of the other nodes had become active:

 kubectl logs stan-service-2 -c stan
[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: BlQLxnAFPv7yf7uaWdXsa9
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Starting in standby mode

kubectl logs stan-service-3 -c stan
[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: B3niCweLpvzSewgx3mUsJ9
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Starting in standby mode
[1] [INF] STREAM: Server is active
[1] [INF] STREAM: Recovering the state...
[1] [INF] STREAM: No recovered state
[1] [INF] STREAM: Message store is FILE
[1] [INF] STREAM: Store location: store
[1] [INF] STREAM: ---------- Store Limits ----------
[1] [INF] STREAM: Channels:            unlimited
[1] [INF] STREAM: --------- Channels Limits --------
[1] [INF] STREAM:   Subscriptions:     unlimited
[1] [INF] STREAM:   Messages     :     unlimited
[1] [INF] STREAM:   Bytes        :     unlimited
[1] [INF] STREAM:   Age          :        1h0m0s
[1] [INF] STREAM:   Inactivity   :     unlimited *
[1] [INF] STREAM: ----------------------------------
[1] [INF] STREAM: Streaming Server is ready

and stan-service-1 is showing standby.

I think the documentation needs to be updated as well as the example deployments.

@lanox

lanox commented Jul 24, 2020


Oh, and it seems you can only run it in cluster mode or FT mode, but not both together.

@hbobenicio
Contributor

hbobenicio commented Jul 24, 2020

Yeah, they are mutually exclusive modes. My use case is for cluster mode.

I think those mode checks could be improved, or, if the config object in the spec really is required, a validation that reports when it is missing would give a better error. But I still think the best approach is for it to work even without the config entry.

So, until the fix is made, this is the workaround:

If your YAML has no config entry, just add an empty one, like this:

apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "my-stan-cluster"
  namespace: ${NAMESPACE}
spec:
  size: ${CLUSTER_SIZE}
  image: "nats-streaming:0.18.0"
  natsSvc: ${NATS_CLUSTER_NAME}

  # Here... without a config entry, isClustered is false even with spec.Size > 1.
  # Just put an empty config
  config: {}

hbobenicio added a commit to hbobenicio/nats-streaming-operator that referenced this issue Jul 24, 2020
… code correctly set cluster-node-id for cluster mode
wallyqs added a commit that referenced this issue Jul 27, 2020
…figs

workaround issue #61 - adding missing configs to all examples of cluster mode
@sergeyshaykhullin

@hbobenicio @wallyqs Hello. I'm getting the same error using the STAN Helm chart:

[1] 2020/09/11 15:51:26.922551 [INF] STREAM: Starting nats-streaming-server[stan] version 0.18.0
[1] 2020/09/11 15:51:26.922673 [INF] STREAM: ServerID: PWeRnm2bTpcMaHZatM8MdC
[1] 2020/09/11 15:51:26.922678 [INF] STREAM: Go version: go1.14.4
[1] 2020/09/11 15:51:26.922681 [INF] STREAM: Git commit: [026e3a6]
[1] 2020/09/11 15:51:26.951206 [INF] STREAM: Recovering the state...
[1] 2020/09/11 15:51:26.953525 [INF] STREAM: Recovered 0 channel(s)
[1] 2020/09/11 15:51:26.961610 [INF] STREAM: Shutting down.
[1] 2020/09/11 15:51:26.962248 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "stan"

stan:
  replicas: 3

  nats:
    url: nats://nats.nats:4222

  store:
    ...:

  cluster:
    enabled: true

  sql:
    ...:

@wallyqs
Member

wallyqs commented Sep 11, 2020

Hi @sergeyshaykhullin, I think this is an issue for the Helm charts? By the way, I think the error is that it's missing the ft definition: https://github.com/nats-io/k8s/tree/master/helm/charts/stan#fault-tolerance-mode

  ft:
    group: "stan"
