What happened?
On AWS (EKS with an S3 bucket), the Cassandra StatefulSet sometimes fails to come up after a restore.
[screenshot omitted]
On a K8ssandra cluster with one datacenter of 3 nodes, 2 of the pods never pass the readiness probe. The failed pods show the following error in their logs:
```
java.lang.RuntimeException: A node with address /172.0.238.69:7000 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
	at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:749)
	at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:1024)
	at org.apache.cassandra.service.StorageService.initServer(StorageService.java:874)
	at org.apache.cassandra.service.StorageService.initServer(StorageService.java:819)
	at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:418)
	at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:759)
	at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:893)
```
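For context, the escape hatch the error mentions (cassandra.replace_address / cassandra.replace_address_first_boot) is a standard Cassandra startup flag. A minimal, hypothetical sketch of how it could be passed through the K8ssandraCluster spec is below; the config.jvmOptions.additionalOptions field path is an assumption to verify against your operator version, and replacing the node may not be the right fix when the restored data is supposed to be reused as-is.

```yaml
# Hypothetical sketch only: the config/jvmOptions/additionalOptions field path is an
# assumption and may differ between k8ssandra-operator versions.
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    config:
      jvmOptions:
        additionalOptions:
          # Standard Cassandra flag referenced by the error message; the address must
          # match the node being replaced, and the flag should be removed once it rejoins.
          - "-Dcassandra.replace_address_first_boot=172.0.238.69"
```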
Did you expect to see something different?
How to reproduce it (as minimally and precisely as possible):
Environment
K8ssandra Operator version:
Insert image tag or Git SHA here
Kubernetes version information:
kubectl version
Kubernetes cluster kind:
insert how you created your cluster: kops, bootkube, etc.
Manifests:
insert manifests relevant to the issue
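No manifests were attached. For reference only, a minimal sketch of the kind of setup described (one datacenter of three nodes, Medusa backed by an S3 bucket, followed by a restore) might look like the following; the cluster name, bucket, backup name, and exact field paths are placeholders and assumptions, to be checked against the k8ssandra-operator reference docs.

```yaml
# Hypothetical sketch only: names, bucket, and version are placeholders; verify
# field paths against your k8ssandra-operator version.
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.7"
    datacenters:
      - metadata:
          name: dc1
        size: 3                      # one DC of three nodes, as in the report
  medusa:
    storageProperties:
      storageProvider: s3
      bucketName: my-backup-bucket   # placeholder S3 bucket
      storageSecretRef:
        name: medusa-bucket-key      # placeholder secret holding the bucket credentials
---
# Restore of a previously taken backup; the job name and backup reference are placeholders.
apiVersion: medusa.k8ssandra.io/v1alpha1
kind: MedusaRestoreJob
metadata:
  name: restore-demo
spec:
  cassandraDatacenter: dc1
  backup: demo-backup-1
```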
K8ssandra Operator Logs:
insert K8ssandra Operator logs relevant to the issue here
Anything else we need to know?: