websocket error #15262
@lly835 The error code is 500. Please check what the error is in the Chrome Network tab and the console log tab.
I'm getting this error too. Here is what I get in the Network tab and in the console log: [screenshots omitted]. It seems to be node-related (vs. container-related), because for the same workloads on different nodes, requesting the shell works on some nodes and not others.

EDIT: The calico-node_canal container is caught in a restart loop on the nodes that are not working.
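One way to correlate the failing nodes with the restart loop described above is to filter the pod listing by restart count. The snippet below is a sketch: the `kubectl -n kube-system get pods -o wide` output is a hypothetical sample embedded inline so the filter can be demonstrated without a live cluster.

```shell
# Hypothetical sample of `kubectl -n kube-system get pods -o wide` output;
# embedded here so the filter can run without a live cluster.
sample='NAME          READY   STATUS             RESTARTS   AGE   NODE
canal-abc12   2/2     Running            0          3d    node-1
canal-def34   1/2     CrashLoopBackOff   57         3d    node-2'

# Print the nodes whose canal pod has restarted; on those nodes the CNI is
# unhealthy, so websocket features (shell/logs) are likely to fail.
bad_nodes=$(echo "$sample" | awk 'NR>1 && $4 > 0 {print $NF}')
echo "$bad_nodes"
```

Against a real cluster, pipe the live `kubectl` output through the same `awk` filter instead of the sample variable.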
Possibly:
Run
@leodotcloud I'm running into a similar issue: imported GKE clusters, where requests to tail logs or exec into containers result in a 500 error. The cluster is otherwise operational, and I don't see anything particularly telling in the Rancher logs.
Same as #13149
On further review I do notice, however, that the clusters which cannot be exec'd into have similar reports in the Rancher logs (repeated multiple times for different resource types):

The IP matches the Kubernetes service cluster IP.
Both of the affected clusters previously had Calico network policy support enabled, prior to it being disabled for non-RKE clusters.
Fixed in my case: the issue was a missing firewall rule in the VPC on GCP to allow SSH connections from the master to the nodes. It would be great to see more helpful logging from Rancher about the failure's origin, but this is not a Rancher error, at least in my case. https://stackoverflow.com/a/36378625
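For the GCP fix above, a firewall rule of roughly the following shape is what is usually missing. This is a sketch, not a definitive command for this cluster: the rule name, network name (`my-vpc`), and source range are placeholders, and the source range must be replaced with the cluster's actual master CIDR. GKE's exec/log path historically tunnels over SSH from master to nodes, so tcp:22 is the key port; tcp:10250 covers direct kubelet API access.

```shell
# Sketch (placeholder names): admit the control plane into the nodes so
# exec/logs/port-forward can reach them.
gcloud compute firewall-rules create allow-master-to-nodes \
  --network my-vpc \
  --allow tcp:22,tcp:10250 \
  --source-ranges 172.16.0.0/28   # replace with your cluster's master CIDR
```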
I have the same issue when I run a pod on one particular node. The node has a VPN client, and I can see two IP addresses for it in the Rancher UI. None of the pods running on this node can access Shell or Logs. On the node I ran ifconfig:
Have you found any way to solve it?
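When a node shows two addresses in the Rancher UI as described above, it is worth checking which interfaces carry them. The snippet below is a sketch using a hypothetical sample of `ip -4 -o addr show` output (interface and address values are invented) so the check can be demonstrated offline.

```shell
# Hypothetical `ip -4 -o addr show` output for a node with a VPN client:
# a LAN address plus a tunnel address that may also get registered.
sample='2: eth0    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0
3: tun0    inet 10.8.0.6/24 scope global tun0'

# List interface/address pairs. If a tun/tap address shows up, pin the
# reachable one (e.g. kubelet's --node-ip flag, or the node's internal
# address in Rancher) so the websocket proxy dials the right interface.
echo "$sample" | awk '{print $2, $4}'
```

On a real node, run `ip -4 -o addr show | awk '{print $2, $4}'` directly instead of using the sample.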