K3S returning 504 to some pods and not others
Environmental Info:
K3s Version: v1.25.7+k3s1
Node(s) CPU architecture, OS, and Version:
Linux us-mky01 5.14.0-162.6.1.el9_1.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 18 02:06:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Linux us-mky02 5.14.0-162.6.1.el9_1.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 18 02:06:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration: I have one cluster with one external node. Netmaker runs on both servers, but the netmaker instance itself is hosted on a separate machine.
Describe the bug:
When I start up my k3s cluster, things work normally all day, but the next day I begin getting gateway timeout errors. Sometimes only requests to pods on the external node time out, while the node on the cluster server returns a broken webpage; other times every request returns a `504 Gateway Timeout` error. I can work around the bug for the day by restarting the k3s service (`systemctl restart k3s`).

Steps To Reproduce:

1. Installed K3s: `curl -sfL https://get.k3s.io | sh -`
2. Added a K3s external node using the netmaker IP schema
Cluster (Redacted public ip address):
![image](https://user-images.githubusercontent.com/30759238/226924909-a0ba981a-9ea0-4037-b4d4-d8e22a31b11c.png)
External Node:
![image](https://user-images.githubusercontent.com/30759238/226923550-00cf5193-a3c5-4f6d-aea4-7a5edc5e3765.png)
Ingress config (the same for all services, with a different service name):
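The config itself was attached as a screenshot; as a rough sketch, an Ingress of this shape on a stock k3s (Traefik) install would look something like the following, where the host and service name are placeholders, not values from my setup:

```yaml
# Placeholder sketch -- host and service name are illustrative, not the real ones.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```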
API deployment YAML:
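The deployment YAML was attached as a screenshot; in rough outline it follows the usual Deployment-plus-Service shape, sketched below with placeholder names, image, and ports (the website deployment is identical apart from the service name):

```yaml
# Placeholder sketch -- names, image, and ports are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example.registry/api:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```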
Website deployment YAML (uses the same ingress as the API, with a different service name):
Expected behavior:
I expected the k3s ingress to resolve and load-balance requests correctly.
Actual behavior:
It works as expected for a period of time, then it simply stops working correctly; restarting the k3s cluster service once fixes it again.
Additional context / logs:
My first theory was DNS, because I was getting a warning that I had too many DNS servers, but after resolving that issue the problem happened again. I am not sure which logs to pull because I do not know what the root of the issue is. The web server logs look as expected, with no real problems.
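For what it's worth, here is where I would pull logs from on a default (systemd) k3s install; the Traefik label selector is an assumption based on recent Traefik charts and may differ on older k3s releases. The `command -v` guards just let the sketch run as a no-op on a machine without k3s or kubectl.

```shell
# Log collection sketch for a stock k3s install (assumptions noted above).
if command -v journalctl >/dev/null 2>&1; then
  # k3s itself -- apiserver, scheduler, and kubelet all log to this one unit:
  journalctl -u k3s --since today --no-pager | tail -n 100
fi

if command -v kubectl >/dev/null 2>&1; then
  # The ingress controller, which is what actually returns the 504:
  kubectl -n kube-system logs -l app.kubernetes.io/name=traefik --tail=200
  # Whether the Services still have healthy endpoints when the 504s start:
  kubectl get endpoints -A
fi

msg="log collection sketch finished"
echo "$msg"
```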