Sporadic "Kernel error" when a lot of pods are scheduled #1521
Hmm... So this shouldn't happen, because we have our own mutex surrounding the ipset logic, and all the functions that you posted above have the correct locking logic. However, I can't argue with the logs. Can you please run kube-router with
and the ordering of those from the various controllers surrounding the errors.
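For readers not familiar with the code, here is a minimal sketch (hypothetical names and types, not kube-router's actual implementation) of the locking pattern described above: every controller that wants to touch ipset state goes through one shared mutex, so concurrent syncs serialise instead of racing each other.

```go
// Minimal sketch of "one mutex around all ipset logic". Names are invented.
package main

import (
	"fmt"
	"sync"
)

// ipsetHandler stands in for a shared ipset wrapper used by all controllers.
type ipsetHandler struct {
	mu   sync.Mutex
	sets map[string][]string // set name -> entries
}

func newIPSetHandler() *ipsetHandler {
	return &ipsetHandler{sets: make(map[string][]string)}
}

// Refresh replaces the entries of a set. Every caller (services controller,
// network policy controller, ...) takes the same lock before touching state.
func (h *ipsetHandler) Refresh(name string, entries []string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.sets[name] = entries
}

func main() {
	h := newIPSetHandler()
	var wg sync.WaitGroup
	// Simulate several controllers syncing concurrently.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			h.Refresh(fmt.Sprintf("set-%d", i%2), []string{"10.0.0.1", "10.0.0.2"})
		}(i)
	}
	wg.Wait()
	fmt.Println("sets managed:", len(h.sets))
}
```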
I have found out that I cannot reproduce it on any of my development clusters, only in prod. Hence I will do that next time a security bulletin for the kernel is issued (and a new Canonical kernel is released): it usually happens at least every 2 weeks.
Here is the entire log, which includes events slightly before and slightly after the incident when a pod could not connect to the service IP. Timestamps when the pod tried and failed to connect: And the entire (mildly anonymised, but no lines removed) log: https://www.dropbox.com/scl/fi/ima6kknivqs17nj075t1d/kube-router-v1.log?rlkey=aigkfmrl08n35ksik3nddw2nz&dl=0 The log is attached via Dropbox because GitHub does not accept comments longer than 64 kB.
Hmm... So I don't see anything in the logs. It seems that the error that you initially opened the issue for doesn't appear in the logs at all. There's only 1 error, and it's a pretty common one, where an ipset fails to delete because a reference hasn't cleared the kernel yet; that got corrected less than 2 seconds later in the next sync. In terms of not having reachability to a pod for the time window that you mentioned, I don't see anything in the logs that would cause that. The failed ipset does get cleared right around there, but I think that is a red herring, as there are numerous successful syncs without any errors between the time period that it broke and that error. You do have a pretty high amount of churn on your host, but all of the controllers seem to be syncing fairly quickly. In terms of the service IP, that IP doesn't show up in the logs at all, because most things are logged by service name, so you might be able to diagnose that side more than I would be able to.
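To illustrate the failure mode described above, here is a small, purely illustrative Go sketch (invented names, not the real kube-router code): a set that the kernel still holds a reference to cannot be destroyed yet, so the cleanup is deferred to the next periodic sync, where it normally succeeds once the referencing rule is gone.

```go
// Illustrative only: a failed ipset destroy is deferred to the next sync.
package main

import (
	"errors"
	"fmt"
)

var errInUse = errors.New("set is in use by a kernel component")

// destroySet stands in for the real ipset destroy call; it fails while
// refs > 0, mimicking an iptables rule that still references the set.
func destroySet(name string, refs int) error {
	if refs > 0 {
		return fmt.Errorf("destroy %s: %w", name, errInUse)
	}
	return nil
}

// cleanupStale tries to destroy each stale set and returns the ones that must
// wait for the next sync instead of treating the failure as fatal.
func cleanupStale(staleRefs map[string]int) (deferred []string) {
	for name, refs := range staleRefs {
		if err := destroySet(name, refs); errors.Is(err, errInUse) {
			// The kernel still references the set; leave it and retry later.
			deferred = append(deferred, name)
		}
	}
	return deferred
}

func main() {
	// First sync: the set is still referenced, so the destroy is deferred.
	fmt.Println("deferred:", cleanupStale(map[string]int{"old-svc-set": 1}))
	// Next sync, a couple of seconds later: the reference is gone, destroy succeeds.
	fmt.Println("deferred:", cleanupStale(map[string]int{"old-svc-set": 0}))
}
```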
It shows:
At this point in time it's removed. And apparently this is when it's added back:
And in between there is no IPVS service available. And yes, sorry for not mentioning what
And a tiny update on my previous statement: this timestamp is when the application started. It definitely tried to connect some time later than that. As the application does not have verbose logging on when exactly it connected to the database during initialisation, I can only tell that it happened between:
@zerkms - Sorry for missing the service IP in the logs, I must have made a typo or something. From what I can see, without knowing more about this specific service that you're deploying, it looks like Kubernetes likely told us that the pod was no longer ready or healthy or deployed or some such, and so we withdrew it from the service. Later on it came back so we put it back. So as far as I can see, again without knowing more, it looks like kube-router did what it was supposed to do. However, I think this error is a bit off topic from the original issue reported. The first one was about kube-router encountering a kernel error where it wasn't able to update IPVS. This one is about something different. I'd recommend that we keep this thread about the kernel error (of which I can't find any evidence in the log you provided). If you want to pursue this other error, we should probably open another issue with more information about how
That's how I read it too. BUT!! There are 100% healthy pods available there (and they happily serve during the same time frames). And it's not a single service - as you can see in that log, it's a large batch of them removed. And those services don't belong to the same (or similar) applications - they are just random services from the entire cluster.
Agree. Should we close this (as I don't have any more details for the original one) and create a new one?
As I mentioned above, what they have in common is that they are pods from the same node:
If I needed to take a guess - to me it looks
Btw, is it suspicious that the coredns IP/port appears there twice:
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. |
I think it's not stale, but I will bring more logs with the extra verbose flag next week, on the next kernel upgrade cycle.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. |
Okay, I forgot about it, sorry :-D Nonetheless, within the next couple of weeks, on the next upgrade cycle, I will provide more logs and will stop bumping the report.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. |
Ok, I need some more time.
Okay, it looks like I cannot reproduce it anymore on 1.27.x. It was easy to reproduce reliably (100% of the time) on 1.26.x though. Hence closing :-)
@aauren I am having the same issue. NetworkPolicies do not work for me. I run 1.26.4. In the logs I see
@vladimirtiukhtin can you open a new issue with all of the fields that the template asks for, and with as many other details as possible? Maybe some debug logs and reproduction instructions?
What happened?
When I drain a node and pods get rescheduled on a different machine, more often than not the node that receives an instant spike in pods to be scheduled gets this in the logs:
And when it happens, pods lose network connectivity (at least to service IPs).
I think the corresponding commands that update kernel resources should have their own, tighter retry queues - e.g. retry in a loop for up to 10 seconds if the error is "Resource busy".
On the other hand, I don't entirely understand what "Resource busy" actually means in this context. Is it a race condition around updating/removing kernel objects?
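A minimal sketch of the kind of bounded retry loop proposed above, assuming the kernel-level failure surfaces as EBUSY ("resource busy"); the operation, window, and backoff are placeholders, not kube-router's actual code.

```go
// Sketch: retry a kernel-touching update for a bounded window while it
// reports EBUSY, instead of failing the whole sync on the first attempt.
package main

import (
	"errors"
	"fmt"
	"syscall"
	"time"
)

// retryWhileBusy retries op for up to window, sleeping backoff between
// attempts, as long as the failure is EBUSY. Any other error returns at once.
func retryWhileBusy(op func() error, window, backoff time.Duration) error {
	deadline := time.Now().Add(window)
	for {
		err := op()
		if err == nil || !errors.Is(err, syscall.EBUSY) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("still busy after %s: %w", window, err)
		}
		time.Sleep(backoff)
	}
}

func main() {
	// Hypothetical operation standing in for an ipset/IPVS update that is
	// briefly blocked by another user of the same kernel object.
	attempts := 0
	op := func() error {
		attempts++
		if attempts < 3 {
			return syscall.EBUSY // simulate "resource busy" on early attempts
		}
		return nil
	}
	if err := retryWhileBusy(op, 10*time.Second, 250*time.Millisecond); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	fmt.Printf("update succeeded after %d attempts\n", attempts)
}
```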
What did you expect to happen?
I think it should not happen at all.
How can we reproduce the behavior you experienced?
Steps to reproduce the behavior:
**Screenshots / Architecture Diagrams / Network Topologies**
If applicable, add those here to help explain your problem.
**System Information (please complete the following information):**

- `kube-router --version`: 1.5.4
- `kubectl version`: 1.26.4

**Logs, other output, metrics**
Please provide logs, other kinds of output, or observed metrics here.
Additional context
Add any other context about the problem here.