VLAN interfaces do not get offload enabled #6629
Comments
Please see if this fixes it.
@ssahani Debian patches: http://sources.debian.net/patches/systemd/232-25%2Bdeb9u1/ It seems that the issue was fixed for everything except for VLAN links.
@ssahani this appears to be something for you? Any idea?
I can't reproduce this with my setup. I created the VLAN with ip link, with networkd v230, and with the latest networkd; all results are the same. ip link
networkd (latest git)
networkd 230
@ssahani can you try systemd v232 that comes with Debian Stretch?
Sorry, only the current version please.
Is there a corresponding Debian bug report?
Guys, I have a similar issue with RHEL 7 and Onload, which uses systemd 219.
Can someone at least tell me which version of systemd should be used?
I cannot follow this issue. Is this really caused by systemd, rather than the kernel? Why do you think so?
I replaced the workaround in our fleet with a check and a failure if this is detected. We rebooted thousands of machines and there were no issues. Whatever the issue was back in the day, we don't see it today, so let's just close it.
Submission type
systemd version the issue has been seen with
Used distribution
Debian Stretch
In case of bug report: Expected behaviour you didn't see
Offload features are enabled for VLAN interfaces.
In case of bug report: Unexpected behaviour you saw
Offload features are not enabled for VLAN interfaces.
In case of bug report: Steps to reproduce the problem
Recently we upgraded some machines from Debian Jessie to Debian Stretch and noticed that system CPU time jumped by quite a lot, even though we use exactly the same kernel and have exactly the same workloads. In some cases machines were so overwhelmed that they hit stalled page allocations for over a minute, along with everything else that comes with that.
These machines are stateless, so we were able to roll back and see performance improvements.
Tracing revealed that system CPU time was being wasted in timers started from the TCP subsystem. In the end we were able to pinpoint the difference to the enabled features of the VLAN interface that was getting all the traffic.
We have a VLAN interface on top of a bonded interface; all interfaces are configured with systemd-networkd, exactly the same way on both distributions. The bond is correctly configured, while the VLAN is not. Jessie with systemd v230 from backports:
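For context, a VLAN-on-bond setup of the kind described above is typically expressed in systemd-networkd with a .netdev unit defining the VLAN and a VLAN= line in the bond's .network unit. A minimal sketch follows; the file paths, the bond name, and the VLAN ID are assumptions for illustration, not taken from this report:

```ini
; /etc/systemd/network/vlan10.netdev -- defines the VLAN device (ID assumed)
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10
```

```ini
; /etc/systemd/network/bond0.network -- attaches the VLAN to the bond
[Match]
Name=bond0

[Network]
VLAN=vlan10
```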
This is a little bit hard to grasp, so here's the diff:
bond0:
vlan10:
cc @ssahani
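The feature diff above comes from comparing `ethtool -k` output for the two interfaces. A small sketch of that comparison, for anyone who wants to check their own machines; the feature lists below are hypothetical sample data, not the actual output from this report:

```python
# Sketch: diff the offload feature flags of two interfaces from saved
# `ethtool -k` output. Sample data below is hypothetical.

def parse_features(text):
    """Parse `ethtool -k` style "feature: on/off [fixed]" lines into a dict."""
    features = {}
    for line in text.strip().splitlines():
        name, _, state = line.partition(":")
        state = state.strip()
        if not state:
            continue  # skip header lines like "Features for bond0:"
        features[name.strip()] = state.split()[0]  # "on"/"off", ignore "[fixed]"
    return features

BOND0 = """
tcp-segmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
"""

VLAN10 = """
tcp-segmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: on
"""

def diff_features(a, b):
    """Return {feature: (state_a, state_b)} for features that differ."""
    fa, fb = parse_features(a), parse_features(b)
    return {k: (fa[k], fb[k])
            for k in sorted(fa.keys() & fb.keys())
            if fa[k] != fb[k]}

if __name__ == "__main__":
    for name, (bond, vlan) in diff_features(BOND0, VLAN10).items():
        print(f"{name}: bond0={bond} vlan10={vlan}")
```

On a live machine the same comparison can be done directly with `diff <(ethtool -k bond0) <(ethtool -k vlan10)`.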