docker: update cilium-{runtime,builder} images #11734
Conversation
Coverage decreased (-0.007%) to 36.922% when pulling ab93f0d595dea3dbf183ef7a06baf40287b419bf on pr/cilium-images-update into 87f0905 on master.
If I understand correctly, this is expected to reduce BPF complexity a bit? Would be nice to have some report of complexity in GitHub, as we have for test coverage.
Also adds the ss tool to the runtime image.
Fixes #11648, right?
It adds it to the runtime image, but @soumynathan is working on the actual bugtool change, along with removing all the other non-working cmd invocations.
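For context, a few illustrative ss invocations of the sort bugtool might collect; the exact commands are part of the pending bugtool change, so treat these as assumptions rather than what will actually ship:

```sh
# Illustrative only: socket diagnostics bugtool could gather via ss.
ss -tpna            # all TCP sockets with owning processes (e.g. the L7 proxy)
ss -ulpn            # listening UDP sockets
ss -s               # per-protocol summary statistics
# ss also supports state/port filters; the port here is hypothetical:
ss -tna state established '( sport = :4240 )'
```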
test-me-please
Yes, it's expected to slightly reduce complexity as well. Agreed, some sort of GH Actions complexity report would be really useful to find bad offenders which significantly increase verifier complexity. Maybe there could be one action per tested kernel under a number of pre-set test configs for the BPF datapath. We have this BUILD_PERMUTATION right now in the Makefile, but perhaps we can come up with something better which would then rework the former and be usable for the GH actions.
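As a rough illustration of what a per-kernel complexity report could do, here is a sketch that loads BPF objects with tc and extracts the verifier's "processed N insns" summary line. The object and section names are made up for illustration, not Cilium's actual build targets:

```sh
#!/bin/bash
# Sketch only: per-object verifier complexity report. Object and
# section names below are hypothetical, not Cilium's real targets.
DEV=cplx0
ip link add "$DEV" type dummy 2>/dev/null || true
tc qdisc replace dev "$DEV" clsact

for obj in bpf_example1.o bpf_example2.o; do
  # 'verbose' makes tc print the kernel verifier log; its final
  # "processed N insns" line is the usual complexity proxy.
  insns=$(tc filter replace dev "$DEV" ingress bpf da obj "$obj" \
            sec from-container verbose 2>&1 \
          | sed -n 's/.*processed \([0-9]*\) insns.*/\1/p' | tail -n1)
  echo "$obj: ${insns:-load failed} insns"
done
```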
Force-pushed from 38b15f1 to ab93f0d.
(Noticed that I need to add a copy of ss to …)
test-me-please
retest-4.19
quay issue on 4.19: [CI log snippet omitted]
retest-4.19
On the 4.19 CI from quay: [CI log snippet omitted]
retest-4.19
retest-net-next
From K8s-1.11-Kernel-netnext, is this expected? [CI log snippet omitted]
Heh, no. Looks like https://github.com/cilium/image-tools/blob/master/images/bpftool/checkout-linux.sh is using too old a revision here.
@borkmann ah, looks like your commit went in (cilium/image-tools@7877cb0) while my PR (cilium/image-tools#3) was in flight... apologies for that...
cilium/image-tools#22 is building right now. Will update the rest once done.
Pull in updated LLVM into cilium-{runtime,builder} images. On top of the
base 10.0.0, they include the cherry-picked commits from John's work:

- llvm/llvm-project@29bc5dd
- llvm/llvm-project@13f6c81

Given we build https://github.com/cilium/image-tools via GH actions, move
all the tools from there to the docker.io auto-built repos. Also add the
ss tool to the runtime image for diagnostics via bugtool (e.g. for the
L7 proxy).

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Force-pushed from ab93f0d to 07e80dc.
*whispers* This only outputs complexity numbers for the binaries currently compiled in the …

As a side note, it'd be nice if the kernel actually presented complexity in a more structured manner, so we don't have to give it a log and then parse the arbitrary log output to figure these numbers out.

What I don't understand is how GitHub Actions can help present these over time. I had imagined previously that we could improve this script to output a text file during CI runs and use that with either Ginkgo …

Also worth evaluating is whether it makes sense for this to actually be per-PR, or automatic for specific PRs (all PRs that change …).
I think Daniel had this script in mind; at least I did. The issue is that it only tests one specific datapath configuration right now, the one that achieves the maximum complexity on 4.9. We would need to test different configurations for different kernels. We already have #10798 open for that, though.
I think it's just that the integration with GitHub Actions is much easier. We could even annotate program sections with their complexity changes. GitHub Actions offers several Ubuntu images, so that may be one way to get some of the kernel versions we need. We would probably need some Jenkins test to get all the kernels we need, though :-/
One more thing that'd be super easy with GitHub Actions.
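Such a per-PR annotation could be as simple as diffing two complexity reports, base vs. PR. The file names and the "obj: N insns" line format below are assumptions carried over from the report sketch above:

```sh
# Sketch: compare reports with lines like "bpf_example1.o: 1234 insns".
# base.txt comes from the target branch build, pr.txt from the PR build.
join <(sort base.txt) <(sort pr.txt) | awk '
  { delta = $4 - $2
    if (delta != 0) printf "%s %d -> %d insns (%+d)\n", $1, $2, $4, delta }'
```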
On the complexity measurement topic, see #4837, now reopened.
retest-net-next
test-me-please
Yeah, I think by now we have an explosion of ifdefs in our BPF datapath, so it is hard to say which config exactly causes the highest complexity. It would be nice to categorise which config items are available for the three kernels we test and then, maybe, generate a bunch of random combinations (say ~250) to compile for them (as a GH action), trying to load each and taking the diff of complexity. The seed could be fixed so it's reproducible.
That would also be cool!
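A minimal sketch of the seeded random-combination idea above; the macro names are placeholders, not Cilium's actual datapath config options:

```sh
#!/bin/bash
# Sketch only: reproducible random ifdef combinations.
FLAGS=(ENABLE_FEATURE_A ENABLE_FEATURE_B ENABLE_FEATURE_C ENABLE_FEATURE_D)
RANDOM=42   # assigning to RANDOM seeds bash's RNG, so runs are reproducible

for i in $(seq 1 250); do
  defs=""
  for f in "${FLAGS[@]}"; do
    (( RANDOM % 2 )) && defs+=" -D${f}=1"
  done
  echo "combo $i:$defs"
  # hypothetical compile-and-load step:
  # clang -O2 -target bpf $defs -c bpf_prog.c -o "out/$i.o"
done
```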
retest-runtime
Pull in updated LLVM into cilium-{runtime,builder} images. On top of the
base 10.0.0, they include the cherry-picked commits from John's work:

- llvm/llvm-project@29bc5dd
- llvm/llvm-project@13f6c81

Given we build https://github.com/cilium/image-tools via GH actions, move
all the tools from there to the docker.io auto-built repos.

Also adds the ss tool to the runtime image.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>