
Third party package updates for upcoming v1.13 release #2825

Merged 32 commits into bottlerocket-os:develop from 3pup on Mar 1, 2023

Conversation

etungsten
Contributor

@etungsten etungsten commented Feb 23, 2023

Issue number:
Resolves #2768, #2824

Description of changes:

packages: update glibc to 2.37
packages: update hotdog to 1.0.6
packages: update tcpdump to 4.99.3
packages: update systemd to 250.11
packages: update readline to 8.2
packages: update policycoreutils to 3.5
packages: update open-vm-tools to 12.1.5
packages: update nvidia-k8s-device-plugin to 0.13.0
packages: update libpcre to 10.42
packages: update nvidia-container-toolkit to 1.12.0
packages: update zstd to 1.5.4
packages: update libxcrypt to 4.4.33
packages: update libsepol to 3.5
packages: update libsemanage to 3.5
packages: update libselinux to 3.5
packages: update libnvidia-container to 1.12.0
packages: update liblzma to 5.4.1
packages: update libglib to 2.75.3
packages: update libdbus to 1.15.4
packages: update libcap to 2.67
packages: update libaudit to 3.1
packages: update kexec-tools to 2.0.26
packages: update iputils to 20221126
packages: update iptables to 1.8.9
packages: update ecs-agent to 1.68.2
packages: update ecr-credential-provider to 1.25.3
packages: update e2fsprogs to 1.47.0
packages: update dbus-broker to 33
packages: update containerd to 1.6.19
packages: update ca-certificates to 2023-01-10
packages: update aws-signing-helper to 1.0.4
packages: update aws-iam-authenticator to 0.6.2

Regenerated several patches that no longer cleanly apply.

Updates all third party packages except for the following:

Testing done:

  • aws-k8s-1.24 x86_64 joins cluster and runs workloads
$ kubectl get nodes -o wide --no-headers
ip-192-168-31-20.us-west-2.compute.internal   Ready   <none>   6m21s   v1.24.10-eks-08ad9cc   192.168.31.20   34.209.71.53   Bottlerocket OS 1.13.0 (aws-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket
  • aws-k8s-1.24-nvidia aarch64 can run sample K8s GPU workloads
  • aws-ecs-1 can run a simple ECS nginx task
  • aws-ecs-1-nvidia can run simple GPU workloads
  • vmware-k8s, created cluster with eks-a with OVA built from changes and can run workloads fine
$ kubectl --kubeconfig br-eksa-124-eks-a-cluster.kubeconfig get nodes -o wide --no-headers
198.19.131.115   Ready   control-plane   7m33s   v1.24.10-eks-08ad9cc   198.19.131.115   198.19.131.115   Bottlerocket OS 1.13.0 (vmware-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket
198.19.32.220    Ready   control-plane   9m19s   v1.24.10-eks-08ad9cc   198.19.32.220    198.19.32.220    Bottlerocket OS 1.13.0 (vmware-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket
198.19.64.34     Ready   <none>          8m2s    v1.24.10-eks-08ad9cc   198.19.64.34     198.19.64.34     Bottlerocket OS 1.13.0 (vmware-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket
198.19.65.185    Ready   <none>          8m1s    v1.24.10-eks-08ad9cc   198.19.65.185    198.19.65.185    Bottlerocket OS 1.13.0 (vmware-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket
  • metal-k8s, created cluster with eks-a with image built from changes and can run workloads fine
$ kubectl --kubeconfig ./br-124-pup-eks-a-cluster.kubeconfig get nodes -o wide --no-headers
10.80.50.21   Ready   control-plane   16m     v1.24.10-eks-08ad9cc   10.80.50.21   <none>   Bottlerocket OS 1.13.0 (metal-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket
10.80.50.23   Ready   <none>          2m36s   v1.24.10-eks-08ad9cc   10.80.50.23   <none>   Bottlerocket OS 1.13.0 (metal-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket
10.80.50.25   Ready   <none>          6m52s   v1.24.10-eks-08ad9cc   10.80.50.25   <none>   Bottlerocket OS 1.13.0 (metal-k8s-1.24)   5.15.90   containerd://1.6.18+bottlerocket

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@etungsten
Contributor Author

Push above

@etungsten
Contributor Author

The push above reintroduces the tcpdump update to 4.99.3 and instead drops the libpcap update to 1.10.3, which was the real culprit.

Contributor

@stmcginnis stmcginnis left a comment


Ran a test on aws-ecs-1 arm64 variant with #2828 cherry-picked on top. Able to run tasks on an ECS cluster without error.

Contributor

@jpmcb jpmcb left a comment


Hotdog package looks good 👍🏼

@etungsten
Contributor Author

etungsten commented Feb 24, 2023

Push above updates the SHA-512 checksum for https://github.com/NVIDIA/k8s-device-plugin/archive/v0.13.0/v0.13.0.tar.gz.

@markusboehme caught a mismatch with the k8s-device-plugin 0.13.0 checksum between the cached archive and the upstream archive.

The contents didn't materially change except for a single abbreviated commit SHA in one of the source files being one character longer:

diff --recursive hash-mismatch/cache/k8s-device-plugin-0.13.0/vendor/k8s.io/client-go/pkg/version/base.go hash-mismatch/self/k8s-device-plugin-0.13.0/vendor/k8s.io/client-go/pkg/version/base.go
58c58
< 	gitVersion   string = "v0.0.0-master+1f8a4853"
---
> 	gitVersion   string = "v0.0.0-master+1f8a48537"

We currently can't explain what happened to the upstream archive between yesterday and today to have caused a change like this...

We need to consider moving away from GitHub's archive URLs if they cannot provide static checksums.

edit: We think it probably has to do with k8s-device-plugin gaining more commits, which caused the abbreviated SHA to grow by one character and kicked off some go-vendor GitHub hook that regenerated the archive for the same commit/tag.
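A mismatch like this can be caught before it reaches a build by pinning the digest at packaging time and re-verifying on every download. A minimal sketch with `sha512sum` (the archive name and contents below are placeholders, not the real values):

```shell
# Create a stand-in for the downloaded archive (placeholder contents).
echo "example archive contents" > v0.13.0.tar.gz

# Record the pinned digest at packaging time.
sha512sum v0.13.0.tar.gz > pinned.sha512

# Later (or in CI), verify the re-downloaded archive against the pin.
# A regenerated upstream archive fails this check even if the tag is unchanged.
if sha512sum -c --quiet pinned.sha512; then
    echo "checksum OK"
else
    echo "checksum mismatch: archive changed upstream" >&2
fi
```

The same check fails loudly when upstream regenerates the archive, which is exactly the failure mode observed here.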

@markusboehme
Member

We need to consider moving away from GitHub's archive URLs if they cannot provide static checksums.

Separate issue to track this: #2831

@arnaldo2792
Contributor

@markusboehme do you think we should test cgroups v2 in NVIDIA variants? Systemd and the NVIDIA tools were updated as part of this PR.

@etungsten
Contributor Author

containerd 1.6.19 just came out: https://github.com/containerd/containerd/releases/tag/v1.6.19

@etungsten
Contributor Author

Push above bumps containerd to 1.6.19

@markusboehme
Member

@markusboehme do you think we should test cgroups v2 in NVIDIA variants? Systemd and the NVIDIA tools were updated as part of this PR.

Sorry I had missed this comment. Yes, I tested those, still works with cgroup v1 and cgroup v2.
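Since the updates were exercised under both hierarchies, a quick generic Linux check for which cgroup hierarchy a node booted with may be useful (this is not Bottlerocket-specific; it only assumes the standard `/sys/fs/cgroup` mount point):

```shell
# Detect the cgroup hierarchy from the filesystem type of /sys/fs/cgroup.
# cgroup2fs => unified hierarchy (v2); tmpfs => legacy hierarchy (v1).
# Falls back gracefully where the path is unavailable (e.g. minimal containers).
fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo "unavailable")
case "$fstype" in
    cgroup2fs) echo "cgroup v2 (unified hierarchy)" ;;
    tmpfs)     echo "cgroup v1 (legacy hierarchy)" ;;
    *)         echo "cgroup detection inconclusive: $fstype" ;;
esac
```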

Drops 1001-xshared-Fix-build-for-Werror-format-security.patch since the changes are included in 1.8.9.

Adds a new 1001-extensions-NAT-Fix-for-Werror-format-security.patch for the same type of problem.

Removed 1000-skip-ipv6-test.patch since it's no longer needed as of iputils/iputils@7e12805.

iputils removed ninfod, rarpd, and rdisc from the list of tools being built. Switch to release artifacts.

No longer need `autogen.sh` to generate the configure script. Switch to release artifacts.

Regenerates readline-8.1-shlib.patch as readline-8.2-shlib.patch as it no longer cleanly applies.

Regenerated 9010-sysusers-set-root-shell-to-sbin-nologin.patch.
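Several of the commits above regenerate patches that no longer cleanly apply. A minimal sketch of that workflow, using placeholder trees and file names rather than the actual Bottlerocket sources: keep a pristine copy of the new release alongside a hand-edited copy, then re-emit the diff.

```shell
# Set up a pristine tree and a hand-patched tree (placeholder contents).
mkdir -p readline-8.2.orig readline-8.2
echo "original line" > readline-8.2.orig/shlib.c
echo "patched line"  > readline-8.2/shlib.c

# Re-emit the patch from the edited tree. diff exits 1 when files differ,
# so tolerate that exit status. The fresh patch replaces the stale 8.1 one.
diff -ruN readline-8.2.orig readline-8.2 > readline-8.2-shlib.patch || true
grep -q '^+patched line' readline-8.2-shlib.patch && echo "patch regenerated"
```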
@etungsten
Contributor Author

etungsten commented Mar 1, 2023

The push above addresses @bcressey's comments, with the exception of the dbus-broker change, since we still need the patch.

Tested aws-k8s-1.24 and the node joins the cluster and runs workloads.

@etungsten etungsten requested a review from bcressey March 1, 2023 22:03
@etungsten etungsten merged commit bb73a4c into bottlerocket-os:develop Mar 1, 2023
@etungsten etungsten deleted the 3pup branch March 1, 2023 23:34
Successfully merging this pull request may close these issues.

v1.13.0 update third party packages
7 participants