
The containerd runtime breaks execContainer hooks to install software on host/node machine. #11726

Closed
henryj opened this issue Jun 9, 2021 · 1 comment · Fixed by #11852
Labels: kind/bug (Categorizes issue or PR as related to a bug.)


henryj commented Jun 9, 2021

/kind bug

1. What kops version are you running? The command kops version will display this information.

Version 1.20.1 (git-5a27dad40a703f646433595a2a40cf94a0c43cd5)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:32:49Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
The containerd runtime seems to break execContainer hooks. If I change back to containerRuntime: docker, it works. See the cluster spec below.

5. What happened after the commands executed?
Nodes are up and running after the rolling update, but the hooks have no effect: they do not install the software I'd like them to install on the host machine.

6. What did you expect to happen?
I expected the apt-get commands in the hooks to install the software.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

  hooks:
  - execContainer:
      command:
      - sh
      - -c
      - chroot /rootfs apt install -y software-properties-common && chroot /rootfs add-apt-repository -y ppa:gluster/glusterfs-7
        && chroot /rootfs apt-get update && chroot /rootfs apt-get install -y glusterfs-client
      image: busybox

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

Sorry, I'm not sure where to see the hook execution logs.

9. Anything else we need to know?
After the nodes are up, if I SSH into the host machine with the containerd runtime, the glusterfs commands are not there and not installed. With the docker runtime, everything works as before in 1.19.x.

Maybe there's a new way to install things on the host after the containerd change? The docs still seem to say to use exec hooks as before (https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#hooks).

Thanks

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 9, 2021
@hakman hakman self-assigned this Jun 9, 2021
hakman (Member) commented Jun 9, 2021

Hi @henryj. This is indeed missing for containerd.
You may try this as an alternative:
https://kops.sigs.k8s.io/instance_groups/#additionaluserdata
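As a sketch of that alternative (the script contents below are adapted from the hook in this issue, not taken from the kops docs), an InstanceGroup with additionalUserData might look like:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  additionalUserData:
  - name: install-glusterfs.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      # Runs via cloud-init on the node itself, so no chroot /rootfs is needed.
      apt-get install -y software-properties-common
      add-apt-repository -y ppa:gluster/glusterfs-7
      apt-get update
      apt-get install -y glusterfs-client
```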

For the record, at the moment, hooks only work with Docker:

// are we a raw unit file or a docker exec?
switch hook.ExecContainer {
case nil:
	unit.SetSection("Service", hook.Manifest)
default:
	if err := h.buildDockerService(unit, hook); err != nil {
		return nil, err
	}
}
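For reference, the nil branch above is taken by hooks that supply a raw systemd unit via the manifest field instead of execContainer, and those are rendered directly as a unit file regardless of container runtime. A hedged sketch, with illustrative unit content:

```yaml
  hooks:
  - name: install-glusterfs.service
    before:
    - kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/usr/bin/apt-get install -y glusterfs-client
```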
