
Add support for Containerd container runtime #7986

Merged: 17 commits into kubernetes:master from the split-containerd branch on Dec 19, 2019

Conversation

@hakman (Member) commented on Nov 21, 2019

Docker and containerd are now separate packages, so one can run Kubernetes with Docker + containerd, or with containerd alone, with very little effort. At the same time, Docker can run with newer versions of containerd, giving a more stable cluster.

This change introduces two new user-facing ClusterSpec options:

  • containerRuntime - choose the container runtime
  • containerd - choose the version and tweak the config as needed

All new options have defaults, and those defaults match the behaviour before this change:

spec:
  containerRuntime: docker
  docker:
    skipInstall: false
    version: 19.03.4
  containerd:
    address: /run/containerd/containerd.sock
    configFile: |
      disabled_plugins = ["cri"]
    logLevel: warn
    root: /var/lib/containerd
    state: /run/containerd
    skipInstall: false
    version: 1.2.10

To use containerd as the container runtime, one would have to edit the cluster and change containerRuntime to containerd:

spec:
  containerRuntime: containerd

@k8s-ci-robot added the do-not-merge/work-in-progress and cncf-cla: yes labels on Nov 21, 2019
@k8s-ci-robot (Contributor):

Hi @hakman. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test and size/XXL labels on Nov 21, 2019
@mikesplain (Contributor):

/ok-to-test

@k8s-ci-robot added the ok-to-test label and removed the needs-ok-to-test label on Nov 21, 2019
@hakman force-pushed the split-containerd branch 2 times, most recently from 77d284c to ef23b9b on November 23, 2019 at 11:25
@k8s-ci-robot added the needs-rebase label on Nov 23, 2019
@k8s-ci-robot removed the needs-rebase label on Nov 23, 2019
@hakman force-pushed the split-containerd branch 4 times, most recently from eb97bb3 to 0aa01f8 on December 3, 2019 at 14:30
@hakman changed the title from "[WIP] Add support for Containerd container runtime" to "Add support for Containerd container runtime" on Dec 3, 2019
@k8s-ci-robot removed the do-not-merge/work-in-progress label on Dec 3, 2019
@hakman (Member, Author) commented on Dec 3, 2019

I think this is ready for a few more pairs of eyes. It works reliably in all my tests.
There may be some less obvious cases where Docker is used to load images or run tasks, but nothing that I can see with basic clusters.

/assign @granular-ryanbonham
/assign @justinsb

root:
description: Directory for persistent data (default "/var/lib/containerd")
type: string
skipInstall:
Reviewer (Member):

Naming nit... What about managed: false or external: true ? It's more than just install (I believe?); e.g. if we specify configFile + skipInstall, do we write the config file?

Author (hakman):

I was also confused in the beginning. The current implementation is to not touch it at all, so no config, no managed service, just expect it to be running. The problem is that this is the current behaviour for Docker:
https://github.com/kubernetes/kops/blob/release-1.17/nodeup/pkg/model/docker.go#L1079

We may have to use `managed: false` and `install: false` and deprecate `skipInstall: true` completely.
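
For reference, here is a rough sketch of the config structure implied by the cluster spec example in the description (field names and json tags are inferred from that YAML; the actual kops type may differ):

package kops

// ContainerdConfig, as suggested by the spec example above. Pointer fields
// mirror the "optional, with defaults" behaviour described in the PR text.
type ContainerdConfig struct {
	Address     *string `json:"address,omitempty"`     // e.g. /run/containerd/containerd.sock
	ConfigFile  *string `json:"configFile,omitempty"`  // raw TOML passed through to containerd
	LogLevel    *string `json:"logLevel,omitempty"`    // e.g. "warn"
	Root        *string `json:"root,omitempty"`        // e.g. /var/lib/containerd
	State       *string `json:"state,omitempty"`       // e.g. /run/containerd
	SkipInstall bool    `json:"skipInstall,omitempty"` // the field whose naming (managed/install) is under discussion
	Version     *string `json:"version,omitempty"`     // e.g. "1.2.10"
}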

package model

import (
// "encoding/json"
Reviewer (Member):

Nit: we should probably not have these in new code. But I imagine staticcheck or similar will pick it up!


// DefaultContainerdVersion is the (legacy) containerd version we use if one is not specified in the manifest.
// We don't change this with each version of kops, we expect newer versions of kops to populate the field.
const DefaultContainerdVersion = "1.2.10"
Reviewer (Member):

Can we just require this version? I think this was an accommodation for a mistake I made previously when doing docker, but if we don't have to carry it forward, that feels better to me...

const DefaultContainerdVersion = "1.2.10"

var containerdVersions = []packageVersion{
// 1.2.10 - Debian Stretch
Reviewer (Member):

I would love to try using the tar.gz packages in general for containerd, but I agree that this approach lets us have the best of both worlds :-)

Reasons for using the tar.gz:

  • In theory, supports more OSes.
  • Should be easier to package: it's simpler to specify just a URL to a tar.gz file (though the problems with architecture were pointed out previously!)
  • Easier to deal with if we have to hotfix... building a tar.gz is much easier than building a package.

But I agree we should start here!

Author (@hakman), Dec 15, 2019:

I had exactly the same thought. I would love to continue down this path, and do the same with Docker. I would also like to add, at some point, support for a custom-version tar.gz along the lines of what is done in #7719.
Btw, I'm not sure whether #7719 takes into account that containerd also needs to be managed with newer Docker versions.

switch b.Distribution {
case distros.DistributionCoreOS:
klog.Infof("Detected CoreOS; won't install containerd")
if err := b.buildContainerOSConfigurationDropIn(c); err != nil {
Reviewer (Member):

Idea for future: We probably could flatten these three into a single switch (or even an if statement). If there are differences in the drop-in, we could switch on b.Distribution in that function.
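
A minimal sketch of the flattening suggested here (illustrative only; apart from DistributionCoreOS, the distro constants are my assumption about which three cases are meant):

switch b.Distribution {
case distros.DistributionCoreOS, distros.DistributionFlatcar, distros.DistributionContainerOS:
	// One case for all "don't install containerd, just add a drop-in" distros;
	// any per-distro differences would move into the drop-in builder itself.
	klog.Infof("Detected %s; won't install containerd", b.Distribution)
	if err := b.buildContainerOSConfigurationDropIn(c); err != nil {
		return err
	}
	return nil
}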

@@ -140,6 +142,7 @@ type ClusterSpec struct {
// EtcdClusters stores the configuration for each cluster
EtcdClusters []*EtcdClusterSpec `json:"etcdClusters,omitempty"`
// Component configurations
Containerd *ContainerdConfig `json:"containerd,omitempty"`
Reviewer (Member):

We did discuss in office hours whether this should be top-level; I think it's fine to leave it here.

  • Docker is already at top-level, so we can move both at the same time when/if we ever refactor these.
  • I think we want to put them on the InstanceGroup. When we do that maybe we can refactor. But this suggests something...
  • Perhaps the configuration should be external, like other addons. Previously we've used a size argument to motivate that (keep each type manageable and understandable), but now there's a DRY argument - multiple instance groups might refer to the same shared containerd configuration.
  • In that light, I imagine ContainerRuntime actually being a ref to the CRI configuration
  • I don't think we should actually make it a full apimachinery ref (with a kind), that feels like overkill
  • I think there's a path from where we are to that world. Specifying the containerd field in Cluster would be equivalent to creating a kind: ContainerdConfiguration (or whatever) with name: containerd

So I think this is more intuitive today, and gives us a reasonable path to the future.

cc @geojaz ... I think it was you that brought this up in office hours?

return fmt.Errorf("Kubernetes version is required")
}

sv, err := KubernetesVersion(clusterSpec)
Reviewer (Member):

Tip (for the future): We do have the IsKubernetesGTE helper in a bunch of places, not sure if it's available here.

Author (hakman):

I tried to keep this as close as possible to the Docker version to make it easier to review.
The IsKubernetesGTE function family is quite nice. Thanks.
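
For context, a self-contained illustration of the comparison those helpers wrap, using the blang/semver library (the local isKubernetesGTE below is just for this sketch, not the actual kops helper):

package main

import (
	"fmt"

	"github.com/blang/semver"
)

// isKubernetesGTE reports whether k8sVersion is at least minVersion (given as "1.18").
func isKubernetesGTE(minVersion string, k8sVersion semver.Version) bool {
	min := semver.MustParse(minVersion + ".0") // "1.18" -> "1.18.0"
	return k8sVersion.GTE(min)
}

func main() {
	sv := semver.MustParse("1.17.2")
	fmt.Println(isKubernetesGTE("1.18", sv)) // false
	fmt.Println(isKubernetesGTE("1.16", sv)) // true
}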

dockerVersion := fi.StringValue(clusterSpec.Docker.Version)
switch dockerVersion {
case "19.03.4":
containerdVersion = "1.2.10"
Reviewer (Member):

Oh! So we are always installing containerd for these docker versions! Now I get it!

This is really clever, in that now we have a consistent way of configuring containerd underneath docker! I like it!

The downside is that we need to be super-careful that we haven't broken docker. I will ponder - this is great, but it's also a little risky, and maybe we should just do it for versions of docker we haven't yet introduced...

We could also leave docker as-is, and encourage people to go to containerd.
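
To make the mechanism concrete, a sketch of the idea (not the PR's literal code): each supported docker version pins a containerd version, so containerd ends up configured the same way whichever runtime is selected. The 19.03.4 mapping is from the diff above; the 18.09.x entries are assumptions based on versions mentioned later in this thread.

package model

import "fmt"

// defaultContainerdVersion pins the containerd version for a given docker version.
func defaultContainerdVersion(dockerVersion string) (string, error) {
	switch dockerVersion {
	case "19.03.4":
		return "1.2.10", nil
	case "18.09.9", "18.09.3":
		return "1.2.10", nil
	default:
		return "", fmt.Errorf("unknown docker version %q; containerd version not set", dockerVersion)
	}
}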

@@ -133,6 +133,12 @@ Resources.AWSAutoScalingLaunchConfigurationmasterustest1bmastersadditionalcidrex

cat > cluster_spec.yaml << '__EOF_CLUSTER_SPEC'
cloudConfig: null
containerRuntime: docker
containerd:
Reviewer (Member):

I wonder if seeing docker and containerd is going to cause user confusion

Author (hakman):

Depends a lot on users and how we present this in the release notes and docs.
My guess is that most users either know that Kubernetes runs on Docker + containerd, or think Kops does something magic and Kubernetes just runs. Those who are confused will ask in #kops-users and we should direct them to the release notes and docs.

if err != nil {
return err
}
defer writer.Close()
Reviewer (Member):

Note to self: So there are some edge cases here; e.g. if writer.Close() fails, the output will likely be missing its last chunk or something. This is absolutely fine for how you are using it (with a tempdir), but we probably should be careful about making it a utils function. I'll add some paranoid error handling :-)
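
A minimal sketch of the kind of paranoid error handling meant here (illustrative, not the code that was actually added): surface the error from Close instead of discarding it in the defer, so a failed flush of the last chunk is noticed.

package main

import "os"

// writeFile also returns any error from Close, so a short write is not silently ignored.
func writeFile(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer func() {
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	_, err = f.Write(data)
	return err
}

func main() {
	if err := writeFile("/tmp/example.txt", []byte("hello")); err != nil {
		panic(err)
	}
}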

@justinsb (Member):

This is really great, thank you @hakman

I would classify my comments into 3 types:

  • Nits / notes to self; no need to fix in this PR. I might send follow-on PRs or other people can if they beat me to it!
  • Comments on the api field names e.g. skipInstall vs managed. Because we likely wouldn't cherry-pick this back to kops 1.17 (I think?) I don't think we need to resolve these in this PR, but we should probably just figure them out.
  • The "should docker install containerd" question

The "should docker install containerd" question is really tricky and the one that I think we should figure out. It is really clever, and I didn't get it at first, but it does make sense. I think there are a few challenges:

  • It's possible that it changes existing configurations, and we try not to change existing configurations (Could we make this change only for newer docker versions that haven't shipped by default in any k8s versions <= 1.17 - is there such a thing?)
  • It is a little confusing to specify docker and see containerd config as well. Do we need to materialize it, or can we do it only in nodeup? (I realize we'd be leaving functionality on the table...)
  • Should we just encourage users to switch to containerd? My understanding - and this could well be wrong - is that even docker inc would rather that we installed containerd and not docker on servers.

Thank you so much for this though - I think if we can figure out the right answer for "should docker install containerd" we should get this merged and iterate on it. I might also write a test for the protokube manifest generation, just to satisfy my own paranoia!

@justinsb (Member):

Thanks so much @hakman for all the great comments! You've persuaded me on the idea that docker can now install docker + containerd - the idea that it will encourage users to go to containerd is a good one. (And looking at the docker additional features you cited over containerd, those do feel like client-side features which we probably don't want on servers!)

The challenge is that we generally have a policy that you can update kops and we won't change configurations (except for bug / security fixes). We do that by keying most of our changes off the kubernetes version, not the kops version. This in turn lets us avoid having to maintain too many versions of kops, because in theory you can run k8s 1.14 with kops 1.14, 1.15, 1.16, 1.17 and should have the same behaviour....

So if we change the configuration of the docker version used for 1.16 or 1.17, by that policy we should backport it. I don't think we want to do that, because then we don't have time to stabilize containerd support on master.

(I wasn't able to find which comment you were referring to when you mentioned it, so forgive me if this duplicates something you said)

I think there are a few options:

1(a) Just accept that the docker behaviour will change for k8s 1.16 / 1.17 in kops 1.18 - it's not something we've done in the past and it might impact the upgrade story for kops.
1(b) Accept the change in behaviour, document it in the release notes, but try to make sure that we haven't actually changed the containerd installation.
2 Change the docker configuration logic so that only for k8s >= 1.18 do we install "both" docker & containerd with separate configurations.
3 Introduce this for the next version of docker. One snafu with this is that there is no next version of docker yet AFAICT. (There's a 19.03.5 but we probably should do this on a major/minor version docker upgrade in case there's a docker security issue that means we must upgrade 1.17's docker version)
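
A rough sketch of what options 2 and 3 could look like in code (purely illustrative; the helper name and the version list are assumptions, not the merged implementation):

package main

import "fmt"

// shouldSplitContainerd decides whether docker and containerd are installed as
// separate, separately-configured packages.
func shouldSplitContainerd(k8sMinor int, dockerVersion string) bool {
	if k8sMinor >= 18 { // option 2: only change behaviour for k8s >= 1.18
		return true
	}
	switch dockerVersion { // option 3: only for docker versions kops has not yet shipped
	case "19.03.4", "18.09.9", "18.09.3":
		return false // already-shipped versions keep the old behaviour
	default:
		return true
	}
}

func main() {
	fmt.Println(shouldSplitContainerd(17, "19.03.4")) // false: existing configuration untouched
	fmt.Println(shouldSplitContainerd(18, "19.03.4")) // true: new behaviour only for newer k8s
}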

My view is that the best options here are 1(b) and 3 - and that 3 is easier. Do you have a view @hakman ? (Perhaps you've already pursued 1(b) in this PR? )

@hakman (Member, Author) commented on Dec 17, 2019

That's great @justinsb! :)

My comment was this:
#7986 (comment)

Pretty much along the same lines as your conclusion:

  • revert changes to docker split for current Docker versions
  • change defaults and add condition to only apply for 1.18+
  • implement the change for Docker 19.04.5, maybe just for 1-2 OSes used in testing (we could also maybe add a warning when selecting it)
  • do the final switch when a new major Docker version arrives.

I would add that we should maybe try the e2e tests soon.

@hakman (Member, Author) commented on Dec 18, 2019

@justinsb would it be ok to merge this to master as-is and continue with 1(b) and 3 as separate PRs?

@justinsb (Member):

So I'm generally very much in favor of that @hakman ... but are we going to have to put back the containerd packages into docker.go?

I may have the wrong end of the stick here, but I was interpreting the conclusion as being that we would not change the existing versions of docker, but we would split out containerd for some future version of docker.

Is that right? If so, I was imagining it would be a pain to re-add containerd, but I'm ready to be corrected :-)

@hakman (Member, Author) commented on Dec 18, 2019

@justinsb Re-adding containerd to existing Docker versions in docker.go shouldn't be that hard if we really want to. We could also keep them in containerd.go and skip all the other tasks except installing the package (which may be easier). The management in containerd.go is limited to creating a service file and a config file that is almost empty. These two are also created on install in a similar way.

The containerd part will remain the same and independent in both cases, so anyone can select containerRuntime: containerd. The only question that remains is whether containerd is installed from docker.go or containerd.go.

@justinsb (Member):

So you've persuaded me generally, and I'm OK with basically taking on a bit more validation work, except that we're changing the containerd version for older docker versions (we're essentially upgrading everyone to containerd 1.2.10).

OK - let's get this in!

A few things I want to do in follow-up:

  • I think we should keep containerd on 1.2.4-1 for docker 18.09.3; I know containerd claims to be validated but kubernetes validation has been a bit stricter than docker validation in the past; I don't know the extent to which that carries forward to containerd.

  • We might need to not populate the containerd version by default, because otherwise if I want to update my docker version I now also have to update my containerd version (because realistically I likely want to update containerd even more than I do docker). I do still like the idea of exposing the containerd structure though, just so we can nudge people towards containerd.

Thanks for all your hard work on this @hakman - great stuff!

/approve
/lgtm

@k8s-ci-robot added the lgtm and needs-rebase labels on Dec 19, 2019
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hakman, justinsb

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Dec 19, 2019
@k8s-ci-robot removed the lgtm and needs-rebase labels on Dec 19, 2019
@justinsb (Member):

/lgtm

@k8s-ci-robot added the lgtm label on Dec 19, 2019
@k8s-ci-robot merged commit 0b52b99 into kubernetes:master on Dec 19, 2019
@hakman (Member, Author) commented on Dec 19, 2019

Wonderful, @justinsb ! Thanks!

Regarding your concern about the containerd upgrade to 1.2.10 for everyone, I assumed no one would mind. Someone installing Docker 18.09.3 would get 1.2.10 as a dependency, even though the validated version was 1.2.4 at release time:
https://github.com/docker/docker-ce/blob/18.09/components/packaging/rpm/SPECS/docker-ce.spec#L23

Containerd has a very strict policy regarding patch releases, which I guess is why Docker is so relaxed about the dependency:
https://github.com/containerd/containerd/blob/master/RELEASES.md#upgrade-path

The upgrade path for containerd is such that the 0.0.x patch releases are always backward compatible with its major and minor version.

I don't mind changing, but I will need to apply the change in the branches for 1.15, 1.16, and 1.17. The versions I added there last month use 1.2.10 as a dependency. Please let me know if you want to change the 18.09.3 dependency to 1.2.4 and I will send the PRs your way in a few days.

PS: I now see that you meant the case of 18.09.3 here, not 18.09.9. Will take care of it:

case "18.09.3":
containerdVersion = "1.2.10"
