
c8d: Use a specific containerd namespace when userns are remapped #46786

Merged
merged 1 commit into from Jan 24, 2024

Conversation

@rumpl (Member) commented Nov 8, 2023

- What I did

We need to isolate the images that we are remapping to a userns; we can't mix them with "normal" images. In the graph-driver case this means we create a new root directory where we store the images and everything else; in the containerd case we can use a new namespace.
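The namespace derivation this PR uses can be sketched as follows. The helper name is mine, but the format string mirrors the one in the diff under review: the configured base namespace is suffixed with the remapped root's UID and GID.

```go
package main

import "fmt"

// remappedNamespace derives the containerd namespace used when
// user-namespace remapping is enabled. The helper name is hypothetical,
// but the format string mirrors the one in this PR: the configured base
// namespace is suffixed with the remapped root's UID and GID.
func remappedNamespace(base string, uid, gid int) string {
	return fmt.Sprintf("%s-%d.%d", base, uid, gid)
}

func main() {
	// With a default subordinate ID range starting at 165536,
	// "moby" becomes "moby-165536.165536".
	fmt.Println(remappedNamespace("moby", 165536, 165536))
	fmt.Println(remappedNamespace("plugins.moby", 165536, 165536))
}
```

Because the suffix includes both UID and GID, a daemon restarted with a different remapping (or none at all) simply looks at a different namespace and never sees the remapped images.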

- How I did it

- How to verify it

Run a daemon with --userns-remap=default and pull an image, then:

  • stop the daemon
  • run the daemon without userns remapping
  • list images

You shouldn't see the image that you pulled.

- Description for the changelog

- A picture of a cute animal (not mandatory but encouraged)

@rumpl rumpl added status/2-code-review containerd-integration Issues and PRs related to containerd integration labels Nov 8, 2023
@rumpl rumpl self-assigned this Nov 8, 2023
@rumpl (Member, Author) commented Jan 22, 2024

@thaJeztah I rebased this one. The whole config situation needs a closer look; I think we should clean it up massively, but that would be outside the scope of this PR.

@laurazard (Member) left a comment

LGTM! Left a couple of comments on a typo. (And we really should try to refactor that config code.)

@thaJeztah (Member) left a comment

LGTM

Comment on lines +1533 to +1541:

```go
	ns = config.ContainerdNamespace
	if _, ok := config.ValuesSet["containerd-namespace"]; !ok {
		ns = fmt.Sprintf("%s-%d.%d", config.ContainerdNamespace, root.UID, root.GID)
	}

	pluginNs = config.ContainerdPluginNamespace
	if _, ok := config.ValuesSet["containerd-plugin-namespace"]; !ok {
		pluginNs = fmt.Sprintf("%s-%d.%d", config.ContainerdPluginNamespace, root.UID, root.GID)
	}
```
Member commented:

There might be a usability issue: if someone specifies the namespaces in their config, we will use those. Which means that if someone specified a custom namespace, and that namespace is not empty, things will likely start failing:

      --containerd-namespace string            Containerd namespace to use (default "moby")
      --containerd-plugins-namespace string    Containerd namespace to use for plugins (default "plugins.moby")

Wondering if there's a way for us to detect:

  • whether the namespace is non-empty
  • if it's not empty, whether it was created with the correct user-mapping

And to otherwise bail out and produce an error.

(Should be fine for a follow-up, but maybe we should create a tracking ticket?)
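The pre-flight check the reviewer suggests could look roughly like this. The function and interface names are hypothetical; in the daemon the lister would be backed by the containerd client's image service, while here a fake keeps the sketch self-contained.

```go
package main

import "fmt"

// imageLister is a narrow stand-in for containerd's image store; in the
// daemon this would be backed by the containerd client's image service.
type imageLister interface {
	ListImageNames(namespace string) ([]string, error)
}

// checkNamespaceEmpty is a hypothetical pre-flight check along the lines
// the reviewer suggests: if the user pointed the daemon at a custom
// namespace that already contains images, bail out with an error instead
// of mixing remapped and non-remapped content.
func checkNamespaceEmpty(store imageLister, ns string) error {
	names, err := store.ListImageNames(ns)
	if err != nil {
		return err
	}
	if len(names) > 0 {
		return fmt.Errorf("containerd namespace %q already holds %d image(s); refusing to reuse it with userns-remap enabled", ns, len(names))
	}
	return nil
}

// fakeStore lets the sketch run without a containerd daemon.
type fakeStore map[string][]string

func (f fakeStore) ListImageNames(ns string) ([]string, error) {
	return f[ns], nil
}

func main() {
	store := fakeStore{"custom": {"docker.io/library/alpine:latest"}}
	fmt.Println(checkNamespaceEmpty(store, "custom"))   // non-empty: returns an error
	fmt.Println(checkNamespaceEmpty(store, "moby-new")) // empty: nil
}
```

This only covers the "non-empty" half of the question; verifying that an existing namespace was created with the correct user-mapping needs extra metadata, as discussed below in the thread.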

rumpl (Member, Author) replied:

My idea was that if they tell us which namespace to use, then they know what they are doing. That said, the namespace could still have an incorrect mapping, and we should do something about that. We could add a label to the namespace (it looks like that's possible in containerd), but that would only work for the namespaces we create; I'm not sure how we can check that an existing namespace has the right user-mapping... Maybe we can make it work only with the namespaces we create? OK, let's open an issue for this; a comment in a PR is not the best place to find a solution.
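The label idea can be sketched as follows. The label key is hypothetical (containerd namespaces do support arbitrary string labels), and the labels are passed in as a plain map to keep the sketch self-contained; as the comment notes, namespaces the daemon did not create carry no label and can't be verified this way.

```go
package main

import "fmt"

// usernsLabel is a hypothetical label key a daemon could set when it
// creates a namespace for a remapped root, recording the mapping it was
// created with so later startups can verify it.
const usernsLabel = "com.docker.daemon/userns-remap"

// verifyNamespaceMapping checks a namespace's labels against the
// daemon's current remapping. A missing label means the namespace was
// not created by us and cannot be verified this way.
func verifyNamespaceMapping(labels map[string]string, uid, gid int) error {
	want := fmt.Sprintf("%d.%d", uid, gid)
	got, ok := labels[usernsLabel]
	if !ok {
		return fmt.Errorf("namespace has no %s label; cannot verify its user mapping", usernsLabel)
	}
	if got != want {
		return fmt.Errorf("namespace was created for mapping %s, daemon is running with %s", got, want)
	}
	return nil
}

func main() {
	labels := map[string]string{usernsLabel: "165536.165536"}
	fmt.Println(verifyNamespaceMapping(labels, 165536, 165536)) // matching mapping: nil
	fmt.Println(verifyNamespaceMapping(labels, 100000, 100000)) // mismatch: returns an error
}
```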

Member replied:

Yes, it's not a blocker for this PR, but something to look at. My consideration here was:

  1. user configures daemon (with namespace)
  2. runs the daemon with that config
  3. user now decides to enable user-namespaces
  4. daemon now fails with potentially confusing errors
  5. and... now how to solve it? (how to clean up the namespace, etc.)

We need to isolate the images that we are remapping to a userns, we
can't mix them with "normal" images. In the graph driver case this means
we create a new root directory where we store the images and everything
else, in the containerd case we can use a new namespace.

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
```diff
@@ -616,6 +616,10 @@ func loadDaemonCliConfig(opts *daemonOptions) (*config.Config, error) {
 		conf.CDISpecDirs = nil
 	}

 	if err := loadCLIPlatformConfig(conf); err != nil {
```
Member commented:

Thanks! Yup, I think this is the cleanest way to do this indeed, as it keeps it internal.

@thaJeztah (Member) left a comment

LGTM

@thaJeztah thaJeztah merged commit e8346c5 into moby:master Jan 24, 2024
136 of 138 checks passed