run: flag to include the docker socket #5858

Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master    #5858      +/-   ##
==========================================
- Coverage   59.05%   58.88%   -0.17%
==========================================
  Files         357      357
  Lines       29843    29941      +98
==========================================
+ Hits        17624    17632       +8
- Misses      11241    11329      +88
- Partials      978      980       +2
==========================================
```
Alright, I just validated an approach to the credential-leak issue using a copy into the container. Let's validate the CLI UX first, and then we can add that implementation if we want to go forward with the change.
This is a tricky one; bind-mounting the docker socket by setting these options from the CLI will break in many scenarios. The current approach in this PR uses the default location. For a daemon running locally (same machine), the CLI could have slightly more information, and look at the docker host configured in the current docker context:

```
docker context inspect | jq .[].Endpoints.docker.Host
"unix:///var/run/docker.sock"
```

In general, this could be somewhat tricky even with the right location, as there's a potential race condition where the container is started before the API server (and thus the socket) is up.

I recall some older proposals to have an option on the daemon side to grant a container access to the API; for those, the daemon would handle setting up a socket and mounting it into the container. Those were a really long time ago, and IIRC the initial implementation was to do so automatically for any container with …
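As a minimal sketch of the client-side check implied here (the helper name is hypothetical, not the PR's actual code): only a `unix://` host resolved from the current context can be bind-mounted, and anything else should error out:

```go
package main

import (
	"fmt"
	"strings"
)

// socketPathFromHost extracts a filesystem path from a docker host URL.
// Hypothetical helper illustrating the check described above: the flag
// should fail when the context's endpoint is not a unix socket.
func socketPathFromHost(host string) (string, error) {
	const prefix = "unix://"
	if !strings.HasPrefix(host, prefix) {
		return "", fmt.Errorf("host %q is not a unix socket; cannot bind-mount the API socket", host)
	}
	return strings.TrimPrefix(host, prefix), nil
}

func main() {
	p, err := socketPathFromHost("unix:///var/run/docker.sock")
	fmt.Println(p, err)

	_, err = socketPathFromHost("tcp://127.0.0.1:2375")
	fmt.Println(err != nil)
}
```

Note this only covers the local-daemon case; for a remote daemon, the socket path resolved on the client is meaningless, which is part of why this is tricky.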
I think something along those lines (i.e., having an option to forward the socket, and have the daemon set it up) could solve most of the issues mentioned earlier; the daemon would be able to set up the socket for the container regardless of how it was configured, which could even mean a dedicated socket for the container (and that could potentially be expanded to more granular / limited access to the socket, instead of access to all endpoints).

Slightly related: forwarding a socket from the client to the daemon to allow ssh-agent-like constructs for forwarding into the container; this could possibly tie in to the "forward authentication" part.
For the credentials: potentially we could use some env-var, although env-vars may not be ideal, as they will leak into the container's config (…). Possibly we can make the CLI aware of an env-var along those lines; IIRC, BuildKit requests auth from the CLI when building, so I'm not 100% sure if that would work in that combination, and likely all CLI plugins would have to be updated to be aware of this approach.
Tangential, but regarding:

Ultimately, this should be unnecessary, and we should make the necessary changes to stop keeping credentials in `config.json`.
@laurazard Yes, but unfortunately there is no way to call the credential helpers from inside the container. We could have an ssh-agent-like API for it, but that would require an interactive client for that to work correctly.

@thaJeztah It sounds like I should at least use a mount rather than volume notation to avoid the accidental file creation.
Right, that's definitely true. I think there are two separate issues that folks frequently run into that we should keep in mind:

It'd be awesome if whatever solution we came up with could account for / solve both cases (and splitting out creds, even if just to another file, from `config.json`).
@laurazard Looking at compose, they actually use the same technique that I've employed here: https://github.com/docker/compose/blob/145bb8466dffd18d7828ca0bc0725b1059506098/pkg/compose/secrets.go#L31. The secrets are copied over to `/run/secrets/` in the container.

Here's my proposal:
I know this isn't ideal, but this has been a long-lived pain point. I hope this is reasonable and can allow us to move forward.
A few fixes that have been put into place:
I'll need guidance on marking the flag as experimental. |
Force-pushed 8cf394b to f4d4426.
I've created a POC that instead shares credentials over a unix socket which we could bind-mount into a container. I don't know if that's the correct approach, or whether it would be better than what is in this PR: #5948
@Benehiko We do have discussions around the best way to allow secrets access from within a container, but we don't want to block this addition much longer. We can always change the implementation to use such a system in the future. Let's keep the discussion on your PR going and work from there.

Seems like this should be a feature on the engine rather than the client side.

I like the idea of a mount long term, but we'd have to teach every proxy about it. Plumbing for `/var/run/docker.sock` mostly already exists.
@cpuguy83 Do you want me to change the flag to the more general …?
Either way for the flag is fine for me. |
To me, everything except the credentials part looks alright. I know we have already discussed it a bit on the PR. If this is critical to get in now, then I would be okay with merging as-is, but we would end up having credential-sync issues:
- On credential expiration
- Re-authenticating on the host would completely invalidate any credentials in the container
- Currently the CLI doesn't really use the OAuth access and refresh tokens, except for fetching a PAT. If this were to change, token refreshes would also invalidate container credentials.
@Benehiko Indeed, those are the current limitations we have here. I hope not to leave it in this condition. Thanks for your review! I'll add some tests before the end of this week.
@stevvooe the other thing about the run command line is that this won't immediately be available to use in compose without it also supporting this option. Currently, the container that needs this on the MCP gateway side is being started from a compose.yaml file inside of a Desktop extension. So I'm assuming that Desktop extensions just use compose to launch this container. Does that make sense?
@slimslenderslacks Yes, that would be a current limitation. We could port a similar implementation to compose. The secret machinery already uses the same technique as this PR so it should be straightforward. |
cli/command/container/create.go (outdated)

```go
	containerCfg.Config.Env = append(containerCfg.Config.Env,
		"DOCKER_CONFIG="+path.Dir(dockerConfigPathInContainer))
}
```
I think we should rather use `DOCKER_AUTH_CONFIG` here and push the auth configs into the environment variable instead (as discussed with @thaJeztah). It's already a commonly used variable in Testcontainers and GitLab. They advise setting it and then writing the contents of the environment variable to a file (`~/.docker/config.json`).

See:
- https://docs.gitlab.com/ci/docker/authenticate_registry/#option-3-use-docker_auth_config
- https://github.com/testcontainers/testcontainers-go/blob/6d13d7a72bf5d1f15002e8c0b4d88ea8b88138f3/docker_auth.go#L262-L282

I have a PR open which would use this natively: #6008
Do any existing docker CLIs support this env var? This seems like a nice addition, but can be done later.
cli/command/container/create.go (outdated)

```go
// Set our special little location for the config file.
containerCfg.Config.Env = append(containerCfg.Config.Env,
	"DOCKER_CONFIG="+path.Dir(dockerConfigPathInContainer))
```
We need to find a solution for this part; the `DOCKER_CONFIG` option not only controls the location where the CLI looks for the config file, but (currently) all state of the CLI, including contexts, cli-plugins, and state stored by various plugins, such as `buildx` and `compose` 😞. Setting a custom path would likely result in things breaking in containers that have some config and/or state set up.
Can you demonstrate a container that breaks today? With the `docker:cli` image, there is no existing config in place, and it seems like we could easily work around this if needed.
At least the `docker:cli` plugin will now use `/run/secrets/docker/` for all its state, e.g. things like `buildx` will now save build history there: https://github.com/docker/buildx/blob/eb74b483bd991c9afb5e0661d3c182266bf83153/util/confutil/config.go#L62-L67

I don't know about other containers and what they have configured.
`docker:cli` is a container image with the CLI inside.

I'm not sure I see the problem here. Do we want to maintain the existing path in the home directory then?
> I'm not sure I see the problem here. Do we want to maintain the existing path in the home directory then?

It's part of a general problem we already have; `~/.docker/` has grown to become a bit of a kitchen sink, and is no longer just the CLI's config: various plugins started to use it for `<who knows what>`; Docker Desktop stores various bits there, including sockets; and at some point, `docker scout` used it as a cache, which ended up with multiple GBs of data that should've been temporary (I think that was fixed, and it now uses somewhere in `TEMP`, but it still stores SBOMs and other bits there).
After running a couple of builds from within a `docker:cli` container with this PR:

```
C /run
A /run/docker.sock
A /run/secrets
A /run/secrets/docker
A /run/secrets/docker/.token_seed
A /run/secrets/docker/.token_seed.lock
A /run/secrets/docker/buildx
A /run/secrets/docker/buildx/.buildNodeID
A /run/secrets/docker/buildx/.lock
A /run/secrets/docker/buildx/activity
A /run/secrets/docker/buildx/activity/default
A /run/secrets/docker/buildx/current
A /run/secrets/docker/buildx/defaults
A /run/secrets/docker/buildx/instances
A /run/secrets/docker/buildx/refs
A /run/secrets/docker/buildx/refs/default
A /run/secrets/docker/buildx/refs/default/default
A /run/secrets/docker/buildx/refs/default/default/lqga45irm59xvgavd4tntqcft
A /run/secrets/docker/buildx/refs/default/default/ovfmov0i5o63umkyhrhbmhpyu
A /run/secrets/docker/buildx/refs/default/default/w3m8ggkzghh1ivuunw1fl8ct3
A /run/secrets/docker/buildx/refs/default/default/1xf73waihbtkr53ijk716v8zp
A /run/secrets/docker/buildx/refs/default/default/6wmh6xclpvdzsrvkwgn4gdqp5
A /run/secrets/docker/buildx/refs/default/default/l15tkm6r6kwpntvvcrqgwhkfv
A /run/secrets/docker/config.json
```
For the "docker socket" part and setting it up: some considerations remain, e.g. how to set the …
@thaJeztah I think these are good suggestions for a follow-up, but I'm not sure the suggestions impact this PR. Let's get this merged, and we can follow up with improvements.
For the experimental annotation, you can use the same as for this flag (see line 66 in 1adc158).

Can you squash / clean up the commits? Also, it looks like …
Force-pushed 28a8712 to fe5a2c3.
Overall this looks OK; I left two comments; let me know if you want to look at those as part of this PR or separately.
cli/command/container/create.go (outdated)

```go
if options.useAPISocket {
	creds, err := dockerCli.ConfigFile().GetAllCredentials()
	if err != nil {
		return "", fmt.Errorf("resolving credentials failed: %w", err)
	}

	// Create a new config file with just the auth.
	newConfig := &configfile.ConfigFile{
		AuthConfigs: creds,
	}

	if err := copyDockerConfigIntoContainer(ctx, dockerCli.Client(), containerID, dockerConfigPathInContainer, newConfig); err != nil {
		return "", fmt.Errorf("injecting docker config.json into container failed: %w", err)
	}
}
```
Not a blocker, but perhaps something we should look at after this: both these errors are returned after the container was created, so the user could end up with a container that was created, but whose ID the CLI won't return.

I realize this combination of "create container" and "provision it with configs" is never going to be an atomic operation, but we could possibly put the `dockerCli.ConfigFile().GetAllCredentials()` early in this function, so that we don't attempt to create a container if we're going to fail (because we can't read the credentials).

The "copy credentials to the container" part of course requires the container to be present, so that must be kept at the end; we could consider:
- still returning the `containerID` (to provide a reference to what was created)
- (TBD) printing the failure as a warning and proceeding (but this may not be ideal if the `create` was part of a `docker run`, as we'd be mixing our warnings with container output)
Pushed up the cred resolution, but it's unclear where this cid file is being written.
```diff
 for _, w := range response.Warnings {
 	_, _ = fmt.Fprintln(dockerCli.Err(), "WARNING:", w)
 }
-err = containerIDFile.Write(response.ID)
-return response.ID, err
+err = containerIDFile.Write(containerID)
```
Slightly related: looks like we may be discarding this error if writing the container-id file failed and something else fails below.
We return it below but first try to resolve and write the creds.
Yeah, it would likely be a corner case; I was looking at these lines, where we don't return the containerID and don't include the error about the container id-file having failed to be written:

```go
if err := copyDockerConfigIntoContainer(ctx, dockerCli.Client(), containerID, dockerConfigPathInContainer, newConfig); err != nil {
	return "", fmt.Errorf("injecting docker config.json into container failed: %w", err)
}
```

We could do an `errors.Join` or something along those lines, but I'm not sure if that's super great, so let's keep that for follow-ups.

Doing the "copy to container" as part of `docker start` was something that crossed my mind, but I'm not sure if we want to / can update the files before start (that may open up possible race conditions as well).
Do you know why we write the containerID to a file on create? That's a bit unexpected. I tried to keep a light footprint through here.
Oh! It's not written to a file by default (that function's signature is a bit misleading); it's only written to a file if the user passes the `--cidfile=<some path>` option. If the flag isn't set, nothing is written, and the function always returns `nil`.

I'm not a huge fan of that feature, but I think it was added for some users that wanted to automate things and/or start containers through some init system (not a good fit).
Adds a flag to the create and run commands, `--use-api-socket`, that can be used to start a container with the correctly configured parameters to ensure that accessing the docker socket will work without managing bind mounts and authentication injection.

The implementation in this PR resolves the tokens for the current credential set in the client and then copies them into the container at the well-known location of /run/secrets/docker/config.json, setting DOCKER_CONFIG to ensure it is resolved by existing tooling. We use a compose-compatible secret location with the hope that the CLI and compose can work together seamlessly.

The bind mount for the socket is resolved from the current context, erroring out if the flag is set and the provided socket is not a unix socket.

There are a few drawbacks to this approach, but it resolves a long-standing pain point. We'll continue to develop this as we understand more use cases, but it is marked as experimental for now.

Signed-off-by: Stephen Day <stephen.day@docker.com>
Force-pushed fe5a2c3 to 1a502e9.
LGTM
As discussed, we're gonna bring this one in as a starting point; there are many more things to fix around this, which is why we added the "experimental" annotation (the implementation will likely change in future).
LGTM
Adds a flag, `--use-api-socket`, that can be used to start a container with the correctly configured parameters to ensure that accessing the docker socket will work without the fiddly flags.

There are a few problems with this approach:

Either way, this is a good start and resolves a long-standing issue.

To try it out, you can do the following:

This PR is somewhat incomplete and needs the following: