Plugins content addressability #29487
Conversation
// tag
// Required: true
Tag string `json:"Tag"`
This will require a small change in the executors (both SwarmKit's and Docker's), but it will be a simplification.
https://github.com/docker/docker/pull/29378/files#diff-ac864daec98b5e7e257026311a914256R71
What exactly is a plugin tag?
I see that daemon/cluster/executor/container/executor.go has been updated in this PR. SwarmKit will need a similar change. Making a note here so we don't forget.
@@ -81,6 +77,9 @@ type PluginConfig struct {
	// workdir
	// Required: true
	Workdir string `json:"Workdir"`

	// rootfs
	Rootfs *PluginConfigRootfs `json:"rootfs,omitempty"`
RootFS
type PluginConfigRootfs struct {

	// diff ids
	DiffIds []string `json:"diff_ids"`
DiffIDs
good luck controlling this in swagger
I'm glad we went to a system we can't control...
Discussed with @dnephin; this should be solvable with x-go-name: but it has some issues at the moment. The JSON names are correct, so we won't have breaking changes when we switch the Go names.
In this vein, the problem becomes obvious when we start looking at the godoc.
}

func runInstall(dockerCli *command.DockerCli, opts pluginOptions) error {
	ref, err := distreference.ParseNamed(opts.name)
Yuck :(. Is this using distreference to avoid the normalization? This needs a huge FIXME to explain why distreference is used and what should be done when the packages are merged.
I think it's using distreference to allow references with both tags and digests. But I'm not sure if this is needed here at all; the daemon side should be enough for this. cc @dmcgowan
I can add a comment here; this really needs to converge in 1.14 so we don't have these differences. This validation is here because there was something here before, and I didn't want to rely solely on the daemon. The daemon certainly should be doing the right thing already, and I can minimize the duplication of validation here if that makes sense.
	return err
}
fmt.Fprintln(dockerCli.Out(), opts.name)
fmt.Fprintf(dockerCli.Out(), "Installed plugin %v\n", opts.name) // todo: return proper values from the API for this result
%s
query := url.Values{}
query.Set("name", name)
if _, err := reference.ParseNamed(options.RemoteRef); err != nil {
	return nil, err
Wrap error?
@@ -151,6 +158,9 @@ func (serv *v2MetadataService) GetDiffID(dgst digest.Digest) (layer.DiffID, erro
// Add associates metadata with a layer DiffID. If too many metadata entries are
// present, the oldest one is dropped.
func (serv *v2MetadataService) Add(diffID layer.DiffID, metadata V2Metadata) error {
	if serv.store == nil {
		return nil
I think I understand why the getters return an error but the setters silently do nothing, but it might be good to have a comment explaining it.
I will add a comment here. I think the better solution is to have an in-memory implementation, or to allow the pullers/pushers to have this service be nil. In the end we probably just want to support this interface backed by disk, because it can optimize pushing.
// Parse remote reference, supporting remotes with name and tag
// NOTE: Using distribution reference to handle references
// containing both a name and digest
remoteRef, err := distreference.ParseNamed(remote)
Is this going to cause problems wrt normalization? I believe Pull expects a normalized reference, which you won't get from the distribution package.
Never mind, I somehow missed the conversion below.
We end up calling with both; we use WithName from the docker reference package later on, which will normalize. It is a complete mess, but we need the tag and digest support from the real reference package and the normalization from the docker one.
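To make the normalization being discussed concrete, here is a simplified sketch of what docker's reference package does on top of the plain distribution parser: fill in the default registry and "library" namespace. normalizeName is a hypothetical simplification; the real rules also handle localhost, ports and full validation:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeName sketches docker-style name normalization: a bare name gets
// the default registry and repository namespace prepended. This is an
// illustration, not the actual docker/reference implementation.
func normalizeName(name string) string {
	parts := strings.SplitN(name, "/", 2)
	// a first component without "." or ":" is not a registry hostname
	if len(parts) == 1 || (!strings.ContainsAny(parts[0], ".:") && parts[0] != "localhost") {
		if !strings.Contains(name, "/") {
			name = "library/" + name // official images live under "library"
		}
		name = "docker.io/" + name // default registry host
	}
	return name
}

func main() {
	fmt.Println(normalizeName("ubuntu"))                   // docker.io/library/ubuntu
	fmt.Println(normalizeName("tonistiigi/myplugin"))      // docker.io/tonistiigi/myplugin
	fmt.Println(normalizeName("myreg.example.com:5000/p")) // myreg.example.com:5000/p
}
```

The mess described above comes from needing this normalization (docker package) together with combined tag+digest parsing (distribution package) in one code path.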
Force-pushed from 5ca4bab to 768b9e9
	return computePrivileges(config)
}

// Pull pulls a plugin, checks if the correct privileges are provided, and installs the plugin.
Please document the difference between name and remote.
"github.com/docker/docker/api/types" | ||
"github.com/docker/docker/pkg/archive" | ||
dockerdist "github.com/docker/docker/distribution" |
Why "dockerdist"?
We can update it now; previously the plugin also had a "distribution" package, but we just removed it.
tarWriter := tar.NewWriter(pw)
defer in.Close()

hasRootfs := false
hasRootFS
content := io.Reader(tarReader)
name := filepath.Clean(hdr.Name)
if filepath.IsAbs(name) {
	name = name[1:]
This looks unportable. Granted, we're in a _linux.go file here, so it doesn't matter for now.
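For reference, a sketch of what a more portable version could look like, assuming the goal is only to make the path relative. makeRelative is hypothetical, not code from the PR; on Windows, slicing off one byte would leave ":\foo" behind, so this strips the volume name and any leading separators instead:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// makeRelative turns an absolute path into a relative one by dropping the
// volume name (e.g. "C:" on Windows, empty on Unix) and any leading
// separators, instead of assuming the root is exactly one byte.
func makeRelative(name string) string {
	name = filepath.Clean(name)
	name = name[len(filepath.VolumeName(name)):]
	return strings.TrimLeft(name, `/\`)
}

func main() {
	fmt.Println(makeRelative("/etc/hosts")) // etc/hosts
}
```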
	manager *Manager
)
const configFileName = "config.json"
const rootfsFileName = "rootfs"
rootFSFileName :)
Force-pushed from 768b9e9 to 2e791fd
Force-pushed from ee06495 to 745923b
	return
}
if err != nil {
	pw.CloseWithError(errors.Wrap(err, "failed to read from tar"))
tarWriter.Close()?
tarWriter would just be cleaned up by the GC. We only need to close it when returning without an error, so that the padding is correct.
if name == configFileName {
	dt, err := ioutil.ReadAll(content)
	if err != nil {
		pw.CloseWithError(errors.Wrap(err, "failed to read config.json"))
nit: "failed to read %s", configFileName
}
gzw := gzip.NewWriter(n)
digester := digest.Canonical.New()
rootfsReader := io.TeeReader(rootfs, io.MultiWriter(gzw, digester.Hash()))
Can you not directly call digest.Canonical.Hash()?
digester is used later to get the digest for the data.
}

type insertion struct {
	io.Writer
Writer can be made private?
The Write() method required by the interface comes directly from the embedded io.Writer. Also, insertion itself is private.
for {
	hdr, err := tarReader.Next()
	if err == io.EOF {
		if !hasRootfs {
Why not have similar logic for !hasConfig?
Config can easily be checked from the caller side in https://github.com/docker/docker/pull/29487/files#diff-28e4bbe4b8a4c1b80bf43d15a145427dR557, where it is cleaner. There is no way to check the rootfs from the caller side, so it needs to be included in the middle of tar creation.
configBlob, err := pm.blobStore.New()
if err != nil {
	return err
}
defer configBlob.Close()
Never mind, looks like you are taking care of it in insertion.
reader = progress.NewProgressReader(ioutils.NewCancelReadCloser(ctx, contentReader), progressOutput, size, pd.ID(), "Pushing")

switch m := pd.layer.MediaType(); m {
case "application/vnd.docker.image.rootfs.diff.tar":
Do we have this defined elsewhere?
After rebase this will go away; we defined it in distribution.
Also, should this be plugin.rootfs?
The content types are not very consistent. Really we have plugin images and container images, but for the name roots we have:
- application/vnd.docker.container.image - for container image configs
- application/vnd.docker.plugin - for plugin configs
- application/vnd.docker.image - for image layers
The plugin image layers are the same as container image layers though, and should be able to share a backend.
application/vnd.docker.container.image is the one that went awry; it should have been application/vnd.docker.image.config, which is what is done in OCI. Nothing is stopping these from being stored together. The media type isn't necessarily stored with the object: it is part of the qualified reference, but not a property of the thing itself. For that to work, we'd have to hash the object with its type.
@@ -81,6 +77,9 @@ type PluginConfig struct {
	// workdir
	// Required: true
	Workdir string `json:"Workdir"`
WorkDir
Force-pushed from 745923b to 6b086e3
Force-pushed from e57fddd to 94ec8ad
Force-pushed from 94ec8ad to 885705d
resp, err := cli.tryPluginPrivileges(ctx, query, options.RegistryAuth)
if resp.statusCode == http.StatusUnauthorized && options.PrivilegeFunc != nil {
	newAuthHeader, privilegeErr := options.PrivilegeFunc()
	if privilegeErr != nil {
		ensureReaderClosed(resp)
-		return privilegeErr
+		return nil, privilegeErr
Not this change: if a plugin exists and we try to pull it, it should error out before tryPluginPrivileges. Please add a TODO item.
Tested the PR and happy with the changes. LGTM.
Force-pushed from 885705d to 6bf801d
@@ -139,9 +141,16 @@ func (s *DockerSuite) TestPluginInstallArgs(c *check.C) {
	c.Assert(strings.TrimSpace(env), checker.Equals, "[DEBUG=1]")
}

-func (s *DockerSuite) TestPluginInstallImage(c *check.C) {
+func (s *DockerRegistrySuite) TestPluginInstallImage(c *check.C) {
	testRequires(c, DaemonIsLinux, IsAmd64, Network)
Network is no longer required.
Force-pushed from 6bf801d to dc1cebf
if parts := strings.Split(name, "/"); len(parts) != 0 && parts[0] == rootFSFileName {
	hdr.Name = path.Clean(path.Join(parts[1:]...))
	if strings.HasPrefix(strings.ToLower(hdr.Linkname), rootFSFileName+"/") {
		hdr.Linkname = hdr.Linkname[len(rootFSFileName):]
hdr.Linkname = hdr.Linkname[len(rootFSFileName)+1:]
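A sketch of the suggested fix in isolation; stripRootFSPrefix is a hypothetical extraction of the snippet above, showing why the +1 matters (slicing by len(rootFSFileName) alone leaves the "/" in front of the link target):

```go
package main

import (
	"fmt"
	"strings"
)

const rootFSFileName = "rootfs"

// stripRootFSPrefix removes the "rootfs/" prefix from a tar link name.
// Slicing by len(rootFSFileName) would keep the separator ("/etc/hosts");
// the +1 drops it as well.
func stripRootFSPrefix(linkname string) string {
	if strings.HasPrefix(strings.ToLower(linkname), rootFSFileName+"/") {
		return linkname[len(rootFSFileName)+1:]
	}
	return linkname
}

func main() {
	fmt.Println(stripRootFSPrefix("rootfs/etc/hosts")) // etc/hosts
	fmt.Println(stripRootFSPrefix("etc/hosts"))        // etc/hosts (unchanged)
}
```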
Move plugins to shared distribution stack with images.

Create immutable plugin config that matches schema2 requirements. Ensure data being pushed is same as pulled/created. Store distribution artifacts in a blobstore. Run init layer setup for every plugin start. Fix breakouts from unsafe file accesses.

Add support for `docker plugin install --alias`. Uses normalized references for default names to avoid collisions when using default hosts/tags.

Some refactoring of the plugin manager to support the change, like removing the singleton manager and adding manager config struct.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Force-pushed from dc1cebf to 3d86b0c
LGTM
This fix is a follow-up based on comment: and a follow-up to moby#29222 (comment).

As moby#28789 has been merged in, it is possible for `docker plugin inspect` to search based on Name or ID prefix. However, ID-based `docker plugin enable/disable/rm/set` are still not possible.

This fix adds tests for `docker plugin enable/disable/rm/set` that search based on:
- Full ID
- Full Name
- Partial ID (prefix)

The actual fix is done in moby#29487. This fix is a follow-up of moby#28789 and moby#29487.

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This addresses:

- Plugin distribution is moved to a shared stack with images. This fixes quirks with the previous implementation, including pull-by-digest, content verification and progress bars. Fixes [Feature request] Show progress in plugin install #28622.
- Creates an immutable plugin config for configuration that matches schema2 requirements. The hash of this config defines the whole distributable part of the config.
- Ensures data being pushed is the same that was created/pulled, so that the plugin can't leak anything in there.
- Keeps immutable distribution artifacts in a blobstore and uses that for the data that is pushed.
- Runs the init layer setup on every plugin start to match the safety guarantees of containers.
- Fixes some possible breakouts caused by unsafe filesystem accesses or extraction.
- Verifies the tar sent on plugin create and splits it up into a config and rootfs.
- Adds docker install --alias to control the name of the plugin.
- Uses normalized references for default names to avoid collisions when using default hosts/tags.
- Some refactoring of the plugin manager to support the change, like removing the singleton manager and adding a manager config struct.

Does not address:

- Store have been left out

@anusha-ragunathan @tiborvass @vieux