
Implement long running interactive session and sending build context incrementally #32677

Merged: 5 commits, Jun 22, 2017

Conversation

tonistiigi (Member) commented Apr 18, 2017

depends on #31984
fixes #31829

This PR adds a new POST /session endpoint to the daemon that can be used for running long-running interactive protocols between the client and the daemon.

Not ready for code review yet, but open for feedback on the design. Currently, this is missing cache-handling features and validation for the fs change stream.

The client calls this endpoint and exposes features that the daemon can call back to. The first use case, also included in this PR, is sending incremental context changes for the builder. There is ongoing work to use this for accessing pull credentials. Later it could be used for accessing secrets from the client, transferring build artifacts back to the client, etc.

When the client calls the /session endpoint, it requests an upgrade to a different transport protocol and exposes the available callback features in the request headers. The only transport method currently supported is gRPC.
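Conceptually, the upgrade request looks roughly like the sketch below. The header names here are illustrative (the exact names used by this PR's branch may differ); the point is that the transport upgrade and the advertised callback methods both travel in request headers:

```http
POST /session HTTP/1.1
Upgrade: h2c
Connection: Upgrade
X-Docker-Expose-Session-Uuid: <session id>
X-Docker-Expose-Session-Grpc-Method: /moby.filesync.v1.FileSync/DiffCopy
X-Docker-Expose-Session-Grpc-Method: /moby.filesync.v1.FileSync/TarStream
```

The daemon hijacks the connection on upgrade and can then dial back over it as a gRPC client, calling any of the advertised methods.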

The client API package adds only one new method, client.DialSession, which is used for requesting a hijacked connection over the HTTP API. The other logic lives in separate packages so that the api package doesn't add any extra dependencies.

client/session (location tbd) contains the actual code for creating the interactive sessions. It is called by the CLI and takes client.DialSession as a dependency. This package also defines the interfaces that need to be implemented to expose new features or new transports on top of the session.

The implementation for incrementally sending filesystem changes is in client/session/fssession. This uses the fs changes stream algorithm from containerd, modified to compare files from the client with the files from the previous build instead of comparing an rwlayer to an rolayer. It currently uses the helper package tonistiigi/fsutil for the file checks; this package would likely go away in the future, with its responsibility split between the containerd and continuity repos.
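The change-stream idea can be pictured with a toy sketch. This is an illustration of the concept only, not the fsutil/containerd code: it diffs two snapshots of path-to-content-hash maps instead of walking real directories, and classifies each difference as an add, modify, or delete, which is the shape of update an rsync-like protocol would ship:

```go
package main

import (
	"fmt"
	"sort"
)

// ChangeKind mirrors the add/modify/delete classification of a
// containerd-style changes stream.
type ChangeKind string

const (
	Add    ChangeKind = "add"
	Modify ChangeKind = "modify"
	Delete ChangeKind = "delete"
)

type Change struct {
	Kind ChangeKind
	Path string
}

// Changes computes the change set that turns snapshot prev into cur.
// Each snapshot is a map of file path -> content hash.
func Changes(prev, cur map[string]string) []Change {
	var out []Change
	for path, hash := range cur {
		old, ok := prev[path]
		switch {
		case !ok:
			out = append(out, Change{Add, path})
		case old != hash:
			out = append(out, Change{Modify, path})
		}
	}
	for path := range prev {
		if _, ok := cur[path]; !ok {
			out = append(out, Change{Delete, path})
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Path < out[j].Path })
	return out
}

func main() {
	prev := map[string]string{"Dockerfile": "aaa", "foo": "bbb"}
	cur := map[string]string{"Dockerfile": "aaa", "foo": "ccc", "bar": "ddd"}
	for _, c := range Changes(prev, cur) {
		fmt.Println(c.Kind, c.Path) // prints: add bar / modify foo
	}
}
```

Only the changed entries cross the wire, which is why the second build in the demo below streams 55B instead of 109.3MB.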

The fscache component implements the access point for filesystem data that has been transferred from external sources. After a build, the build context remains there so it can be used by subsequent invocations. The GC routine of docker prune (tbd) can delete this data at any time, which would cause the client to resend the full context next time.

Testing out:

The feature is behind an experimental flag, so to test it the daemon needs to be started with --experimental.

To invoke a build using the session stream instead of sending the tar context, you have to set the --stream flag.

Currently, for testing individual components, this will use the session but still transfer the files with a tar archive (so there shouldn't be a meaningful performance gain). To use the protocol that can do rsync-like incremental updates, set the env variable BUILD_STREAM_PROTOCOL to diffcopy.

# BUILD_STREAM_PROTOCOL=diffcopy docker build --stream .
Streaming build context to Docker daemon  109.3MB
Step 1/2 : FROM busybox
 ---> d9551b4026f0
Step 2/2 : COPY . .
 ---> a4c80257278d
Removing intermediate container 36029a58b3c4
Successfully built a4c80257278d

# BUILD_STREAM_PROTOCOL=diffcopy docker build --stream .
Streaming build context to Docker daemon      55B
Step 1/2 : FROM busybox
 ---> d9551b4026f0
Step 2/2 : COPY . .
 ---> Using cache
 ---> a4c80257278d
Successfully built a4c80257278d

# echo "foobar" > foo
# BUILD_STREAM_PROTOCOL=diffcopy docker build --stream .
Streaming build context to Docker daemon      90B
Step 1/2 : FROM busybox
 ---> d9551b4026f0
Step 2/2 : COPY . .
 ---> d50e63e1fe1d
Removing intermediate container 0561b43df53e
Successfully built d50e63e1fe1d
simonferquel (Contributor) commented Apr 18, 2017

As discussed while working on credential pulls, the actual session ID should not be part of the Docker remote context identifier: in the future we will want to leverage this session for other things than the build context, and we will possibly have cases where context streaming is off but other features like credential pulls or copy-out are required.
You can see a candidate design here: https://github.com/simonferquel/docker/tree/remote-context-credentials-ondemand.

Additionally, client-side services seem a bit clunky both to declare and to discover. I would really like an API on the client like session.Expose(reflect.TypeOf((*myInterface)(nil)).Elem(), myImplementation) and, on the backend, svc, ok := session.Discover(reflect.TypeOf((*myInterface)(nil)).Elem()) (there might be a better way to get hold of an interface type, though). That would certainly require describing the shared interfaces in .proto files, but it would feel both more robust and cleaner than using strings for method names and maps of slices of strings for parameters.


tonistiigi (Member) commented Apr 24, 2017

@simonferquel @dnephin @cpuguy83 @dmcgowan What are your thoughts on the transport interface for this? I only made the generic interface https://github.com/moby/moby/pull/32677/files#diff-8d234f28fa68cff9ee839315dbeaeb49R51 because I couldn't decide in favor of a single transport (grpc callbacks, websocket, single hijacked grpc stream, ssh, etc.). If we only support grpc then this could be described more with proto files. I still wouldn't prefer to use grpc generation directly because I think the callbacks should follow the object.method pattern instead of exposing functions. But we could solve that with a proto plugin that wraps the grpc code, adds this, and also automatically registers the headers for the supported methods. In current testing, grpc seems to work quite well. One of the issues is that the proto parser currently does unnecessary allocations https://github.com/grpc/grpc-go/blob/38df39bad1892a821a4feac7f7506a19a13661f2/rpc_util.go#L231-L233 but that could be patched.

I also made another PoC implementation using this, for exposing ssh signers to the builder with docker build --ssh . tonistiigi@a175773 It is currently missing limiting the auth socket to specific keys and forbidding key-management requests.


simonferquel (Contributor) commented May 2, 2017

I have added 2 PRs directly on Tonis' branch to fix issues on Windows:

  • tonistiigi#2 (which is basically a cherry-pick of #32959, fixing issues with tarsums)
  • tonistiigi#3 which fixes an issue with dockerfile not being removed from the context on the daemon side. This should make all tests pass on Windows.
thaJeztah (Member) commented May 2, 2017

Looks like this needs a rebase


tonistiigi changed the title from [WIP] Implement long running interactive session and sending build context incrementally to Implement long running interactive session and sending build context incrementally on May 16, 2017

tonistiigi (Member) commented May 16, 2017

I've removed the WIP indicator, rebased after the cli split, and added the persistent cache and garbage collection. I removed the transport interface and replaced it with grpc-only logic similar to what @simonferquel suggested. The discovery part is an optional, independent layer. It is generated from proto, but gRPC itself does not rely on it; gRPC only uses the generated proto code to connect a client with the handler.

# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              3                   1                   479.5MB             479.5MB (100%)
Containers          2                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
Build Cache                                                 178MB               178MB

I've separated out the implementation of the client session itself from the file syncing, which is one example feature on top of it; they can be reviewed separately, and the order in which we merge the actual features doesn't matter. I've also included the CLI update to ease testing; I will remove it before merge.

Base: master...tonistiigi:client-session-base-nocli
tonistiigi/docker-cli@master...tonistiigi:client-session-base-nocli

File syncing:
tonistiigi/docker@tonistiigi:client-session-base-nocli...client-session-fssession-nocli
tonistiigi/docker-cli@tonistiigi:client-session-base-nocli...client-session-fssession-nocli

@dnephin @simonferquel @dmcgowan @tiborvass


simonferquel (Contributor) commented May 17, 2017

I am still not sure about the Supports method at the method level. If we want to keep it, we should at least encapsulate it in the fssession package (version negotiation of client-side services should happen at the protocol level). So from fscache we should just have a function call like:

fssyncProtocol, err := fssession.NegociateSyncProtocol(caller)

This way client code does not have to know about the notion of method URLs, etc.

In terms of versioning, I propose that we experiment a little with the following scenarios:

  • Client and server have version n: they both know only protocol a for file sync
  • Client and server have version n+1: they both know protocols a and b, and when they negotiate they should use b, as it is more efficient / provides more features
  • Client has version n and server has version n+1: they should negotiate to find the best common protocol
  • Client has version n+1 and server has version n: they should negotiate to find the best common protocol

The fs cache case is quite easy, as both protocols provide the same value (with more or less efficiency), but if we take the example of authentication, we could have:

  • v1: only exposes GetRegistryAuthConfig(registry string) types.AuthConfig
  • v2: also exposes GetAccessToken(registry string, oauthParams OAuthParams) string

In the case where the client CLI only implements v1, a v2 daemon should implement v2 on top of the v1 client (getting the OAuth access token using the AuthConfig provided by the v1 client), in a transparent manner.
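The "best common protocol" negotiation in these scenarios can be sketched in a few lines. The function name and shape here are illustrative, not the PR's API: the local side ranks its protocols from best to worst and picks the first one the remote side also supports, so mixed client/daemon versions fall back to the best shared option:

```go
package main

import "fmt"

// negotiate returns the first protocol in localPreferred (ordered best to
// worst) that also appears in remoteSupported, implementing the
// best-common-protocol fallback described in the scenarios above.
func negotiate(localPreferred, remoteSupported []string) (string, error) {
	remote := map[string]bool{}
	for _, p := range remoteSupported {
		remote[p] = true
	}
	for _, p := range localPreferred {
		if remote[p] {
			return p, nil
		}
	}
	return "", fmt.Errorf("no common protocol")
}

func main() {
	// A version n+1 side prefers "b" but falls back to "a"
	// when the remote side only knows "a".
	p, err := negotiate([]string{"b", "a"}, []string{"a"})
	fmt.Println(p, err) // prints: a <nil>
}
```

Both directions of version skew reduce to the same computation, since each side only needs its own ranked list and the other side's supported set.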


tonistiigi (Member) commented May 17, 2017

@simonferquel Supports is already only called in fssession. https://github.com/moby/moby/pull/32677/files#diff-c9cd583e507bd7a151ab39450cf2f9f9R144 is the only place it is called, and that section is the whole negotiation area. Using discovery is optional, as you could just call the grpc client, let it fail, and then call the fallback; it would just be more complicated and less performant to handle the error cases like that.

Updates to the API should follow gRPC best practices. If types get updated, you just need to keep the proto backward compatible. Method signatures should not be changed (unless deprecating), and new methods are added if new functionality is needed.

All of the upgrade patterns you described should work properly. The local and remote definitions are compared to find the best method that the local side knows about and the remote side supports. It should work exactly the same for auth if that is done in multiple steps. It should be up to the auth feature to decide whether to expose both methods and make user code choose, or to expose a generic "authenticate" interface that uses the best implementation supported by the remote under the hood.


@simonferquel

Does the fsutil package contain the fixes I wrote for Windows? I did not check that.

Review thread on client/session/fssession/send.go:
// Caller can invoke requests on the session
type Caller interface {
Context() context.Context
Supports(method string) bool

simonferquel (Contributor) commented May 18, 2017

It would be nice if we could have both SupportsMethod and SupportsService methods, so that service implementers can choose whether to use a per-method or a per-service versioning strategy.


simonferquel (Contributor) commented May 18, 2017

Additionally, it would be more convenient for user code if SupportsMethod took a (serviceName, methodName) pair instead of a method URL.


tonistiigi (Member) commented May 18, 2017

The gRPC wire definition only defines a request/response structure based on a URL (path). You can always determine the service by checking part of that path. You could write SupportsService based on the current info if you know the semantics of the gRPC URL, but I don't see a reason to add it at the moment, as I don't see a use case for it. Avoiding the helper would be more convenient, but user code doesn't need to call it. I'd prefer to keep it in terms of the wire format to make sure this is not abused with random strings.
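Since gRPC method paths have the fixed form /package.Service/Method, a SupportsService helper could be layered on top of a per-method check just by extracting the service segment. A small illustrative sketch (not code from the PR; the example method path matches the FileSync naming used elsewhere in this discussion):

```go
package main

import (
	"fmt"
	"strings"
)

// serviceOf extracts the service segment from a gRPC method path of the
// form /package.Service/Method. It reports false for anything that does
// not match that shape, which guards against "random strings".
func serviceOf(methodURL string) (string, bool) {
	parts := strings.Split(strings.TrimPrefix(methodURL, "/"), "/")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", false
	}
	return parts[0], true
}

func main() {
	svc, ok := serviceOf("/moby.filesync.v1.FileSync/DiffCopy")
	fmt.Println(svc, ok) // prints: moby.filesync.v1.FileSync true
}
```

Keeping checks keyed on the wire-format path, as argued above, means this helper is pure string slicing with no extra registration machinery.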


Outdated review threads: cmd/dockerd/daemon.go, hack/dockerfile/binaries-commits
tonistiigi (Member) commented May 19, 2017

@simonferquel Yes, your PRs for fsutil have been merged. Feel free to do more Windows testing, though.


@dnephin

I'll have to resume reviewing later today.

I got up to builder/remotecontext/tarsum.go, skipping builder/fscache/fscache.go.

Outdated review threads: builder/dockerfile/builder.go, builder/dockerfile/clientsession.go, builder/remotecontext/detect.go
@dnephin

OK, I think I've taken a look at everything but builder/fscache/fscache.go (will look at it next).

Most of my comments are code-review related. I don't know if I completely understand all the grpc/session pieces, but from what I understand it looks good.

When this goes to code review it will need some API integration tests and a check in hack/validate that the pb.go files are generated correctly from the source.

Outdated review threads: client/session.go, client/session/filesync/filesync.go, client/session/manager.go, cmd/dockerd/daemon.go, hack/validate/gofmt, hack/dockerfile/binaries-commits
tonistiigi (Member) commented Jun 13, 2017

@dnephin Thanks for the review! I've rebased again and added a commit for the daemon start-method refactor you asked for.


vieux (Collaborator) commented Jun 14, 2017

@tonistiigi do you know if the PowerPC error is related?


@simonferquel

I think we should also introduce new integration tests. They may be quite complicated to do, but they feel necessary:

  • A test with a daemon in a version that knows nothing about long-running sessions
  • A test with a client version that knows nothing about long-running sessions

I think this change is very useful, but also very pervasive, and we should make sure we don't have regressions.

@@ -77,6 +103,36 @@ func (bm *BuildManager) Build(ctx context.Context, config backend.BuildConfig) (
return newBuilder(ctx, builderOptions).build(source, dockerfile)
}
func (bm *BuildManager) initializeClientSession(ctx context.Context, cancel func(), options *types.ImageBuildOptions) (builder.Source, error) {

simonferquel (Contributor) commented Jun 15, 2017

When other features are based on the client session (such as registry auth, secrets, ...), it would be good to split this function into initializeClientSession (returning a session.Caller) and initializeRemoteContext for initializing the builder.Source.


Outdated review thread: builder/dockerfile/clientsession.go
tonistiigi (Member) commented Jun 15, 2017

@tophj-ibm Can you check the size of that file on Power? On my system it seems to be 16K. That test seems to expect that no file created by docker ever exceeds 2MB.


tophj-ibm (Contributor) commented Jun 15, 2017

@tonistiigi sure, which file?


tonistiigi (Member) commented Jun 15, 2017

@tophj-ibm /tmp/no-space-left-on-device-test110029009/test-mount/builder/fscache.db. Or /var/lib/docker/builder/fscache.db if you are running with default options.


tophj-ibm (Contributor) commented Jun 15, 2017

Interesting, as I'm seeing it as 262KB.


@vdemeester

LGTM 🐮

simonferquel (Contributor) commented Jun 16, 2017

I suppose my comments can be implemented in another PR -> LGTM :)


tonistiigi (Member) commented Jun 16, 2017

I removed the cli commit. Old branch with that commit is in https://github.com/tonistiigi/docker/tree/builder-remote-context-4-withcli

@simonferquel I added the rename change and an integration-cli test for the session endpoint. As new integration tests using the CLI are not allowed, it's hard to do better at the moment. I did run the whole TestBuild* suite locally with an updated cli and BuildCmd wrapped to always use streaming. I can probably add e2e tests for it as well once it is merged. The file-transfer functions themselves are covered by unit tests.

For the Power test, I increased the minimal disk size, as these machines seem to need more room. It is not that this PR needs a lot, but it seems we were much closer to the old limit on Power before.

The swagger YAML was also updated.


tophj-ibm (Contributor) commented Jun 16, 2017

@tonistiigi thanks! The test change seems fine to me. I'll keep looking to see if I can figure out why it would be bigger only on Power.


@dnephin

I'm going to take another pass at this today, but I noticed this is very light on tests.

We need some API tests for the builder using the new feature. We should not need the CLI for this: we can create a temp directory and run a build that uses the session with the client/ package.

@dnephin

Some unit tests for fsCacheStore would be great.

Outdated review threads: builder/fscache/fscache.go
tonistiigi (Member) commented Jun 16, 2017

We need some api tests for the builder using the new feature

@dnephin That would mean duplicating lots of code from the cli, as this is not a trivial feature. I agree that this is a problem, but I'm not sure duplicating the cli is the right solution. One option would be moving more cli builder features into a client library, so that the client functions actually correspond to builder behaviors rather than HTTP requests. This is not a problem new to this PR; lots of builder features like remote sources, dockerignore handling, and content trust are untested by API tests. These can be tested separately, as can the transfer functions (some tests are in the vendored repo, though). As I commented earlier, this feature is actually very easy to test with a wrapper around BuildCmd, but it's politics that stops me from including it.


dnephin (Member) commented Jun 16, 2017

I don't really see it as duplication. Every client is going to have some similarities. It seems like there is enough code in client/ already that a reasonable test could be written.

tonistiigi (Member) commented Jun 19, 2017

@dnephin Updated. PTAL

vieux (Collaborator) commented Jun 19, 2017

LGTM

vieux (Collaborator) commented Jun 20, 2017

@tonistiigi can you please rebase?
@dnephin any other comment?

dnephin approved these changes Jun 22, 2017 (edited)

LGTM

Testing nits can be handled later

defer fscache.Close()
err = fscache.RegisterTransport("test", &testTransport{})
assert.Nil(t, err)

dnephin (Member) commented Jun 22, 2017

Very minor nit: I generally use require.Nil(t, err) for errors. The difference is that assert is not fatal: it keeps running the rest of the function, whereas require is fatal. For errors I generally think the check should end the test, because the broken state is probably going to make the rest of the assertions fail. Using assert here will make some test failures a bit harder to read; sometimes you end up with a nil panic after the assertion error.

I think it's fine for now.

assert.Nil(t, err)
assert.Equal(t, string(dt), "data")
// same id doesn't recalculate anything

dnephin (Member) commented Jun 22, 2017

It would be nice to have these as separate cases so that the cause of a failure is more obvious.

tonistiigi (Member) commented Jun 22, 2017

These do use the same state that is generated by the previous cases.

tonistiigi added some commits May 15, 2017

Add long-running client session endpoint
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
vendor: update deps for fssession
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
session: update swagger yaml
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Implement incremental file sync using client session
Also exposes shared cache and garbage collection/prune
for the source data.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Improve routes initialization
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
vieux (Collaborator) commented Jun 22, 2017

LGTM, let's take care of the nits if needed in a follow-up PR

@vieux vieux merged commit 050c1bb into moby:master Jun 22, 2017

7 checks passed

dco-signed: All commits are signed
experimental: Jenkins build Docker-PRs-experimental 35223 has succeeded
janky: Jenkins build Docker-PRs 43830 has succeeded
powerpc: Jenkins build Docker-PRs-powerpc 4198 has succeeded
vendor: Jenkins build Docker-PRs-vendor 3551 has succeeded
windowsRS1: Jenkins build Docker-PRs-WoW-RS1 15152 has succeeded
z: Jenkins build Docker-PRs-s390x 3926 has succeeded