Conversation
@@ -0,0 +1,82 @@
package common
Unfortunately this is all duplicated from warp-server, but it's not too much. Better than trying to share the code for now and complicating the release process, IMO.
    }
}()

// The image pull doesn't actually happen until you read from this stream, but we don't need the output.
Not at all blocking, but does this reader provide progress updates? If so, they might be worth logging as a nice-to-have
I believe so. I'll look into logging this as a follow-up.
}

log.Debugf(ctx, "Checking if sidecar image %s exists locally", assignment.SidecarImage)
_, err = dockerClient.ImageInspect(ctx, assignment.SidecarImage)
Should we have an ImageInspect check for the base image before pulling too? Alternatively, if the Docker daemon caches each layer, it might be ok to pull each time for both the base image and the sidecar.
Yes, ideally we rely on the cache and do not call ImageInspect at all. However, the reason I am calling ImageInspect for the sidecar is that the sidecar is currently in a private repo (in nscr.io), and this was a way to side-step the issue of authenticating to nscr.io for now. I built the sidecar locally on the host I was testing this stuff on.
I think we can generally assume/require that users will handle authenticating their Docker daemon with the repository that their base images live in. However, we shouldn't expect them to access our private Namespace repository containing the agent sidecar image. This really should be a public image. Once we make it public, I'll delete this ImageInspect call.
log.Infof(ctx, "Extracting sidecar filesystem to volume")

// Use an arbitrary image to copy the exported filesystem onto a volume.
alpineImage := "alpine:latest"
Don't feel strongly, but would it be possible to skip a step and run a container using the agent sidecar image that copies its own root filesystem to the target?
That should definitely be possible. I did it this way to minimize assumptions about the agent sidecar, but since we control the sidecar I think we can uphold that.
}

// Task represents a work item in the queue
type Task struct {
Something we can iterate on later, but I wonder if the task schema we send over the websocket should be more distinct from our internal data model (e.g. we don't necessarily need to expose the task source details or worker data). We could even determine all CLI args on the server, and send the final list to the worker
Yeah I need to prune these fields.
This just gets the thing roughly working. Warp environments should be fully supported and tasks are completing in my local testing. There's still more to do.
Note that I have not yet attempted to add the interface that the Docker Sandbox implementation will sit behind. I'll do that once we actually start working on the Docker sandbox.
Corresponding warp-server PR coming very soon.