cli: add demo subcommand #4988
Conversation
I'm finding it hard to find a continuous block of time in my schedule long enough to review this PR, since there are a lot of independent moving pieces. Would it be possible to break it up a bit? (e.g., the new Docker client functionality separate from the other stuff)
internal/cli/demo/k3d.go (Outdated)
```go
func (k *K3dClient) command(ctx context.Context, cmd []string, stdout io.Writer, stderr io.Writer, wait bool) error {
	k.ensurePulled.Do(func() {
		ref, err := k.cli.ImagePull(ctx, k.k3dImage)
```
is there a reason why you chose to pull the image up-front rather than Pull as part of the Run?
Just to speed things up a bit: we run multiple k3d commands, each one in an ephemeral container, so the pull is redundant after the first time. This isn't hugely critical at startup, but on shutdown we have < 2 sec to fire off the Docker API request before the signal handler forcibly terminates the process 😬

(For reference, it takes >500ms to `ImagePull()` an image I've already got pulled, which I'm guessing is largely local I/O to find the image digest plus the network request to the registry API to compare.)
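The pull-once behavior described above can be sketched with `sync.Once`. This is a hypothetical trimmed-down version of the client, not the PR's actual code: the pull is injected as a function so the example stays self-contained (the real code calls the Docker SDK's `client.ImagePull`).

```go
package main

import (
	"fmt"
	"sync"
)

// K3dClient is an illustrative stand-in for the client in this PR. The pull
// function is injected so this example runs without a Docker daemon.
type K3dClient struct {
	ensurePulled sync.Once
	pull         func() error
	pullErr      error
}

// command runs one ephemeral k3d invocation; the image is pulled at most once
// per process, so later calls (e.g. the pre-shutdown teardown) skip the
// >500ms pull round-trip.
func (k *K3dClient) command() error {
	k.ensurePulled.Do(func() {
		k.pullErr = k.pull()
	})
	return k.pullErr
}

func main() {
	pulls := 0
	k := &K3dClient{pull: func() error { pulls++; return nil }}
	for i := 0; i < 3; i++ {
		_ = k.command()
	}
	fmt.Println("pulls:", pulls) // → pulls: 1
}
```

Note that `sync.Once` also caches a pull *failure* for the life of the process; whether that's acceptable depends on how the real client handles retries.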
internal/cli/demo/k3d.go (Outdated)
```go
		}
	})

	runOpts := docker.RunOptions{
```
is the idea of this to avoid tooling to install k3d?
I'm a little bit concerned about having k3d state living inside of a container and being hard to clean up
is the idea of this to avoid tooling to install k3d?
Precisely - the value prop of "download tilt and run `tilt demo` to try it out" vanishes if you need to download k3d/minikube/KIND first - at that point, you might as well actually create the cluster yourself 🙃
Running it via Docker seemed faster to develop [vs creating a half-baked package manager], especially since Docker is still the one hard dependency for this (unless we go full Lima VMs 😉)
(TBH I think that either an approach like this in Docker OR one that actually downloads k3d/kind/minikube before exec'ing if needed would probably be a good addition to ctlptl, and ideally we could use that here.)
I'm a little bit concerned about having k3d state living inside of a container and being hard to clean up
There's not any state other than the Docker containers it creates - each invocation of `k3d` actually runs in an ephemeral container and gets thrown away. Clean-up wise, I delegated to `k3d cluster delete` for ease, but it might actually be saner to just use some labels and delete the containers directly based off those, which is presumably what it's doing anyway. (The generated kubeconfig is at a tmp path so it doesn't interfere with any the user might already have, so there's no config cleanup needed.)
LMK if that makes sense or if I'm overlooking something here!
I went ahead and opened #4995 with the k3d-via-Docker client portion to (hopefully?) make it easier to discuss
Yes, I'll chunk this up! Sorry, it ended up much longer than expected once done.
This is in preparation for directly creating+running containers with Docker, where it's necessary to do a pull first to ensure the image exists before creating the container (this is automatic behavior from the CLI but not when using the API).
Single-shot method to create and run a container, similar to `docker run`. The container create + container run semantics of the Docker API are more low-level than we care about right now, so it's easier to encapsulate these into a single method. It also has some convenience bits around getting container logs and the execution result (which has quirky semantics).
`tilt demo` is meant for first-time Tilt users who want to explore without needing to get a local K8s cluster up and running, intended for use as part of the revamped tutorial. It will create an ephemeral cluster using k3d+k3s against the local Docker daemon, which is torn down on exit. (In the event that something goes awry, `tilt demo --teardown` will clean up any k3d clusters whose names start with `tilt-demo-`.) Then, it will clone the `tilt-avatars` project (using `go-get` that we use for cloning extensions) and simulate a `tilt up` against it, using the generated kubeconfig.
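The `--teardown` selection rule above amounts to a name-prefix filter. A minimal sketch, assuming a `TeardownTargets` helper (a name invented here for illustration; the real command would get the cluster list from k3d itself):

```go
package main

import (
	"fmt"
	"strings"
)

// TeardownTargets picks out the clusters that `tilt demo --teardown` would
// delete: anything whose name carries the tilt-demo- prefix, leaving the
// user's other clusters untouched. Illustrative helper, not the PR's code.
func TeardownTargets(clusters []string) []string {
	var targets []string
	for _, name := range clusters {
		if strings.HasPrefix(name, "tilt-demo-") {
			targets = append(targets, name)
		}
	}
	return targets
}

func main() {
	fmt.Println(TeardownTargets([]string{"tilt-demo-a1b2", "prod", "tilt-demo-c3d4"}))
	// → [tilt-demo-a1b2 tilt-demo-c3d4]
}
```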
Force-pushed from 90b9ea6 to cee0c29
(This is rebased and JUST contains the demo CLI stuff now, so is shorter)