
skaffold build: --kube-context flag does not default correctly. #4477

Open
j-windsor opened this issue Jul 15, 2020 · 9 comments
Labels
area/build cloud-code/cloud-run has-workaround kind/bug Something isn't working priority/p3 agreed that this would be good to have, but no one is available at the moment.

Comments


j-windsor commented Jul 15, 2020

Expected behavior

skaffold build --kube-context='' and skaffold build should output the same thing, given that the CLI docs specify that the default for this flag is --kube-context=''.

Actual behavior

skaffold build seems to take the active context as the kube-context, not '' as documented.

When I build and push images to GCR for a later deployment (to Cloud Run), I am not paying attention to what my active context is, but sometimes it is a stopped minikube cluster. I would expect skaffold build with the implicit default of --kube-context='' to act as a 'contextless' build that doesn't assume a local cluster, but I instead get the error...

Failed to build the app. Error: creating runner: creating builder: getting docker client: getting minikube env: running [minikube docker-env --shell none]
 - stdout: "* The control plane node must be running for this command\n  - To fix this, run: \"minikube start\"\n"
 - stderr: ""
 - cause: exit status 69

Information

  • Skaffold version: v1.12.0
  • Operating system: macOS v10.15.4
tejal29 (Member) commented Jul 15, 2020

This could be related to #4347

tejal29 (Member) commented Jul 15, 2020

@j-windsor Can you please paste the output of

skaffold config list -a

I want to see if there are any global defaults set.

@j-windsor (Author)

Note that this is the managed Skaffold in Cloud Code for VS Code.

➜  bin ./skaffold config list -a
skaffold config: 
global:
  survey:
    last-prompted: "2020-07-06T09:56:37-07:00"
kubeContexts: []

briandealwis (Member) commented Jul 30, 2020

I checked to see if I could build with Skaffold without any contexts or minikube, and it worked fine:

$ rm ~/.kube/config
$ kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
$ minikube delete
🙄  "minikube" profile does not exist, trying anyways.
💀  Removed all traces of the "minikube" cluster.
$ cd examples/getting-started
$ skaffold build --default-repo briandealwis --cache-artifacts=false
Generating tags...
 - skaffold-example -> briandealwis/skaffold-example:v1.12.0-103-gcca734ed5-dirty
Building [skaffold-example]...
[...]
Successfully tagged briandealwis/skaffold-example:v1.12.0-103-gcca734ed5-dirty

/me now trying to recover my ~/.kube/config file.

@briandealwis (Member)

So is the answer here really to change the documented default to show --kube-context=<active-context> instead?

@gsquared94 gsquared94 added the priority/p2 May take a couple of releases label Aug 10, 2020
@briandealwis (Member)

@j-windsor so this is a funny situation! Your code is executing:

const args = ['build', '--filename', skaffoldConfigPath, `--kube-context=''`];
const buildProcess = await this.options.skaffoldClient.executeAsChildProcess(args, ...

This appears to be using an exec of some form, rather than running via a shell, and so the '' is passed through literally (exec doesn't do shell-style argument processing). That kube context (maybe we should just call it a kontext 😄) doesn't match our minikube detector, so we don't try obtaining the docker context returned by minikube docker-env, and your build happens to run under your local docker daemon.

The docs are correct in that an empty kontext is meant to refer to kubectl's active kontext (that doc string is auto-generated by the command-argument library that we use). You're just passing in an explicit kontext.

Arguably you've revealed a bug in that we're not validating the kontext. And ideally we want to keep skaffold run --kube-context=XXX as close to skaffold build --kube-context=XXX && skaffold deploy --kube-context=XXX as possible, and that won't be the case with a literal ''. We should probably explicitly validate the --kube-context; I'm guessing it is validated implicitly when we use kubectl apply --dry-run to validate the manifests.

But you have a valid use-case: we should support builders that don't require a cluster (e.g., Jib and Bazel builds don't require a cluster).

Perhaps that would be best addressed by ensuring our minikube docker-env detection works if minikube is stopped?

@j-windsor (Author)

@briandealwis I think that fix could work! I understand the desire to keep skaffold run == build && deploy, and we are probably breaking that flow by trying to use Skaffold as a builder for something other than k8s. We simply need a way to build and push to GCR regardless of the current state of minikube.

nkubala (Contributor) commented Aug 12, 2020

thanks for digging into this one @briandealwis, I think this is actually exposing several different issues:

  • we should validate the kube context before we start the skaffold pipeline - I thought this was already the case, but seemingly there's a disconnect somewhere
  • we should have a fallback in skaffold for when we do a minikube docker-env and things don't work correctly
  • arguably, instead of that last point, we should stop doing minikube docker-env whenever possible, and instead build the images locally and push to minikube, or build directly in minikube
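
The fallback in the second bullet could look something like the sketch below (hypothetical names; Skaffold itself is written in Go, this only shows the shape of the idea): try minikube docker-env, and on any failure — such as the stopped cluster in the original report — keep the host docker daemon's environment instead of aborting the build.

```typescript
import { execFile } from 'child_process';
import { promisify } from 'util';

const run = promisify(execFile);

// Hypothetical fallback: ask minikube for its docker daemon settings; if
// minikube is absent or stopped, return an empty override so the build
// proceeds against the host docker daemon instead of failing with
// "The control plane node must be running".
async function minikubeDockerEnv(): Promise<Record<string, string>> {
  try {
    const { stdout } = await run('minikube', ['docker-env', '--shell', 'none']);
    const env: Record<string, string> = {};
    // `--shell none` output is plain KEY=VALUE lines, one per variable.
    for (const line of stdout.split('\n')) {
      const eq = line.indexOf('=');
      if (eq > 0) env[line.slice(0, eq)] = line.slice(eq + 1);
    }
    return env;
  } catch {
    return {}; // minikube unusable: leave the local docker env untouched
  }
}
```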

@nkubala nkubala added priority/p1 High impact feature/bug. and removed priority/p2 May take a couple of releases triage/discuss Items for discussion labels Aug 24, 2020
@nkubala nkubala added this to the v1.15.0 milestone Aug 24, 2020
@nkubala nkubala modified the milestones: v1.15.0, Backlog Sep 1, 2020
@briandealwis (Member)

Or maybe we need a way to say ignore minikube and build with the local docker daemon.

@briandealwis briandealwis added priority/p3 agreed that this would be good to have, but no one is available at the moment. and removed priority/p1 High impact feature/bug. labels Oct 8, 2020
@nkubala nkubala removed this from the Backlog [P0/P1] milestone May 11, 2021