Add new option "env-from" for `kubectl run` command. #48684

Closed · wants to merge 1 commit into master from xingzhou:kube-48361

Conversation

@xingzhou (Contributor) commented Jul 10, 2017

Proposes to add a new "env-from" option for the kubectl run command so that users
can configure ConfigMap or Secret env variables from the run command.

Fixed #48361

Release note:

None
@k8s-ci-robot (Contributor) commented Jul 10, 2017

Hi @xingzhou. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pwittrock (Member) commented Jul 11, 2017

/unassign

@pwittrock (Member) commented Jul 11, 2017

/assign @shiywang

@pwittrock (Member) commented Jul 11, 2017

/assign @mengqiy

@pwittrock (Member) commented Jul 11, 2017

@alexandercampbell FYI, this should probably wait until your refactor is finished.

},
},
}
envFroms = append(envFroms, envFrom)

@shiywang (Member) Jul 11, 2017

This line and line 952 are the same; could it be moved out of the code block and merged into one?

if len(refType) == 0 || len(name) == 0 {
return nil, fmt.Errorf("invalid envFrom: %v", envFrom)
}
if refType == "ConfigMapRef" {

@shiywang (Member) Jul 11, 2017

I would prefer a switch instead of an if/else if/else chain.

@xingzhou (Contributor) Jul 12, 2017

OK, let me update the code.

@shiywang (Member) commented Jul 11, 2017

/ok-to-test

@alexandercampbell (Member) left a comment

Thanks!

@@ -117,6 +123,7 @@ func addRunFlags(cmd *cobra.Command) {
cmd.Flags().Bool("rm", false, "If true, delete resources created in this command for attached containers.")
cmd.Flags().String("overrides", "", i18n.T("An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field."))
cmd.Flags().StringSlice("env", []string{}, "Environment variables to set in the container")
cmd.Flags().StringSlice("envFrom", []string{}, "Environment variables to set from ConfigMap or Secret in the container")

@alexandercampbell (Member) Jul 11, 2017

This flag should be named "env-from" to fit with our standard.
I can find the specific documentation requiring this if you want.

@xingzhou (Contributor) Jul 12, 2017

Yep, let me correct it.

envFromStrings := cmdutil.GetFlagStringSlice(cmd, "envFrom")
if len(envFromStrings) != 2 || !reflect.DeepEqual(envFromStrings, test.expected) {
t.Errorf("expected: %s, saw: %s", test.expected, envFromStrings)
}

@alexandercampbell (Member) Jul 11, 2017

This test doesn't test any of our code.

@xingzhou (Contributor) Jul 12, 2017

This is just a test to get the flag from cmd; I saw there is a similar one for env, so I added this one. I can remove it.

}
delete(genericParams, "envFrom")
} else {
return nil, fmt.Errorf("expected []string, found: %v", envFromStrings)

@xingzhou (Contributor) Jul 12, 2017

will update the code then.

@xingzhou xingzhou force-pushed the xingzhou:kube-48361 branch from d86537c to c3a3340 Jul 12, 2017

@xingzhou xingzhou force-pushed the xingzhou:kube-48361 branch from c3a3340 to 5f52c2f Jul 12, 2017

@xingzhou xingzhou changed the title Add new option "envFrom" for `kubectl run` command. Add new option "env-from" for `kubectl run` command. Jul 12, 2017

@xingzhou (Contributor) commented Jul 12, 2017

@alexandercampbell and @shiywang, I updated the patch according to your comments. PTAL, thx

continue
}
if !reflect.DeepEqual(envFroms, test.expected) {
t.Errorf("\nexpected:\n%#v\nsaw:\n%#v (%s)", test.expected, envFroms, test.test)

@alexandercampbell (Member) Jul 13, 2017

Nice, I like this test style.

@alexandercampbell (Member) commented Jul 13, 2017

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label Jul 13, 2017

@pwittrock (Member) commented Aug 7, 2017

Sorry, I meant: what is your use case where you want to use run over configuration files to run your workloads?

@nurus commented Aug 7, 2017

Thanks for clarifying. We actually do use config files for our deployments and services. However, sometimes it's necessary to run a one-off interactive session or a task within a container running the same image as the deployed application. For that we don't want to have to create a config for a job or deployment and then exec into the container. By using --env-from, the container will have all the configuration it needs to mimic the deployed application. It's also nice to have the pod automatically terminate once the user exits or a one-off task completes.

@pwittrock (Member) commented Aug 8, 2017

@nurus Thanks for the explanation. Are you using run to create a job or a deployment? Is this primarily for development, debugging, or something else? Are you doing this in the same cluster? If so, have you considered orphaning a Pod and using that?

We definitely want to avoid supporting everything in the API as a flag, but want to support the most common / important use cases.

@lev-kuznetsov commented Aug 8, 2017

> Why are you using kubectl run instead of config?

It's not possible to use kubectl run -f file.yaml as far as I'm aware. What should I be doing instead?

@nurus commented Aug 8, 2017

No problem @pwittrock. We use run for development and diagnostics. We wouldn't use it to run a highly available application. Our applications usually consist of a web server and workers of various types, which are all deployed using deployment configuration files to ensure high availability.

The applications tend to be monolithic and require the entire codebase to be baked into the image. The codebase contains methods and scripts related to the application usually for interacting with persistent storage (database, elasticsearch, etc.). The code itself relies on environment variables that it uses as parameters to connect to the persistent storage.

For example, someone may want to run a one-off SQL query wrapped up in a script just to glean some information from the database. run is perfect for this since we don't want to disrupt existing workers and prefer to spin up a one-off pod to do this query, which outputs to the user's terminal. Pairing this with --env-from makes it easy to hook into the ConfigMaps and Secrets which already contain the variables necessary to make the connection to the persistent storage.

We could simply exec into a pod that's part of a deployment and has all the environment variables already after orphaning it but we also don't want to endanger any work that pod is currently doing. Part of the problem is the nature of the work itself. It's much safer just to spin up a pod to do the one off task.

I definitely see your point about not wanting to clutter up the run command with lots of options. As @lev-kuznetsov mentioned earlier, the proper way to do this would be with overrides which reference the ConfigMaps and Secrets. Unfortunately this seems to be broken. Our workaround has been to use a version of kubectl built from this branch. I'm not sure how others are using k8s, but I think our use case is very common when running Rails applications which have extensive configuration set as environment variables.

@pwittrock (Member) commented Aug 10, 2017

@nurus Thanks for sticking with me. I like to really try to understand our users' use cases as much as possible to make sure we are taking them into account in a holistic manner.

Is it correct that the main application is managed through config (kubectl apply -f, I assume)? If there was a simple way to spin up a new Deployment with a single Pod and change the name, Pod labels, and replicas, would that solve your use case? One thing that I am thinking is that your proposed feature may make it easier to duplicate the original Pod, but it still relies on the correct command line flags being supplied.

How big of a deal is the "create" + "attach" in one command vs a "create" then "attach or exec" workflow? Adding a flag to run solves a limited set of use cases, whereas if it was added as a set command (set env-from) and piped together with run (kubectl run -o yaml --dry-run | set env-from ... | apply -f -), it could be used for other use cases (modifying existing workloads, combining with other kubectl create foo commands, etc.).
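As a concrete illustration of the piped workflow described above, here is a sketch. Note that `set env-from` did not exist as a subcommand at the time; later kubectl releases offer the similar `kubectl set env --from=configmap/NAME`, which is what this sketch uses. The names my-task, busybox, and my-config are placeholders, and running this requires access to a live cluster:

```shell
# Generate the pod spec without creating it, inject env vars sourced from a
# ConfigMap, then apply the result. `kubectl set env --from` is the
# later-added equivalent of the proposed `set env-from` command.
kubectl run my-task --image=busybox --restart=Never --dry-run -o yaml \
  | kubectl set env --from=configmap/my-config --local -f - -o yaml \
  | kubectl apply -f -
```

The advantage of this composition over a run-only flag is that the middle stage also works against existing workloads (`kubectl set env --from=configmap/my-config deployment/my-app`) and against objects produced by other `kubectl create` commands.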

I'd like to talk through this more. The deadline for the 1.8 code freeze is just a couple weeks away. How important is it to you that something lands in 1.8 vs 1.9?

@nurus commented Aug 14, 2017

> Is it correct that the main application is managed through config (kubectl apply -f I assume)?

Yes, the application and its workers are deployed this way.

> If there was a simple way to spin up a new Deployment with a single Pod and change the name, Pod labels, replica, would that solve your use case?

Definitely. If we can spin up a pod from the same deployment and strip it of the labels which are used as selectors, this will work. If we're debugging a web worker, we may not want it to be available to the service it would normally belong to.

> How big of a deal is the "create" + "attach" in one command vs a "create" then "attach or exec" workflow?

Not a huge deal. We really like using run for pod self-termination after the command completes, but it seems like this may be possible with piping and your proposed solution.

> How important is it to you that something lands in 1.8 vs 1.9?

It's not critical that this lands in 1.8.

Thanks for your help on this @pwittrock.

@pwittrock (Member) commented Aug 15, 2017

@nurus

Thanks for the insight. I will continue to work with you on this, but won't plan on it landing in 1.8 given the approaching code freeze.

One thing we have been talking about is a utility to easily apply basic transformations to a config file and emit the transformed result - e.g. something that can rename and relabel the objects in your config files. We are prototyping this now, and it could be a helpful tool - e.g. run transform to change the labels and name on your deployment config, pipe to apply, profit. This approach would allow you to capture things that might not be in the env piece of your config - e.g. initializers or beta fields driven off of annotations, the command and args, resource limits, etc.

@nurus commented Aug 17, 2017

@pwittrock that sounds useful, and like a more complete solution to replicating the environment of a deployment. I'll stay tuned.

@lev-kuznetsov commented Aug 18, 2017

Would these transforms work on kubectl run? Because I really don't have a deployment to copy from.

@toddgardner commented Oct 12, 2017

Is there a decision on this change? We also ran into this not working, and have a similar use-case.

We want to run one-off Jobs (like Django's "python manage.py migrate" or "python manage.py shell") with the same environment variables as a running Deployment. Since this configuration is shared between multiple Deployments (like the web server and background workers), it uses envFrom in the deployment, and we'd like to use just that in the Job created by kubectl run.

I don't think "run" in our use case is really just "create/apply" then "attach"; it's:

  • create/apply
  • wait for pod to be ready&running
  • attach
  • wait for final Job status
  • set exit status appropriately

So you'd need more than just "set" to replicate the functionality; the "wait conditions" #1899 request would also be needed, plus some shell scripting around understanding status. I made a half-hearted stab at implementing that before doing the workaround mentioned above (some scripting to transform config maps and secrets to equivalent --env=FOO=bar invocations). This obviously has several drawbacks, like a particularly lengthy command line and embedding secrets into a Job spec directly.
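The ConfigMap-to-flags workaround mentioned above could be scripted along these lines; this is a sketch, not the commenter's actual script. The ConfigMap JSON is inlined here for illustration (in real use it would come from `kubectl get configmap my-config -o json`), and the key names are invented:

```shell
# Turn a ConfigMap's data into --env=KEY=value flags for kubectl run.
# In practice the JSON would come from: kubectl get configmap my-config -o json
cm='{"data":{"DB_HOST":"db.example.com","DB_PORT":"5432"}}'

# jq walks .data and emits one --env flag per key/value pair.
env_flags=$(printf '%s' "$cm" \
  | jq -r '.data | to_entries | map("--env=\(.key)=\(.value)") | join(" ")')

echo "$env_flags"
# --env=DB_HOST=db.example.com --env=DB_PORT=5432
```

Note the drawback toddgardner points out: secret values end up verbatim on the command line and in the resulting Job spec.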

@nehresma commented Oct 23, 2017

I landed here with the same use case as @toddgardner -- running migrations. I'll also probably fall back to scripting to set the environment variables, but it would sure be handy to have envFrom support here.

@kvitajakub commented Nov 3, 2017

Same use case/feature request here. We want to run a separate pod with a replicated environment and interactive shell/console access that is removed on finish.

@nurus commented Nov 7, 2017

I've gotten envFrom to work with --overrides on kubectl v1.7.5.
Here's an example of the override json:

{
  "spec": {
    "containers": [
      {
        "name": "podname",
        "image": "image",
        "args": ["command"],
        "envFrom": [
          { "configMapRef": { "name": "configmap" } },
          { "secretRef": { "name": "secrets" } }
        ]
      }
    ]
  }
}

The final kubectl command looks like this:

kubectl run -i --rm --tty podname --image image  \
--namespace=namespace --restart=Never \
--overrides='{"spec": {"containers": [{"image": "image", "args": ["command"], "name": "podname", "envFrom": [{"configMapRef": {"name": "configmap"}}, {"secretRef": {"name": "secrets"}}]}]}}'

There's some redundancy needed to pass validation.

This should meet the use case which brought most of us here. I ended up wrapping it up in a script to make it easier to form the command.
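A wrapper script along those lines might look like this; it is a sketch, not nurus's actual script, and the function name `build_overrides` and all resource names are invented for illustration:

```shell
# Build the --overrides JSON for kubectl run from a few parameters, so the
# long command does not have to be retyped each time. The emitted JSON
# mirrors the override shown above: pod name, image, args placeholder
# omitted, plus envFrom entries for one ConfigMap and one Secret.
build_overrides() {
  local name=$1 image=$2 configmap=$3 secret=$4
  printf '{"spec":{"containers":[{"name":"%s","image":"%s","envFrom":[{"configMapRef":{"name":"%s"}},{"secretRef":{"name":"%s"}}]}]}}' \
    "$name" "$image" "$configmap" "$secret"
}

# Print the generated overrides; against a real cluster it would be used as:
#   kubectl run -i --rm --tty podname --image image --restart=Never \
#     --overrides="$(build_overrides podname image configmap secrets)"
build_overrides podname image configmap secrets
```

Keeping the JSON in one printf template also avoids quoting mistakes when the override is pasted inline.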

@kvitajakub commented Nov 8, 2017

@nurus Thank you, that is definitely helpful. I managed to get the environment there; the only problem I have is that I can't open a console. There is no output; kubectl finishes and the pod is not listed anywhere.

kubectl run -it --rm ubuntu --overrides='
{
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu",
        "args": ["bash"],
        "envFrom": [
          {
            "configMapRef": {
              "name": "configmap-env"
            }
          }
        ]
      }
    ]
  }
}
'  --image=ubuntu --restart=Never

Given that the configmap-env exists, am I missing something?

@nurus commented Nov 8, 2017

@kvitajakub it seems anything that isn't explicitly specified in the container spec reverts to its default. For an interactive session, add these fields to the container spec:

"stdin": true,
"stdinOnce": true,
"tty": true

@kvitajakub commented Nov 8, 2017

@nurus Thank you, it works!
The final command that gets a shell with the env vars is:

kubectl run -it --rm ubuntu --image=ubuntu --restart=Never --overrides='
{
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu",
        "args": ["bash"],
        "envFrom": [
          {
            "configMapRef": {
              "name": "configmap-env"
            }
          }
        ],
        "stdin": true,
        "stdinOnce": true,
        "tty": true
      }
    ]
  }
}'

@k8s-merge-robot (Contributor) commented Dec 9, 2017

This PR hasn't been active in 30 days. It will be closed in 59 days (Feb 6, 2018).

cc @alexandercampbell @eparis @mengqiy @pwittrock @shiywang @xingzhou

You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days.

@fejta-bot commented Mar 9, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot commented Apr 8, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@javanthropus commented Apr 17, 2018

This issue seems to have gone stale, but we have the same need where I work. However, rather than adding another option to the run command for each pod option now and in the future, we would be completely satisfied if there were an equivalent of the apply command that waited on the container in the pod just like the run command does.

In other words, given a full pod manifest (pod.json), the run command could work like this:

kubectl run -it --rm --restart=Never --filename pod.json

You can emulate this now by adding a dummy name, a dummy --image option, and using the --overrides option:

kubectl run -it --rm --restart=Never DUMMY --image dummy --overrides="$(cat pod.json)"

This seems a little backwards logically though because we want the manifest to be the base definition, where the additional command line options provide overrides. It's also a mess to have to pass the entire manifest on the command line.

Going further, as with the create and apply commands, it would be great if the manifest could be supplied in either JSON or YAML formats.

@fejta-bot commented May 17, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@guizmaii commented Jan 7, 2019

Why is this PR closed when this addition is so useful?
