
Add new option "env-from" for kubectl run command. #48684

Closed · wants to merge 1 commit

Conversation

Contributor

@xingzhou xingzhou commented Jul 10, 2017

Proposes adding a new "env-from" option to the kubectl run command so that users can configure ConfigMap or Secret environment variables from the run command.

Fixes #48361

Release note:

None
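The flag's exact value syntax isn't shown in this thread; judging from the review snippets below, each --env-from value carries a refType (ConfigMapRef or SecretRef) plus a name. A purely hypothetical invocation, with a guessed separator, might look like:

# Hypothetical usage sketch only; the PR's real syntax may differ.
kubectl run mypod --image=myimage --restart=Never \
  --env-from=ConfigMapRef:my-configmap --env-from=SecretRef:my-secrets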

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jul 10, 2017
@k8s-ci-robot (Contributor)

Hi @xingzhou. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jul 10, 2017
@k8s-github-robot k8s-github-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. release-note-none Denotes a PR that doesn't merit a release note. labels Jul 10, 2017
@pwittrock (Member)

/unassign

@pwittrock (Member)

/assign @shiywang

@pwittrock (Member)

/assign @mengqiy

@pwittrock (Member)

/assign @alexandercampbell

@pwittrock (Member)

@alexandercampbell FYI, this should probably wait until your refactor is finished.

			},
		},
	}
	envFroms = append(envFroms, envFrom)
@shiywang (Contributor) commented Jul 11, 2017

This line and line 952 are the same; could you move it out of the code block and merge the two into one?

if len(refType) == 0 || len(name) == 0 {
	return nil, fmt.Errorf("invalid envFrom: %v", envFrom)
}
if refType == "ConfigMapRef" {
@shiywang (Contributor) commented Jul 11, 2017

I would prefer a switch instead of if / else if / else.

@xingzhou (Contributor, Author)

OK, let me update the code.

@shiywang (Contributor)

/ok-to-test

@k8s-ci-robot k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jul 11, 2017
@alexandercampbell (Contributor) left a comment

Thanks!

@@ -117,6 +123,7 @@ func addRunFlags(cmd *cobra.Command) {
	cmd.Flags().Bool("rm", false, "If true, delete resources created in this command for attached containers.")
	cmd.Flags().String("overrides", "", i18n.T("An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field."))
	cmd.Flags().StringSlice("env", []string{}, "Environment variables to set in the container")
	cmd.Flags().StringSlice("envFrom", []string{}, "Environment variables to set from ConfigMap or Secret in the container")
Contributor

This flag should be named "env-from" to fit with our standard.
I can find the specific documentation requiring this if you want.

Contributor Author

Yep, let me correct it.

envFromStrings := cmdutil.GetFlagStringSlice(cmd, "envFrom")
if len(envFromStrings) != 2 || !reflect.DeepEqual(envFromStrings, test.expected) {
	t.Errorf("expected: %s, saw: %s", test.expected, envFromStrings)
}
Contributor

This test doesn't test any of our code.

Contributor Author

This just tests reading the flag from cmd. I saw there is a similar test for env, so I added this one; I can remove it.

	}
	delete(genericParams, "envFrom")
} else {
	return nil, fmt.Errorf("expected []string, found: %v", envFromStrings)
Contributor

Why not %T?

Contributor Author

Will update the code then.

@k8s-github-robot k8s-github-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 12, 2017
@k8s-github-robot k8s-github-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 12, 2017
@xingzhou xingzhou changed the title Add new option "envFrom" for kubectl run command. Add new option "env-from" for kubectl run command. Jul 12, 2017
@xingzhou (Contributor, Author)

@alexandercampbell and @shiywang, I updated the patch according to your comments. PTAL, thx!

	continue
}
if !reflect.DeepEqual(envFroms, test.expected) {
	t.Errorf("\nexpected:\n%#v\nsaw:\n%#v (%s)", test.expected, envFroms, test.test)
Contributor

Nice, I like this test style.

@alexandercampbell (Contributor)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 13, 2017
@pwittrock (Member)

@nurus Thanks for the explanation. Are you using run to create a job or a deployment? Is this primarily for development, debugging, or something else? Are you doing this in the same cluster? If so, have you considered orphaning a Pod and using that?

We definitely want to avoid supporting everything in the API as a flag, but want to support the most common / important use cases.

@lev-kuznetsov

Why are you using kubectl run instead of config?

It's not possible to use kubectl run -f file.yaml as far as I'm aware. What should I be doing instead?

@nurus commented Aug 8, 2017

No problem @pwittrock. We use run for development and diagnostics. We wouldn't use it to run a highly available application. Our applications usually consist of a web server and workers of various types, all deployed using Deployment configuration files to ensure high availability.

The applications tend to be monolithic and require the entire codebase to be baked into the image. The codebase contains methods and scripts related to the application, usually for interacting with persistent storage (database, Elasticsearch, etc.). The code itself relies on environment variables as parameters to connect to that persistent storage.

For example, someone may want to run a one-off SQL query wrapped in a script just to glean some information from the database. run is perfect for this since we don't want to disrupt existing workers and prefer to spin up a one-off pod to do the query, which outputs to the user's terminal. Pairing this with --env-from makes it easy to hook into the ConfigMaps and Secrets which already contain the variables necessary to connect to the persistent storage.

We could orphan a pod that's part of a deployment and simply exec into it, since it already has all the environment variables, but we don't want to endanger any work that pod is currently doing. Part of the problem is the nature of the work itself; it's much safer just to spin up a fresh pod for the one-off task.

I definitely see your point about not wanting to clutter up the run command with lots of options. As @lev-kuznetsov mentioned earlier, the proper way to do this would be with overrides that reference the ConfigMaps and Secrets. Unfortunately, this seems to be broken. Our workaround has been to use a version of kubectl built from this branch. I'm not sure how others are using k8s, but I think our use case is very common when running Rails applications, which have extensive configuration set as environment variables.

@pwittrock (Member)

@nurus Thanks for sticking with me. I like to really try to understand our users' use cases as much as possible to make sure we are taking them into account in a holistic manner.

Is it correct that the main application is managed through config (kubectl apply -f, I assume)? If there were a simple way to spin up a new Deployment with a single Pod and change the name, Pod labels, and replica count, would that solve your use case? One thing I am thinking is that your proposed feature may make it easier to duplicate the original Pod, but it still relies on the correct command-line flags being supplied.

How big of a deal is the "create" + "attach" in one command vs. a "create" then "attach or exec" workflow? Adding a flag to run solves a limited set of use cases, whereas if it were added as a set command (set env-from) and piped together with run (kubectl run -o yaml --dry-run | set env-from ... | apply -f -), it could be used for other use cases as well (modifying existing workloads, combining with other kubectl create foo commands, etc.).
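For reference, newer kubectl releases ship a kubectl set env command whose --from flag imports a ConfigMap's or Secret's keys, which comes close to the pipeline sketched above (the set env-from subcommand itself is hypothetical). A rough sketch with placeholder names, assuming a ConfigMap called my-config exists (newer kubectl spells the flag --dry-run=client):

# Generate a pod locally, inject env vars from a ConfigMap, then apply.
kubectl run mypod --image=myimage --restart=Never -o yaml --dry-run \
  | kubectl set env --local -f - --from=configmap/my-config -o yaml \
  | kubectl apply -f -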

I'd like to talk through this more. The deadline for the 1.8 code freeze is just a couple of weeks away. How important is it to you that something lands in 1.8 vs. 1.9?

@nurus commented Aug 14, 2017

Is it correct that the main application is managed through config (kubectl apply -f I assume)?

Yes, the application and its workers are deployed this way.

If there was a simple way to spin up a new Deployment with a single Pod and change the name, Pod labels, replica, would that solve your use case?

Definitely. If we can spin up a pod from the same Deployment and strip it of the labels used as selectors, this will work. If we're debugging a web worker, we may not want it to be available to the Service it would normally belong to.

How big of a deal is the "create" + "attach" in one command vs a "create" then "attach or exec" workflow?

Not a huge deal. We really like using run for pod self-termination after the command completes, but it seems like this may be possible with piping and your proposed solution.

How important to you that something lands in 1.8 vs 1.9?

It's not critical that this lands in 1.8.

Thanks for your help on this @pwittrock.

@pwittrock (Member)

@nurus

Thanks for the insight. I will continue to work with you on this, but won't plan on it landing in 1.8 given the approaching code freeze.

One thing we have been talking about is a utility to easily apply basic transformations to a config file and emit the transformed result - e.g. something that can rename and relabel the objects in your config files. We are prototyping this now, and it could be a helpful tool: run the transform to change the labels and name on your Deployment config, pipe to apply, profit. This approach would allow you to capture things that might not be in the env piece of your config - e.g. initializers or beta fields driven off of annotations, the command and args, resource limits, etc.

@nurus commented Aug 17, 2017

@pwittrock that sounds useful and a more complete solution to replicating the environment of a deployment. I'll stay tuned.

@lev-kuznetsov

Would these transforms work on kubectl run? Because I really don't have a deployment to copy from.

@toddgardner

Is there a decision on this change? We also ran into this not working, and have a similar use-case.

We want to run one-off Jobs (like Django's "python manage.py migrate" or "python manage.py shell") with the same environment variables as a running Deployment. Since this environment is shared between multiple Deployments (like the web server and background workers), it uses envFrom in the Deployment, and we'd like to use just that in the Job created by kubectl run.

I don't think "run" in our use case is really just "create/apply" then "attach"; it's:

  • create/apply
  • wait for pod to be ready&running
  • attach
  • wait for final Job status
  • set exit status appropriately

So you'd need more than just "set" to replicate the functionality; the "wait conditions" request (#1899) would also be needed, plus some shell scripting around understanding status. I made a half-hearted stab at implementing that before settling on the workaround mentioned above (some scripting to transform ConfigMaps and Secrets into equivalent --env=FOO=bar invocations; see the sketch below). This obviously has several drawbacks, like a particularly lengthy command line and embedding secrets into a Job spec directly.
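A minimal sketch of that kind of transformation, assuming jq is installed and a ConfigMap named app-config exists (all names here are placeholders, and values containing spaces or shell metacharacters would need more careful quoting):

# Expand a ConfigMap's keys into --env flags for a one-off kubectl run.
ENV_FLAGS=$(kubectl get configmap app-config -o json \
  | jq -r '.data | to_entries | map("--env=\(.key)=\(.value)") | join(" ")')
kubectl run migrate --rm -i --restart=Never --image=myapp $ENV_FLAGS \
  -- python manage.py migrate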

@nehresma

I landed here with the same use case as @toddgardner -- running migrations. I'll also probably fall back to scripting to set the environment variables, but it would sure be handy to have envFrom support here.

@kvitajakub

Same use case/feature request here. We want to run a separate pod with a replicated environment and interactive shell/console access, removed on finish.

@nurus commented Nov 7, 2017

I've gotten envFrom to work with --overrides on kubectl v1.7.5.
Here's an example of the override json:

{
  "spec": {
    "containers": [
      {
        "name": "podname",
        "image": "image",
        "args": ["command"],
        "envFrom": [
          { "configMapRef": { "name": "configmap" } },
          { "secretRef": { "name": "secrets" } }
        ]
      }
    ]
  }
}

The final kubectl command looks like this:

kubectl run -i --rm --tty podname --image image  \
--namespace=namespace --restart=Never \
--overrides='{"spec": {"containers": [{"image": "image", "args": ["command"], "name": "podname", "envFrom": [{"configMapRef": {"name": "configmap"}}, {"secretRef": {"name": "secrets"}}]}]}}'

There's some redundancy needed to pass validation.

This should meet the use case which brought most of us here. I ended up wrapping it up in a script to make it easier to form the command.

@kvitajakub

@nurus Thank you, that is definitely helpful. I managed to get the environment there; the only problem is that I can't open a console. There is no output, kubectl finishes, and the pod is not listed anywhere.

kubectl run -it --rm ubuntu --overrides='
{
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu",
        "args": ["bash"],
        "envFrom": [
          {
            "configMapRef": {
              "name": "configmap-env"
            }
          }
        ]
      }
    ]
  }
}
'  --image=ubuntu --restart=Never

Given that configmap-env exists, am I missing something?

@nurus commented Nov 8, 2017

@kvitajakub It seems anything that isn't explicitly specified in the container spec reverts to its default. For an interactive session, add these fields to the container spec:

                    "stdin": True,
                    "stdinOnce": True,
                    "tty": True

@kvitajakub commented Nov 8, 2017

@nurus Thank you, it works!
The final command that gets a shell with the env vars is:

kubectl run -it --rm ubuntu --image=ubuntu --restart=Never --overrides='
{
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu",
        "args": ["bash"],
        "envFrom": [
          {
            "configMapRef": {
              "name": "configmap-env"
            }
          }
        ],
        "stdin": true,
        "stdinOnce": true,
        "tty": true
      }
    ]
  }
}'

@k8s-github-robot

This PR hasn't been active in 30 days. It will be closed in 59 days (Feb 6, 2018).

cc @alexandercampbell @eparis @mengqiy @pwittrock @shiywang @xingzhou

You can add 'keep-open' label to prevent this from happening, or add a comment to keep it open another 90 days

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 9, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 8, 2018
@javanthropus

This issue seems to have gone stale, but we have the same need where I work. However, rather than adding another option to the run command for each pod option now and in the future, we would be completely satisfied if there were an equivalent of the apply command that waited on the container in the pod just like the run command does.

In other words, given a full pod manifest (pod.json), the run command could work like this:

kubectl run -it --rm --restart=Never --filename pod.json

You can emulate this now by adding a dummy name, a dummy --image option, and using the --overrides option:

kubectl run -it --rm --restart=Never DUMMY --image dummy --overrides="$(cat pod.json)"

This seems a little backwards logically though because we want the manifest to be the base definition, where the additional command line options provide overrides. It's also a mess to have to pass the entire manifest on the command line.

Going further, as with the create and apply commands, it would be great if the manifest could be supplied in either JSON or YAML formats.
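As a stopgap, a YAML manifest can be converted to JSON on the fly before being passed to --overrides. A sketch, assuming python3 with PyYAML is installed (pod.yaml is a placeholder name):

# Convert the YAML manifest to JSON inline and pass it as the override.
kubectl run -it --rm --restart=Never DUMMY --image dummy \
  --overrides="$(python3 -c 'import sys, yaml, json; print(json.dumps(yaml.safe_load(sys.stdin)))' < pod.yaml)"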

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@guizmaii commented Jan 7, 2019

Why is this PR closed when this addition is super useful?

@nwsparks commented May 21, 2020

This is a pretty useful addition; odd that it has been neglected and closed.

We have env and secret objects that are generated from Terraform, for things like RDS connections and such.

We have requirements that prevent data from leaving the environment, so in order to interact with and view the data we would like to launch a container inside the environment via kubectl run and attach to a shell. These are not intended to be "deployed" services, and they should exit when we are done with them, so the use case fits kubectl run. Being able to simply reference the Terraform-generated secret objects required to make the connections with this switch, as opposed to condensed JSON, would certainly be nice.

This seems like a simple and useful addition, so I am wondering what the resistance is to adding it.

@xingzhou any desire to reopen this and make another attempt at getting it added? Based on the feedback here from the community it looks like it would be a welcome addition.


Successfully merging this pull request may close these issues.

kubectl run overrides envFrom doesn't add env vars