envFrom/ConfigMap not mapping environment variables to Docker container #230

Closed
tristanpemble opened this Issue Jul 19, 2017 · 9 comments

@tristanpemble

tristanpemble commented Jul 19, 2017

I am using the --docker-run argument with --swap-deployment on Telepresence 0.60

My deployment's pod template uses envFrom:, which maps values from a ConfigMap into the environment.

The image running locally in Docker does not seem to receive the environment variables from this ConfigMap; it only receives the environment variables defined in the env: block of the template.

For example:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  EXAMPLE_ENVFROM: foobar
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: my-deployment
      labels:
        tier: whatever
    spec:
      containers:
        - name: alpine
          image: alpine
          command: ["/bin/sh", "-c", "env | grep EXAMPLE"]
          envFrom:
            - configMapRef:
                name: my-configmap
          env:
            - name: EXAMPLE_ENV
              value: foobar

λ kubectl apply -f ./test.yml
configmap "my-configmap" configured
deployment "my-deployment" configured
λ kubectl logs my-deployment-4135175423-3rzj5
EXAMPLE_ENVFROM=foobar
EXAMPLE_ENV=foobar
λ telepresence --swap-deployment my-deployment --docker-run alpine /bin/sh -c "env | grep EXAMPLE"
Volumes are rooted at $TELEPRESENCE_ROOT. See http://www.telepresence.io/howto/volumes.html for details.

EXAMPLE_ENV=foobar

@tristanpemble

tristanpemble commented Jul 19, 2017

OK, so I traced this to get_deployment_set_keys, which is used to whitelist which environment variables get copied. It's not aware of the ConfigMaps. I was going to attempt to fix this myself, but it doesn't seem like a simple fix. My impression is that it is going to require some work to retrieve the ConfigMaps in get_remote_info or somewhere similar.
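
To illustrate what I mean, I imagine resolving the envFrom references would look something like this (a rough sketch only, not actual Telepresence code; the helper name and structure are made up):

import json
import subprocess


def get_envfrom_configmap_keys(container_spec, namespace):
    # Hypothetical helper (not actual Telepresence code): collect the env var
    # names that the container's envFrom ConfigMaps would inject.
    keys = set()
    for source in container_spec.get("envFrom", []):
        ref = source.get("configMapRef")
        if ref is None:
            continue
        output = subprocess.check_output([
            "kubectl", "get", "configmap", ref["name"],
            "--namespace", namespace, "-o", "json",
        ])
        data = json.loads(output.decode("utf-8")).get("data", {})
        prefix = source.get("prefix", "")
        keys.update(prefix + key for key in data)
    return keys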

@itamarst

Contributor

itamarst commented Jul 19, 2017

Sorry you hit a problem, and thanks for the bug report! I'll take a look at fixing this later today.

@itamarst itamarst added the bug label Jul 19, 2017

@itamarst itamarst added this to In progress in Telepresence Jul 19, 2017

@itamarst

Contributor

itamarst commented Jul 19, 2017

Some options:

  1. Get the list of env variables from the ConfigMaps and Secrets indicated in envFrom, applying the prefix if one is specified.
  2. Retrieve the actual environment variables from the pod, instead of just the list of names.
  3. Maybe there is some way to get the actual list of env variables out of the pod?

Originally I used the 2nd approach, and abandoned it because I was getting shell env variables... but that was arguably due to a bad implementation on my part. It seems like calling env in the pod without a shell ought to give only the env variables that were explicitly set.

So:

  • Investigate option 3.
  • If not viable, see if kubectl exec <pod> env will work.
  • If not that, fall back to loading ConfigMaps.

@itamarst

Contributor

itamarst commented Jul 19, 2017

If we run kubectl exec <pod> env after we've swapped in the telepresence-k8s image, we have a consistent Alpine environment, and in that environment the only added env variables are HOME, PATH, and HOSTNAME. So we can blacklist those, and all the rest are the env variables we want to clone to the local process. I'll go with solution 2.
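
Roughly speaking, the idea is something like this (just a sketch to illustrate; names and structure here are not the actual implementation):

import subprocess

# Env vars the telepresence-k8s (Alpine) image adds on its own; everything
# else gets cloned to the local process.
SKIP_VARS = {"HOME", "PATH", "HOSTNAME"}


def get_pod_environment(pod_name, namespace):
    # Rough sketch only; the actual Telepresence code will differ.
    output = subprocess.check_output([
        "kubectl", "exec", pod_name, "--namespace", namespace, "--", "env",
    ])
    env = {}
    for line in output.decode("utf-8").splitlines():
        key, _, value = line.partition("=")
        if key and key not in SKIP_VARS:
            env[key] = value
    return env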

itamarst added a commit that referenced this issue Jul 19, 2017

Merge pull request #231 from datawire/envfrom-support
Better method for getting env variables.

Fixes #230.

@itamarst

Contributor

itamarst commented Jul 19, 2017

If all goes well, 0.61 will be released within 30 minutes to an hour - let me know if it fixes the problem.

Would also love to hear about your use case for Telepresence.

@tristanpemble

tristanpemble commented Jul 19, 2017

Sure -- we're transitioning to Docker/Kubernetes, starting with development environments first. We're initially focusing on running everything locally within Minikube (we don't have many services yet, so it all fits on our laptops at the moment). Most of our applications are PHP based, and I am planning to use the --docker-run functionality to run a php-fpm container while mounting the project directory into the container. The goal is to keep using our IDEs and editing code as we normally do, without drastically disrupting our workflow.

Eventually I know we will outgrow a local Minikube cluster and will have to move to a remote development environment, which is where I can tell Telepresence will shine. Hopefully, by adopting Telepresence early on, our team will have the flexibility to choose how and where to develop as our needs change.

I'm sure it will also find its way into our staging/production debugging workflows, once we move upward into those environments.

@tristanpemble

tristanpemble commented Jul 19, 2017

@itamarst it seems that the TravisCI tests failed, so the deployment did not execute.

@itamarst

Contributor

itamarst commented Jul 20, 2017

Restarted the build; one way or another, I'll get this out today.

@itamarst itamarst removed this from In progress in Telepresence Jul 20, 2017

@itamarst

Contributor

itamarst commented Jul 20, 2017

OK, 0.61 is out now.
