Option to parse existing Deployment object to generate the one for telepresence #9

Closed
itamarst opened this Issue Mar 6, 2017 · 4 comments

itamarst commented Mar 6, 2017

Given an existing Deployment, it should be possible to extract environment variables and the like for the Telepresence Deployment.
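
For illustration, a minimal sketch of what that extraction could look like using kubectl's JSON output; the helper name get_container_env is hypothetical, not actual Telepresence code:

    import json
    import subprocess

    def get_container_env(deployment, container):
        # Fetch the live Deployment as JSON from the cluster.
        raw = subprocess.check_output(
            ["kubectl", "get", "deployment", deployment, "-o", "json"])
        spec = json.loads(raw.decode("utf-8"))
        for c in spec["spec"]["template"]["spec"]["containers"]:
            if c["name"] == container:
                # "env" is optional in a container spec.
                return {e["name"]: e.get("value") for e in c.get("env", [])}
        raise KeyError(container)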

@itamarst itamarst added the enhancement label Mar 6, 2017

itamarst commented Mar 15, 2017

Jean-Paul expressed a preference for doing this at the level of config files, rather than talking to the Kubernetes cluster, since it gives more visibility into how things are done.
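
As a sketch of that config-file variant, the same extraction could read a local manifest instead of the cluster (assumes PyYAML; the path and helper name are illustrative):

    import yaml

    def env_from_manifest(path, container):
        # Parse the Deployment from a YAML file on disk.
        with open(path) as f:
            deployment = yaml.safe_load(f)
        for c in deployment["spec"]["template"]["spec"]["containers"]:
            if c["name"] == container:
                return {e["name"]: e.get("value") for e in c.get("env", [])}
        raise KeyError(container)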

exarkun commented Mar 22, 2017

An idea in the opposite direction might be to have telepresence actually edit the existing Deployment for the user. A command line:

$ telepresence --deployment foo --container bar --new-shell

could patch foo so that its bar container uses the telepresence image and then start a shell in the proxied environment for that container. When the shell exits, telepresence could even roll the deployment back.

That seems kinda snazzy to me, though it may be sufficiently automagical that there are some unwanted, obscure corner cases or side effects.
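
A rough sketch of how that edit-and-revert flow could be driven with stock kubectl subcommands (the proxy image name and the bare bash invocation are stand-ins, not what telepresence would actually run):

    import subprocess

    def swap_and_shell(deployment, container):
        # Point the chosen container at the proxy image; this creates
        # a new Deployment revision.
        subprocess.check_call(
            ["kubectl", "set", "image", "deployment/" + deployment,
             container + "=datawire/telepresence-k8s"])
        try:
            subprocess.check_call(["bash"])  # stand-in for the proxied shell
        finally:
            # Roll back to the previous revision when the shell exits.
            subprocess.check_call(
                ["kubectl", "rollout", "undo", "deployment/" + deployment])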

itamarst commented May 2, 2017

Current plan:

$ telepresence --swap-deployment existingservice --run-shell

This swaps the existing Deployment's pod out for the Telepresence proxy, and reverts the change on shutdown.

If you have more than one container:

$ telepresence --swap-deployment existingservice:containername --run-shell

The swap is done by getting the kubectl get deployment --export version of the JSON, then switching replicas to 1 and the image to our image.
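
A minimal sketch of that swap, assuming the exported JSON has already been loaded into a dict (swap_out and proxy_image are illustrative names):

    def swap_out(deployment_json, proxy_image, container_name=None):
        # Run a single proxy pod instead of the original replica count.
        deployment_json["spec"]["replicas"] = 1
        for c in deployment_json["spec"]["template"]["spec"]["containers"]:
            # Default to the first container if none was named.
            if container_name is None or c["name"] == container_name:
                c["image"] = proxy_image
                break
        return deployment_json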

My first thought was to do the revert using the rollout mechanism: at the beginning of the process the current revision is recorded, and kubectl rollout undo is used to roll back. Deployments created with both kubectl apply and kubectl create appear to start with a revision, so presumably everything will have a revision. However, this doesn't appear to undo changes to the number of replicas.

This suggests a different mechanism: the same logic that does the swap out could be used for the swap back; we just need to remember the old image name and replica count until shutdown.
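
A sketch of that remember-and-restore approach, recording the old image and replica count up front and reapplying them at shutdown via kubectl set image and kubectl scale (helper names are illustrative):

    import json
    import subprocess

    def record_original(deployment, container):
        # Capture the values we will need to put back later.
        raw = subprocess.check_output(
            ["kubectl", "get", "deployment", deployment, "-o", "json"])
        spec = json.loads(raw.decode("utf-8"))
        image = next(c["image"]
                     for c in spec["spec"]["template"]["spec"]["containers"]
                     if c["name"] == container)
        return image, spec["spec"]["replicas"]

    def restore(deployment, container, image, replicas):
        # Reapply the remembered image and replica count on shutdown.
        subprocess.check_call(
            ["kubectl", "set", "image", "deployment/" + deployment,
             container + "=" + image])
        subprocess.check_call(
            ["kubectl", "scale", "deployment/" + deployment,
             "--replicas=" + str(replicas)])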

@itamarst itamarst added this to Next in Telepresence May 2, 2017

itamarst commented May 18, 2017

Remaining work for swap-deployment branch:

  • Get all tests passing (hopefully just dealing with fiddly bits in Runner.check_call).
  • Test against OpenShift, make sure it works.
  • Make sure no k8s resources are leaked by telepresence or the test suite.
  • Update documentation and asciinema video (that could be done in a separate branch, though).

@itamarst itamarst moved this from Next to In progress in Telepresence May 23, 2017

itamarst added a commit that referenced this issue May 23, 2017

Merge pull request #152 from datawire/swap-deployment
Option for swapping deployment.

Fixes #9.

@itamarst itamarst moved this from In progress to Done in Telepresence May 23, 2017

@itamarst itamarst removed this from Done in Telepresence Jun 22, 2017
