Commit 53a1e41 (from getambassador.io)

Merge pull request #780 from datawire/donnyyung/tp-migration-docs

add legacy Telepresence -> Telepresence migration docs

Donny Yung committed May 17, 2021
2 parents a77a725 + 74be740, commit 53a1e41
Showing 5 changed files with 122 additions and 44 deletions.
2 changes: 2 additions & 0 deletions doc-links.yml
@@ -14,6 +14,8 @@
link: /install/
- title: Upgrade
link: /install/upgrade/
- title: Migrate from legacy Telepresence
link: /install/migrate-from-legacy/
- title: Core concepts
items:
- title: The changing development workflow
9 changes: 8 additions & 1 deletion install/index.md
@@ -1,5 +1,6 @@
import Alert from '@material-ui/lab/Alert';
import QSTabs from '../quick-start/qs-tabs'
import OldVersionTabs from './old-version-tabs'

# Install

@@ -9,4 +10,10 @@ Install Telepresence by running the commands below for your OS.

## <img class="os-logo" src="../images/logo.png"/> What's Next?

Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.

## Installing older versions of Telepresence

Use these URLs to download an older version for your OS, replacing `x.y.z` with the version you want.

<OldVersionTabs/>
99 changes: 99 additions & 0 deletions install/migrate-from-legacy.md
@@ -0,0 +1,99 @@
# Migrate from legacy Telepresence

Telepresence (formerly referred to as Telepresence 2, the current major version) has different mechanics and requires a different mental model from [legacy Telepresence](https://www.telepresence.io/) when working with local instances of your services.

In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".

In practice, this mechanism, while simple in concept, had some challenges: losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods took time.

Telepresence introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses these problems. With Telepresence, a sidecar proxy is injected into the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is involved. By using the proxy approach, we can also do selective intercepts, where certain types of traffic get routed to the service while other traffic gets routed to your laptop/workstation.
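
A quick way to see this in action: after starting an intercept, the injected agent appears as an extra container named `traffic-agent` in the intercepted pod. The `app=myserver` label below is an assumption about how your workload is labeled; adjust the selector to match your deployment:

```
$ kubectl get pods -l app=myserver -o jsonpath='{.items[*].spec.containers[*].name}'
myserver traffic-agent
```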

Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.

## Using legacy Telepresence commands

First, please ensure you've [installed Telepresence](../).

Telepresence can translate common legacy Telepresence commands into native Telepresence commands.
So if you want to get started quickly, you can use the same legacy Telepresence commands you are used
to with the Telepresence binary.

For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an
intercept in Telepresence) with a Python server. You could run the following command:

```
$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
< help text >
Legacy telepresence command used
Command roughly translates to the following in Telepresence:
telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
running...
Connecting to traffic manager...
Connected to context <your k8s cluster>
Using Deployment myserver
intercepted
Intercept name : myserver
State : ACTIVE
Workload kind : Deployment
Destination : 127.0.0.1:9090
Intercepting : all TCP connections
Serving HTTP on :: port 9090 (http://[::]:9090/) ...
```

Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
run it. So you can get started with Telepresence today using the commands you are used to,
and it will help you learn the Telepresence syntax.

### Legacy command mapping

Below is the mapping of legacy Telepresence to Telepresence commands (where they exist and
are supported).

| Legacy Telepresence Command                    | Telepresence Command                       |
|------------------------------------------------|--------------------------------------------|
| --swap-deployment $workload                    | intercept $workload                        |
| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]    |
| --swap-deployment $workload --run-shell        | intercept $workload -- bash                |
| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd                |
| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd   |
| --run-shell                                    | connect -- bash                            |
| --run $cmd                                     | connect -- $cmd                            |
| --env-file, --env-json                         | --env-file, --env-json (haven't changed)   |
| --context, --namespace                         | --context, --namespace (haven't changed)   |
| --mount, --docker-mount                        | --mount, --docker-mount (haven't changed)  |
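
For instance, applying the mappings above to the `myserver` deployment from the earlier example, a legacy swap into a shell and its native equivalent look like this:

```
# Legacy Telepresence command
$ telepresence --swap-deployment myserver --run-shell

# Native Telepresence equivalent
$ telepresence intercept myserver -- bash
```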

### Legacy Telepresence command limitations

Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
aren't yet supported. For some well-known flags, such as `--method`,
Telepresence will include output letting you know that the flag has gone away. For flags that
Telepresence can't translate yet, it will let you know that the flag is "unsupported".

If Telepresence is missing any flags or functionality integral to your usage, please let us know
by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](https://a8r.io/Slack)!

## Telepresence changes

Telepresence installs a Traffic Manager in the cluster and injects Traffic Agents alongside workloads when performing intercepts (including
with `--swap-deployment`), and leaves them in place afterward. If you use `--swap-deployment`, the intercept ends once the process
dies, but the agent remains. There's no harm in leaving the agent running alongside your service, but when you
want to remove it from the cluster, the following Telepresence command will help:
```
$ telepresence uninstall --help
Uninstall telepresence agents and manager

Usage:
  telepresence uninstall [flags] { --agent <agents...> | --all-agents | --everything }

Flags:
  -d, --agent              uninstall intercept agent on specific deployments
  -a, --all-agents         uninstall intercept agent on all deployments
  -e, --everything         uninstall agents and the traffic manager
  -h, --help               help for uninstall
  -n, --namespace string   If present, the namespace scope for this CLI request
```
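
For example, to remove just the agent that was injected into the `myserver` deployment from the example above, while leaving the Traffic Manager in place:

```
$ telepresence uninstall --agent myserver
```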

Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at
our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
32 changes: 1 addition & 31 deletions install/upgrade.md
@@ -3,40 +3,10 @@ description: "How to upgrade your installation of Telepresence and install previ
---

import QSTabs from '../quick-start/qs-tabs'
import OldVersionTabs from './old-version-tabs'

# Upgrade

<div class="docs-article-toc">
<h3>Contents</h3>

* [Upgrade Process](#upgrade-process)
* [Installing Older Versions of Telepresence](#installing-older-versions-of-telepresence)
* [Migrating from Telepresence 1 to Telepresence 2](#migrating-from-telepresence-1-to-telepresence-2)

</div>

## Upgrade process
# Upgrade Process
The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.

<QSTabs/>

After upgrading your CLI, the Traffic Manager **must be uninstalled** from your cluster. This can be done using `telepresence uninstall --everything` or by `kubectl delete svc,deploy -n ambassador traffic-manager`. The next time you run a `telepresence` command it will deploy an upgraded Traffic Manager.
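
For reference, the two removal options described above are:

```
# Option 1: have Telepresence remove everything it installed
$ telepresence uninstall --everything

# Option 2: delete the Traffic Manager resources directly
$ kubectl delete svc,deploy -n ambassador traffic-manager
```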

## Installing older versions of Telepresence

Use these URLs to download an older version for your OS, replacing `x.x.x` with the version you want.

<OldVersionTabs/>

## Migrating from Telepresence 1 to Telepresence 2

Telepresence 2 (the current major version) has different mechanics and requires a different mental model from [Telepresence 1](https://www.telepresence.io/) when working with local instances of your services.

In Telepresence 1, a pod running a service is swapped with a pod running the Telepresence proxy. This proxy receives traffic intended for the service, and sends the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".

In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.

Telepresence 2 introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses this problem. With Telepresence 2, a sidecar proxy is injected onto the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is used. By using the proxy approach, we can also do selective intercepts, where certain types of traffic get routed to the service while other traffic gets routed to your laptop/workstation.

Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
24 changes: 12 additions & 12 deletions reference/rbac.md
@@ -1,10 +1,10 @@
import Alert from '@material-ui/lab/Alert';

# Telepresence RBAC
The intention of this document is to provide a template for securing and limiting the permissions of Telepresence 2.
This documentation will not cover the full extent of permissions necessary to administrate Telepresence 2 components in a cluster. [Telepresence administration](/products/telepresence/) requires permissions for creating Service Accounts, ClusterRoles and ClusterRoleBindings, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator.
The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
This documentation will not cover the full extent of permissions necessary to administrate Telepresence components in a cluster. [Telepresence administration](/products/telepresence/) requires permissions for creating Service Accounts, ClusterRoles and ClusterRoleBindings, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator.

There are two general categories for cluster permissions with respect to Telepresence 2. There are RBAC settings for a User and for an Administrator described above. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
There are two general categories for cluster permissions with respect to Telepresence. There are RBAC settings for a User and for an Administrator described above. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.

In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes which is outside of the scope of the document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.

@@ -15,7 +15,7 @@ In addition to the above, there is also a consideration of how to manage Users a

## Editing your kubeconfig

This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp2-user. This can be replaced by any value (i.e. John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in your current directory that looks like the following:​
This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (i.e. John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in your current directory that looks like the following:​

```yaml
apiVersion: v1
@@ -28,9 +28,9 @@ contexts:
- name: my-context
context:
cluster: my-cluster # Must match the name field in the clusters config
user: tp2-user
user: tp-user
users:
- name: tp2-user # Must match the name of the Service Account created by the cluster admin
- name: tp-user # Must match the name of the Service Account created by the cluster admin
user:
token: <service-account-token> # See note below
```
@@ -50,7 +50,7 @@ To allow users to make intercepts across all namespaces, but with more limited `
apiVersion: v1
kind: ServiceAccount
metadata:
name: tp2-user # Update value for appropriate value
name: tp-user # Update value for appropriate value
namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
@@ -92,7 +92,7 @@ kind: ClusterRoleBinding
metadata:
name: telepresence-rolebinding
subjects:
- name: tp2-user
- name: tp-user
kind: ServiceAccount
namespace: ambassador
roleRef:
@@ -112,7 +112,7 @@ RBAC for multi-tenant scenarios where multiple dev teams are sharing a single cl
apiVersion: v1
kind: ServiceAccount
metadata:
name: tp2-user # Update value for appropriate user name
name: tp-user # Update value for appropriate user name
namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
kind: ClusterRole
@@ -152,7 +152,7 @@ metadata:
namespace: ambassador
subjects:
- kind: ServiceAccount
name: tp2-user # Should be the same as metadata.name of above ServiceAccount
name: tp-user # Should be the same as metadata.name of above ServiceAccount
namespace: ambassador
roleRef:
kind: ClusterRole
@@ -166,7 +166,7 @@ metadata:
namespace: test # Update "test" for appropriate namespace to be intercepted
subjects:
- kind: ServiceAccount
name: tp2-user # Should be the same as metadata.name of above ServiceAccount
name: tp-user # Should be the same as metadata.name of above ServiceAccount
namespace: ambassador
roleRef:
kind: ClusterRole
@@ -190,7 +190,7 @@ metadata:
name: telepresence-namespace-binding
subjects:
- kind: ServiceAccount
name: tp2-user # Should be the same as metadata.name of above ServiceAccount
name: tp-user # Should be the same as metadata.name of above ServiceAccount
namespace: ambassador
roleRef:
kind: ClusterRole
