
Merge commit '7c7d805008d282b483c4724c2be2da698a58a22f'
LukeShu committed Jun 12, 2021
2 parents d0d443a + 2c1475d commit dd63682
Showing 6 changed files with 138 additions and 6 deletions.
4 changes: 4 additions & 0 deletions faqs.md
@@ -16,6 +16,10 @@ You can “intercept” any requests made to a target Kubernetes workload, and c

By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all participants who are authenticated and viewing the preview URL. All other viewers of the application entry point will not see the results of your changes.

**What operating systems does Telepresence work on?**

Telepresence currently works natively on macOS and Linux. We are working on a native Windows port, but in the meantime, Windows users can use Telepresence with WSL 2.

**What protocols can be intercepted by Telepresence?**

All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
2 changes: 1 addition & 1 deletion quick-start/qs-tabs.js
@@ -103,7 +103,7 @@ export default function SimpleTabs() {
<TabPanel value={value} index={2}>
  <div class="docs-hubspot-formwrapper">
    <p>
      Telepresence for Windows is coming soon! Sign up here to be notified when it is available.
      Telepresence for Windows is coming soon! Sign up here to be notified when it is available. Until then, Telepresence will work with WSL 2.
    </p>
    <div class="docs-hubspot-form">
      <HubspotForm
34 changes: 34 additions & 0 deletions reference/cluster-config.md
@@ -1,3 +1,5 @@
import Alert from '@material-ui/lab/Alert';

# Cluster-side configuration

For the most part, Telepresence doesn't require any special
@@ -118,3 +120,35 @@ run this command to generate the Cluster ID:
3. Save the output as a YAML file and apply it to your
cluster with `kubectl`. Once applied, you will be able to use selective intercepts with the
`--preview-url=false` flag (since use of preview URLs requires a connection to Ambassador Cloud).
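With that applied, a selective intercept that skips preview URL creation might look like the following sketch; `example-service` and the port are placeholders, while `--preview-url=false` is the flag described above:

```shell
# Intercept a placeholder workload without creating a preview URL,
# so no connection to Ambassador Cloud is required.
telepresence intercept example-service --port 8080 --preview-url=false
```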

## Mutating Webhook

By default, Telepresence updates the intercepted workload (Deployment, StatefulSet, ReplicaSet)
template to add the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
port definitions. If you use GitOps workflows (with tools like ArgoCD) to automatically update your
cluster so that it reflects the desired state from an external Git repository, this behavior can make
your workload out of sync with that external desired state.

To solve this issue, you can use Telepresence's alternative Mutating Webhook mechanism. Intercepted
workloads then stay untouched, and only the underlying pods are modified to inject the Traffic
Agent sidecar container and update the port definitions.

<Alert severity="info">
A current limitation of the Mutating Webhook mechanism is that the <code>targetPort</code> of your intercepted
Service needs to point to the <strong>name</strong> of a port on your container, not the port number itself.
</Alert>

Simply add the `telepresence.getambassador.io/inject-traffic-agent: enabled` annotation to your
workload template's annotations:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: enabled
     spec:
       containers:
```
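Outside of a GitOps flow, the same annotation could also be added imperatively for a quick experiment. This `kubectl patch` invocation is a sketch; `your-service` is a placeholder Deployment name:

```shell
# Merge-patch the pod template so new pods carry the annotation that
# triggers the mutating webhook. "your-service" is a placeholder name.
kubectl patch deployment your-service --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"telepresence.getambassador.io/inject-traffic-agent":"enabled"}}}}}'
```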
74 changes: 70 additions & 4 deletions reference/config.md
@@ -1,13 +1,14 @@
# Laptop-side configuration

Telepresence uses a `config.yml` file to store and change certain values. The location of this file varies based on your OS:
## Global Configuration
Telepresence uses a `config.yml` file to store and change certain global configuration values that apply to every cluster you use Telepresence with. The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level path file will take precedence.

## Values
### Values

The config file currently supports values for the `timeouts` and `logLevels` keys.

@@ -21,7 +22,7 @@ logLevels:
  userDaemon: debug
```

### Timeouts
#### Timeouts
Values for `timeouts` are all durations, expressed either as a number representing seconds or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings can be fractional (`1.5h`) or combined (`2h45m`).

These are the valid fields for the `timeouts` key:
@@ -36,11 +37,76 @@ These are the valid fields for the `timeouts` key:
|`trafficManagerConnect`|Waiting for the Traffic Manager API to connect for port forwards|20 seconds|
|`trafficManagerAPI`|Waiting for connection to the gRPC API after `trafficManagerConnect` is successful|5 seconds|
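Putting the duration syntax and the fields above together, a `config.yml` overriding both timeouts might look like this (the values are illustrative, not recommendations):

```yaml
timeouts:
  # Combined duration string: one minute and thirty seconds
  trafficManagerConnect: 1m30s
  # A plain number is interpreted as seconds
  trafficManagerAPI: 10
```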

### Log Levels
#### Log Levels
Values for `logLevels` are one of the following strings: `trace`, `debug`, `info`, `warning`, `error`, `fatal` and `panic`.
These are the valid fields for the `logLevels` key:

|Field|Description|Default|
|---|---|---|
|`userDaemon`|Logging level to be used by the User Daemon (logs to connector.log)|debug|
|`rootDaemon`|Logging level to be used for the Root Daemon (logs to daemon.log)|info|

## Per-Cluster Configuration
Some configuration is not global to Telepresence; it is specific to an individual cluster. That configuration is therefore stored in your kubeconfig file, which makes per-cluster settings easier to maintain.

### Values
The current per-cluster configuration supports the `dns` and `also-proxy` keys.
To add configuration, add a `telepresence.io` extension entry to the cluster in your kubeconfig like so:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        dns:
        also-proxy:
  name: example-cluster
```
#### DNS
The fields for `dns` are: `local-ip`, `remote-ip`, `exclude-suffixes`, `include-suffixes`, and `lookup-timeout`.

|Field|Description|Type|Default|
|---|---|---|---|
|`local-ip`|The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved|IP|First line of /etc/resolv.conf|
|`remote-ip`|The address of the cluster's DNS service|IP|IP of the kube-dns.kube-system or the dns-default.openshift-dns service|
|`exclude-suffixes`|Suffixes for which the DNS resolver will always fail (or fall back, in the case of the overriding resolver)|list||
|`include-suffixes`|Suffixes for which the DNS resolver will always attempt a lookup. Includes have higher priority than excludes.|list||
|`lookup-timeout`|Maximum time to wait for a cluster-side host lookup|duration||

Here is an example kubeconfig:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        dns:
          include-suffixes:
          - .se
          exclude-suffixes:
          - .com
  name: example-cluster
```


#### AlsoProxy
When using `also-proxy`, you provide a list of subnets in your kubeconfig file; each subnet is added to the TUN device, and all connections to addresses within those subnets are dispatched to the cluster.

Here is an example kubeconfig for the subnet `1.2.3.4/32`:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        also-proxy:
        - 1.2.3.4/32
  name: example-cluster
```
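A quick way to sanity-check an `also-proxy` entry, assuming something in the cluster actually answers on an address in that subnet:

```shell
# Connecting adds the also-proxy subnets from the kubeconfig to the TUN device.
telepresence connect

# Traffic to an address inside 1.2.3.4/32 is now routed to the cluster
# (assumes a service in the cluster listens on that address).
curl http://1.2.3.4/
```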
2 changes: 1 addition & 1 deletion reference/rbac.md
@@ -2,7 +2,7 @@ import Alert from '@material-ui/lab/Alert';

# Telepresence RBAC
The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
This documentation will not cover the full extent of permissions necessary to administrate Telepresence components in a cluster. [Telepresence administration](/products/telepresence/) requires permissions for creating Service Accounts, ClusterRoles and ClusterRoleBindings, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator.
This documentation will not cover the full extent of permissions necessary to administrate Telepresence components in a cluster. [Telepresence administration](/products/telepresence/) requires permissions for creating Namespaces, ServiceAccounts, ClusterRoles, ClusterRoleBindings, Secrets, Services, and MutatingWebhookConfigurations, as well as for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator.

There are two general categories for cluster permissions with respect to Telepresence. There are RBAC settings for a User and for an Administrator described above. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.

28 changes: 28 additions & 0 deletions releaseNotes.yml
@@ -30,6 +30,34 @@ docDescription: >-
changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md

items:
- version: 2.3.1
  date: 'TBD'
  notes:
  - title: DNS Resolver Configuration
    body: "Telepresence now supports per-cluster configuration of custom DNS behavior, enabling users to choose which local and remote resolvers to use and which suffixes should be excluded or included."
    image: ./telepresence-2.3.1-dns.png
    docs: reference/config
    type: feature
  - title: AlsoProxy Configuration
    body: "Telepresence now supports proxying additional user-specified subnets, so that while connected users can reach external services that are only accessible from the cluster. These subnets are configured on a per-cluster basis, and each one is added to the TUN device so that requests to IPs within it are routed to the cluster."
    image: ./telepresence-2.3.1-alsoProxy.png
    docs: reference/config
    type: feature
  - title: Mutating Webhook for Injecting Traffic Agents
    body: "The Traffic Manager now contains a mutating webhook that automatically adds an agent to pods that have the <code>telepresence.getambassador.io/inject-traffic-agent: enabled</code> annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
    image: ./telepresence-2.3.1-inject.png
    docs: reference/rbac
    type: feature
  - title: Traffic Manager Connect Timeout
    body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, to accommodate the extended time it takes to apply everything needed for the mutator webhook."
    image: ./telepresence-2.3.1-trafficmanagerconnect.png
    docs: reference/config
    type: change
  - title: Fix for large file transfers
    body: "Fix a TUN-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
    image: ./telepresence-2.3.1-large-file-transfer.png
    docs: reference/tun-device
    type: bugfix
- version: 2.3.0
  date: '2021-06-01'
  notes:
Expand Down
