This repository has been archived by the owner on Feb 27, 2023. It is now read-only.

Commit

Merge pull request #119 from Bradamant3/doc-edits-0.2
edit docs for 0.2 release

Signed-off-by: Ross Kukulinski <ross@kukulinski.com>
rosskukulinski committed May 21, 2018
2 parents 9e02e0a + 0552d8e commit 9b6d4db
Showing 8 changed files with 91 additions and 97 deletions.
26 changes: 12 additions & 14 deletions README.md
@@ -8,7 +8,7 @@ Heptio Gimbal is a layer-7 load balancing platform built on Kubernetes, the [Env

Gimbal was developed out of a joint effort between Heptio and Yahoo Japan Corporation's subsidiary, Actapio, to modernize Yahoo Japan’s infrastructure with Kubernetes, without affecting legacy investments in OpenStack.

At launch, Gimbal can discover services from Kubernetes and OpenStack clusters, but we expect to support additional platforms in the future.
Early releases of Gimbal can discover services that run on Kubernetes and OpenStack clusters, but support for additional platforms is expected in future releases.

### Common Use Cases

@@ -19,11 +19,9 @@ At launch, Gimbal can discover services from Kubernetes and OpenStack clusters,

![OverviewDiagram](docs/images/overview.png)

## Prerequisites
## Supported versions

Gimbal is tested with Kubernetes clusters running version 1.9 and later but should work with any cluster running version 1.7 or later.

Gimbal's service discovery is currently tested with Kubernetes 1.7+ and OpenStack Mitaka.
Gimbal runs on Kubernetes version 1.9 or later. Its service discovery is tested against clusters running Kubernetes 1.7 or later and against OpenStack Mitaka.

## Get started

@@ -36,18 +34,18 @@ Documentation for all the Gimbal components can be found in the [docs directory]

## Known Limitations

* Upstream Kubernetes Pods and OpenStack VMs must be routable from the Gimbal load balancing cluster
* No support for Kubernetes clusters with overlay networks
* We are looking for feedback on community requirements to design a solution. One potential option is to use one GRE tunnel per upstream cluster. [Feedback welcome here](https://github.com/heptio/gimbal/issues/39)!
* The Kubernetes Ingress API is limited and insecure
* Only one backend per route
* Anyone can modify route rules for a domain
* More complex load balancing features like weighting and strategy are not supported
* Gimbal & Contour will solve this with a [new IngressRoute CRD](https://github.com/heptio/contour/blob/master/design/ingressroute-design.md)
* Upstream Kubernetes Pods and OpenStack VMs must be routable from the Gimbal load balancing cluster.
* Support is not available for Kubernetes clusters with overlay networks.
* We are looking for community feedback on design requirements for a solution. A possible option is one GRE tunnel per upstream cluster. [Feedback welcome here](https://github.com/heptio/gimbal/issues/39)!
* The Kubernetes Ingress API is limited and insecure.
* Provides only one backend per route.
* Anyone can modify route rules for a domain.
* More complex load balancing features like weighting and strategy are not supported.
* Gimbal & Contour will provide a solution with a [new IngressRoute CRD](https://github.com/heptio/contour/blob/master/design/ingressroute-design.md).

## Troubleshooting

If you encounter any problems that the documentation does not address, please [file an issue](https://github.com/heptio/gimbal/issues) or talk to us on the Kubernetes Slack team channel [#gimbal](https://kubernetes.slack.com/messages/gimbal)
If you encounter any problems that the documentation does not address, please [file an issue](https://github.com/heptio/gimbal/issues) or talk to us on the Kubernetes Slack team channel [#gimbal](https://kubernetes.slack.com/messages/gimbal).

## Contributing

36 changes: 17 additions & 19 deletions deployment/README.md
@@ -34,12 +34,10 @@

- A single Kubernetes cluster to deploy Gimbal
- Kubernetes or Openstack clusters with flat networking. That is, each Pod has a route-able IP address on the network.
- Kubernetes or Openstack clusters with flat networking. That is, each Pod has a routable IP address on the network.

## Deploy Contour

For additional information about Contour, see [the Gimbal architecture doc](../docs/gimbal-architecture.md).

```sh
# Navigate to deployment directory
$ cd deployment
@@ -48,13 +46,15 @@ $ cd deployment
$ kubectl create -f contour/
```

The deployment also includes sample Network Policies which restrict access to Contour and Envoy as well as allow access from Prometheus to scrape for metrics.
The deployment includes sample Network Policies that restrict access to Contour and Envoy. The policies explicitly allow access from Prometheus to scrape for metrics.

**NOTE**: The current configuration exposes the `/stats` path from the Envoy Admin UI so that Prometheus can scrape for metrics.
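
If you want to confirm what the sample policies allow, you can inspect them directly. This is a minimal sketch; `<namespace>` and `<policy-name>` are placeholders to replace with whatever the listing returns:

```sh
# List the Network Policies created by the sample manifests
$ kubectl get networkpolicy --all-namespaces

# Describe one to see which peers may reach Contour and Envoy,
# including the rule that admits Prometheus for metrics scraping
$ kubectl -n <namespace> describe networkpolicy <policy-name>
```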

For additional information about Contour, see [the Gimbal architecture doc](../docs/gimbal-architecture.md).

## Deploy Discoverers

Service discovery is enabled with the Discoverers, which have both Kubernetes and Openstack implementations.
Service discovery is enabled with Discoverers, which have both Kubernetes and Openstack implementations.

```sh
# Create gimbal-discoverer namespace
```
@@ -103,11 +103,11 @@ For more information, see [the OpenStack Discoverer doc](../docs/openstack-disco

## Deploy Prometheus

Included in the Gimbal repo is a sample development deployment of Prometheus and Alertmanager using temporary storage and may not be suitable for all environments.
A sample deployment of Prometheus and Alertmanager is provided that uses temporary storage. This deployment can be used for testing and development, but might not be suitable for all environments.

### Stateful Deployment

A stateful deployment of Prometheus should utilize persistent storage within your Kubernetes cluster. This is accomplished by utilizing [Persistent Volumes and Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to maintain a correlation between a data volume and the Prometheus pod. Persistent volumes can be `static` or `dynamic` and depends on the backend storage implementation utilized in environment in which the cluster is deployed. Please reference the [documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes) which best matches your environment & needs.
A stateful deployment of Prometheus should use persistent storage with [Persistent Volumes and Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to maintain a correlation between a data volume and the Prometheus Pod. Persistent volumes can be static or dynamic, depending on the backend storage implementation used in the environment where the cluster is deployed. For more information, see the [Kubernetes documentation on types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes).
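
As a rough sketch of the stateful approach, the claim below requests a persistent volume for Prometheus data. The claim name, size, and implicit storage class are assumptions to adapt to your cluster; the claim would then be mounted by the Prometheus server in place of its temporary volume.

```sh
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data          # illustrative name
  namespace: gimbal-monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi              # illustrative size
EOF
```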

### Quick start

@@ -127,19 +127,19 @@
```sh
$ kubectl apply -f kubernetes/
$ kubectl -n gimbal-monitoring port-forward $(kubectl -n gimbal-monitoring get pods -l app=prometheus -l component=server -o jsonpath='{.items[0].metadata.name}') 9090:9090
```

then go to [http://localhost:9090](http://localhost:9090) in your browser
then go to [http://localhost:9090](http://localhost:9090) in your browser.

### Access the Alertmanager web UI

```sh
$ kubectl -n gimbal-monitoring port-forward $(kubectl -n gimbal-monitoring get pods -l app=prometheus -l component=alertmanager -o jsonpath='{.items[0].metadata.name}') 9093:9093
```

then go to [http://localhost:9093](http://localhost:9093) in your browser
then go to [http://localhost:9093](http://localhost:9093) in your browser.

## Deploy Grafana

Sample development deployment of Grafana using temporary storage.
A sample deployment of Grafana is provided that uses temporary storage.

### Quick start

@@ -159,28 +159,28 @@
```sh
$ kubectl create secret generic grafana -n gimbal-monitoring \
$ kubectl port-forward $(kubectl get pods -l app=grafana -n gimbal-monitoring -o jsonpath='{.items[0].metadata.name}') 3000 -n gimbal-monitoring
```

then go to [http://localhost:3000](http://localhost:3000) in your browser, with `admin` as the username.
then go to [http://localhost:3000](http://localhost:3000) in your browser. The username is `admin`.

### Configure Grafana

Grafana requires some configuration after it's deployed. These steps configure a datasource and import a dashboard to validate the connection.
Grafana requires some configuration after it's deployed.

#### Configure datasource

1. On the main Grafana page, click **Add Datasource**
2. For **Name** enter _prometheus_
3. In `Type` selector, choose _Prometheus_
3. In the **Type** selector, choose _Prometheus_
4. For the URL, enter `http://prometheus:9090`
5. Click **Save & Test**
6. Look for the message box in green stating _Data source is working_
6. Look for the message box _Data source is working_
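
If you prefer to double-check from the command line, Grafana's HTTP API lists the configured datasources. This assumes the port-forward from the quick start is still running and that `GRAFANA_PASSWORD` holds the admin password you created earlier:

```sh
# Should return a JSON array containing the "prometheus" datasource
$ curl -s -u admin:${GRAFANA_PASSWORD} http://localhost:3000/api/datasources
```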

#### Dashboards
#### Configure dashboards

Dashboards for Envoy and the Discovery components are included as part of the sample Grafana deployment.

##### Add Sample Kubernetes Dashboard

Add sample dashboard to validate that the data source is collecting data:
Add a sample dashboard to validate that the data source is collecting data:

1. On the main page, click the plus icon and choose **Import dashboard**
2. Enter _1621_ in the first box
@@ -189,11 +189,9 @@

## Validation

Now you can verify the deployment:

### Discovery cluster

This example deploys a sample application into the default namespace of [the discovered Kubernetes cluster that you created](#kubernetes).
This example deploys a sample application in the default namespace of [the discovered Kubernetes cluster that you created](#kubernetes).

```sh
# Deploy sample apps
```
12 changes: 6 additions & 6 deletions discovery/README.md
@@ -3,23 +3,23 @@
[![Build Status](https://travis-ci.com/heptio/gimbal.svg?token=dGsEGqM7L7s2vaK7wDXC&branch=master)](https://travis-ci.com/heptio/gimbal)

## Overview
The Gimbal Discoverer currently has two different systems it can monitor, Kubernetes and Openstack. The purpose of the Discoverers are to perform service discovery for remote clusters by finding remote endpoints and synchronizing them to a central Kubernetes cluster as Services & Endpoints.
The Gimbal Discoverer currently can monitor two systems, Kubernetes and Openstack. The Discoverers perform service discovery of remote clusters by finding remote endpoints and synchronizing them to a central Kubernetes cluster as Services & Endpoints.

### Kubernetes
The Kubernetes discoverer monitors available Services and Endpoints for a single Kubernetes cluster. The credentials to access the each API server will be provided by the Administrators via a Kubernetes Secret.
The Kubernetes Discoverer monitors available Services and Endpoints for a single Kubernetes cluster. The credentials to access each API server are provided with a Kubernetes Secret.

The Discoverer will leverage the `watch` feature of the Kubernetes API to receive changes dynamically, rather than having to poll the API. All available services & endpoints will be synchronized to the Team namespace matching the source system.
The Discoverer leverages the watch operation of the Kubernetes API to receive changes dynamically, instead of polling the API. All available Services and Endpoints are synchronized to the Team namespace that matches the source system.
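
To see the kind of event stream the Discoverer consumes, you can run an equivalent watch by hand against a backend cluster. The kubeconfig path here is only a placeholder for that cluster's credentials:

```sh
# Stream Service and Endpoints changes from a backend cluster
$ kubectl --kubeconfig=./backend-kubeconfig get services,endpoints --all-namespaces --watch
```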

### Openstack
The Openstack discoverer monitors all Load Balancer as a Service (LBaaS) configured as well as the corresponding Members. They are synchronized to the Team namespace as Services and Endpoints, with the Namespace being configured as the TenantName in Openstack.
The Openstack Discoverer monitors all configured Load Balancers as a Service (LBaaS) plus their corresponding Members. They are synchronized to the Team namespace as Services and Endpoints. The namespace is configured as the OpenStack TenantName.

The Discoverer will poll the Openstack API on a customizable interval.
The Discoverer polls the Openstack API on a customizable interval.

## Get started

#### Args

Arguments are available to customize the discoverer:
The following arguments are available to customize the Discoverer:

| flag | default | description | discoverer |
|---|---|---|---|
21 changes: 10 additions & 11 deletions docs/discovery-naming-conventions.md
@@ -1,23 +1,22 @@
# Discovery Naming Conventions
# Discovery naming conventions

In order to load balance and route traffic to backend systems, Gimbal must
discover the backends and sync them to the Gimbal cluster. This is done
by the Gimbal discovery components, such as the Kubernetes discoverer and the
OpenStack discoverer.
To load balance and route traffic to backend systems, Gimbal must
discover the backends and synchronize them to the Gimbal cluster. This is done
by the Gimbal discovery components -- currently the Kubernetes Discoverer and the
OpenStack Discoverer.

During the discovery process, Gimbal translates the discovered backends into
Kubernetes Services and Endpoints. The name of the discovered Services and
Endpoints is called the _Discovered Name_, and is built from the following
_Components_:
Kubernetes Services and Endpoints. The _Discovered Name_ of each Service and
Endpoint is formed by concatenating:

```
${backend-name}-${service-name}
```
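
For example (hypothetical names), a backend registered as `cluster1` that exposes a Service named `nginx` would be synchronized into the Gimbal cluster as a Service and Endpoints named `cluster1-nginx`:

```sh
# Look up the discovered copy by its concatenated name (hypothetical example)
$ kubectl get svc cluster1-nginx
```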

The name of service ports is not specified, and is handled independently by each
discoverer implementation.
The name of a service port is not specified, and is handled independently by each
Discoverer implementation.

## Kubernetes Service Naming Requirements
## Kubernetes Service naming requirements

Kubernetes Service names must adhere to the [rfc1035 DNS Label](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/identifiers.md) specification:

4 changes: 2 additions & 2 deletions docs/list-discovered-services.md
@@ -1,14 +1,14 @@
# List Discovered Services

The Gimbal discoverers add labels to the discovered services and endpoints before storing them in the Gimbal cluster. These labels are useful when it comes to querying the Gimbal cluster for information about these services and endpoints.
The Gimbal Discoverers add labels to the discovered services and endpoints before storing them in the Gimbal cluster. These labels are useful for querying the Gimbal cluster.

## List all discovered services and endpoints

```sh
kubectl get svc,endpoints -l gimbal.heptio.com/backend
```

You may add `--all-namespaces` to list across all namespaces in the Gimbal cluster.
You can add `--all-namespaces` to list across all namespaces in the Gimbal cluster.

## List services and endpoints that were discovered from a specific cluster
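
A sketch of such a query, using the `gimbal.heptio.com/backend` label that the Discoverers apply (see the backend-removal examples below), where `<cluster-name>` stands in for the backend's registered name:

```sh
$ kubectl get svc,endpoints --all-namespaces -l gimbal.heptio.com/backend=<cluster-name>
```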

39 changes: 19 additions & 20 deletions docs/manage-backends.md
@@ -2,12 +2,11 @@

## Add a new backend

In order to route traffic to a new backend, you must deploy a new discoverer instance that will discover all the services and endpoints.
To route traffic to a new backend, you must deploy a new Discoverer instance that discovers all Services and Endpoints in that backend so that traffic can be routed to them.

### Kubernetes

1. Obtain the cluster's kubeconfig file.
2. Create a new secret for the discoverer, using the kubeconfig obtained in the previous step:
1. Create a new Secret from the kubeconfig file for the cluster:

```sh
BACKEND_NAME=new-k8s
@@ -17,19 +16,19 @@ In order to route traffic to a new backend, you must deploy a new discoverer ins
--from-literal=backend-name=${BACKEND_NAME}
```

3. Update the [deployment manfiest](../deployment/gimbal-discoverer/02-kubernetes-discoverer.yaml). Set the deployment name to the name of the new backend, and update the secret name to the one created in the previous step.
4. Apply the updated manifest against the Gimbal cluster:
1. Update the [deployment manifest](../deployment/gimbal-discoverer/02-kubernetes-discoverer.yaml). Set the deployment name to the name of the new backend, and set the Secret name to the name of the new Secret.
1. Apply the updated manifest to the Gimbal cluster:

```sh
kubectl -n gimbal-discovery apply -f new-k8s-discoverer.yaml
```

5. Verify the discoverer is running by checking the number of Available replicas in the new deployment, and by verifying the logs of the new pod.
1. Verify the Discoverer is running by checking the number of available replicas in the new deployment, and by checking the logs of the new pod.
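
A sketch of that verification, assuming the new Discoverer was deployed into the `gimbal-discovery` namespace and that its Deployment is named after the backend (adjust the names to match your manifest):

```sh
# Check that the Deployment reports an available replica
$ kubectl -n gimbal-discovery get deployments

# Tail the logs of the new Discoverer's pod
$ kubectl -n gimbal-discovery logs deployment/${BACKEND_NAME}-discoverer --tail=50
```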

### OpenStack

1. Ensure you have all the required [credentials](./openstack-discoverer.md#credentials) to the remote OpenStack cluster.
2. Create a new secret for the discoverer:
1. Ensure you have all the required [credentials](./openstack-discoverer.md#credentials) for the OpenStack cluster.
1. Create a new Secret:

```sh
BACKEND_NAME=new-openstack
@@ -43,57 +42,57 @@ In order to route traffic to a new backend, you must deploy a new discoverer ins
--from-literal=tenant-name=${OS_TENANT_NAME}
```

3. Update the [deployment manifest](../deployment/gimbal-discoverer/02-openstack-discoverer.yaml). Set the deployment name to the name of the new backend, and update the secret name to the one created in the previous step.
4. Apply the updated manifest against the Gimbal cluster:
1. Update the [deployment manifest](../deployment/gimbal-discoverer/02-openstack-discoverer.yaml). Set the deployment name to the name of the new backend, and update the secret name to the one created in the previous step.
1. Apply the updated manifest to the Gimbal cluster:

```sh
kubectl -n gimbal-discovery apply -f new-openstack-discoverer.yaml
```

5. Verify the discoverer is running by checking the number of Available replicas in the new deployment, and by verifying the logs of the new pod.
1. Verify the Discoverer is running by checking the number of available replicas in the new deployment, and by verifying the logs of the new pod.

## Remove a backend

To remove a backend from the Gimbal cluster, the discoverer and the discovered services must be deleted.
To remove a backend from the Gimbal cluster, the Discoverer and the discovered services must be deleted.

### Delete the discoverer

1. Find the discoverer instance responsable of the backend:
1. Find the Discoverer instance that's responsible for the backend:

```sh
# Assuming a Kubernetes backend
kubectl -n gimbal-discovery get deployments -l app=kubernetes-discoverer
```

2. Delete the discoverer instance responsable of the backend:
1. Delete the instance:

```sh
kubectl -n gimbal-discovery delete deployment ${DISCOVERER_NAME}
```

3. Delete the secret that belongs to the backend cluster
1. Delete the Secret that holds the credentials for the backend cluster:

```sh
kubectl -n gimbal-discovery delete secret ${DISCOVERER_SECRET_NAME}
```

### Delete all services/endpoints that were discovered
### Delete Services and Endpoints

**Warning: Performing this operation will result in Gimbal not sending traffic to this backend.**
**Warning: Performing this operation results in Gimbal not sending traffic to this backend.**

1. List services that belong to the cluster, and verify the list:
1. List the Services that belong to the cluster, and verify the list:

```sh
kubectl --all-namespaces get svc -l gimbal.heptio.com/backend=${CLUSTER_NAME}
```

2. Get a list of namespaces that have services discovered from this cluster:
1. List the namespaces with Services that were discovered:

```sh
kubectl get svc --all-namespaces -l gimbal.heptio.com/backend=${CLUSTER_NAME} -o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}' | uniq
```

3. Iterate over the namespaces and delete all services and endpoints discovered from this cluster:
1. Iterate over the namespaces and delete all Services and Endpoints:

```sh
NAMESPACES=$(kubectl get svc --all-namespaces -l gimbal.heptio.com/backend=${CLUSTER_NAME} -o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}' | uniq)
```