documented steps to gain access to clusters
jhrv committed Aug 13, 2019
1 parent b4490e0 commit d263b37
Showing 3 changed files with 31 additions and 21 deletions.
2 changes: 2 additions & 0 deletions content/clusters/README.md
@@ -6,6 +6,8 @@ We currently provide these Kubernetes clusters:
- prod-fss
- dev-sbs (previously preprod-sbs)
- prod-sbs
- dev-gcp
- prod-gcp

The name of each cluster follows the format `<environment class>-<zone>`; for example, `prod-fss` is the production cluster in zone `fss`.

50 changes: 29 additions & 21 deletions content/getting-started/README.md
@@ -15,47 +15,55 @@ The `kubectl` tool uses a `kubeconfig` file to get the information it needs in order to communicate with a cluster.

`kubectl` will by default look for a file named `config` in the `$HOME/.kube/` folder. You can override this by setting the environment variable `KUBECONFIG` to the absolute path of the file.
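
As a minimal sketch of the override (the path below is just an illustration), point `KUBECONFIG` at your file and confirm that `kubectl` picks it up:

```
$ export KUBECONFIG="$HOME/kubeconfigs/nais-config"   # hypothetical path
$ kubectl config get-contexts                         # lists the contexts defined in that file
```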

## Connect to ScaleFT

In order to reach our clusters, you have to be connected to the right ScaleFT host. For GCP, there is one host per cluster (`dev-gcp` and `prod-gcp`); for on-premise, select `devWeb02`.

Start the `navTunnel` app and click its icon. If you are not authenticated, it will open your browser and prompt you for your credentials. When done, click the icon again and select your cluster (see below).

![Connect ScaleFT](_media/scale_connect.png)
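
If you prefer the terminal, ScaleFT also ships a command-line client, `sft`. This is a hedged sketch of an alternative to the `navTunnel` app, assuming the `sft` client is installed and enrolled; the app is the documented path:

```
$ sft login          # opens your browser for authentication, like the app does
$ sft list-servers   # check that the host you need (e.g. devWeb02) is visible
```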

## Authenticate `kubectl`

### On-premise

The first time you connect to an on-premise cluster, you need to authenticate with Azure AD.

```
$ kubectl config use-context prod-fss
Switched to context "prod-fss".
$ kubectl get pods
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code CR69DPQQZ to authenticate.
```

When prompted like above, go to the address, enter the code, and log in with your NAV e-mail and NAV-ident password.
When done, `kubectl` will update your `kubeconfig` file with the tokens needed to gain access to the cluster.
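
To check that the tokens were actually written, you can inspect the configuration for the active context; a minimal sketch:

```
$ kubectl config use-context prod-fss
$ kubectl config view --minify   # shows only the config for the current context, including user entries
```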

### Google Cloud Platform (GCP)

To access the clusters running in GCP, you need to have [gcloud installed](https://cloud.google.com/sdk/docs/#install_the_latest_cloud_tools_version_cloudsdk_current_version) and authenticated, as well as being connected to the right ScaleFT host.

First you need to install `gcloud` following the [instructions](https://cloud.google.com/sdk/docs/#install_the_latest_cloud_tools_version_cloudsdk_current_version) for your platform.

Once installed, you need to authenticate with Google using your NAV e-mail.

```
$ gcloud auth login
```
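
You can verify which account is active before moving on; a minimal sketch:

```
$ gcloud auth list   # the active account is marked with an asterisk
```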

Make sure you are connected to the right cluster, and verify that it works.

```
$ kubectl config use-context prod-gcp
Switched to context "prod-gcp".
$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:14131
...
```

## Recommended tools

- [kubectx](https://github.com/ahmetb/kubectx) - Simplifies switching between clusters and namespaces.
- [kubeaware](https://github.com/jhrv/kubeaware) - Visualizes which cluster and namespace are currently active.
- [kail](https://github.com/boz/kail) - Kubernetes tail; a log multiplexer.
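
A hedged sketch of how these tools fit together (`myteam` is a hypothetical namespace):

```
$ kubectx dev-gcp      # switch cluster context
$ kubens myteam        # switch namespace (kubens ships with kubectx)
$ kail --ns myteam     # stream logs from every pod in the namespace
```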

## Setting up on VDI
If you need to use your VDI to access the clusters, there's a guide for configuring it: [vdi](/content/getting-started/vdi.md).

Binary file added content/getting-started/_media/scale_connect.png
