Standardize code blocks on docs (#1485)
Signed-off-by: juan131 <juan@bitnami.com>
Juan Ariza Toledano committed Jan 29, 2020
1 parent 0ff31ec commit eefec73
Showing 12 changed files with 44 additions and 40 deletions.
6 changes: 3 additions & 3 deletions docs/developer/assetsvc.md
@@ -63,17 +63,17 @@ Note that the assetsvc should be rebuilt for new changes to take effect.

Note: By default, Kubeapps will try to fetch the latest version of the image, so in order to make this workflow work in Minikube you will need to update the `imagePullPolicy` first:

-```
+```bash
kubectl patch deployment kubeapps-internal-assetsvc -n kubeapps --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'
```

-```
+```bash
kubectl set image -n kubeapps deployment kubeapps-internal-assetsvc assetsvc=kubeapps/assetsvc:latest
```

For further redeploys, you can change the version to deploy a different tag, or rebuild the same image and restart the pod by executing:

-```
+```bash
kubectl delete pod -n kubeapps -l app=kubeapps-internal-assetsvc
```
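
To verify that a new pod was created after the deletion, a quick check could look like this (a sketch; it reuses the `app=kubeapps-internal-assetsvc` label from the command above):

```bash
# List the assetsvc pods and confirm a fresh one is Running
kubectl get pods -n kubeapps -l app=kubeapps-internal-assetsvc
```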

2 changes: 1 addition & 1 deletion docs/developer/basic-form-support.md
@@ -23,7 +23,7 @@ In order to identify which values should be presented in the form, it's necessar

First of all, it's necessary to specify the tag `form` and set it to `true`. All the properties marked with this tag in the schema will be represented in the form. For example:

-```
+```json
"wordpressUsername": {
"type": "string",
"form": true
5 changes: 4 additions & 1 deletion docs/developer/dashboard.md
@@ -55,12 +55,15 @@ telepresence --namespace kubeapps --method inject-tcp --swap-deployment kubeapps

> **NOTE**: If you encounter issues getting this setup working correctly, please try switching the telepresence proxying method in the above command to `vpn-tcp`. Refer to [the telepresence docs](https://www.telepresence.io/reference/methods) to learn more about the available proxying methods and their limitations.
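
For instance, the swapped invocation might look like the following (a hypothetical sketch: the full original command is truncated above, so the deployment name and remaining flags are assumptions to adapt to your setup):

```bash
# Same swap as before, but using the vpn-tcp proxying method
telepresence --namespace kubeapps --method vpn-tcp --swap-deployment kubeapps-internal-dashboard --run-shell
```
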
-Finally, launch the dashboard within the telepresence shell
+Finally, launch the dashboard within the telepresence shell:

```bash
export TELEPRESENCE_CONTAINER_NAMESPACE=kubeapps
yarn run start
```

> **NOTE**: The commands above assume you install Kubeapps in the `kubeapps` namespace. Please update the environment variable `TELEPRESENCE_CONTAINER_NAMESPACE` if you are using a different namespace.
You can now access the local development server simply by accessing the dashboard as you usually would (e.g. via a port-forward or the Ingress URL).
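
For example, assuming Kubeapps is exposed through the `kubeapps` service in the `kubeapps` namespace (adjust both names to your installation), a port-forward could look like:

```bash
# Forward local port 8080 to port 80 of the kubeapps service
kubectl port-forward -n kubeapps svc/kubeapps 8080:80
```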

#### Troubleshooting
8 changes: 4 additions & 4 deletions docs/developer/kubeops.md
@@ -43,14 +43,14 @@ This builds the `kubeops` binary in the working directory.

If you are using Minikube, it is important to start the cluster with RBAC enabled (on by default in Minikube 0.26+) in order to check the authorization features:

-```
+```bash
minikube start
eval $(minikube docker-env)
```

Note: By default, Kubeapps will try to fetch the latest version of the image, so in order to make this workflow work in Minikube you will need to update the `imagePullPolicy` first:

-```
+```bash
kubectl patch deployment kubeapps-internal-kubeops -n kubeapps --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'
```

@@ -62,13 +62,13 @@ IMAGE_TAG=dev make kubeapps/kubeops

This will generate an image `kubeapps/kubeops:dev` that you can use in the current deployment:

-```
+```bash
kubectl set image -n kubeapps deployment kubeapps-internal-kubeops kubeops=kubeapps/kubeops:dev
```

For further redeploys, you can change the version to deploy a different tag, or rebuild the same image and restart the pod by executing:

-```
+```bash
kubectl delete pod -n kubeapps -l app=kubeapps-internal-kubeops
```

8 changes: 4 additions & 4 deletions docs/developer/tiller-proxy.md
@@ -43,14 +43,14 @@ This builds the `tiller-proxy` binary in the working directory.

If you are using Minikube, it is important to start the cluster with RBAC enabled (on by default in Minikube 0.26+) in order to check the authorization features:

-```
+```bash
minikube start
eval $(minikube docker-env)
```

Note: By default, Kubeapps will try to fetch the latest version of the image, so in order to make this workflow work in Minikube you will need to update the `imagePullPolicy` first:

-```
+```bash
kubectl patch deployment kubeapps-internal-tiller-proxy -n kubeapps --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'
```

@@ -62,13 +62,13 @@ IMAGE_TAG=dev make kubeapps/tiller-proxy

This will generate an image `kubeapps/tiller-proxy:dev` that you can use in the current deployment:

-```
+```bash
kubectl set image -n kubeapps deployment kubeapps-internal-tiller-proxy proxy=kubeapps/tiller-proxy:dev
```

For further redeploys, you can change the version to deploy a different tag, or rebuild the same image and restart the pod by executing:

-```
+```bash
kubectl delete pod -n kubeapps -l app=kubeapps-internal-tiller-proxy
```

1 change: 1 addition & 0 deletions docs/user/access-control.md
@@ -111,6 +111,7 @@ kubectl create clusterrolebinding example-kubeapps-service-catalog-admin --clust
#### Read access to App Repositories

In order to list the configured App Repositories in Kubeapps, [bind users/groups Subjects](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#command-line-utilities) to the `$RELEASE_NAME-repositories-read` role in the namespace that Kubeapps was installed into by the Helm chart.
+
```bash
export KUBEAPPS_NAMESPACE=kubeapps
export KUBEAPPS_RELEASE_NAME=kubeapps
3 changes: 1 addition & 2 deletions docs/user/getting-started.md
@@ -73,9 +73,8 @@ Open a command prompt and run the `GetDashToken.cmd` Your token can be found in
Once Kubeapps is installed, securely access the Kubeapps Dashboard from your system by running:

```bash
-export POD_NAME=$(kubectl get pods -n kubeapps -l "app=kubeapps,release=kubeapps" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 in your browser to access the Kubeapps Dashboard"
-kubectl port-forward -n kubeapps $POD_NAME 8080:8080
+kubectl port-forward -n kubeapps svc/kubeapps 8080:80
```

This will start an HTTP proxy for secure access to the Kubeapps Dashboard. Visit http://127.0.0.1:8080/ in your preferred web browser to open the Dashboard. Here's what you should see:
14 changes: 7 additions & 7 deletions docs/user/migrating-to-v1.0.0-alpha.5.md
@@ -13,21 +13,21 @@ These are the steps you need to follow to upgrade Kubeapps to this version.

Please follow the steps in [this guide](./securing-kubeapps.md) to install Tiller securely. Don't install the Kubeapps chart yet, since the installation will fail when it finds resources that already exist. Once the new Tiller instance is ready, you can migrate the existing releases using the utility command included in `kubeapps` 1.0.0-alpha.5:

-```
+```console
$ kubeapps migrate-configmaps-to-secrets --target-tiller-namespace kube-system
2018/08/06 12:24:23 Migrated foo.v1 as a secret
2018/08/06 12:24:23 Done. ConfigMaps are left in the namespace kubeapps to debug possible errors. Please delete them manually
```

**NOTE**: The tool assumes that you have deployed Helm storing releases as secrets. If that is not the case, you can still migrate the releases by executing:

-```
+```bash
kubectl get configmaps -n kubeapps -o yaml -l OWNER=TILLER | sed 's/namespace: kubeapps/namespace: kube-system/g' | kubectl create -f -
```

If you list the releases, you should be able to see all of them:

-```
+```console
$ helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
NAME REVISION UPDATED STATUS CHART NAMESPACE
foo 1 Mon Aug 6 12:10:07 2018 DEPLOYED aerospike-0.1.7 default
@@ -39,7 +39,7 @@ foo 1 Mon Aug 6 12:10:07 2018 DEPLOYED aerospike-0.1.7 default

Now that we have backed up the releases, we should delete the existing Kubeapps resources. To do so, execute:

-```
+```bash
kubeapps down
kubectl delete crd helmreleases.helm.bitnami.com sealedsecrets.bitnami.com
kubectl delete -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.7.0/controller.yaml
@@ -48,7 +48,7 @@ kubectl get helmreleases -o=name --all-namespaces | xargs kubectl patch $1 --typ

Wait until everything in the Kubeapps namespace has been deleted:

-```
+```console
$ kubectl get all --namespace kubeapps
No resources found.
```
@@ -57,15 +57,15 @@ No resources found.

If you are not using Kubeless, you can delete it by executing the following command:

-```
+```bash
kubectl delete -f https://github.com/kubeless/kubeless/releases/download/v0.6.0/kubeless-v0.6.0.yaml
```

## Install the Kubeapps chart

Now you can install the new version of Kubeapps using the Helm chart included in this repository:

-```
+```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install \
--tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem \
6 changes: 3 additions & 3 deletions docs/user/migrating-to-v1.0.0.md
@@ -9,7 +9,7 @@ clean install of Kubeapps.

To back up a custom repository, run the following command for each repository:

-```
+```bash
kubectl get apprepository -o yaml <repo name> > <repo name>.yaml
```

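If you have several custom repositories, a small loop can back them all up at once (a sketch, assuming the repositories live in the current namespace):

```bash
# Write one YAML backup file per AppRepository resource
for repo in $(kubectl get apprepository -o name | cut -d/ -f2); do
  kubectl get apprepository -o yaml "$repo" > "$repo.yaml"
done
```
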
@@ -19,14 +19,14 @@ kubectl get apprepository -o yaml <repo name> > <repo name>.yaml
After backing up your custom repositories, run the following command to remove
and reinstall Kubeapps:

-```
+```bash
helm delete --purge kubeapps
helm install bitnami/kubeapps --version 1.0.0
```
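
If the `bitnami` chart repository is not already configured on your system, it may need to be added first (a sketch, using the repository URL shown elsewhere in these docs):

```bash
# Register the bitnami chart repository so bitnami/kubeapps resolves
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```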

To recover your custom repository backups, run the following command for each
repository:

-```
+```bash
kubectl apply -f <repo name>.yaml
```
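
After restoring, a quick check that the repositories are back could look like:

```bash
# List the AppRepository resources that Kubeapps knows about
kubectl get apprepository
```
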
6 changes: 3 additions & 3 deletions docs/user/securing-kubeapps.md
@@ -15,7 +15,7 @@ You can follow the Helm documentation for deploying Tiller in a secure way. In p

From these guides you can find out how to create the TLS certificate and learn the flags needed to install Tiller securely:

-```
+```bash
helm init --tiller-tls --tiller-tls-verify \
--override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' \
--tiller-tls-cert ./tiller.cert.pem \
@@ -27,7 +27,7 @@ helm init --tiller-tls --tiller-tls-verify \

This is the command to install Kubeapps with our certificate:

-```
+```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install \
--tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem \
@@ -44,7 +44,7 @@ helm install \

In order to authorize requests from users, it is necessary to enable [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes cluster. Some providers have it enabled by default, but in some cases you need to set it up explicitly. Check your provider's documentation to learn how to enable it. To verify that your cluster has RBAC available, check whether the API group exists:

-```
+```bash
$ kubectl api-versions | grep rbac.authorization
rbac.authorization.k8s.io/v1
```
14 changes: 7 additions & 7 deletions docs/user/service-catalog.md
@@ -40,15 +40,15 @@ You will deploy the Service Catalog as any other Helm chart
installed through Kubeapps. We recommend changing at least the following value in
`values.yaml`:

-```
+```yaml
asyncBindingOperationsEnabled: true
```

This value is needed for some of the GCP Service Classes to work properly.

Alternatively, you can deploy the Service Catalog using the Helm CLI:

-```
+```bash
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm repo update
helm install svc-cat/catalog --name catalog --namespace catalog --set asyncBindingOperationsEnabled=true
@@ -71,14 +71,14 @@ cluster.

To check that the broker has been successfully deployed, run the following:

-```
+```bash
kubectl get ClusterServiceBroker osba
```

If the Broker has been successfully installed and the catalog has been properly
downloaded, you should get the following output:

-```
+```bash
NAME URL STATUS AGE
osba https://osba-open-service-broker-azure.osba.svc.cluster.local Ready 6m
```
@@ -141,7 +141,7 @@ It is important to understand the schema of the secret, as it is dependent
on the broker and the instance. For Azure MySQL the secret will have the
following schema:

-```
+```yaml
database: name of the database
host: the URL of the instance
username: the user name to connect to the database
@@ -160,7 +160,7 @@ we will search for `wordpress`:
We will click on `Deploy` and modify the `values.yaml` of the application
with the following values:

-```
+```yaml
externalDatabase.host: host value in the binding secret
externalDatabase.user: username value in the binding secret
externalDatabase.password: password value in the binding secret
@@ -176,7 +176,7 @@ deployment is completed:
If we check the wordpress pod log, we can see that it connected successfully
to the Azure MySQL database:

-```
+```bash
kubectl logs wordpress-app-wordpress-597b9dbb5-2rk4k

Welcome to the Bitnami wordpress container
11 changes: 6 additions & 5 deletions docs/user/using-an-OIDC-provider.md
@@ -75,7 +75,7 @@ Kubeapps chart allows you to automatically deploy the proxy for you as a sidecar

This example uses `oauth2-proxy`'s generic OIDC provider with Google, but it is applicable to any OIDC provider, such as Keycloak, Dex, Okta or Azure Active Directory. Note that the issuer url is passed as an additional flag here, together with an option to enable the cookie being set over an insecure connection for local development only:

-```
+```bash
helm install bitnami/kubeapps \
--namespace kubeapps --name kubeapps \
--set authProxy.enabled=true \
@@ -91,7 +91,8 @@ helm install bitnami/kubeapps \
Some of the specific providers that come with `oauth2-proxy` use OpenID Connect to obtain the required IDToken and can be used instead of the generic oidc provider. Currently this includes only the GitLab, Google and LoginGov providers (see [OAuth2_Proxy's provider configuration](https://pusher.github.io/oauth2_proxy/auth-configuration) for the full list of OAuth2 providers). The user authentication flow is the same as above, with some small UI differences, such as the login button being customized to the provider (rather than "Login with OpenID Connect"), and improved presentation when accepting the requested scopes (as is the case with Google, but only visible if you request extra scopes).

Here we no longer need to provide the issuer-url as an additional flag:
-```
+
+```bash
helm install bitnami/kubeapps \
--namespace kubeapps --name kubeapps \
--set authProxy.enabled=true \
@@ -112,7 +113,7 @@ For this reason, when deploying Kubeapps on GKE we need to ensure that

Note that using the custom `google` provider here enables Google to prompt the user for consent for the specific permissions requested in the scopes below, in a user-friendly way. You can also use the `oidc` provider, but in that case the user is not prompted for the extra consent:

-```
+```bash
helm install bitnami/kubeapps \
--namespace kubeapps --name kubeapps \
--set authProxy.enabled=true \
@@ -128,7 +129,7 @@ helm install bitnami/kubeapps \

In case you want to deploy the proxy manually, first create a Kubernetes deployment and service for it. For the snippet below, you need to set the environment variables `AUTH_PROXY_CLIENT_ID`, `AUTH_PROXY_CLIENT_SECRET`, `AUTH_PROXY_DISCOVERY_URL` with the information from the IdP and `KUBEAPPS_NAMESPACE`.

-```
+```bash
export AUTH_PROXY_CLIENT_ID=<ID>
export AUTH_PROXY_CLIENT_SECRET=<SECRET>
export AUTH_PROXY_DISCOVERY_URL=<URL>
@@ -203,7 +204,7 @@ The above is a sample deployment, depending on the configuration of the Identity

Once the proxy is in place and able to connect to the IdP, we need to expose it as the main endpoint for Kubeapps (instead of the `kubeapps` service). We can do that with an Ingress object. Note that an [Ingress Controller](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers) is needed for this. There are also other methods to expose the `kubeapps-auth-proxy` service, for example using `LoadBalancer` as the type in a cloud environment. In case an Ingress is used, remember to replace the host `kubeapps.local` with the value that you want to use as a hostname for Kubeapps:

-```
+```bash
kubectl create -n $KUBEAPPS_NAMESPACE -f - -o yaml << EOF
apiVersion: extensions/v1beta1
kind: Ingress
