diff --git a/docs/developer/assetsvc.md b/docs/developer/assetsvc.md index 0eeb50605d5..d23f4a59a0e 100644 --- a/docs/developer/assetsvc.md +++ b/docs/developer/assetsvc.md @@ -63,17 +63,17 @@ Note that the assetsvc should be rebuilt for new changes to take effect. Note: By default, Kubeapps will try to fetch the latest version of the image so in order to make this workflow work in Minikube you will need to update the imagePullPolicy first: -``` +```bash kubectl patch deployment kubeapps-internal-assetsvc -n kubeapps --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]' ``` -``` +```bash kubectl set image -n kubeapps deployment kubeapps-internal-assetsvc assetsvc=kubeapps/assetsvc:latest ``` For further redeploys you can change the version to deploy a different tag or rebuild the same image and restart the pod executing: -``` +```bash kubectl delete pod -n kubeapps -l app=kubeapps-internal-assetsvc ``` diff --git a/docs/developer/basic-form-support.md b/docs/developer/basic-form-support.md index 87339d57f65..d6423acf71c 100644 --- a/docs/developer/basic-form-support.md +++ b/docs/developer/basic-form-support.md @@ -23,7 +23,7 @@ In order to identify which values should be presented in the form, it's necessar First of all, it's necessary to specify the tag `form` and set it to `true`. All the properties marked with this tag in the schema will be represented in the form. For example: -``` +```json "wordpressUsername": { "type": "string", "form": true diff --git a/docs/developer/dashboard.md b/docs/developer/dashboard.md index 6c984b4b2ab..57ea52c8896 100644 --- a/docs/developer/dashboard.md +++ b/docs/developer/dashboard.md @@ -55,12 +55,15 @@ telepresence --namespace kubeapps --method inject-tcp --swap-deployment kubeapps > **NOTE**: If you encounter issues getting this setup working correctly, please try switching the telepresence proxying method in the above command to `vpn-tcp`. Refer to [the telepresence docs](https://www.telepresence.io/reference/methods) to learn more about the available proxying methods and their limitations. -Finally, launch the dashboard within the telepresence shell +Finally, launch the dashboard within the telepresence shell: ```bash +export TELEPRESENCE_CONTAINER_NAMESPACE=kubeapps yarn run start ``` +> **NOTE**: The commands above assume you install Kubeapps in the `kubeapps` namespace. Please update the environment variable `TELEPRESENCE_CONTAINER_NAMESPACE` if you are using a different namespace. + You can now access the local development server simply by accessing the dashboard as you usually would (e.g. doing a port-forward or accesing the Ingress URL). #### Troubleshooting diff --git a/docs/developer/kubeops.md b/docs/developer/kubeops.md index 59b75ab54b0..e165317ad5c 100644 --- a/docs/developer/kubeops.md +++ b/docs/developer/kubeops.md @@ -43,14 +43,14 @@ This builds the `kubeops` binary in the working directory. 
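As a quick follow-up, it can be handy to confirm that the build actually produced the binary before building an image or redeploying anything in the cluster. This is a minimal sketch using only standard shell tools and assumes the binary landed in the working directory as described above:

```bash
# Sanity-check the freshly built kubeops binary.
ls -lh ./kubeops   # should exist and be non-empty
file ./kubeops     # should report a native executable for your platform
```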
If you are using Minikube it is important to start the cluster enabling RBAC (on by default in Minikube 0.26+) in order to check the authorization features: -``` +```bash minikube start eval $(minikube docker-env) ``` Note: By default, Kubeapps will try to fetch the latest version of the image so in order to make this workflow work in Minikube you will need to update the imagePullPolicy first: -``` +```bash kubectl patch deployment kubeapps-internal-kubeops -n kubeapps --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]' ``` @@ -62,13 +62,13 @@ IMAGE_TAG=dev make kubeapps/kubeops This will generate an image `kubeapps/kubeops:dev` that you can use in the current deployment: -``` +```bash kubectl set image -n kubeapps deployment kubeapps-internal-kubeops kubeops=kubeapps/kubeops:dev ``` For further redeploys you can change the version to deploy a different tag or rebuild the same image and restart the pod executing: -``` +```bash kubectl delete pod -n kubeapps -l app=kubeapps-internal-kubeops ``` diff --git a/docs/developer/tiller-proxy.md b/docs/developer/tiller-proxy.md index 6347cab7bb6..ff9fefd593f 100644 --- a/docs/developer/tiller-proxy.md +++ b/docs/developer/tiller-proxy.md @@ -43,14 +43,14 @@ This builds the `tiller-proxy` binary in the working directory. If you are using Minikube it is important to start the cluster enabling RBAC (on by default in Minikube 0.26+) in order to check the authorization features: -``` +```bash minikube start eval $(minikube docker-env) ``` Note: By default, Kubeapps will try to fetch the latest version of the image so in order to make this workflow work in Minikube you will need to update the imagePullPolicy first: -``` +```bash kubectl patch deployment kubeapps-internal-tiller-proxy -n kubeapps --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]' ``` @@ -62,13 +62,13 @@ IMAGE_TAG=dev make kubeapps/tiller-proxy This will generate an image `kubeapps/tiller-proxy:dev` that you can use in the current deployment: -``` +```bash kubectl set image -n kubeapps deployment kubeapps-internal-tiller-proxy proxy=kubeapps/tiller-proxy:dev ``` For further redeploys you can change the version to deploy a different tag or rebuild the same image and restart the pod executing: -``` +```bash kubectl delete pod -n kubeapps -l app=kubeapps-internal-tiller-proxy ``` diff --git a/docs/user/access-control.md b/docs/user/access-control.md index 5d6547cecb3..62f489a4ce8 100644 --- a/docs/user/access-control.md +++ b/docs/user/access-control.md @@ -111,6 +111,7 @@ kubectl create clusterrolebinding example-kubeapps-service-catalog-admin --clust #### Read access to App Repositories In order to list the configured App Repositories in Kubeapps, [bind users/groups Subjects](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#command-line-utilities) to the `$RELEASE_NAME-repositories-read` role in the namespace Kubeapps was installed into by the helm chart. 
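For example, here is a sketch of one such binding for a single user. The user name is purely hypothetical, and the role and namespace names assume the chart was installed as release `kubeapps` into the `kubeapps` namespace, matching the variables used in the snippet below:

```bash
# Hypothetical example: grant jane@example.com read access to the
# configured App Repositories. The Role name follows the
# $RELEASE_NAME-repositories-read pattern described above.
kubectl create rolebinding jane-repositories-read \
  --namespace kubeapps \
  --role kubeapps-repositories-read \
  --user jane@example.com
```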
+ ```bash export KUBEAPPS_NAMESPACE=kubeapps export KUBEAPPS_RELEASE_NAME=kubeapps diff --git a/docs/user/getting-started.md b/docs/user/getting-started.md index f010473a02f..1ed37dabe77 100644 --- a/docs/user/getting-started.md +++ b/docs/user/getting-started.md @@ -73,9 +73,8 @@ Open a command prompt and run the `GetDashToken.cmd` Your token can be found in Once Kubeapps is installed, securely access the Kubeapps Dashboard from your system by running: ```bash -export POD_NAME=$(kubectl get pods -n kubeapps -l "app=kubeapps,release=kubeapps" -o jsonpath="{.items[0].metadata.name}") echo "Visit http://127.0.0.1:8080 in your browser to access the Kubeapps Dashboard" -kubectl port-forward -n kubeapps $POD_NAME 8080:8080 +kubectl port-forward -n kubeapps svc/kubeapps 8080:80 ``` This will start an HTTP proxy for secure access to the Kubeapps Dashboard. Visit http://127.0.0.1:8080/ in your preferred web browser to open the Dashboard. Here's what you should see: diff --git a/docs/user/migrating-to-v1.0.0-alpha.5.md b/docs/user/migrating-to-v1.0.0-alpha.5.md index 5e27583c014..07b134a0a43 100644 --- a/docs/user/migrating-to-v1.0.0-alpha.5.md +++ b/docs/user/migrating-to-v1.0.0-alpha.5.md @@ -13,7 +13,7 @@ These are the steps you need to follow to upgrade Kubeapps to this version. Please follow the steps in [this guide](./securing-kubeapps.md) to install Tiller securely. Don't install the Kubeapps chart yet since it will fail because it will find resources that already exist. Once the new Tiller instance is ready you can migrate the existing releases using the utility command included in `kubeapps` 1.0.0-alpha.5: -``` +```console $ kubeapps migrate-configmaps-to-secrets --target-tiller-namespace kube-system 2018/08/06 12:24:23 Migrated foo.v1 as a secret 2018/08/06 12:24:23 Done. ConfigMaps are left in the namespace kubeapps to debug possible errors. Please delete them manually @@ -21,13 +21,13 @@ $ kubeapps migrate-configmaps-to-secrets --target-tiller-namespace kube-system **NOTE**: The tool asumes that you have deployed Helm storing releases as secrets. If that is not the case you can still migrate the releases executing: -``` +```bash kubectl get configmaps -n kubeapps -o yaml -l OWNER=TILLER | sed 's/namespace: kubeapps/namespace: kube-system/g' | kubectl create -f - ``` If you list the releases you should be able to see all of them: -``` +```console $ helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem NAME REVISION UPDATED STATUS CHART NAMESPACE foo 1 Mon Aug 6 12:10:07 2018 DEPLOYED aerospike-0.1.7 default @@ -39,7 +39,7 @@ foo 1 Mon Aug 6 12:10:07 2018 DEPLOYED aerospike-0.1.7 default Now that we have backed up the releases we should delete existing Kubeapps resources. To do so execute: -``` +```bash kubeapps down kubectl delete crd helmreleases.helm.bitnami.com sealedsecrets.bitnami.com kubectl delete -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.7.0/controller.yaml @@ -48,7 +48,7 @@ kubectl get helmreleases -o=name --all-namespaces | xargs kubectl patch $1 --typ Wait until everything in the namespace of Kubeapps has been deleted: -``` +```console $ kubectl get all --namespace kubeapps No resources found. ``` @@ -57,7 +57,7 @@ No resources found. 
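Rather than re-running that check by hand, you can poll the namespace until it empties out. This small sketch assumes the standard `watch` utility is available on your workstation:

```bash
# Re-run the listing every 5 seconds; stop with Ctrl+C once it reports
# that no resources are left in the kubeapps namespace.
watch -n 5 kubectl get all --namespace kubeapps
```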
If you want to delete Kubeless (if you are not using it) you can delete it executing the following command: -``` +```bash kubectl delete -f https://github.com/kubeless/kubeless/releases/download/v0.6.0/kubeless-v0.6.0.yaml ``` @@ -65,7 +65,7 @@ kubectl delete -f https://github.com/kubeless/kubeless/releases/download/v0.6.0/ Now you can install the new version of Kubeapps using the Helm chart included in this repository: -``` +```bash helm repo add bitnami https://charts.bitnami.com/bitnami helm install \ --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem \ diff --git a/docs/user/migrating-to-v1.0.0.md b/docs/user/migrating-to-v1.0.0.md index 278477ac697..f51413c4086 100644 --- a/docs/user/migrating-to-v1.0.0.md +++ b/docs/user/migrating-to-v1.0.0.md @@ -9,7 +9,7 @@ clean install of Kubeapps. To backup a custom repository, run the following command for each repository: -``` +```bash kubectl get apprepository -o yaml > .yaml ``` @@ -19,7 +19,7 @@ kubectl get apprepository -o yaml > .yaml After backing up your custom repositories, run the following command to remove and reinstall Kubeapps: -``` +```bash helm delete --purge kubeapps helm install bitnami/kubeapps --version 1.0.0 ``` @@ -27,6 +27,6 @@ helm install bitnami/kubeapps --version 1.0.0 To recover your custom repository backups, run the following command for each repository: -``` +```bash kubectl apply -f .yaml ``` diff --git a/docs/user/securing-kubeapps.md b/docs/user/securing-kubeapps.md index 40607ce0416..1da68c659ee 100644 --- a/docs/user/securing-kubeapps.md +++ b/docs/user/securing-kubeapps.md @@ -15,7 +15,7 @@ You can follow the Helm documentation for deploying Tiller in a secure way. In p From these guides you can find out how to create the TLS certificate and the necessary flags to install Tiller securely: -``` +```bash helm init --tiller-tls --tiller-tls-verify \ --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' \ --tiller-tls-cert ./tiller.cert.pem \ @@ -27,7 +27,7 @@ helm init --tiller-tls --tiller-tls-verify \ This is the command to install Kubeapps with our certificate: -``` +```bash helm repo add bitnami https://charts.bitnami.com/bitnami helm install \ --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem \ @@ -44,7 +44,7 @@ helm install \ In order to be able to authorize requests from users it is necessary to enable [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes cluster. Some providers have it enabled by default but in some cases you need to set it up explicitly. Check out your provider documentation to know how to enable it. To verify if your cluster has RBAC available you can check if the API group exists: -``` +```bash $ kubectl api-versions | grep rbac.authorization rbac.authorization.k8s.io/v1 ``` diff --git a/docs/user/service-catalog.md b/docs/user/service-catalog.md index 4e20a02a64d..bb239bdc864 100644 --- a/docs/user/service-catalog.md +++ b/docs/user/service-catalog.md @@ -40,7 +40,7 @@ You will deploy the Service Catalog as any other Helm chart installed through Kubeapps. We recommend to at least change the following value in `values.yaml`: -``` +```yaml asyncBindingOperationsEnabled: true ``` @@ -48,7 +48,7 @@ This value is needed for some of the GCP Service Classes to work properly. 
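Whichever way you install it (through Kubeapps with the value above, or with the Helm CLI shown next), it can be useful to confirm that the Service Catalog's aggregated API has been registered before registering a broker. This is only a sketch; the exact API group version may differ between chart releases:

```bash
# Once the Service Catalog is ready, its API group is listed and its
# resources (such as ClusterServiceBroker) become queryable.
kubectl api-versions | grep servicecatalog.k8s.io
kubectl get clusterservicebrokers
```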
Alternatively, you can deploy the Service Catalog using the Helm CLI: -``` +```bash helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com helm repo update helm install svc-cat/catalog --name catalog --namespace catalog --set asyncBindingOperationsEnabled=true @@ -71,14 +71,14 @@ cluster. To check that the broker has been successfully deployed run the following: -``` +```bash kubectl get ClusterServiceBroker osba ``` If the Broker has been successfully installed and the catalog has been properly downloaded you should get the following output: -``` +```bash NAME URL STATUS AGE osba https://osba-open-service-broker-azure.osba.svc.cluster.local Ready 6m ``` @@ -141,7 +141,7 @@ It is important to understand the schema of the secret, as it is dependent on the broker and the instance. For Azure MySQL the secret will have the following schema: -``` +```yaml database: name of the database host: the URL of the instance username: the user name to connect to the database @@ -160,7 +160,7 @@ we will search for `wordpress`: We will click on `Deploy` and will modify the `values.yaml` of the application with the following values: -``` +```yaml externalDatabase.host: host value in the binding secret externalDatabase.user: username value in the binding secret externalDatabase.password: password value in the binding secret @@ -176,7 +176,7 @@ deployment is completed: If we check the wordpress pod log we can see that it connected successfully to the Azure MySQL database: -``` +```bash kubectl logs wordpress-app-wordpress-597b9dbb5-2rk4k Welcome to the Bitnami wordpress container diff --git a/docs/user/using-an-OIDC-provider.md b/docs/user/using-an-OIDC-provider.md index 39d39ebd79a..34ea8d92076 100644 --- a/docs/user/using-an-OIDC-provider.md +++ b/docs/user/using-an-OIDC-provider.md @@ -75,7 +75,7 @@ Kubeapps chart allows you to automatically deploy the proxy for you as a sidecar This example uses `oauth2-proxy`'s generic OIDC provider with Google, but is applicable to any OIDC provider such as Keycloak, Dex, Okta or Azure Active Directory etc. Note that the issuer url is passed as an additional flag here, together with an option to enable the cookie being set over an insecure connection for local development only: -``` +```bash helm install bitnami/kubeapps \ --namespace kubeapps --name kubeapps \ --set authProxy.enabled=true \ @@ -91,7 +91,8 @@ helm install bitnami/kubeapps \ Some of the specific providers that come with `oauth2-proxy` are using OpenIDConnect to obtain the required IDToken and can be used instead of the generic oidc provider. Currently this includes only the GitLab, Google and LoginGov providers (see [OAuth2_Proxy's provider configuration](https://pusher.github.io/oauth2_proxy/auth-configuration) for the full list of OAuth2 providers). The user authentication flow is the same as above, with some small UI differences, such as the default login button is customized to the provider (rather than "Login with OpenID Connect"), or improved presentation when accepting the requested scopes (as is the case with Google, but only visible if you request extra scopes). 
Here we no longer need to provide the issuer -url as an additional flag: -``` + +```bash helm install bitnami/kubeapps \ --namespace kubeapps --name kubeapps \ --set authProxy.enabled=true \ @@ -112,7 +113,7 @@ For this reason, when deploying Kubeapps on GKE we need to ensure that Note that using the custom `google` provider here enables google to prompt the user for consent for the specific permissions requested in the scopes below, in a user-friendly way. You can also use the `oidc` provider but in this case the user is not prompted for the extra consent: -``` +```bash helm install bitnami/kubeapps \ --namespace kubeapps --name kubeapps \ --set authProxy.enabled=true \ @@ -128,7 +129,7 @@ helm install bitnami/kubeapps \ In case you want to manually deploy the proxy, first you will create a Kubernetes deployment and service for the proxy. For the snippet below, you need to set the environment variables `AUTH_PROXY_CLIENT_ID`, `AUTH_PROXY_CLIENT_SECRET`, `AUTH_PROXY_DISCOVERY_URL` with the information from the IdP and `KUBEAPPS_NAMESPACE`. -``` +```bash export AUTH_PROXY_CLIENT_ID= export AUTH_PROXY_CLIENT_SECRET= export AUTH_PROXY_DISCOVERY_URL= @@ -203,7 +204,7 @@ The above is a sample deployment, depending on the configuration of the Identity Once the proxy is in place and it's able to connect to the IdP we will need to expose it to access it as the main endpoint for Kubeapps (instead of the `kubeapps` service). We can do that with an Ingress object. Note that for doing so an [Ingress Controller](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers) is needed. There are also other methods to expose the `kubeapps-auth-proxy` service, for example using `LoadBalancer` as type in a cloud environment. In case an Ingress is used, remember to modify the host `kubeapps.local` for the value that you want to use as a hostname for Kubeapps: -``` +```bash kubectl create -n $KUBEAPPS_NAMESPACE -f - -o yaml << EOF apiVersion: extensions/v1beta1 kind: Ingress