diff --git a/_data/home-content.yml b/_data/home-content.yml
index 7313b9f9..1187d514 100644
--- a/_data/home-content.yml
+++ b/_data/home-content.yml
@@ -1,6 +1,36 @@
+- title: Clients
+ icon: images/home-icons/client.svg
+ url: ''
+ links:
+ - title: Codefresh CLI
+ localurl: /docs/clients/csdp-cli/
+
+
+- title: Installation
+ icon: images/home-icons/runtimes.svg
+ url: ''
+ links:
+ - title: Installation environments
+ localurl: /docs/installation/installation-options/
+ - title: Codefresh Runner installation
+ localurl: /docs/installation/codefresh-runner/
+ - title: Hosted GitOps Runtime installation
+ localurl: /docs/installation/hosted-runtime/
+ - title: Hybrid GitOps Runtime installation
+ localurl: /docs/installation/hybrid-gitops/
+ - title: On-Premises installation
+ localurl: /docs/installation/codefresh-on-prem/
+ - title: On-Premises upgrade
+ localurl: /docs/installation/codefresh-on-prem-upgrade/
+ - title: Monitoring & managing GitOps Runtimes
+ localurl: /docs/installation/monitor-manage-runtimes/
+ - title: Adding external clusters to GitOps Runtimes
+ localurl: /docs/installation/managed-cluster/
+ - title: Adding Git Sources to GitOps Runtimes
+ localurl: /docs/installation/git-sources/
- title: Administration
icon: images/home-icons/administration.svg
@@ -30,6 +60,8 @@
icon: images/home-icons/guides.png
url: ''
links:
+ - title: Runner installation behind firewalls
+ url: /docs/behind-the-firewall/
- title: Git tokens
localurl: /docs/reference/git-tokens/
- title: Secrets
diff --git a/_data/nav.yml b/_data/nav.yml
index c3cfac60..9f312db6 100644
--- a/_data/nav.yml
+++ b/_data/nav.yml
@@ -1,5 +1,38 @@
+
+- title: Clients
+ url: "/clients"
+ pages:
+ - title: Download CLI
+ url: "/csdp-cli"
+
+
+- title: Installation
+ url: "/installation"
+ pages:
+ - title: Installation environments
+ url: "/installation-options"
+ - title: Runtime architectures
+ url: "/runtime-architecture"
+ - title: Codefresh Runner installation
+ url: "/codefresh-runner"
+ - title: Hosted GitOps Runtime installation
+ url: "/hosted-runtime"
+ - title: Hybrid GitOps Runtime installation
+ url: "/hybrid-gitops"
+ - title: On-Premises installation
+ url: "/codefresh-on-prem"
+ - title: On-Premises upgrade
+ url: "/codefresh-on-prem-upgrade"
+ - title: Monitoring & managing GitOps Runtimes
+ url: "/monitor-manage-runtimes"
+ - title: Add external clusters to GitOps Runtimes
+ url: "/managed-cluster"
+ - title: Add Git Sources to GitOps Runtimes
+ url: "/git-sources"
+
+
- title: Administration
url: "/administration"
pages:
@@ -59,10 +92,6 @@
- title: Common configuration
url: /team-sync
-
-
-
-
- title: Reference
url: "/reference"
@@ -75,3 +104,15 @@
url: "/shared-configuration"
+
+
+- title: Terms and Privacy Policy
+ url: "/terms-and-privacy-policy"
+ pages:
+ - title: Terms of Service
+ url: "/terms-of-service"
+ - title: Privacy Policy
+ url: "/privacy-policy"
+ - title: Service Commitment
+ url: "/sla"
+
diff --git a/_docs/installation/codefresh-on-prem-upgrade.md b/_docs/installation/codefresh-on-prem-upgrade.md
new file mode 100644
index 00000000..335b3075
--- /dev/null
+++ b/_docs/installation/codefresh-on-prem-upgrade.md
@@ -0,0 +1,575 @@
+---
+title: "Codefresh On-Premises Upgrade"
+description: "Use the Kubernetes Codefresh Installer to upgrade your Codefresh On-Premises platform "
+group: installation
+redirect_from:
+ - /docs/enterprise/codefresh-on-prem-upgrade/
+toc: true
+---
+Upgrade the Codefresh On-premises platform to the latest version:
+* Prepare for the upgrade: _Before_ the upgrade, based on the version you are upgrading to, complete the required tasks
+* Upgrade On-premises
+* Complete post-upgrade configuration: If needed, also based on the version you are upgrading to, complete the required tasks
+
+
+## Upgrade to 1.1.1
+Prepare for the upgrade to v1.1.1 by performing the tasks listed below.
+
+### Maintain backward compatibility for infrastructure services
+If you have Codefresh version 1.0.202 or lower installed, and are upgrading to v1.1.1, to retain the existing images for the services listed below, update the `config.yaml` for `kcfi`.
+
+* `cf-mongodb`
+* `cf-redis`
+* `cf-rabbitmq`
+* `cf-postgresql`
+* `cf-nats`
+* `cf-consul`
+
+> In the `config.yaml`, as in the example below, if needed, replace the `bitnami` prefix with that of your private repo.
+
+```yaml
+...
+
+global:
+ ### Codefresh App domain name. appUrl is a mandatory parameter
+ appUrl: onprem.mydomain.com
+ appProtocol: https
+
+ mongodbImage: bitnami/mongodb:3.6.13-r0 # (default `mongodbImage: bitnami/mongodb:4.2`)
+
+mongodb:
+ image: bitnami/mongodb:3.6.13-r0 # (default `image: bitnami/mongodb:4.2`)
+ podSecurityContext:
+ enabled: true
+ runAsUser: 0
+ fsGroup: 0
+ containerSecurityContext:
+ enabled: false
+
+redis:
+ image: bitnami/redis:3.2.9-r2 # (default `image: bitnami/redis:6.0.16`)
+ podSecurityContext:
+ enabled: false
+ containerSecurityContext:
+ enabled: false
+
+postgresql:
+ imageTag: 9.6.2 # (default `imageTag:13`)
+
+nats:
+ imageTag: 0.9.4 # (default `imageTag:2.7`)
+
+consul:
+ ImageTag: 1.0.0 # (default `imageTag:1.11`)
+...
+```
+## Upgrade to 1.2.0 and higher
+This major release **deprecates** the following Codefresh managed charts:
+* Ingress
+* Rabbitmq
+* Redis
+
+See the instructions below for each of the affected charts.
+
+> Before the upgrade, remove any seed jobs left over from the previous release with:
+ `kubectl delete job --namespace ${CF_NAMESPACE} -l release=cf`
+
+> Before the upgrade, remove the PDBs for Redis and RabbitMQ left over from the previous release with:
+ `kubectl delete pdb cf-rabbitmq --namespace ${CF_NAMESPACE}`
+ `kubectl delete pdb cf-redis --namespace ${CF_NAMESPACE}`
+
+### Update configuration for Ingress chart
+From version **1.2.0 and higher**, we have deprecated support for `Codefresh-managed-ingress`.
+The Kubernetes community public `ingress-nginx` chart replaces the `Codefresh-managed-ingress` chart. For more information on `ingress-nginx`, see [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx).
+
+> Parameter locations have changed as the ingress chart name was changed from `ingress` to `ingress-nginx`:
+ **NGINX controller** parameters are now defined under `ingress-nginx`
+ **Ingress object** parameters are now defined under `ingress`
+
+You must update `config.yaml`, if you are using:
+* External ingress controllers, including ALB (Application Load Balancer)
+* Codefresh-managed ingress controller with _custom_ values
+
+#### Update configuration for external ingress controllers
+
+For external ingress controllers, including ALB (Application Load Balancer), update the relevant sections in `config.yaml` to align with the new name for the ingress chart:
+
+* Replace `ingress` with `ingress-nginx`
+
+*v1.1.1 or lower*
+```yaml
+ingress: #disables creation of both Nginx controller deployment and Ingress objects
+ enabled: false
+```
+
+*v1.2.2 or higher*
+```yaml
+ingress-nginx: #disables creation of Nginx controller deployment
+ enabled: false
+
+ingress: #disables creation of Ingress objects (assuming you've manually created ingress resource before)
+ enabled: false
+```
+
+* Replace `annotations` that have been deprecated with `ingressClassName`
+
+*v1.1.1 or lower*
+```yaml
+ingress:
+ annotations:
+ kubernetes.io/ingress.class: my-non-codefresh-nginx
+```
+
+*v1.2.2 or higher*
+```yaml
+ingress-nginx:
+ enabled: false
+
+ingress:
+ ingressClassName: my-non-codefresh-nginx
+### `kubernetes.io/ingress.class` annotation is deprecated from kubernetes v1.22+.
+# annotations:
+# kubernetes.io/ingress.class: my-non-codefresh-nginx
+```
+
+#### Update configuration for Codefresh-managed ingress with custom values
+
+If you were running `Codefresh-managed ingress` controller with _custom_ values refer to [values.yaml](https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml) from the official repo. If needed, update the `ingress-nginx` section in `config.yaml`. The example below shows the default values (already provided in Codefresh chart) for `ingress-nginx`:
+
+```yaml
+ingress-nginx:
+ enabled: true
+ controller:
+ ## This section refers to the creation of the IngressClass resource
+ ## IngressClass resources are supported since k8s >= 1.18 and required since k8s >= 1.19
+ ingressClassResource:
+ # -- Is this ingressClass enabled or not
+ enabled: true
+ # -- Is this the default ingressClass for the cluster
+ default: false
+ # -- Controller-value of the controller that is processing this ingressClass
+ controllerValue: "k8s.io/ingress-nginx-codefresh"
+ # -- Name of the ingressClass
+ name: nginx-codefresh
+ # -- For backwards compatibility with ingress.class annotation.
+ # Algorithm is as follows, first ingressClassName is considered, if not present, controller looks for ingress.class annotation
+ ingressClass: nginx-codefresh
+ # -- Process IngressClass per name (additionally as per spec.controller).
+ ingressClassByName: true
+ # Limit the scope of the controller to a specific namespace
+ scope:
+ # -- Enable 'scope' or not
+ enabled: true
+ admissionWebhooks:
+ enabled: false
+```
+> The new `ingress-nginx` subchart creates a new `cf-ingress-nginx-controller` service (`type: LoadBalancer`) instead of the old `cf-ingress-controller` service, so make sure to update the DNS record for `global.appUrl` to point to the new external load balancer IP.
+ You can get the external load balancer IP with:
+ `kubectl get svc cf-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`
+
+
+### Update configuration for RabbitMQ chart
+From version **1.2.2 and higher**, we have deprecated support for the `Codefresh-managed Rabbitmq` chart. The Bitnami public `bitnami/rabbitmq` chart has replaced it. For more information, see [bitnami/rabbitmq](https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq).
+
+> Configuration updates are not required if you are running an **external** RabbitMQ service.
+
+> The RabbitMQ chart was replaced, so the values structure might differ for some parameters.
+ For the complete list of values, see [values.yaml](https://github.com/bitnami/charts/blob/master/bitnami/rabbitmq/values.yaml).
+
+**`existingPvc` changed to `existingClaim` and defined under `persistence`**
+
+*v1.1.1 or lower*
+```yaml
+rabbitmq:
+ existingPvc: my-rabbitmq-pvc
+ nodeSelector:
+ foo: bar
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ tolerations:
+ - effect: NoSchedule
+ key:
+ operator: Equal
+ value:
+```
+
+*v1.2.2 or higher*
+```yaml
+rabbitmq:
+ volumePermissions: ## Enable init container that changes the owner and group of the persistent volume from existing claim
+ enabled: true
+ persistence:
+ existingClaim: my-rabbitmq-pvc
+ nodeSelector:
+ foo: bar
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ tolerations:
+ - effect: NoSchedule
+ key:
+ operator: Equal
+ value:
+```
+
+**`storageClass` and `size` defined under `persistence`**
+
+*v1.1.1 or lower*
+```yaml
+rabbitmq:
+ storageClass: my-storage-class
+ storageSize: 32Gi
+```
+
+*v1.2.2 or higher*
+```yaml
+rabbitmq:
+ persistence:
+ storageClass: my-storage-class
+ size: 32Gi
+```
+
+### Update configuration for Redis chart
+From version **1.2.2 and higher**, we have deprecated support for the `Codefresh-managed Redis` chart. Bitnami public `bitnami/redis` chart has replaced the `Codefresh-managed Redis` chart. For more information, see [bitnami/redis](https://github.com/bitnami/charts/tree/master/bitnami/redis).
+
+Redis storage contains **CRON and Registry** typed triggers so you must migrate existing data from the old deployment to the new stateful set.
+This is done by backing up the existing data before upgrade, and then restoring the backed up data after upgrade.
+
+> Configuration updates are not required:
+ * When running an **external** Redis service.
+ * If CRON and Registry triggers have not been configured.
+
+#### Verify existing Redis data for CRON and Registry triggers
+Check if you have CRON and Registry triggers configured in Redis.
+
+* Run `codefresh get triggers`
+ OR
+ Check directly in the K8s cluster where Codefresh is installed:
+
+```shell
+NAMESPACE=codefresh
+REDIS_PASSWORD=$(kubectl get secret --namespace $NAMESPACE cf-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
+
+kubectl exec -it deploy/cf-redis -- env REDIS_PASSWORD=$REDIS_PASSWORD bash
+#once inside cf-redis pod
+REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli
+info keyspace # list db
+select 15 # select db 15
+keys * #show keys
+```
+
+* If there are results, continue with _Back up existing Redis data_.
+
+#### Back up existing Redis data
+Back up the existing data before the upgrade:
+
+* Connect to the pod, run `redis-cli`, export AOF data from old `cf-redis-*` pod:
+
+```shell
+NAMESPACE=codefresh
+REDIS_PASSWORD=$(kubectl get secret --namespace $NAMESPACE cf-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
+REDIS_POD=$(kubectl get pods -l app=cf-redis -o custom-columns=:metadata.name --no-headers=true)
+kubectl cp $REDIS_POD:/bitnami/redis/data/appendonly.aof appendonly.aof -c cf-redis
+```
+
+#### Restore backed-up Redis data
+Restore the data after the upgrade:
+
+* Copy `appendonly.aof` to the new `cf-redis-master-0` pod:
+
+ ```shell
+ kubectl cp appendonly.aof cf-redis-master-0:/data/appendonly.aof
+ ```
+* Restart `cf-redis-master-0` and `cf-api` pods:
+
+ ```shell
+ kubectl delete pod cf-redis-master-0
+
+ kubectl scale deployment cf-cfapi-base --replicas=0 -n codefresh
+ kubectl scale deployment cf-cfapi-base --replicas=2 -n codefresh
+ ```
+
+> The Redis chart was replaced, so the values structure might differ for some parameters.
+ For the complete list of values, see [values.yaml](https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml).
+
+**`existingPvc` changed to `existingClaim` and defined under `master.persistence`**
+
+*v1.1.1 or lower*
+```yaml
+redis:
+ existingPvc: my-redis-pvc
+ nodeSelector:
+ foo: bar
+ resources:
+ limits:
+ cpu: 1000m
+ memory: 1Gi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ tolerations:
+ - effect: NoSchedule
+ key:
+ operator: Equal
+ value:
+```
+
+*v1.2.2 or higher*
+```yaml
+redis:
+ volumePermissions: ## Enable init container that changes the owner and group of the persistent volume from existing claim
+ enabled: true
+ master:
+ persistence:
+ existingClaim: my-redis-pvc
+ nodeSelector:
+ foo: bar
+ resources:
+ limits:
+ cpu: 1000m
+ memory: 1Gi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ tolerations:
+ - effect: NoSchedule
+ key:
+ operator: Equal
+ value:
+```
+
+**`storageClass` and `size` defined under `master.persistence`**
+
+
+*v1.1.1 or lower*
+```yaml
+redis:
+ storageClass: my-storage-class
+ storageSize: 32Gi
+```
+
+*v1.2.2 or higher*
+```yaml
+redis:
+ master:
+ persistence:
+ storageClass: my-storage-class
+ size: 32Gi
+```
+
+> If you run the upgrade without the Redis backup and restore procedure, the **Helm Releases Dashboard** page might be empty for a few minutes after the upgrade.
+
+## Upgrade to 1.3.0 and higher
+This major release **deprecates** the following Codefresh managed charts:
+* Consul
+* Nats
+
+### Update configuration for Consul
+From version **1.3.0 and higher**, we have deprecated the Codefresh-managed `consul` chart, in favor of Bitnami public `bitnami/consul` chart. For more information, see [bitnami/consul](https://github.com/bitnami/charts/tree/master/bitnami/consul).
+
+Consul storage contains data about **Windows** worker nodes, so if you have any Windows nodes connected to your on-prem installation, follow the instructions below.
+
+> Use `https://<your-onprem-domain>/admin/nodes` to check for any existing Windows nodes.
+
+#### Back up existing consul data
+_Before starting the upgrade_, back up existing data.
+
+> Because `cf-consul` is a StatefulSet and has some immutable fields in its spec with both old and new charts having the same names, you cannot perform a direct upgrade.
+ Direct upgrade will most likely fail with:
+ `helm.go:84: [debug] cannot patch "cf-consul" with kind StatefulSet: StatefulSet.apps "cf-consul" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy' and 'minReadySeconds' are forbidden`
+ After backing up existing data, you must delete the old StatefulSet.
+
+
+1. Exec into the consul pod and create a snapshot:
+```shell
+kubectl exec -it cf-consul-0 -n codefresh -- consul snapshot save backup.snap
+```
+1. Copy snapshot locally:
+```shell
+kubectl cp -n codefresh cf-consul-0:backup.snap backup.snap
+```
+1. **Delete the old** `cf-consul` stateful set:
+
+```shell
+kubectl delete statefulset cf-consul -n codefresh
+```
+
+#### Restore backed up data
+
+After completing the upgrade to the current version, restore the `consul` data that you backed up.
+
+1. Copy the snapshot back to the new pod:
+
+```shell
+kubectl cp -n codefresh backup.snap cf-consul-0:/tmp/backup.snap
+```
+1. Restore the data:
+```
+kubectl exec -it cf-consul-0 -n codefresh -- consul snapshot restore /tmp/backup.snap
+```
+> The Consul chart was replaced, so the values structure might differ for some parameters.
+ For the complete list of values, see [values.yaml](https://github.com/bitnami/charts/blob/master/bitnami/consul/values.yaml).
+
+
+### Update Nats configuration
+From version **1.3.0 and higher**, we have deprecated the Codefresh-managed `nats` chart in favor of the Bitnami public `bitnami/nats` chart. For more information, see [bitnami/nats](https://github.com/bitnami/charts/tree/master/bitnami/nats).
+
+> Because `cf-nats` is a StatefulSet and has some immutable fields in its spec, with both old and new charts having the same names, you cannot perform a direct upgrade.
+ Direct upgrade will most likely fail with:
+ `helm.go:84: [debug] cannot patch "cf-nats" with kind StatefulSet: StatefulSet.apps "cf-nats" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy' and 'minReadySeconds' are forbidden`
+ After backing up existing data, you must delete the old StatefulSet.
+
+
+* **Delete the old** `cf-nats` stateful set.
+
+```shell
+kubectl delete statefulset cf-nats -n codefresh
+```
+
+> The Nats chart was replaced, so the values structure might differ for some parameters.
+ For the complete list of values, see [values.yaml](https://github.com/bitnami/charts/blob/master/bitnami/nats/values.yaml).
+
+## Upgrade to 1.3.1 and higher
+
+Chart **v1.3.1** fixes the duplicated env vars `CLUSTER_PROVIDERS_URI` and `CLUSTER_PROVIDERS_PORT` in the `cf-api` deployment, which previously produced warnings like:
+```
+W1010 03:03:55.553842 280 warnings.go:70] spec.template.spec.containers[0].env[94].name: duplicate name "CLUSTER_PROVIDERS_URI"
+W1010 03:03:55.553858 280 warnings.go:70] spec.template.spec.containers[0].env[95].name: duplicate name "CLUSTER_PROVIDERS_PORT"
+```
+
+
+> Due to the Helm issue [Removal of duplicate array entry removes completely from Kubernetes](https://github.com/helm/helm/issues/10741), you should run `kcfi deploy` or `helm upgrade` twice consecutively.
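+
+For example, a minimal sketch of the workaround, reusing the `kcfi deploy` invocation shown later in this article:
+
+```shell
+# Per the note above, run the same upgrade command twice consecutively
+kcfi deploy --debug -c codefresh/config.yaml
+kcfi deploy --debug -c codefresh/config.yaml
+```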
+
+
+With chart **v1.3.1**, the [insecure registry](https://docs.docker.com/registry/insecure/) property has been moved under the `builder` section:
+
+```yaml
+builder:
+ insecureRegistries:
+ - "myregistrydomain.com:5000"
+```
+
+## Upgrade the Codefresh Platform with [kcfi](https://github.com/codefresh-io/kcfi)
+
+1. Locate the `config.yaml` file you used in the initial installation.
+1. Change the release number inside it.
+ ```yaml
+ metadata:
+ kind: codefresh
+ installer:
+ type: helm
+ helm:
+ chart: codefresh
+ repoUrl: https://chartmuseum.codefresh.io/codefresh
+ version: 1.2.14
+ ```
+1. Perform a dry run and verify that there are no errors:
+ `kcfi deploy --dry-run --debug -c codefresh/config.yaml`
+1. Run the actual upgrade:
+ `kcfi deploy --debug -c codefresh/config.yaml`
+1. Verify that all the pods are in running state:
+ `kubectl -n codefresh get pods --watch`
+1. Log in to the Codefresh UI, and check the new version.
+1. If needed, enable/disable new feature flags.
+
+## Codefresh with Private Registry
+
+If you install/upgrade Codefresh in an air-gapped environment (without access to public registries or the Codefresh Enterprise registry), you will have to copy the images to your organization's container registry.
+
+**Obtain the [image list](https://github.com/codefresh-io/onprem-images/tree/master/releases) for the specific release**
+
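+For example, a hypothetical fetch of the list with `curl` (assuming the list for your release is published as a plain file named `images-list-v1.2.14` under `releases/` in that repo; check the repo for the exact file name):
+
+```shell
+curl -fsSL \
+  https://raw.githubusercontent.com/codefresh-io/onprem-images/master/releases/images-list-v1.2.14 \
+  -o images-list-v1.2.14
+```
+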
+**Push images to private docker registry**
+
+There are three types of images:
+
+> In the examples below, `localhost:5000` stands for your private registry address.
+
+- non-Codefresh like:
+```
+bitnami/mongo:4.2
+k8s.gcr.io/ingress-nginx/controller:v1.2.0
+postgres:13
+```
+convert to:
+```
+localhost:5000/bitnami/mongodb:4.2
+localhost:5000/ingress-nginx/controller:v1.2.0
+localhost:5000/postgres:13
+```
+- Codefresh public images like:
+```
+quay.io/codefresh/dind:20.10.13-1.25.2
+quay.io/codefresh/engine:1.147.8
+quay.io/codefresh/cf-docker-builder:1.1.14
+```
+convert to:
+```
+localhost:5000/codefresh/dind:20.10.13-1.25.2
+localhost:5000/codefresh/engine:1.147.8
+localhost:5000/codefresh/cf-docker-builder:1.1.14
+```
+- Codefresh private images like:
+```
+gcr.io/codefresh-enterprise/codefresh/cf-api:21.153.6
+gcr.io/codefresh-enterprise/codefresh/cf-ui:14.69.38
+gcr.io/codefresh-enterprise/codefresh/pipeline-manager:3.121.7
+```
+convert to:
+```
+localhost:5000/codefresh/cf-api:21.153.6
+localhost:5000/codefresh/cf-ui:14.69.38
+localhost:5000/codefresh/pipeline-manager:3.121.7
+```
+> The DELIMITERS for the path conversion are `codefresh` OR `codefresh-io`.
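+
+For reference, a minimal sketch of mirroring a single image by hand with the standard `docker` CLI (using one of the example images above; logging in to both registries beforehand is assumed):
+
+```shell
+# Pull from the public source, retag for the private registry, and push
+docker pull quay.io/codefresh/engine:1.147.8
+docker tag quay.io/codefresh/engine:1.147.8 localhost:5000/codefresh/engine:1.147.8
+docker push localhost:5000/codefresh/engine:1.147.8
+```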
+
+- To push images via [kcfi](https://github.com/codefresh-io/kcfi) (ver. **0.5.15** is required) use:
+
+`kcfi images push --help`
+
+> Prerequisite: the `sa.json` file to access the Codefresh Enterprise GCR
+
+`kcfi images push --codefresh-registry-secret sa.json --images-list images-list-v1.2.14 --registry localhost:5000 --user "root" --password "root"`
+
+- To push images via [push-to-registry.sh](https://github.com/codefresh-io/onprem-images/blob/master/push-to-registry.sh) script use (see [prerequisites](https://github.com/codefresh-io/onprem-images#prerequesites)):
+
+`./push-to-registry.sh localhost:5000 v1.2.14`
+
+#### Install/Upgrade Codefresh with private docker registry config
+
+Set `usePrivateRegistry: true`, and set the `privateRegistry` address, username, and password in `config.yaml`.
+
+For Bitnami helm charts ([consul](https://github.com/bitnami/charts/blob/main/bitnami/consul/values.yaml), [nats](https://github.com/bitnami/charts/blob/main/bitnami/nats/values.yaml), [redis](https://github.com/bitnami/charts/blob/main/bitnami/redis/values.yaml), [rabbitmq](https://github.com/bitnami/charts/blob/main/bitnami/rabbitmq/values.yaml)), define `global.imageRegistry`.
+
+For [ingress-nginx](https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml) chart define `ingress-nginx.controller.image.registry`.
+
+
+`config.yaml`
+
+```yaml
+global:
+ imageRegistry: myregistry.domain.com
+
+ingress-nginx:
+ controller:
+ image:
+ registry: myregistry.domain.com
+
+images:
+ codefreshRegistrySa: sa.json
+ usePrivateRegistry: true
+ privateRegistry:
+ address: myregistry.domain.com
+ username:
+ password:
+```
+
\ No newline at end of file
diff --git a/_docs/installation/codefresh-on-prem.md b/_docs/installation/codefresh-on-prem.md
new file mode 100644
index 00000000..5aed0afd
--- /dev/null
+++ b/_docs/installation/codefresh-on-prem.md
@@ -0,0 +1,1237 @@
+---
+title: "Codefresh On-Prem Installation & Configuration"
+description: "Use the Kubernetes Codefresh Installer to install the Codefresh On-Premises platform "
+group: installation
+redirect_from:
+ - /docs/enterprise/codefresh-on-prem/
+toc: true
+---
+
+
+This article guides you through the installation of the Codefresh platform in your on-prem environment, covering all aspects of installation and configuration. Read it carefully before installing Codefresh.
+
+[kcfi](https://github.com/codefresh-io/kcfi) (the Kubernetes Codefresh Installer) is a one-stop-shop for this purpose. Even though Codefresh offers multiple tools to install components, `kcfi` aggregates all of them into a single tool.
+
+## Survey: What Codefresh needs to know
+
+Fill out this survey before the installation to make sure your on-prem environment is ready for deployment:
+
+[Survey](https://docs.google.com/forms/d/e/1FAIpQLSf18sfG4bEQuwMT7p11F6q70JzWgHEgoAfSFlQuTnno5Rw3GQ/viewform)
+
+## On-prem system requirements
+
+{: .table .table-bordered .table-hover}
+| Item | Requirement |
+| -------------- | -------------- |
+|Kubernetes cluster | Server versions v1.19 through v1.22. {::nomarkdown}<br><b>Note:</b> Maintenance support for Kubernetes v1.19 ended on Oct 28, 2021.{:/}|
+|Operating systems|{::nomarkdown}<ul><li>Windows 10/7</li><li>Linux</li><li>OSX</li></ul>{:/}|
+|Node requirements| |
+|Git providers |{::nomarkdown}<ul><li>GitHub: SaaS and on-premises versions</li><li>Bitbucket: SaaS and Bitbucket Server (on-premises) version 5.4.0 and above</li><li>GitLab: SaaS and on-premises versions (API v4 only)</li></ul>{:/}|
+|Node size | {::nomarkdown}<ul><li>Single node: 8 CPU cores and 16GB RAM</li><li>Multi node: master(s) + 3 nodes with 4 CPU cores and 8GB RAM each (24GB in total)</li></ul>{:/}|
+
+
+
+## Prerequisites
+
+### Service Account file
+The GCR Service Account JSON file, `sa.json`, is provided by Codefresh. Contact support to get the file before installation.
+
+### Default app credentials
+These are also provided by Codefresh. Contact support to get the credentials before installation.
+
+### TLS certificates
+For a secured installation, we highly recommend using TLS certificates. Make sure your `ssl.cert` and `private.key` are valid.
+
+> Use a corporate-signed certificate, or any valid TLS certificate, for example, from Let's Encrypt.
+
+### Internet connections
+We require outbound internet connections for these services:
+* GCR to pull platform images
+* Docker Hub to pull pipeline images
+
+
+## Security Constraints
+
+Codefresh has some security assumptions about the Kubernetes cluster it is installed on.
+
+### RBAC for Codefresh
+
+The Codefresh installer should be run with a Kubernetes RBAC role that allows object creation in a single namespace. If, by corporate policy, you do not allow the creation of service accounts or roles, a Kubernetes administrator will need to create the role, service account, and binding as shown below.
+
+>Users with the `codefresh-app` role cannot create other roles or role bindings.
+
+`codefresh-app-service-account.yaml`
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: codefresh-app
+ namespace: codefresh
+```
+
+`codefresh-app-role.yaml`
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: codefresh-app
+ namespace: codefresh
+rules:
+- apiGroups:
+ - ""
+ - apps
+ - codefresh.io
+ - autoscaling
+ - extensions
+ - batch
+ resources:
+ - '*'
+ verbs:
+ - '*'
+- apiGroups:
+ - networking.k8s.io
+ - route.openshift.io
+ - policy
+ resources:
+ - routes
+ - ingresses
+ - poddisruptionbudgets
+ verbs:
+ - '*'
+```
+
+`codefresh-app-roleBinding.yaml`
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ labels:
+ app: codefresh
+ name: codefresh-app-binding
+ namespace: codefresh
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: codefresh-app
+subjects:
+- kind: ServiceAccount
+ name: codefresh-app
+```
+
+To apply these changes, run:
+
+```
+kubectl apply -f codefresh-app-service-account.yaml
+kubectl apply -f codefresh-app-role.yaml
+kubectl apply -f codefresh-app-roleBinding.yaml
+```
+
+### Operator CRD
+
+If, due to security rules, you are not allowed to create a CRD for a client running `kcfi`, have an administrator create the RBAC (as instructed above) and the CRD, as follows:
+
+`codefresh-crd.yaml`
+```yaml
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: codefreshes.codefresh.io
+ labels:
+ app: cf-onprem-operator
+spec:
+ group: codefresh.io
+ names:
+ kind: Codefresh
+ listKind: CodefreshList
+ plural: codefreshes
+ singular: codefresh
+ scope: Namespaced
+ subresources:
+ status: {}
+ versions:
+ - name: v1alpha1
+ served: true
+ storage: true
+```
+
+To apply these changes, run:
+```
+kubectl apply -f codefresh-crd.yaml
+```
+
+You will also need to modify the `config.yaml` for `kcfi` by setting `skipCRD: true` and `serviceAccountName: codefresh-app`:
+
+`config.yaml`
+```yaml
+ operator:
+ #dockerRegistry: gcr.io/codefresh-enterprise
+ #image: codefresh/cf-onprem-operator
+ #imageTag:
+ serviceAccountName: codefresh-app
+ skipCRD: true
+```
+
+## Install the Codefresh Platform
+
+### Before you begin
+
+### Step 1: Download and extract `kcfi`
+Download the binary for `kcfi`. It is a single binary without dependencies.
+
+1. Download the binary from [GitHub](https://github.com/codefresh-io/kcfi/releases){:target="\_blank"}.
+ >Note: Darwin is for OSX
+1. Extract the downloaded file.
+1. Copy the file to a directory in your `$PATH`: `cp /path/to/kcfi /usr/local/bin`
+
+### Step 2: Set the current context
+* Make sure you have a `kubeconfig` file with the correct context, as in this example:
+
+```
+kubectl config get-contexts # display list of contexts
+kubectl config use-context my-cluster-name # set the default context to my-cluster-name
+kubectl config current-context # verify the current-context
+```
+### Step 3: Initialize and configure `config.yaml`
+Prepare the platform for installation by initializing the staging directory with `config.yaml`. Then edit `config.yaml` to configure all installation settings, including the required files and directories, and deploy to Kubernetes.
+
+The `config.yaml` includes descriptions for every parameter.
+
+1. Create the directory with the `config.yaml`:
+
+```
+kcfi init codefresh [-d /path/to/stage-dir]
+```
+1. Below `installer`, define your installation method as either Helm or Codefresh CRD:
+
+```yaml
+ installer:
+ # type:
+ # "operator" - apply codefresh crd definition
+ # "helm" - install/upgrade helm chart from client
+```
+1. If you are installing Codefresh in an air-gapped environment (without access to public Docker Hub or codefresh-enterprise registry), copy the images to your organization container registry (Kubernetes will pull the images from it as part of the installation).
+
+ 1. Set `usePrivateRegistry` to `true`.
+ 1. Define `privateRegistry` `address`, `username` and `password`.
+
+
+```yaml
+images:
+ codefreshRegistrySa: sa.json
+ # usePrivateRegistry: false
+ # privateRegistry:
+ # address:
+ # username:
+ # password:
+ lists:
+ - images/images-list
+```
+1. Push all or a single image:
+ * All images:
+ ```
+ kcfi images push [-c|--config /path/to/config.yaml]
+ ```
+ * Single image:
+ ```
+ kcfi images push [-c|--config /path/to/config.yaml] [options] repo/image:tag [repo/image:tag]
+ ```
+
+ > To get the full list of options, run `kcfi images --help`.
+
+ >Even if you are running a Kubernetes cluster with outgoing access to the public internet, note that Codefresh platform images are not public and can be obtained by using `sa.json` file provided by Codefresh support personnel.
+ Use the flag `--codefresh-registry-secret` to pass the path to the file `sa.json`.
+
+### Step 4: (Optional) Configure TLS certificates
+If you are using TLS, enable it in `config.yaml`.
+
+1. Set `tls.selfSigned: false`.
+1. Place both `ssl.crt` and `private.key` into the `certs/` directory.
+
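+A minimal sketch of the resulting `config.yaml` section (the same `tls` structure appears in the load-balancer examples later in this article; the file names under `certs/` are assumptions):
+
+```yaml
+tls:
+  selfSigned: false
+  cert: certs/ssl.crt     # certificate placed in the certs/ directory
+  key: certs/private.key  # private key placed in the certs/ directory
+```
+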
+### Step 5: Deploy On-premises platform
+
+1. Run:
+
+```
+kcfi deploy [ -c config.yaml ] [ --kube-context ] [ --atomic ] [ --debug ] [ helm upgrade parameters ]
+```
+### Step 6: Install the Codefresh Kubernetes Agent
+
+Install the `cf-k8s-agent` on a cluster separate from the installer, or in a different namespace on the same cluster.
+The `cf-k8s-agent` accesses Kubernetes resources (pods, deployments, services, etc.) behind the firewall to display them in the Codefresh UI. The agent streams updates from cluster resources and then sends information updates to the `k8s-monitor` service.
+
+1. Create a staging directory for the agent:
+
+```
+kcfi init k8s-agent
+```
+ A staging directory named `k8s-agent` is created, containing a `config.yaml`.
+1. Edit `k8s-agent/config.yaml`, and set the values required for your environment.
+
+1. Run:
+
+```
+kcfi deploy [ -c config.yaml ] [-n namespace]
+```
+ where:
+ `namespace` is the namespace in which to install the agent, when installing it on the same cluster.
+
+
+
+
+## High-Availability (HA) with active-passive clusters
+Enable high-availability in the Codefresh platform for disaster recovery with an active-passive cluster configuration.
+Review the prerequisites, and then do the following to configure high-availability:
+* For new installations, install Codefresh on the active cluster
+* Install Codefresh on the passive cluster
+* When needed, switch between clusters for disaster recovery
+
+### Prerequisites
+
+* **K8s clusters**
+ Two K8s clusters, one designated as the active cluster, and the other designated as the passive cluster for disaster recovery.
+
+* **External databases and services**
+ Databases and services external to the clusters.
+
+ * Postgres database (see [Configuring an external Postgres database](#configuring-an-external-postgres-database))
+ * MongoDB (see [Configuring an external MongoDB](#configuring-an-external-mongodb))
+ * Redis service (see [Configuring an external Redis service](#configure-an-external-redis-service))
+ * RabbitMQ service (see [Configuring an external RabbitMQ service](#configure-an-external-rabbitmq-service))
+ * Consul service (see [Configuring an external Consul service](#configuring-an-external-consul-service))
+
+* **DNS record**
+ To switch between clusters for disaster recovery
+
+### Install Codefresh on active cluster
+
+If you are installing Codefresh for the first time, install Codefresh on the cluster designated as the _active_ cluster.
+See [Installing the Codefresh platform]({{site.baseurl}}/docs/administration/codefresh-on-prem/#install-the-codefresh-platform).
+
+### Install Codefresh on passive cluster
+
+First get the `values.yaml` file from the current Codefresh installation on the active cluster. Then install Codefresh on the passive cluster using Helm.
+
+**1. Get values.yaml**
+1. Switch your kube context to the active cluster.
+1. Get `values.yaml` from the active cluster:
+ `helm get values ${release_name} -n ${namespace} > cf-passive-values.yaml`
+ where:
+ `${release_name}` is the name of the Codefresh release, by default `cf`.
+ `${namespace}` is the namespace with the Codefresh release, by default `codefresh`.
+
+{:start="3"}
+1. Update the required variables in `cf-passive-values.yaml`.
+ > If the variables do not exist, add them to the file.
+
+ * In the `global` section, disable `seedJobs` by setting it to `false`:
+
+ ```yaml
+ global:
+ seedJobs: false
+ ```
+
+ * Add variable `FREEZE_WORKFLOWS_EXECUTION` to `cfapi`, and set it to `true`.
+
+ ```yaml
+ cfapi:
+ env:
+ FREEZE_WORKFLOWS_EXECUTION: true
+ ```
+
+**2. Install Codefresh on passive cluster**
+
+1. Download the Helm chart:
+ `helm repo add codefresh-onprem https://chartmuseum.codefresh.io/codefresh`
+ `helm fetch codefresh-onprem/codefresh --version ${release-version}`
+ where:
+ `${release-version}` is the version of Codefresh you are downloading.
+
+1. Unzip the Helm chart:
+ `tar -xzf codefresh-${release-version}.tgz`
+1. Go to the folder where you unzipped the Helm chart.
+1. Install Codefresh with the Helm command using `cf-passive-values.yaml`:
+ `helm install cf . -f ${path}/cf-passive-values.yaml -n codefresh`
+
+
+### Switch between clusters for disaster recovery
+
+For disaster recovery, switch between the active and passive clusters.
+
+1. In the `cfapi` deployment on the _active_ cluster, change the value of `FREEZE_WORKFLOWS_EXECUTION` from `false` to `true`.
+ If the variable does not exist, add it, and make sure the value is set to `true`.
+1. In the `cfapi` deployment on the _passive_ cluster, change the value of `FREEZE_WORKFLOWS_EXECUTION` from `true` to `false`, as shown in the sketch after this list.
+1. Switch DNS from the currently active cluster to the passive cluster.
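+
+A minimal sketch of toggling the variable with `kubectl`, assuming the `cf-cfapi-base` deployment name used earlier in this article and the default `codefresh` namespace:
+
+```shell
+# On the currently active cluster: freeze workflow execution
+kubectl set env deployment/cf-cfapi-base FREEZE_WORKFLOWS_EXECUTION=true -n codefresh
+
+# On the passive cluster that takes over: unfreeze workflow execution
+kubectl set env deployment/cf-cfapi-base FREEZE_WORKFLOWS_EXECUTION=false -n codefresh
+```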
+
+### Services without HA
+
+The following services cannot run in HA, but are not critical in case of downtime or during the switchover from active to passive.
+These services are not considered critical because they are part of build handling; in case of failure, a build retry occurs, ensuring that the build is always handled.
+* `cronus`
+* `cf-sign`
+
+
+## Additional configuration
+
+After you install Codefresh, these are post-installation operations that you should follow.
+
+### Selectively enable SSO provider for account
+As a Codefresh administrator, you can select the providers you want to enable for SSO in your organization, for both new and existing accounts.
+You can always re-enable a provider when needed.
+
+
+1. Sign in as Codefresh admin.
+1. From the left pane, select **Providers**.
+1. Disable the providers not relevant for the accounts.
+These providers are not displayed as options during sign-up/sign-in.
+
+
+### (Optional) Set up Git integration
+
+Codefresh supports out-of-the-box Git logins using your local username and password, or logins using your Git provider, as described below. You can also configure login to supported SSO providers after installation, as described in [Setting up OpenID Connect (OIDC) Federated Single Sign-On (SSO)]({{site.baseurl}}/docs/administration/single-sign-on/oidc).
+
+If you’d like to set up a login to Codefresh using your Git provider, first log in using the default credentials (username: `AdminCF`, password: `AdminCF`), and add your Git provider OAuth integration details in our admin console:
+
+**Admin Management** > **IDPs** tab
+
+To get the Client ID and Client Secret for each of the supported Git providers, follow the instructions according to your VCS provider.
+
+#### GitHub Enterprise
+
+Navigate to your GitHub organization settings: https://github.com/organizations/your_org_name/settings.
+
+On the left-hand side, under **Developer settings**, select **OAuth Apps**, and click **Register an Application**.
+
+Complete the OAuth application registration as follows:
+
+- **Application name:** codefresh-on-prem (or a significant name)
+- **Homepage URL:** https://your-codefresh-onprem-domain
+- **Authorization callback URL:** https://your-codefresh-onprem-domain/api/auth/github/callback
+
+After registration, note down the created Client ID and Client Secret. They will be required for the settings in **Codefresh Admin**->**IDPs**
+
+#### GitLab
+
+Navigate to your Applications menu in GitLab User Settings: https://gitlab.com/profile/applications
+
+Complete the application creation form as follows:
+
+- **Name:** codefresh-onprem (or a significant name)
+- **Redirect URI:** https://your-codefresh-onprem-domain/api/auth/gitlab/callback
+- **Scopes (permissions):**
+ - API
+ - read_user
+ - read_registry
+
+Click **Save application**.
+
+After app creation, note down the created Application ID and Client Secret. They will be required for the settings in **Codefresh Admin**->**IDPs**.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/installation/git-idp.png"
+ url="/images/administration/installation/git-idp.png"
+ %}
+
+>Note: When configuring the default IDP (for GitHub, GitLab, etc.), do not modify the Client Name field. Keep the names as GitHub, GitLab, BitBucket, etc.; otherwise, the signup and login views won’t work.
+
+### Proxy Configuration
+
+If your environment resides behind HTTP proxies, you need to uncomment the following section in `config.yaml`:
+
+```yaml
+global:
+ env:
+ HTTP_PROXY: "http://myproxy.domain.com:8080"
+ http_proxy: "http://myproxy.domain.com:8080"
+ HTTPS_PROXY: "http://myproxy.domain.com:8080"
+ https_proxy: "http://myproxy.domain.com:8080"
+ NO_PROXY: "127.0.0.1,localhost,kubernetes.default.svc,.codefresh.svc,100.64.0.1,169.254.169.254,cf-builder,cf-cfapi,cf-cfui,cf-chartmuseum,cf-charts-manager,cf-cluster-providers,cf-consul,cf-consul-ui,cf-context-manager,cf-cronus,cf-helm-repo-manager,cf-hermes,cf-ingress-nginx-controller,cf-kube-integration,cf-mongodb,cf-nats,cf-nomios,cf-pipeline-manager,cf-postgresql,cf-rabbitmq,cf-redis-master,cf-registry,cf-runner,cf-runtime-environment-manager,cf-store"
+ no_proxy: "127.0.0.1,localhost,kubernetes.default.svc,.codefresh.svc,100.64.0.1,169.254.169.254,cf-builder,cf-cfapi,cf-cfui,cf-chartmuseum,cf-charts-manager,cf-cluster-providers,cf-consul,cf-consul-ui,cf-context-manager,cf-cronus,cf-helm-repo-manager,cf-hermes,cf-ingress-nginx-controller,cf-kube-integration,cf-mongodb,cf-nats,cf-nomios,cf-pipeline-manager,cf-postgresql,cf-rabbitmq,cf-redis-master,cf-registry,cf-runner,cf-runtime-environment-manager,cf-store"
+```
+In addition, you should also add your Kubernetes API IP address (`kubectl get svc kubernetes`) to both `NO_PROXY` and `no_proxy`.
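+
+For example, to print just the cluster IP of the Kubernetes API service (plain `kubectl`, no Codefresh-specific assumptions):
+
+```shell
+# The printed IP should be appended to both NO_PROXY and no_proxy
+kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'
+```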
+
+### Storage
+
+Codefresh uses both cluster storage (volumes) and external storage.
+
+#### Databases
+
+The following table displays the list of databases created as part of the installation:
+
+| Database | Purpose | Latest supported version |
+|----------|---------| ---------------|
+| mongoDB | Stores all account data (account settings, users, projects, pipelines, builds, etc.) | 4.2.x |
+| postgresql | Stores data about events on the account (pipeline updates, deletes, etc.). The audit log uses the data from this database. | 13.x |
+| redis | Mainly used for caching, and as a key-value store for the trigger manager. | 6.0.x |
+
+#### Volumes
+
+These are the volumes required for Codefresh on-premises:
+
+
+{: .table .table-bordered .table-hover}
+| Name | Purpose | Minimum Capacity | Can run on netfs (nfs, cifs) |
+|----------------|------------------------|------------------|------------------------------|
+| cf-mongodb* | Main database - Mongo | 8GB | Yes** |
+| cf-postgresql* | Events databases - Postgres | 8GB | Yes** |
+| cf-rabbitmq* | Message broker | 8GB | No** |
+| cf-redis* | Cache | 8GB | No** |
+| cf-store | Trigger Redis data | 8GB | No** |
+| cf-cronus | Trigger crontab data | 1GB | Yes |
+| datadir-cf-consul-0 | Consul datadir | 1GB | Yes |
+| cf-chartmuseum | chartmuseum | 10GB | Yes |
+| cf-builder-0 | /var/lib/docker for builder | 100GB | No*** |
+| cf-runner-0 | /var/lib/docker for composition runner | 100GB | No*** |
+
+{% raw %}
+
+ (*) An external service can be used instead
+
+ (**) Running on netfs (nfs, cifs) is not recommended by the product admin guide
+
+ (***) The Docker daemon can run on a block device only
+
+{% endraw %}
+
+StatefulSets (`cf-builder` and `cf-runner`) store their data on separate persistent volumes (PVs), which can be claimed using Persistent Volume Claims (PVCs) with a default initial size of 100Gi. These StatefulSets can also connect to existing pre-defined PVCs.
+
+The default initial volume size (100Gi) can be overridden in the custom `config.yaml` file; value descriptions are also in the `config.yaml` file.
+The registry’s initial volume size is 100Gi, and can also be overridden in a custom `config.yaml` file. You can also use a customer-defined registry configuration file (`config.yaml`) that allows different registry storage back-ends (S3, Azure Blob, GCS, etc.) and other parameters. More details can be found in the [Docker documentation](https://docs.docker.com/registry/configuration/).
+
+Depending on your Kubernetes version, we can assist with PV resizing. Details can be found in this [Kubernetes blog post](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/).
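+
+For reference, a minimal sketch of expanding an existing PVC in place (assuming the bound StorageClass has `allowVolumeExpansion: true`; the PVC name and target size are illustrative):
+
+```shell
+# Request a larger size on the PVC; the storage provisioner resizes the volume
+kubectl patch pvc cf-chartmuseum -n codefresh \
+  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
+```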
+
+#### Automatic Volume Provisioning
+
+Codefresh installation supports automatic storage provisioning, based on the standard Kubernetes dynamic provisioner Storage Classes and Persistent Volume Claims. All required installation volumes are provisioned automatically, using either the default Storage Class or a custom Storage Class specified as a parameter in `config.yaml` under `storageClass: my-storage-class`.
+
+
+
+### Retention policy for Codefresh builds
+Define a retention policy to manage Codefresh builds. The retention settings are controlled through `cf-api` deployment environment variables, all of which have default settings which you can retain or customize. By default, Codefresh deletes builds older than six months, including offline logs.
+
+The retention mechanism, implemented as a Cron Job, removes data from collections such as:
+* workflowproccesses
+* workflowrequests
+* workflowrevisions
+
+{: .table .table-bordered .table-hover}
+| Env Variable | Description | Default |
+|---------------|--------------------------- |---------------------- |
+|`RETENTION_POLICY_IS_ENABLED` | Determines if automatic build deletion through the Cron job is enabled. | `true` |
+|`RETENTION_POLICY_BUILDS_TO_DELETE`| The maximum number of builds to delete by a single Cron job. To avoid database issues, especially when there are large numbers of old builds, we recommend deleting them in small chunks. You can gradually increase the number after verifying that performance is not affected. | `50` |
+|`RETENTION_POLICY_DAYS` | The number of days for which to retain builds. Builds older than the defined retention period are deleted. | `180` |
+|`RUNTIME_MONGO_URI` | Optional. The URI of the Mongo database from which to remove MongoDB logs (in addition to the builds). | |
+
+
+### Managing Codefresh backups
+
+Codefresh on-premises backups can be automated by installing a specific service as an addon to your Codefresh on-premises installation. It is based on the [mgob](https://github.com/stefanprodan/mgob){:target="\_blank"} open source project, and can run scheduled backups with retention, S3 & SFTP upload, notifications, instrumentation with Prometheus and more.
+
+#### Configure and deploy the Backup Manager
+
+Backup Manager is installed as an addon, and therefore requires an existing Codefresh on-premises installation.
+Before installing it, make sure the selected kube config points to the cluster on which Codefresh is installed.
+
+1. Go to the staging directory of your Codefresh installation, and open the config file: `your-CF-stage-dir/addons/backup-manager/config.yaml`.
+1. Retain or customize the values of these configuration parameters:
+ * `metadata`: Various CF-installer-specific parameters, which should not be changed in this case
+ * `kubernetes`: Specify a kube context, kube config file, and a namespace for the backup manager
+ * `storage`: Storage class, storage size and read modes for persistent volumes to store backups locally within your cluster
+ * Backup plan configuration parameters under `jobConfigs.cfBackupPlan`:
+ * `target.uri` - the target Mongo URI. We recommend leaving the Mongo URI value blank; it is taken automatically from the Codefresh release installed in your cluster
+ * `scheduler` - specify the cron expression for your backup schedule, and the backup retention and timeout values
+
+For more advanced backup plan settings, such as specifying remote cloud-based storage providers for your backups, configuring notifications, and more, see the [mgob configuration](https://github.com/stefanprodan/mgob#configure) documentation.
+
+To **deploy the backup manager** service, select the correct kube context where Codefresh on-premises is installed, and deploy backup-manager with the following command:
+
+```
+kcfi deploy -c your-CF-stage-dir/addons/backup-manager/config.yaml
+```
+
+#### On-demand/ad-hoc backup
+```
+kubectl port-forward cf-backup-manager-0 8090
+curl -X POST http://localhost:8090/backup/cfBackupPlan
+```
+
+#### Restore from backup
+```
+kubectl exec -it cf-backup-manager-0 bash
+mongorestore --gzip --archive=/storage/cfBackupPlan/backup-archive-name.gz --uri mongodb://root:password@mongodb:27017 --drop
+```
+
+### Configuring AWS Load Balancers
+
+By default, Codefresh deploys the [ingress-nginx](https://github.com/kubernetes/ingress-nginx/) controller, with a [Classic Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html) as the controller service.
+
+#### NLB
+
+To use a **Network Load Balancer**, deploy a regular Codefresh installation with the following ingress config for the `cf-ingress-controller` controller service.
+
+`config.yaml`
+```yaml
+ingress-nginx:
+ controller:
+ service:
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-type: nlb
+ service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
+ service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
+ service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
+
+tls:
+ selfSigned: false
+ cert: certs/certificate.crt
+ key: certs/private.key
+```
+This annotation creates a new Network Load Balancer, which you should use in the Codefresh UI DNS record.
+Update the DNS record according to the new service.
+
+#### L7 ELB with SSL Termination
+
+When a **Classic Load Balancer** is used, some Codefresh features (for example, `OfflineLogging`) use a websocket to connect with the Codefresh API, and require the secure TCP (SSL) protocol enabled on the Load Balancer listener instead of HTTPS.
+
+To use either a certificate from a third-party issuer that was uploaded to IAM, or a certificate [requested](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html) within AWS Certificate Manager, see the following config example:
+
+
+`config.yaml`
+```yaml
+ingress-nginx:
+ controller:
+ service:
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
+ service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
+ service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
+ service.beta.kubernetes.io/aws-load-balancer-ssl-cert: < CERTIFICATE ARN >
+ targetPorts:
+ http: http
+ https: http
+
+tls:
+ selfSigned: true
+```
+
+- Both `http` and `https` target ports should be set to **80**.
+- Update your AWS Load Balancer listener for port 443 from HTTPS protocol to SSL.
+
+#### ALB
+
+To use an **Application Load Balancer**, the [ALB Ingress Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html) should be deployed to the cluster.
+
+To support ALB:
+
+- First, disable the Nginx controller in the Codefresh init config file, __config.yaml__:
+
+```yaml
+ingress-nginx: #disables creation of Nginx controller deployment
+ enabled: false
+
+ingress: #disables creation of Ingress object
+ enabled: false
+```
+
+- [Deploy](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html) the ALB controller.
+- Create a new **Ingress** resource:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
+ alb.ingress.kubernetes.io/scheme: internet-facing
+ alb.ingress.kubernetes.io/target-type: ip
+ kubernetes.io/ingress.class: alb
+ meta.helm.sh/release-name: cf
+ meta.helm.sh/release-namespace: codefresh
+ labels:
+ app: cf-codefresh
+ release: cf
+ name: cf-codefresh-ingress
+ namespace: codefresh
+spec:
+ defaultBackend:
+ service:
+ name: cf-cfui
+ port:
+ number: 80
+ rules:
+ - host: myonprem.domain.com
+ http:
+ paths:
+ - backend:
+ service:
+ name: cf-cfapi
+ port:
+ number: 80
+ path: /api/*
+ pathType: ImplementationSpecific
+ - backend:
+ service:
+ name: cf-cfapi
+ port:
+ number: 80
+ path: /ws/*
+ pathType: ImplementationSpecific
+ - backend:
+ service:
+ name: cf-cfui
+ port:
+ number: 80
+ path: /
+ pathType: ImplementationSpecific
+```
+
+### Configure CSP (Content Security Policy)
+Add CSP environment variables to `config.yaml`, and define the values to be returned in the CSP HTTP headers.
+```yaml
+cfui:
+ env:
+ CONTENT_SECURITY_POLICY: ""
+ CONTENT_SECURITY_POLICY_REPORT_ONLY: "default-src 'self'; font-src 'self'
+ https://fonts.gstatic.com; script-src 'self' https://unpkg.com https://js.stripe.com;
+ style-src 'self' https://fonts.googleapis.com; 'unsafe-eval' 'unsafe-inline'"
+ CONTENT_SECURITY_POLICY_REPORT_TO: ""
+```
+`CONTENT_SECURITY_POLICY` is the string describing the content policies. Use semicolons to separate policies.
+`CONTENT_SECURITY_POLICY_REPORT_TO` is a comma-separated list of JSON objects. Each object must have a name, and an array of endpoints that receive the incoming CSP reports.
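+
+For illustration, a hypothetical reporting group for `CONTENT_SECURITY_POLICY_REPORT_TO` (the group name and endpoint URL are assumptions, following the Reporting API's `Report-To` object shape):
+
+```yaml
+cfui:
+  env:
+    CONTENT_SECURITY_POLICY_REPORT_TO: '[{"group": "csp-endpoint", "max_age": 10886400, "endpoints": [{"url": "https://reports.mydomain.com/csp"}]}]'
+```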
+
+For detailed information, see the [Content Security Policy article on MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP).
+
+### Enable x-hub-signature-256 signature for GitHub AE
+Add the `USE_SHA256_GITHUB_SIGNATURE` environment variable to **cfapi** deployment in `config.yaml`.
+```yaml
+cfapi:
+ env:
+ USE_SHA256_GITHUB_SIGNATURE: "true"
+```
+
+For detailed information, see [Securing your webhooks](https://docs.github.com/en/developers/webhooks-and-events/webhooks/securing-your-webhooks) and [Webhooks](https://docs.github.com/en/github-ae@latest/rest/webhooks).
+
+
+## Using existing external services for data storage/messaging
+
+Normally, the Codefresh installer takes care of all required dependencies internally, by deploying the respective services (Mongo, Redis, etc.) on its own.
+
+However, you might want to use your own existing services, if you already have them up and running externally.
+
+### Configuring an external Postgres database
+
+You can configure Codefresh to work with your existing Postgres database service, if you don't want to use the default one provided by the Codefresh installer.
+
+#### Configuration steps
+
+All the configuration comes down to putting a set of correct values into your Codefresh configuration file `config.yaml`, present in the `your/stage-dir/codefresh` directory. During the installation, Codefresh runs a seed job, using the values described in the following steps:
+
+1. Specify a user name `global.postgresSeedJob.user` and password `global.postgresSeedJob.password` for the seed job. This must be a privileged user allowed to create databases and roles. It is used only by the seed job to create the required database and user.
+2. Specify a user name `global.postgresUser` and password `global.postgresPassword` to be used by the Codefresh installation. The seed job creates this user, and grants it the privileges required to access the created database.
+3. Specify a database name `global.postgresDatabase` to be created by the seed job and used by the Codefresh installation.
+4. Specify `global.postgresHostname`, and optionally `global.postgresPort` (`5432` is the default value).
+5. Disable the postgres subchart installation with `postgresql.enabled: false`, as it is not needed in this case.
+
+
+Below is an example of the relevant piece of `config.yaml`:
+
+```yaml
+global:
+ postgresSeedJob:
+ user: postgres
+ password: zDyGp79XyZEqLq7V
+ postgresUser: cf_user
+ postgresPassword: fJTFJMGV7sg5E4Bj
+ postgresDatabase: codefresh
+ postgresHostname: my-postgres.ccjog7pqzunf.us-west-2.rds.amazonaws.com
+ postgresPort: 5432
+
+postgresql:
+ enabled: false #disable default postgresql subchart installation
+```
+#### Running the seed job manually
+
+If you prefer to run the seed job manually, use the script named `postgres-seed.sh` in the `your/stage-dir/codefresh/addons/seed-scripts` directory. The script takes the following set of variables, which you need to set before running it:
+
+```shell
+export POSTGRES_SEED_USER="postgres"
+export POSTGRES_SEED_PASSWORD="zDyGp79XyZEqLq7V"
+export POSTGRES_USER="cf_user"
+export POSTGRES_PASSWORD="fJTFJMGV7sg5E4Bj"
+export POSTGRES_DATABASE="codefresh"
+export POSTGRES_HOST="my-postgres.ccjog7pqzunf.us-west-2.rds.amazonaws.com"
+export POSTGRES_PORT="5432"
+```
+The variables have the same meaning as the configuration values described in the previous section about Postgres.
+
+However you **still need to specify a set of values** in the Codefresh config file as described in the section above, but with the whole **`postgresSeedJob` section omitted**, like this:
+
+```yaml
+global:
+ postgresUser: cf_user
+ postgresPassword: fJTFJMGV7sg5E4Bj
+ postgresDatabase: codefresh
+ postgresHostname: my-postgresql.prod.svc.cluster.local
+ postgresPort: 5432
+
+postgresql:
+ enabled: false #disable default postgresql subchart installation
+```
+
+### Configuring an external MongoDB
+
+Codefresh recommends using the Bitnami MongoDB [chart](https://github.com/bitnami/charts/tree/master/bitnami/mongodb) as the Mongo database. The supported version of Mongo is 4.2.x.
+
+To configure Codefresh on-premises to use an external Mongo service, provide the following values in `config.yaml`:
+
+- **Mongo connection string** - `mongoURI`. This string is used by all services to communicate with Mongo. Codefresh automatically creates a user with "ReadWrite" permissions to all of the created databases, with the username and password from the URI. Optionally, automatic user creation can be disabled with `mongoSkipUserCreation`, in order to use an already existing user. In that case, the existing user must have **ReadWrite** permissions to all newly created databases.
+Codefresh does not support the [DNS Seedlist Connection Format](https://docs.mongodb.com/manual/reference/connection-string/#connections-dns-seedlist) at the moment; use the [Standard Connection Format](https://docs.mongodb.com/manual/reference/connection-string/#connections-standard-connection-string-format) instead.
+- Mongo **root user** name and **password** - `mongodbRootUser`, `mongodbRootPassword`. This privileged user is used by Codefresh only during installation, for seed jobs and automatic user creation. After installation, the credentials from the provided Mongo URI are used. The Mongo root user must have permissions to create users.
+
+See the [Mongo required Access](https://docs.mongodb.com/manual/reference/method/db.createUser/#required-access) for more details.
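+For example, a `mongoURI` in the Standard Connection Format might look like the following (hostnames and credentials here are hypothetical):
+
+```yaml
+global:
+  mongoURI: mongodb://cf_user:fJTFJMGV7sg5E4Bj@my-mongodb-0.prod.svc.cluster.local:27017,my-mongodb-1.prod.svc.cluster.local:27017/?replicaSet=rs0
+```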
+
+Here is an example of all the related values:
+
+```yaml
+global:
+ mongodbRootUser:
+ mongodbRootPassword:
+ mongoURI:
+ mongoSkipUserCreation: true
+ mongoDeploy: false # disables deployment of internal mongo service
+
+mongo:
+ enabled: false
+```
+
+#### MongoDB with Mutual TLS
+
+>This option is available in kcfi **v0.5.10**.
+
+Codefresh supports enabling SSL/TLS between the Codefresh microservices and MongoDB. To enable this option, specify the following parameters in `config.yaml`:
+
+* `global.mongoTLS: true`
+* `global.mongoCaCert` - CA certificate file path (in the kcfi init directory)
+* `global.mongoCaKey` - CA certificate private key file path (in the kcfi init directory)
+
+`config.yaml` example:
+```yaml
+global:
+ mongodbRootUser: root
+ mongodbRootPassword: WOIqcSwr0y
+ mongoURI: mongodb://my-mongodb.prod.svc.cluster.local/?ssl=true&authMechanism=MONGODB-X509&authSource=$external
+ mongoSkipUserCreation: true
+ mongoDeploy: false # disables deployment of internal mongo service
+
+ mongoTLS: true #enable MongoDB TLS support
+ mongoCaCert: mongodb-ca/ca-cert.pem
+ mongoCaKey: mongodb-ca/ca-key.pem
+
+ ### for OfflineLogging feature
+ runtimeMongoURI: mongodb://my-mongodb.prod.svc.cluster.local/?ssl=true&authMechanism=MONGODB-X509&authSource=$external
+
+### for OfflineLogging feature
+cfapi:
+ env:
+ RUNTIME_MONGO_TLS: "true"
+ RUNTIME_MONGO_TLS_VALIDATE: "true" # 'false' if self-signed certificate to avoid x509 errors
+
+## set MONGO_MTLS_VALIDATE to `false` if self-signed certificate to avoid x509 errors
+cluster-providers:
+ env:
+ MONGO_MTLS_VALIDATE: "false"
+
+k8s-monitor:
+ env:
+ MONGO_MTLS_VALIDATE: "false"
+
+mongo:
+ enabled: false #disable default mongodb subchart installation
+```
+
+>Perform an upgrade:
+>`kcfi deploy -c config.yaml --debug`
+
+### Configure an external Redis service
+Codefresh recommends using the Bitnami Redis [chart](https://github.com/bitnami/charts/tree/master/bitnami/redis) as the Redis store.
+
+**Limitations**
+
+Codefresh does not support secure connections to Redis (TLS) or the AUTH username extension.
+
+**Configuration**
+
+To configure Codefresh to use an external Redis service, add the following parameters to your `config.yaml`:
+
+`config.yaml` example:
+```yaml
+global:
+ redisUrl: my-redis.prod.svc.cluster.local
+ redisPort: 6379
+ redisPassword: 6oOhHI8fI5
+
+ runtimeRedisHost: my-redis.prod.svc.cluster.local
+ runtimeRedisPassword: 6oOhHI8fI5
+ runtimeRedisPort: 6379
+ runtimeRedisDb: 2
+
+redis:
+ enabled: false #disable default redis subchart installation
+```
+
+The `redis*` values configure the main Redis storage, while the `runtimeRedis*` values configure the storage used for pipeline logs when the `OfflineLogging` feature is turned on. In most cases the host value is the same for both.
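+To sanity-check connectivity to the external Redis service, you can run `redis-cli` against it (host, port, and password here are the hypothetical values from the example above):
+
+```shell
+redis-cli -h my-redis.prod.svc.cluster.local -p 6379 -a 6oOhHI8fI5 ping
+# expected reply: PONG
+```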
+
+
+### Configuring an external RabbitMQ service
+
+Codefresh recommends using the Bitnami RabbitMQ [chart](https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq) as the RabbitMQ service.
+
+To use an external RabbitMQ service instead of the local helm chart, add the following values to the __config.yaml__:
+
+```yaml
+rabbitmq:
+ enabled: false
+
+global:
+ rabbitmqUsername:
+ rabbitmqPassword:
+ rabbitmqHostname:
+```
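+For example, with hypothetical values filled in:
+
+```yaml
+rabbitmq:
+  enabled: false
+
+global:
+  rabbitmqUsername: cf_user
+  rabbitmqPassword: 6oOhHI8fI5
+  rabbitmqHostname: my-rabbitmq.prod.svc.cluster.local
+```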
+
+### Configuring an external Consul service
+
+
+Note that at the moment Codefresh supports only the deprecated Consul API (image __consul:1.0.0__), and does not support connections via HTTPS or any authentication.
+The Consul host must expose port `8500`.
+
+>In general, we do not recommend moving the Consul service outside the cluster.
+
+
+To configure Codefresh to use your external Consul service, add the following values to the __config.yaml__:
+
+```yaml
+global:
+ consulHost:
+
+consul:
+ enabled: false
+```
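+For example, with a hypothetical in-cluster hostname (the host must expose port `8500` as noted above):
+
+```yaml
+global:
+  consulHost: my-consul.prod.svc.cluster.local
+
+consul:
+  enabled: false
+```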
+
+## App Cluster Autoscaling
+
+Autoscaling in Kubernetes is implemented as an interaction between the Cluster Autoscaler and the Horizontal Pod Autoscaler.
+
+{: .table .table-bordered .table-hover}
+| | Scaling Target| Trigger | Controller | How it Works |
+| ----------- | ------------- | ------- | --------- | --------- |
+| [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) | Nodes | **Up:** pending pods<br/>**Down:** node resource allocation is low | On GKE the autoscaler can be turned on/off and configured with min/max per node group; it can also be installed separately | Listens for pending pods to scale up and for low node allocations to scale down. Should have permissions to call the cloud API. Considers pod affinity, PDBs, storage, and special annotations |
+| [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) | Replicas on Deployments or StatefulSets | Metric value thresholds defined in the HPA object | Part of the Kubernetes controller manager | The controller gets metrics from "metrics.k8s.io/v1beta1", "custom.metrics.k8s.io/v1beta1", and "external.metrics.k8s.io/v1beta1"; it requires [metrics-server](https://github.com/kubernetes-sigs/metrics-server) and custom metrics adapters ([prometheus-adapter](https://github.com/kubernetes-sigs/prometheus-adapter), [stackdriver-adapter](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/custom-metrics-stackdriver-adapter)) to serve this API (see note (1) below), and adjusts Deployment or StatefulSet replicas according to the definitions in the HorizontalPodAutoscaler object.<br/>There are v1 and beta API versions of HorizontalPodAutoscaler:<br/>[v1](https://github.com/kubernetes/api/blob/master/autoscaling/v1/types.go) - supports resource metrics (CPU, memory) - `kubectl get hpa`<br/>[v2beta2](https://github.com/kubernetes/api/blob/master/autoscaling/v2beta2/types.go) and [v2beta1](https://github.com/kubernetes/api/blob/master/autoscaling/v2beta1/types.go) - support both resource and custom metrics - `kubectl get hpa.v2beta2.autoscaling`. **The metric value should decrease when new pods are added.**<br/>*Wrong metric example:* request rate<br/>*Right metric example:* average request rate per pod |
+
+Note (1)
+```
+kubectl get apiservices | awk 'NR==1 || $1 ~ "metrics"'
+NAME SERVICE AVAILABLE AGE
+v1beta1.custom.metrics.k8s.io monitoring/prom-adapter-prometheus-adapter True 60d
+v1beta1.metrics.k8s.io kube-system/metrics-server True 84d
+```
+
+
+**Implementation in Codefresh**
+
+* Default “Enable Autoscaling” settings for GKE
+* Using [prometheus-adapter](https://github.com/kubernetes-sigs/prometheus-adapter) with custom metrics
+
+We define HPA objects for the `cfapi` and `pipeline-manager` services.
+
+**CFapi HPA object**
+
+It is based on three metrics (the HPA controller scales up if any one of the target values is reached):
+
+```
+kubectl get hpa.v2beta1.autoscaling cf-cfapi -oyaml
+```
+
+{% highlight yaml %}
+{% raw %}
+apiVersion: autoscaling/v2beta1
+kind: HorizontalPodAutoscaler
+metadata:
+ annotations:
+ meta.helm.sh/release-name: cf
+ meta.helm.sh/release-namespace: default
+ labels:
+ app.kubernetes.io/managed-by: Helm
+ name: cf-cfapi
+ namespace: default
+spec:
+ maxReplicas: 16
+ metrics:
+ - object:
+ metricName: requests_per_pod
+ target:
+ apiVersion: v1
+ kind: Service
+ name: cf-cfapi
+ targetValue: "10"
+ type: Object
+ - object:
+ metricName: cpu_usage_avg
+ target:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: cf-cfapi-base
+ targetValue: "1"
+ type: Object
+ - object:
+ metricName: memory_working_set_bytes_avg
+ target:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: cf-cfapi-base
+ targetValue: 3G
+ type: Object
+ minReplicas: 2
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: cf-cfapi-base
+{% endraw%}
+{% endhighlight %}
+
+* `requests_per_pod` is based on the `rate(nginx_ingress_controller_requests)` metric ingested from the nginx-ingress-controller
+* `cpu_usage_avg` is based on the cadvisor (from kubelet) rate `rate(container_cpu_user_seconds_total)`
+* `memory_working_set_bytes_avg` is based on the cadvisor metric `container_memory_working_set_bytes`
+
+**pipeline-manager HPA**
+
+It is based on `cpu_usage_avg`:
+
+{% highlight yaml %}
+{% raw %}
+apiVersion: autoscaling/v2beta1
+kind: HorizontalPodAutoscaler
+metadata:
+ annotations:
+ meta.helm.sh/release-name: cf
+ meta.helm.sh/release-namespace: default
+ labels:
+ app.kubernetes.io/managed-by: Helm
+ name: cf-pipeline-manager
+spec:
+ maxReplicas: 8
+ metrics:
+ - object:
+ metricName: cpu_usage_avg
+ target:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: cf-pipeline-manager-base
+ targetValue: 400m
+ type: Object
+ minReplicas: 2
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: cf-pipeline-manager-base
+{% endraw%}
+{% endhighlight %}
+
+**prometheus-adapter configuration**
+
+Reference: [https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config.md](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config.md)
+
+{% highlight yaml %}
+{% raw %}
+rules:
+ - metricsQuery: |
+ kube_service_info{<<.LabelMatchers>>} * on() group_right(service)
+ (sum(rate(nginx_ingress_controller_requests{<<.LabelMatchers>>}[2m]))
+ / on() kube_deployment_spec_replicas{deployment='<>-base',namespace='<>'})
+ name:
+ as: requests_per_pod
+ matches: ^(.*)$
+ resources:
+ overrides:
+ namespace:
+ resource: namespace
+ service:
+ resource: service
+ seriesQuery: kube_service_info{service=~".*cfapi.*"}
+ - metricsQuery: |
+ kube_deployment_labels{<<.LabelMatchers>>} * on(label_app) group_right(deployment)
+ (label_replace(
+ avg by (container) (rate(container_cpu_user_seconds_total{container=~"cf-(tasker-kubernetes|cfapi.*|pipeline-manager.*)", job="kubelet", namespace='<>'}[15m]))
+ , "label_app", "$1", "container", "(.*)"))
+ name:
+ as: cpu_usage_avg
+ matches: ^(.*)$
+ resources:
+ overrides:
+ deployment:
+ group: apps
+ resource: deployment
+ namespace:
+ resource: namespace
+ seriesQuery: kube_deployment_labels{label_app=~"cf-(tasker-kubernetes|cfapi.*|pipeline-manager.*)"}
+ - metricsQuery: "kube_deployment_labels{<<.LabelMatchers>>} * on(label_app) group_right(deployment)\n
+ \ (label_replace(\n avg by (container) (avg_over_time (container_memory_working_set_bytes{container=~\"cf-.*\",
+ job=\"kubelet\", namespace='<>'}[15m]))\n
+ \ , \"label_app\", \"$1\", \"container\", \"(.*)\"))\n \n"
+ name:
+ as: memory_working_set_bytes_avg
+ matches: ^(.*)$
+ resources:
+ overrides:
+ deployment:
+ group: apps
+ resource: deployment
+ namespace:
+ resource: namespace
+ seriesQuery: kube_deployment_labels{label_app=~"cf-.*"}
+ - metricsQuery: |
+ kube_deployment_labels{<<.LabelMatchers>>} * on(label_app) group_right(deployment)
+ label_replace(label_replace(avg_over_time(newrelic_apdex_score[15m]), "label_app", "cf-$1", "exported_app", '(cf-api.*|pipeline-manager|tasker-kuberentes)\\[kubernetes\\]'), "label_app", "$1cfapi$3", "label_app", '(cf-)(cf-api)(.*)')
+ name:
+ as: newrelic_apdex
+ matches: ^(.*)$
+ resources:
+ overrides:
+ deployment:
+ group: apps
+ resource: deployment
+ namespace:
+ resource: namespace
+ seriesQuery: kube_deployment_labels{label_app=~"cf-(tasker-kubernetes|cfapi.*|pipeline-manager)"}
+{% endraw%}
+{% endhighlight %}
+
+**How to define HPA in Codefresh installer (kcfi) config**
+
+Most of Codefresh's Microservices subcharts contain `templates/hpa.yaml`:
+
+{% highlight yaml %}
+{% raw %}
+{{- if .Values.HorizontalPodAutoscaler }}
+apiVersion: autoscaling/v2beta1
+kind: HorizontalPodAutoscaler
+metadata:
+ name: {{ template "cfapi.fullname" . }}
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: {{ template "cfapi.fullname" . }}-{{ .version | default "base" }}
+ minReplicas: {{ coalesce .Values.HorizontalPodAutoscaler.minReplicas .Values.replicaCount 1 }}
+ maxReplicas: {{ coalesce .Values.HorizontalPodAutoscaler.maxReplicas .Values.replicaCount 2 }}
+ metrics:
+{{- if .Values.HorizontalPodAutoscaler.metrics }}
+{{ toYaml .Values.HorizontalPodAutoscaler.metrics | indent 4 }}
+{{- else }}
+ - type: Resource
+ resource:
+ name: cpu
+ targetAverageUtilization: 60
+{{- end }}
+{{- end }}
+{% endraw%}
+{% endhighlight %}
+
+To configure HPA for CFapi, add `HorizontalPodAutoscaler` values to `config.yaml`, for example:
+
+(This assumes that the prometheus-adapter is already configured with the metrics `requests_per_pod`, `cpu_usage_avg`, and `memory_working_set_bytes_avg`.)
+
+{% highlight yaml %}
+{% raw %}
+cfapi:
+ replicaCount: 4
+ resources:
+ requests:
+ memory: "4096Mi"
+ cpu: "1100m"
+ limits:
+ memory: "4096Mi"
+ cpu: "2200m"
+ HorizontalPodAutoscaler:
+ minReplicas: 2
+ maxReplicas: 16
+ metrics:
+ - type: Object
+ object:
+ metricName: requests_per_pod
+ target:
+ apiVersion: "v1"
+ kind: Service
+ name: cf-cfapi
+ targetValue: 10
+ - type: Object
+ object:
+ metricName: cpu_usage_avg
+ target:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: cf-cfapi-base
+ targetValue: 1
+ - type: Object
+ object:
+ metricName: memory_working_set_bytes_avg
+ target:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: cf-cfapi-base
+ targetValue: 3G
+{% endraw%}
+{% endhighlight %}
+
+**Querying metrics (for debugging)**
+
+CPU Metric API Call
+
+```
+kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/codefresh/pods/cf-cfapi-base-****-/ | jq
+```
+
+Custom Metrics Call
+
+```
+kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/codefresh/services/cf-cfapi/requests_per_pod | jq
+```
+
+
+## Common Problems, Solutions, and Dependencies
+
+### Dependencies
+
+#### Mongo
+
+All services that use MongoDB depend on the `mongo` pod being up and running. If the `mongo` pod is down, the following services will not work:
+
+- `runtime-environment-manager`
+- `pipeline-manager`
+- `cf-api`
+- `cf-broadcaster`
+- `context-manager`
+- `nomios`
+- `cronius`
+- `cluster-promoters`
+- `k8s-monitor`
+- `charts-manager`
+- `tasker-kubernetes`
+
+#### Logs
+
+There is a dependency between the `cf-broadcaster` pod and the `cf-api` pod. If your pipeline runs, but does not show any logs, try restarting the broadcaster pod.
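+A hedged sketch of that restart, assuming the default `codefresh` namespace (look up the actual pod name first):
+
+```shell
+kubectl get pods -n codefresh | grep broadcaster
+# delete the pod so its controller recreates it
+kubectl delete pod <broadcaster-pod-name> -n codefresh
+```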
+
+### Problems and Solutions
+
+**Problem:** The installer fails because the `codefresh` database does not exist.
+
+**Solution:** If you are using an external PostgreSQL database (instead of the internal one that the installer provides), you must first manually create a new database named `codefresh` in your PostgreSQL instance before running the installer.
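+For example, using `psql` with the privileged seed-job user from the earlier PostgreSQL section (host and credentials are the hypothetical values used throughout this page):
+
+```shell
+PGPASSWORD=zDyGp79XyZEqLq7V psql \
+  -h my-postgres.ccjog7pqzunf.us-west-2.rds.amazonaws.com \
+  -p 5432 -U postgres \
+  -c 'CREATE DATABASE codefresh;'
+```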
+
+
diff --git a/_docs/installation/codefresh-runner.md b/_docs/installation/codefresh-runner.md
new file mode 100644
index 00000000..fbd64393
--- /dev/null
+++ b/_docs/installation/codefresh-runner.md
@@ -0,0 +1,2072 @@
+---
+title: "Codefresh Runner installation"
+description: "Run Codefresh pipelines on your private Kubernetes cluster"
+group: installation
+redirect_from:
+ - /docs/enterprise/codefresh-runner/
+toc: true
+---
+
+Install the Codefresh Runner on your Kubernetes cluster to run pipelines and access secure internal services without compromising on-premises security requirements. These pipelines run on your infrastructure, even behind the firewall, and keep code on your Kubernetes cluster secure.
+
+[Skip to quick installation →](#installation-with-the-quick-start-wizard)
+
+>Important:
+ You must install the Codefresh Runner on _each cluster running Codefresh pipelines_.
+ The Runner is **not** needed in clusters used for _deployment_. You can deploy applications on clusters other than the ones the runner is deployed on.
+
+The installation process takes care of all Runner components and other required resources (config-maps, secrets, volumes).
+
+## Prerequisites
+
+To use the Codefresh Runner, the following are required:
+
+1. A Kubernetes cluster with outgoing internet access (versions 1.10 to 1.23). Each node should have a 50GB disk.
+2. A container runtime, such as [docker](https://kubernetes.io/blog/2020/12/02/dockershim-faq/), [containerd](https://containerd.io/) or [cri-o](https://cri-o.io/). Note that the runner is **not** dependent on any special dockershim features, so any compliant container runtime is acceptable. The docker socket/daemon used by Codefresh pipelines is **NOT** the one on the host node (as it might not exist at all in the case of containerd or cri-o), but instead an internal docker daemon created/managed by the pipeline itself.
+3. A [Codefresh account]({{site.baseurl}}/docs/getting-started/create-a-codefresh-account/) with the Hybrid feature enabled.
+4. A [Codefresh CLI token]({{site.baseurl}}/docs/integrations/codefresh-api/#authentication-instructions) that will be used to authenticate your Codefresh account.
+
+The runner can be installed from any workstation or laptop with access (i.e. via `kubectl`) to the Kubernetes cluster running Codefresh builds. The Codefresh runner will authenticate to your Codefresh account by using the Codefresh CLI token.
+
+## System Requirements
+
+Once installed, the Runner uses the following pods:
+
+* `runner` - responsible for picking tasks (builds) from the Codefresh API
+* `engine` - responsible for running pipelines
+* `dind` - responsible for building and using Docker images
+* `dind-volume-provisioner` - responsible for provisioning volumes (PV) for dind
+* `dind-lv-monitor` - responsible for cleaning **local** volumes
+
+**CPU/Memory**
+
+The following table shows **MINIMUM** resources for each component:
+
+{: .table .table-bordered .table-hover}
+| Component | CPU requests| RAM requests | Storage | Type | Always on |
+| -------------- | --------------|------------- |-------------------------|-------|-------|
+| `runner` | 100m | 100Mi | Doesn't need PV | Deployment | Yes |
+| `engine` | 100m | 500Mi | Doesn't need PV | Pod | No |
+| `dind` | 400m | 800Mi | 16GB PV | Pod | No |
+| `dind-volume-provisioner` | 300m | 400Mi | Doesn't need PV | Deployment | Yes |
+| `dind-lv-monitor` | 300m | 400Mi | Doesn't need PV | DaemonSet | Yes |
+
+Components that are always on consume resources all the time. Components that are not always on only consume resources when pipelines are running (they are created and destroyed automatically for each pipeline).
+
+Node size and count will depend entirely on how many pipelines you want to be “ready” for and how many will use “burst” capacity.
+
+* Ready (nodes): Lower initialization time and faster build times.
+* Burst (nodes): High initialization time and slower build times. (Not recommended)
+
+The size of your nodes directly relates to the size required for your pipelines, and is thus dynamic. If you find that only a few larger pipelines require larger nodes, you may want to have two Codefresh Runners associated with different node pools.
+
+
+**Storage**
+
+For the storage options needed by the `dind` pod we suggest:
+
+* [Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) `/var/lib/codefresh/dind-volumes` on the K8S nodes filesystem (**default**)
+* [EBS](https://aws.amazon.com/ebs/) in the case of AWS. See also the [notes](#installing-on-aws) about getting caching working.
+* [Local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd) or [GCE Disks](https://cloud.google.com/compute/docs/disks#pdspecs) in the case of GCP. See [notes](#installing-on-google-kubernetes-engine) about configuration.
+
+
+**Networking Requirements**
+
+* `dind` - this pod will create an internal network in the cluster to run all the pipeline steps; needs outgoing/egress access to Dockerhub and `quay.io`
+* `runner` - this pod needs outgoing/egress access to `g.codefresh.io`; needs network access to [app-proxy]({{site.baseurl}}/docs/administration/codefresh-runner/#optional-installation-of-the-app-proxy) (if app-proxy is used)
+* `engine` - this pod needs outgoing/egress access to `g.codefresh.io`, `*.firebaseio.com` and `quay.io`; needs network access to `dind` pod
+
+All CNI providers/plugins are compatible with the runner components.
+
+## Installation with the Quick-start Wizard
+
+Install the Codefresh CLI
+
+```shell
+npm install -g codefresh
+```
+
+[Alternative install methods](https://codefresh-io.github.io/cli/installation/)
+
+Authenticate the CLI
+
+```shell
+codefresh auth create-context --api-key {API_KEY}
+```
+
+You can obtain an API Key from your [user settings page](https://g.codefresh.io/user/settings).
+>**Note:** Make sure when you generate the token used to authenticate with the CLI, you generate it with *all scopes*.
+
+>**Note:** Access to the Codefresh CLI is only needed once, during the Runner installation. After that, the Runner authenticates on its own using the details provided. You do NOT need to install the Codefresh CLI on the cluster that is running Codefresh pipelines.
+
+Then run the wizard with the following command:
+
+```shell
+codefresh runner init
+```
+
+or
+
+```shell
+codefresh runner init --token
+```
+
+Before proceeding with the installation, the wizard asks you some basic questions.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/installation-wizard.png"
+ url="/images/administration/runner/installation-wizard.png"
+ alt="Codefresh Runner wizard"
+ caption="Codefresh Runner wizard"
+ max-width="100%"
+ %}
+
+The wizard also creates and runs a sample pipeline that you can see in your Codefresh UI.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/sample-pipeline.png"
+ url="/images/administration/runner/sample-pipeline.png"
+ alt="Codefresh Runner example pipeline"
+ caption="Codefresh Runner example pipeline"
+ max-width="90%"
+ %}
+
+That's it! You can now start using the Runner.
+
+You can also verify your installation with:
+
+```shell
+codefresh runner info
+```
+
+During installation, you can see which API token the Runner will use (if you don't provide one). The printed token is used by the Runner to talk to the Codefresh platform, carrying permissions that allow the Runner to run pipelines. If you save the token, you can later use it to restore the Runner's permissions without creating a new Runner installation, if the deployment is deleted.
+
+**Customizing the Wizard Installation**
+
+You can customize the wizard installation by passing your own values in the `init` command.
+To inspect all available options run `init` with the `--help` flag:
+
+```shell
+codefresh runner init --help
+```
+
+**Inspecting the Manifests Before they are Installed**
+
+If you want to see what manifests are used by the installation wizard you can supply the `--dry-run` parameter in the installation process.
+
+```shell
+codefresh runner init --dry-run
+```
+
+This executes the wizard in a special mode that does not actually install anything in your cluster. After all the configuration questions are answered, all Kubernetes manifests used by the installer are saved locally in the `./codefresh_manifests` folder.
+
+## Install Codefresh Runner with values file
+
+To install the Codefresh Runner with a pre-defined values file, use the `--values` flag:
+
+```shell
+codefresh runner init --values values.yaml
+```
+
+Use [this example](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml) as a starting point for your values file.
+
+## Install Codefresh Runner with Helm
+
+To install the Codefresh Runner using Helm, follow these steps:
+
+1. Download the Codefresh CLI and authenticate it with your Codefresh account. Click [here](https://codefresh-io.github.io/cli/getting-started/) for more detailed instructions.
+2. Run the following command to create all of the necessary entities in Codefresh:
+
+ ```shell
+ codefresh runner init --generate-helm-values-file
+ ```
+
+ * This will not install anything on your cluster, except for running cluster acceptance tests (which may be skipped using the `--skip-cluster-test` option). Note that the Runner Agent and the Runtime Environment are still created in your Codefresh account.
+ * This command also generates a `generated_values.yaml` file in your current directory, which you will later need to provide to the `helm install` command. If you want to install several Codefresh Runners, you need a separate `generated_values.yaml` file for each Runner.
+
+3. Now run the following to complete the installation:
+
+ ```shell
+ helm repo add cf-runtime https://chartmuseum.codefresh.io/cf-runtime
+
+ helm install cf-runtime cf-runtime/cf-runtime -f ./generated_values.yaml --create-namespace --namespace codefresh
+ ```
+ * Here is the link to a repository with the chart for reference: [https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime](https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime)
+
+4. At this point you should have a working Codefresh Runner. You can verify the installation by running:
+
+ ```shell
+ codefresh runner execute-test-pipeline --runtime-name
+ ```
+>**Note!**
+Runtime components' (engine and dind) configuration is determined by the `runner init` command.
+The `helm install` command can only control the configuration of `runner`, `dind-volume-provisioner` and `lv-monitor` components.
+
+## Using the Codefresh Runner
+
+Once installed, the Runner is fully automated. It polls the Codefresh SAAS (by default every 3 seconds) on its own and automatically creates all the resources needed for running pipelines.
+
+Once installation is complete, you should see the Runner's cluster as a new [Runtime environment](https://g.codefresh.io/account-admin/account-conf/runtime-environments) in Codefresh, in the respective tab of your *Account Settings*.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/runtime-environments.png"
+ url="/images/administration/runner/runtime-environments.png"
+ alt="Available runtime environments"
+ caption="Available runtime environments"
+ max-width="60%"
+ %}
+
+If you have multiple environments available, you can change the default (shown with a thin blue border) by clicking on the 3 dot menu on the right of each environment. The Codefresh runner installer comes with a `set-default` option that is automatically set by default in the new runtime environment.
+
+You can even override the runtime environment for a specific pipeline, by specifying it in the respective section of the [pipeline settings]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipelines/).
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/environment-per-pipeline.png"
+ url="/images/administration/runner/environment-per-pipeline.png"
+ alt="Running a pipeline on a specific environment"
+ caption="Running a pipeline on a specific environment"
+ max-width="60%"
+ %}
+
+## Checking the Runner
+
+Once installed, the runner is a normal Kubernetes application like all other applications. You can use your existing tools to monitor it.
+
+Only the runner pod is long living inside your cluster. All other components (such as the engine) are short lived and exist only during pipeline builds.
+You can always see what the Runner is doing by listing the resources inside the namespace you chose during installation:
+
+```shell
+$ kubectl get pods -n codefresh-runtime
+NAME READY STATUS RESTARTS AGE
+dind-5ee7577017ef40908b784388 1/1 Running 0 22s
+dind-lv-monitor-runner-hn64g 1/1 Running 0 3d
+dind-lv-monitor-runner-pj84r 1/1 Running 0 3d
+dind-lv-monitor-runner-v2lhc 1/1 Running 0 3d
+dind-volume-provisioner-runner-64994bbb84-lgg7v 1/1 Running 0 3d
+engine-5ee7577017ef40908b784388 1/1 Running 0 22s
+monitor-648b4778bd-tvzcr 1/1 Running 0 3d
+runner-5d549f8bc5-7h5rc 1/1 Running 0 3d
+```
+
+In the same manner you can list secrets, config-maps, logs, volumes etc. for the Codefresh builds.
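+For example (assuming the `codefresh-runtime` namespace used above):
+
+```shell
+kubectl get secrets,configmaps,pvc -n codefresh-runtime
+kubectl logs deployment/runner -n codefresh-runtime
+```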
+
+## Uninstall the Codefresh Runner
+
+You can uninstall the Codefresh runner from your cluster by running:
+
+```shell
+codefresh runner delete
+```
+
+A wizard, similar to the installation wizard, asks you questions regarding your cluster before completing the removal.
+
+As with the installation wizard, you can pass additional options in advance as command line parameters (see the `--help` output):
+```shell
+codefresh runner delete --help
+```
+
+
+
+## Runner architecture overview
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/codefresh_runner.png"
+ url="/images/administration/runner/codefresh_runner.png"
+ alt="Codefresh Runner architecture overview"
+ caption="Codefresh Runner architecture overview"
+ max-width="100%"
+ %}
+
+
+1. The [Runtime-Environment specification]({{site.baseurl}}/docs/administration/codefresh-runner/) defines the engine and dind pod specs and PVC parameters.
+2. The Runner pod (Agent) pulls tasks (builds) from the Codefresh API every 3 seconds.
+3. Once the agent receives a build task (either a manual run or a webhook-triggered build), it calls the k8s API to create the engine/dind pods and the PVC object.
+4. The Volume Provisioner listens for PVC create events and, based on the StorageClass definition, creates a PV object with the corresponding underlying volume backend (ebs/gcedisk/local).
+5. During the build, each step (clone/build/push/freestyle/composition) is represented as a docker container inside the dind (docker-in-docker) pod. The shared volume (`/codefresh/volume`) is represented as a docker volume and mounted to every step (docker container). The PV mount point inside the dind pod is `/var/lib/docker`.
+6. The engine pod controls the dind pod. It deserializes the pipeline yaml into docker API calls and terminates dind after the build has finished or per user request (sigterm).
+7. The `dind-lv-monitor` DaemonSet OR `dind-volume-cleanup` CronJob are part of the [Runtime Cleaner]({{site.baseurl}}/docs/administration/codefresh-runner/#runtime-cleaners); the `app-proxy` Deployment and Ingress are described in the [next section]({{site.baseurl}}/docs/administration/codefresh-runner/#app-proxy-installation); the `monitor` Deployment is for the [Kubernetes Dashboard]({{site.baseurl}}/docs/deploy-to-kubernetes/manage-kubernetes/).
+
+## App Proxy installation
+
+The App Proxy is an **optional** component of the runner that is mainly used when the git provider server is installed on-premises behind the firewall. The App Proxy provides the following features once installed:
+
+* Enables you to automatically create webhooks for Git in the Codefresh UI (same as the SAAS experience)
+* Sends commit status information back to your Git provider (same as the SAAS experience)
+* Makes all Git Operations in the GUI work exactly like the SAAS installation of Codefresh
+
+The App Proxy requires a Kubernetes cluster that:
+
+1. already has the Codefresh Runner installed
+1. has an active [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress/)
+1. allows incoming connections from the VPC/VPN where users are browsing the Codefresh UI. The ingress connection **must** have a hostname assigned for this route and **must** be configured to perform SSL termination
+
+>Currently the App-proxy works only for Github (SAAS and on-prem versions), Gitlab (SAAS and on-prem versions) and Bitbucket server.
+
+Here is the architecture of the app-proxy:
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/app-proxy-architecture.png"
+ url="/images/administration/runner/app-proxy-architecture.png"
+ alt="How App Proxy and the Codefresh runner work together"
+ caption="How App Proxy and the Codefresh runner work together"
+ max-width="80%"
+ %}
+
+Basically when a Git GET operation takes place, the Codefresh UI will contact the app-proxy (if it is present) and it will route the request to the backing Git provider. The confidential Git information never leaves the firewall premises and the connection between the browser and the ingress is SSL/HTTPS.
+
+The app-proxy has to work over HTTPS and by default it will use the ingress controller to do its SSL termination. Therefore, the ingress controller will need to be configured to perform SSL termination. Check the documentation of your ingress controller (for example [nginx ingress](https://kubernetes.github.io/ingress-nginx/examples/tls-termination/)). This means that the app-proxy does not compromise security in any way.
+
+To install the app-proxy on a Kubernetes cluster that already has a Codefresh runner use the following command:
+
+```shell
+codefresh install app-proxy --host=
+```
+
+If you want to install the Codefresh runner and app-proxy in a single command use the following:
+
+```shell
+codefresh runner init --app-proxy --app-proxy-host=
+```
+
+If you have multiple ingress controllers in the Kubernetes cluster you can use the `--app-proxy-ingress-class` parameter to define which ingress will be used. For additional security you can also define an allowlist for IPs/ranges that are allowed to use the ingress (to further limit the web browsers that can access the Ingress). Check the documentation of your ingress controller for the exact details.
+
+By default the app-proxy ingress will use the path `hostname/app-proxy`. You can change that default by using the values file in the installation with the flag `--values values.yaml`.
+
+See the `AppProxy` section in the example [values.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml#L231-L253).
+
+```shell
+codefresh install app-proxy --values values.yaml
+```
+
+## Manual Installation of Runner Components
+
+If you don't want to use the wizard, you can also install the components of the runner yourself.
+
+The Codefresh runner consists of the following:
+
+* Runner - responsible for getting tasks from the platform and executing them. One per account. Can handle multiple runtimes.
+* Runtime - the components responsible for the workflow execution at runtime:
+ * Volume provisioner (pod name prefix `dind-volume-provisioner-runner`) - responsible for volume provisioning for the dind pod
+ * lv-monitor (pod name prefix `dind-lv-monitor-runner`) - a DaemonSet responsible for cleaning volumes
+
+To install the runner on a single cluster with both the runtime and the agent, execute the following:
+
+```shell
+kubectl create namespace codefresh
+codefresh install agent --agent-kube-namespace codefresh --install-runtime
+```
+
+You can then follow the instructions for [using the runner](#using-the-codefresh-runner).
+
+### Installing Multiple runtimes with a Single Agent
+
+For advanced users, it is also possible to install a single agent that manages multiple runtime environments.
+
+>NOTE: Make sure that the cluster where the agent is installed has network access to the other clusters of the runtimes.
+
+```shell
+# 1. Create namespace for the agent:
+kubectl create namespace codefresh-agent
+
+# 2. Install the agent on the namespace ( give your agent a unique name as $NAME):
+# Note down the token and use it in the second command.
+codefresh create agent $NAME
+codefresh install agent --token $TOKEN --kube-namespace codefresh-agent
+codefresh get agents
+
+# 3. Create namespace for the first runtime:
+kubectl create namespace codefresh-runtime-1
+
+# 4. Install the first runtime on the namespace
+# 5. the runtime name is printed
+codefresh install runtime --runtime-kube-namespace codefresh-runtime-1
+
+# 6. Attach the first runtime to agent:
+codefresh attach runtime --agent-name $AGENT_NAME --agent-kube-namespace codefresh-agent --runtime-name $RUNTIME_NAME --runtime-kube-namespace codefresh-runtime-1
+
+# 7. Restart the runner pod in namespace `codefresh-agent`
+kubectl delete pods $RUNNER_POD
+
+# 8. Create namespace for the second runtime
+kubectl create namespace codefresh-runtime-2
+
+# 9. Install the second runtime on the namespace
+codefresh install runtime --runtime-kube-namespace codefresh-runtime-2
+
+# 10. Attach the second runtime to agent and restart the Venona pod automatically
+codefresh attach runtime --agent-name $AGENT_NAME --agent-kube-namespace codefresh-agent --runtime-name $RUNTIME_NAME --runtime-kube-namespace codefresh-runtime-2 --restart-agent
+```
+
+## Configuration Options
+
+You can fine tune the installation of the runner to better match your environment and cloud provider.
+
+### Installing on AWS
+
+If you've installed the Codefresh runner on [EKS](https://aws.amazon.com/eks/) or any other custom cluster (e.g. with kops) in Amazon you need to configure it properly to work with EBS volumes in order to gain [caching]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipeline-caching/).
+
+> This section assumes you already installed the Runner with default options: `codefresh runner init`
+
+**Prerequisites**
+
+The `dind-volume-provisioner` deployment should have permissions to create/attach/detach/delete/get EBS volumes.
+
+There are 3 options:
+* running the `dind-volume-provisioner` pod on a node (node group) with an IAM role
+* a k8s secret in the [AWS credentials format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) mounted to `~/.aws/credentials` (or `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` env vars passed) in the `dind-volume-provisioner` pod
+* an [AWS Identity for Service Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) IAM role assigned to the `volume-provisioner-runner` service account
+
+Minimal policy for `dind-volume-provisioner`:
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:AttachVolume",
+ "ec2:CreateSnapshot",
+ "ec2:CreateTags",
+ "ec2:CreateVolume",
+ "ec2:DeleteSnapshot",
+ "ec2:DeleteTags",
+ "ec2:DeleteVolume",
+ "ec2:DescribeInstances",
+ "ec2:DescribeSnapshots",
+ "ec2:DescribeTags",
+ "ec2:DescribeVolumes",
+ "ec2:DetachVolume"
+ ],
+ "Resource": "*"
+ }
+ ]
+}
+```
+
+Create Storage Class for EBS volumes:
+>Choose **one** of the Availability Zones to be used for your pipeline builds. Multi-AZ configuration is not supported.
+
+**Storage Class (gp2)**
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: dind-ebs
+### Specify name of provisioner
+provisioner: codefresh.io/dind-volume-provisioner-runner-<-NAMESPACE-> # <---- replace <-NAMESPACE-> with the runner namespace
+volumeBindingMode: Immediate
+parameters:
+ # ebs or ebs-csi
+ volumeBackend: ebs
+ # Valid zone
+ AvailabilityZone: us-west-2a # <---- change it to your AZ
+ # gp2, gp3 or io1
+ VolumeType: gp2
+ # in case of io1 you can set iops
+ # iops: 1000
+ # ext4 or xfs (defaults to xfs; ensure that xfstools is installed)
+ fsType: xfs
+```
+**Storage Class (gp3)**
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: dind-ebs
+### Specify name of provisioner
+provisioner: codefresh.io/dind-volume-provisioner-runner-<-NAMESPACE-> # <---- replace <-NAMESPACE-> with the runner namespace
+volumeBindingMode: Immediate
+parameters:
+ # ebs or ebs-csi
+ volumeBackend: ebs
+ # Valid zone
+ AvailabilityZone: us-west-2a # <---- change it to your AZ
+ # gp2, gp3 or io1
+ VolumeType: gp3
+ # ext4 or xfs (defaults to xfs; ensure that xfstools is installed)
+ fsType: xfs
+ # I/O operations per second. Only effective when the gp3 volume type is specified.
+ # Default value - 3000.
+ # Max - 16,000
+ iops: "5000"
+ # Throughput in MiB/s. Only effective when gp3 volume type is specified.
+ # Default value - 125.
+ # Max - 1000.
+ throughput: "500"
+```
+
+Apply storage class manifest:
+```shell
+kubectl apply -f dind-ebs.yaml
+```
+
+Change your [runtime environment]({{site.baseurl}}/docs/administration/codefresh-runner/#full-runtime-environment-specification) configuration:
+
+The same AZ you selected before should be used in the nodeSelector inside the runtime configuration:
+
+To get a list of all available runtimes execute:
+
+```shell
+codefresh get runtime-environments
+```
+
+Choose the runtime you have just added and get its yaml representation:
+
+```shell
+codefresh get runtime-environments my-eks-cluster/codefresh -o yaml > runtime.yaml
+```
+
+Under the `dockerDaemonScheduler.cluster` block, add the nodeSelector `topology.kubernetes.io/zone: `. It should be at the same level as `clusterProvider` and `namespace`. Also, modify the `pvcs.dind` block to use the Storage Class you created above (`dind-ebs`).
+
+`runtime.yaml` example:
+
+```yaml
+version: 1
+metadata:
+ ...
+runtimeScheduler:
+ cluster:
+ clusterProvider:
+ accountId: 5f048d85eb107d52b16c53ea
+ selector: my-eks-cluster
+ namespace: codefresh
+ serviceAccount: codefresh-engine
+ annotations: {}
+dockerDaemonScheduler:
+ cluster:
+ clusterProvider:
+ accountId: 5f048d85eb107d52b16c53ea
+ selector: my-eks-cluster
+ namespace: codefresh
+ nodeSelector:
+ topology.kubernetes.io/zone: us-central1-a
+ serviceAccount: codefresh-engine
+ annotations: {}
+ userAccess: true
+ defaultDindResources:
+ requests: ''
+ pvcs:
+ dind:
+ volumeSize: 30Gi
+ storageClassName: dind-ebs
+ reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName'
+extends:
+ - system/default/hybrid/k8s_low_limits
+description: '...'
+accountId: 5f048d85eb107d52b16c53ea
+```
+
+Update your runtime environment with the [patch command](https://codefresh-io.github.io/cli/operate-on-resources/patch/):
+
+```shell
+codefresh patch runtime-environment my-eks-cluster/codefresh -f runtime.yaml
+```
+
+If necessary, delete all existing PV and PVC objects left over from the default local provisioner:
+```
+kubectl delete pvc -l codefresh-app=dind -n
+kubectl delete pv -l codefresh-app=dind -n
+```
+
+>You can define all of the options above for a clean Runner installation with a [values.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml) file:
+
+`values-ebs.yaml` example:
+
+```yaml
+### Storage parameter example for aws ebs disks
+Storage:
+ Backend: ebs
+ AvailabilityZone: us-east-1d
+ VolumeType: gp3
+ #AwsAccessKeyId: ABCDF
+ #AwsSecretAccessKey: ZYXWV
+ Encrypted: # encrypt volume, default is false
+ VolumeProvisioner:
+ ServiceAccount:
+ Annotations:
+ eks.amazonaws.com/role-arn: arn:aws:iam:::role/
+NodeSelector: topology.kubernetes.io/zone=us-east-1d
+...
+ Runtime:
+ NodeSelector: # dind and engine pods node-selector (--build-node-selector)
+ topology.kubernetes.io/zone: us-east-1d
+```
+
+```shell
+codefresh runner init --values values-ebs.yaml --exec-demo-pipeline false --skip-cluster-integration true
+```
+
+### Installing to EKS with Autoscaling
+
+#### Step 1- EKS Cluster Creation
+
+Below is the content of the `cluster.yaml` file. We define separate node pools for dind, engine, and other services (like runner, cluster-autoscaler, etc.).
+
+Before creating the cluster, we created two separate IAM policies:
+
+* one for the volume-provisioner controller (`policy/runner-ebs`), which should create and delete volumes
+* one for the dind pods (`policy/dind-ebs`), which should be able to attach/detach those volumes to the appropriate nodes, using the [iam attachPolicyARNs options](https://eksctl.io/usage/iam-policies/).
+
+`policy/dind-ebs:`
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:DescribeVolumes"
+ ],
+ "Resource": [
+ "*"
+ ]
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:DetachVolume",
+ "ec2:AttachVolume"
+ ],
+ "Resource": [
+ "*"
+ ]
+ }
+ ]
+}
+```
+
+`policy/runner-ebs:`
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:AttachVolume",
+ "ec2:CreateSnapshot",
+ "ec2:CreateTags",
+ "ec2:CreateVolume",
+ "ec2:DeleteSnapshot",
+ "ec2:DeleteTags",
+ "ec2:DeleteVolume",
+ "ec2:DescribeInstances",
+ "ec2:DescribeSnapshots",
+ "ec2:DescribeTags",
+ "ec2:DescribeVolumes",
+ "ec2:DetachVolume"
+ ],
+ "Resource": "*"
+ }
+ ]
+}
+```
+
+`my-eks-cluster.yaml`
+
+```yaml
+apiVersion: eksctl.io/v1alpha5
+kind: ClusterConfig
+metadata:
+ name: my-eks
+ region: us-west-2
+ version: "1.15"
+
+nodeGroups:
+ - name: dind
+ instanceType: m5.2xlarge
+ desiredCapacity: 1
+ iam:
+ attachPolicyARNs:
+ - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
+ - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
+ - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
+ - arn:aws:iam::XXXXXXXXXXXX:policy/dind-ebs
+ withAddonPolicies:
+ autoScaler: true
+ ssh: # import public key from file
+ publicKeyPath: ~/.ssh/id_rsa.pub
+ minSize: 1
+ maxSize: 50
+ volumeSize: 50
+ volumeType: gp2
+ ebsOptimized: true
+ availabilityZones: ["us-west-2a"]
+ kubeletExtraConfig:
+ enableControllerAttachDetach: false
+ labels:
+ node-type: dind
+ taints:
+ codefresh.io: "dinds:NoSchedule"
+
+ - name: engine
+ instanceType: m5.large
+ desiredCapacity: 1
+ iam:
+ withAddonPolicies:
+ autoScaler: true
+ minSize: 1
+ maxSize: 10
+ volumeSize: 50
+ volumeType: gp2
+ availabilityZones: ["us-west-2a"]
+ labels:
+ node-type: engine
+ taints:
+ codefresh.io: "engine:NoSchedule"
+
+ - name: addons
+ instanceType: m5.2xlarge
+ desiredCapacity: 1
+ ssh: # import public key from file
+ publicKeyPath: ~/.ssh/id_rsa.pub
+ minSize: 1
+ maxSize: 10
+ volumeSize: 50
+ volumeType: gp2
+ ebsOptimized: true
+ availabilityZones: ["us-west-2a"]
+ labels:
+ node-type: addons
+ iam:
+ attachPolicyARNs:
+ - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
+ - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
+ - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
+ - arn:aws:iam::XXXXXXXXXXXX:policy/runner-ebs
+ withAddonPolicies:
+ autoScaler: true
+availabilityZones: ["us-west-2a", "us-west-2b", "us-west-2c"]
+```
+
+Execute:
+
+```shell
+eksctl create cluster -f my-eks-cluster.yaml
+```
+
+The config above will leverage [Amazon Linux 2](https://aws.amazon.com/amazon-linux-2/) as the default operating system for the nodes in the nodegroup. To leverage [Bottlerocket-based nodes](https://aws.amazon.com/bottlerocket/), specify the AMI Family using `amiFamily: Bottlerocket` and add the following additional IAM Policies: `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly` and `arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore`.
+
+>Bottlerocket is an open source Linux based Operating System specifically built to run containers. It focuses on security, simplicity and easy updates via transactions. Find more information in the [official repository](https://github.com/bottlerocket-os/bottlerocket).
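+For example, a sketch of the `dind` node group from `my-eks-cluster.yaml` adjusted for Bottlerocket (only the changed fields are shown; the added policy ARNs are the ones listed above):
+
+```yaml
+nodeGroups:
+  - name: dind
+    amiFamily: Bottlerocket   # use Bottlerocket AMIs instead of Amazon Linux 2
+    iam:
+      attachPolicyARNs:
+        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
+        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
+        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
+        - arn:aws:iam::XXXXXXXXXXXX:policy/dind-ebs
+        # additional policies required by Bottlerocket:
+        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
+        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
+```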
+
+#### Step 2 - Autoscaler
+
+Once the cluster is up and running we need to install the [cluster autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html):
+
+We used the iam AddonPolicies `"autoScaler: true"` in the cluster.yaml file, so there is no need to create a separate IAM policy or add Auto Scaling group tags; everything is done automatically.
+
+Deploy the Cluster Autoscaler:
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
+```
+
+Add the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation
+
+```shell
+kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
+```
+
+Edit the cluster-autoscaler container command to replace `` with *my-eks* (the name of the cluster from the cluster.yaml file), and add the following options:
+`--balance-similar-node-groups` and `--skip-nodes-with-system-pods=false`
+
+```shell
+kubectl -n kube-system edit deployment.apps/cluster-autoscaler
+```
+
+```yaml
+spec:
+ containers:
+ - command:
+ - ./cluster-autoscaler
+ - --v=4
+ - --stderrthreshold=info
+ - --cloud-provider=aws
+ - --skip-nodes-with-local-storage=false
+ - --expander=least-waste
+ - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-eks
+ - --balance-similar-node-groups
+ - --skip-nodes-with-system-pods=false
+```
+
+We created our EKS cluster with version 1.15, so the appropriate cluster autoscaler version from [https://github.com/kubernetes/autoscaler/releases](https://github.com/kubernetes/autoscaler/releases) is 1.15.6:
+
+```shell
+kubectl -n kube-system set image deployment.apps/cluster-autoscaler cluster-autoscaler=us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.15.6
+```
+
+Check your own version to make sure that the autoscaler version is appropriate.
+
+#### Step 3 - Optional: Configure overprovisioning with the Cluster Autoscaler
+
+See details in the [FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler).
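+For reference, below is a minimal overprovisioning sketch based on that FAQ. The priority class value, replica count, and resource requests here are assumptions you should adapt to your own node sizes:
+
+```yaml
+# Low-priority "pause" pods reserve spare capacity; when real builds
+# need room, these pods are evicted and the Cluster Autoscaler brings
+# up replacement nodes in advance.
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+  name: overprovisioning
+value: -1
+globalDefault: false
+description: "Priority class for overprovisioning pause pods"
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: overprovisioning
+  namespace: kube-system
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      run: overprovisioning
+  template:
+    metadata:
+      labels:
+        run: overprovisioning
+    spec:
+      priorityClassName: overprovisioning
+      containers:
+        - name: reserve-resources
+          image: registry.k8s.io/pause:3.9
+          resources:
+            requests:
+              cpu: "1"
+              memory: 2Gi
+```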
+
+#### Step 4 - Adding an EKS cluster as a runner to the Codefresh platform with EBS support
+
+Make sure that you are targeting the correct cluster:
+
+```shell
+$ kubectl config current-context
+my-aws-runner
+```
+
+Install the runner passing additional options:
+
+```shell
+codefresh runner init \
+--name my-aws-runner \
+--kube-node-selector=topology.kubernetes.io/zone=us-west-2a \
+--build-node-selector=topology.kubernetes.io/zone=us-west-2a \
+--kube-namespace cf --kube-context-name my-aws-runner \
+--set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons \
+--set-value=Storage.Backend=ebs \
+--set-value=Storage.AvailabilityZone=us-west-2a
+```
+
+* You should specify the zone in which you want your volumes to be created, example: `--set-value=Storage.AvailabilityZone=us-west-2a`
+* (Optional) To assign the volume-provisioner to a specific node, for example a node group that has an IAM role allowing it to create EBS volumes, use: `--set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons`
+
+If you want to use [encrypted EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#EBSEncryption_key_mgmt) (they are unencrypted by default), add the custom value `--set-value=Storage.Encrypted=true`.
+If you already have a key, add its ARN via the `--set-value=Storage.KmsKeyId=` value; otherwise a key is generated by AWS. Here is the full command:
+
+```shell
+codefresh runner init \
+--name my-aws-runner \
+--kube-node-selector=topology.kubernetes.io/zone=us-west-2a \
+--build-node-selector=topology.kubernetes.io/zone=us-west-2a \
+--kube-namespace cf --kube-context-name my-aws-runner \
+--set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons \
+--set-value=Storage.Backend=ebs \
+--set-value=Storage.AvailabilityZone=us-west-2a \
+--set-value=Storage.Encrypted=[false|true] \
+--set-value=Storage.KmsKeyId=
+```
+
+For an explanation of all other options run `codefresh runner init --help` ([global parameter table](#customizing-the-wizard-installation)).
+
+At this point the quick start wizard will start the installation.
+
+Once that is done, we need to modify the runtime environment of `my-aws-runner` to specify the necessary toleration, nodeSelector, and disk size:
+
+```shell
+codefresh get re --limit=100 my-aws-runner/cf -o yaml > my-runtime.yml
+```
+
+Modify the file `my-runtime.yml` as shown below:
+
+```yaml
+version: null
+metadata:
+ agent: true
+ trial:
+ endingAt: 1593596844167
+ reason: Codefresh hybrid runtime
+ started: 1592387244207
+ name: my-aws-runner/cf
+ changedBy: ivan-codefresh
+ creationTime: '2020/06/17 09:47:24'
+runtimeScheduler:
+ cluster:
+ clusterProvider:
+ accountId: 5cb563d0506083262ba1f327
+ selector: my-aws-runner
+ namespace: cf
+ nodeSelector:
+ node-type: engine
+ tolerations:
+ - effect: NoSchedule
+ key: codefresh.io
+ operator: Equal
+ value: engine
+ annotations: {}
+dockerDaemonScheduler:
+ cluster:
+ clusterProvider:
+ accountId: 5cb563d0506083262ba1f327
+ selector: my-aws-runner
+ namespace: cf
+ nodeSelector:
+ node-type: dind
+ annotations: {}
+ defaultDindResources:
+ requests: ''
+ tolerations:
+ - effect: NoSchedule
+ key: codefresh.io
+ operator: Equal
+ value: dinds
+ pvcs:
+ dind:
+ volumeSize: 30Gi
+ reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName'
+ storageClassName: dind-local-volumes-runner-cf
+ userAccess: true
+extends:
+ - system/default/hybrid/k8s_low_limits
+description: 'Runtime environment configure to cluster: my-aws-runner and namespace: cf'
+accountId: 5cb563d0506083262ba1f327
+```
+
+Apply the changes:
+
+```shell
+codefresh patch re my-aws-runner/cf -f my-runtime.yml
+```
+
+That's all. Now you can go to the UI and run a pipeline on the runtime environment `my-aws-runner/cf`.
+
+### Injecting AWS ARN roles into the cluster
+
+**Step 1** - Make sure the OIDC provider is connected to the cluster
+
+See:
+
+* [https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)
+* [https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/)
+
+**Step 2** - Create IAM role and policy as explained in [https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html)
+
+Here, in addition to the policy explained, you need a Trust Relationship established between this role and the OIDC entity.
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
+ },
+ "Action": "sts:AssumeRoleWithWebIdentity",
+ "Condition": {
+ "StringEquals": {
+ "${OIDC_PROVIDER}:sub": "system:serviceaccount:${CODEFRESH_NAMESPACE}:codefresh-engine"
+ }
+ }
+ }
+ ]
+}
+```
+
+**Step 3** - Annotate the `codefresh-engine` Kubernetes Service Account in the namespace where the Codefresh Runner is installed with the proper IAM role.
+
+```shell
+kubectl annotate -n ${CODEFRESH_NAMESPACE} sa codefresh-engine eks.amazonaws.com/role-arn=${ROLE_ARN}
+```
+
+Once the annotation is added, you should see it when you describe the Service Account.
+
+```shell
+kubectl describe -n ${CODEFRESH_NAMESPACE} sa codefresh-engine
+
+Name: codefresh-engine
+Namespace: codefresh
+Labels: app=app-proxy
+ version=1.6.8
+Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/Codefresh
+Image pull secrets:
+Mountable secrets: codefresh-engine-token-msj8d
+Tokens: codefresh-engine-token-msj8d
+Events:
+```
+
+**Step 4** - Using the AWS assumed role identity
+
+After annotating the Service Account, run a pipeline to test the AWS resource access:
+
+```yaml
+RunAwsCli:
+ title : Communication with AWS
+ image : mesosphere/aws-cli
+ stage: "build"
+ commands :
+ - apk update
+ - apk add jq
+ - env
+ - cat /codefresh/volume/sensitive/.kube/web_id_token
+ - aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name mh9test --web-identity-token file://$AWS_WEB_IDENTITY_TOKEN_FILE --duration-seconds 1000 > /tmp/irp-cred.txt
+ - export AWS_ACCESS_KEY_ID="$(cat /tmp/irp-cred.txt | jq -r ".Credentials.AccessKeyId")"
+ - export AWS_SECRET_ACCESS_KEY="$(cat /tmp/irp-cred.txt | jq -r ".Credentials.SecretAccessKey")"
+ - export AWS_SESSION_TOKEN="$(cat /tmp/irp-cred.txt | jq -r ".Credentials.SessionToken")"
+ - rm /tmp/irp-cred.txt
+ - aws s3api get-object --bucket jags-cf-eks-pod-secrets-bucket --key eks-pod2019-12-10-21-18-32-560931EEF8561BC4 getObjectNotWorks.txt
+```
+
+### Installing behind a proxy
+
+If you want to deploy the Codefresh runner on a Kubernetes cluster that doesn’t have direct access to `g.codefresh.io`, and has to go trough a proxy server to access `g.codefresh.io`, you will need to follow these additional steps:
+
+**Step 1** - Follow the installation instructions of the previous section
+
+**Step 2** - Run `kubectl edit deployment runner -n codefresh-runtime` and add the proxy variables like this:
+
+```yaml
+spec:
+ containers:
+ - env:
+ - name: HTTP_PROXY
+ value: http://:port
+ - name: HTTPS_PROXY
+ value: http://:port
+ - name: http_proxy
+ value: http://:port
+ - name: https_proxy
+ value: http://:port
+ - name: no_proxy
+ value: localhost,127.0.0.1,
+ - name: NO_PROXY
+ value: localhost,127.0.0.1,
+```
+
+**Step 3** - Add the following variables to your runtime.yaml, both under the `runtimeScheduler:` and under `dockerDaemonScheduler:` blocks inside the `envVars:` section
+
+```yaml
+HTTP_PROXY: http://:port
+http_proxy: http://:port
+HTTPS_PROXY: http://:port
+https_proxy: http://:port
+no_proxy: localhost, 127.0.0.1,
+NO_PROXY: localhost, 127.0.0.1,
+```
+
+**Step 4** - Add `.firebaseio.com` to the allowed-sites of the proxy server
+
+**Step 5** - Exec into the `dind` pod and run `ifconfig`
+
+If the MTU value for `docker0` is higher than the MTU value of `eth0` (sometimes the `docker0` MTU is 1500, while the `eth0` MTU is 1440), you need to change this: the `docker0` MTU must be lower than the `eth0` MTU.
+
+To fix this, edit the configmap in the codefresh-runtime namespace:
+
+```shell
+kubectl edit cm codefresh-dind-config -n codefresh-runtime
+```
+
+And add this after one of the commas:
+`\"mtu\":1440,`
+
+### Installing on Rancher RKE 2.X
+
+#### Step 1 - Configure the kubelet to work with the runner's StorageClass
+
+The runner's default StorageClass creates the persistent cache volume from local storage on each node. We need to edit the cluster config to allow this.
+
+In the Rancher UI (v2.5.9 and earlier), drill into the target cluster and then click the Edit Cluster button at the top-right.
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/rancher-cluster.png"
+ url="/images/administration/runner/rancher-cluster.png"
+ alt="Drill into your cluster and click Edit Cluster on the right"
+ caption="Drill into your cluster and click Edit Cluster on the right"
+ max-width="100%"
+ %}
+
+In Rancher v2.6+ with the updated UI, open the Cluster Management in the left panel, then click the three-dot menu near the corresponding cluster and select 'Edit Config'.
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/rancher-cluster-2.png"
+ url="/images/administration/runner/rancher-cluster-2.png"
+ alt="Click Edit Cluster on the right in your cluster list"
+ caption="Click Edit Cluster on the right in your cluster list"
+ max-width="100%"
+ %}
+
+On the edit cluster page, scroll down to the Cluster Options section and click its **Edit as YAML** button
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/rancher-edit-as-yaml.png"
+ url="/images/administration/runner/rancher-edit-as-yaml.png"
+ alt="Cluster Options -> Edit as YAML"
+ caption="Cluster Options -> Edit as YAML"
+ max-width="100%"
+ %}
+Edit the YAML to include an extra mount in the kubelet service:
+
+```yaml
+rancher_kubernetes_engine_config:
+ ...
+ services:
+ ...
+ kubelet:
+ extra_binds:
+ - '/var/lib/codefresh:/var/lib/codefresh:rshared'
+```
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/runner/rancher-kublet.png"
+ url="/images/administration/runner/rancher-kublet.png"
+ alt="Add volume to rancher_kubernetes_engine_config.services.kublet.extra_binds"
+ caption="Add volume to rancher_kubernetes_engine_config.services.kublet.extra_binds"
+ max-width="100%"
+ %}
+
+#### Step 2 - Make sure your kubeconfig user is a ClusterAdmin
+
+The user in your kubeconfig must be a cluster admin in order to install the runner. If you plan to have your pipelines connect to this cluster as a cluster admin, then you can go ahead and create a Codefresh user for this purpose in the Rancher UI with a **non-expiring** kubeconfig token. This is the easiest way to do the installation.
+
+However, if you want your pipelines to connect to this cluster with fewer privileges, you can use your personal user account (with Cluster Admin privileges) for the installation, and then set up less-privileged access later (in Step 5). In that case, you can now move on to Step 3.
+
+Follow these steps to create a Codefresh user with Cluster Admin rights, from the Rancher UI:
+
+* Click Security at the top, and then choose Users
+ {% include image.html lightbox="true" file="/images/administration/runner/rancher-security.png" url="/images/administration/runner/rancher-security.png" alt="Create a cluster admin user for Codefresh" caption="Create a cluster admin user for Codefresh" max-width="100%" %}
+* Click the Add User button, and under Global Permissions check the box for **Restricted Administrator**
+* Log out of the Rancher UI, and then log back in as the new user
+* Click your user icon at the top-right, and then choose **API & Keys**
+* Click the **Add Key** button and create a kubeconfig token with Expires set to Never
+* Copy the Bearer Token field (combines Access Key and Secret Key)
+* Edit your kubeconfig and put the Bearer Token you copied in the `token` field of your user, as in the sketch below
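+
+A minimal sketch with `kubectl` (the user name and token value here are hypothetical):
+
+```shell
+# Store the copied Bearer Token in the kubeconfig user entry
+kubectl config set-credentials codefresh-admin --token="kubeconfig-u-abc123:xxxxxxxxxxxx"
+```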
+
+#### Step 3 - Install the Runner
+
+If you've created your kubeconfig from the Rancher UI, it will contain an API endpoint that is not reachable from within the cluster. To work around this, we need to tell the runner to use Kubernetes' generic internal API endpoint instead. Also, if you didn't create a Codefresh user in step 2 and your kubeconfig contains your personal user account, you should add the `--skip-cluster-integration` option.
+
+Install the runner with a Codefresh user (ClusterAdmin, non-expiring token):
+
+```shell
+codefresh runner init \
+ --set-value KubernetesHost=https://kubernetes.default.svc.cluster.local
+```
+
+Or install the runner with your personal user account:
+
+```shell
+codefresh runner init \
+ --set-value KubernetesHost=https://kubernetes.default.svc.cluster.local \
+ --skip-cluster-integration
+```
+
+The wizard will then ask you some basic questions.
+
+#### Step 4 - Update the runner's Docker MTU
+
+By default, RKE nodes use the [Canal CNI](https://rancher.com/docs/rancher/v2.x/en/faq/networking/cni-providers/#canal), which combines elements of Flannel and Calico, and uses VXLAN encapsulation. This VXLAN encapsulation has a 50-byte overhead, thus reducing the MTU of its virtual interfaces from the standard 1500 to 1450. For example, when running `ifconfig` on an RKE 2.5.5 node, you might see several interfaces like this. Note the `MTU:1450`.
+
+```shell
+cali0f8ac592086 Link encap:Ethernet HWaddr ee:ee:ee:ee:ee:ee
+ inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:Link
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:11106 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:10908 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:922373 (922.3 KB) TX bytes:9825590 (9.8 MB)
+```
+
+We must reduce the Docker MTU used by the runner's Docker in Docker (dind) pods to fit within this lower MTU. This is stored in a configmap in the namespace where the runner is installed. Assuming that you installed the runner into the `codefresh` namespace, you would edit the configmap like this:
+
+```shell
+kubectl edit cm codefresh-dind-config -n codefresh
+```
+
+In the editor, update the **daemon.json** field: add `,\"mtu\":1440` just before the last curly brace.
+ {% include image.html
+ lightbox="true"
+ file="/images/administration/runner/rancher-mtu.png"
+ url="/images/administration/runner/rancher-mtu.png"
+ alt="Update the runner's Docker MTU"
+ caption="Update the runner's Docker MTU"
+ max-width="100%"
+ %}
+
+#### Step 5 - Create the Cluster Integration
+
+If you created a user in Step 2 and used it to install the runner in Step 3, then you can skip this step - your installation is complete!
+
+However, if you installed the runner with the `--skip-cluster-integration` option, you should follow the documentation to [Add a Rancher Cluster]({{site.baseurl}}/docs/deploy-to-kubernetes/add-kubernetes-cluster/#adding-a-rancher-cluster) to your Kubernetes Integrations.
+
+Once complete, you can go to the Codefresh UI and run a pipeline on the new runtime, including steps that deploy to the Kubernetes Integration.
+
+#### Troubleshooting TLS Errors
+
+Depending on your Rancher configuration, you may need to allow insecure HTTPS/TLS connections. You can do this by adding an environment variable to the runner deployment.
+
+Assuming that you installed the runner into the `codefresh` namespace, you would edit the runner deployment like this:
+
+```shell
+kubectl edit deploy runner -n codefresh
+```
+
+In the editor, add this environment variable under `spec.containers.env[]`:
+
+```yaml
+- name: NODE_TLS_REJECT_UNAUTHORIZED
+ value: "0"
+```
+
+### Installing on Google Kubernetes Engine
+
+If you are installing the Codefresh runner on a Kubernetes cluster on [GKE](https://cloud.google.com/kubernetes-engine/):
+
+* make sure your user has the `Kubernetes Engine Cluster Admin` role in the Google Cloud console, and
+* bind your user to the `cluster-admin` Kubernetes cluster role:
+
+```shell
+kubectl create clusterrolebinding cluster-admin-binding \
+ --clusterrole cluster-admin \
+ --user $(gcloud config get-value account)
+```
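+
+To confirm that the binding gives you cluster-admin rights, you can ask the API server directly:
+
+```shell
+# Should print "yes" for a cluster admin
+kubectl auth can-i '*' '*' --all-namespaces
+```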
+
+
+#### Storage options on GKE
+
+**Local SSD**
+
+If you want to use *LocalSSD* in GKE:
+
+*Prerequisites:* [GKE cluster with local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd)
+
+Install Runner with the Wizard:
+
+```shell
+codefresh runner init [options] --set-value=Storage.LocalVolumeParentDir=/mnt/disks/ssd0/codefresh-volumes \
+ --build-node-selector=cloud.google.com/gke-local-ssd=true
+```
+
+Or with `values-example.yaml` values file:
+
+```yaml
+...
+### Storage parameters example for gke-local-ssd
+ Storage:
+ Backend: local
+ LocalVolumeParentDir: /mnt/disks/ssd0/codefresh-volumes
+ NodeSelector: cloud.google.com/gke-local-ssd=true
+...
+ Runtime:
+ NodeSelector: # dind and engine pods node-selector (--build-node-selector)
+ cloud.google.com/gke-local-ssd: "true"
+...
+```
+```shell
+codefresh runner init [options] --values values-example.yaml
+```
+
+To configure an existing Runner with local SSDs, follow this article:
+
+[How-to: Configuring an existing Runtime Environment with Local SSDs (GKE only)](https://support.codefresh.io/hc/en-us/articles/360016652920-How-to-Configuring-an-existing-Runtime-Environment-with-Local-SSDs-GKE-only-)
+
+
+**GCE Disks**
+
+If you want to use *GCE Disks*:
+
+*Prerequisites:* The volume provisioner (`dind-volume-provisioner`) must have permissions to create/delete/get GCE disks
+
+There are three options to provide cloud credentials:
+
+* Run the `dind-volume-provisioner-runner` pod on a node with an IAM role that is allowed to create/delete/get GCE disks
+* Create a Google Service Account with the `ComputeEngine.StorageAdmin` role, download its key in JSON format, and pass it to `codefresh runner init` with `--set-file=Storage.GoogleServiceAccount=/path/to/google-service-account.json`
+* Use [Google Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to assign the IAM role to the `volume-provisioner-runner` service account
+
+Note that builds will run in a single availability zone, so you must specify the AvailabilityZone parameters.
+
+
+##### Runner installation with GCE Disks (Google SA JSON key)
+
+Using the Wizard:
+
+```shell
+codefresh runner init [options] \
+ --set-value=Storage.Backend=gcedisk \
+ --set-value=Storage.AvailabilityZone=us-central1-c \
+ --kube-node-selector=topology.kubernetes.io/zone=us-central1-c \
+ --build-node-selector=topology.kubernetes.io/zone=us-central1-c \
+ --set-file=Storage.GoogleServiceAccount=/path/to/google-service-account.json
+```
+
+Using the values `values-example.yaml` file:
+```yaml
+...
+### Storage parameter example for GCE disks
+ Storage:
+ Backend: gcedisk
+ AvailabilityZone: us-central1-c
+ GoogleServiceAccount: > #serviceAccount.json content
+ {
+ "type": "service_account",
+ "project_id": "...",
+ "private_key_id": "...",
+ "private_key": "...",
+ "client_email": "...",
+ "client_id": "...",
+ "auth_uri": "...",
+ "token_uri": "...",
+ "auth_provider_x509_cert_url": "...",
+ "client_x509_cert_url": "..."
+ }
+ NodeSelector: topology.kubernetes.io/zone=us-central1-c
+...
+ Runtime:
+ NodeSelector: # dind and engine pods node-selector (--build-node-selector)
+ topology.kubernetes.io/zone: us-central1-c
+...
+```
+```shell
+codefresh runner init [options] --values values-example.yaml
+```
+
+
+##### Runner installation with GCE Disks (Workload Identity with IAM role)
+
+Using the values `values-example.yaml` file:
+
+```yaml
+...
+### Storage parameter example for GCE disks
+ Storage:
+ Backend: gcedisk
+ AvailabilityZone: us-central1-c
+ VolumeProvisioner:
+ ServiceAccount:
+ Annotations: #annotation to the volume-provisioner service account, using the email address of the Google service account
+ iam.gke.io/gcp-service-account: @.iam.gserviceaccount.com
+ NodeSelector: topology.kubernetes.io/zone=us-central1-c
+...
+ Runtime:
+ NodeSelector: # dind and engine pods node-selector (--build-node-selector)
+ topology.kubernetes.io/zone: us-central1-c
+...
+```
+```shell
+codefresh runner init [options] --values values-example.yaml
+```
+
+Create the binding between Kubernetes service account and Google service account:
+
+```shell
+export K8S_NAMESPACE=codefresh
+export KSA_NAME=volume-provisioner-runner
+export GSA_NAME=
+export PROJECT_ID=
+
+gcloud iam service-accounts add-iam-policy-binding \
+ --role roles/iam.workloadIdentityUser \
+ --member "serviceAccount:${PROJECT_ID}.svc.id.goog[${K8S_NAMESPACE}/${KSA_NAME}]" \
+ ${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
+```
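+
+If you prefer annotating the Kubernetes service account with `kubectl` instead of through the values file, an equivalent sketch using the same variables:
+
+```shell
+# Point the KSA at the Google service account for Workload Identity
+kubectl annotate serviceaccount ${KSA_NAME} -n ${K8S_NAMESPACE} \
+  iam.gke.io/gcp-service-account=${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
+```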
+
+To configure an existing Runner with GCE disks, follow this article:
+
+[How-to: Configuring an existing Runtime Environment with GCE disks](https://support.codefresh.io/hc/en-us/articles/360016652900-How-to-Configuring-an-existing-Runtime-Environment-with-GCE-disks)
+
+
+##### Using multiple Availability Zones
+
+Currently, to support effective caching with GCE disks, the builds/pods need to be scheduled in a single AZ (this is more related to a GCP limitation than a Codefresh runner issue).
+
+If you have Kubernetes nodes running in multiple Availability Zones and wish to use the Codefresh runner, we suggest the following options:
+
+**Option A** - Provision a new Kubernetes cluster that runs in a single AZ only, dedicated to the Codefresh runner. This is the preferred solution, as it avoids extra complexity.
+
+**Option B** - Install the Codefresh runner in your multi-zone cluster and let it run in the default Node Pool. In this case, you must specify `--build-node-selector=` (e.g.: `--build-node-selector=topology.kubernetes.io/zone=us-central1-c`) or simply modify the Runtime environment as below:
+
+```shell
+codefresh get re $RUNTIME_NAME -o yaml > re.yaml
+```
+
+Edit the yaml:
+
+```yaml
+version: 2
+metadata:
+ ...
+runtimeScheduler:
+ cluster:
+ nodeSelector: #schedule engine pod onto a node whose labels match the nodeSelector
+ topology.kubernetes.io/zone: us-central1-c
+ ...
+dockerDaemonScheduler:
+ cluster:
+ nodeSelector: #schedule dind pod onto a node whose labels match the nodeSelector
+ topology.kubernetes.io/zone: us-central1-c
+ ...
+ pvcs:
+ dind:
+ ...
+```
+
+Apply changes with:
+
+```shell
+codefresh patch re -f re.yaml
+```
+
+**Option C** - Like option B, but with a dedicated Node Pool
+
+**Option D** - Have separate Codefresh runner Runtimes, one for zone A, another for zone B, and so on. This technically works, but to distribute builds across the runner REs you must manually set the RE for every pipeline that should not use the default one.
+
+For example, if Venona-zoneA is the default RE, then for the pipelines that you want to run in Venona-zoneB, you'll need to modify their RE settings and explicitly select Venona-zoneB.
+
+Note that [Regional Persistent Disks](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd) are not currently supported by the Codefresh runner.
+
+
+### Installing on AKS
+
+**Azure Disks**
+
+*Prerequisite:* The volume provisioner (`dind-volume-provisioner`) must have permissions to create/delete/get Azure Disks
+
+Minimal IAM Role for dind-volume-provisioner:
+`dind-volume-provisioner-role.json`
+```json
+{
+ "Name": "CodefreshDindVolumeProvisioner",
+ "Description": "Perform create/delete/get disks",
+ "IsCustom": true,
+ "Actions": [
+ "Microsoft.Compute/disks/read",
+ "Microsoft.Compute/disks/write",
+ "Microsoft.Compute/disks/delete"
+
+ ],
+ "AssignableScopes": ["/subscriptions/"]
+}
+```
+
+If you use AKS with managed [identities for node group](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity), you can run the script below to assign the `CodefreshDindVolumeProvisioner` role to the AKS node identity:
+
+```shell
+export ROLE_DEFINITION_FILE=dind-volume-provisioner-role.json
+export SUBSCRIPTION_ID=$(az account show --query "id" | xargs echo )
+export RESOURCE_GROUP=codefresh-rt1
+export AKS_NAME=codefresh-rt1
+export LOCATION=$(az aks show -g $RESOURCE_GROUP -n $AKS_NAME --query location | xargs echo)
+export NODES_RESOURCE_GROUP=MC_${RESOURCE_GROUP}_${AKS_NAME}_${LOCATION}
+export NODE_SERVICE_PRINCIPAL=$(az aks show -g $RESOURCE_GROUP -n $AKS_NAME --query identityProfile.kubeletidentity.objectId | xargs echo)
+
+az role definition create --role-definition @${ROLE_DEFINITION_FILE}
+az role assignment create --assignee $NODE_SERVICE_PRINCIPAL --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$NODES_RESOURCE_GROUP --role CodefreshDindVolumeProvisioner
+```
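+
+You can verify that the assignment exists before proceeding; a quick check using the variables defined above:
+
+```shell
+# Lists the role assignment created for the node identity
+az role assignment list --assignee $NODE_SERVICE_PRINCIPAL --role CodefreshDindVolumeProvisioner -o table
+```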
+
+Now install the Codefresh Runner with the CLI wizard:
+```shell
+codefresh runner init --set-value Storage.Backend=azuredisk --set-value Storage.VolumeProvisioner.MountAzureJson=true
+```
+Or using [values-example.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml):
+```yaml
+Storage:
+ Backend: azuredisk
+ VolumeProvisioner:
+ MountAzureJson: true
+```
+```shell
+codefresh runner init --values values-example.yaml
+```
+Or with helm chart [values.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/charts/cf-runtime/values.yaml):
+```yaml
+storage:
+ backend: azuredisk
+ azuredisk:
+ skuName: Premium_LRS
+
+volumeProvisioner:
+ mountAzureJson: true
+```
+```shell
+helm install cf-runtime cf-runtime/cf-runtime -f ./generated_values.yaml -f values.yaml --create-namespace --namespace codefresh
+```
+
+
+### Internal Registry Mirror
+
+You can configure your Codefresh Runner to use an internal registry as a mirror for any container images that are mentioned in your pipelines.
+
+First, set up an internal registry as described in [https://docs.docker.com/registry/recipes/mirror/](https://docs.docker.com/registry/recipes/mirror/).
+
+Then locate the `codefresh-dind-config` config map in the namespace that houses the runner and edit it.
+
+```shell
+kubectl -n codefresh edit configmap codefresh-dind-config
+```
+
+Change the `data` field from:
+
+```yaml
+data:
+ daemon.json: "{\n \"hosts\": [ \"unix:///var/run/docker.sock\",\n \"tcp://0.0.0.0:1300\"],\n
+ \ \"storage-driver\": \"overlay2\",\n \"tlsverify\": true, \n \"tls\": true,\n
+ \ \"tlscacert\": \"/etc/ssl/cf-client/ca.pem\",\n \"tlscert\": \"/etc/ssl/cf/server-cert.pem\",\n
+ \ \"tlskey\": \"/etc/ssl/cf/server-key.pem\",\n \"insecure-registries\" : [\"192.168.99.100:5000\"],\n
+ \ \"metrics-addr\" : \"0.0.0.0:9323\",\n \"experimental\" : true\n}\n"
+```
+
+to
+
+```yaml
+data:
+ daemon.json: "{\n \"hosts\": [ \"unix:///var/run/docker.sock\",\n \"tcp://0.0.0.0:1300\"],\n
+ \ \"storage-driver\": \"overlay2\",\n \"tlsverify\": true, \n \"tls\": true,\n
+ \ \"tlscacert\": \"/etc/ssl/cf-client/ca.pem\",\n \"tlscert\": \"/etc/ssl/cf/server-cert.pem\",\n
+ \ \"tlskey\": \"/etc/ssl/cf/server-key.pem\",\n \"insecure-registries\" : [\"192.168.99.100:5000\"],\n
+ \ \"registry-mirrors\": [ \"https://\" ], \n
+ \ \"metrics-addr\" : \"0.0.0.0:9323\",\n \"experimental\" : true\n}\n"
+```
+
+This adds the line `\ \"registry-mirrors\": [ \"https://\" ], \n`, which contains a single registry to use as a mirror. Save and quit by typing `:wq`.
+
+Now, any container image that is used in your pipeline and isn't fully qualified will be pulled through the Docker registry that is configured as a mirror.
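+
+To confirm the mirror is active after the dind pods restart, you can query the Docker daemon inside a dind pod; a sketch, assuming the runner is in the `codefresh` namespace and substituting a real pod name:
+
+```shell
+# The configured mirror should appear under "Registry Mirrors"
+kubectl -n codefresh exec <dind-pod-name> -- docker info | grep -A1 "Registry Mirrors"
+```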
+
+
+### Installing the monitoring component
+
+If your cluster is located [behind the firewall](https://codefresh.io/docs/docs/administration/behind-the-firewall/), you might want to use the runner's monitoring component to send information about cluster resources to Codefresh, for example to the [Kubernetes](https://g.codefresh.io/kubernetes/services/) and [Helm Releases](https://g.codefresh.io/helm/releases/releasesNew/) dashboards.
+
+To install the monitoring component, use the `--install-monitor` flag in the `runner init` command:
+
+```shell
+codefresh runner init --install-monitor
+```
+
+Note that the monitoring component is not installed if you use `--install-monitor` together with the `--skip-cluster-integration` flag. If you want to skip the cluster integration during the runner installation, but still want cluster resources reported to the Codefresh dashboards, install the monitoring component separately:
+
+```shell
+codefresh install monitor --kube-context-name --kube-namespace --cluster-id --token
+```
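+
+For example, with placeholder values (all values below are hypothetical; substitute your own):
+
+```shell
+codefresh install monitor \
+  --kube-context-name my-cluster \
+  --kube-namespace codefresh \
+  --cluster-id my-cluster \
+  --token $CODEFRESH_API_TOKEN
+```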
+
+
+
+## Full runtime environment specification
+
+The following section explains the runtime environment specification and the options to modify it. Note that there are additional, hidden fields autogenerated by Codefresh that complete a full runtime spec. You can't directly see or edit them (unless you run your own [Codefresh On-Premises installation]({{site.baseurl}}/docs/administration/codefresh-on-prem/)).
+
+
+To get a list of all available runtimes, execute:
+```shell
+codefresh get runtime-environments
+#or
+codefresh get re
+```
+
+Choose the runtime that you want to inspect or modify and get its yaml/json representation:
+```shell
+codefresh get re my-eks-cluster/codefresh -o yaml > runtime.yaml
+#or
+codefresh get re my-eks-cluster/codefresh -o json > runtime.json
+```
+
+Update your runtime environment with the [patch command](https://codefresh-io.github.io/cli/operate-on-resources/patch/):
+```shell
+codefresh patch re my-eks-cluster/codefresh -f runtime.yaml
+```
+
+Below is an example of the default, basic runtime spec after you've installed the Runner:
+
+{% highlight yaml %}
+{% raw %}
+version: 1
+metadata:
+ ...
+runtimeScheduler:
+ cluster:
+ clusterProvider:
+ accountId: 5f048d85eb107d52b16c53ea
+ selector: my-eks-cluster
+ namespace: codefresh
+ serviceAccount: codefresh-engine
+ annotations: {}
+dockerDaemonScheduler:
+ cluster:
+ clusterProvider:
+ accountId: 5f048d85eb107d52b16c53ea
+ selector: my-eks-cluster
+ namespace: codefresh
+ serviceAccount: codefresh-engine
+ annotations: {}
+ userAccess: true
+ defaultDindResources:
+ requests: ''
+ pvcs:
+ dind:
+ storageClassName: dind-local-volumes-runner-codefresh
+extends:
+ - system/default/hybrid/k8s_low_limits
+description: '...'
+accountId: 5f048d85eb107d52b16c53ea
+{% endraw %}
+{% endhighlight %}
+
+### Top level fields
+
+{: .table .table-bordered .table-hover}
+| Field name | Type | Value |
+| -------------- |-------------------------| -------------------------|
+| `version` | string | Runtime environment version |
+| `metadata` | object | Meta-information |
+| `runtimeScheduler` | object | Engine pod definition |
+| `dockerDaemonScheduler` | object | Dind pod definition |
+| `extends` | array | System field (links to full runtime spec from Codefresh API) |
+| `description` | string | Runtime environment description (k8s context name and namespace) |
+| `accountId` | string | Account to which this runtime belongs |
+| `appProxy` | object | Optional field for [app-proxy]({{site.baseurl}}/docs/administration/codefresh-runner/#optional-installation-of-the-app-proxy) |
+
+### runtimeScheduler fields (engine)
+
+{: .table .table-bordered .table-hover}
+| Field name | Type | Value |
+| -------------- |-------------------------| -------------------------|
+| `image` | string | Override default engine image |
+| `imagePullPolicy` | string | Override image pull policy (default `IfNotPresent`) |
+| `type` | string | `KubernetesPod` |
+| `envVars` | object | Override or add environment variables passed into the engine pod |
+| `userEnvVars` | object | Add external env var(s) to the pipeline. See [Custom Global Environment Variables]({{site.baseurl}}/docs/administration/codefresh-runner/#custom-global-environment-variables) |
+| `cluster` | object | k8s related information (`namespace`, `serviceAccount`, `nodeSelector`) |
+| `resources` | object | Specify non-default `requests` and `limits` for engine pod |
+| `tolerations` | array | Add tolerations to engine pod |
+| `annotations` | object | Add custom annotations to engine pod (empty by default `{}`) |
+| `labels` | object | Add custom labels to engine pod (empty by default `{}`) |
+| `dnsPolicy` | string | Engine pod's [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) |
+| `dnsConfig` | object | Engine pod's [DNS config](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config) |
+
+`runtimeScheduler` example:
+{% highlight yaml %}
+{% raw %}
+runtimeScheduler:
+ imagePullPolicy: Always
+ cluster:
+ clusterProvider:
+ accountId: 5f048d85eb107d52b16c53ea
+ selector: my-eks-cluster
+ nodeSelector: #schedule engine pod onto a node whose labels match the nodeSelector
+ node-type: engine
+ namespace: codefresh
+ serviceAccount: codefresh-engine
+ annotations: {}
+ labels:
+ spotinst.io/restrict-scale-down: "true" #optional label to prevent node scaling down when the runner is deployed on spot instances using spot.io
+ envVars:
+ NODE_TLS_REJECT_UNAUTHORIZED: '0' #disable certificate validation for TLS connections (e.g. to g.codefresh.io)
+ METRICS_PROMETHEUS_ENABLED: 'true' #enable /metrics on engine pod
+ DEBUGGER_TIMEOUT: '30' #debug mode timeout duration (in minutes)
+ userEnvVars:
+ - name: GITHUB_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: github-token
+ key: token
+ resources:
+ requests:
+ cpu: 60m
+ memory: 500Mi
+ limits:
+ cpu: 1000m
+ memory: 2048Mi
+ tolerations:
+ - effect: NoSchedule
+ key: codefresh.io
+ operator: Equal
+ value: engine
+{% endraw %}
+{% endhighlight %}
+
+### dockerDaemonScheduler fields (dind)
+
+{: .table .table-bordered .table-hover}
+| Field name | Type | Value |
+| -------------- |-------------------------| -------------------------|
+| `dindImage` | string | Override default dind image |
+| `type` | string | `DindPodPvc` |
+| `envVars` | object | Override or add environment variables passed into the dind pod. See [IN-DIND cleaner]({{site.baseurl}}/docs/administration/codefresh-runner/#cleaners) |
+| `userVolumeMounts` with `userVolumes` | object | Add volume mounts to the pipeline. See [Custom Volume Mounts]({{site.baseurl}}/docs/administration/codefresh-runner/#custom-volume-mounts) |
+| `cluster` | object | k8s related information (`namespace`, `serviceAccount`, `nodeSelector`) |
+| `defaultDindResources` | object | Override `requests` and `limits` for dind pod (defaults are `cpu: 400m` and `memory: 800Mi`) |
+| `tolerations` | array | Add tolerations to dind pod |
+| `annotations` | object | Add custom annotations to dind pod (empty by default `{}`) |
+| `labels` | object | Add custom labels to dind pod (empty by default `{}`) |
+| `pvc` | object | Override default storage configuration for PersistentVolumeClaim (PVC) with `storageClassName`, `volumeSize`, `reuseVolumeSelector`. See [Volume Reusage Policy]({{site.baseurl}}/docs/administration/codefresh-runner/#volume-reusage-policy) |
+| `dnsPolicy` | string | Dind pod's [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) |
+| `dnsConfig` | object | Dind pod's [DNS config](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config) |
+
+`dockerDaemonScheduler` example:
+{% highlight yaml %}
+{% raw %}
+dockerDaemonScheduler:
+ cluster:
+ clusterProvider:
+ accountId: 5f048d85eb107d52b16c53ea
+ selector: my-eks-cluster
+ nodeSelector: #schedule dind pod onto a node whose labels match the nodeSelector
+ node-type: dind
+ namespace: codefresh
+ serviceAccount: codefresh-engine
+ annotations: {}
+ labels:
+ spotinst.io/restrict-scale-down: "true" #optional label to prevent node scaling down when the runner is deployed on spot instances using spot.io
+ userAccess: true
+ defaultDindResources:
+ requests: ''
+ limits:
+ cpu: 1000m
+ memory: 2048Mi
+ userVolumeMounts:
+ my-cert:
+ name: cert
+ mountPath: /etc/ssl/cert
+ readOnly: true
+ userVolumes:
+ my-cert:
+ name: cert
+ secret:
+ secretName: tls-secret
+ pvcs:
+ dind:
+ storageClassName: dind-local-volumes-runner-codefresh
+ volumeSize: 30Gi
+ reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName,pipeline_id'
+ tolerations:
+ - key: codefresh.io
+ operator: Equal
+ value: dinds
+ effect: NoSchedule
+{% endraw %}
+{% endhighlight %}
+
+### Custom Global Environment Variables
+You can add your own environment variables to the runtime environment, so that all pipeline steps have access to them. A typical example would be a shared secret that you want to pass to the pipeline.
+
+Under the `runtimeScheduler` block, you can add an additional element named `userEnvVars` that follows the same syntax as [secret/environment variables](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
+
+`runtime.yaml`
+{% highlight yaml %}
+{% raw %}
+...
+runtimeScheduler:
+ userEnvVars:
+ - name: GITHUB_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: github-token
+ key: token
+...
+{% endraw %}
+{% endhighlight %}
+
+### Custom Volume Mounts
+You can add your own volume mounts in the runtime environment, so that all pipeline steps have access to the same set of external files. A typical example of this scenario is when you want to make a set of SSL certificates available to all your pipelines. Rather than manually download the certificates in each pipeline, you can provide them centrally on the runtime level.
+
+Under the `dockerDaemonScheduler` block you can add two additional elements with names `userVolumeMounts` and `userVolumes` (they follow the same syntax as normal k8s `volumes` and `volumeMounts`) and define your own global volumes.
+
+`runtime.yaml`
+{% highlight yaml %}
+{% raw %}
+...
+dockerDaemonScheduler:
+ userVolumeMounts:
+ my-cert:
+ name: cert
+ mountPath: /etc/ssl/cert
+ readOnly: true
+ userVolumes:
+ my-cert:
+ name: cert
+ secret:
+ secretName: tls-secret
+...
+{% endraw %}
+{% endhighlight %}
+
+### Debug Timeout Duration
+
+The default timeout for [debug mode]({{site.baseurl}}/docs/configure-ci-cd-pipeline/debugging-pipelines/) is 14 minutes, even when the user is actively working. To change the duration of the debugger, update the runtime spec of the relevant runtime by adding the `DEBUGGER_TIMEOUT` environment variable. The value is a string that defines the timeout in minutes; for example, '30' sets a 30-minute timeout.
+
+Under `.runtimeScheduler`, add an `envVars` section, then add `DEBUGGER_TIMEOUT` under `envVars` with the value you want.
+
+```yaml
+...
+runtimeScheduler:
+ envVars:
+ DEBUGGER_TIMEOUT: '30'
+...
+```
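+
+Then apply the change with the same get/patch flow shown earlier:
+
+```shell
+codefresh get re $RUNTIME_NAME -o yaml > runtime.yaml
+# edit runtime.yaml to add DEBUGGER_TIMEOUT under runtimeScheduler.envVars
+codefresh patch re -f runtime.yaml
+```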
+
+### Volume Reusage Policy
+
+How volumes are reused depends on the volume selector configuration.
+The `reuseVolumeSelector` option is configurable in the runtime environment spec.
+
+The following options are available:
+
+* `reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName'` - a given PV can be used by **ANY** pipeline in your account (this is the **default** volume selector).
+* `reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName,pipeline_id'` - a given PV can be used only by a **single pipeline**.
+* `reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName,pipeline_id,io.codefresh.branch_name'` - a given PV can be used only by a **single pipeline AND single branch**.
+* `reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName,pipeline_id,trigger'` - a given PV can be used only by a **single pipeline AND single trigger**.
+
+For approach `codefresh-app,io.codefresh.accountName`:
+
+* Benefit: fewer PVs, and therefore lower cost (since any PV can be used by any pipeline, the cluster needs to keep fewer PVs in its pool for Codefresh)
+* Downside: since a PV can be used by any pipeline, PVs may hold assets and info from different pipelines, reducing the probability of a cache hit
+
+For approach `codefresh-app,io.codefresh.accountName,pipeline_id`:
+
+* Benefit: higher probability of a cache hit (no "spam" from other pipelines)
+* Downside: more PVs to keep (higher cost)
+
+
+To change the volume selector, get the runtime yaml spec and specify `reuseVolumeSelector` under the `dockerDaemonScheduler.pvcs.dind` block:
+
+```yaml
+ pvcs:
+ dind:
+ volumeSize: 30Gi
+ reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName,pipeline_id'
+```
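+
+Then apply the modified spec with the patch command:
+
+```shell
+codefresh patch re -f runtime.yaml
+```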
+
+## Runtime Cleaners
+
+### Key points
+
+* Codefresh pipelines require disk space for:
+ * [Pipeline Shared Volume](https://codefresh.io/docs/docs/yaml-examples/examples/shared-volumes-between-builds/) (`/codefresh/volume`, implemented as [docker volume](https://docs.docker.com/storage/volumes/))
+ * Docker containers - running and stopped
+ * Docker images and cached layers
+* To improve performance, the `volume-provisioner` can provision a previously used disk, with the docker images and pipeline volume from earlier builds. Reusing disks improves performance by leveraging the docker cache and decreasing I/O.
+* The least recently used docker images and volumes should be cleaned to avoid out-of-space errors.
+* There are several places where pipeline volume cleanup is required, so there are several kinds of cleaners.
+
+### Cleaners
+
+* [IN-DIND cleaner](https://github.com/codefresh-io/dind/tree/master/cleaner) - deletes extra docker containers, volumes, images in **dind pod**
+* [External volumes cleaner](https://github.com/codefresh-io/runtime-cluster-monitor/blob/master/charts/cf-monitoring/templates/dind-volume-cleanup.yaml) - deletes unused **external** PVs (EBS, GCE/Azure disks)
+* [Local volumes cleaner](https://github.com/codefresh-io/dind-volume-utils/blob/master/local-volumes/lv-cleaner.sh) - deletes **local** volumes in case node disk space is close to the threshold
+
+***
+
+#### IN-DIND cleaner
+
+**Purpose:** Removes unneeded *docker containers, images, volumes* inside the kubernetes volume mounted to the dind pod
+
+**Where it runs:** Inside each dind pod, as a script
+
+**Triggered by:** SIGTERM, and also during the run when disk usage (as measured by the cleaner agent) exceeds 90% (configurable)
+
+**Configured by:** Environment Variables which can be set in Runtime Environment configuration
+
+**Configuration/Logic:** [README.md](https://github.com/codefresh-io/dind/tree/master/cleaner#readme)
+
+Override `dockerDaemonScheduler.envVars` on Runtime Environment if necessary (the following are **defaults**):
+
+```yaml
+dockerDaemonScheduler:
+ envVars:
+ CLEAN_PERIOD_SECONDS: '21600' # launch clean if the last clean was more than CLEAN_PERIOD_SECONDS seconds ago
+ CLEAN_PERIOD_BUILDS: '5' # launch clean if more than CLEAN_PERIOD_BUILDS builds have run since the last clean
+ IMAGE_RETAIN_PERIOD: '14400' # do not delete docker images that have had events within the last IMAGE_RETAIN_PERIOD seconds
+ VOLUMES_RETAIN_PERIOD: '14400' # do not delete docker volumes that have had events within the last VOLUMES_RETAIN_PERIOD seconds
+ DISK_USAGE_THRESHOLD: '0.8' # launch clean when current disk usage exceeds DISK_USAGE_THRESHOLD
+ INODES_USAGE_THRESHOLD: '0.8' # launch clean when current inode usage exceeds INODES_USAGE_THRESHOLD
+```
+
+***
+
+#### External volumes cleaner
+
+**Purpose:** Removes unused *kubernetes volumes and related backend volumes*
+
+**Where it runs:** On Runtime Cluster as CronJob
+(`kubectl get cronjobs -n codefresh -l app=dind-volume-cleanup`). Installed in case the Runner uses non-local volumes (`Storage.Backend != local`)
+
+**Triggered by:** CronJob every 10min (configurable), part of [runtime-cluster-monitor](https://github.com/codefresh-io/runtime-cluster-monitor/blob/master/charts/cf-monitoring/templates/dind-volume-cleanup.yaml) and runner deployment
+
+**Configuration:**
+
+Set `codefresh.io/volume-retention` annotation on Runtime Environment:
+
+```yaml
+dockerDaemonScheduler:
+ pvcs:
+ dind:
+ storageClassName: dind-ebs-volumes-runner-codefresh
+ reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName,pipeline_id'
+ volumeSize: 32Gi
+ annotations:
+ codefresh.io/volume-retention: 7d
+```
+
+Override environment variables for `dind-volume-cleanup` cronjob if necessary:
+
+* `RETENTION_DAYS` (defaults to 4)
+* `MOUNT_MIN` (defaults to 3)
+* `PROVISIONED_BY` (defaults to `codefresh.io/dind-volume-provisioner`)
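+
+A minimal sketch of overriding one of these with `kubectl` (assuming the runner namespace is `codefresh`):
+
+```shell
+# Set a 7-day retention on the cleanup CronJob's pod template
+kubectl -n codefresh set env cronjob/dind-volume-cleanup RETENTION_DAYS=7
+```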
+
+About the *optional* `-m` argument:
+
+* `dind-volume-cleanup` cleans volumes that were last used more than `RETENTION_DAYS` ago
+* `dind-volume-cleanup-m` cleans volumes that were used more than a day ago, but mounted fewer than `MOUNT_MIN` times
+
+***
+
+#### Local volumes cleaner
+
+**Purpose:** Deletes local volumes in case node disk space is close to the threshold
+
+**Where it runs:** On each node of the runtime cluster, as the DaemonSet `dind-lv-monitor`. Installed in case the Runner uses local volumes (`Storage.Backend == local`)
+
+**Triggered by:** Starts cleaning when disk space usage or inode usage exceeds the thresholds (configurable)
+
+**Configuration:**
+
+Override environment variables for `dind-lv-monitor` daemonset if necessary:
+
+* `VOLUME_PARENT_DIR` - default `/var/lib/codefresh/dind-volumes`
+* `KB_USAGE_THRESHOLD` - default 80 (percentage)
+* `INODE_USAGE_THRESHOLD` - default 80
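+
+A minimal sketch of raising one of these thresholds (assuming the DaemonSet is named `dind-lv-monitor-runner` in the `codefresh` namespace, as elsewhere in this guide):
+
+```shell
+# Start cleaning only when disk usage exceeds 85%
+kubectl -n codefresh set env daemonset/dind-lv-monitor-runner KB_USAGE_THRESHOLD=85
+```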
+
+## ARM Builds
+
+With the hybrid runner it's possible to run native ARM64v8 builds.
+
+>**Note:** Running both amd64 and arm64 images within the same pipeline is not possible. We do not support multi-architecture builds: one runtime configuration means one architecture. Since a pipeline can map to only one runtime, it can run either amd64 or arm64 builds, but not both.
+
+The following scenario is an example of how to set up ARM Runner on existing EKS cluster:
+
+**Step 1 - Preparing nodes**
+
+Create new ARM nodegroup:
+
+```shell
+eksctl utils update-coredns --cluster
+eksctl utils update-kube-proxy --cluster --approve
+eksctl utils update-aws-node --cluster --approve
+
+eksctl create nodegroup \
+--cluster \
+--region \
+--name \
+--node-type \
+--nodes <3> \
+--nodes-min <2> \
+--nodes-max <4> \
+--managed
+```
+
+Check nodes status:
+
+```shell
+kubectl get nodes -l kubernetes.io/arch=arm64
+```
+
+It's also recommended to label and taint the required ARM nodes:
+
+```shell
+kubectl taint nodes arch=aarch64:NoSchedule
+kubectl label nodes arch=arm
+```
+
+**Step 2 - Runner installation**
+
+Use [values.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml) to inject `tolerations`, `kube-node-selector`, `build-node-selector` into the Runtime Environment spec.
+
+`values-arm.yaml`
+
+```yaml
+...
+Namespace: codefresh
+
+### NodeSelector --kube-node-selector: controls runner and dind-volume-provisioner pods
+NodeSelector: arch=arm
+
+### Tolerations --tolerations: controls runner, dind-volume-provisioner and dind-lv-monitor
+Tolerations:
+- key: arch
+ operator: Equal
+ value: aarch64
+ effect: NoSchedule
+...
+########################################################
+### Codefresh Runtime ###
+### ###
+### configure engine and dind pods ###
+########################################################
+Runtime:
+### NodeSelector --build-node-selector: controls engine and dind pods
+ NodeSelector:
+ arch: arm
+### Tolerations for engine and dind pods
+ tolerations:
+ - key: arch
+ operator: Equal
+ value: aarch64
+ effect: NoSchedule
+...
+```
+
+Install the Runner with:
+
+```shell
+codefresh runner init --values values-arm.yaml --exec-demo-pipeline false --skip-cluster-integration true
+```
+
+**Step 3 - Post-installation fixes**
+
+Change the `engine` image version in the Runtime Environment specification:
+
+```shell
+# get the latest engine ARM64 tag
+curl -X GET "https://quay.io/api/v1/repository/codefresh/engine/tag/?limit=100" --silent | jq -r '.tags[].name' | grep "^1.*arm64$"
+1.136.1-arm64
+```
+
+```shell
+# get runtime spec
+codefresh get re $RUNTIME_NAME -o yaml > runtime.yaml
+```
+
+Under `runtimeScheduler.image`, change the image tag:
+
+```yaml
+runtimeScheduler:
+ image: 'quay.io/codefresh/engine:1.136.1-arm64'
+```
+
+```shell
+# patch runtime spec
+codefresh patch re -f runtime.yaml
+```
+
+For `local` storage, patch the `dind-lv-monitor-runner` DaemonSet to add a `nodeSelector`:
+
+```shell
+kubectl edit ds dind-lv-monitor-runner
+```
+
+```yaml
+ spec:
+ nodeSelector:
+ arch: arm
+```
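+
+An equivalent one-liner, if you prefer patching over interactive editing (namespace flag omitted, assuming the current context):
+
+```shell
+# Merge a nodeSelector into the DaemonSet's pod template
+kubectl patch ds dind-lv-monitor-runner --type merge \
+  -p '{"spec":{"template":{"spec":{"nodeSelector":{"arch":"arm"}}}}}'
+```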
+
+**Step 4 - Run Demo pipeline**
+
+Run a modified version of the *CF_Runner_Demo* pipeline:
+
+```yaml
+version: '1.0'
+stages:
+ - test
+steps:
+ test:
+ stage: test
+ title: test
+ image: 'arm64v8/alpine'
+ commands:
+ - echo hello Codefresh Runner!
+```
+
+## Troubleshooting
+
+For troubleshooting refer to the [Knowledge Base](https://support.codefresh.io/hc/en-us/sections/4416999487762-Hybrid-Runner)
+
+## What to read next
+
+* [Codefresh installation options]({{site.baseurl}}/docs/administration/installation-security/)
+* [Codefresh On-Premises]({{site.baseurl}}/docs/administration/codefresh-on-prem/)
+* [Codefresh API]({{site.baseurl}}/docs/integrations/codefresh-api/)
diff --git a/_docs/runtime/git-sources.md b/_docs/installation/git-sources.md
similarity index 68%
rename from _docs/runtime/git-sources.md
rename to _docs/installation/git-sources.md
index 2b95dc54..b51913a8 100644
--- a/_docs/runtime/git-sources.md
+++ b/_docs/installation/git-sources.md
@@ -1,23 +1,23 @@
---
-title: "Add Git Sources to runtimes"
+title: "Add Git Sources to GitOps Runtimes"
description: ""
-group: runtime
+group: installation
toc: true
---
-A Git Source is the equivalent of an Argo CD application that tracks a Git repository and syncs the desired state of the repo to the destination K8s cluster. In addition to application resources, the Git Source can store resources for Codefresh runtimes, and CI/CD entities such as delivery pipelines, Workflow Templates, workflows, and applications.
+A Git Source is the equivalent of an Argo CD application that tracks a Git repository and syncs the desired state of the repo to the destination K8s cluster. In addition to application resources, the Git Source can store resources for GitOps Runtimes, and CI/CD entities such as delivery pipelines, Workflow Templates, workflows, and applications.
-Provisioning a runtime automatically creates a Git Source that stores resources for the runtime and for the demo CI pipelines that are optionally installed with the runtime. Every Git Source is associated with a Codefresh runtime. A runtime can have one or more Git Sources. You can add Git Sources at any time, to the same or to different runtimes.
+Provisioning a Runtime automatically creates a Git Source that stores resources for the Runtime and for the demo CI pipelines that are optionally installed with the Runtime. Every Git Source is associated with a Runtime. You can add Git Sources at any time, to the same or to different Runtimes.
-Once you create a Git Source for a runtime, you can store resources for CI/CD entities associated with that runtime. For example, when creating pipelines or applications, you can select the Git Source to which to store manifest definitions.
+Once you create a Git Source for a Runtime, you can store resources for CI/CD entities associated with it. For example, when creating pipelines or applications, you can select the Git Source to which to store manifest definitions.
### View Git Sources and definitions
Drill down on a runtime in List View to see its Git Sources.
-1. In the Codefresh UI, go to the [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"} page.
-1. From the **List View** (the default), select a runtime name, and then select the **Git Sources** tab.
+1. In the Codefresh UI, go to the [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"} page.
+1. From the **List View** (the default), select a Runtime name, and then select the **Git Sources** tab.
{% include
image.html
@@ -34,12 +34,12 @@ Drill down on a runtime in List View to see its Git Sources.
1. To see the definitions for the Git Source, select the three dots at the end of the row.
### Create a Git Source
-Create Git Sources for any provisioned runtime. The Git Sources are available to store resources for pipelines or applications when you create them.
+Create Git Sources for any provisioned Runtime. The Git Sources are available to store resources for pipelines or applications when you create them.
>Make sure you are in the List View to create Git Sources.
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes**){:target="\_blank"}.
-1. In the List View, select the runtime for which to add a Git Source, and then select the **Git Sources** tab.
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes**){:target="\_blank"}.
+1. In the List View, select the Runtime for which to add a Git Source, and then select the **Git Sources** tab.
1. Select **Create Git Sources**, and in the Create Git Source panel, define the definitions for the Git Source:
{% include
@@ -56,7 +56,7 @@ Create Git Sources for any provisioned runtime. The Git Sources are available t
* **Source**: The Git repo with the desired state, tracked by the Git Source, and synced to the destination cluster.
* **Repository**: Mandatory. The URL to the Git repo.
* **Branch**: Optional. The specific branch within the repo to track.
- * **Path**: Optional. The specific path within the repo, and branch, if one is specified, to track.
+ * **Path**: Optional. The specific path within the repo, and branch if one is specified, to track.
* **Destination**: The destination cluster with the actual state to which to apply the changes from the **Source**.
* **Namespace**: The namespace in the destination cluster to which to sync the changes.
@@ -73,8 +73,8 @@ Create Git Sources for any provisioned runtime. The Git Sources are available t
Edit an existing Git Source by changing the source and destination definitions.
> You cannot change the name of the Git Source.
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes**){:target="\_blank"}.
-1. From the **List View** (the default), select the runtime with the Git Source, and then select the **Git Sources** tab.
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes**){:target="\_blank"}.
+1. From the **List View** (the default), select the Runtime with the Git Source, and then select the **Git Sources** tab.
1. In the row with the Git Source to edit, select the three dots, and then select **Edit** in the panel that appears.
{% include
@@ -90,12 +90,12 @@ Edit an existing Git Source by changing the source and destination definitions.
1. Change the **Source** and **Destination** definitions for the Git Source, and select **Save**.
### View/download logs for a Git Source
-View online logs for any Git Source associated with a runtime, and if needed, download the log file for offline viewing and analysis.
-Online logs show up to 1000 of the most recent events (lines), updated in real time. Downloaded logs include all the events from the application launch to the date and time of download.
+View online logs for any Git Source associated with a Runtime, and if needed, download the log file for offline viewing and analysis.
+Online logs show up to 1000 of the most recent events (lines), updated in real time. Downloaded logs include all the events, from the application launch to the date and time of download.
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes**){:target="\_blank"}.
-1. From the **List View** (the default), select the runtime with the Git Source, and then select the **Git Sources** tab.
-1. In the row with the Git Source foe which to view/download logs, select the three dots, and then select **View Logs**.
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes**){:target="\_blank"}.
+1. From the **List View** (the default), select the Runtime with the Git Source, and then select the **Git Sources** tab.
+1. In the row with the Git Source for which to view/download logs, select the three dots, and then select **View Logs**.
{% include
image.html
@@ -127,6 +127,7 @@ Online logs show up to 1000 of the most recent events (lines), updated in real t
The file is downloaded with `.log` extension.
### What to read next
-[Manage runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
-[Recover runtimes]({{site.baseurl}}/docs/runtime/runtime-recovery/)
+[Monitoring & managing GitOps Runtimes]({{site.baseurl}}/docs/installation/monitor-manage-runtimes/)
+[Shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration)
+
diff --git a/_docs/runtime/hosted-runtime.md b/_docs/installation/hosted-runtime.md
similarity index 66%
rename from _docs/runtime/hosted-runtime.md
rename to _docs/installation/hosted-runtime.md
index 0a08ba3b..cfc64c7e 100644
--- a/_docs/runtime/hosted-runtime.md
+++ b/_docs/installation/hosted-runtime.md
@@ -1,18 +1,28 @@
---
-title: "Set up a hosted runtime environment"
-description: ""
-group: runtime
+title: "Hosted GitOps Runtime setup"
+description: "Provision Hosted GitOps environment"
+group: installation
toc: true
---
-If you have Codefresh's Hosted GitOps, set up your hosted environment, and you are all ready to leverage extensive CD Ops capabilities.
-Read about [Hosted GitOps]({{site.baseurl}}/docs/incubation/intro-hosted-runtime/).
+Set up your hosted environment with the Hosted GitOps Runtime to leverage extensive CD capabilities.
+
-### Where to start with Hosted GitOps
-If you have not provisioned a hosted runtime, Codefresh presents you with the setup instructions in the **Home** dashboard.
+## System requirements for Hosted GitOps Runtimes
+{: .table .table-bordered .table-hover}
+| Item | Requirement |
+| -------------- | -------------- |
+|Kubernetes cluster | Server version 1.18 and higher to which to deploy applications|
+|Git provider | {::nomarkdown}{:/}|
+
+
+## Where to start with Hosted GitOps Runtimes
+If you have not provisioned a Hosted GitOps Runtime, Codefresh presents you with the setup instructions in the **Home** dashboard.
+
+
* In the Codefresh UI, go to Codefresh [Home](https://g.codefresh.io/2.0/?time=LAST_7_DAYS){:target="\_blank"}.
Codefresh guides you through the three-step setup, as described below.
@@ -27,18 +37,18 @@ caption="Hosted GitOps setup"
max-width="80%"
%}
- >You can provision a single hosted runtime for your Codefresh account.
+ >You can provision a single Hosted GitOps Runtime per Codefresh account.
-### 1. Provision hosted runtime
-Start installing the hosted runtime with a single-click. Codefresh completes the installation without any further intervention on your part.
-The hosted runtime is provisioned on the Codefresh cluster, and completely managed by Codefresh with automatic version and security upgrades.
+## Step 1: Install Hosted GitOps Runtime
+Start installing the Hosted GitOps Runtime with a single click. Codefresh completes the installation without any further intervention on your part.
+The Hosted GitOps Runtime is provisioned on the Codefresh cluster, and completely managed by Codefresh with automatic version and security upgrades.
1. Do one of the following:
- * To set up Hosted GitOps later, click **Install later**, and continue from step _2_.
+ * To set up Hosted GitOps Runtime later, click **Install later**, and continue from step _2_.
* To start setup, click **Install**, and continue from step _3_.
{% include
@@ -46,16 +56,16 @@ image.html
lightbox="true"
file="/images/runtime/hosted-installing.png"
url="/images/runtime/hosted-installing.png"
-alt="Step 1: Installing hosted runtime"
-caption="Step 1: Installing hosted runtime"
+alt="Step 1: Installing Hosted GitOps Runtime"
+caption="Step 1: Installing Hosted GitOps Runtime"
max-width="80%"
%}
{:start="2"}
1. Do the following:
- * In the Codefresh UI, go to [**Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}, and click **+ Add Runtimes**.
- * Select **Hosted Runtime** and click **Add**.
- >An account can be provisioned with a single hosted runtime. If you have already provisioned a hosted runtime for your account, the Hosted Runtime option is disabled.
+ * In the Codefresh UI, go to [**GitOps Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}, and click **+ Add Runtimes**.
+ * Select **Hosted GitOps Runtime** and click **Add**.
+ >An account can be provisioned with a single Hosted GitOps Runtime. If you have already provisioned a Hosted GitOps Runtime for your account, the Hosted GitOps Runtime option is disabled.
* Continue from _step 3_.
{% include
@@ -63,14 +73,14 @@ image.html
lightbox="true"
file="/images/runtime/hosted-install-later.png"
url="/images/runtime/hosted-install-later.png"
-alt="Install hosted runtime"
-caption="Install hosted runtime"
+alt="Install Hosted GitOps Runtime"
+caption="Install Hosted GitOps Runtime"
max-width="40%"
%}
{:start="3"}
-1. When complete, to view the components for the hosted runtime, click **View Runtime**.
+1. When complete, to view the components for the Hosted GitOps Runtime, click **View Runtime**.
You are directed to the Runtime Components tab.
{% include
@@ -78,14 +88,14 @@ image.html
lightbox="true"
file="/images/runtime/hosted-runtime-components.png"
url="/images/runtime/hosted-runtime-components.png"
-alt="Runtime components for hosted runtime"
-caption="Runtime components for hosted runtime"
+alt="Runtime components for Hosted GitOps Runtime"
+caption="Runtime components for Hosted GitOps Runtime"
max-width="70%"
%}
> The Git Sources and the Managed Clusters are empty as they will be set up in the next steps.
-If you navigate to **Runtimes > List View**, you can identify the hosted runtime through the Type column (Hosted ), the Cluster/Namespace column (Codefresh), and the Module column (CD Ops).
+If you navigate to **Runtimes > List View**, you can identify the Hosted GitOps Runtime through the Type column (Hosted), the Cluster/Namespace column (Codefresh), and the Module column (CD Ops).
{% include
image.html
@@ -97,8 +107,8 @@ caption="Hosted runtimes in List view"
max-width="70%"
%}
-#### Troubleshoot failed hosted runtime installation
-Your hosted runtime may fail to install with an error as in the image below. We are closely moinitoring the hosted runtime installation process and activley working to prevent and iron out all installation errors. Follow the instructions to uninstall and reinstall the hosted runtime.
+### Troubleshoot failed Hosted GitOps Runtime installation
+Your Hosted GitOps Runtime may fail to install with an error as in the image below. We are closely monitoring the Hosted GitOps Runtime installation process and actively working to prevent and iron out all installation errors. Follow the instructions to uninstall and reinstall the Hosted GitOps Runtime.
{% include
image.html
@@ -117,16 +127,16 @@ max-width="70%"
To compare with the latest version from Codefresh, [click here](https://github.com/codefresh-io/cli-v2/releases){:target="\_blank"}.
* [Download the CLI]({{site.baseurl}}/docs/clients/csdp-cli/).
-1. Uninstall the failed hosted runtime:
+1. Uninstall the failed Hosted GitOps Runtime:
`cf runtime uninstall codefresh-hosted --force`
where:
- `hosted-codefresh` is the name of your hosted runtime, automatically assigned by Codefresh.
+ `codefresh-hosted` is the name of your Hosted GitOps Runtime, automatically assigned by Codefresh.
1. In the Codefresh UI, return to Codefresh [Home](https://g.codefresh.io/2.0/?time=LAST_7_DAYS){:target="\_blank"}.
1. Refresh the page and start with _1. Provision hosted runtime_ above.
-### 2. Connect Git provider
-Connect your hosted runtime to a Git provider for Codefresh to create the required Git repos. First authorize access to your Git provider through an OAuth token, and then select the Git organizations or accounts in which to create the required Git repos.
+## Step 2: Connect Git provider
+Connect your Hosted GitOps Runtime to a Git provider for Codefresh to create the required Git repos. First authorize access to your Git provider through an OAuth token, and then select the Git organizations or accounts in which to create the required Git repos.
>Only authorized organizations are displayed in the list. To authorize organizations for the Codefresh application in GitHub, see [Authorize organizations/projects]({{site.baseurl}}/docs/administration/hosted-authorize-orgs/).
@@ -145,12 +155,12 @@ max-width="80%"
Once you authorize access, Codefresh creates two Git repositories, one to store the runtime configuration settings, and the other to store the runtime's application settings:
* Shared runtime configuration repo
- The shared runtime configuration repo is a centralized Git repository that stores configuration settings for the hosted runtime. Additional runtimes provisioned for the account can point to this repo to retrieve and reuse the configuration.
+ The shared runtime configuration repo is a centralized Git repository that stores configuration settings for the Hosted GitOps Runtime. Additional runtimes provisioned for the account can point to this repo to retrieve and reuse the configuration.
Read about [Shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration/).
* Git Source application repo
- Codefresh creates a Git Source application repo for every hosted runtime.
+ Codefresh creates a Git Source application repo for every Hosted GitOps Runtime.
Read about [Git sources]({{site.baseurl}}/docs/runtime/git-sources/).
@@ -224,15 +234,15 @@ image.html
lightbox="true"
file="/images/runtime/hosted-git-source-in-ui.png"
url="/images/runtime/hosted-git-source-in-ui.png"
-alt="Git Source tab for hosted runtime"
-caption="Git Source tab for hosted runtime"
+alt="Git Source tab for Hosted GitOps Runtime"
+caption="Git Source tab for Hosted GitOps Runtime"
max-width="80%"
%}
-### 3. Connect a Kubernetes cluster
+### Step 3: Connect a Kubernetes cluster
-Connect a destination cluster to the hosted runtime and register it as a managed cluster. Deploy applications and configuration to the cluster.
+Connect a destination cluster to the Hosted GitOps Runtime and register it as a managed cluster. Deploy applications and configuration to the cluster.
For managed cluster information and installing Argo Rollouts, see [Add and manage external clusters]({{site.baseurl}}/docs/runtime/managed-cluster/).
@@ -241,8 +251,8 @@ image.html
lightbox="true"
file="/images/runtime/hosted-connect-cluster-step.png"
url="/images/runtime/hosted-connect-cluster-step.png"
-alt="Step 3: Connect a K8s cluster for hosted runtime"
-caption="Step 3: Connect a K8s cluster for hosted runtime"
+alt="Step 3: Connect a K8s cluster for Hosted GitOps Runtime"
+caption="Step 3: Connect a K8s cluster for Hosted GitOps Runtime"
max-width="70%"
%}
@@ -273,8 +283,8 @@ max-width="70%"
lightbox="true"
file="/images/runtime/hosted-new-cluster-topology.png"
url="/images/runtime/hosted-new-cluster-topology.png"
- alt="New K8s cluster in hosted runtime"
- caption="New K8s cluster in hosted runtime"
+ alt="New K8s cluster in Hosted GitOps Runtime"
+ caption="New K8s cluster in Hosted GitOps Runtime"
max-width="80%"
%}
@@ -287,7 +297,7 @@ If you could not connect a cluster, you may not have the latest version of the C
To compare with the latest version from Codefresh, [click here](https://github.com/codefresh-io/cli-v2/releases){:target="\_blank"}.
* [Download the CLI]({{site.baseurl}}/docs/clients/csdp-cli/).
-You have completed setting up your hosted runtime. You are ready to create applications, and connect third-party CI tools for image enrichment.
+You have completed setting up your Hosted GitOps Runtime. You are ready to create applications, and connect third-party CI tools for image enrichment.
### (Optional) Create application
Optional. Create an application in Codefresh, deploy it to the cluster, and track deployment and performance in the Applications dashboard.
@@ -305,8 +315,9 @@ Optional. Integrate Codefresh with the third-party tools you use for CI to enric
[Image enrichment with integrations]({{site.baseurl}}/docs/integrations/image-enrichment-overview/)
### Related articles
-[Manage provisioned runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
-[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
+[Monitoring & managing GitOps Runtimes]({{site.baseurl}}/docs/installation/monitor-manage-runtimes/)
+[Add Git Sources to runtimes]({{site.baseurl}}/docs/installation/git-sources/)
+[Shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration)
[Home dashboard]({{site.baseurl}}/docs/reporting/home-dashboard/)
[DORA metrics]({{site.baseurl}}/docs/reporting/dora-metrics/)
diff --git a/_docs/installation/hybrid-gitops.md b/_docs/installation/hybrid-gitops.md
new file mode 100644
index 00000000..889c8d29
--- /dev/null
+++ b/_docs/installation/hybrid-gitops.md
@@ -0,0 +1,1282 @@
+---
+title: "Hybrid GitOps Runtime installation"
+description: "Provision Hybrid GitOps Runtimes"
+group: installation
+toc: true
+---
+
+Provision one or more Hybrid GitOps Runtimes in your Codefresh account.
+Start by reviewing [system requirements](#minimum-system-requirements) for Hybrid GitOps. If you are installing with ingress-controllers, you must configure them as required _before_ starting the installation.
+
+> To provision a Hosted GitOps Runtime, see [Provision a Hosted GitOps Runtime]({{site.baseurl}}/docs/installation/hosted-runtime/#1-provision-hosted-runtime) in [Set up a Hosted GitOps environment]({{site.baseurl}}/docs/installation/hosted-runtime/).
+
+**Git providers and Hybrid Runtimes**
+Your Codefresh account is always linked to a specific Git provider. This is the Git provider you select when installing the first GitOps Runtime, either Hybrid or Hosted, in your Codefresh account. All the Hybrid Runtimes you install in the same account use the same Git provider.
+If Bitbucket Server is your Git provider, you must also select the specific server instance to associate with the Runtime.
+
+>To change the Git provider for your Codefresh account after installation, contact Codefresh support.
+
+
+**Hybrid Runtimes**
+ The Hybrid Runtime comprises Argo CD components and Codefresh-specific components. The Argo CD components are derived from a fork of the Argo ecosystem, and do not correspond to the open-source versions available.
+
+There are two parts to installing a Hybrid GitOps Runtime:
+
+1. [Installing the Codefresh CLI](#gitops-cli-installation)
+2. [Installing the Hybrid GitOps Runtime](#install-hybrid-gitops-runtime), either through the CLI wizard or via silent installation with installation flags.
+ The Hybrid GitOps Runtime is installed in a specific namespace on your cluster. You can install more Runtimes on different clusters in your deployment.
+ Every Hybrid GitOps Runtime installation makes commits to three Git repos:
+ * Runtime install repo: The installation repo that manages the Hybrid Runtime itself with Argo CD. If the repo does not exist, it is automatically created during installation.
+ * Git Source repo: Created automatically during Runtime installation. The repo where you store manifests for pipelines and applications. See [Git Sources]({{site.baseurl}}/docs/runtime/git-sources).
+ * Shared configuration repo: Created for the first GitOps Runtime installed in a user account. The repo stores configuration manifests for account-level resources and is shared with other GitOps Runtimes in the same account. See [Shared configuration repository]({{site.baseurl}}/docs/reference/shared-configuration).
+
+
+
+{::nomarkdown}
+
+{:/}
+
+## Minimum system requirements
+
+{: .table .table-bordered .table-hover}
+| Item | Requirement |
+| -------------- | -------------- |
+|Kubernetes cluster | Server version 1.18 and higher, without Argo Project components. {::nomarkdown}<br><b>Tip:</b> To check the server version, run <code>kubectl version --short</code>.{:/}|
+| Ingress controller| Configured on Kubernetes cluster and exposed from the cluster. Supported and tested ingress controllers include: {::nomarkdown}<ul><li>Ambassador</li>{:/}(see [Ambassador ingress configuration](#ambassador-ingress-configuration)){::nomarkdown}<li>AWS ALB (Application Load Balancer)</li>{:/}(see [AWS ALB ingress configuration](#aws-alb-ingress-configuration)){::nomarkdown}<li>Istio</li>{:/}(see [Istio ingress configuration](#istio-ingress-configuration)){::nomarkdown}<li>NGINX Enterprise (nginx.org/ingress-controller)</li>{:/}(see [NGINX Enterprise ingress configuration](#nginx-enterprise-ingress-configuration)){::nomarkdown}<li>NGINX Community (k8s.io/ingress-nginx)</li>{:/}(see [NGINX Community ingress configuration](#nginx-community-version-ingress-configuration)){::nomarkdown}<li>Traefik</li></ul>{:/}(see [Traefik ingress configuration](#traefik-ingress-configuration))|
+|Node requirements| {::nomarkdown}{:/}|
+|Cluster permissions | Cluster admin permissions |
+|Git providers |{::nomarkdown}<ul><li>GitHub</li><li>GitHub Enterprise</li><li>GitLab Cloud</li><li>GitLab Server</li><li>Bitbucket Cloud</li><li>Bitbucket Server</li></ul>{:/}|
+|Git access tokens | {::nomarkdown}Git runtime token:<ul><li>Valid expiration date</li><li>Scopes:<ul><li>GitHub and GitHub Enterprise: repo, admin-repo.hook</li><li>GitLab Cloud and GitLab Server: api, read_repository</li><li>Bitbucket Cloud and Server: Permissions: Read, Workspace membership: Read, Webhooks: Read and write, Repositories: Write, Admin</li></ul></li></ul>{:/}|
+
+## Ingress controller configuration
+
+### Ambassador ingress configuration
+For detailed configuration information, see the [Ambassador ingress controller documentation](https://www.getambassador.io/docs/edge-stack/latest/topics/running/ingress-controller){:target="\_blank"}.
+
+This section lists the specific configuration requirements for Codefresh, which must be completed _before_ installing the hybrid runtime.
+* Valid external IP address
+* Valid TLS certificate
+* TCP support
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+ {::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+### AWS ALB ingress configuration
+
+For detailed configuration information, see the [ALB AWS ingress controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4){:target="\_blank"}.
+
+This table lists the specific configuration requirements for Codefresh.
+
+{: .table .table-bordered .table-hover}
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Valid external IP address | _Before_ installing hybrid runtime |
+|Valid TLS certificate | |
+|TCP support| |
+|Controller configuration | |
+|Alias DNS record in route53 to load balancer | _After_ installing hybrid runtime |
+|(Optional) Git integration registration | |
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+#### Controller configuration
+In the ingress resource file, verify that `spec.controller` is configured as `ingress.k8s.aws/alb`.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: IngressClass
+metadata:
+ name: alb
+spec:
+ controller: ingress.k8s.aws/alb
+```
+
+{::nomarkdown}
+
+{:/}
+
+#### Create an alias to load balancer in route53
+
+> The alias must be configured _after_ installing the hybrid runtime.
+
+1. Make sure a DNS record is available in the correct hosted zone.
+1. _After_ hybrid runtime installation, in Amazon Route 53, create an alias to route traffic to the load balancer that is automatically created during the installation:
+ * **Record name**: Enter the same record name used in the installation.
+ * Toggle **Alias** to **ON**.
+ * From the **Route traffic to** list, select **Alias to Application and Classic Load Balancer**.
+ * From the list of Regions, select the region. For example, **US East**.
+ * From the list of load balancers, select the load balancer that was created during installation.
+
+For more information, see [Creating records by using the Amazon Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html){:target="\_blank"}.
+
+{% include image.html
+ lightbox="true"
+ file="/images/runtime/post-install-alb-ingress.png"
+ url="/images/runtime/post-install-alb-ingress.png"
+ alt="Route 53 record settings for AWS ALB"
+ caption="Route 53 record settings for AWS ALB"
+ max-width="60%"
+%}
+
+{::nomarkdown}
+
+{:/}
+
+#### (Optional) Git integration registration
+If the installation failed, as can happen if the DNS record was not created within the timeframe, manually create and register Git integrations using these commands:
+ `cf integration git add default --runtime <runtime-name> --api-url <api-url>`
+ `cf integration git register default --runtime <runtime-name> --token <runtime-authentication-token>`
+
+{::nomarkdown}
+
+{:/}
+
+### Istio ingress configuration
+For detailed configuration information, see [Istio ingress controller documentation](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress){:target="\_blank"}.
+
+The table below lists the specific configuration requirements for Codefresh.
+
+{: .table .table-bordered .table-hover}
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Valid external IP address |_Before_ installing hybrid runtime |
+|Valid TLS certificate| |
+|TCP support | |
+|Cluster routing service | _After_ installing hybrid runtime |
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+
+
+#### Cluster routing service
+> The cluster routing service must be configured _after_ installing the hybrid runtime.
+
+Based on the runtime version, configure one or more `VirtualService` resources for the `app-proxy`, `webhook`, and `workflow` services.
+
+##### Runtime version 0.0.543 or higher
+Configure a single `VirtualService` resource to route traffic to the `app-proxy`, `webhook`, and `workflow` services, as in the example below.
+
+```yaml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ namespace: pov-codefresh-istio-runtime # replace with your runtime name
+ name: internal-router
+spec:
+ hosts:
+ - pov-codefresh-istio-runtime.sales-dev.codefresh.io # replace with your host name
+ gateways:
+ - istio-system/internal-router # replace with your gateway name
+ http:
+ - match:
+ - uri:
+ prefix: /webhooks
+ route:
+ - destination:
+ host: internal-router
+ port:
+ number: 80
+ - match:
+ - uri:
+ prefix: /app-proxy
+ route:
+ - destination:
+ host: internal-router
+ port:
+ number: 80
+ - match:
+ - uri:
+ prefix: /workflows
+ route:
+ - destination:
+ host: internal-router
+ port:
+ number: 80
+```
+
+##### Runtime version 0.0.542 or lower
+
+Configure two different `VirtualService` resources, one to route traffic to the `app-proxy`, and the second to route traffic to the `webhook` services, as in the examples below.
+
+{::nomarkdown}
+
+{:/}
+
+**`VirtualService` example for `app-proxy`:**
+
+```yaml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ namespace: test-runtime3 # replace with your runtime name
+ name: cap-app-proxy
+spec:
+ hosts:
+ - my.support.cf-cd.com # replace with your host name
+ gateways:
+ - my-gateway # replace with your gateway name
+ http:
+ - match:
+ - uri:
+ prefix: /app-proxy
+ route:
+ - destination:
+ host: cap-app-proxy
+ port:
+ number: 3017
+```
+
+**`VirtualService` example for `webhook`:**
+
+> Configure a `uri.prefix` and `destination.host` for each event-source if you have more than one.
+
+```yaml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ namespace: test-runtime3 # replace with your runtime name
+ name: csdp-default-git-source
+spec:
+ hosts:
+ - my.support.cf-cd.com # replace with your host name
+ gateways:
+ - my-gateway # replace with your gateway name
+ http:
+ - match:
+ - uri:
+ prefix: /webhooks/test-runtime3/push-github # replace `test-runtime3` with your runtime name, and `push-github` with the name of your event source
+ route:
+ - destination:
+ host: push-github-eventsource-svc # replace `push-github` with the name of your event source
+ port:
+ number: 80
+ - match:
+ - uri:
+ prefix: /webhooks/test-runtime3/cypress-docker-images-push # replace `test-runtime3` with your runtime name, and `cypress-docker-images-push` with the name of your event source
+ route:
+ - destination:
+ host: cypress-docker-images-push-eventsource-svc # replace `cypress-docker-images-push` with the name of your event source
+ port:
+ number: 80
+```
+
+{::nomarkdown}
+
+{:/}
+
+### NGINX Enterprise ingress configuration
+
+For detailed configuration information, see [NGINX ingress controller documentation](https://docs.nginx.com/nginx-ingress-controller){:target="\_blank"}.
+
+The table below lists the specific configuration requirements for Codefresh.
+
+{: .table .table-bordered .table-hover}
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Verify valid external IP address |_Before_ installing hybrid runtime |
+|Valid TLS certificate | |
+|TCP support| |
+|NGINX Ingress: Enable report status to cluster | |
+|NGINX Ingress Operator: Enable report status to cluster| |
+|Patch certificate secret |_After_ installing hybrid runtime |
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+#### NGINX Ingress: Enable report status to cluster
+
+If the ingress controller is not configured to report its status to the cluster, Argo’s health check reports the health status as “progressing”, resulting in a timeout error during installation.
+
+* Pass `--report-ingress-status` to the `deployment`.
+
+```yaml
+spec:
+ containers:
+ - args:
+ - --report-ingress-status
+```
+
+{::nomarkdown}
+
+{:/}
+
+#### NGINX Ingress Operator: Enable report status to cluster
+
+If the ingress controller is not configured to report its status to the cluster, Argo’s health check reports the health status as “progressing”, resulting in a timeout error during installation.
+
+1. Add this to the `Nginxingresscontrollers` resource file:
+
+ ```yaml
+ ...
+ spec:
+ reportIngressStatus:
+ enable: true
+ ...
+ ```
+
+1. Make sure you have a certificate secret in the same namespace as the runtime. Copy an existing secret if you don't have one.
+You will need to add this secret to the `ingress-master` after you complete the runtime installation.
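+
+If you create the secret manually, a minimal sketch of a standard Kubernetes TLS secret might look like this (the name `my-runtime-tls` and the namespace `codefresh` are illustrative placeholders):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-runtime-tls   # hypothetical name; reference it later in ingress-master
+  namespace: codefresh   # must be the same namespace as the runtime
+type: kubernetes.io/tls
+data:
+  tls.crt: <base64-encoded certificate>
+  tls.key: <base64-encoded private key>
+```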
+
+{::nomarkdown}
+
+{:/}
+
+#### Patch certificate secret
+> The certificate secret must be configured _after_ installing the hybrid runtime.
+
+Patch the certificate secret in `spec.tls` of the `ingress-master` resource.
+The secret must be in the same namespace as the runtime.
+
+1. Go to the runtime namespace with the NGINX ingress controller.
+1. In `ingress-master`, add to `spec.tls`:
+
+ ```yaml
+ tls:
+ - hosts:
+   - <host>
+   secretName: <secret-name>
+ ```
+
+{::nomarkdown}
+
+{:/}
+
+### NGINX Community version ingress configuration
+
+Codefresh has been tested with and supports implementations of the major providers. For your convenience, we have provided configuration instructions for both supported and untested providers in [Provider-specific configuration](#provider-specific-configuration).
+
+
+This section lists the specific configuration requirements for Codefresh, which must be completed _before_ installing the hybrid runtime.
+* Valid external IP address
+* Valid TLS certificate
+* TCP support
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services, and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+Here's an example of TCP configuration for NGINX Community on AWS.
+Verify that the `ingress-nginx-controller` service manifest has either of the following annotations:
+
+`service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"`
+OR
+`service.beta.kubernetes.io/aws-load-balancer-type: nlb`
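+
+As a sketch only, assuming the standard `ingress-nginx` deployment layout, the annotation might appear in the controller's Service manifest as follows:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: ingress-nginx-controller
+  namespace: ingress-nginx
+  annotations:
+    # either of these annotations satisfies the TCP requirement
+    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
+    # service.beta.kubernetes.io/aws-load-balancer-type: nlb
+spec:
+  type: LoadBalancer
+```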
+
+{::nomarkdown}
+
+{:/}
+
+#### Provider-specific configuration
+
+> The instructions are valid for `k8s.io/ingress-nginx`, the community version of NGINX.
+
+
+**AWS**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+For additional configuration options, see the ingress-nginx documentation for AWS.
+
+**Azure (AKS)**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+For additional configuration options, see the ingress-nginx documentation for AKS.
+
+**Bare Metal Clusters**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+Bare-metal clusters often have additional considerations. See the bare-metal considerations in the ingress-nginx documentation.
+
+**Digital Ocean**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+For additional configuration options, see the ingress-nginx documentation for Digital Ocean.
+
+**Docker Desktop**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+For additional configuration options, see the ingress-nginx documentation for Docker Desktop.
+> Note: By default, Docker Desktop services provision with `localhost` as their external address. Triggers in delivery pipelines cannot reach this instance unless they originate from the same machine on which Docker Desktop is running.
+
+**Exoscale**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+For additional configuration options, see the ingress-nginx documentation for Exoscale.
+
+**Google (GKE)**
+
+*Add firewall rules*
+
+By default, GKE limits outbound requests from nodes. For the runtime to communicate with the control plane in Codefresh, add a firewall-specific rule:
+
+- Find your cluster's network:
+  `gcloud container clusters describe [CLUSTER_NAME] --format=get"(network)"`
+- Get the cluster IPv4 CIDR:
+  `gcloud container clusters describe [CLUSTER_NAME] --format=get"(clusterIpv4Cidr)"`
+- Create the firewall rule, replacing `[CLUSTER_NAME]`, `[NETWORK]`, and `[CLUSTER_IPV4_CIDR]` with the relevant values:
+  `gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" --network="[NETWORK]" --source-ranges="[CLUSTER_IPV4_CIDR]" --allow=tcp,udp,icmp,esp,ah,sctp`
+
+*Use ingress-nginx*
+
+- Create a `cluster-admin` role binding:
+  `kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)`
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+We recommend reviewing the provider-specific documentation for GKE.
+
+**MicroK8s**
+- Install using the MicroK8s addon system:
+  `microk8s enable ingress`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+MicroK8s has not been tested with Codefresh, and may require additional configuration. For details, see the Ingress addon documentation.
+
+**MiniKube**
+- Install using the MiniKube addon system:
+  `minikube addons enable ingress`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+MiniKube has not been tested with Codefresh, and may require additional configuration. For details, see the Ingress addon documentation.
+
+**Oracle Cloud Infrastructure**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+For additional configuration options, see the ingress-nginx documentation for Oracle Cloud.
+
+**Scaleway**
+- Apply:
+  `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/scw/deploy.yaml`
+- Verify a valid external address exists:
+  `kubectl get svc ingress-nginx-controller -n ingress-nginx`
+
+For additional configuration options, see the ingress-nginx documentation for Scaleway.
+
+
+
+{::nomarkdown}
+
+{:/}
+
+### Traefik ingress configuration
+For detailed configuration information, see [Traefik ingress controller documentation](https://doc.traefik.io/traefik/providers/kubernetes-ingress){:target="\_blank"}.
+
+The table below lists the specific configuration requirements for Codefresh.
+
+{: .table .table-bordered .table-hover}
+
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Valid external IP address | _Before_ installing hybrid runtime |
+|Valid TLS certificate | |
+|TCP support | |
+|Enable report status to cluster| |
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+#### Enable report status to cluster
+By default, the Traefik ingress controller is not configured to report its status to the cluster. If not configured, Argo’s health check reports the health status as “progressing”, resulting in a timeout error during installation.
+
+To enable the controller to report its status, add `publishedService` to `providers.kubernetesIngress.ingressEndpoint`.
+
+The value must be in the format `"<namespace>/<service-name>"`, where:
+ `<service-name>` is the Traefik service from which to copy the status
+
+```yaml
+...
+providers:
+ kubernetesIngress:
+ ingressEndpoint:
+ publishedService: "<namespace>/<service-name>" # for example, "codefresh/traefik-default"
+...
+```
+
+{::nomarkdown}
+
+{:/}
+
+## GitOps CLI installation
+
+### GitOps CLI installation modes
+The table lists the modes available to install the Codefresh CLI.
+
+{: .table .table-bordered .table-hover}
+| Install mode | OS | Commands |
+| -------------- | ----------| ----------|
+| `curl` | MacOS-x64 | `curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-darwin-amd64.tar.gz \| tar zx && mv ./cf-darwin-amd64 /usr/local/bin/cf && cf version`|
+| | MacOS-m1 |`curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-darwin-arm64.tar.gz \| tar zx && mv ./cf-darwin-arm64 /usr/local/bin/cf && cf version` |
+| | Linux - X64 |`curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-linux-amd64.tar.gz \| tar zx && mv ./cf-linux-amd64 /usr/local/bin/cf && cf version` |
+| | Linux - ARM | `curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-linux-arm64.tar.gz \| tar zx && mv ./cf-linux-arm64 /usr/local/bin/cf && cf version`|
+| `brew` | N/A| `brew tap codefresh-io/cli && brew install cf2`|
+
+### Install the GitOps CLI
+Install the Codefresh CLI using the option that best suits you: `curl`, `brew`, or standard download.
+If you are not sure which OS to select for `curl`, simply select one, and Codefresh automatically identifies and selects the right OS for CLI installation.
+
+1. Do one of the following:
+ * For first-time installation, go to the Welcome page, select **+ Install Runtime**.
+ * If you have provisioned a GitOps Runtime, in the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}, and select **+ Add Runtime**.
+1. Install the Codefresh CLI:
+ * Select one of the installation modes.
+ * Generate the API key.
+ * Create the authentication context:
+ `cf config create-context codefresh --api-key <api-key>`
+
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/getting-started/quick-start/quick-start-download-cli.png"
+ url="/images/getting-started/quick-start/quick-start-download-cli.png"
+ alt="Download CLI to install runtime"
+ caption="Download CLI to install runtime"
+ max-width="30%"
+ %}
+
+
+{::nomarkdown}
+
+{:/}
+
+## Install Hybrid GitOps Runtime
+
+**Before you begin**
+* Make sure you meet the [minimum requirements](#minimum-system-requirements) for installation
+* Make sure you have a [Runtime token with the required scopes from your Git provider]({{site.baseurl}}/docs/reference/git-tokens)
+* [Download or upgrade to the latest version of the CLI](#gitops-cli-installation)
+* Review [Hybrid GitOps Runtime installation flags](#hybrid-gitops-runtime-installation-flags)
+* For ingress-based runtimes, make sure your ingress controller is configured correctly:
+  * [Ambassador ingress configuration](#ambassador-ingress-configuration)
+  * [AWS ALB ingress configuration](#aws-alb-ingress-configuration)
+  * [Istio ingress configuration](#istio-ingress-configuration)
+  * [NGINX Enterprise ingress configuration](#nginx-enterprise-ingress-configuration)
+  * [NGINX Community ingress configuration](#nginx-community-version-ingress-configuration)
+  * [Traefik ingress configuration](#traefik-ingress-configuration)
+
+
+{::nomarkdown}
+
+{:/}
+
+**How to**
+
+1. Do one of the following:
+ * If this is your first Hybrid Runtime installation, in the Welcome page, select **+ Install Runtime**.
+ * If you have provisioned a Hybrid Runtime, to provision additional runtimes, in the Codefresh UI, go to [**Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. Click **+ Add Runtimes**, and then select **Hybrid Runtimes**.
+1. Do one of the following:
+ * CLI wizard: Run `cf runtime install`, and follow the prompts to enter the required values.
+ * Silent install: Pass the required flags in the install command:
+ `cf runtime install --repo <repo-url> --git-token <git-token> --silent`
+ For the list of flags, see [Hybrid GitOps Runtime installation flags](#hybrid-gitops-runtime-installation-flags).
+1. If relevant, complete the configuration for these ingress controllers:
+ * [ALB AWS: Alias DNS record in Route 53 to load balancer](#create-an-alias-to-load-balancer-in-route53)
+ * [Istio: Configure cluster routing service](#cluster-routing-service)
+ * [NGINX Enterprise ingress controller: Patch certificate secret](#patch-certificate-secret)
+1. If you bypassed installing ingress resources with the `--skip-ingress` flag for ingress controllers not in the supported list, create and register Git integrations using these commands:
+ `cf integration git add default --runtime <runtime-name> --api-url <api-url>`
+ `cf integration git register default --runtime <runtime-name> --token <runtime-authentication-token>`
+
+
+{::nomarkdown}
+
+{:/}
+
+
+
+## Hybrid GitOps Runtime installation flags
+This section describes the required and optional flags to install a Hybrid GitOps Runtime.
+For documentation purposes, the flags are grouped into:
+* Runtime flags, relating to Runtime, cluster, and namespace requirements
+* Tunnel-based flags, for tunnel-based installation
+* Ingress-controller flags, for ingress-based installation
+* Git provider flags
+* Codefresh resource flags
+
+{::nomarkdown}
+
+{:/}
+
+### Runtime flags
+
+**Runtime name**
+Required.
+The Runtime name must start with a lower-case character, and can include up to 62 lower-case characters and numbers.
+* CLI wizard: Add when prompted.
+* Silent install: Add the `--runtime` flag and define the name.
+
+**Namespace resource labels**
+Optional.
+The label of the namespace resource to which you are installing the Hybrid Runtime. Labels are required to identify the networks that need access during installation, as is the case when using service meshes such as Istio.
+
+* CLI wizard and Silent install: Add the `--namespace-labels` flag, and define the labels in `key=value` format. Separate multiple labels with `commas`.
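+  For example, `--namespace-labels istio-injection=enabled,environment=staging` (hypothetical labels; `istio-injection=enabled` is the label Istio uses to mark namespaces for sidecar injection).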
+
+**Kube context**
+Required.
+The cluster defined as the default for `kubectl`. If you have more than one Kube context, the current context is selected by default.
+
+* CLI wizard: Select the Kube context from the list displayed.
+* Silent install: Explicitly specify the Kube context with the `--context` flag.
+
+**Access mode**
+The access mode for the runtime, which can be one of the following:
+* [Tunnel-based]({{site.baseurl}}/docs/installation/runtime-architecture/#tunnel-based-hybrid-gitops-runtime-architecture), for runtimes without ingress controllers. This is the default.
+* [Ingress-based]({{site.baseurl}}/docs/installation/runtime-architecture/#ingress-based-hybrid-gitops-runtime-architecture), for runtimes with ingress controllers.
+
+
+* CLI wizard: Select the access mode from the list displayed.
+* Silent install:
+ * For tunnel-based, see [Tunnel-based runtime flags](#tunnel-based-runtime-flags)
+ * For ingress-based, add the [Ingress controller flags](#ingress-controller-flags)
+
+ >If you don't specify any flags, tunnel-based access is automatically selected.
+
+**Shared configuration repository**
+The Git repository per Runtime account with shared configuration manifests.
+* CLI wizard and Silent install: Add the `--shared-config-repo` flag and define the path to the shared repo.
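+  For example, `--shared-config-repo https://github.com/<owner>/shared-gitops-config.git` (illustrative URL).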
+
+{::nomarkdown}
+
+{:/}
+
+### Tunnel-based runtime flags
+These flags are required to install tunnel-based Hybrid Runtimes, without an ingress controller.
+
+**IP allowlist**
+
+Optional.
+
+The allowed list of IPs from which to forward requests to the internal customer cluster for tunnel-based runtime installations. The allowlist can include IPv4 and IPv6 addresses, with or without subnets and subnet masks. Separate multiple IPs with commas.
+
+When omitted, all incoming requests are authenticated regardless of the IPs from which they originated.
+
+* CLI wizard and Silent install: Add the `--ips-allow-list` flag, followed by the IP address, or list of comma-separated IPs to define more than one. For example, `--ips-allow-list 77.126.94.70/16,192.168.0.0`
+
+{::nomarkdown}
+
+{:/}
+
+### Ingress controller flags
+
+
+**Skip ingress**
+Required, if you are using an unsupported ingress controller.
+For unsupported ingress controllers, bypass installing ingress resources with the `--skip-ingress` flag.
+In this case, after completing the installation, manually configure the cluster's routing service, and create and register Git integrations. See the last step in [Install the Hybrid GitOps Runtime](#install-hybrid-gitops-runtime).
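+
+For example, a silent install that bypasses ingress resources might look like this (illustrative values): `cf runtime install --runtime my-runtime --repo <repo-url> --git-token <git-token> --skip-ingress --silent`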
+
+**Ingress class**
+Required.
+
+* CLI wizard: Select the ingress class for Runtime installation from the list displayed.
+* Silent install: Explicitly specify the ingress class through the `--ingress-class` flag. Otherwise, Runtime installation fails.
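+  For example, `--ingress-class nginx` (matching the `ingressClassName` used in the examples in this article).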
+
+**Ingress host**
+Required.
+The IP address or host name of the ingress controller component.
+
+* CLI wizard: Automatically selects and displays the host, either from the cluster or the ingress controller associated with the **Ingress class**.
+* Silent install: Add the `--ingress-host` flag. If a value is not provided, the CLI takes the host from the ingress controller associated with the **Ingress class**.
+ > Important: For AWS ALB, the ingress host is created post-installation. However, when prompted, add the domain name you will create in `Route 53` as the ingress host.
+
+**Insecure ingress hosts**
+TLS certificates for the ingress host:
+If the ingress host does not have a valid TLS certificate, you can continue with the installation in insecure mode, which disables certificate validation.
+
+* CLI wizard: Automatically detects and prompts you to confirm continuing the installation in insecure mode.
+* Silent install: To continue with the installation in insecure mode, add the `--insecure-ingress-host` flag.
+
+**Internal ingress host**
+Optional.
+Enforce separation between internal (app-proxy) and external (webhook) communication by adding an internal ingress host for the app-proxy service in the internal network.
+For both CLI wizard and Silent install:
+
+* For new Runtime installations, add the `--internal-ingress-host` flag pointing to the ingress host for `app-proxy`.
+* For existing installations, commit changes to the installation repository by modifying the `app-proxy ingress` and the `<runtime-name>.yaml` files.
+ See [(Optional) Internal ingress host configuration for existing Hybrid Runtimes](#optional-internal-ingress-host-configuration-for-existing-hybrid-runtimes).
+
+{::nomarkdown}
+
+{:/}
+
+
+
+### Git provider and repo flags
+The Git provider defined for the Runtime.
+
+>Because Codefresh creates a [shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration) for the Runtimes in your account, the Git provider defined for the first Runtime you install in your account is used for all the other Runtimes in the same account.
+
+You can define any of the following Git providers:
+* GitHub:
+ * [GitHub](#github) (the default Git provider)
+ * [GitHub Enterprise](#github-enterprise)
+* GitLab:
+ * [GitLab Cloud](#gitlab-cloud)
+ * [GitLab Server](#gitlab-server)
+* Bitbucket:
+ * [Bitbucket Cloud](#bitbucket-cloud)
+ * [Bitbucket Server](#bitbucket-server)
+
+{::nomarkdown}
+
+{:/}
+
+
+
+#### GitHub
+GitHub is the default Git provider for Hybrid Runtimes. Being the default provider, for both the CLI wizard and Silent install, you need to provide only the repository URL and the Git runtime token.
+
+> For the required scopes, see [GitHub and GitHub Enterprise Runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes).
+
+`--repo <repo-url> --git-token <git-token>`
+
+where:
+* `--repo <repo-url>` (required), is the `HTTPS` clone URL of the Git repository for the Runtime installation, including the `.git` suffix. Copy the clone URL from your GitHub website (see [Cloning with HTTPS URLs](https://docs.github.com/en/get-started/getting-started-with-git/about-remote-repositories#cloning-with-https-urls){:target="\_blank"}).
+ If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh creates the repository during the installation.
+
+ Repo URL format:
+ `https://github.com/<owner>/<reponame>.git[/subdirectory][?ref=branch]`
+ where:
+ * `<owner>/<reponame>` is your username or organization name, followed by the name of the repo, identical to the HTTPS clone URL. For example, `https://github.com/nr-codefresh/codefresh.io.git`.
+ * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the Runtime is installed in the root of the repository. For example, `/runtimes/defs`.
+ * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the Runtime is installed in the default branch. For example, `codefresh-prod`.
+
+ Example:
+ `https://github.com/nr-codefresh/codefresh.io.git/runtimes/defs?ref=codefresh-prod`
+* `--git-token <git-token>` (required), is the Git token authenticating access to the Runtime installation repository (see [GitHub runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes)).
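+
+Putting it together, a hypothetical silent install against GitHub might look like:
+`cf runtime install --runtime my-runtime --repo https://github.com/nr-codefresh/codefresh.io.git --git-token <git-token> --silent`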
+
+{::nomarkdown}
+
+{:/}
+
+#### GitHub Enterprise
+
+> For the required scopes, see [GitHub and GitHub Enterprise runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes).
+
+
+`--provider github --repo <repo-url> --git-token <git-token>`
+
+where:
+* `--provider github` (required), defines GitHub Enterprise as the Git provider for the Runtime and the account.
+* `--repo <repo-url>` (required), is the `HTTPS` clone URL of the Git repository for the Runtime installation, including the `.git` suffix. Copy the clone URL for HTTPS from your GitHub Enterprise website (see [Cloning with HTTPS URLs](https://docs.github.com/en/get-started/getting-started-with-git/about-remote-repositories#cloning-with-https-urls){:target="\_blank"}).
+ If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh creates the repository during the installation.
+ Repo URL format:
+
+ `https://ghe-trial.devops.cf-cd.com/<owner>/<reponame>.git[/subdirectory][?ref=branch]`
+ where:
+ * `<owner>/<reponame>` is your username or organization name, followed by the name of the repo. For example, `codefresh-io/codefresh.io.git`.
+ * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the Runtime is installed in the root of the repository. For example, `/runtimes/defs`.
+ * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the Runtime is installed in the default branch. For example, `codefresh-prod`.
+
+ Example:
+ `https://ghe-trial.devops.cf-cd.com/codefresh-io/codefresh.io.git/runtimes/defs?ref=codefresh-prod`
+* `--git-token <git-token>` (required), is the Git token authenticating access to the Runtime installation repository (see [GitHub runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes)).
+
+
+{::nomarkdown}
+
+{:/}
+
+#### GitLab Cloud
+> For the required scopes, see [GitLab Cloud and GitLab Server runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes).
+
+
+`--provider gitlab --repo <repo-url> --git-token <git-token>`
+
+where:
+* `--provider gitlab` (required), defines GitLab Cloud as the Git provider for the Runtime and the account.
+* `--repo <repo-url>` (required), is the `HTTPS` clone URL of the Git project for the Runtime installation, including the `.git` suffix. Copy the clone URL for HTTPS from your GitLab website.
+ If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh creates the repository during the installation.
+
+ > Important: You must create the group with access to the project prior to the installation.
+
+ Repo URL format:
+
+ `https://gitlab.com/<owner-or-group>/<project-name>.git[/subdirectory][?ref=branch]`
+ where:
+ * `<owner-or-group>` is either your username, or if your project is within a group, the front-slash separated path to the project. For example, `nr-codefresh` (owner), or `parent-group/child-group` (group hierarchy).
+ * `<project-name>` is the name of the project. For example, `codefresh`.
+ * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the Runtime is installed in the root of the repository. For example, `/runtimes/defs`.
+ * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the Runtime is installed in the default branch. For example, `codefresh-prod`.
+
+ Examples:
+ `https://gitlab.com/nr-codefresh/codefresh.git/runtimes/defs?ref=codefresh-prod` (owner)
+
+ `https://gitlab.com/parent-group/child-group/codefresh.git/runtimes/defs?ref=codefresh-prod` (group hierarchy)
+
+* `--git-token <git-token>` (required), is the Git token authenticating access to the Runtime installation repository (see [GitLab runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes)).
+
+
+{::nomarkdown}
+
+{:/}
+
+
+#### GitLab Server
+
+> For the required scopes, see [GitLab Cloud and GitLab Server runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes).
+
+`--provider gitlab --repo <repo-url> --git-token <git-token>`
+
+where:
+* `--provider gitlab` (required), defines GitLab Server as the Git provider for the Runtime and the account.
+* `--repo <repo-url>` (required), is the `HTTPS` clone URL of the Git repository for the Runtime installation, including the `.git` suffix.
+ If the project doesn't exist, copy an existing clone URL and change the name of the project. Codefresh creates the project during the installation.
+
+ > Important: You must create the group with access to the project prior to the installation.
+
+ Repo URL format:
+ `https://gitlab-onprem.devops.cf-cd.com/<owner-or-group>/<project-name>.git[/subdirectory][?ref=branch]`
+ where:
+ * `<owner-or-group>` is your username, or if the project is within a group or groups, the name of the group. For example, `nr-codefresh` (owner), or `parent-group/child-group` (group hierarchy).
+ * `<project-name>` is the name of the project. For example, `codefresh`.
+ * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the Runtime is installed in the root of the repository. For example, `/runtimes/defs`.
+ * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the Runtime is installed in the default branch. For example, `codefresh-prod`.
+
+ Examples:
+ `https://gitlab-onprem.devops.cf-cd.com/nr-codefresh/codefresh.git/runtimes/defs?ref=codefresh-prod` (owner)
+
+ `https://gitlab-onprem.devops.cf-cd.com/parent-group/child-group/codefresh.git/runtimes/defs?ref=codefresh-prod` (group hierarchy)
+
+* `--git-token <git-token>` (required), is the Git token authenticating access to the Runtime installation repository (see [GitLab runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes)).
+
+
+{::nomarkdown}
+
+{:/}
+
+#### Bitbucket Cloud
+> For the required scopes, see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes).
+
+
+`--provider bitbucket --repo <repo-url> --git-user <git-username> --git-token <git-token>`
+
+where:
+* `--provider bitbucket` (required), defines Bitbucket Cloud as the Git provider for the Runtime and the account.
+* `--repo <repo-url>` (required), is the `HTTPS` clone URL of the Git repository for the Runtime installation, including the `.git` suffix.
+ If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh creates the repository during Runtime installation.
+ >Important: Remove the username, including the `@`, from the copied URL.
+
+ Repo URL format:
+
+ `https://bitbucket.org/<workspace-id>/<repo-name>.git[/subdirectory][?ref=branch]`
+ where:
+ * `<workspace-id>` is your workspace ID. For example, `nr-codefresh`.
+ * `<repo-name>` is the name of the repository. For example, `codefresh`.
+ * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the Runtime is installed in the root of the repository. For example, `/runtimes/defs`.
+ * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the Runtime is installed in the default branch. For example, `codefresh-prod`.
+
+ Example:
+ `https://bitbucket.org/nr-codefresh/codefresh.git/runtimes/defs?ref=codefresh-prod`
+* `--git-user <git-username>` (required), is your username for the Bitbucket Cloud account.
+* `--git-token <git-token>` (required), is the Git token authenticating access to the runtime installation repository (see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes)).
+
+
+{::nomarkdown}
+
+{:/}
+
+#### Bitbucket Server
+
+> For the required scopes, see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes).
+
+
+`--provider bitbucket-server --repo <repo-url> --git-user <git-username> --git-token <git-token>`
+
+where:
+* `--provider bitbucket-server` (required), defines Bitbucket Server as the Git provider for the Runtime and the account.
+* `--repo <repo-url>` (required), is the `HTTPS` clone URL of the Git repository for the Runtime installation, including the `.git` suffix.
+ If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh then creates the repository during the installation.
+ >Important: Remove the username, including the `@`, from the copied URL.
+
+ Repo URL format:
+
+ `https://bitbucket-server-8.2.devops.cf-cd.com:7990/scm/<owner-name>/<repo-name>.git[/subdirectory][?ref=branch]`
+ where:
+ * `<owner-name>` is your username or organization name. For example, `codefresh-io`.
+ * `<repo-name>` is the name of the repo. For example, `codefresh`.
+ * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the Runtime is installed in the root of the repository. For example, `/runtimes/defs`.
+ * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the Runtime is installed in the default branch. For example, `codefresh-prod`.
+
+ Example:
+ `https://bitbucket-server-8.2.devops.cf-cd.com:7990/scm/codefresh-io/codefresh.git/runtimes/defs?ref=codefresh-prod`
+* `--git-user <git-username>` (required), is your username for the Bitbucket Server account.
+* `--git-token <git-token>` (required), is the Git token authenticating access to the Runtime installation repository (see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes)).
+
+{::nomarkdown}
+
+{:/}
+
+### Codefresh resource flags
+**Codefresh demo resources**
+Optional.
+Install demo pipelines to use as a starting point to create your own GitOps pipelines. We recommend installing the demo resources as these are used in our quick start tutorials.
+
+* Silent install: Add the `--demo-resources` flag, and define its value as `true` (default), or `false`. For example, `--demo-resources=true`
+
+**Insecure flag**
+For _on-premises installations_, if the Ingress controller does not have a valid SSL certificate, to continue with the installation, add the `--insecure` flag to the installation command.
+
+{::nomarkdown}
+
+{:/}
+
+
+
+
+
+
+
+## (Optional) Internal ingress host configuration for existing Hybrid Runtimes
+If you already have provisioned Hybrid Runtimes, to use an internal ingress host for app-proxy communication and an external ingress host to handle webhooks, change the specs for the `Ingress` and `Runtime` resources in the Runtime installation repository. Use the examples as guidelines.
+
+`/apps/app-proxy/overlays/<runtime-name>/ingress.yaml`: change `host`
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: codefresh-cap-app-proxy
+ namespace: codefresh #replace with your runtime name
+spec:
+ ingressClassName: nginx
+ rules:
+ - host: my-internal-ingress-host # replace with the internal ingress host for app-proxy
+ http:
+ paths:
+ - backend:
+ service:
+ name: cap-app-proxy
+ port:
+ number: 3017
+ path: /app-proxy/
+ pathType: Prefix
+```
+
+`../<runtime-name>/bootstrap/<runtime-name>.yaml`: add `internalIngressHost`
+
+```yaml
+apiVersion: v1
+data:
+ base-url: https://g.codefresh.io
+ runtime: |
+ apiVersion: codefresh.io/v1alpha1
+ kind: Runtime
+ metadata:
+ creationTimestamp: null
+ name: codefresh #replace with your runtime name
+ namespace: codefresh #replace with your runtime name
+ spec:
+ bootstrapSpecifier: github.com/codefresh-io/cli-v2/manifests/argo-cd
+ cluster: https://7DD8390300DCEFDAF87DC5C587EC388C.gr7.us-east-1.eks.amazonaws.com
+ components:
+ - isInternal: false
+ name: events
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/argo-events
+ wait: true
+ - isInternal: false
+ name: rollouts
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/argo-rollouts
+ wait: false
+ - isInternal: false
+ name: workflows
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/argo-workflows
+ wait: false
+ - isInternal: false
+ name: app-proxy
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/app-proxy
+ wait: false
+ defVersion: 1.0.1
+ ingressClassName: nginx
+ ingressController: k8s.io/ingress-nginx
+ ingressHost: https://support.cf.com/
+ internalIngressHost: https://my-internal-ingress-host # add this line and replace my-internal-ingress-host with your internal ingress host
+ repo: https://github.com/NimRegev/my-codefresh.git
+ version: 99.99.99
+```
+
+
+## Related articles
+[Add external clusters to Hybrid and Hosted Runtimes]({{site.baseurl}}/docs/installation/managed-cluster/)
+[Monitoring & managing GitOps Runtimes]({{site.baseurl}}/docs/installation/monitor-manage-runtimes/)
+[Add Git Sources to runtimes]({{site.baseurl}}/docs/installation/git-sources/)
+[Shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration)
+[Troubleshoot Hybrid Runtime installation]({{site.baseurl}}/docs/troubleshooting/runtime-issues/)
diff --git a/_docs/installation/installation-options.md b/_docs/installation/installation-options.md
new file mode 100644
index 00000000..f88b9fe9
--- /dev/null
+++ b/_docs/installation/installation-options.md
@@ -0,0 +1,231 @@
+---
+title: "Installation environments"
+description: ""
+group: installation
+toc: true
+---
+To be changed and updated for ProjectOne
+
+The Codefresh platform supports two different installation environments, each with different installation options.
+
+* CI/CD installation environment
+ The CI/CD installation environment is optimized for Continuous Integration/Delivery with Codefresh pipelines. CI pipelines created in Codefresh fetch code from your Git repository, package/compile the code, and deploy the final artifact to a target environment.
+
+ The CI/CD installation environment supports these installation options:
+ * Hybrid, where the Codefresh CI/CD UI runs in the Codefresh cloud, and the builds run on customer premises
+ * SaaS, a full cloud version that is fully managed by Codefresh
+ * On-premises, where Codefresh CI/CD runs within the customer datacenter/cloud
+
+ On-premises and Hybrid CI/CD options are available to Enterprise customers looking for a "behind-the-firewall" solution.
+
+* GitOps installation environment
+ The GitOps installation environment is a full-featured solution for application deployments and releases. Powered by the Argo Project, Codefresh uses Argo CD, Argo Workflows, Argo Events, and Argo Rollouts, extended with unique functionality and features essential for enterprise deployments.
+
+ GitOps installations support Hosted and Hybrid options.
+
+## Comparison
+Both environments can co-exist, giving you the best of both worlds.
+
+TBD
+
+
+## Codefresh CI/CD installation options
+
+
+
+
+
+
+
+
+
+### Codefresh Cloud CI/CD - likely to be removed
+
+The Codefresh CI/CD Cloud version is the easiest way to start using Codefresh as it is fully managed and runs 100% on the cloud. Codefresh DevOps handles the maintenance and updates.
+
+You can also create a [free account]({{site.baseurl}}/docs/getting-started/create-a-codefresh-account/) on the SaaS version right away. The account is free forever, with some limitations on the number of builds.
+
+The cloud version runs on multiple clouds:
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/installation/codefresh-saas.png"
+ url="/images/administration/installation/codefresh-saas.png"
+ alt="Codefresh Cloud SaaS"
+ max-width="60%"
+ %}
+
+Codefresh Cloud is also compliant with [SOC2 - Type2](https://www.aicpa.org/SOC), demonstrating our commitment to security and availability.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/installation/soc2-type2-certified.png"
+ url="/images/administration/installation/soc2-type2-certified.png"
+ alt="SOC2 Type 2 certification"
+ max-width="40%"
+ %}
+
+The Cloud version has multi-account support with most Git providers (GitLab, GitHub, Bitbucket), as well as Azure and Google.
+
+
+### Codefresh Hybrid CI/CD
+
+The Hybrid CI/CD installation option is for organizations who want their source code to live within their premises, or have other security constraints. For more about the theory and implementation, see [CI/CD behind the firewall installation]({{site.baseurl}}/docs/administration/behind-the-firewall/).
+
+The UI runs on Codefresh infrastructure, while the builds happen in a Kubernetes cluster on the customer's premises.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/installation/hybrid-installation.png"
+ url="/images/administration/installation/hybrid-installation.png"
+ alt="Hybrid CI/CD installation"
+ max-width="70%"
+ %}
+
+
+CI/CD Hybrid installation strikes the perfect balance between security, flexibility, and ease of use. Codefresh still does the heavy lifting of maintaining most parts of the platform, while sensitive data (such as source code and internal services) never leaves the customer's premises.
+
+With Hybrid CI/CD installation, Codefresh can easily connect to internal [secure services]({{site.baseurl}}/docs/reference/behind-the-firewall/#using-secure-services-in-your-pipelines) that have no public presence.
+The UI part is still compliant with SOC2.
+
+
+Here are the security implications of CI/CD Hybrid installation:
+
+{: .table .table-bordered .table-hover}
+| Company Asset | Flow/Storage of data | Comments |
+| -------------- | ---------------------------- |-------------------------|
+| Source code | Stays behind the firewall | |
+| Binary artifacts | Stay behind the firewall | |
+| Build logs | Also sent to Codefresh Web application | |
+| Pipeline volumes | Stay behind the firewall | |
+| Pipeline variables | Defined in Codefresh Web application | |
+| Deployment docker images | Stay behind the firewall| Stored on your Docker registry |
+| Development docker images | Stay behind the firewall | Stored on your Docker registry|
+| Testing docker images | Stay behind the firewall| Stored on your Docker registry |
+| Inline pipeline definition | Defined in Codefresh Web application | |
+| Pipelines as YAML file | Stay behind the firewall | |
+| Test results | Stay behind the firewall | |
+| HTML Test reports | Shown on Web application | Stored in your S3 or Google bucket or Azure storage |
+| Production database data | Stays behind the firewall | |
+| Test database data | Stays behind the firewall | |
+| Other services (e.g. Queue, ESB) | Stay behind the firewall | |
+| Kubernetes deployment specs | Stay behind the firewall | |
+| Helm charts | Stay behind the firewall | |
+| Other deployment resources/script (e.g. terraform) | Stay behind the firewall | |
+| Shared configuration variables | Defined in Codefresh Web application | |
+| Deployment secrets (from git/Puppet/Vault etc) | Stay behind the firewall| |
+| Audit logs | Managed via Codefresh Web application | |
+| SSO/Idp Configuration | Managed via Codefresh Web application | |
+| User emails | Managed via Codefresh Web application | |
+| Access control rules | Managed via Codefresh Web application | |
+
+
+
+### Codefresh On-premises CI/CD
+
+For customers who want full control, Codefresh also offers an on-premises option for CI/CD installation. Both the UI and builds run on a Kubernetes cluster fully managed by the customer.
+
+While Codefresh can still help with the maintenance of CI/CD On-premises installations, we recommend the Hybrid CI/CD option first, as it offers the most flexibility while maintaining high security.
+
+### CI/CD installation comparison
+
+{: .table .table-bordered .table-hover}
+| Characteristic | Cloud | Hybrid | On-Premises |
+| -------------- | ------|--------|-------------|
+| Managed by | Codefresh | Codefresh and Customer | Customer |
+| UI runs on | public cloud | public cloud | private cluster |
+| Builds run on | public cloud | private cluster | private cluster |
+| Access to secure/private services | no | yes | yes |
+| Customer maintenance effort | none | some | full |
+| Best for | most companies | companies with security constraints | large-scale installations |
+| Available to | all customers | [enterprise plans](https://codefresh.io/contact-us/) | [enterprise plans](https://codefresh.io/contact-us/) |
+
+
+## Codefresh GitOps installation options
+
+Similar to CI/CD installation options, Codefresh GitOps also supports SaaS (Hosted) and Hybrid installation options:
+
+
+### Hosted GitOps
+The SaaS version of GitOps has Argo CD installed in the Codefresh cluster.
+The Hosted GitOps Runtime is installed and provisioned in a Codefresh cluster, and managed by Codefresh.
+Hosted environments are full-cloud environments, where all updates and improvements are managed by Codefresh, with zero maintenance overhead for you as the customer. Currently, you can add one Hosted GitOps Runtime per account.
+For the architecture, see [Hosted GitOps Runtime architecture]({{site.baseurl}}/docs/installation/architecture/#hosted-gitops-runtime-architecture).
+
+
+{% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/intro-hosted-hosted-initial-view.png"
+ url="/images/runtime/intro-hosted-hosted-initial-view.png"
+ alt="Hosted runtime setup"
+ caption="Hosted runtime setup"
+ max-width="80%"
+%}
+
+ For more information on how to set up the hosted environment, including provisioning hosted runtimes, see [Set up Hosted GitOps]({{site.baseurl}}/docs/installation/hosted-runtime/).
+
+### Hybrid GitOps
+The hybrid version of GitOps has Argo CD installed in the customer's cluster.
+The Hybrid GitOps Runtime is installed in the customer's cluster, and managed by the customer.
+The Hybrid GitOps Runtime is optimal for organizations with security constraints who want to manage CI/CD operations within their premises. Hybrid GitOps strikes the perfect balance between security, flexibility, and ease of use: Codefresh maintains and manages most aspects of the platform, apart from installing and upgrading Hybrid GitOps Runtimes, which are managed by the customer.
+
+
+{% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-list-view.png"
+ url="/images/runtime/runtime-list-view.png"
+ alt="Runtime List View"
+ caption="Runtime List View"
+ max-width="70%"
+%}
+
+ For more information on Hybrid environments, see [Hybrid GitOps Runtime requirements]({{site.baseurl}}/docs/installation/hybrid-gitops/#minimum-system-requirements) and [Installing Hybrid GitOps Runtimes]({{site.baseurl}}/docs/installation/hybrid-gitops/).
+
+### Hosted vs. Hybrid GitOps
+
+The table below highlights the main differences between Hosted and Hybrid GitOps.
+
+{: .table .table-bordered .table-hover}
+| GitOps Functionality |Feature | Hosted | Hybrid |
+| -------------- | -------------- |--------------- | --------------- |
+| Runtime | Installation | Provisioned by Codefresh | Provisioned by customer |
+| | Runtime cluster | Managed by Codefresh | Managed by customer |
+| | Number per account | One runtime | Multiple runtimes |
+| | External cluster | Managed by customer | Managed by customer |
+| | Upgrade | Managed by Codefresh | Managed by customer |
+| | Uninstall | Managed by customer | Managed by customer |
+| Argo CD | | Codefresh cluster | Customer cluster |
+| CI Ops | Delivery Pipelines |Not supported | Supported |
+| |Workflows | Not supported | Supported |
+| |Workflow Templates | Not supported | Supported |
+| CD Ops |Applications | Supported | Supported |
+| |Image enrichment | Supported | Supported |
+| | Rollouts | Supported | Supported |
+|Integrations | | Supported | Supported |
+|Dashboards |Home Analytics | Hosted runtime and deployments|Runtimes, deployments, Delivery Pipelines |
+| |DORA metrics | Supported |Supported |
+| |Applications | Supported |Supported |
+
+### Related articles
+[Architecture]({{site.baseurl}}/docs/installation/runtime-architecture/)
+[Add Git Sources to GitOps Runtimes]({{site.baseurl}}/docs/installation/git-sources/)
+[Shared configuration repository]({{site.baseurl}}/docs/reference/shared-configuration)
+
diff --git a/_docs/runtime/managed-cluster.md b/_docs/installation/managed-cluster.md
similarity index 69%
rename from _docs/runtime/managed-cluster.md
rename to _docs/installation/managed-cluster.md
index 25ae4546..fb010209 100644
--- a/_docs/runtime/managed-cluster.md
+++ b/_docs/installation/managed-cluster.md
@@ -1,42 +1,42 @@
---
-title: "Add external clusters to runtimes"
+title: "Add external clusters to GitOps Runtimes"
description: ""
-group: runtime
+group: installation
toc: true
---
-Register external clusters to provisioned hybrid or hosted runtimes in Codefresh. Once you add an external cluster, you can deploy applications to that cluster without having to install Argo CD in order to do so. External clusters allow you to manage multiple clusters through a single runtime.
+Register external clusters to provisioned Hybrid or Hosted GitOps Runtimes in Codefresh. Once you add an external cluster, you can deploy applications to that cluster without having to install Argo CD on it. Manage multiple external clusters through a single Runtime.
-When you add an external cluster to a provisioned runtime, the cluster is registered as a managed cluster. A managed cluster is treated as any other managed K8s resource, meaning that you can monitor its health and sync status, deploy applications on the cluster and view information in the Applications dashboard, and remove the cluster from the runtime's managed list.
+When you add an external cluster to a provisioned Runtime, the cluster is registered as a managed cluster. A managed cluster is treated as any other managed K8s resource, meaning that you can monitor its health and sync status, deploy applications to it, view information in the Applications dashboard, and remove the cluster from the Runtime's managed list.
Add managed clusters through:
* Codefresh CLI
* Kustomize
-Adding a managed cluster via Codefresh ensures that Codefresh applies the required RBAC resources (`ServiceAccount`, `ClusterRole` and `ClusterRoleBinding`) to the target cluster, creates a `Job` that updates the selected runtime with the information, registers the cluster in Argo CD as a managed cluster, and updates the platform with the new cluster information.
+Adding a managed cluster via Codefresh ensures that Codefresh applies the required RBAC resources (`ServiceAccount`, `ClusterRole` and `ClusterRoleBinding`) to the target cluster, creates a `Job` that updates the selected Runtime with the information, registers the cluster in Argo CD as a managed cluster, and updates the platform with the new cluster information.
-### Add a managed cluster with Codefresh CLI
-Add an external cluster to a provisioned runtime through the Codefresh CLI. When adding the cluster, you can also add labels and annotations to the cluster, which are added to the cluster secret created by Argo CD.
+## Add a managed cluster with Codefresh CLI
+Add an external cluster to a provisioned GitOps Runtime through the Codefresh CLI. When adding the cluster, you can also add labels and annotations to the cluster, which are added to the cluster secret created by Argo CD.
Optionally, to first generate the YAML manifests, and then manually apply them, use the `dry-run` flag in the CLI.
**Before you begin**
-
-* For _hosted_ runtimes: [Configure access to these IP addresses]({{site.baseurl}}/docs/administration/platform-ip-addresses/)
+* For _Hosted_ Runtimes: [Configure access to these IP addresses]({{site.baseurl}}/docs/administration/platform-ip-addresses/)
* Verify that:
- * Your Git personal access token is valid and has the correct permissions
- * You have installed the latest version of the Codefresh CLI
+ * Your Git personal access token is valid and has the [required scopes]({{site.baseurl}}/docs/reference/git-tokens)
+ * You have installed the [latest version of the Codefresh CLI]({{site.baseurl}}/docs/installation/monitor-manage-runtimes/#hybrid-gitops-upgrade-gitops-cli)
**How to**
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
-1. From either the **Topology** or **List** views, select the runtime to which to add the cluster.
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. From either the **Topology** or **List** views, select the Runtime to which to add the cluster.
1. Topology View: Select {::nomarkdown}
{:/}.
List View: Select the **Managed Clusters** tab, and then select **+ Add Cluster**.
1. In the Add Managed Cluster panel, copy and run the command:
- `cf cluster add [--labels label-key=label-value] [--annotations annotation-key=annotation-value][--dry-run]`
+ `cf cluster add [runtime-name] [--labels label-key=label-value] [--annotations annotation-key=annotation-value][--dry-run]`
where:
+ * `runtime-name` is the name of the Runtime to which to add the cluster.
* `--labels` is optional, and required to add labels to the cluster. When defined, add a label in the format `label-key=label-value`. Separate multiple labels with `commas`.
* `--annotations` is optional, and required to add annotations to the cluster. When defined, add an annotation in the format `annotation-key=annotation-value`. Separate multiple annotations with `commas`.
* `--dry-run` is optional, and required if you want to generate a list of YAML manifests that you can redirect and apply manually with `kubectl`.
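+
+ For example, a hypothetical invocation with illustrative runtime, label, and annotation values, redirecting the generated manifests to a file for manual review:
+
+ `cf cluster add my-runtime --labels env=staging,team=platform --annotations owner=dev-ops --dry-run > manifests.yaml`
+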
@@ -54,7 +54,7 @@ Optionally, to first generate the YAML manifests, and then manually apply them,
{:start="5"}
1. If you used `dry-run`, apply the generated manifests to the same target cluster on which you ran the command.
- Here is an example of the YAML manifest generated with the `--dry-run` flag. Note that there are placeholders in the example, which are replaced with the actual values with `--dry-run`.
+ Here is an example of the YAML manifest generated with the `--dry-run` flag. Note that the example has placeholders, which are replaced with the actual values during the `--dry-run`.
```yaml
@@ -177,9 +177,9 @@ spec:
```
-The new cluster is registered to the runtime as a managed cluster.
+The new cluster is registered to the Runtime as a managed cluster.
-### Add a managed cluster with Kustomize
+## Add a managed cluster with Kustomize
Create a `kustomization.yaml` file with the information shown in the example below, and run `kustomize build` on it.
```yaml
@@ -222,16 +222,20 @@ resources:
```
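+
+One way to build and apply the output, assuming your current `kubectl` context points to the target cluster (an illustrative pipeline, not the only option):
+
+`kustomize build . | kubectl apply -f -`
+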
-### Work with managed clusters
-Work with managed clusters in hybrid or hosted runtimes in either the Topology or List runtime views. For information on runtime views, see [Runtime views]({{site.baseurl}}/docs/runtime/runtime-views).
-As the cluster is managed through the runtime, updates to the runtime automatically updates the components on all the managed clusters that include it.
+## Work with managed clusters
+Work with managed clusters in either the Topology or List Runtime views. For information on Runtime views, see [Runtime views]({{site.baseurl}}/docs/runtime/runtime-views).
+As the cluster is managed through the Runtime, updates to the Runtime automatically update the components on all the managed clusters registered to it.
View connection status for the managed cluster, and health and sync errors. Health and sync errors are flagged by the error notification in the toolbar, and visually flagged in the List and Topology views.
-#### Install Argo Rollouts
-Install Argo Rollouts directly from Codefresh with a single click to visualize rollout progress in the [Applications dashboard]({{site.baseurl}}/docs/deployment/applications-dashboard/). If Argo Rollouts has not been installed, an **Install Argo Rollouts** button is displayed on selecting the managed cluster.
+### Install Argo Rollouts
+Applications with `rollout` resources need Argo Rollouts on the target cluster, both to visualize rollouts in the Applications dashboard and control rollout steps with the Rollout Player.
+If Argo Rollouts has not been installed on the target cluster, selecting the cluster displays the **Install Argo Rollouts** button.
+
+Install Argo Rollouts with a single click to execute rollout instructions, deploy the application, and visualize rollout progress in the [Applications dashboard]({{site.baseurl}}/docs/deployment/applications-dashboard/).
+
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
1. Select **Topology View**.
1. Select the target cluster, and then select **+ Install Argo Rollouts**.
@@ -246,16 +250,16 @@ Install Argo Rollouts directly from Codefresh with a single click to visualize r
%}
-#### Remove a managed cluster from the Codefresh UI
-Remove a cluster from the runtime's list of managed clusters from the Codefresh UI.
+### Remove a managed cluster from the Codefresh UI
+Remove a cluster from the Runtime's list of managed clusters from the Codefresh UI.
> You can also remove it through the CLI.
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
1. Select either the **Topology View** or the **List View** tabs.
1. Do one of the following:
- * In the Topology View, select the cluster node from the runtime it is registered to.
- * In the List View, select the runtime, and then select the **Managed Clusters** tab.
+ * In the Topology View, select the cluster node from the Runtime it is registered to.
+ * In the List View, select the Runtime, and then select the **Managed Clusters** tab.
1. Select the three dots next to the cluster name, and then select **Uninstall** (Topology View) or **Remove** (List View).
{% include
@@ -269,8 +273,8 @@ Remove a cluster from the runtime's list of managed clusters from the Codefresh
%}
-#### Remove a managed cluster through the Codefresh CLI
-Remove a cluster from the list managed by the runtime, through the CLI.
+### Remove a managed cluster through the Codefresh CLI
+Remove a cluster from the Runtime's list of managed clusters through the CLI.
* Run:
 `cf cluster remove --server-url <server-url>`
@@ -279,7 +283,6 @@ Remove a cluster from the list managed by the runtime, through the CLI.
 `<server-url>` is the URL of the server on which the managed cluster is installed.
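+
+ For example, a hypothetical invocation with an illustrative server URL:
+ `cf cluster remove --server-url https://my-managed-cluster.example.com:6443`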
-### Related articles
-[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
-[Manage provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
-[(Hybrid) Monitor provisioned runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
\ No newline at end of file
+## Related articles
+[Add Git Sources to GitOps Runtimes]({{site.baseurl}}/docs/installation/git-sources/)
+[Monitoring & managing GitOps Runtimes]({{site.baseurl}}/docs/installation/monitor-manage-runtimes/)
diff --git a/_docs/installation/monitor-manage-runtimes.md b/_docs/installation/monitor-manage-runtimes.md
new file mode 100644
index 00000000..08267a95
--- /dev/null
+++ b/_docs/installation/monitor-manage-runtimes.md
@@ -0,0 +1,643 @@
+---
+title: "Monitoring & managing GitOps Runtimes"
+description: ""
+group: installation
+redirect_from:
+ - /monitor-manage-runtimes/
+ - /monitor-manage-runtimes
+toc: true
+---
+
+
+The **Runtimes** page displays the provisioned GitOps Runtimes in your account: Hybrid Runtimes, and the Hosted Runtime if you have one.
+
+View Runtime components and information in List or Topology view formats to monitor and manage them.
+
+{% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-list-view.png"
+ url="/images/runtime/runtime-list-view.png"
+ alt="Runtime List View"
+ caption="Runtime List View"
+ max-width="70%"
+%}
+
+Monitor provisioned GitOps Runtimes for security, health, and sync errors:
+
+* (Hybrid and Hosted) View/download logs for Runtimes and for Runtime components
+* (Hybrid) Restore provisioned Runtimes
+* (Hybrid) Configure browsers to allow access to insecure Runtimes
+* (Hybrid) Monitor notifications in the Activity Log
+
+
+Manage provisioned GitOps Runtimes:
+* [Add managed clusters to GitOps Runtimes]({{site.baseurl}}/docs/installation/managed-cluster/)
+* [Add and manage Git Sources for GitOps Runtimes]({{site.baseurl}}/docs/installation/git-sources/)
+* Upgrade GitOps CLI
+* Upgrade Hybrid GitOps Runtimes
+* Uninstall GitOps Runtimes
+
+
+
+> Unless specified otherwise, all options are common to both types of GitOps Runtimes. If an option is valid only for Hybrid GitOps, it is indicated as such.
+
+
+## GitOps Runtime views
+
+View provisioned GitOps Runtimes in List or Topology view formats.
+
+* List view: The default view. Displays the list of provisioned Runtimes, the clusters managed by them, and the Git Sources associated with them.
+* Topology view: Displays a hierarchical view of Runtimes and the clusters managed by them, with the health and sync status of each cluster.
+
+### List view
+
+The List view is a grid-view of the provisioned Runtimes.
+
+Here is an example of the List view for runtimes.
+{% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-list-view.png"
+ url="/images/runtime/runtime-list-view.png"
+ alt="Runtime List View"
+ caption="Runtime List View"
+ max-width="70%"
+%}
+
+Here is a description of the information in the List View.
+
+{: .table .table-bordered .table-hover}
+| List View Item| Description |
+| -------------- | ---------------- |
+|**Name**| The name of the provisioned GitOps Runtime. |
+|**Type**| The type of GitOps Runtime provisioned, either **Hybrid** or **Hosted**. |
+|**Cluster/Namespace**| The K8s API server endpoint, as well as the namespace within the cluster. |
+|**Modules**| The modules installed based on the type of provisioned Runtime. Hybrid Runtimes include CI and CD Ops modules. Hosted Runtimes include CD Ops. |
+|**Managed Cluster**| The number of managed clusters, if any, for the Runtime. To view the list of managed clusters, select the Runtime, and then the **Managed Clusters** tab. To work with managed clusters, see [Adding external clusters to GitOps Runtimes]({{site.baseurl}}/docs/installation/managed-cluster).|
+|**Version**| The version of the Runtime currently installed. **Update Available!** indicates there are later versions of the Runtime. To see all the commits to the Runtime, mouse over **Update Available!**, and select **View Complete Change Log**.|
+|**Last Updated**| The most recent update information from the Runtime to the Codefresh platform. Updates are sent to the platform typically every few minutes. Longer update intervals may indicate networking issues.|
+|**Sync Status**| The health and sync status of the Runtime or cluster. An error icon indicates health or sync errors in the Runtime, or in a managed cluster if one was added to the Runtime; the Runtime name is colored red. A sync icon indicates that the Runtime is being synced to the cluster on which it is provisioned. |
+
+### Topology view
+
+A hierarchical visualization of the provisioned Runtimes. The Topology view makes it easy to identify key information such as versions, and health and sync status, for both the provisioned Runtime and the clusters managed by it.
+Here is an example of the Topology view for Runtimes.
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-topology-view.png"
+ url="/images/runtime/runtime-topology-view.png"
+ alt="Runtime Topology View"
+ caption="Runtime Topology View"
+ max-width="30%"
+%}
+
+Here is a description of the information in the Topology view.
+
+{: .table .table-bordered .table-hover}
+| Topology View Item | Description |
+| ------------------------| ---------------- |
+|**Runtime** | The provisioned Runtime. Hybrid Runtimes display the name of the K8s API server endpoint with the cluster. Hosted Runtimes display 'hosted'. |
+|**Cluster** | The local cluster, and managed clusters if any, for the Runtime. The local cluster is always displayed as `in-cluster`, with the server URL set to `https://kubernetes.default.svc/`. Icons differentiate the local cluster from managed clusters, with an option to add a new managed cluster. To view cluster components, select the cluster. To add and work with managed clusters, see [Adding external clusters to GitOps Runtimes]({{site.baseurl}}/docs/installation/managed-cluster). |
+|**Health/Sync status** | The health and sync status of the Runtime or cluster. An error icon indicates health or sync errors in the Runtime, or in a managed cluster if one was added to the Runtime; the Runtime or cluster node is bordered in red and the name is colored red. A sync icon indicates that the Runtime is being synced to the cluster on which it is provisioned. |
+|**Search and View options** | Find a Runtime or its clusters by typing part of the Runtime/cluster name, and then navigate to the entries found. Topology view options include resize to window, zoom in, zoom out, and full-screen view. |
+
+## Managing provisioned GitOps Runtimes
+* [Reset shared configuration repository for GitOps Runtimes](#reset-shared-configuration-repository-for-gitops-runtimes)
+* [(Hybrid GitOps) Upgrade GitOps CLI](#hybrid-gitops-upgrade-gitops-cli)
+* [(Hybrid GitOps) Upgrade provisioned Runtimes](#hybrid-gitops-upgrade-provisioned-runtimes)
+* [Uninstall provisioned GitOps Runtimes](#uninstall-provisioned-gitops-runtimes)
+* [Update Git tokens for Runtimes](#update-git-tokens-for-runtimes)
+
+### Reset shared configuration repository for GitOps Runtimes
+Codefresh creates the [shared configuration repository]({{site.baseurl}}/docs/reference/shared-configuration) when you install the first Hybrid or Hosted GitOps Runtime for your account, and uses it for all Runtimes you add to the same account.
+
+If needed, you can reset the location of the shared configuration repository in your account and re-initialize it. For example, when moving from evaluation to production.
+Uninstall all the existing runtimes in your account, and then run the reset command. On the next installation, Codefresh re-initializes the shared configuration repo.
+
+**Before you begin**
+[Uninstall every runtime in the account](#uninstall-provisioned-gitops-runtimes)
+
+**How to**
+* Run:
+ `cf config --reset-shared-config-repo`
+
+### (Hybrid GitOps) Upgrade GitOps CLI
+Upgrade the CLI to the latest version to prevent Runtime installation errors.
+
+1. Check the version of the CLI you have installed:
+ `cf version`
+1. Compare with the [latest version](https://github.com/codefresh-io/cli-v2/releases){:target="\_blank"} released by Codefresh.
+1. Select and run the appropriate command:
+
+{: .table .table-bordered .table-hover}
+| Download mode | OS | Commands |
+| -------------- | ----------| ----------|
+| `curl` | MacOS-x64 | `curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-darwin-amd64.tar.gz \| tar zx && mv ./cf-darwin-amd64 /usr/local/bin/cf && cf version`|
+| | MacOS-m1 |`curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-darwin-arm64.tar.gz \| tar zx && mv ./cf-darwin-arm64 /usr/local/bin/cf && cf version` |
+| | Linux-x64 |`curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-linux-amd64.tar.gz \| tar zx && mv ./cf-linux-amd64 /usr/local/bin/cf && cf version` |
+| | Linux-ARM | `curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-linux-arm64.tar.gz \| tar zx && mv ./cf-linux-arm64 /usr/local/bin/cf && cf version`|
+| `brew` | N/A| `brew tap codefresh-io/cli && brew install cf2`|
+
+### (Hybrid GitOps) Upgrade provisioned Runtimes
+
+Upgrade provisioned Hybrid Runtimes to install critical security updates or the latest versions of all components. Upgrade a provisioned Hybrid Runtime by running a silent upgrade or through the CLI wizard.
+If you have managed clusters for the Hybrid Runtime, upgrading the Runtime automatically updates runtime components within the managed cluster as well.
+
+> When there are security updates, the UI displays the alert, _At least one runtime requires a security update_. The Version column displays an _Update Required!_ notification.
+
+> If you are upgrading from an older Hybrid Runtime version, you may need to manually define or create the shared configuration repo for your account as part of the upgrade. See [Shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration/).
+
+
+**Before you begin**
+For both silent and CLI-wizard-based upgrades, make sure you have:
+
+* The latest version of the Codefresh CLI
+ Run `cf version` to see your version and [click here](https://github.com/codefresh-io/cli-v2/releases){:target="\_blank"} to compare with the latest CLI version.
+* A valid Git token with [the required scopes]({{site.baseurl}}/docs/reference/git-tokens)
+
+**Silent upgrade**
+
+* Pass the mandatory flags in the upgrade command:
+
+ `cf runtime upgrade --git-token <git-token> --silent`
+ where:
+ `<git-token>` is a valid Git token with the correct scopes.
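+
+ For example, a hypothetical silent upgrade with illustrative token and repo-path values; the `--shared-config-repo` flag is optional, to manually define the shared configuration repo (see the CLI wizard steps below):
+
+ `cf runtime upgrade --git-token ghp_exampleToken123 --shared-config-repo github.com/my-org/codefresh-shared-config --silent`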
+
+**CLI wizard-based upgrade**
+
+1. In the Codefresh UI, make sure you are in [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. Switch to either the **List View** or to the **Topology View**.
+1. **List view**:
+ * Select the Runtime name.
+ * To see all the commits to the Runtime, in the Version column, mouse over **Update Available!**, and select **View Complete Change Log**.
+ * On the top-right, select **Upgrade**.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-list-view-upgrade.png"
+ url="/images/runtime/runtime-list-view-upgrade.png"
+ alt="List View: Upgrade runtime option"
+ caption="List View: Upgrade runtime option"
+ max-width="30%"
+ %}
+
+ **Topology view**:
+ Select the Runtime cluster, and from the panel, select the three dots and then select **Upgrade Runtime**.
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtiime-topology-upgrade.png"
+ url="/images/runtime/runtiime-topology-upgrade.png"
+ alt="Topology View: Upgrade runtime option"
+ caption="Topology View: Upgrade runtime option"
+ max-width="30%"
+%}
+
+{:start="4"}
+
+1. If you have already installed the Codefresh CLI, in the Install Upgrades panel, copy the upgrade command.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/install-upgrades.png"
+ url="/images/runtime/install-upgrades.png"
+ alt="Upgrade runtime"
+ caption="Upgrade runtime panel"
+ max-width="30%"
+%}
+
+{:start="5"}
+1. In your terminal, paste the command, and do the following:
+ * Update the Git token value.
+ * To manually define the shared configuration repo, add the `--shared-config-repo` flag with the path to the repo.
+1. Confirm to start the upgrade.
+
+
+
+
+
+
+### Uninstall provisioned GitOps Runtimes
+
+Uninstall provisioned GitOps Runtimes that are not in use. Uninstall a Runtime through a silent uninstall or through the CLI wizard.
+> Uninstalling a Runtime removes the Git Sources and managed clusters associated with it.
+
+**Before you begin**
+For both types of uninstalls, make sure you have:
+
+* The latest version of the GitOps CLI
+* A valid runtime Git token
+* The Kube context from which to uninstall the provisioned Runtime
+
+**Silent uninstall**
+Pass the mandatory flags in the uninstall command:
+ `cf runtime uninstall --git-token <git-token> --silent`
+ where:
+ `--git-token` is a valid runtime token with the `repo` and `admin-repo.hook` scopes.
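+
+ For example, a hypothetical invocation with an illustrative token value; add the `--force` flag if the uninstall fails, as noted in the steps below:
+ `cf runtime uninstall --git-token ghp_exampleToken123 --silent`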
+
+**CLI wizard uninstall**
+
+1. In the Codefresh UI, make sure you are in [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. Switch to either the **List View** or to the **Topology View**.
+1. **List view**: On the top-right, select the three dots and then select **Uninstall**.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/uninstall-location.png"
+ url="/images/runtime/uninstall-location.png"
+ alt="List View: Uninstall runtime option"
+ caption="List View: Uninstall runtime option"
+ max-width="30%"
+%}
+
+**Topology view**: Select the Runtime node, and from the panel, select the three dots and then select **Uninstall Runtime**.
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-topology-uninstall.png"
+ url="/images/runtime/runtime-topology-uninstall.png"
+ alt="Topology View: Uninstall runtime option"
+ caption="Topology View: Uninstall runtime option"
+ max-width="30%"
+%}
+
+{:start="4"}
+
+1. If you already have the latest version of the Codefresh CLI, in the Uninstall Codefresh Runtime panel, copy the uninstall command.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/uninstall.png"
+ url="/images/runtime/uninstall.png"
+ alt="Uninstall Codefresh runtime"
+ caption="Uninstall Codefresh runtime"
+ max-width="40%"
+%}
+
+{:start="5"}
+
+1. In your terminal, paste the command, and update the Git token value.
+1. Select the Kube context from which to uninstall the Runtime, and then confirm the uninstall.
+1. If you get errors, run the uninstall command again, with the `--force` flag.
+
+
+
+### Update Git tokens for Runtimes
+
+Provisioned Runtimes require valid Git tokens at all times to authenticate the Git actions you perform as a user.
+>These tokens are specific to the user, and the same token can be used for multiple Runtimes.
+
+There are two different situations when you need to update Git tokens:
+* Update invalid, revoked, or expired tokens: Codefresh automatically flags Runtimes with such tokens. It is mandatory to update the Git tokens to continue working with the platform.
+* Update valid tokens: Optional. You may want to update Git tokens, even valid ones, by deleting the existing token and replacing it with a new token.
+
+The methods for updating any Git token are the same regardless of the reason for the update:
+* OAuth2 authorization, if your admin has registered an OAuth Application for Codefresh
+* Git access token authentication, by generating a personal access token in your Git provider account with the correct scopes
+
+**Before you begin**
+* To authenticate through a Git access token, make sure your token is valid and has [the required scopes]({{site.baseurl}}/docs/reference/git-tokens)
+
+**How to**
+1. Do one of the following:
+ * If you see a notification in the Codefresh UI about invalid Runtime tokens, click **Update Token**.
+ The Runtimes page shows Runtimes with invalid tokens prefixed by a key icon. Mousing over the icon shows that the token is invalid.
+ * To update an existing token, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. Select the Runtime for which to update the Git token.
+1. From the context menu with the additional actions at the top-right, select **Update Git Runtime token**.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/update-git-runtime-token.png"
+ url="/images/runtime/update-git-runtime-token.png"
+ alt="Update Git runtime token option"
+ caption="Update Git runtime token option"
+ max-width="40%"
+%}
+
+{:start="4"}
+1. Do one of the following:
+ * If your admin has set up OAuth access, click **Authorize Access to Git Provider**. Go to _step 5_.
+ * Alternatively, authenticate with an access token from your Git provider. Go to _step 6_.
+
+{:start="5"}
+1. For OAuth2 authorization:
+ > If the application is not registered, you get an error. Contact your admin for help.
+ * Enter your credentials, and select **Sign In**.
+ * If required, for example, when two-factor authentication is configured, complete the verification.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/administration/user-settings/oauth-user-authentication.png"
+ url="/images/administration/user-settings/oauth-user-authentication.png"
+ alt="Authorizing access with OAuth2"
+ caption="Authorizing access with OAuth2"
+ max-width="30%"
+ %}
+
+{:start="6"}
+1. For Git token authentication, expand **Advanced authorization options**, and then paste the generated token in the **Git runtime token** field.
+
+1. Click **Update Token**.
+
+## Monitoring GitOps Runtimes
+* [View/download logs to troubleshoot Runtimes](#viewdownload-logs-to-troubleshoot-runtimes)
+* [(Hybrid GitOps) Restoring provisioned Runtimes](#hybrid-gitops-restoring-provisioned-runtimes)
+* [(Hybrid GitOps) Configure browser to allow insecure Runtimes](#hybrid-gitops-configure-browser-to-allow-insecure-runtimes)
+* [(Hybrid GitOps) View notifications in Activity Log](#hybrid-gitops-view-notifications-in-activity-log)
+* [(Hybrid GitOps) Troubleshoot health and sync errors for Runtimes](#hybrid-gitops-troubleshoot-health-and-sync-errors-for-runtimes)
+
+### View/download logs to troubleshoot Runtimes
+Logs are available for completed Runtimes, both for the Runtime and for individual Runtime components. Download log files for offline viewing and analysis, or view online logs for a Runtime component, and download if needed for offline analysis. Online logs support free-text search, search-result navigation, and line-wrap for enhanced readability.
+
+Log files include events from the date of the application launch, with the newest events listed first.
+
+
+#### Download logs for Runtimes
+Download the log file for a Runtime. The Runtime log is downloaded as a `.tar.gz` file, which contains the individual log files for each runtime component.
+
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. If needed, switch to **List View**, and then select the runtime for which to download logs.
+1. From the context menu, select **Download All Logs**.
+ The log file is downloaded to the Downloads folder or the folder designated for downloads, with the filename `<runtime-name>.tar.gz`. For example, `codefreshv2-production2.tar.gz`.
+
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-logs-download-all.png"
+ url="/images/runtime/runtime-logs-download-all.png"
+ alt="Download logs for selected runtime"
+ caption="Download logs for selected runtime"
+ max-width="40%"
+%}
+
+
+{:start="4"}
+1. To view the log files of the individual components, extract the archive; a sample command follows the example below.
+ Here is an example of the folder with the individual logs.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-logs-folder-view.png"
+ url="/images/runtime/runtime-logs-folder-view.png"
+ alt="Individual log files in folder"
+ caption="Individual log files in folder"
+ max-width="50%"
+%}
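+
+ A minimal command to extract the `.tar.gz` archive, using the example filename above:
+
+ `tar -xzf codefreshv2-production2.tar.gz`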
+
+{:start="5"}
+1. Open a log file with the text editor of your choice.
+
+
+#### View/download logs for Runtime components
+View online logs for any Runtime component, and if needed, download the log file for offline viewing and analysis.
+
+Online logs show up to 1000 of the most recent events (lines), updated in real time. Downloaded logs include all the events, from the application launch to the date and time of download.
+
+1. In the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. If needed, switch to **List View**, and then select the Runtime.
+1. Select the Runtime component and then select **View Logs**.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-logs-view-component.png"
+ url="/images/runtime/runtime-logs-view-component.png"
+ alt="View log option for individual runtime component"
+ caption="View log option for individual runtime component"
+ max-width="40%"
+%}
+
+
+{:start="4"}
+1. Do the following:
+ * Search by free-text for any string, and click the next and previous buttons to navigate between the search results.
+ * To switch on line-wrap for readability, click **Wrap**.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-logs-screen-view.png"
+ url="/images/runtime/runtime-logs-screen-view.png"
+ alt="Runtime component log example"
+ caption="Runtime component log example"
+ max-width="50%"
+%}
+
+{:start="5"}
+1. To download the log, click **Download**.
+ The file is downloaded with a `.log` extension.
+
+### (Hybrid GitOps) Restoring provisioned Runtimes
+
+In case of cluster failure, restore the provisioned Hybrid Runtime from the existing runtime installation repository.
+For partial or complete cluster failures, you can restore the Runtime to either the failed cluster or to a different cluster.
+Restoring the provisioned Runtime reinstalls it, leveraging the resources in the existing Runtime repo.
+
+Restoring the runtime:
+* Applies `argo-cd` from the installation manifests in your repo to your cluster
+* Associates `argo-cd` with the existing installation repo
+* Applies the Runtime and `argo-cd` secrets to the cluster
+* Updates the Runtime config map (`<runtime-name>.yaml` in the `bootstrap` directory) with the new cluster configuration for these fields (see the illustrative sketch after this list):
+ * `cluster`
+ * `ingressClassName`
+ * `ingressController`
+ * `ingressHost`
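+
+A minimal sketch of how these fields might appear in the config map after restoring to a different cluster, with illustrative values:
+
+```yaml
+cluster: https://new-cluster-api-endpoint.example.com
+ingressClassName: nginx
+ingressController: k8s.io/ingress-nginx
+ingressHost: https://my-new-ingress-host.example.com
+```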
+
+
+#### Restore a Hybrid Runtime
+Reinstall the Hybrid Runtime from the existing installation repository to restore it to the same or a different cluster.
+
+**Before you begin**
+
+* Have the following information handy:
+ > All values must be identical to those of the Runtime to be restored.
+ * Runtime name
+ * Repository URL
+ * Codefresh context
+ * Kube context: Required if you are restoring to the same cluster
+
+**How to**
+
+1. Run:
+ `cf runtime install --from-repo`
+1. Provide the relevant values when prompted.
+1. If you are restoring the Runtime to a different cluster, verify the ingress resource configuration for `app-proxy`, `workflows`, and `default-git-source`.
+ If the health status remains `Progressing`, do the following:
+
+ * In the Runtime installation repo, check if the `ingress.yaml` files for the `app-proxy` and `workflows` are configured with the correct `host` and `ingressClassName`:
+
+ `apps/app-proxy/overlays/<runtime-name>/ingress.yaml`
+ `apps/workflows/overlays/<runtime-name>/ingress.yaml`
+
+ * In the Git Source repository, check the `host` and `ingressClassName` in `cdp-default-git-source.ingress.yaml`:
+
+ `resources_<runtime-name>/cdp-default-git-source.ingress.yaml`
+
+ See the [example](#ingress-example) below.
+
+{:start="4"}
+1. If you have managed clusters registered to the hybrid runtime you are restoring, reconnect them.
+ Run the command and follow the instructions in the wizard:
+ `cf cluster add`
+
+1. Verify that you have a registered Git integration:
+ `cf integration git list --runtime <runtime-name>`
+
+1. If needed, create a new Git integration:
+ `cf integration git add default --runtime <runtime-name> --provider github --api-url https://api.github.com`
+
+
+#### Ingress example
+This is an example of the `ingress.yaml` for `workflows`.
+
+ ```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ ingress.kubernetes.io/protocol: https
+ ingress.kubernetes.io/rewrite-target: /$2
+ nginx.ingress.kubernetes.io/backend-protocol: https
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
+ creationTimestamp: null
+ name: runtime-name-workflows-ingress
+ namespace: runtime-name
+spec:
+ ingressClassName: nginx
+ rules:
+ - host: your-ingress-host.com
+ http:
+ paths:
+ - backend:
+ service:
+ name: argo-server
+ port:
+ number: 2746
+ path: /workflows(/|$)(.*)
+ pathType: ImplementationSpecific
+status:
+ loadBalancer: {}
+```
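+
+One way to confirm the applied `host` and `ingressClassName` after restoring, assuming `kubectl` points at the target cluster and using the illustrative names from the example above:
+
+`kubectl -n runtime-name get ingress runtime-name-workflows-ingress -o jsonpath='{.spec.ingressClassName} {.spec.rules[0].host}'`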
+
+
+### (Hybrid GitOps) Configure browser to allow insecure Runtimes
+
+If at least one of your Hybrid Runtimes was installed in insecure mode (without an SSL certificate for the ingress controller from a CA), the UI alerts you that _At least one runtime was installed in insecure mode_.
+{% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-insecure-alert.png"
+ url="/images/runtime/runtime-insecure-alert.png"
+ alt="Insecure runtime installation alert"
+ caption="Insecure runtime installation alert"
+ max-width="100%"
+%}
+
+All you need to do is configure the browser to trust the URL and receive content.
+
+1. Select **View Runtimes** to the right of the alert.
+ You are taken to the Runtimes page, where you can see insecure Runtimes tagged as **Allow Insecure**.
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/runtime-insecure-steps.png"
+ url="/images/runtime/runtime-insecure-steps.png"
+ alt="Insecure runtimes in Runtime page"
+ caption="Insecure runtimes in Runtime page"
+ max-width="40%"
+%}
+{:start="2"}
+1. For _every_ insecure Runtime, select **Allow Insecure**, and when the browser prompts you to allow access, do as relevant:
+
+* Chrome: Click **Advanced** and then **Proceed to site**.
+* Firefox: Click **Advanced** and then **Accept the risk and continue**.
+* Safari: Click **Show Certificate**, and then select **Always allow content from site**.
+* Edge: Click **Advanced**, and then select **Continue to site (unsafe)**.
+
+### (Hybrid GitOps) View notifications in Activity Log
+
+The Activity Log is a quick way to monitor notifications for Runtime events such as upgrades. A pull-down panel in the Codefresh toolbar, the Activity Log shows ongoing, success, and error notifications, sorted by date, starting with today's date.
+
+1. In the Codefresh UI, on the top-right of the toolbar, select **Activity Log**.
+1. To see notifications for provisioned Runtimes, filter by **Runtime**.
+
+ {% include image.html
+ lightbox="true"
+ file="/images/runtime/runtime-activity-log.png"
+ url="/images/runtime/runtime-activity-log.png"
+ alt="Activity Log filtered by Runtime events"
+ caption="Activity Log filtered by Runtime events"
+ max-width="30%"
+ %}
+
+{:start="3"}
+
+1. To see more information on an error, select the **+** sign.
+
+### (Hybrid GitOps) Troubleshoot health and sync errors for Runtimes
+An error icon with the Runtime name in red indicates either health or sync errors.
+
+**Health errors**
+Health errors are generated by Argo CD and by Codefresh for Runtime components.
+
+**Sync errors**
+Runtimes with sync errors display an **Out of sync** status in the Sync Status column. Sync errors indicate discrepancies between the desired and actual state of a Runtime component, or of one of the Git Sources associated with the Runtime.
+
+**View errors**
+For both views, select the Runtime, and then select **Errors Detected**.
+Here is an example of health errors for a Runtime.
+
+ {% include image.html
+ lightbox="true"
+ file="/images/runtime/runtime-health-sync-errors.png"
+ url="/images/runtime/runtime-health-sync-errors.png"
+ alt="Health errors for runtime example"
+ caption="Health errors for runtime example"
+ max-width="30%"
+ %}
+
+
+### Related articles
+[Add Git Sources to GitOps Runtimes]({{site.baseurl}}/docs/installation/git-sources/)
+[Add external clusters to GitOps Runtimes]({{site.baseurl}}/docs/installation/managed-cluster/)
+[Shared configuration repo for GitOps Runtimes]({{site.baseurl}}/docs/reference/shared-configuration)
+
+
diff --git a/_docs/installation/runtime-architecture.md b/_docs/installation/runtime-architecture.md
new file mode 100644
index 00000000..f31a7415
--- /dev/null
+++ b/_docs/installation/runtime-architecture.md
@@ -0,0 +1,240 @@
+---
+title: "Runtime architectures"
+description: ""
+group: installation
+toc: true
+---
+
+Overview TBD
+
+## Codefresh CI/CD architecture
+
+The most important components are the following:
+
+**Codefresh VPC:** All internal Codefresh services run in the VPC (analyzed in the next section). Codefresh uses Mongo and PostgreSQL to store user and authentication information.
+
+**Pipeline execution environment**: The Codefresh engine component is responsible for taking pipeline definitions and running them in managed Kubernetes clusters by automatically launching the Docker containers that each pipeline needs for its steps.
+
+**External actors**. Codefresh offers a [public API]({{site.baseurl}}/docs/integrations/ci-integrations/codefresh-api/) that is consumed both by the Web user interface and the [Codefresh CLI](https://codefresh-io.github.io/cli/){:target="\_blank"}. The API is also available for any custom integration with external tools or services.
+
+### CI/CD topology
+
+If we zoom into Codefresh Services for CI/CD, we will see the following:
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/installation/topology-new.png"
+ url="/images/administration/installation/topology-new.png"
+ alt="Topology diagram"
+ caption="Topology diagram (click to enlarge)"
+ max-width="100%"
+ %}
+
+### CI/CD core components
+
+{: .table .table-bordered .table-hover}
+|Category | Component | Function |
+| -------------- | ----------| ----------|
+| Core | **pipeline-manager**| Manages all CRUD operations for CI pipelines.|
+| | **cfsign** | Signs server TLS certificates for docker daemons, and generates client TLS certificates for hybrid pipelines. |
+| | **cf-api** | Central back-end component that functions as an API gateway for other services, and handles authentication/authorization. |
+| | **context-manager**| Manages the authentications/configurations used by Codefresh CI/CD and by the Codefresh engine. |
+| | **runtime-environment-manager**| Manages the different runtime environments for CI pipelines. The runtime environment for CI/CD SaaS is fully managed by Codefresh. For CI/CD Hybrid, customers can add their own runtime environments using private Kubernetes clusters. |
+| Trigger | **hermes**| Controls CI pipeline trigger management. See [triggers]({{site.baseurl}}/docs/pipelines/triggers/). |
+| | **nomios**| Enables triggers from Docker Hub when a new image/tag is pushed. See [Triggers from Docker Hub]({{site.baseurl}}/docs/pipelines/triggers/dockerhub-triggers/). |
+| | **cronus**| Enables defining Cron triggers for CI pipelines. See [Cron triggers]({{site.baseurl}}/docs/pipelines/triggers/cron-triggers/).|
+| Log | **cf-broadcaster**| Stores build logs from CI pipelines. The UI and CLI stream logs by accessing the **cf-broadcaster** through a web socket. |
+| Kubernetes | **cluster-providers** | Provides an interface to define cluster contexts to connect Kubernetes clusters in CI/CD installation environments. |
+| | **helm-repo-manager** | Manages the Helm charts for CI/CD installation environments through the Helm repository admin API and ChartMuseum proxy. See [Helm charts in Codefresh]({{site.baseurl}}/docs/deployments/helm/managed-helm-repository/). |
+| | **k8s-monitor** | The agent installed on every Kubernetes cluster, providing information for the Kubernetes dashboards. See [Kubernetes dashboards]({{site.baseurl}}/docs/deployments/kubernetes/manage-kubernetes/). |
+| |**charts-manager** | Models the Helm chart view in Codefresh. See [Helm chart view]({{site.baseurl}}/docs/deployments/helm/helm-releases-management/). |
+| | **kube-integration** | Provides an interface to retrieve required information from a Kubernetes cluster, can be run either as an http server or an NPM module. |
+| | **tasker-kubernetes** | Provides cache storage for Kubernetes dashboards. See [Kubernetes dashboards]({{site.baseurl}}/docs/deployments/kubernetes/manage-kubernetes/). |
+
+
+## Codefresh GitOps Platform architecture
+
+The diagram shows a high-level view of the Codefresh GitOps installation environment and its core components: the Codefresh Control Plane, the Codefresh Runtime, and the Codefresh Clients.
+
+{% include
+image.html
+lightbox="true"
+file="/images/getting-started/architecture/arch-codefresh-simple.png"
+url="/images/getting-started/architecture/arch-codefresh-simple.png"
+alt="Codefresh GitOps Platform architecture"
+caption="Codefresh GitOps Platform architecture"
+max-width="100%"
+%}
+
+
+### Codefresh GitOps Control Plane
+The Codefresh Control Plane is the SaaS component in the platform. External to the enterprise firewall, it does not have direct communication with the Codefresh Runtime, Codefresh Clients, or the customer's organizational systems. The Codefresh Runtime and the Codefresh Clients communicate with the Codefresh Control Plane to retrieve the required information.
+
+
+
+### Codefresh GitOps Runtime
+The Codefresh Runtime is installed on a Kubernetes cluster, and houses the enterprise distribution of the Codefresh Application Proxy and the Argo Project.
+Depending on the type of GitOps installation, the Codefresh Runtime is installed either in the Codefresh platform (Hosted GitOps), or in the customer environment (Hybrid GitOps). Read more in [Codefresh GitOps Runtime architecture](#codefresh-gitops-runtime-architecture).
+
+
+
+### Codefresh GitOps Clients
+
+Codefresh Clients include the Codefresh UI and the Codefresh CLI.
+The Codefresh UI provides a unified, enterprise-wide view of deployments (runtimes and clusters), and CI/CD operations (Delivery Pipelines, workflows, and deployments) in the same location.
+The Codefresh CLI includes commands to install hybrid runtimes, add external clusters, and manage runtimes and clusters.
+
+### Codefresh GitOps Runtime architecture
+The sections that follow show detailed views of the GitOps Runtime architecture for the different installation options, and descriptions of the GitOps Runtime components.
+
+* [Hosted GitOps runtime architecture](#hosted-gitops-runtime-architecture)
+ For Hosted GitOps, the GitOps Runtime is installed on a _Codefresh-managed cluster_ in the Codefresh platform.
+* Hybrid GitOps runtime architecture:
+ For Hybrid GitOps, the GitOps Runtime is installed on a _customer-managed cluster_ in the customer environment. The Hybrid GitOps Runtime can be tunnel- or ingress-based:
+ * [Tunnel-based](#tunnel-based-hybrid-gitops-runtime-architecture)
+ * [Ingress-based](#ingress-based-hybrid-gitops-runtime-architecture)
+* GitOps Runtime components
+ * [Application Proxy](#application-proxy)
+ * [Argo Project](#argo-project)
+ * [Request Routing Service](#request-routing-service)
+ * [Tunnel Server](#tunnel-server)
+ * [Tunnel Client](#tunnel-client)
+
+
+#### Hosted GitOps runtime architecture
+In the hosted environment, the Codefresh Runtime is installed on a K8s cluster managed by Codefresh.
+
+{% include
+ image.html
+ lightbox="true"
+ file="/images/getting-started/architecture/arch-hosted.png"
+ url="/images/getting-started/architecture/arch-hosted.png"
+ alt="Hosted runtime architecture"
+ caption="Hosted runtime architecture"
+ max-width="100%"
+%}
+
+#### Tunnel-based Hybrid GitOps runtime architecture
+Tunnel-based Hybrid GitOps runtimes use tunneling instead of ingress controllers to control communication between the GitOps Runtime in the customer cluster and the Codefresh GitOps Platform. Tunnel-based runtimes are optimal when the cluster with the GitOps Runtime is not exposed to the internet.
+
+{% include
+ image.html
+ lightbox="true"
+ file="/images/getting-started/architecture/arch-hybrid-ingressless.png"
+ url="/images/getting-started/architecture/arch-hybrid-ingressless.png"
+ alt="Tunnel-based hybrid runtime architecture"
+ caption="Tunnel-based hybrid runtime architecture"
+ max-width="100%"
+%}
+
+
+#### Ingress-based Hybrid GitOps runtime architecture
+Ingress-based runtimes use ingress controllers to control communication between the GitOps Runtime in the customer cluster and the Codefresh GitOps Platform. Ingress-based runtimes are optimal when the cluster with the GitOps Runtime is exposed to the internet.
+
+
+
+{% include
+ image.html
+ lightbox="true"
+ file="/images/getting-started/architecture/arch-hybrid-ingress.png"
+ url="/images/getting-started/architecture/arch-hybrid-ingress.png"
+ alt="Ingress-based hybrid runtime architecture"
+ caption="Ingress-based hybrid runtime architecture"
+ max-width="100%"
+%}
+
+
+#### Application Proxy
+The GitOps Application Proxy (App-Proxy) functions as the Codefresh agent, and is deployed as a service in the GitOps Runtime.
+
+For tunnel-based Hybrid GitOps Runtimes, the Tunnel Client forwards incoming traffic from the Tunnel Server to the GitOps App-Proxy through the Request Routing Service.
+For ingress-based Hybrid GitOps Runtimes, the App-Proxy is the single point of contact between the GitOps Runtime and the GitOps Clients, the GitOps Platform, and any organizational systems in the customer environment.
+
+
+The GitOps App-Proxy:
+* Accepts and serves requests from GitOps Clients, via either the UI or the CLI
+* Retrieves a list of Git repositories for visualization in the Client interfaces
+* Retrieves permissions from the GitOps Control Plane to authenticate and authorize users for the required operations
+* Implements commits for GitOps-controlled entities, such as Delivery Pipelines and other CI resources
+* Implements state-change operations for non-GitOps-controlled entities, such as terminating Argo Workflows
+
+{::nomarkdown}
+
+{:/}
+
+#### Argo Project
+
+The Argo Project includes:
+* Argo CD for declarative continuous deployment
+* Argo Rollouts for progressive delivery
+* Argo Workflows as the workflow engine
+* Argo Events as the event-driven automation framework
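+
+For illustration, below is a minimal Argo CD `Application` manifest of the kind the Runtime's Argo CD instance reconciles. This is a generic upstream example, not a Codefresh-generated resource; the repository URL, path, and namespaces are placeholders:
+
+{% highlight yaml %}
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: my-app                   # placeholder application name
+  namespace: argocd              # namespace where Argo CD runs
+spec:
+  project: default
+  source:
+    repoURL: https://github.com/my-org/my-app.git  # Git repo watched by Argo CD
+    targetRevision: main
+    path: k8s                    # directory containing the Kubernetes manifests
+  destination:
+    server: https://kubernetes.default.svc         # deploy to the local cluster
+    namespace: my-app
+  syncPolicy:
+    automated:
+      prune: true                # delete resources removed from Git
+      selfHeal: true             # revert out-of-band changes to the live state
+{% endhighlight %}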
+
+
+{::nomarkdown}
+
+{:/}
+
+#### Request Routing Service
+The Request Routing Service is installed on the same cluster as the GitOps Runtime in the customer environment.
+It receives requests from the Tunnel Client (tunnel-based) or the ingress controller (ingress-based), forwards request URLs to the Application Proxy, and forwards webhooks directly to the Event Sources.
+
+>Important:
+ The Request Routing Service is available from runtime version 0.0.543 and higher.
+ Older runtime versions are not affected as there is complete backward compatibility, and the ingress controller continues to route incoming requests.
+
+#### Tunnel Server
+Applies only to _tunnel-based_ Hybrid GitOps Runtimes.
+The Codefresh Tunnel Server is installed in the Codefresh platform. It communicates with the enterprise cluster located behind a NAT or firewall.
+
+The Tunnel Server:
+* Forwards traffic from Codefresh Clients to the client (customer) cluster.
+* Manages the lifecycle of the Tunnel Client.
+* Authenticates requests from the Tunnel Client to open tunneling connections.
+
+{::nomarkdown}
+
+{:/}
+
+#### Tunnel Client
+Applies only to _tunnel-based_ Hybrid GitOps Runtimes.
+
+Installed on the same cluster as the Hybrid GitOps Runtime, the Tunnel Client establishes the tunneling connection to the Tunnel Server via the WebSocket Secure (WSS) protocol.
+Each Hybrid GitOps Runtime has a single Tunnel Client.
+
+The Tunnel Client:
+* Initiates the connection with the Tunnel Server.
+* Forwards incoming traffic from the Tunnel Server through the Request Routing Service to the App-Proxy and other services.
+
+{::nomarkdown}
+
+{:/}
+
+
+#### Customer environment
+The customer environment that communicates with the GitOps Runtime and the GitOps Platform generally includes:
+* Ingress controller for ingress-based hybrid runtimes
+ The ingress controller is configured on the same Kubernetes cluster as the GitOps Runtime, and implements the ingress traffic rules for the GitOps Runtime.
+ See [Ingress controller requirements]({{site.baseurl}}/docs/installation/requirements/#ingress-controller).
+* Managed clusters
+ Managed clusters are external clusters registered to provisioned Hosted or Hybrid GitOps runtimes for application deployment.
+ Hosted GitOps requires you to connect at least one external K8s cluster as part of setting up the Hosted GitOps environment.
+ Hybrid GitOps allows you to add external clusters after provisioning the runtimes.
+ See [Add external clusters to runtimes]({{site.baseurl}}/docs/installation/managed-cluster/).
+* Organizational systems
+ Organizational Systems include the customer's tracking, monitoring, notification, container registries, Git providers, and other systems. They can be entirely on-premises or in the public cloud.
+ Either the ingress controller (ingress-based hybrid environments) or the Tunnel Client (tunnel-based hybrid environments) forwards incoming events to the Codefresh Application Proxy.
+
+## Related articles
+[Codefresh pricing](https://codefresh.io/pricing/)
+[Codefresh features](https://codefresh.io/features/)
+
\ No newline at end of file
diff --git a/_docs/installation/upgrade-gitops-cli.md b/_docs/installation/upgrade-gitops-cli.md
new file mode 100644
index 00000000..30e06096
--- /dev/null
+++ b/_docs/installation/upgrade-gitops-cli.md
@@ -0,0 +1,87 @@
+---
+title: "Download/upgrade Codefresh CLI"
+description: "Have the latest version of the Codefresh CLI for GitOps runtimes"
+group: installation
+toc: true
+---
+
+You need the Codefresh CLI to install Hybrid GitOps Runtimes and to access all the newest features.
+For the initial download, you need to generate an API key and create the API authentication context, which you do from the UI.
+When newer versions are available, the CLI automatically notifies you through a banner. You can use the existing API credentials for the upgrade.
+
+
+## GitOps CLI installation modes
+The table below lists the available modes for installing the Codefresh CLI.
+
+{: .table .table-bordered .table-hover}
+| Install mode | OS | Commands |
+| -------------- | ----------| ----------|
+| `curl` | MacOS-x64 | `curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-darwin-amd64.tar.gz \| tar zx && mv ./cf-darwin-amd64 /usr/local/bin/cf && cf version`|
+| | MacOS-m1 |`curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-darwin-arm64.tar.gz \| tar zx && mv ./cf-darwin-arm64 /usr/local/bin/cf && cf version` |
+| | Linux - X64 |`curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-linux-amd64.tar.gz \| tar zx && mv ./cf-linux-amd64 /usr/local/bin/cf && cf version` |
+| | Linux - ARM | `curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-linux-arm64.tar.gz \| tar zx && mv ./cf-linux-arm64 /usr/local/bin/cf && cf version`|
+| `brew` | N/A| `brew tap codefresh-io/cli && brew install cf2`|
+
+## Install the GitOps CLI
+Install the Codefresh CLI using the option that best suits you: `curl`, `brew`, or standard download.
+If you are not sure which OS to select for `curl`, simply select one, and Codefresh automatically identifies and selects the right OS for CLI installation.
+
+1. Do one of the following:
+ * For first-time installation, go to the Welcome page and select **+ Install Runtime**.
+ * If you have provisioned a GitOps Runtime, in the Codefresh UI, go to [GitOps Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}, and select **+ Add Runtime**.
+1. Install the Codefresh CLI:
+ * Select one of the installation modes.
+ * Generate the API key.
+ * Create the authentication context:
+ `cf config create-context codefresh --api-key <API_KEY>`
+
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/getting-started/quick-start/quick-start-download-cli.png"
+ url="/images/getting-started/quick-start/quick-start-download-cli.png"
+ alt="Download CLI to install runtime"
+ caption="Download CLI to install runtime"
+ max-width="30%"
+ %}
+
+
+{::nomarkdown}
+
+{:/}
+
+
+## Upgrade the GitOps CLI
+
+The Codefresh CLI automatically self-checks its version, and if a newer version is available, displays a banner notifying you of it.
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/cli-upgrade-banner.png"
+ url="/images/runtime/cli-upgrade-banner.png"
+ alt="Upgrade banner for Codefresh CLI"
+ caption="Upgrade banner for Codefresh CLI"
+ max-width="40%"
+ %}
+
+
+You can upgrade to a specific version if required, or download the latest version to an output folder and upgrade at your convenience.
+
+
+* Do any of the following:
+ * To upgrade to the latest version, run:
+ `cf upgrade`
+ * To upgrade to a specific version, even an older version, run:
+ `cf upgrade --version v<version>`
+ where:
+ `<version>` is the version you want to upgrade to.
+ * To download the latest version to an output file, run:
+ `cf upgrade --version v<version> -o <path>`
+ where:
+ * `<path>` is the path to the destination file, for example, `/cli-download`.
+
+## Related articles
+[Hosted GitOps Runtime setup]({{site.baseurl}}/docs/installation/hosted-runtime)
+[Hybrid GitOps Runtime installation]({{site.baseurl}}/docs/installation/hybrid-gitops)
diff --git a/_docs/reference/behind-the-firewall.md b/_docs/reference/behind-the-firewall.md
new file mode 100644
index 00000000..b01ba138
--- /dev/null
+++ b/_docs/reference/behind-the-firewall.md
@@ -0,0 +1,248 @@
+---
+title: "Runner installation behind firewalls"
+description: "Run Codefresh Pipelines in your own secure infrastructure"
+group: installation
+redirect_from:
+ - /docs/enterprise/behind-the-firewall/
+toc: true
+
+---
+
+As described in [installation options]({{site.baseurl}}/docs/installation/installation-options/), Codefresh offers CI/CD and GitOps installation environments, each with its own installation options.
+This article focuses on the CI/CD Hybrid installation option with the Codefresh Runner and its advantages.
+
+## Running Codefresh CI/CD in secure environments
+
+Codefresh CI/CD offers an on-premises installation in which the Codefresh CI/CD platform is installed on the customer's premises. While
+this solution is very effective as far as security is concerned, it places significant overhead on the customer, as all updates
+and improvements made to the platform must also be transferred to the customer premises.
+
+Hybrid CI/CD places a Codefresh Runner within the customer premises, while the UI and management platform stay in the Codefresh SaaS.
+
+Here is the overall architecture:
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/behind-the-firewall/architecture.png"
+ url="/images/administration/behind-the-firewall/architecture.png"
+ alt="Codefresh Hybrid CD/CD behind the firewall"
+ caption="Codefresh Hybrid CD/CD behind the firewall"
+ max-width="100%"
+ %}
+
+This scenario offers several advantages.
+
+Regarding platform maintenance:
+
+ 1. Codefresh is responsible for the heavy lifting of platform maintenance, instead of the customer.
+ 1. Updates to the UI, build engine, integrations, etc., happen automatically, without any customer involvement.
+ 1. Actual builds run in the customer premises under fully controlled conditions.
+ 1. The Codefresh Runner is fully automated. It handles volume claims and build scheduling on its own, within the Kubernetes cluster in which it is installed.
+
+Regarding security of services:
+
+ 1. Pipelines can run in behind-the-firewall clusters with internal services.
+ 1. Pipelines can use integrations (such as Docker registries) that are private and secure.
+ 1. Source code never leaves the customer premises.
+
+Regarding firewall security:
+
+ 1. Uni-directional, outgoing communication between the Codefresh Runner and Codefresh CI/CD Platform. The Runner polls the Codefresh platform for jobs.
+ 1. Codefresh SaaS never connects to the customer network. No ports need to be open in the customer firewall for the runner to work.
+ 1. The Codefresh Runner is fully open source, so its code can be scrutinized by any stakeholder.
+
+
+
+## Using secure services in your CI pipelines
+
+After installing the [Codefresh Runner]({{site.baseurl}}/docs/installation/codefresh-runner/) on your private Kubernetes cluster in your infrastructure, all CI pipelines in the private Kubernetes cluster have access to all other internal services that are network reachable.
+
+You can easily create CI pipelines that:
+
+ * Use databases internal to the company
+ * Run integration tests against services internal to the company
+ * Launch [compositions]({{site.baseurl}}/docs/pipelines/steps/composition/) that communicate with other secure services
+ * Upload and download artifacts from a private artifact repository (e.g., Nexus or Artifactory)
+ * Deploy to any other cluster accessible in the secure network
+ * Create infrastructure such as machines, load balancers, auto-scaling groups etc.
+
+ Any of these CI pipelines will work out of the box without extra configuration. In all cases,
+ all data stays within the private local network and does not exit the firewall.
+
+ >Notice that [long-running compositions]({{site.baseurl}}/docs/pipelines/steps/composition/) (preview test environments) are not yet available via the Codefresh build runner.
+
+
+
+### Checking out code from a private GIT repository
+
+To check out code from your private Git repository, you first need to connect to Codefresh via [Git integrations]({{site.baseurl}}/docs/integrations/git-providers/). However, once you define your Git provider as *on premise*, you also
+need to mark it as *behind the firewall*:
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/behind-the-firewall/behind-the-firewall-toggle.png"
+ url="/images/administration/behind-the-firewall/behind-the-firewall-toggle.png"
+ alt="Behind the firewall toggle"
+ caption="Behind the firewall toggle"
+ max-width="100%"
+ %}
+
+Once you do that, save your provider, and make sure that it has the correct tags. The name you used for the Git provider will also be used in the pipeline. You cannot "test the connection", because
+the Codefresh SaaS doesn't have access to your on-premises Git repository.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/behind-the-firewall/behind-the-firewall-tag.png"
+ url="/images/administration/behind-the-firewall/behind-the-firewall-tag.png"
+ alt="Behind the firewall tags"
+ caption="Behind the firewall tags"
+ max-width="100%"
+ %}
+
+To check out code, use a [clone step]({{site.baseurl}}/docs/pipelines/steps/git-clone/) like any other clone operation.
+The only thing to remember is that the Git URL must be fully qualified. You need to [create the pipeline]({{site.baseurl}}/docs/pipelines/pipelines/#pipeline-creation-modes) on its own from the *Pipelines* section of the left sidebar (instead of first adding a Git repository to Codefresh).
+
+
+
+`YAML`
+{% highlight yaml %}
+{% raw %}
+version: '1.0'
+steps:
+ main_clone:
+ type: git-clone
+ description: Cloning the internal repository
+ repo: https://github-internal.example.com/my-username/my-app # fully qualified URL of the internal Git instance
+ git: my-internal-git-provider # name of the behind-the-firewall Git integration
+ BuildingDockerImage:
+ title: Building Docker Image
+ type: build
+ image_name: my-image
+ tag: '${{CF_BRANCH_TAG_NORMALIZED}}-${{CF_SHORT_REVISION}}'
+ dockerfile: Dockerfile
+{% endraw %}
+{% endhighlight %}
+
+Once you trigger the CI pipeline, the Codefresh builder communicates with your private Git instance and checks out the code.
+
+>Note that currently there is a limitation on the location of the `codefresh.yml` file. Only the [inline mode]({{site.baseurl}}/docs/pipelines/pipelines/#writing-codefresh-yml-in-the-gui) is supported. Soon we will allow the loading of the pipeline from the Git repository itself.
+
+You can also use a [network proxy]({{site.baseurl}}/docs/pipelines/steps/git-clone/#using-git-behind-a-proxy) for the Git clone step.
+
+#### Adding triggers from private GIT repositories
+
+
+In the previous section, we saw how a CI pipeline can check out code from an internal Git repository. We also need to set up a trigger,
+so that every time a commit or any other supported event occurs, the Codefresh CI pipeline is triggered automatically.
+
+If you have installed the [optional app-proxy]({{site.baseurl}}/docs/installation/codefresh-runner/#optional-installation-of-the-app-proxy), you can add a trigger exactly as in the SaaS version of Codefresh, using only the Codefresh UI.
+
+If you haven't installed the app-proxy, then adding a Git trigger is a two-step process:
+
+1. First, set up a webhook endpoint in Codefresh.
+1. Then, create the webhook call on the side of the Git provider.
+
+> To support triggers based on PR (Pull Request) events, it is mandatory to install `app-proxy`.
+
+For the Codefresh side, follow the usual instructions for creating a [basic git trigger]({{site.baseurl}}/docs/configure-ci-cd-pipeline/triggers/git-triggers/).
+
+Once you select your Git provider, you need to manually enter the username and repository for which you wish to trigger builds.
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/behind-the-firewall/enter-repo-details.png"
+ url="/images/administration/behind-the-firewall/enter-repo-details.png"
+ alt="Entering repository details"
+ caption="Entering repository details"
+ max-width="60%"
+ %}
+
+All other details (Git events, branch naming, monorepo pattern, etc.) are the same as for normal SaaS Git providers.
+Once that is done, Codefresh shows you the webhook endpoint along with a secret for triggering this pipeline. Note them down.
+
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/behind-the-firewall/codefresh-webhook.png"
+ url="/images/administration/behind-the-firewall/codefresh-webhook.png"
+ alt="Codefresh webhook details"
+ caption="Codefresh webhook details"
+ max-width="60%"
+ %}
+
+This concludes the setup on the Codefresh side. The final step is to create a webhook call on the side of your Git provider.
+The instructions differ per Git provider:
+
+* [GitHub webhooks](https://developer.github.com/webhooks/)
+* [GitLab webhooks](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html)
+* [Stash webhooks](https://confluence.atlassian.com/bitbucketserver/managing-webhooks-in-bitbucket-server-938025878.html)
+
+In all cases, make sure that the payload is JSON, because this is what Codefresh expects.
+
+* For GitHub, the events monitored should be `Pull requests` and `Pushes`.
+* For GitLab, the events monitored should be `Push events`, `Tag push events`, and `Merge request events`.
+
+After the setup is finished, the Codefresh pipeline is executed every time a Git event happens.
+
+### Accessing an internal docker registry
+
+To access an internal registry, follow the instructions for [adding registries]({{site.baseurl}}/docs/docker-registries/external-docker-registries/). As with Git repositories,
+you need to mark the Docker registry as *Behind the firewall*.
+
+Once that is done, use the [push step]({{site.baseurl}}/docs/codefresh-yaml/steps/push/) as usual, with the name you gave to the registry during the integration setup.
+
+
+`YAML`
+{% highlight yaml %}
+{% raw %}
+version: '1.0'
+steps:
+ gitClone:
+ type: git-clone
+ description: Cloning the internal repository
+ repo: https://github-internal.example.com/my-username/my-app # fully qualified URL of the internal Git instance
+ git: my-internal-git-provider # name of the behind-the-firewall Git integration
+ BuildingDockerImage:
+ title: Building Docker Image
+ type: build
+ image_name: my-image
+ dockerfile: Dockerfile
+ PushingDockerImage:
+ title: Pushing a docker image
+ type: push
+ candidate: '${{BuildingDockerImage}}'
+ tag: '${{CF_BRANCH}}'
+ registry: my-internal-docker-registry # name given to the registry during integration setup
+{% endraw %}
+{% endhighlight %}
+
+
+### Deploying to an internal Kubernetes cluster
+
+To connect a cluster that is behind the firewall, follow the [connecting cluster guide]({{site.baseurl}}/docs/deploy-to-kubernetes/add-kubernetes-cluster/), paying attention to the following two points:
+
+1. Your cluster should be added as a [Custom provider]({{site.baseurl}}/docs/deploy-to-kubernetes/add-kubernetes-cluster/#adding-any-other-cluster-type-not-dependent-on-any-provider)
+1. You need to mark the cluster as internal by using the toggle switch.
+
+
+
+
+{% include image.html
+ lightbox="true"
+ file="/images/administration/behind-the-firewall/cluster-behind-firewall.png"
+ url="/images/administration/behind-the-firewall/cluster-behind-firewall.png"
+ alt="Marking a Kubernetes cluster as internal"
+ caption="Marking a Kubernetes cluster as internal"
+ max-width="60%"
+ %}
+
+The cluster on which the Runner is installed must have network connectivity to the cluster to which you wish to deploy.
+
+>Notice that the service account used in the cluster configuration is completely independent from the privileges granted to the Codefresh build runner. The privileges needed by the runner are only used to launch Codefresh pipelines within your cluster. The Service account used in the "custom provider" setting should have the needed privileges for deployment.
+
+Once your cluster is connected, you can use any of the familiar deployment methods, such as the [dedicated deploy step]({{site.baseurl}}/docs/deploy-to-kubernetes/deployment-options-to-kubernetes/) or [custom kubectl commands]({{site.baseurl}}/docs/deploy-to-kubernetes/custom-kubectl-commands/).
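+
+For instance, here is a minimal, hedged sketch of the dedicated deploy step targeting the internal cluster; the cluster name, namespace, service, and registry names are assumptions that must match your own integrations:
+
+{% highlight yaml %}
+{% raw %}
+deployToInternalCluster:
+  title: Deploying to the internal cluster
+  type: deploy
+  kind: kubernetes
+  cluster: my-internal-cluster   # name given to the cluster in the Codefresh integration
+  namespace: default             # target namespace on the internal cluster
+  service: my-app                # existing Kubernetes deployment to update
+  candidate:
+    image: '${{BuildingDockerImage}}'      # image built earlier in the pipeline
+    registry: my-internal-docker-registry  # internal registry integration name
+{% endraw %}
+{% endhighlight %}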
+
+## Related articles
+[Codefresh installation options]({{site.baseurl}}/docs/installation/installation-options/)
+[Google marketplace integration]({{site.baseurl}}/docs/integrations/ci-integrations/google-marketplace/)
+[Managing your Kubernetes cluster]({{site.baseurl}}/docs/deployments/kubernetes/manage-kubernetes/)
diff --git a/_docs/runtime/download-runtime-logs.md b/_docs/runtime/download-runtime-logs.md
deleted file mode 100644
index ca6cf8ff..00000000
--- a/_docs/runtime/download-runtime-logs.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-title: "View/download runtime logs"
-description: ""
-group: runtime
-toc: true
----
-
-Logs are available for completed runtimes, both for the runtime and for individual runtime components. Download runtime log files for offline viewing and analysis, or view online logs for a runtime component, and download if needed for offline analysis. Online logs support free-text search, search-result navigation, and line-warp for enhanced readability.
-
-Log files include events from the date of the application launch, with the newest events listed first.
-
-
-### Download logs for runtimes
-Download the log file for a runtime. The runtime log is downloaded as a `.tar.gz` file, which contains the individual log files for each runtime component.
-
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
-1. If needed, switch to **List View**, and then select the runtime for which to download logs.
-1. From the list of **Additional Actions**, select **Download All Logs**.
- The log file is downloaded to the Downloads folder or the folder designated for downloads, with the filename, `.tar.gz`. For example, `codefreshv2-production2.tar.gz`.
-
-
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-logs-download-all.png"
- url="/images/runtime/runtime-logs-download-all.png"
- alt="Download logs for selected runtime"
- caption="Download logs for selected runtime"
- max-width="40%"
-%}
-
-
-{:start="4"}
-1. To view the log files of the individual components, unzip the file.
- Here is an example of the folder with the individual logs.
-
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-logs-folder-view.png"
- url="/images/runtime/runtime-logs-folder-view.png"
- alt="Individual log files in folder"
- caption="Individual log files in folder"
- max-width="50%"
-%}
-
-{:start="5"}
-1. Open a log file with the text editor of your choice.
-
-
-### View/download logs for runtime components
-View online logs for any runtime component, and if needed, download the log file for offline viewing and analysis.
-
-Online logs show up to 1000 of the most recent events (lines), updated in real time. Downloaded logs include all the events from the application launch to the date and time of download.
-
-1. In the Codefresh UI, go to [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
-1. If needed, switch to **List View**, and then select the runtime.
-1. Select the runtime component and then select **View Logs**.
-
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-logs-view-component.png"
- url="/images/runtime/runtime-logs-view-component.png"
- alt="View log option for individual runtime component"
- caption="View log option for individual runtime component"
- max-width="40%"
-%}
-
-
-{:start="4"}
-1. Do the following:
- * Search by free-text for any string, and click the next and previous buttons to navigate between the search results.
- * To switch on line-wrap for readability, click **Wrap**.
-
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-logs-screen-view.png"
- url="/images/runtime/runtime-logs-screen-view.png"
- alt="Runtime component log example"
- caption="Runtime component log example"
- max-width="50%"
-%}
-
-{:start="5"}
-1. To download the log, click **Download**.
- The file is downloaded as `.log`.
-
-### Related information
-[Manage Git Sources]({{site.baseurl}}/docs/runtime/git-sources/#viewdownload-logs-for-a-git-source)
\ No newline at end of file
diff --git a/_docs/runtime/installation-options.md b/_docs/runtime/installation-options.md
deleted file mode 100644
index e75e2058..00000000
--- a/_docs/runtime/installation-options.md
+++ /dev/null
@@ -1,90 +0,0 @@
----
-title: "Installation environments"
-description: ""
-group: runtime
-toc: true
----
-
-Codefresh supports two installation environments:
-
-
-* **Hosted** environments (Beta), with Argo CD installed in the Codefresh cluster.
- The runtime is installed and provisioned in a Codefresh cluster, and managed by Codefresh.
- Hosted enviroments are full-cloud environments, where all updates and improvements are managed by Codefresh, with zero-maintenance overhead for you as the customer. Currently, you can add one hosted runtime per account.
- For the architecture illustration, see [Hosted runtime architecture]({{site.baseurl}}/docs/getting-started/architecture/#hosted-runtime-architecture).
-
-
-{% include
- image.html
- lightbox="true"
- file="/images/runtime/intro-hosted-hosted-initial-view.png"
- url="/images/runtime/intro-hosted-hosted-initial-view.png"
- alt="Hosted runtime setup"
- caption="Hosted runtime setup"
- max-width="80%"
-%}
-
- For more information on how to set up the hosted environment, including provisioning hosted runtimes, see [Set up a hosted (Hosted GitOps) environment]({{site.baseurl}}/docs/runtime/hosted-runtime/).
-
-* **Hybrid** environments, with Argo CD installed in the customer's cluster.
- The runtime is installed in the customer's cluster, and managed by the customer.
- Hybrid environments are optimal for organizations that want to manage CI/CD operations within their premises, or have other security constraints. Hybrid installations strike the perfect balance between security, flexibility, and ease of use. Codefresh maintains and manages most aspects of the platform, apart from installing and upgrading runtimes which are managed by the customer.
-
-
-{% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-list-view.png"
- url="/images/runtime/runtime-list-view.png"
- alt="Runtime List View"
- caption="Runtime List View"
- max-width="70%"
-%}
-
- For more information on hybrid environments, see [Hybrid runtime requirements]({{site.baseurl}}/docs/runtime/requirements/) and [Installling hybrid runtimes]({{site.baseurl}}/docs/runtime/installation/).
-
-
-
-#### Git provider repos
-Codefresh Runtime creates three repositories in your organization's Git provider account:
-
-* Codefresh runtime installation repository
-* Codefresh Git Sources
-* Codefresh shared configuration repository
-
-
-
-### Hosted vs.Hybrid environments
-
-The table below highlights the main differences between hosted and hybrid environments.
-
-{: .table .table-bordered .table-hover}
-| Functionality |Feature | Hosted | Hybrid |
-| -------------- | -------------- |--------------- | --------------- |
-| Runtime | Installation | Provisioned by Codefresh | Provisioned by customer |
-| | Runtime cluster | Managed by Codefresh | Managed by customer |
-| | Number per account | One runtime | Multiple runtimes |
-| | External cluster | Managed by customer | Managed by customer |
-| | Upgrade | Managed by Codefresh | Managed by customer |
-| | Uninstall | Managed by customer | Managed by customer |
-| Argo CD | | Codefresh cluster | Customer cluster |
-| CI Ops | Delivery Pipelines |Not supported | Supported |
-| |Workflows | Not supported | Supported |
-| |Workflow Templates | Not supported | Supported |
-| CD Ops |Applications | Supported | Supported |
-| |Image enrichment | Supported | Supported |
-| | Rollouts | Supported | Supported |
-|Integrations | | Supported | Supported |
-|Dashboards |Home Analytics | Hosted runtime and deployments|Runtimes, deployments, Delivery Pipelines |
-| |DORA metrics | Supported |Supported |
-| |Applications | Supported |Supported |
-
-### Related articles
-[Architecture]({{site.baseurl}}/docs/getting-started/architecture/)
-[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
-[Shared configuration repository]({{site.baseurl}}/docs/reference/shared-configuration)
-
diff --git a/_docs/runtime/installation.md b/_docs/runtime/installation.md
deleted file mode 100644
index 44012210..00000000
--- a/_docs/runtime/installation.md
+++ /dev/null
@@ -1,535 +0,0 @@
----
-title: "Install hybrid runtimes"
-description: ""
-group: runtime
-toc: true
----
-
-If you have a hybrid environment, you can provision one or more hybrid runtimes in your Codefresh account.
-
-> If you have Hosted GitOps, to provision a hosted runtime, see [Provision a hosted runtime]({{site.baseurl}}/docs/runtime/hosted-runtime/#1-provision-hosted-runtime) in [Set up a hosted (Hosted GitOps) environment]({{site.baseurl}}/docs/runtime/hosted-runtime/).
-
-**Git providers and runtimes**
-Your Codefresh account is always linked to a specific Git provider. This is the Git provider you select on installing the first runtime, either hybrid or hosted, in your Codefresh account. All the hybrid runtimes you install in the same account use the same Git provider.
-If Bitbucker Server is your Git provider, you must also select the specific server instance to associate with the runtime.
-
->To change the Git provider for your Codefresh account after installation, contact Codefresh support.
-
-
-**Hybrid runtime**
- The hybrid runtime comprises Argo CD components and Codefresh-specific components. The Argo CD components are derived from a fork of the Argo ecosystem, and do not correspond to the open-source versions available.
-
-There are two parts to installing a hybrid runtime:
-
-1. Installing the Codefresh CLI
-2. Installing the hybrid runtime from the CLI, either through the CLI wizard or via silent installation through the installation flags.
- The hybrid runtime is installed in a specific namespace on your cluster. You can install more runtimes on different clusters in your deployment.
- Every hybrid runtime installation makes commits to three Git repos:
- * Runtime install repo: The installation repo that manages the hybrid runtime itself with Argo CD. If the repo URL does not exist, it is automatically created during runtime installation.
- * Git Source repo: Created automatically during runtime installation. The repo where you store manifests for pipelines and applications. See [Git Sources]({{site.baseurl}}/docs/runtime/git-sources).
- * Shared configuration repo: Created for the first runtime in a user account. The repo stores configuration manifests for account-level resources and is shared with other runtimes in the same account. See [Shared configuration repository]({{site.baseurl}}/docs/reference/shared-configuration).
-
-
-See also [Codefresh architecture]({{site.baseurl}}/docs/getting-started/architecture).
-
-{::nomarkdown}
-
-{:/}
-
-### Hybrid runtime installation flags
-This section describes the required and optional flags to install a hybrid runtime.
-For documentation purposes, the flags are grouped into:
-* Runtime flags, relating to runtime, cluster, and namespace requirements
-* Ingress controller flags, relating to ingress controller requirements
-* Git provider flags
-* Codefresh resource flags
-
-{::nomarkdown}
-
-{:/}
-
-#### Runtime flags
-
-**Runtime name**
-Required.
-The runtime name must start with a lower-case character, and can include up to 62 lower-case characters and numbers.
-* CLI wizard: Add when prompted.
-* Silent install: Add the `--runtime` flag and define the runtime name.
-
-**Namespace resource labels**
-Optional.
-The label of the namespace resource to which you are installing the hybrid runtime. Labels are required to identify the networks that need access during installation, as is the case when using services meshes such as Istio for example.
-
-* CLI wizard and Silent install: Add the `--namespace-labels` flag, and define the labels in `key=value` format. Separate multiple labels with `commas`.
-
-**Kube context**
-Required.
-The cluster defined as the default for `kubectl`. If you have more than one Kube context, the current context is selected by default.
-
-* CLI wizard: Select the Kube context from the list displayed.
-* Silent install: Explicitly specify the Kube context with the `--context` flag.
-
-**Shared configuration repository**
-The Git repository per runtime account with shared configuration manifests.
-* CLI wizard and Silent install: Add the `--shared-config-repo` flag and define the path to the shared repo.
-
-{::nomarkdown}
-
-{:/}
-
-#### Ingress-less flags
-These flags are required to install the runtime without an ingress controller.
-
-**Access mode**
-Required.
-
-The access mode for ingress-less runtimes, the tunnel mode.
-
-
-* CLI wizard and Silent install: Add the flag, `--access-mode`, and define `tunnel` as the value.
-
-
-**IP allowlist**
-
-Optional.
-
-The allowed list of IPs from which to forward requests to the internal customer cluster for ingress-less runtime installations. The allowlist can include IPv4 and IPv6 addresses, with/without subnet and subnet masks. Multiple IPs must be separated by commas.
-
-When omitted, all incoming requests are authenticated regardless of the IPs from which they originated.
-
-* CLI wizard and Silent install: Add the `--ips-allow-list` flag, followed by the IP address, or list of comma-separated IPs to define more than one. For example, `--ips-allow-list 77.126.94.70/16,192.168.0.0`
-
-{::nomarkdown}
-
-{:/}
-
-#### Ingress controller flags
-
-
-**Skip ingress**
-Required, if you are using an unsupported ingress controller.
-For unsupported ingress controllers, bypass installing ingress resources with the `--skip-ingress` flag.
-In this case, after completing the installation, manually configure the cluster's routing service, and create and register Git integrations. See the last step in [Install the hybrid runtime](#install-the-hybrid-runtime).
-
-**Ingress class**
-Required.
-
-* CLI wizard: Select the ingress class for runtime installation from the list displayed.
-* Silent install: Explicitly specify the ingress class through the `--ingress-class` flag. Otherwise, runtime installation fails.
-
-**Ingress host**
-Required.
-The IP address or host name of the ingress controller component.
-
-* CLI wizard: Automatically selects and displays the host, either from the cluster or the ingress controller associated with the **Ingress class**.
-* Silent install: Add the `--ingress-host` flag. If a value is not provided, takes the host from the ingress controller associated with the **Ingress class**.
- > Important: For AWS ALB, the ingress host is created post-installation. However, when prompted, add the domain name you will create in `Route 53` as the ingress host.
-
-**Insecure ingress hosts**
-TLS certificates for the ingress host:
-If the ingress host does not have a valid TLS certificate, you can continue with the installation in insecure mode, which disables certificate validation.
-
-* CLI wizard: Automatically detects and prompts you to confirm continuing the installation in insecure mode.
-* Silent install: To continue with the installation in insecure mode, add the `--insecure-ingress-host` flag.
-
-**Internal ingress host**
-Optional.
-Enforce separation between internal (app-proxy) and external (webhook) communication by adding an internal ingress host for the app-proxy service in the internal network.
-For both CLI wizard and Silent install:
-
-* For new runtime installations, add the `--internal-ingress-host` flag pointing to the ingress host for `app-proxy`.
-* For existing installations, commit changes to the installation repository by modifying the `app-proxy ingress` and `.yaml`
- See [(Optional) Internal ingress host configuration for existing hybrid runtimes](#optional-internal-ingress-host-configuration-for-existing-hybrid-runtimes).
-
-{::nomarkdown}
-
-{:/}
-
-
-
-#### Git provider and repo flags
-The Git provider defined for the runtime.
-
->Because Codefresh creates a [shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration) for the runtimes in your account, the Git provider defined for the first runtime you install in your account is used for all the other runtimes in the same account.
-
-You can define any of the following Git providers:
-* GitHub:
- * [GitHub](#github) (the default Git provider)
- * [GitHub Enterprise](#github-enterprise)
-* GitLab:
- * [GitLab Cloud](#gitlab-cloud)
- * [GitLab Server](#gitlab-server)
-* Bitbucket:
- * [Bitbucket Cloud](#bitbucket-cloud)
- * [Bitbucket Server](#bitbucket-server)
-
-{::nomarkdown}
-
-{:/}
-
-
-
-##### GitHub
-GitHub is the default Git provider for hybrid runtimes. Being the default provider, for both the CLI wizard and Silent install, you need to provide only the repository URL and the Git runtime token.
-
-> For the required scopes, see [GitHub and GitHub Enterprise runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes).
-
-`--repo --git-token `
-
-where:
-* `--repo ` (required), is the `HTTPS` clone URL of the Git repository for the runtime installation, including the `.git` suffix. Copy the clone URL from your GitHub website (see [Cloning with HTTPS URLs](https://docs.github.com/en/get-started/getting-started-with-git/about-remote-repositories#cloning-with-https-urls){:target="\_blank"}).
- If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh creates the repository during runtime installation.
-
- Repo URL format:
- `https://github.com//reponame>.git[/subdirectory][?ref=branch]`
- where:
- * `/` is your username or organization name, followed by the name of the repo, identical to the HTTPS clone URL. For example, `https://github.com/nr-codefresh/codefresh.io.git`.
- * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the runtime is installed in the root of the repository. For example, `/runtimes/defs`.
- * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the runtime is installed in the default branch. For example, `codefresh-prod`.
-
- Example:
- `https://github.com/nr-codefresh/codefresh.io.git/runtimes/defs?ref=codefresh-prod`
-* `--git-token ` (required), is the Git token authenticating access to the runtime installation repository (see [GitHub runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes)).
-
-{::nomarkdown}
-
-{:/}
-
-##### GitHub Enterprise
-
-> For the required scopes, see [GitHub and GitHub Enterprise runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes).
-
-
-`--enable-git-providers --provider github --repo --git-token `
-
-where:
-* `--enable-git-providers` (required), indicates that you are not using the default Git provider for the runtime.
-* `--provider github` (required), defines GitHub Enterprise as the Git provider for the runtime and the account.
-* `--repo ` (required), is the `HTTPS` clone URL of the Git repository for the runtime installation, including the `.git` suffix. Copy the clone URL for HTTPS from your GitHub Enterprise website (see [Cloning with HTTPS URLs](https://docs.github.com/en/get-started/getting-started-with-git/about-remote-repositories#cloning-with-https-urls){:target="\_blank"}).
- If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh creates the repository during runtime installation.
- Repo URL format:
-
- `https://ghe-trial.devops.cf-cd.com//.git[/subdirectory][?ref=branch]`
- where:
- * `/` is your username or organization name, followed by the name of the repo. For example, `codefresh-io/codefresh.io.git`.
- * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the runtime is installed in the root of the repository. For example, `/runtimes/defs`.
- * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the runtime is installed in the default branch. For example, `codefresh-prod`.
-
- Example:
- `https://ghe-trial.devops.cf-cd.com/codefresh-io/codefresh.io.git/runtimes/defs?ref=codefresh-prod`
-* `--git-token ` (required), is the Git token authenticating access to the runtime installation repository (see [GitHub runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#github-and-github-enterprise-runtime-token-scopes)).
-
-
-{::nomarkdown}
-
-{:/}
-
-##### GitLab Cloud
-> For the required scopes, see [GitLab Cloud and GitLab Server runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes).
-
-
-`--enable-git-providers --provider gitlab --repo --git-token `
-
-where:
-* `--enable-git-providers`(required), indicates that you are not using the default Git provider for the runtime.
-* `--provider gitlab` (required), defines GitLab Cloud as the Git provider for the runtime and the account.
-* `--repo ` (required), is the `HTTPS` clone URL of the Git project for the runtime installation, including the `.git` suffix. Copy the clone URL for HTTPS from your GitLab website.
- If the repo doesn't exist, copy an existing clone URL and change the name of the repo. Codefresh creates the repository during runtime installation.
-
- > Important: You must create the group with access to the project prior to the installation.
-
- Repo URL format:
-
- `https://gitlab.com//.git[/subdirectory][?ref=branch]`
- where:
- * `` is either your username, or if your project is within a group, the front-slash separated path to the project. For example, `nr-codefresh` (owner), or `parent-group/child-group` (group hierarchy)
- * `` is the name of the project. For example, `codefresh`.
- * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the runtime is installed in the root of the repository. For example, `/runtimes/defs`.
- * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the runtime is installed in the default branch. For example, `codefresh-prod`.
-
- Examples:
- `https://gitlab.com/nr-codefresh/codefresh.git/runtimes/defs?ref=codefresh-prod` (owner)
-
- `https://gitlab.com/parent-group/child-group/codefresh.git/runtimes/defs?ref=codefresh-prod` (group hierarchy)
-
-* `--git-token ` (required), is the Git token authenticating access to the runtime installation repository (see [GitLab runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes)).
-
-
-{::nomarkdown}
-
-{:/}
-
-
-
-##### GitLab Server
-
-> For the required scopes, see [GitLab Cloud and GitLab Server runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes).
-
-`--enable-git-providers --provider gitlab --repo --git-token `
-
-where:
-* `--enable-git-providers` (required), indicates that you are not using the default Git provider for the runtime.
-* `--provider gitlab` (required), defines GitLab Server as the Git provider for the runtime and the account.
-* `--repo ` (required), is the `HTTPS` clone URL of the Git repository for the runtime installation, including the `.git` suffix.
- If the project doesn't exist, copy an existing clone URL and change the name of the project. Codefresh creates the project during runtime installation.
-
- > Important: You must create the group with access to the project prior to the installation.
-
- Repo URL format:
- `https://gitlab-onprem.devops.cf-cd.com//.git[/subdirectory][?ref=branch]`
- where:
- * `` is your username, or if the project is within a group or groups, the name of the group. For example, `nr-codefresh` (owner), or `parent-group/child-group` (group hierarchy)
- * `` is the name of the project. For example, `codefresh`.
- * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the runtime is installed in the root of the repository. For example, `/runtimes/defs`.
- * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the runtime is installed in the default branch. For example, `codefresh-prod`.
-
- Examples:
- `https://gitlab-onprem.devops.cf-cd.com/nr-codefresh/codefresh.git/runtimes/defs?ref=codefresh-prod` (owner)
-
- `https://gitlab-onprem.devops.cf-cd.com/parent-group/child-group/codefresh.git/runtimes/defs?ref=codefresh-prod` (group hierarchy)
-
-* `--git-token ` (required), is the Git token authenticating access to the runtime installation repository (see [GitLab runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#gitlab-cloud-and-gitlab-server-runtime-token-scopes)).
-
-
-{::nomarkdown}
-
-{:/}
-
-##### Bitbucket Cloud
-> For the required scopes, see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes).
-
-
-`--enable-git-providers --provider bitbucket --repo --git-user --git-token `
-
-where:
-* `--enable-git-providers` (required), indicates that you are not using the default Git provider for the runtime.
-* `--provider gitlab` (required), defines Bitbucket Cloud as the Git provider for the runtime and the account.
-* `--repo ` (required), is the `HTTPS` clone URL of the Git repository for the runtime installation, including the `.git` suffix.
- If the project doesn't exist, copy an existing clone URL and change the name of the project. Codefresh creates the project during runtime installation.
- >Important: Remove the username, including @ from the copied URL.
-
- Repo URL format:
-
- `https://bitbucket.org.git[/subdirectory][?ref=branch]`
- where:
- * `` is your workspace ID. For example, `nr-codefresh`.
- * `` is the name of the repository. For example, `codefresh`.
- * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the runtime is installed in the root of the repository. For example, `/runtimes/defs`.
- * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the runtime is installed in the default branch. For example, `codefresh-prod`.
-
- Example:
- `https://bitbucket.org/nr-codefresh/codefresh.git/runtimes/defs?ref=codefresh-prod`
-* `--git-user ` (required), is your username for the Bitbucket Cloud account.
-* `--git-token ` (required), is the Git token authenticating access to the runtime installation repository (see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes)).
-
-
-{::nomarkdown}
-
-{:/}
-
-##### Bitbucket Server
-
-> For the required scopes, see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes).
-
-
-`--enable-git-providers --provider bitbucket-server --repo --git-user --git-token `
-
-where:
-* `--enable-git-providers` (required), indicates that you are not using the default Git provider for the runtime.
-* `--provider gitlab` (required), defines Bitbucket Cloud as the Git provider for the runtime and the account.
-* `--repo ` (required), is the `HTTPS` clone URL of the Git repository for the runtime installation, including the `.git` suffix.
- If the project doesn't exist, copy an existing clone URL and change the name of the project. Codefresh then creates the project during runtime installation.
- >Important: Remove the username, including @ from the copied URL.
-
- Repo URL format:
-
- `https://bitbucket-server-8.2.devops.cf-cd.com:7990/scm//.git[/subdirectory][?ref=branch]`
- where:
- * `` is your username or organization name. For example, `codefresh-io.`.
- * `` is the name of the repo. For example, `codefresh`.
- * `[/subdirectory]` (optional) is the path to a subdirectory within the repo. When omitted, the runtime is installed in the root of the repository. For example, `/runtimes/defs`.
- * `[?ref=branch]` (optional) is the `ref` queryParam to select a specific branch. When omitted, the runtime is installed in the default branch. For example, `codefresh-prod`.
-
- Example:
- `https://bitbucket-server-8.2.devops.cf-cd.com:7990/scm/codefresh-io/codefresh.git/runtimes/defs?ref=codefresh-prod`
-* `--git-user ` (required), is your username for the Bitbucket Server account.
-* `--git-token ` (required), is the Git token authenticating access to the runtime installation repository (see [Bitbucket runtime token scopes]({{site.baseurl}}/docs/reference/git-tokens/#bitbucket-cloud-and-bitbucket-server-runtime-token-scopes)).
-
-{::nomarkdown}
-
-{:/}
-
-#### Codefresh resource flags
-**Codefresh demo resources**
-Optional.
-Install demo pipelines to use as a starting point to create your own pipelines. We recommend installing the demo resources as these are used in our quick start tutorials.
-
-* Silent install: Add the `--demo-resources` flag, and define its value as `true` (default), or `false`. For example, `--demo-resources=true`
-
-**Insecure flag**
-For _on-premises installations_, if the Ingress controller does not have a valid SSL certificate, to continue with the installation, add the `--insecure` flag to the installation command.
-
-{::nomarkdown}
-
-{:/}
-
-
-### Install the Codefresh CLI
-
-Install the Codefresh CLI using the option that best suits you: `curl`, `brew`, or standard download.
-If you are not sure which OS to select for `curl`, simply select one, and Codefresh automatically identifies and selects the right OS for CLI installation.
-
-{::nomarkdown}
-
-{:/}
-
-### Install the hybrid runtime
-
-**Before you begin**
-* Make sure you meet the [minimum requirements]({{site.baseurl}}/docs/runtime/requirements/#minimum-requirements) for runtime installation
-* Make sure you have [runtime token with the required scopes from your Git provdier]({{site.baseurl}}/docs/reference/git-tokens)
-* [Download or upgrade to the latest version of the CLI]({{site.baseurl}}/docs/clients/csdp-cli/#upgrade-codefresh-cli)
-* Review [Hybrid runtime installation flags](#hybrid-runtime-installation-flags)
-* Make sure your ingress controller is configured correctly:
- * [Ambasador ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#ambassador-ingress-configuration)
- * [AWS ALB ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#alb-aws-ingress-configuration)
- * [Istio ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#istio-ingress-configuration)
- * [NGINX Enterprise ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#nginx-enterprise-ingress-configuration)
- * [NGINX Community ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#nginx-community-version-ingress-configuration)
- * [Traefik ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#traefik-ingress-configuration)
-
-
-{::nomarkdown}
-
-{:/}
-
-**How to**
-
-1. Do one of the following:
- * If this is your first hybrid runtime installation, in the Welcome page, select **+ Install Runtime**.
- * If you have provisioned a hybrid runtime, to provision additional runtimes, in the Codefresh UI, go to [**Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
-1. Click **+ Add Runtimes**, and then select **Hybrid Runtimes**.
-1. Do one of the following:
- * CLI wizard: Run `cf runtime install`, and follow the prompts to enter the required values.
- * Silent install: Pass the required flags in the install command:
- `cf runtime install --repo --git-token --silent`
- For the list of flags, see [Hybrid runtime installation flags](#hybrid-runtime-installation-flags).
-1. If relevant, complete the configuration for these ingress controllers:
- * [ALB AWS: Alias DNS record in route53 to load balancer]({{site.baseurl}}/docs/runtime/requirements/#alias-dns-record-in-route53-to-load-balancer)
- * [Istio: Configure cluster routing service]({{site.baseurl}}/docs/runtime/requirements/#cluster-routing-service)
- * [NGINX Enterprise ingress controller: Patch certificate secret]({{site.baseurl}}/docs/runtime/requirements/#patch-certificate-secret)
-1. If you bypassed installing ingress resources with the `--skip-ingress` flag for ingress controllers not in the supported list, create and register Git integrations using these commands:
- `cf integration git add default --runtime --api-url `
- `cf integration git register default --runtime --token `
-
-
-{::nomarkdown}
-
-{:/}
-
-### Hybrid runtime components
-
-**Git repositories**
-* Runtime install repository: The installation repo contains three folders: apps, bootstrap and projects, to manage the runtime itself with Argo CD.
-* Git source repository: Created with the name `[repo_name]_git-source`. This repo stores manifests for pipelines with sources, events, workflow templates. See [Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/).
-
-* Shared configuration repository: Stores configuration and resource manifests that can be shared across runtimes, such as integration resources. See [Shared configuration repository]({{site.baseurl}}/docs/reference/shared-configuration/)
-
-**Argo CD components**
-* Project, comprising an Argo CD AppProject and an ApplicationSet
-* Installations of the following applications in the project:
- * Argo CD
- * Argo Workflows
- * Argo Events
- * Argo Rollouts
-
-**Codefresh-specific components**
-* Codefresh Applications in the Argo CD AppProject:
- * App-proxy facilitating behind-firewall access to Git
- * Git Source entity that references the`[repo_name]_git-source`
-
-Once the hybrid runtime is successfully installed, it is provisioned on the Kubernetes cluster, and displayed in the **Runtimes** page.
-
-{::nomarkdown}
-
-{:/}
-
-
-### (Optional) Internal ingress host configuration for existing hybrid runtimes
-If you already have provisioned hybrid runtimes, to use an internal ingress host for app-proxy communication and an external ingress host to handle webhooks, change the specs for the `Ingress` and `Runtime` resources in the runtime installation repository. Use the examples as guidelines.
-
-`/apps/app-proxy/overlays//ingress.yaml`: change `host`
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: codefresh-cap-app-proxy
- namespace: codefresh #replace with your runtime name
-spec:
- ingressClassName: nginx
- rules:
- - host: my-internal-ingress-host # replace with the internal ingress host for app-proxy
- http:
- paths:
- - backend:
- service:
- name: cap-app-proxy
- port:
- number: 3017
- path: /app-proxy/
- pathType: Prefix
-```
-
-`..//bootstrap/.yaml`: add `internalIngressHost`
-
-```yaml
-apiVersion: v1
-data:
- base-url: https://g.codefresh.io
- runtime: |
- apiVersion: codefresh.io/v1alpha1
- kind: Runtime
- metadata:
- creationTimestamp: null
- name: codefresh #replace with your runtime name
- namespace: codefresh #replace with your runtime name
- spec:
- bootstrapSpecifier: github.com/codefresh-io/cli-v2/manifests/argo-cd
- cluster: https://7DD8390300DCEFDAF87DC5C587EC388C.gr7.us-east-1.eks.amazonaws.com
- components:
- - isInternal: false
- name: events
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/argo-events
- wait: true
- - isInternal: false
- name: rollouts
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/argo-rollouts
- wait: false
- - isInternal: false
- name: workflows
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/argo-workflows
- wait: false
- - isInternal: false
- name: app-proxy
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/app-proxy
- wait: false
- defVersion: 1.0.1
- ingressClassName: nginx
- ingressController: k8s.io/ingress-nginx
- ingressHost: https://support.cf.com/
- internalIngressHost: https://my-internal-ingress-host # add this line and replace my-internal-ingress-host with your internal ingress host
- repo: https://github.com/NimRegev/my-codefresh.git
- version: 99.99.99
-```
-
-
-### Related articles
-[Add external clusters to runtimes]({{site.baseurl}}/docs/runtime/managed-cluster/)
-[Manage provisioned runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
-[Monitor provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
-[Troubleshoot hybrid runtime installation]({{site.baseurl}}/docs/troubleshooting/runtime-issues/)
diff --git a/_docs/runtime/installation_original.md b/_docs/runtime/installation_original.md
deleted file mode 100644
index a9624bc7..00000000
--- a/_docs/runtime/installation_original.md
+++ /dev/null
@@ -1,338 +0,0 @@
----
-title: "Install hybrid runtimes"
-description: ""
-group: runtime
-toc: true
----
-
-If you have a hybrid environment, you can provision one or more hybrid runtimes in your Codefresh account. The hybrid runtime comprises Argo CD components and Codefresh-specific components. The Argo CD components are derived from a fork of the Argo ecosystem, and do not correspond to the open-source versions available.
-
-> If you have Hosted GitOps, to provision a hosted runtime, see [Provision a hosted runtime]({{site.baseurl}}/docs/runtime/hosted-runtime/#1-provision-hosted-runtime) in [Set up a hosted (Hosted GitOps) environment]({{site.baseurl}}/docs/runtime/hosted-runtime/).
-
-There are two parts to installing a hybrid runtime:
-
-1. Installing the Codefresh CLI
-2. Installing the hybrid runtime from the CLI, either through the CLI wizard or via silent installation.
- The hybrid runtime is installed in a specific namespace on your cluster. You can install more runtimes on different clusters in your deployment.
- Every hybrid runtime installation makes commits to two Git repos:
-
- * Runtime install repo: The installation repo that manages the hybrid runtime itself with Argo CD. If the repo URL does not exist, the runtime creates it automatically.
- * Git Source repo: Created automatically during runtime installation. The repo where you store manifests to run Codefresh pipelines.
-
-See also [Codefresh architecture]({{site.baseurl}}/docs/getting-started/architecture).
-
-### Installing the Codefresh CLI
-
-Install the Codefresh CLI using the option that best suits you: `curl`, `brew`, or standard download.
-If you are not sure which OS to select for `curl`, simply select one, and Codefresh automatically identifies and selects the right OS for CLI installation.
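-For example, a `curl`-based installation on a Linux amd64 machine might look like the following; the release asset name is illustrative, so pick the asset that matches your OS from the [cli-v2 releases](https://github.com/codefresh-io/cli-v2/releases){:target="\_blank"} page:
-
- `curl -L --output - https://github.com/codefresh-io/cli-v2/releases/latest/download/cf-linux-amd64.tar.gz | tar zx`
-
- `mv ./cf-linux-amd64 /usr/local/bin/cf`
-
- `cf version`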
-
-### Installing the hybrid runtime
-
-1. Do one of the following:
- * If this is your first hybrid runtime installation, in the Welcome page, select **+ Install Runtime**.
- * If you have provisioned a hybrid runtime, to provision additional runtimes, in the Codefresh UI, go to [**Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}, and select **+ Add Runtimes**.
-1. Run:
- * CLI wizard: Run `cf runtime install`, and follow the prompts to enter the required values.
- * Silent install: Pass the required flags in the install command:
- `cf runtime install --repo <runtime-repo> --git-token <git-token> --silent`
- For the list of flags, see _Hybrid runtime flags_.
-
-> Note:
-> Hybrid runtime installation starts by checking network connectivity and the K8s cluster server version.
-> To skip these tests, pass the `--skip-cluster-checks` flag.
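-
-For example, a complete silent-install command might look like this, where the runtime name `codefresh`, the repo URL, and the token are illustrative placeholders you replace with your own values:
- `cf runtime install codefresh --repo https://github.com/<owner>/codefresh-runtime --git-token <git-token> --silent --skip-cluster-checks`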
-
-#### Hybrid runtime flags
-
-**Runtime name**
-Required.
-The runtime name must start with a lower-case character, and can include up to 62 lower-case characters and numbers.
-* CLI wizard: Add when prompted.
-* Silent install: Required.
-
-**Namespace resource labels**
-Optional.
-The label of the namespace resource to which you are installing the hybrid runtime. You can add more than one label. Labels are required to identify the networks that need access during installation, as is the case when using service meshes such as Istio.
-
-* CLI wizard and Silent install: Add the `--namespace-labels` flag, and define the labels in `key=value` format. Separate multiple labels with commas, as in the example below.
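-  For example, assuming illustrative label values for an Istio-enabled namespace:
-  `cf runtime install codefresh --namespace-labels istio-injection=enabled,team=platform --repo <runtime-repo> --git-token <git-token> --silent`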
-
-**Kube context**
-Required.
-The cluster defined as the default for `kubectl`. If you have more than one Kube context, the current context is selected by default.
-
-* CLI wizard: Select the Kube context from the list displayed.
-* Silent install: Explicitly specify the Kube context with the `--context` flag.
-
-**Ingress class**
-Required.
-If you have more than one ingress class configured on your cluster:
-
-* CLI wizard: Select the ingress class for runtime installation from the list displayed.
-* Silent install: Explicitly specify the ingress class through the `--ingress-class` flag. Otherwise, runtime installation fails.
-
-**Ingress host**
-Required.
-The IP address or host name of the ingress controller component.
-
-* CLI wizard: Automatically selects and displays the host, either from the cluster or the ingress controller associated with the **Ingress class**.
-* Silent install: Add the `--ingress-host` flag. If a value is not provided, takes the host from the ingress controller associated with the **Ingress class**.
- > Important: For AWS ALB, the ingress host is created post-installation. However, when prompted, add the domain name you will create in `Route 53` as the ingress host.
-
-SSL certificates for the ingress host:
-If the ingress host does not have a valid SSL certificate, you can continue with the installation in insecure mode, which disables certificate validation.
-
-* CLI wizard: Automatically detects and prompts you to confirm continuing with the installation in insecure mode.
-* Silent install: To continue with the installation in insecure mode, add the `--insecure-ingress-host` flag.
-
-**Internal ingress host**
-Optional.
-Enforce separation between internal (app-proxy) and external (webhook) communication by adding an internal ingress host for the app-proxy service in the internal network.
-For both CLI wizard and Silent install:
-
-* For new runtime installations, add the `--internal-ingress-host` flag pointing to the ingress host for `app-proxy`.
-* For existing installations, commit changes to the installation repository by modifying the `app-proxy` ingress and the runtime's `<runtime-name>.yaml` definition.
- See _Internal ingress host configuration (optional for existing runtimes only)_ in [Post-installation configuration](#post-installation-configuration).
-
-**Ingress resources**
-Optional.
-If you have a different routing service (not NGINX), bypass installing ingress resources with the `--skip-ingress` flag.
-In this case, after completing the installation, manually configure the cluster's routing service, and create and register Git integrations. See _Cluster routing service_ in [Post-installation configuration](#post-installation-configuration).
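-
-For example, a silent install that bypasses ingress resources might look like this (values are placeholders):
- `cf runtime install codefresh --repo <runtime-repo> --git-token <git-token> --skip-ingress --silent`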
-
-**Shared configuration repository**
-The Git repository per runtime account that stores the shared configuration manifests.
-* CLI wizard and Silent install: Add the `--shared-config-repo` flag and define the path to the shared repo.
-
-**Insecure flag**
-For _on-premises installations_, if the Ingress controller does not have a valid SSL certificate, to continue with the installation, add the `--insecure` flag to the installation command.
-
-**Repository URLs**
-The GitHub repository to house the installation definitions.
-
-* CLI wizard: If the repo doesn't exist, Codefresh creates it during runtime installation.
-* Silent install: Required. Add the `--repo` flag.
-
-**Git runtime token**
-Required.
-The Git token authenticating access to the GitHub installation repository.
-* Silent install: Add the `--git-token` flag.
-
-**Codefresh demo resources**
-Optional.
-Install demo pipelines to use as a starting point to create your own pipelines. We recommend installing the demo resources as these are used in our quick start tutorials.
-
-* Silent install: Add the `--demo-resources` flag. By default, set to `true`.
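-
-  For example, assuming the CLI's standard boolean-flag syntax, you can skip the demo pipelines with:
-  `cf runtime install codefresh --repo <runtime-repo> --git-token <git-token> --demo-resources=false --silent`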
-
-### Hybrid runtime components
-
-**Git repositories**
-
-* Runtime install repo: The installation repo contains three folders (apps, bootstrap, and projects) used to manage the runtime itself with Argo CD.
-* Git source repository: Created with the name `[repo_name]_git-source`. This repo stores the manifests for pipelines, including sources, events, and workflow templates.
-
-**Argo CD components**
-
-* Project, comprising an Argo CD AppProject and an ApplicationSet
-* Installations of the following applications in the project:
- * Argo CD
- * Argo Workflows
- * Argo Events
- * Argo Rollouts
-
-**Codefresh-specific components**
-
-* Codefresh Applications in the Argo CD AppProject:
- * App-proxy facilitating behind-firewall access to Git
- * Git Source entity that references the `[repo_name]_git-source` repo
-
-Once the hybrid runtime is successfully installed, it is provisioned on the Kubernetes cluster, and displayed in the **Runtimes** page.
-
-### Hybrid runtime post-installation configuration
-
-After provisioning a hybrid runtime, configure additional settings for the following:
-
-* NGINX Enterprise installations (with and without NGINX Ingress Operator)
-* AWS ALB installations
-* Cluster routing service if you bypassed installing ingress resources
-* (Existing hybrid runtimes) Internal and external ingress host specifications
-* Register Git integrations
-
-
-
-#### AWS ALB post-install configuration
-
-For AWS ALB installations, do the following:
-
-* Create an `Alias` record in Amazon Route 53
-* Manually register Git integrations - see _Git integration registration_.
-
-Create an `Alias` record in Amazon Route 53, and map your zone apex (example.com) DNS name to your AWS Application Load Balancer (ALB).
-For more information, see [Creating records by using the Amazon Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html){:target="\_blank"}.
-
-{% include image.html
- lightbox="true"
- file="/images/runtime/post-install-alb-ingress.png"
- url="/images/runtime/post-install-alb-ingress.png"
- alt="Route 53 record settings for AWS ALB"
- caption="Route 53 record settings for AWS ALB"
- max-width="30%"
-%}
-
-#### Configure cluster routing service
-
-If you bypassed installing ingress resources with the `--skip-ingress` flag, configure the `host` for the Ingress, or the VirtualService for Istio if used, to route traffic to the `app-proxy` and `webhook` services, as in the examples below.
-
-**Ingress resource example for `app-proxy`:**
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: codefresh-cap-app-proxy
- namespace: codefresh
-spec:
- ingressClassName: alb
- rules:
- - host: my.support.cf-cd.com # replace with your host name
- http:
- paths:
- - backend:
- service:
- name: cap-app-proxy
- port:
- number: 3017
- path: /app-proxy/
- pathType: Prefix
-```
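-
-**Ingress resource sketch for `webhook`** (an assumption-based example: it reuses the default GitHub push event source service, `push-github-eventsource-svc` on port 80, shown in the `VirtualService` example below; the resource name is illustrative):
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: codefresh-git-source-webhooks # illustrative name
-  namespace: codefresh # replace with your runtime name
-spec:
-  ingressClassName: alb
-  rules:
-  - host: my.support.cf-cd.com # replace with your host name
-    http:
-      paths:
-      - backend:
-          service:
-            name: push-github-eventsource-svc
-            port:
-              number: 80
-        path: /webhooks/codefresh/push-github # replace `codefresh` with your runtime name
-        pathType: Prefix
-```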
-
-**`VirtualService` examples for `app-proxy` and `webhook`:**
-
-```yaml
-apiVersion: networking.istio.io/v1alpha3
-kind: VirtualService
-metadata:
- namespace: test-runtime3 # replace with your runtime name
- name: cap-app-proxy
-spec:
- hosts:
- - my.support.cf-cd.com # replace with your host name
- gateways:
- - my-gateway
- http:
- - match:
- - uri:
- prefix: /app-proxy
- route:
- - destination:
- host: cap-app-proxy
- port:
- number: 3017
-```
-
-```yaml
-apiVersion: networking.istio.io/v1alpha3
-kind: VirtualService
-metadata:
- namespace: test-runtime3 # replace with your runtime name
- name: csdp-default-git-source
-spec:
- hosts:
- - my.support.cf-cd.com # replace with your host name
- gateways:
- - my-gateway
- http:
- - match:
- - uri:
- prefix: /webhooks/test-runtime3/push-github # replace `test-runtime3` with your runtime name
- route:
- - destination:
- host: push-github-eventsource-svc
- port:
- number: 80
-```
-Continue with [Git integration registration](#git-integration-registration) in this article.
-
-#### Internal ingress host configuration (optional for existing hybrid runtimes only)
-
-If you already have provisioned hybrid runtimes, to use an internal ingress host for app-proxy communication and an external ingress host to handle webhooks, change the specs for the `Ingress` and `Runtime` resources in the runtime installation repository. Use the examples as guidelines.
-
-`apps/app-proxy/overlays/<runtime-name>/ingress.yaml`: change `host`
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: codefresh-cap-app-proxy
- namespace: codefresh #replace with your runtime name
-spec:
- ingressClassName: nginx
- rules:
- - host: my-internal-ingress-host # replace with the internal ingress host for app-proxy
- http:
- paths:
- - backend:
- service:
- name: cap-app-proxy
- port:
- number: 3017
- path: /app-proxy/
- pathType: Prefix
-```
-
-`../<runtime-name>/bootstrap/<runtime-name>.yaml`: add `internalIngressHost`
-
-```yaml
-apiVersion: v1
-data:
- base-url: https://g.codefresh.io
- runtime: |
- apiVersion: codefresh.io/v1alpha1
- kind: Runtime
- metadata:
- creationTimestamp: null
- name: codefresh #replace with your runtime name
- namespace: codefresh #replace with your runtime name
- spec:
- bootstrapSpecifier: github.com/codefresh-io/cli-v2/manifests/argo-cd
- cluster: https://7DD8390300DCEFDAF87DC5C587EC388C.gr7.us-east-1.eks.amazonaws.com
- components:
- - isInternal: false
- name: events
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/argo-events
- wait: true
- - isInternal: false
- name: rollouts
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/argo-rollouts
- wait: false
- - isInternal: false
- name: workflows
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/argo-workflows
- wait: false
- - isInternal: false
- name: app-proxy
- type: kustomize
- url: github.com/codefresh-io/cli-v2/manifests/app-proxy
- wait: false
- defVersion: 1.0.1
- ingressClassName: nginx
- ingressController: k8s.io/ingress-nginx
- ingressHost: https://support.cf.com/
- internalIngressHost: https://my-internal-ingress-host # add this line and replace my-internal-ingress-host with your internal ingress host
- repo: https://github.com/NimRegev/my-codefresh.git
- version: 99.99.99
-```
-
-#### Git integration registration
-
-If you bypassed installing ingress resources with the `--skip-ingress` flag, or if AWS ALB is your ingress controller, create and register Git integrations using these commands:
- `cf integration git add default --runtime <runtime-name> --api-url <api-url>`
-
- `cf integration git register default --runtime <runtime-name> --token <registration-token>`
-
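- For example, for GitHub Cloud, with an illustrative runtime name `my-runtime` (GitHub's API URL is `https://api.github.com`):
-
- `cf integration git add default --runtime my-runtime --api-url https://api.github.com`
-
- `cf integration git register default --runtime my-runtime --token <registration-token>`
-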
-### Related articles
-[Add external clusters to runtimes]({{site.baseurl}}/docs/runtime/managed-cluster/)
-[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
-[Manage provisioned runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
-[Monitor provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
-[Troubleshoot runtime installation]({{site.baseurl}}/docs/troubleshooting/runtime-issues/)
diff --git a/_docs/runtime/monitor-manage-runtimes.md b/_docs/runtime/monitor-manage-runtimes.md
deleted file mode 100644
index 189b2b08..00000000
--- a/_docs/runtime/monitor-manage-runtimes.md
+++ /dev/null
@@ -1,332 +0,0 @@
----
-title: "Manage provisioned runtimes"
-description: ""
-group: runtime
-redirect_from:
- - /monitor-manage-runtimes/
- - /monitor-manage-runtimes
-toc: true
----
-
-
-The **Runtimes** page displays the provisioned runtimes in your account: hybrid runtimes, and the hosted runtime if you have one.
-
-View runtime components and information in List or Topology view formats, and upgrade, uninstall, and migrate runtimes.
-
-{% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-list-view.png"
- url="/images/runtime/runtime-list-view.png"
- alt="Runtime List View"
- caption="Runtime List View"
- max-width="70%"
-%}
-
-Select the view mode that suits you to view runtime components and information, and to manage provisioned runtimes.
-
-
-Manage provisioned runtimes:
-* [Add managed clusters to hybrid or hosted runtimes]({{site.baseurl}}/docs/runtime/managed-cluster/)
-* [Add and manage Git Sources associated with hybrid or hosted runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
-* [Upgrade provisioned hybrid runtimes](#hybrid-upgrade-provisioned-runtimes)
-* [Uninstall provisioned runtimes](#uninstall-provisioned-runtimes)
-* [Migrate ingress-less hybrid runtimes](#hybrid-migrate-ingress-less-runtimes)
-* Update Git runtime tokens
-
-> Unless specified otherwise, management options are common to both hybrid and hosted runtimes. If an option is valid only for hybrid runtimes, it is indicated as such.
-
-To monitor provisioned hybrid runtimes, including recovering runtimes for failed clusters, see [Monitor provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/).
-
-### Runtime views
-
-View provisioned hybrid and hosted runtimes in List or Topology view formats.
-
-* List view: The default view, displays the list of provisioned runtimes, the clusters managed by them, and Git Sources.
-* Topology view: Displays a hierarchical view of runtimes and the clusters managed by them, with health and sync status of each cluster.
-
-#### List view
-
-The List view is a grid-view of the provisioned runtimes.
-
-Here is an example of the List view for runtimes.
-{% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-list-view.png"
- url="/images/runtime/runtime-list-view.png"
- alt="Runtime List View"
- caption="Runtime List View"
- max-width="70%"
-%}
-
-Here is a description of the information in the List View.
-
-{: .table .table-bordered .table-hover}
-| List View Item| Description |
-| -------------- | ---------------- |
-|**Name**| The name of the provisioned Codefresh runtime. |
-|**Type**| The type of runtime provisioned: **Hybrid** or **Hosted**. |
-|**Cluster/Namespace**| The K8s API server endpoint, and the namespace within the cluster. |
-|**Modules**| The modules installed based on the type of provisioned runtime. Hybrid runtimes include CI and CD Ops modules. Hosted runtimes include CD Ops. |
-|**Managed Cluster**| The number of managed clusters, if any, for the runtime. To view the list of managed clusters, select the runtime, and then the **Managed Clusters** tab. To work with managed clusters, see [Adding external clusters to runtimes]({{site.baseurl}}/docs/runtime/managed-cluster).|
-|**Version**| The version of the runtime currently installed. **Update Available!** indicates there are later versions of the runtime. To see all the commits to the runtime, mouse over **Update Available!**, and select **View Complete Change Log**. |
-|**Last Updated**| The most recent update information from the runtime to the Codefresh platform. Updates are sent to the platform typically every few minutes. Longer update intervals may indicate networking issues.|
-|**Sync Status**| The health and sync status of the runtime or cluster. An error status indicates health or sync errors in the runtime, or in a managed cluster if one was added to the runtime; the runtime name is colored red. A syncing status indicates that the runtime is being synced to the cluster on which it is provisioned. |
-
-#### Topology view
-
-A hierarchical visualization of the provisioned runtimes. The Topology view makes it easy to identify key information such as versions, and health and sync status, for both the provisioned runtime and the clusters managed by it.
-Here is an example of the Topology view for runtimes.
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-topology-view.png"
- url="/images/runtime/runtime-topology-view.png"
- alt="Runtime Topology View"
- caption="Runtime Topology View"
- max-width="30%"
-%}
-
-Here is a description of the information in the Topology view.
-
-{: .table .table-bordered .table-hover}
-| Topology View Item | Description |
-| ------------------------| ---------------- |
-|**Runtime** | The provisioned runtime. Hybrid runtimes display the name of the K8s API server endpoint with the cluster. Hosted runtimes display 'hosted'. |
-|**Cluster** | The local cluster, and managed clusters if any, for the runtime. The local cluster is always displayed as `in-cluster`, with the server URL always set to `https://kubernetes.default.svc/`. Select the add-cluster option to add a new managed cluster. To view cluster components, select the cluster. To add and work with managed clusters, see [Adding external clusters to runtimes]({{site.baseurl}}/docs/runtime/managed-cluster). |
-|**Health/Sync status** | The health and sync status of the runtime or cluster. Health or sync errors in the runtime, or in a managed cluster if one was added to the runtime, border the runtime or cluster node in red and color the name red. A syncing status indicates that the runtime is being synced to the cluster on which it is provisioned. |
-|**Search and View options** | Find a runtime or its clusters by typing part of the runtime/cluster name, and then navigate to the entries found. Topology view options: resize to window, zoom in, zoom out, full-screen view. |
-
-
-
-### (Hybrid) Upgrade provisioned runtimes
-
-Upgrade provisioned hybrid runtimes to install critical security updates or to install the latest version of all components. Upgrade a provisioned hybrid runtime by running a silent upgrade or through the CLI wizard.
-If you have managed clusters for the hybrid runtime, upgrading the runtime automatically updates runtime components within the managed cluster as well.
-
-> When there are security updates, the UI displays the alert, _At least one runtime requires a security update_. The Version column displays an _Update Required!_ notification.
-
-> If you have older runtime versions, upgrade to manually define or create the shared configuration repo for your account. See [Shared configuration repo]({{site.baseurl}}/docs/reference/shared-configuration/).
-
-
-**Before you begin**
-For both silent and CLI-wizard-based upgrades, make sure you have:
-
-* The latest version of the Codefresh CLI
- Run `cf version` to see your version and [click here](https://github.com/codefresh-io/cli-v2/releases){:target="\_blank"} to compare with the latest CLI version.
-* A valid runtime Git token
-
-**Silent upgrade**
-
-* Pass the mandatory flags in the upgrade command:
-
-  `cf runtime upgrade <runtime-name> --git-token <git-token> --silent`
-  where:
-  `<git-token>` is a valid runtime token with the `repo` and `admin-repo.hook` scopes.
-
-**CLI wizard-based upgrade**
-
-1. In the Codefresh UI, make sure you are in [Runtimes](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
-1. Switch to either the **List View** or to the **Topology View**.
-1. **List view**:
- * Select the runtime name.
- * To see all the commits to the runtime, in the Version column, mouse over **Update Available!**, and select **View Complete Change Log**.
- * On the top-right, select **Upgrade**.
-
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/runtime-list-view-upgrade.png"
- url="/images/runtime/runtime-list-view-upgrade.png"
- alt="List View: Upgrade runtime option"
- caption="List View: Upgrade runtime option"
- max-width="30%"
- %}
-
- **Topology view**:
- Select the runtime cluster, and from the panel, select the three dots and then select **Upgrade Runtime**.
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/runtiime-topology-upgrade.png"
- url="/images/runtime/runtiime-topology-upgrade.png"
- alt="Topology View: Upgrade runtime option"
- caption="Topology View: Upgrade runtime option"
- max-width="30%"
-%}
-
-{:start="4"}
-
-1. If you have already installed the Codefresh CLI, in the Install Upgrades panel, copy the upgrade command.
-
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/install-upgrades.png"
- url="/images/runtime/install-upgrades.png"
- alt="Upgrade runtime"
- caption="Upgrade runtime panel"
- max-width="30%"
-%}
-
-{:start="5"}
-1. In your terminal, paste the command, and do the following:
- * Update the Git token value.
- * To manually define the shared configuration repo, add the `--shared-config-repo` flag with the path to the repo.
-1. Confirm to start the upgrade.
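-
-   For example, an upgrade command that also defines the shared configuration repo might look like this (runtime name and repo URL are illustrative placeholders):
-   `cf runtime upgrade my-runtime --git-token <git-token> --shared-config-repo https://github.com/<owner>/codefresh-shared-config`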
-
-
-
-
-### Uninstall provisioned runtimes
-
-Uninstall provisioned hybrid and hosted runtimes that are not in use. Uninstall a runtime by running a silent uninstall, or through the CLI wizard.
-> Uninstalling a runtime removes the Git Sources and managed clusters associated with the runtime.
-
-**Before you begin**
-For both types of uninstalls, make sure you have:
-
-* The latest version of the Codefresh CLI
-* A valid runtime Git token
-* The Kube context from which to uninstall the provisioned runtime
-
-**Silent uninstall**
-Pass the mandatory flags in the uninstall command:
- `cf runtime uninstall --git-token