diff --git a/_data/home-content.yml b/_data/home-content.yml
index dff15405..68d86fa9 100644
--- a/_data/home-content.yml
+++ b/_data/home-content.yml
@@ -39,7 +39,7 @@
localurl: /docs/runtime/installation
- title: Manage provisioned runtimes
localurl: /docs/runtime/monitor-manage-runtimes/
- - title: Monitor provisioned runtimes
+ - title: Monitor provisioned hybrid runtimes
localurl: /docs/runtime/monitoring-troubleshooting/
- title: Add external clusters to runtimes
localurl: /docs/runtime/managed-cluster/
diff --git a/_data/nav.yml b/_data/nav.yml
index d2833506..c96d5f2e 100644
--- a/_data/nav.yml
+++ b/_data/nav.yml
@@ -55,7 +55,7 @@
url: "/installation"
- title: Manage provisioned runtimes
url: "/monitor-manage-runtimes"
- - title: Monitor provisioned runtimes
+ - title: Monitor provisioned hybrid runtimes
url: "/monitoring-troubleshooting"
- title: Add external clusters to runtimes
url: "/managed-cluster"
diff --git a/_docs/deployment/sync-application.md b/_docs/deployment/sync-application.md
new file mode 100644
index 00000000..beec44f8
--- /dev/null
+++ b/_docs/deployment/sync-application.md
@@ -0,0 +1,68 @@
+---
+title: "Sync applications"
+description: ""
+group: deployment
+toc: true
+---
+
+Sync applications directly from the Codefresh UI.
+
+The Synchronize option manually syncs an application, deploying the desired state in the Git repository to the cluster on demand.
+
+The set of options for application synchronization is identical to that of Argo CD. In Codefresh, they are grouped into two sets: Revision settings and Additional Options.
+
+
+### Synchronize application
+
+Codefresh groups the synchronization options into **Revision settings** and **Additional Options**, described in the sections that follow.
+
+### Revision settings for application sync
+Revision: The branch to be checked out when a deployment happens.
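+For reference, the revision corresponds to `spec.source.targetRevision` in the Argo CD Application manifest; a minimal sketch, where the application and repository names are placeholders:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: my-app                                       # placeholder application name
+  namespace: argocd
+spec:
+  project: default
+  source:
+    repoURL: https://github.com/example/my-app.git   # placeholder repository
+    path: manifests
+    targetRevision: main                             # the branch (or tag) used as the sync revision
+  destination:
+    server: https://kubernetes.default.svc
+    namespace: my-app
+```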
+
+Prune: When selected, removes from the cluster resources that no longer exist in Git. If pruning is not selected and Argo CD identifies resources that require pruning, it marks them as requiring pruning and reports the application as out-of-sync, without deleting them.
+Read more in [No Prune Resources](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#no-prune-resources){:target="\_blank"}.
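+Individual resources can also be excluded from pruning with the `Prune=false` sync-option annotation described in the link above; a minimal sketch on a hypothetical ConfigMap:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: legacy-config                                # placeholder name
+  annotations:
+    argocd.argoproj.io/sync-options: Prune=false     # never prune this resource
+data:
+  key: value
+```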
+
+Apply only: When selected, uses Kubernetes server-side apply, with field-management controls for patches and updates, instead of the default client-side apply when syncing the application.
+Read more in [Kubernetes Server-Side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/){:target="\_blank"}.
+
+Compared to the `last-applied` annotation managed by `kubectl`, server-side apply uses a more declarative approach: it tracks a user's field management rather than the user's last applied state. As a side effect, information about which field manager manages each field in an object also becomes available.
+
+For a user to manage a field, in the server-side apply sense, means that the user relies on and expects the value of the field not to change. The user who last made an assertion about the value of a field is recorded as the current field manager, either by changing the value with POST, PUT, or non-apply PATCH, or by including the field in a config sent to the server-side apply endpoint. When using server-side apply, trying to change a field managed by someone else results in a rejected request, unless the change is forced.
+
+Server-side apply controls modification rights by clarifying field ownership, which effectively prevents unintended changes. For an update or a patch, there are three possible circumstances:
+
+* The current manager is the manager of all the modified fields: the operation proceeds normally.
+* The current manager is not the manager of some fields: the operation can continue as long as those fields are not modified, and the current manager is added to the `fieldManager` of each such field as a co-manager (shared manager).
+* Fields managed by another manager are modified: a conflict occurs. You can override the value by becoming a shared manager, or force the modification with `--force-conflicts`.
+
+Server-side apply is also useful when mutating webhooks provide default configuration for resources: with client-side apply, the mutation is not applied if there is no difference between the desired and live states, whereas with server-side apply the mutations are always applied.
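+For reference, and assuming your Argo CD version supports it, server-side apply can also be enabled declaratively through the `ServerSideApply=true` sync option; a minimal sketch, with other required Application fields omitted for brevity:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: my-app                      # placeholder name
+spec:
+  syncPolicy:
+    syncOptions:
+      - ServerSideApply=true        # sync with kubectl apply --server-side semantics
+```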
+
+Dry run: When selected, skips the dry run for resources that are unknown to the cluster. This option is useful when CRDs for custom resources are not created as part of the sync, but are created by another mechanism. In such cases, Argo CD's default behavior is to automatically fail the sync with the error `the server could not find the requested resource`.
+Read more in [Skip Dry Run for new custom resource types](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#skip-dry-run-for-new-custom-resources-types){:target="\_blank"}.
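+The behavior described in the link above maps to the `SkipDryRunOnMissingResource=true` sync-option annotation on the affected resource; a minimal sketch on a hypothetical custom resource:
+
+```yaml
+apiVersion: example.com/v1alpha1    # hypothetical CRD group/version
+kind: Widget
+metadata:
+  name: my-widget                   # placeholder name
+  annotations:
+    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true   # skip the dry run if the CRD is not yet registered
+spec: {}
+```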
+
+Force: When selected, orphans the dependents of a deleted resource during the sync operation. This option is useful to prevent the dependents from being deleted together with the owner resource.
+
+### Additional Options for application sync
+
+Sync options are described in [Argo CD Sync Options](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/){:target="\_blank"}.
+
+**Respect ignore differences**: When selected, the sync respects the ignore-differences rules configured for resources in the application, and does not overwrite the ignored fields.
+Read more in [Argo CD Diffing Customization](https://argo-cd.readthedocs.io/en/stable/user-guide/diffing/){:target="\_blank"}.
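+For reference, the equivalent declarative configuration combines `ignoreDifferences` with the `RespectIgnoreDifferences=true` sync option; a minimal sketch, with other required Application fields omitted for brevity:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: my-app                      # placeholder name
+spec:
+  ignoreDifferences:
+    - group: apps
+      kind: Deployment
+      jsonPointers:
+        - /spec/replicas            # example: ignore a replica count managed by an autoscaler
+  syncPolicy:
+    syncOptions:
+      - RespectIgnoreDifferences=true   # do not overwrite ignored fields during sync
+```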
+
+#### Prune propagation policy
+Defines how resources are pruned, by applying Kubernetes cascading deletion policies.
+Read more in [Kubernetes - Cascading deletion](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#cascading-deletion){:target="\_blank"}.
+
+* **Foreground**: The default prune propagation policy used by Argo CD. With this policy, Kubernetes changes the state of the owner resource to `deletion in progress`, until the controller deletes the dependent resources and finally the owner resource itself.
+* **Background**: When selected, Kubernetes deletes the owner resource immediately, and then deletes the dependent resources in the background.
+* **Orphan**: When selected, Kubernetes deletes the owner resource, and the dependent resources remain orphaned.
+
+All prune propagation policies can be used with the **Replace** and **Retry** options described below.
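+For reference, the policy corresponds to the `PrunePropagationPolicy` sync option in the Application manifest; a minimal sketch, with other required Application fields omitted for brevity:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: my-app                      # placeholder name
+spec:
+  syncPolicy:
+    syncOptions:
+      - PrunePropagationPolicy=background   # one of: foreground | background | orphan
+```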
+
+
+**Replace**: When selected, Argo CD executes `kubectl replace` or `kubectl create`, instead of the default `kubectl apply` to enforce the changes in Git. This action will potentially recreate resources and should be used with care. See [Replace Resource Instead Of Applying Change](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#replace-resource-instead-of-applying-changes){:target="_blank"}.
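+
+For reference, the equivalent per-resource setting is the `Replace=true` sync-option annotation; a minimal sketch on a hypothetical ConfigMap:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: my-config                                    # placeholder name
+  annotations:
+    argocd.argoproj.io/sync-options: Replace=true    # use kubectl replace/create instead of apply for this resource
+data:
+  key: value
+```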
+
+
+**Retry**: When selected, retries a failed sync operation, based on the retry settings configured:
+* Maximum number of sync retries (**Limit**)
+* Duration of each retry attempt in seconds, minutes, or hours (**Duration**)
+* Maximum duration permitted for each retry (**Max Duration**)
+* Factor by which to multiply the Duration in the event of a failed retry (**Factor**). A factor of 2, for example, attempts the second retry after 2 x 2 seconds, where 2 seconds is the Duration.
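+
+For reference, these settings correspond to the `retry` block of the Application sync policy; a minimal sketch, with other required Application fields omitted for brevity:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: my-app              # placeholder name
+spec:
+  syncPolicy:
+    retry:
+      limit: 5              # Limit: maximum number of sync retries
+      backoff:
+        duration: 5s        # Duration: base duration of each retry attempt
+        factor: 2           # Factor: multiplier applied to the duration after each failed retry
+        maxDuration: 3m     # Max Duration: maximum duration permitted for each retry
+```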
\ No newline at end of file
diff --git a/_docs/runtime/installation.md b/_docs/runtime/installation.md
index bc151e28..51587766 100644
--- a/_docs/runtime/installation.md
+++ b/_docs/runtime/installation.md
@@ -12,7 +12,7 @@ If you have a hybrid environment, you can provision one or more hybrid runtimes
There are two parts to installing a hybrid runtime:
1. Installing the Codefresh CLI
-2. Installing the hybrid runtime from the CLI, either through the CLI wizard or via silent installation.
+2. Installing the hybrid runtime from the CLI, either through the CLI wizard, or via silent installation using the installation flags.
The hybrid runtime is installed in a specific namespace on your cluster. You can install more runtimes on different clusters in your deployment.
Every hybrid runtime installation makes commits to two Git repos:
@@ -21,27 +21,22 @@ There are two parts to installing a hybrid runtime:
See also [Codefresh architecture]({{site.baseurl}}/docs/getting-started/architecture).
-### Installing the Codefresh CLI
+{::nomarkdown}
+
+{:/}
-Install the Codefresh CLI using the option that best suits you: `curl`, `brew`, or standard download.
-If you are not sure which OS to select for `curl`, simply select one, and Codefresh automatically identifies and selects the right OS for CLI installation.
-
-### Installing the hybrid runtime
-
-1. Do one of the following:
- * If this is your first hybrid runtime installation, in the Welcome page, select **+ Install Runtime**.
- * If you have provisioned a hybrid runtime, to provision additional runtimes, in the Codefresh UI, go to [**Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}, and select **+ Add Runtimes**.
-1. Run:
- * CLI wizard: Run `cf runtime install`, and follow the prompts to enter the required values.
- * Silent install: Pass the required flags in the install command:
- `cf runtime install --repo --git-token --silent`
- For the list of flags, see _Hybrid runtime flags_.
+### Hybrid runtime installation flags
+This section describes the required and optional flags to install a hybrid runtime.
+For documentation purposes, the flags are grouped into:
+* Runtime flags, relating to runtime, cluster, and namespace requirements
+* Ingress controller flags, relating to ingress controller requirements
+* Git repository flags, relating to Git provider requirements
-> Note:
-> Hybrid runtime installation starts by checking network connectivity and the K8s cluster server version.
- To skip these tests, pass the `--skip-cluster-checks` flag.
+{::nomarkdown}
+
+{:/}
-#### Hybrid runtime flags
+#### Runtime flags
**Runtime name**
Required.
@@ -51,7 +46,7 @@ The runtime name must start with a lower-case character, and can include up to 6
**Namespace resource labels**
Optional.
-The label of the namespace resource to which you are installing the hybrid runtime. You can add more than one label. Labels are required to identity the networks that need access during installation, as is the case when using services meshes such as Istio for example.
+The label of the namespace resource to which you are installing the hybrid runtime. Labels are required to identify the networks that need access during installation, as is the case when using service meshes such as Istio, for example.
* CLI wizard and Silent install: Add the `--namespace-labels` flag, and define the labels in `key=value` format. Separate multiple labels with `commas`.
@@ -62,9 +57,23 @@ The cluster defined as the default for `kubectl`. If you have more than one Kube
* CLI wizard: Select the Kube context from the list displayed.
* Silent install: Explicitly specify the Kube context with the `--context` flag.
+**Shared configuration repository**
+The Git repository per runtime account with shared configuration manifests.
+* CLI wizard and Silent install: Add the `--shared-config-repo` flag and define the path to the shared repo.
+
+{::nomarkdown}
+
+{:/}
+
+#### Ingress controller flags
+
+**Skip ingress**
+Required, if you are using an unsupported ingress controller.
+For unsupported ingress controllers, bypass installing ingress resources with the `--skip-ingress` flag.
+In this case, after completing the installation, manually configure the cluster's routing service, and create and register Git integrations. See the last step in [Install the hybrid runtime](#install-the-hybrid-runtime).
+
**Ingress class**
Required.
-If you have more than one ingress class configured on your cluster:
* CLI wizard: Select the ingress class for runtime installation from the list displayed.
* Silent install: Explicitly specify the ingress class through the `--ingress-class` flag. Otherwise, runtime installation fails.
@@ -77,10 +86,11 @@ The IP address or host name of the ingress controller component.
* Silent install: Add the `--ingress-host` flag. If a value is not provided, takes the host from the ingress controller associated with the **Ingress class**.
> Important: For AWS ALB, the ingress host is created post-installation. However, when prompted, add the domain name you will create in `Route 53` as the ingress host.
-SSL certificates for the ingress host:
-If the ingress host does not have a valid SSL certificate, you can continue with the installation in insecure mode, which disables certificate validation.
+**Insecure ingress hosts**
+TLS certificates for the ingress host:
+If the ingress host does not have a valid TLS certificate, you can continue with the installation in insecure mode, which disables certificate validation.
-* CLI wizard: Automatically detects and prompts you to confirm continuing with the installation in insecure mode.
+* CLI wizard: Automatically detects and prompts you to confirm continuing the installation in insecure mode.
* Silent install: To continue with the installation in insecure mode, add the `--insecure-ingress-host` flag.
**Internal ingress host**
@@ -90,19 +100,14 @@ For both CLI wizard and Silent install:
* For new runtime installations, add the `--internal-ingress-host` flag pointing to the ingress host for `app-proxy`.
* For existing installations, commit changes to the installation repository by modifying the `app-proxy ingress` and `.yaml`
- See _Internal ingress host configuration (optional for existing runtimes only)_ in [Post-installation configuration](#post-installation-configuration).
+ See [(Optional) Internal ingress host configuration for existing hybrid runtimes](#optional-internal-ingress-host-configuration-for-existing-hybrid-runtimes).
-**Ingress resources**
-Optional.
-If you have a different routing service (not NGINX), bypass installing ingress resources with the `--skip-ingress` flag.
-In this case, after completing the installation, manually configure the cluster's routing service, and create and register Git integrations. See _Cluster routing service_ in [Post-installation configuration](#post-installation-configuration).
-**Shared configuration repository**
-The Git repository per runtime account with shared configuration manifests.
-* CLI wizard and Silent install: Add the `--shared-config-repo` flag and define the path to the shared repo.
+{::nomarkdown}
+
+{:/}
-**Insecure flag**
-For _on-premises installations_, if the Ingress controller does not have a valid SSL certificate, to continue with the installation, add the `--insecure` flag to the installation command.
+#### Git repository flags
**Repository URLs**
The GitHub repository to house the installation definitions.
@@ -115,18 +120,121 @@ Required.
The Git token authenticating access to the GitHub installation repository.
* Silent install: Add the `--git-token` flag.
+
+
+
+{::nomarkdown}
+
+{:/}
+
+#### Codefresh resource flags
**Codefresh demo resources**
Optional.
Install demo pipelines to use as a starting point to create your own pipelines. We recommend installing the demo resources as these are used in our quick start tutorials.
* Silent install: Add the `--demo-resources` flag. By default, set to `true`.
+**Insecure flag**
+For _on-premises installations_, if the ingress controller does not have a valid SSL certificate, add the `--insecure` flag to the installation command to continue with the installation.
+
+{::nomarkdown}
+
+{:/}
+{::nomarkdown}
+
+{:/}
+
+### Install the Codefresh CLI
+
+Install the Codefresh CLI using the option that best suits you: `curl`, `brew`, or standard download.
+If you are not sure which OS to select for `curl`, simply select one, and Codefresh automatically identifies and selects the right OS for CLI installation.
+
+{::nomarkdown}
+
+{:/}
+
+### Install the hybrid runtime
+
+**Before you begin**
+* Make sure you meet the [minimum requirements]({{site.baseurl}}/docs/runtime/requirements/#minimum-requirements) for runtime installation
+* [Download or upgrade to the latest version of the CLI]({{site.baseurl}}/docs/clients/csdp-cli/#upgrade-codefresh-cli)
+* Review [Hybrid runtime installation flags](#hybrid-runtime-installation-flags)
+* Make sure your ingress controller is configured correctly:
+  * [Ambassador ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#ambassador-ingress-configuration)
+  * [AWS ALB ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#aws-alb-ingress-configuration)
+ * [Istio ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#istio-ingress-configuration)
+ * [NGINX Enterprise ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#nginx-enterprise-ingress-configuration)
+ * [NGINX Community ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#nginx-community-version-ingress-configuration)
+ * [Traefik ingress configuration]({{site.baseurl}}/docs/runtime/requirements/#traefik-ingress-configuration)
+
+
+{::nomarkdown}
+
+{:/}
+
+**How to**
+
+1. Do one of the following:
+ * If this is your first hybrid runtime installation, in the Welcome page, select **+ Install Runtime**.
+ * If you have provisioned a hybrid runtime, to provision additional runtimes, in the Codefresh UI, go to [**Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}.
+1. Click **+ Add Runtimes**, and then select **Hybrid Runtimes**.
+1. Do one of the following:
+ * CLI wizard: Run `cf runtime install`, and follow the prompts to enter the required values.
+ * Silent install: Pass the required flags in the install command:
+ `cf runtime install --repo --git-token --silent`
+ For the list of flags, see [Hybrid runtime installation flags](#hybrid-runtime-installation-flags).
+1. If relevant, complete the configuration for these ingress controllers:
+  * [ALB AWS: Alias DNS record in route53 to load balancer]({{site.baseurl}}/docs/runtime/requirements/#create-an-alias-to-load-balancer-in-route53)
+ * [Istio: Configure cluster routing service]({{site.baseurl}}/docs/runtime/requirements/#cluster-routing-service)
+ * [NGINX Enterprise ingress controller: Patch certificate secret]({{site.baseurl}}/docs/runtime/requirements/#patch-certificate-secret)
+1. If you bypassed installing ingress resources with the `--skip-ingress` flag for ingress controllers not in the supported list, create and register Git integrations using these commands:
+ `cf integration git add default --runtime --api-url `
+ `cf integration git register default --runtime --token `
+
+
+{::nomarkdown}
+
+{:/}
+
### Hybrid runtime components
**Git repositories**
-* Runtime install repo: The installation repo contains three folders: apps, bootstrap and projects, to manage the runtime itself with Argo CD.
-* Git source repository: Created with the name `[repo_name]_git-source`. This repo stores manifests for pipelines with sources, events, workflow templates.
+* Runtime install repository: The installation repo contains three folders: apps, bootstrap and projects, to manage the runtime itself with Argo CD.
+* Git source repository: Created with the name `[repo_name]_git-source`. This repo stores manifests for pipelines, including sources, events, and workflow templates. See [Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/).
+
+* Shared configuration repository: Stores configuration and resource manifests that can be shared across runtimes, such as integration resources. See [Shared configuration repository]({{site.baseurl}}/docs/reference/shared-configuration/)
**Argo CD components**
@@ -145,126 +253,11 @@ Install demo pipelines to use as a starting point to create your own pipelines.
Once the hybrid runtime is successfully installed, it is provisioned on the Kubernetes cluster, and displayed in the **Runtimes** page.
-### Hybrid runtime post-installation configuration
-
-After provisioning a hybrid runtime, configure additional settings for the following:
-
-* NGINX Enterprise installations (with and without NGINX Ingress Operator)
-* AWS ALB installations
-* Cluster routing service if you bypassed installing ingress resources
-* (Existing hybrid runtimes) Internal and external ingress host specifications
-* Register Git integrations
-
-#### NGINX Enterprise post-install configuration
-
-You must patch the certificate secret in `spec.tls` of the `ingress-master` resource.
-
-Configure the `ingress-master` with the certificate secret. The secret must be in the same namespace as the runtime.
-
-1. Go to the runtime namespace with the NGINX ingress controller.
-1. In `ingress-master`, add to `spec.tls`:
-
- ```yaml
- tls:
- - hosts:
- -
- secretName:
- ```
-
-#### AWS ALB post-install configuration
+{::nomarkdown}
+
+{:/}
-For AWS ALB installations, do the following:
-
-* Create an `Alias` record in Amazon Route 53
-* Manually register Git integrations - see _Git integration registration_.
-
-Create an `Alias` record in Amazon Route 53, and map your zone apex (example.com) DNS name to your Amazon CloudFront distribution.
-For more information, see [Creating records by using the Amazon Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html){:target="\_blank"}.
-
-{% include image.html
- lightbox="true"
- file="/images/runtime/post-install-alb-ingress.png"
- url="/images/runtime/post-install-alb-ingress.png"
- alt="Route 53 record settings for AWS ALB"
- caption="Route 53 record settings for AWS ALB"
- max-width="30%"
-%}
-
-#### Configure cluster routing service
-
-If you bypassed installing ingress resources with the `--skip-ingress` flag, configure the `host` for the Ingress, or the VirtualService for Istio if used, to route traffic to the `app-proxy` and `webhook` services, as in the examples below.
-
-**Ingress resource example for `app-proxy`:**
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: codefresh-cap-app-proxy
- namespace: codefresh
-spec:
- ingressClassName: alb
- rules:
- - host: my.support.cf-cd.com # replace with your host name
- http:
- paths:
- - backend:
- service:
- name: cap-app-proxy
- port:
- number: 3017
- path: /app-proxy/
- pathType: Prefix
-```
-
-**`VirtualService` examples for `app-proxy` and `webhook`:**
-
-```yaml
-apiVersion: networking.istio.io/v1alpha3
-kind: VirtualService
-metadata:
- namespace: test-runtime3 # replace with your runtime name
- name: cap-app-proxy
-spec:
- hosts:
- - my.support.cf-cd.com # replace with your host name
- gateways:
- - my-gateway
- http:
- - match:
- - uri:
- prefix: /app-proxy
- route:
- - destination:
- host: cap-app-proxy
- port:
- number: 3017
-```
-
-```yaml
-apiVersion: networking.istio.io/v1alpha3
-kind: VirtualService
-metadata:
- namespace: test-runtime3 # replace with your runtime name
- name: csdp-default-git-source
-spec:
- hosts:
- - my.support.cf-cd.com # replace with your host name
- gateways:
- - my-gateway
- http:
- - match:
- - uri:
- prefix: /webhooks/test-runtime3/push-github # replace `test-runtime3` with your runtime name
- route:
- - destination:
- host: push-github-eventsource-svc
- port:
- number: 80
-```
-Continue with [Git integration registration](#git-integration-registration) in this article.
-
-#### Internal ingress host configuration (optional for existing hybrid runtimes only)
+### (Optional) Internal ingress host configuration for existing hybrid runtimes
If you already have provisioned hybrid runtimes, to use an internal ingress host for app-proxy communication and an external ingress host to handle webhooks, change the specs for the `Ingress` and `Runtime` resources in the runtime installation repository. Use the examples as guidelines.
@@ -337,16 +330,9 @@ data:
version: 99.99.99
```
-#### Git integration registration
-
-If you bypassed installing ingress resources with the `--skip-ingress` flag, or if AWS ALB is your ingress controller, create and register Git integrations using these commands:
- `cf integration git add default --runtime --api-url `
-
- `cf integration git register default --runtime --token `
### Related articles
[Add external clusters to runtimes]({{site.baseurl}}/docs/runtime/managed-cluster/)
-[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
[Manage provisioned runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
-[(Hybrid) Monitor provisioned runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
-[Troubleshoot runtime installation]({{site.baseurl}}/docs/troubleshooting/runtime-issues/)
+[Monitor provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
+[Troubleshoot hybrid runtime installation]({{site.baseurl}}/docs/troubleshooting/runtime-issues/)
diff --git a/_docs/runtime/installation_original.md b/_docs/runtime/installation_original.md
new file mode 100644
index 00000000..a9624bc7
--- /dev/null
+++ b/_docs/runtime/installation_original.md
@@ -0,0 +1,338 @@
+---
+title: "Install hybrid runtimes"
+description: ""
+group: runtime
+toc: true
+---
+
+If you have a hybrid environment, you can provision one or more hybrid runtimes in your Codefresh account. The hybrid runtime comprises Argo CD components and Codefresh-specific components. The Argo CD components are derived from a fork of the Argo ecosystem, and do not correspond to the open-source versions available.
+
+> If you have Hosted GitOps, to provision a hosted runtime, see [Provision a hosted runtime]({{site.baseurl}}/docs/runtime/hosted-runtime/#1-provision-hosted-runtime) in [Set up a hosted (Hosted GitOps) environment]({{site.baseurl}}/docs/runtime/hosted-runtime/).
+
+There are two parts to installing a hybrid runtime:
+
+1. Installing the Codefresh CLI
+2. Installing the hybrid runtime from the CLI, either through the CLI wizard or via silent installation.
+ The hybrid runtime is installed in a specific namespace on your cluster. You can install more runtimes on different clusters in your deployment.
+ Every hybrid runtime installation makes commits to two Git repos:
+
+ * Runtime install repo: The installation repo that manages the hybrid runtime itself with Argo CD. If the repo URL does not exist, runtime creates it automatically.
+ * Git Source repo: Created automatically during runtime installation. The repo where you store manifests to run CodefreshCodefresh pipelines.
+
+See also [Codefresh architecture]({{site.baseurl}}/docs/getting-started/architecture).
+
+### Installing the Codefresh CLI
+
+Install the Codefresh CLI using the option that best suits you: `curl`, `brew`, or standard download.
+If you are not sure which OS to select for `curl`, simply select one, and Codefresh automatically identifies and selects the right OS for CLI installation.
+
+### Installing the hybrid runtime
+
+1. Do one of the following:
+ * If this is your first hybrid runtime installation, in the Welcome page, select **+ Install Runtime**.
+ * If you have provisioned a hybrid runtime, to provision additional runtimes, in the Codefresh UI, go to [**Runtimes**](https://g.codefresh.io/2.0/account-settings/runtimes){:target="\_blank"}, and select **+ Add Runtimes**.
+1. Run:
+ * CLI wizard: Run `cf runtime install`, and follow the prompts to enter the required values.
+ * Silent install: Pass the required flags in the install command:
+ `cf runtime install --repo --git-token --silent`
+ For the list of flags, see _Hybrid runtime flags_.
+
+> Note:
+> Hybrid runtime installation starts by checking network connectivity and the K8s cluster server version.
+ To skip these tests, pass the `--skip-cluster-checks` flag.
+
+#### Hybrid runtime flags
+
+**Runtime name**
+Required.
+The runtime name must start with a lower-case character, and can include up to 62 lower-case characters and numbers.
+* CLI wizard: Add when prompted.
+* Silent install: Required.
+
+**Namespace resource labels**
+Optional.
+The label of the namespace resource to which you are installing the hybrid runtime. You can add more than one label. Labels are required to identity the networks that need access during installation, as is the case when using services meshes such as Istio for example.
+
+* CLI wizard and Silent install: Add the `--namespace-labels` flag, and define the labels in `key=value` format. Separate multiple labels with `commas`.
+
+**Kube context**
+Required.
+The cluster defined as the default for `kubectl`. If you have more than one Kube context, the current context is selected by default.
+
+* CLI wizard: Select the Kube context from the list displayed.
+* Silent install: Explicitly specify the Kube context with the `--context` flag.
+
+**Ingress class**
+Required.
+If you have more than one ingress class configured on your cluster:
+
+* CLI wizard: Select the ingress class for runtime installation from the list displayed.
+* Silent install: Explicitly specify the ingress class through the `--ingress-class` flag. Otherwise, runtime installation fails.
+
+**Ingress host**
+Required.
+The IP address or host name of the ingress controller component.
+
+* CLI wizard: Automatically selects and displays the host, either from the cluster or the ingress controller associated with the **Ingress class**.
+* Silent install: Add the `--ingress-host` flag. If a value is not provided, takes the host from the ingress controller associated with the **Ingress class**.
+ > Important: For AWS ALB, the ingress host is created post-installation. However, when prompted, add the domain name you will create in `Route 53` as the ingress host.
+
+SSL certificates for the ingress host:
+If the ingress host does not have a valid SSL certificate, you can continue with the installation in insecure mode, which disables certificate validation.
+
+* CLI wizard: Automatically detects and prompts you to confirm continuing with the installation in insecure mode.
+* Silent install: To continue with the installation in insecure mode, add the `--insecure-ingress-host` flag.
+
+**Internal ingress host**
+Optional.
+Enforce separation between internal (app-proxy) and external (webhook) communication by adding an internal ingress host for the app-proxy service in the internal network.
+For both CLI wizard and Silent install:
+
+* For new runtime installations, add the `--internal-ingress-host` flag pointing to the ingress host for `app-proxy`.
+* For existing installations, commit changes to the installation repository by modifying the `app-proxy ingress` and `.yaml`
+ See _Internal ingress host configuration (optional for existing runtimes only)_ in [Post-installation configuration](#post-installation-configuration).
+
+**Ingress resources**
+Optional.
+If you have a different routing service (not NGINX), bypass installing ingress resources with the `--skip-ingress` flag.
+In this case, after completing the installation, manually configure the cluster's routing service, and create and register Git integrations. See _Cluster routing service_ in [Post-installation configuration](#post-installation-configuration).
+
+**Shared configuration repository**
+The Git repository per runtime account with shared configuration manifests.
+* CLI wizard and Silent install: Add the `--shared-config-repo` flag and define the path to the shared repo.
+
+**Insecure flag**
+For _on-premises installations_, if the Ingress controller does not have a valid SSL certificate, to continue with the installation, add the `--insecure` flag to the installation command.
+
+**Repository URLs**
+The GitHub repository to house the installation definitions.
+
+* CLI wizard: If the repo doesn't exist, Codefresh creates it during runtime installation.
+* Silent install: Required. Add the `--repo` flag.
+
+**Git runtime token**
+Required.
+The Git token authenticating access to the GitHub installation repository.
+* Silent install: Add the `--git-token` flag.
+
+**Codefresh demo resources**
+Optional.
+Install demo pipelines to use as a starting point to create your own pipelines. We recommend installing the demo resources as these are used in our quick start tutorials.
+
+* Silent install: Add the `--demo-resources` flag. By default, set to `true`.
+
+### Hybrid runtime components
+
+**Git repositories**
+
+* Runtime install repo: The installation repo contains three folders: apps, bootstrap and projects, to manage the runtime itself with Argo CD.
+* Git source repository: Created with the name `[repo_name]_git-source`. This repo stores manifests for pipelines with sources, events, workflow templates.
+
+**Argo CD components**
+
+* Project, comprising an Argo CD AppProject and an ApplicationSet
+* Installations of the following applications in the project:
+ * Argo CD
+ * Argo Workflows
+ * Argo Events
+ * Argo Rollouts
+
+**Codefresh-specific components**
+
+* Codefresh Applications in the Argo CD AppProject:
+ * App-proxy facilitating behind-firewall access to Git
+ * Git Source entity that references the`[repo_name]_git-source`
+
+Once the hybrid runtime is successfully installed, it is provisioned on the Kubernetes cluster, and displayed in the **Runtimes** page.
+
+### Hybrid runtime post-installation configuration
+
+After provisioning a hybrid runtime, configure additional settings for the following:
+
+* NGINX Enterprise installations (with and without NGINX Ingress Operator)
+* AWS ALB installations
+* Cluster routing service if you bypassed installing ingress resources
+* (Existing hybrid runtimes) Internal and external ingress host specifications
+* Register Git integrations
+
+
+
+#### AWS ALB post-install configuration
+
+For AWS ALB installations, do the following:
+
+* Create an `Alias` record in Amazon Route 53
+* Manually register Git integrations - see _Git integration registration_.
+
+Create an `Alias` record in Amazon Route 53, and map your zone apex (example.com) DNS name to your Amazon CloudFront distribution.
+For more information, see [Creating records by using the Amazon Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html){:target="\_blank"}.
+
+{% include image.html
+ lightbox="true"
+ file="/images/runtime/post-install-alb-ingress.png"
+ url="/images/runtime/post-install-alb-ingress.png"
+ alt="Route 53 record settings for AWS ALB"
+ caption="Route 53 record settings for AWS ALB"
+ max-width="30%"
+%}
+
+#### Configure cluster routing service
+
+If you bypassed installing ingress resources with the `--skip-ingress` flag, configure the `host` for the Ingress, or the VirtualService for Istio if used, to route traffic to the `app-proxy` and `webhook` services, as in the examples below.
+
+**Ingress resource example for `app-proxy`:**
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: codefresh-cap-app-proxy
+ namespace: codefresh
+spec:
+ ingressClassName: alb
+ rules:
+ - host: my.support.cf-cd.com # replace with your host name
+ http:
+ paths:
+ - backend:
+ service:
+ name: cap-app-proxy
+ port:
+ number: 3017
+ path: /app-proxy/
+ pathType: Prefix
+```
+
+**`VirtualService` examples for `app-proxy` and `webhook`:**
+
+```yaml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ namespace: test-runtime3 # replace with your runtime name
+ name: cap-app-proxy
+spec:
+ hosts:
+ - my.support.cf-cd.com # replace with your host name
+ gateways:
+ - my-gateway
+ http:
+ - match:
+ - uri:
+ prefix: /app-proxy
+ route:
+ - destination:
+ host: cap-app-proxy
+ port:
+ number: 3017
+```
+
+```yaml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ namespace: test-runtime3 # replace with your runtime name
+ name: csdp-default-git-source
+spec:
+ hosts:
+ - my.support.cf-cd.com # replace with your host name
+ gateways:
+ - my-gateway
+ http:
+ - match:
+ - uri:
+ prefix: /webhooks/test-runtime3/push-github # replace `test-runtime3` with your runtime name
+ route:
+ - destination:
+ host: push-github-eventsource-svc
+ port:
+ number: 80
+```
+Continue with [Git integration registration](#git-integration-registration) in this article.
+
+#### Internal ingress host configuration (optional for existing hybrid runtimes only)
+
+If you already have provisioned hybrid runtimes, to use an internal ingress host for app-proxy communication and an external ingress host to handle webhooks, change the specs for the `Ingress` and `Runtime` resources in the runtime installation repository. Use the examples as guidelines.
+
+`/apps/app-proxy/overlays//ingress.yaml`: change `host`
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: codefresh-cap-app-proxy
+ namespace: codefresh #replace with your runtime name
+spec:
+ ingressClassName: nginx
+ rules:
+ - host: my-internal-ingress-host # replace with the internal ingress host for app-proxy
+ http:
+ paths:
+ - backend:
+ service:
+ name: cap-app-proxy
+ port:
+ number: 3017
+ path: /app-proxy/
+ pathType: Prefix
+```
+
+`..//bootstrap/.yaml`: add `internalIngressHost`
+
+```yaml
+apiVersion: v1
+data:
+ base-url: https://g.codefresh.io
+ runtime: |
+ apiVersion: codefresh.io/v1alpha1
+ kind: Runtime
+ metadata:
+ creationTimestamp: null
+ name: codefresh #replace with your runtime name
+ namespace: codefresh #replace with your runtime name
+ spec:
+ bootstrapSpecifier: github.com/codefresh-io/cli-v2/manifests/argo-cd
+ cluster: https://7DD8390300DCEFDAF87DC5C587EC388C.gr7.us-east-1.eks.amazonaws.com
+ components:
+ - isInternal: false
+ name: events
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/argo-events
+ wait: true
+ - isInternal: false
+ name: rollouts
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/argo-rollouts
+ wait: false
+ - isInternal: false
+ name: workflows
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/argo-workflows
+ wait: false
+ - isInternal: false
+ name: app-proxy
+ type: kustomize
+ url: github.com/codefresh-io/cli-v2/manifests/app-proxy
+ wait: false
+ defVersion: 1.0.1
+ ingressClassName: nginx
+ ingressController: k8s.io/ingress-nginx
+ ingressHost: https://support.cf.com/
+ internalIngressHost: https://my-internal-ingress-host # add this line and replace my-internal-ingress-host with your internal ingress host
+ repo: https://github.com/NimRegev/my-codefresh.git
+ version: 99.99.99
+```
+
+#### Git integration registration
+
+If you bypassed installing ingress resources with the `--skip-ingress` flag, or if AWS ALB is your ingress controller, create and register Git integrations using these commands:
+ `cf integration git add default --runtime --api-url `
+
+ `cf integration git register default --runtime --token `
+
+### Related articles
+[Add external clusters to runtimes]({{site.baseurl}}/docs/runtime/managed-cluster/)
+[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
+[Manage provisioned runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
+[Monitor provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
+[Troubleshoot runtime installation]({{site.baseurl}}/docs/troubleshooting/runtime-issues/)
diff --git a/_docs/runtime/managed-cluster.md b/_docs/runtime/managed-cluster.md
index b591ab32..db3dd330 100644
--- a/_docs/runtime/managed-cluster.md
+++ b/_docs/runtime/managed-cluster.md
@@ -280,5 +280,5 @@ Remove a cluster from the list managed by the runtime, through the CLI.
### Related articles
[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
-[Manage provisioned runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
+[Manage provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitor-manage-runtimes/)
[(Hybrid) Monitor provisioned runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
\ No newline at end of file
diff --git a/_docs/runtime/monitor-manage-runtimes.md b/_docs/runtime/monitor-manage-runtimes.md
index 1a87d6d1..5c2cebe6 100644
--- a/_docs/runtime/monitor-manage-runtimes.md
+++ b/_docs/runtime/monitor-manage-runtimes.md
@@ -13,7 +13,7 @@ The **Runtimes** page displays the provisioned runtimes in your account, both hy
> Unless specified otherwise, management options are common to both hybrid and hosted runtimes.
-To monitor provisioned runtimes, including recovering runtimes for failed clusters, see [Monitor provisioned runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/).
+To monitor provisioned hybrid runtimes, including recovering runtimes for failed clusters, see [Monitor provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/).
### Runtime views
@@ -228,7 +228,7 @@ Pass the mandatory flags in the uninstall command:
### Related articles
-[(Hybrid) Monitor provisioned runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
+[Monitor provisioned hybrid runtimes]({{site.baseurl}}/docs/runtime/monitoring-troubleshooting/)
[Add Git Sources to runtimes]({{site.baseurl}}/docs/runtime/git-sources/)
[Add external clusters to runtimes]({{site.baseurl}}/docs/runtime/managed-cluster/)
diff --git a/_docs/runtime/monitoring-troubleshooting.md b/_docs/runtime/monitoring-troubleshooting.md
index af23d126..c225c1b4 100644
--- a/_docs/runtime/monitoring-troubleshooting.md
+++ b/_docs/runtime/monitoring-troubleshooting.md
@@ -1,5 +1,5 @@
---
-title: "Monitor provisioned runtimes"
+title: "(Hybrid) Monitor provisioned runtimes"
description: ""
group: runtime
toc: true
diff --git a/_docs/runtime/requirements.md b/_docs/runtime/requirements.md
index 01883380..6546d253 100644
--- a/_docs/runtime/requirements.md
+++ b/_docs/runtime/requirements.md
@@ -12,52 +12,117 @@ The requirements listed are the **_minimum_** requirements to provision **_hybri
>In the documentation, Kubernetes and K8s are used interchangeably.
+{::nomarkdown}
+
+{:/}
-### Kubernetes cluster requirements
-This section lists cluster requirements.
+### Minimum requirements
-#### Cluster version
-Kubernetes cluster, server version 1.18 and higher, without Argo Project components.
-> Tip:
-> To check the server version, run `kubectl version --short`.
+{: .table .table-bordered .table-hover}
+| Item | Requirement |
+| -------------- | -------------- |
+|Kubernetes cluster | Server version 1.18 and higher, without Argo Project components. {::nomarkdown}
Tip: To check the server version, run:
kubectl version --short.{:/}|
+| Ingress controller| Configured on Kubernetes cluster and exposed from the cluster. {::nomarkdown}
Supported and tested ingress controllers include: - Ambassador
{:/}(see [Ambassador ingress configuration](#ambassador-ingress-configuration)){::nomarkdown}- AWS ALB (Application Load Balancer)
{:/} (see [AWS ALB ingress configuration](#aws-alb-ingress-configuration)){::nomarkdown}- Istio
{:/} (see [Istio ingress configuration](#istio-ingress-configuration)){::nomarkdown}- NGINX Enterprise (nginx.org/ingress-controller)
{:/} (see [NGINX Enterprise ingress configuration](#nginx-enterprise-ingress-configuration)){::nomarkdown}- NGINX Community (k8s.io/ingress-nginx)
{:/} (see [NGINX Community ingress configuration](#nginx-community-version-ingress-configuration)){::nomarkdown}- Traefik
{:/}(see [Traefik ingress configuration](#traefik-ingress-configuration))|
+|Node requirements| {::nomarkdown}<ul><li>Memory: 5000 MB</li><li>CPU: 2</li></ul>{:/}|
+|Cluster permissions | Cluster admin permissions |
+|Git providers |{::nomarkdown}{:/}|
+|Git access tokens | {::nomarkdown}Runtime Git token:- Valid expiration date
- Scopes: repo and admin-repo.hook
Personal access Git token:- Valid expiration date
- Scopes: repo
{:/}|
+
+
+{::nomarkdown}
+
+{:/}
+
+### Ambassador ingress configuration
+For detailed configuration information, see the [Ambassador ingress controller documentation](https://www.getambassador.io/docs/edge-stack/latest/topics/running/ingress-controller){:target="\_blank"}.
+
+This section lists the specific configuration requirements for Codefresh to be completed _before_ installing the hybrid runtime.
+* Valid external IP address
+* Valid TLS certificate
+* TCP support
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+ {::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+### AWS ALB ingress configuration
-#### Ingress controller
-Configure your Kubernetes cluster with an ingress controller component that is exposed from the cluster.
+For detailed configuration information, see the [ALB AWS ingress controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4){:target="\_blank"}.
-**Supported ingress controllers**
+This table lists the specific configuration requirements for Codefresh.
- {: .table .table-bordered .table-hover}
-| Supported Ingress Controller | Reference|
-| -------------- | -------------- |
-| Ambassador | [Ambassador ingress controller documentation](https://www.getambassador.io/docs/edge-stack/latest/topics/running/ingress-controller/){:target="\_blank"} |
-| ALB (AWS Application Load Balancer) | [AWS ALB ingress controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/){:target="\_blank"} |
-| NGINX Enterprise (`nginx.org/ingress-controller`) | [NGINX Ingress Controller documentation](https://docs.nginx.com/nginx-ingress-controller/){:target="\_blank"} |
-| NGINX Community (`k8s.io/ingress-nginx`) | [Provider-specific configuration](#nginx-community-version-provider-specific-ingress-configuration) in this article|
-| Istio | [Istio Kubernetes ingress documentation](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/){:target="\_blank"} |
-| Traefik |[Traefik Kubernetes ingress documentation](https://doc.traefik.io/traefik/providers/kubernetes-ingress/){:target="\_blank"}|
+{: .table .table-bordered .table-hover}
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Valid external IP address | _Before_ installing hybrid runtime |
+|Valid TLS certificate | |
+|TCP support| |
+|Controller configuration | |
+|Alias DNS record in route53 to load balancer | _After_ installing hybrid runtime |
+|(Optional) Git integration registration | |
+
+{::nomarkdown}
+
+{:/}
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
-**Ingress controller requirements**
+{::nomarkdown}
+
+{:/}
-* Valid external IP address
- Run `kubectl get svc -A` to get a list of services and verify that the EXTERNAL-IP column for your ingress controller shows a valid hostname.
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
-* Valid SSL certificate
- For secure runtime installation, the ingress controller must have a valid SSL certificate from an authorized CA (Certificate Authority).
+{::nomarkdown}
+
+{:/}
-* TCP support
- Make sure your ingress controller is configured to handle TCP requests. For exact configuraton requirements, refer to the offiical documentation of the ingress controller you are using.
-
- Here's an example of TCP configuration for NGINX on AWS.
- Verify that the ingress-nginx-controller service manifest has either of the following annotations:
+#### TCP support
+Configure the ingress controller to handle TCP requests.
- `service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"`
- OR
- `service.beta.kubernetes.io/aws-load-balancer-type: nlb`
+{::nomarkdown}
+
+{:/}
-* AWS ALB
- In the ingress resource file, verify that `spec.controller` is configured as `ingress.k8s.aws/alb`.
+#### Controller configuration
+In the ingress resource file, verify that `spec.controller` is configured as `ingress.k8s.aws/alb`.
```yaml
apiVersion: networking.k8s.io/v1
@@ -68,28 +133,210 @@ spec:
controller: ingress.k8s.aws/alb
```
-* Report status
- The ingress controller must be configured to report its status. Otherwise, Argo's health check reports the health status as "progressing" resulting in a timeout error during installation.
-
- By default, NGINX Enterprise and Traefik ingress are not configured to report status. For details on configuration settings, see the following sections in this article:
- [NGINX Enterprise ingress configuration](#nginx-enterprise-version-ingress-configuration)
- [Traefik ingress configuration](#traefik-ingress-configuration)
+{::nomarkdown}
+
+{:/}
+#### Create an alias to load balancer in route53
-#### NGINX Enterprise version ingress configuration
-The Enterprise version of NGINX (`nginx.org/ingress-controller`), both with and without the Ingress Operator, must be configured to report the status of the ingress controller.
+> The alias must be configured _after_ installing the hybrid runtime.
-**Installation with NGINX Ingress**
-* Pass the `- -report-ingress-status` to `deployment`.
+1. Make sure a DNS record is available in the correct hosted zone.
+1. _After_ hybrid runtime installation, in Amazon Route 53, create an alias to route traffic to the load balancer that is automatically created during the installation:
+ * **Record name**: Enter the same record name used in the installation.
+ * Toggle **Alias** to **ON**.
+ * From the **Route traffic to** list, select **Alias to Application and Classic Load Balancer**.
+ * From the list of Regions, select the region. For example, **US East**.
+ * From the list of load balancers, select the load balancer that was created during installation.
- ```yaml
- spec:
- containers:
- - args:
- - -report-ingress-status
- ```
+For more information, see [Creating records by using the Amazon Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html){:target="\_blank"}.
+
+{% include image.html
+ lightbox="true"
+ file="/images/runtime/post-install-alb-ingress.png"
+ url="/images/runtime/post-install-alb-ingress.png"
+ alt="Route 53 record settings for AWS ALB"
+ caption="Route 53 record settings for AWS ALB"
+ max-width="60%"
+%}
+
+{::nomarkdown}
+
+{:/}
+
+#### (Optional) Git integration registration
+If the installation failed, as can happen if the DNS record was not created within the timeframe, manually create and register Git integrations using these commands:
+ `cf integration git add default --runtime --api-url `
+ `cf integration git register default --runtime --token `
+
+{::nomarkdown}
+
+{:/}
+
+### Istio ingress configuration
+For detailed configuration information, see [Istio ingress controller documentation](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress){:target="\_blank"}.
+
+The table below lists the specific configuration requirements for Codefresh.
+
+{: .table .table-bordered .table-hover}
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Valid external IP address |_Before_ installing hybrid runtime |
+|Valid TLS certificate| |
+|TCP support | |
+|Cluster routing service | _After_ installing hybrid runtime |
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+#### Cluster routing service
+> The cluster routing service must be configured _after_ installing the hybrid runtime.
+
+Configure the `VirtualService` to route traffic to the `app-proxy` and `webhook` services, as in the examples below.
+
+{::nomarkdown}
+
+{:/}
+
+**`VirtualService` example for `app-proxy`:**
+
+```yaml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ namespace: test-runtime3 # replace with your runtime name
+ name: cap-app-proxy
+spec:
+ hosts:
+ - my.support.cf-cd.com # replace with your host name
+ gateways:
+ - my-gateway
+ http:
+ - match:
+ - uri:
+ prefix: /app-proxy
+ route:
+ - destination:
+ host: cap-app-proxy
+ port:
+ number: 3017
+```
+{::nomarkdown}
+
+{:/}
+
+**`VirtualService` example for `webhook`:**
+
+```yaml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ namespace: test-runtime3 # replace with your runtime name
+ name: csdp-default-git-source
+spec:
+ hosts:
+ - my.support.cf-cd.com # replace with your host name
+ gateways:
+ - my-gateway
+ http:
+ - match:
+ - uri:
+ prefix: /webhooks/test-runtime3/push-github # replace `test-runtime3` with your runtime name
+ route:
+ - destination:
+ host: push-github-eventsource-svc
+ port:
+ number: 80
+```
+{::nomarkdown}
+
+{:/}
+
+### NGINX Enterprise ingress configuration
+
+For detailed configuration information, see [NGINX ingress controller documentation](https://docs.nginx.com/nginx-ingress-controller){:target="\_blank"}.
+
+The table below lists the specific configuration requirements for Codefresh.
+
+{: .table .table-bordered .table-hover}
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Valid external IP address |_Before_ installing hybrid runtime |
+|Valid TLS certificate | |
+|TCP support| |
+|NGINX Ingress: Enable report status to cluster | |
+|NGINX Ingress Operator: Enable report status to cluster| |
+|Patch certificate secret |_After_ installing hybrid runtime |
-**Installation with NGINX Ingress Operator**
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+#### NGINX Ingress: Enable report status to cluster
+
+If the ingress controller is not configured to report its status to the cluster, Argo’s health check reports the health status as “progressing”, resulting in a timeout error during installation.
+
+* Pass `--report-ingress-status` to `deployment`.
+
+```yaml
+spec:
+ containers:
+ - args:
+ - --report-ingress-status
+```
+
+{::nomarkdown}
+
+{:/}
+
+#### NGINX Ingress Operator: Enable report status to cluster
+
+If the ingress controller is not configured to report its status to the cluster, Argo’s health check reports the health status as “progressing”, resulting in a timeout error during installation.
1. Add this to the `Nginxingresscontrollers` resource file:
@@ -104,8 +351,74 @@ The Enterprise version of NGINX (`nginx.org/ingress-controller`), both with and
1. Make sure you have a certificate secret in the same namespace as the runtime. Copy an existing secret if you don't have one.
You will need to add this to the `ingress-master` when you have completed runtime installation.
-#### NGINX Community version provider-specific ingress configuration
-Codefresh has been tested and is supported in major providers. For your convenience, here are provider-specific configuration instructions, both for supported and untested providers.
+{::nomarkdown}
+
+{:/}
+
+#### Patch certificate secret
+> The certificate secret must be configured _after_ installing the hybrid runtime.
+
+Patch the certificate secret in `spec.tls` of the `ingress-master` resource.
+The secret must be in the same namespace as the runtime.
+
+1. Go to the runtime namespace with the NGINX ingress controller.
+1. In `ingress-master`, add to `spec.tls`:
+
+ ```yaml
+ tls:
+ - hosts:
+ -
+ secretName:
+ ```
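+
+If you do not already have a certificate secret in the runtime namespace, a minimal sketch of a TLS secret you could create; the name, namespace, and data values are placeholders:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-ingress-tls            # placeholder secret name
+  namespace: codefresh            # replace with your runtime namespace
+type: kubernetes.io/tls
+data:
+  tls.crt: LS0tLS1CRUdJTi...      # base64-encoded certificate (placeholder)
+  tls.key: LS0tLS1CRUdJTi...      # base64-encoded private key (placeholder)
+```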
+
+{::nomarkdown}
+
+{:/}
+
+### NGINX Community version ingress configuration
+
+Codefresh has been tested with and supports implementations of the major providers. For your convenience, configuration instructions are provided for both supported and untested providers in [Provider-specific configuration](#provider-specific-configuration).
+
+
+This section lists the specific configuration requirements for Codefresh, which must be completed _before_ installing the hybrid runtime:
+* Valid external IP address
+* Valid TLS certificate
+* TCP support
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services, and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
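+
+To spot-check the certificate currently served for the ingress FQDN (a sketch; the hostname is a placeholder, and this assumes the controller is already reachable on port 443):
+
+```sh
+openssl s_client -connect <ingress-fqdn>:443 -servername <ingress-fqdn> </dev/null 2>/dev/null \
+  | openssl x509 -noout -subject -issuer -dates
+```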
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+Here's an example of TCP configuration for NGINX Community on AWS.
+Verify that the `ingress-nginx-controller` service manifest has either of the following annotations:
+
+`service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"`
+OR
+`service.beta.kubernetes.io/aws-load-balancer-type: nlb`
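+
+For reference, here is a minimal sketch of the relevant part of the `ingress-nginx-controller` service manifest with the backend-protocol annotation (the `ingress-nginx` namespace is the controller's default and is an assumption here; the rest of the service spec is omitted):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: ingress-nginx-controller
+  namespace: ingress-nginx
+  annotations:
+    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
+spec:
+  type: LoadBalancer
+  # ports, selector, and other fields unchanged
+```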
+
+{::nomarkdown}
+
+{:/}
+
+#### Provider-specific configuration
> The instructions are valid for `k8s.io/ingress-nginx`, the community version of NGINX.
@@ -306,86 +619,73 @@ For additional configuration options, see ingress-nginx documentation for Scaleway.
-
-
-
-#### Traefik ingress configuration
-To enable the the Traefik ingress controller to report the status, add `publishedService` to `providers.kubernetesIngress.ingressEndpoint`.
-
-The value must be in the format `"/"`, where:
- `` is the Traefik service from which to copy the status
+
- ```yaml
- ...
- providers:
- kubernetesIngress:
- ingressEndpoint:
- publishedService: "/" # Example, "codefresh/traefik-default" ...
- ...
- ```
+{::nomarkdown}
+
+{:/}
-#### Node requirements
-* Memory: 5000 MB
-* CPU: 2
+### Traefik ingress configuration
+For detailed configuration information, see [Traefik ingress controller documentation](https://doc.traefik.io/traefik/providers/kubernetes-ingress){:target="\_blank"}.
-#### Runtime namespace permissions for resources
+The table below lists the specific configuration requirements for Codefresh.
{: .table .table-bordered .table-hover}
-| Resource | Permissions Required|
-| -------------- | -------------- |
-| `ServiceAccount` | Create, Delete |
-| `ConfigMap` | Create, Update, Delete |
-| `Service` | Create, Update, Delete |
-| `Role` | In group `rbac.authorization.k8s.io`: Create, Update, Delete |
-| `RoleBinding` | In group `rbac.authorization.k8s.io`: Create, Update, Delete |
-| `persistentvolumeclaims` | Create, Update, Delete |
-| `pods` | Creat, Update, Delete |
-
-### Git repository requirements
-This section lists the requirements for Git installation repositories.
-
-#### Git installation repo
-If you are using an existing repo, make sure it is empty.
-
-#### Git access tokens
-Codefresh requires two access tokens, one for runtime installation, and the second, a personal token for each user to authenticate Git-based actions in Codefresh.
-
-##### Git runtime token
-The Git runtime token is mandatory for runtime installation.
-
-The token must have valid:
- * Expiration date: Default is `30 days`
- * Scopes: `repo` and `admin-repo.hook`
-
- {% include
- image.html
- lightbox="true"
- file="/images/getting-started/quick-start/quick-start-git-event-permissions.png"
- url="/images/getting-started/quick-start/quick-start-git-event-permissions.png"
- alt="Scopes for Git runtime token"
- caption="Scopes for Git runtime token"
- max-width="30%"
- %}
-
-##### Git user token for Git-based actions
-The Git user token is the user's personal token and is unique to every user. It is used to authenticate every Git-based action of the user in Codefresh. You can add the Git user token at any time from the UI.
-
- The token must have valid:
- * Expiration date: Default is `30 days`
- * Scope: `repo`
+
+| What to configure | When to configure |
+| -------------- | -------------- |
+|Valid external IP address | _Before_ installing hybrid runtime |
+|Valid TLS certificate | |
+|TCP support | |
+|Enable report status to cluster| |
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
+
+{::nomarkdown}
+
+{:/}
+
+#### Valid TLS certificate
+For secure runtime installation, the ingress controller must have a valid TLS certificate.
+> Use the FQDN (Fully Qualified Domain Name) of the ingress controller for the TLS certificate.
+
+{::nomarkdown}
+
+{:/}
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+{::nomarkdown}
+
+{:/}
+
+#### Enable report status to cluster
+By default, the Traefik ingress controller is not configured to report its status to the cluster. If not configured, Argo’s health check reports the health status as “progressing”, resulting in a timeout error during installation.
+
+To enable the ingress controller to report its status, add `publishedService` to `providers.kubernetesIngress.ingressEndpoint`.
- {% include
- image.html
- lightbox="true"
- file="/images/runtime/git-token-scope-resource-repos.png"
- url="/images/runtime/git-token-scope-resource-repos.png"
- alt="Scope for Git personal user token"
- caption="Scope for Git personal user token"
- max-width="30%"
- %}
-
-For detailed information on GitHub tokens, see [Creating a personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
+The value must be in the format `"<namespace>/<service-name>"`, where:
+* `<namespace>` is the namespace in which the Traefik service is deployed
+* `<service-name>` is the name of the Traefik service from which to copy the status
+```yaml
+...
+providers:
+  kubernetesIngress:
+    ingressEndpoint:
+      publishedService: "<namespace>/<service-name>" # Example, "codefresh/traefik-default"
+...
+```
+{::nomarkdown}
+
+{:/}
+
### What to read next
-[Installing hybrid runtimes]({{site.baseurl}}/docs/runtime/installation/)
+[Hybrid runtime installation flags]({{site.baseurl}}/docs/runtime/installation/#hybrid-runtime-installation-flags)
+[Install hybrid runtimes]({{site.baseurl}}/docs/runtime/installation/)
diff --git a/_docs/runtime/requirements_orig.md b/_docs/runtime/requirements_orig.md
new file mode 100644
index 00000000..29fad0ee
--- /dev/null
+++ b/_docs/runtime/requirements_orig.md
@@ -0,0 +1,384 @@
+---
+title: "Hybrid runtime requirements"
+description: ""
+group: runtime
+toc: true
+---
+
+
+The requirements listed are the **_minimum_** requirements to provision **_hybrid runtimes_** in the Codefresh platform.
+
+> Hosted runtimes are managed by Codefresh. To provision a hosted runtime as part of Hosted GitOps setup, see [Provision a hosted runtime]({{site.baseurl}}/docs/runtime/hosted-runtime/#1-provision-hosted-runtime) in [Set up a hosted (Hosted GitOps) environment]({{site.baseurl}}/docs/runtime/hosted-runtime/).
+
+>In the documentation, Kubernetes and K8s are used interchangeably.
+
+### Requirements
+
+{: .table .table-bordered .table-hover}
+| Item | Requirement |
+| -------------- | -------------- |
+|Kubernetes cluster | Server version 1.18 and higher, without Argo Project components. Tip: To check the server version, run `kubectl version --short`.|
+| Ingress controller| Configured on Kubernetes cluster and exposed from the cluster. {::nomarkdown} See XREF {:/}|
+|Node requirements| {::nomarkdown}<ul><li>Memory: 5000 MB</li><li>CPU: 2</li></ul>{:/}|
+|Runtime namespace | resource permissions|
+| | `ServiceAccount`: Create, Delete |
+| | `ConfigMap`: Create, Update, Delete |
+| | `Service`: Create, Update, Delete |
+| | `Role`: In group `rbac.authorization.k8s.io`: Create, Update, Delete |
+| |`RoleBinding`: In group `rbac.authorization.k8s.io`: Create, Update, Delete |
+| | `persistentvolumeclaims`: Create, Update, Delete |
+| | `pods`: Create, Update, Delete |
+| Git providers | {::nomarkdown}<ul><li>Hosted: GitHub</li><li>Hybrid: GitHub, GitLab, Bitbucket Server, GitHub Enterprise</li></ul>{:/}|
+| Git access tokens | {::nomarkdown}<ul><li>Runtime Git token with a valid expiration date, and <code>repo</code> and <code>admin-repo.hook</code> scopes</li><li>Personal Git user token with a valid expiration date, and <code>repo</code> scope</li></ul>{:/}|
+
+### Ingress controller
+
+#### Valid external IP address
+Run `kubectl get svc -A` to get a list of services and verify that the EXTERNAL-IP column for your ingress controller shows a valid hostname.
+
+#### Valid SSL certificate
+For secure runtime installation, the ingress controller must have a valid SSL certificate from an authorized CA (Certificate Authority).
+
+#### TCP support
+Configure the ingress controller to handle TCP requests.
+
+Here's an example of TCP configuration for NGINX on AWS.
+Verify that the ingress-nginx-controller service manifest has either of the following annotations:
+
+`service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"`
+OR
+`service.beta.kubernetes.io/aws-load-balancer-type: nlb`
+
+
+
+* AWS ALB
+ In the ingress resource file, verify that `spec.controller` is configured as `ingress.k8s.aws/alb`.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: IngressClass
+metadata:
+ name: alb
+spec:
+ controller: ingress.k8s.aws/alb
+```
+
+* Report status
+  The ingress controller must be configured to report its status. Otherwise, Argo's health check reports the health status as "progressing", resulting in a timeout error during installation.
+
+ By default, NGINX Enterprise and Traefik ingress are not configured to report status. For details on configuration settings, see the following sections in this article:
+ [NGINX Enterprise ingress configuration](#nginx-enterprise-version-ingress-configuration)
+ [Traefik ingress configuration](#traefik-ingress-configuration)
+
+
+#### NGINX Enterprise version ingress configuration
+The Enterprise version of NGINX (`nginx.org/ingress-controller`), both with and without the Ingress Operator, must be configured to report the status of the ingress controller.
+
+**Installation with NGINX Ingress**
+* Pass the `--report-ingress-status` argument to `deployment`:
+
+  ```yaml
+  spec:
+    containers:
+    - args:
+      - --report-ingress-status
+  ```
+
+**Installation with NGINX Ingress Operator**
+
+1. Add this to the `Nginxingresscontrollers` resource file:
+
+ ```yaml
+ ...
+ spec:
+ reportIngressStatus:
+ enable: true
+ ...
+ ```
+
+1. Make sure you have a certificate secret in the same namespace as the runtime. Copy an existing secret if you don't have one.
+You will need to add this to the `ingress-master` when you have completed runtime installation.
+
+#### NGINX Community version provider-specific ingress configuration
+Codefresh has been tested with, and supports, the major providers' implementations. For your convenience, here are provider-specific configuration instructions, for both supported and untested providers.
+
+> The instructions are valid for `k8s.io/ingress-nginx`, the community version of NGINX.
+
+
+AWS
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+For additional configuration options, see ingress-nginx documentation for AWS.
+
+
+Azure (AKS)
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+For additional configuration options, see ingress-nginx documentation for AKS.
+
+
+
+
+Bare Metal Clusters
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+Bare-metal clusters often have additional considerations. See Bare-metal ingress-nginx considerations.
+
+
+
+
+Digital Ocean
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+For additional configuration options, see ingress-nginx documentation for Digital Ocean.
+
+
+
+
+Docker Desktop
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+For additional configuration options, see ingress-nginx documentation for Docker Desktop.
+Note: By default, Docker Desktop services will provision with localhost as their external address. Triggers in delivery pipelines cannot reach this instance unless they originate from the same machine where Docker Desktop is being used.
+
+
+
+
+Exoscale
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+For additional configuration options, see ingress-nginx documentation for Exoscale.
+
+
+
+
+
+Google (GKE)
+
+Add firewall rules
+
+GKE by default limits outbound requests from nodes. For the runtime to communicate with the control-plane in Codefresh, add a firewall-specific rule.
+
+
+- Find your cluster's network:
+ gcloud container clusters describe [CLUSTER_NAME] --format=get"(network)"
+
+- Get the Cluster IPV4 CIDR:
+ gcloud container clusters describe [CLUSTER_NAME] --format=get"(clusterIpv4Cidr)"
+
+- Replace the `[CLUSTER_NAME]`, `[NETWORK]`, and `[CLUSTER_IPV4_CIDR]`, with the relevant values:
+  gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" \
+    --network="[NETWORK]" \
+    --source-ranges="[CLUSTER_IPV4_CIDR]" \
+    --allow=tcp,udp,icmp,esp,ah,sctp
+
+
+
+
+Use ingress-nginx
+
+ - Create a `cluster-admin` role binding:
+
+   kubectl create clusterrolebinding cluster-admin-binding \
+     --clusterrole cluster-admin \
+     --user $(gcloud config get-value account)
+
+
+ - Apply:
+
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
+
+
+ - Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+
+We recommend reviewing the provider-specific documentation for GKE.
+
+
+
+
+
+MicroK8s
+
+- Install using Microk8s addon system:
+ microk8s enable ingress
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+MicroK8s has not been tested with Codefresh, and may require additional configuration. For details, see Ingress addon documentation.
+
+
+
+
+
+MiniKube
+
+- Install using MiniKube addon system:
+ minikube addons enable ingress
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+MiniKube has not been tested with Codefresh, and may require additional configuration. For details, see Ingress addon documentation.
+
+
+
+
+
+
+Oracle Cloud Infrastructure
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+For additional configuration options, see ingress-nginx documentation for Oracle Cloud.
+
+
+
+
+Scaleway
+
+- Apply:
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/scw/deploy.yaml
+
+- Verify a valid external address exists:
+ kubectl get svc ingress-nginx-controller -n ingress-nginx
+
+
+For additional configuration options, see ingress-nginx documentation for Scaleway.
+
+
+
+
+#### Traefik ingress configuration
+To enable the Traefik ingress controller to report its status, add `publishedService` to `providers.kubernetesIngress.ingressEndpoint`.
+
+The value must be in the format `"<namespace>/<service-name>"`, where:
+* `<namespace>` is the namespace in which the Traefik service is deployed
+* `<service-name>` is the name of the Traefik service from which to copy the status
+
+ ```yaml
+ ...
+ providers:
+   kubernetesIngress:
+     ingressEndpoint:
+       publishedService: "<namespace>/<service-name>" # Example, "codefresh/traefik-default"
+ ...
+ ```
+
+#### Node requirements
+* Memory: 5000 MB
+* CPU: 2
+
+#### Runtime namespace permissions for resources
+
+{: .table .table-bordered .table-hover}
+| Resource | Permissions Required|
+| -------------- | -------------- |
+| `ServiceAccount` | Create, Delete |
+| `ConfigMap` | Create, Update, Delete |
+| `Service` | Create, Update, Delete |
+| `Role` | In group `rbac.authorization.k8s.io`: Create, Update, Delete |
+| `RoleBinding` | In group `rbac.authorization.k8s.io`: Create, Update, Delete |
+| `persistentvolumeclaims` | Create, Update, Delete |
+| `pods` | Create, Update, Delete |
+
+### Git repository requirements
+This section lists the requirements for Git installation repositories.
+
+#### Git installation repo
+If you are using an existing repo, make sure it is empty.
+
+#### Git access tokens
+Codefresh requires two access tokens, one for runtime installation, and the second, a personal token for each user to authenticate Git-based actions in Codefresh.
+
+##### Git runtime token
+The Git runtime token is mandatory for runtime installation.
+
+The token must have valid:
+ * Expiration date: Default is `30 days`
+ * Scopes: `repo` and `admin-repo.hook`
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/getting-started/quick-start/quick-start-git-event-permissions.png"
+ url="/images/getting-started/quick-start/quick-start-git-event-permissions.png"
+ alt="Scopes for Git runtime token"
+ caption="Scopes for Git runtime token"
+ max-width="30%"
+ %}
+
+##### Git user token for Git-based actions
+The Git user token is the user's personal token and is unique to every user. It is used to authenticate every Git-based action of the user in Codefresh. You can add the Git user token at any time from the UI.
+
+ The token must have valid:
+ * Expiration date: Default is `30 days`
+ * Scope: `repo`
+
+ {% include
+ image.html
+ lightbox="true"
+ file="/images/runtime/git-token-scope-resource-repos.png"
+ url="/images/runtime/git-token-scope-resource-repos.png"
+ alt="Scope for Git personal user token"
+ caption="Scope for Git personal user token"
+ max-width="30%"
+ %}
+
+For detailed information on GitHub tokens, see [Creating a personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
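+
+As a quick sanity check for classic GitHub tokens (a sketch; `<TOKEN>` is a placeholder), the scopes granted to a personal access token are returned in the `X-OAuth-Scopes` response header:
+
+```sh
+# List the scopes granted to a GitHub personal access token
+curl -sS -I -H "Authorization: token <TOKEN>" https://api.github.com \
+  | grep -i 'x-oauth-scopes'
+```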
+
+
+### What to read next
+[Installing hybrid runtimes]({{site.baseurl}}/docs/runtime/installation/)
diff --git a/images/runtime/post-install-alb-ingress.png b/images/runtime/post-install-alb-ingress.png
index 56b911a1..ad689a14 100644
Binary files a/images/runtime/post-install-alb-ingress.png and b/images/runtime/post-install-alb-ingress.png differ