From 07cf028089986cde37567560ccd25ae42fd52a43 Mon Sep 17 00:00:00 2001 From: Alan Dooley Date: Tue, 29 Apr 2025 15:02:16 +0100 Subject: [PATCH] Update all self-referential Release links from production URLs (#7708) This commit replaces all production URLs in the Release page for NGINX Ingress Controller with relative Hugo references instead. This improves the consistency of URLs for the documentation set, and allows for Hugo to handle broken link references. Co-authored-by: Gabor Javorszky Co-authored-by: Paul Abel <128620221+pdabelf5@users.noreply.github.com> Co-authored-by: Venktesh Shivam Patel --- site/content/releases.md | 69 ++++++++++++++++++++-------------------- 1 file changed, 34 insertions(+), 35 deletions(-) diff --git a/site/content/releases.md b/site/content/releases.md index 2bcdea119c..f2d42781cc 100644 --- a/site/content/releases.md +++ b/site/content/releases.md @@ -21,7 +21,7 @@ We have extended the rate-limit Policy to allow tiered rate limit groups with JW We introduced NGINX Plus Zone Sync as a managed service within NGINX Ingress Controller in this release. In previous releases, we had examples using `stream-snippets` for OIDC support, users can now enable `zone-sync` without the need for `snippets`. NGINX Plus Zone Sync is available when utilising two or more replicas, it supports OIDC & rate limiting. {{< note >}} -For users who have previously installed OIDC or used the `zone_sync` directive with `stream-snippets`, please see the note [here](https://docs.nginx.com//nginx-ingress-controller/configuration/global-configuration/configmap-resource/#zone-sync) to use the new `zone-sync` ConfigMap option. +For users who have previously installed OIDC or used the `zone_sync` directive with `stream-snippets`, please see the note in the [Configmap resources]({{< ref "/configuration/global-configuration/configmap-resource.md#zone-sync" >}}) topic to use the new `zone-sync` ConfigMap option. {{< /note >}} Open Source NGINX Ingress Controller architectures `armv7`, `s390x` & `ppc64le` are deprecated and will be removed in the next minor release. @@ -371,7 +371,7 @@ versions: 1.25-1.30. 25 Jun 2024 Added support for the latest generation of NGINX App Protect Web Application Firewall, v5. NGINX Ingress Controller will continue to support the NGINX App Protect v4 family to allow customers to implement new Policy Bundle workflow at their own pace. -NGINX App Protect WAF v5 does not accept the JSON based policies, instead requiring users to compile a Policy Bundle outside of the NGINX Ingress Controller pod. Policy bundles contain a combination of custom Policy, signatures, and campaigns. Bundles can be compiled using either App Protect [compiler](https://docs.nginx.com/nginx-app-protect-waf/v5/admin-guide/compiler/), or [NGINX Instance Manager](https://docs.nginx.com/nginx-instance-manager/nginx-app-protect/manage-waf-security-policies/#list-security-policy-bundles). Learn more here, https://docs.nginx.com/nginx-ingress-controller/installation/integrations/app-protect-waf-v5/. +NGINX App Protect WAF v5 does not accept the JSON based policies, instead requiring users to compile a Policy Bundle outside of the NGINX Ingress Controller pod. Policy bundles contain a combination of custom Policy, signatures, and campaigns. 
Bundles can be compiled using either the App Protect [compiler](https://docs.nginx.com/nginx-app-protect-waf/v5/admin-guide/compiler/) or [NGINX Instance Manager](https://docs.nginx.com/nginx-instance-manager/nginx-app-protect/manage-waf-security-policies/#list-security-policy-bundles). Read more in the [NGINX App Protect WAF V5]({{< ref "/installation/integrations/app-protect-waf-v5/" >}}) topic.
With this release, NGINX Ingress Controller is implementing a new image maintenance policy. Container images for subscribed users will be updated on a regular basis in-between releases to reduce the CVE vulnerabilities. Customers can observe the 3.6.x tag when listing images in the registry and select the latest image to update to for the current release.
@@ -421,7 +421,7 @@ versions: 1.25-1.30.
### Important
-- Bundles compiled on NAP WAF versions <= v4.8.x are not compatible with NAP WAF versions >= 4.9.x, this release of NIC includes NAP WAF v4.10 so recompilation of policy bundles is required. JSON based [WAF Policies](https://docs.nginx.com/nginx-ingress-controller/installation/integrations/app-protect-waf/configuration/#waf-policies) aren't affected with this change.
+- Bundles compiled on NAP WAF versions <= v4.8.x are not compatible with NAP WAF versions >= 4.9.x. This release of NIC includes NAP WAF v4.10, so recompilation of policy bundles is required. JSON-based [WAF Policies]({{< ref "/installation/integrations/app-protect-waf/configuration.md#waf-policies" >}}) aren't affected by this change.
### Dependencies
@@ -486,11 +486,11 @@ versions: 1.23-1.29.
26 Mar 2024
-NGINX Ingress Controller and NGINX App Protect WAF users can can now view violations through NGINX Instance Manager Security Monitor. Security Monitor can be used to build Policy bundles, reducing reload time impacts on NGINX Ingress Controller. Read more information in [NGINX App Protect WAF Bundles](https://docs.nginx.com/nginx-ingress-controller/installation/integrations/app-protect-waf/configuration/#waf-bundles) and [Security Monitoring](https://docs.nginx.com/nginx-instance-manager/monitoring/security-monitoring/).
+NGINX Ingress Controller and NGINX App Protect WAF users can now view violations through NGINX Instance Manager Security Monitor. Security Monitor can be used to build Policy bundles, reducing reload time impacts on NGINX Ingress Controller. Read more information in [NGINX App Protect WAF Bundles]({{< ref "/installation/integrations/app-protect-waf/configuration.md#waf-bundles" >}}) and [Security Monitoring](https://docs.nginx.com/nginx-instance-manager/monitoring/security-monitoring/).
-When using NGINX Plus for two version [split rollouts](https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#split), you can now control progressive rollouts of a new backend version without reloading NGINX using the [**-weight-changes-dynamic-reload**](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-weight-changes-dynamic-reload) command line argument.
+When using NGINX Plus for two version [split rollouts]({{< ref "/configuration/virtualserver-and-virtualserverroute-resources.md#split" >}}), you can now control progressive rollouts of a new backend version without reloading NGINX using the [**-weight-changes-dynamic-reload**]({{< ref "/configuration/global-configuration/command-line-arguments.md#-weight-changes-dynamic-reload" >}}) command line argument.
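To make the two-version split rollout above more concrete, the following is a minimal sketch of a VirtualServer using `splits`; the host, upstream, and service names are placeholders, and the 90/10 weights are only illustrative. The `-weight-changes-dynamic-reload` argument is passed to the NGINX Ingress Controller itself as a command-line argument, not set in the resource.

```yaml
# Illustrative VirtualServer splitting traffic between two backend versions.
# With -weight-changes-dynamic-reload enabled on an NGINX Plus deployment,
# adjusting these weights is applied without a full NGINX reload.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
    - name: coffee-v1
      service: coffee-v1-svc
      port: 80
    - name: coffee-v2
      service: coffee-v2-svc
      port: 80
  routes:
    - path: /coffee
      splits:
        - weight: 90
          action:
            pass: coffee-v1
        - weight: 10
          action:
            pass: coffee-v2
```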
-The [**use-cluster-ip**](https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/#backend-services-upstreams) annotation is now available for the Ingress resource.
+The [**use-cluster-ip**]({{< ref "/configuration/ingress-resources/advanced-configuration-with-annotations.md#backend-services-upstreams" >}}) annotation is now available for the Ingress resource.
**use-cluster-ip** supports service meshes and specific use cases where the backend service should be the target instead of individual backend service pods, bypassing upstream load balancing.
### Features
@@ -925,8 +925,8 @@ helm upgrade my-release oci://ghcr.io/nginx/charts/nginx-ingress --version 0.17.
- Beginning with release 3.1.0 the NET_BIND_SERVICE capability is no longer used, and instead relies on net.ipv4.ip_unprivileged_port_start sysctl to allow port binding. Kubernetes 1.22 or later is required for this sysctl to be [classified as safe](https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#safe-and-unsafe-sysctls). **Ensure that you are using the latest updated `deployment` and `daemonset` example yaml files available in the repo.**
- *The minimum supported version of Kubernetes is now 1.22*. NGINX Ingress Controller now uses `sysctls` to [bind to lower level ports without additional privileges](https://github.com/nginx/kubernetes-ingress/pull/3573/). This removes the need to use `NET_BIND_SERVICE` to bind to these ports. Thanks to [Valters Jansons](https://github.com/sigv) for making this feature possible!
- Added support for loading pre-compiled [AppProtect Policy Bundles](https://github.com/nginx/kubernetes-ingress/pull/3560) when using the `-enable-app-protect` cli argument. This feature removes the need for the Ingress Controller to compile NGINX App Protect Policy when NGINX App Protect Policy is updated.
-- IngressMTLS policy now supports configuring a Certificate Revocation Lists(CRL). When using this feature requests made using a revoked certificate will be rejected. See [Using a Certificate Revocation List](https://docs.nginx.com/nginx-ingress-controller/configuration/policy-resource/#using-a-certificate-revocation-list) for details on configuring this option.
-- NGINX Ingress Controller now supports [running with a Read-only Root Filesystem](https://github.com/nginx/kubernetes-ingress/pull/3548). This improves the security posture of NGINX Ingress Controller by protecting the file system from unknown writes. See [Configure root filesystem as read-only](https://docs.nginx.com/nginx-ingress-controller/configuration/security/#configure-root-filesystem-as-read-only) for details on configuring this option with both HELM and Manifest. Thanks to [Valters Jansons](https://github.com/sigv) for making this feature possible!
+- IngressMTLS policy now supports configuring a Certificate Revocation List (CRL). When using this feature, requests made using a revoked certificate will be rejected. See [Using a Certificate Revocation List]({{< ref "/configuration/policy-resource.md#using-a-certificate-revocation-list" >}}) for details on configuring this option.
+- NGINX Ingress Controller now supports [running with a Read-only Root Filesystem](https://github.com/nginx/kubernetes-ingress/pull/3548). This improves the security posture of NGINX Ingress Controller by protecting the file system from unknown writes.
See [Configure root filesystem as read-only]({{< ref "/configuration/security.md#configure-root-filesystem-as-read-only" >}}) for details on configuring this option with both HELM and Manifest. Thanks to [Valters Jansons](https://github.com/sigv) for making this feature possible! - HELM deployments can now set [custom environment variables with controller.env](https://github.com/nginx/kubernetes-ingress/pull/3326). Thanks to [Aaron Shiels](https://github.com/AaronShiels) for making this possible! - HELM deployments can now configure a [pod disruption budget](https://github.com/nginx/kubernetes-ingress/pull/3248) allowing deployments to configure either a minimum number or a maximum unavailable number of pods. Thanks to [Bryan Hendryx](https://github.com/coolbry95) for making this possible! - NGINX Ingress Controller uses the latest OIDC reference implementation which now supports [forwarding access tokens to upstreams / backends](https://github.com/nginx/kubernetes-ingress/pull/3474). Thanks to [Shawn Kim](https://github.com/shawnhankim) for making this possible! @@ -1028,13 +1028,12 @@ We will provide technical support for NGINX Ingress Controller on any Kubernetes ### Overview -- Added support for [Deep Service Insight](https://docs.nginx.com/nginx-ingress-controller/logging-and-monitoring/service-insight) for VirtualServer and TransportServer using the [-enable-service-insight](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-enable-service-insight) cli argument. +- Added support for [Deep Service Insight]({{< ref "/logging-and-monitoring.md#service-insight" >}}) for VirtualServer and TransportServer using the [-enable-service-insight]({{< ref "/configuration/global-configuration/command-line-arguments.md#-enable-service-insight" >}}) cli argument. - *The minimum supported version of Kubernetes is now 1.21*. NGINX Ingress Controller 3.0.0 removes support for `k8s.io/v1/Endpoints` API in favor of `discovery.k8s.io/v1/EndpointSlices`. For older Kubernetes versions, use the 2.4.x release of the Ingress Controller. - Added support for [EndpointSlices](https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/). -- Added support to dynamically reconfigure namespace watchers using labels [-watch-namespace-label](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-watch-namespace-label-string) and watching secrets using the [-watch-secret-namespace](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-watch-secret-namespace-string) cli arguments. -- Allow configuration of NGINX directives `map-hash-bucket-size` and `map-hash-max-size` using the [ConfigMap resource](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/configmap-resource#general-customization) . -- Added support for [fetching JWKs from a remote URL](https://docs.nginx.com/nginx-ingress-controller/configuration/policy-resource#jwt-using-jwks-from-remote-location -) to dynamically validate JWT tokens and optimize performance through caching. 
+- Added support to dynamically reconfigure namespace watchers using labels [-watch-namespace-label]({{< ref "/configuration/global-configuration/command-line-arguments.md#-watch-namespace-label-string" >}}) and watching secrets using the [-watch-secret-namespace]({{< ref "/configuration/global-configuration/command-line-arguments.md#-watch-secret-namespace-string" >}}) cli arguments.
+- Allow configuration of NGINX directives `map-hash-bucket-size` and `map-hash-max-size` using the [ConfigMap resource]({{< ref "/configuration/global-configuration/configmap-resource.md#general-customization" >}}).
+- Added support for [fetching JWKs from a remote URL]({{< ref "/configuration/policy-resource.md#jwt-using-jwks-from-remote-location" >}}) to dynamically validate JWT tokens and optimize performance through caching.
- Beginning with NGINX Service Mesh release 1.7 it will include support for the free version of NGINX Ingress Controller as well as the paid version.
- NGINX Ingress Controller + NGINX App Protect Denial of Service is now available through the AWS Marketplace.
@@ -1150,9 +1149,9 @@ We will provide technical support for NGINX Ingress Controller on any Kubernetes
- Added support for enabling [proxy_protocol](https://github.com/nginx/kubernetes-ingress/tree/v2.4.0/examples/shared-examples/proxy-protocol) when port 443 is being used for both HTTPS traffic and [TLS Passthrough traffic](https://github.com/nginx/kubernetes-ingress/tree/v2.4.0/examples/custom-resources/tls-passthrough).
- Updates to the TransportServer resource to support using [ExternalName services](https://kubernetes.io/docs/concepts/services-networking/service/#externalname). For examples, see [externalname-services](https://github.com/nginx/kubernetes-ingress/tree/v2.4.0/examples/custom-resources/externalname-services).
- VirtualServer resource now supports [wildcard hostname](https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards).
-- NGINX Ingress Controller images including the combined NGINX AppProtect WAF and NGINX AppProtect DoS solutions are now published to our registry. See [Images with NGINX Plus](https://docs.nginx.com/nginx-ingress-controller/technical-specifications/#images-with-nginx-plus) for a detailed list of images in our registry.
-- Added support for watching multiple namespaces using the [-watch-namespace](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-watch-namespace-string) cli argument. This can configured by passing a comma-separated list of namespaces to the `-watch-namespace` CLI argument (e.g. `-watch-namespace=ns-1,ns-2`).
-- A new cli argument has been added: [-include-year](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-include-year). This appends the current year to the log output from the Ingress Controller. Example output: `I20220512 09:20:42.345457`.
+- NGINX Ingress Controller images including the combined NGINX AppProtect WAF and NGINX AppProtect DoS solutions are now published to our registry. See [Images with NGINX Plus]({{< ref "/technical-specifications.md#images-with-nginx-plus" >}}) for a detailed list of images in our registry.
+- Added support for watching multiple namespaces using the [-watch-namespace]({{< ref "/configuration/global-configuration/command-line-arguments.md#-watch-namespace-string" >}}) cli argument. This can be configured by passing a comma-separated list of namespaces to the `-watch-namespace` CLI argument (e.g.
`-watch-namespace=ns-1,ns-2`). +- A new cli argument has been added: [-include-year]({{< ref "/configuration/global-configuration/command-line-arguments.md#-include-year" >}}). This appends the current year to the log output from the Ingress Controller. Example output: `I20220512 09:20:42.345457`. - Post-startup configuration reloads have been optimized to reduce traffic impacts. When many resources are modified at the same time, changes are combined to reduce the number of data plane reloads. ### Features @@ -1266,7 +1265,7 @@ We will provide technical support for NGINX Ingress Controller on any Kubernetes - For NGINX, use the 2.3.0 images from our [DockerHub](https://hub.docker.com/r/nginx/nginx-ingress/tags?page=1&ordering=last_updated&name=2.3.0), [GitHub Container](https://github.com/nginx/kubernetes-ingress/pkgs/container/kubernetes-ingress) or [Amazon ECR Public Gallery](https://gallery.ecr.aws/nginx/nginx-ingress). - For NGINX Plus, use the 2.3.0 images from the F5 Container registry or the [AWS Marketplace](https://aws.amazon.com/marketplace/search/?CREATOR=741df81b-dfdc-4d36-b8da-945ea66b522c&FULFILLMENT_OPTION_TYPE=CONTAINER&filters=CREATOR%2CFULFILLMENT_OPTION_TYPE) or build your own image using the 2.3.0 source code. - For Helm, use version 0.14.0 of the chart. If you're using custom resources like VirtualServer and TransportServer (`controller.enableCustomResources` is set to `true`), after you run the `helm upgrade` command, the CRDs will not be upgraded. After running the `helm upgrade` command, run `kubectl apply -f deployments/helm-chart/crds` to upgrade the CRDs. -- When upgrading using the [manifests](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/), make sure to update the [ClusterRole](https://github.com/nginx/kubernetes-ingress/blob/v2.3.1/deployments/rbac/rbac.yaml). This is required to enable the ExternalDNS for VirtualServer resources integration. +- When upgrading using [Manifests]({{< ref "/installation/installing-nic/installation-with-manifests.md" >}}), make sure to update the [ClusterRole](https://github.com/nginx/kubernetes-ingress/blob/v2.3.1/deployments/rbac/rbac.yaml). This is required to enable the ExternalDNS for VirtualServer resources integration. ### Supported Platforms @@ -1319,11 +1318,11 @@ the documentation here - Support for automatic provisioning and management of Certificate resources for VirtualServer resources using [cert-manager](https://cert-manager.io/docs/). Examples for configuring cert-manager with NGINX Ingress Controller can be found [here](https://github.com/nginx/kubernetes-ingress/tree/v2.2.0/examples/custom-resources/certmanager). Please note that HTTP01 type ACME Issuers are not yet supported for use with VirtualServer resources. -- Full support for IPv6 using the NGINX Ingress Controller [VirtualServer and VirtualServerRoute](https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources) custom resources, and Ingress resources. +- Full support for IPv6 using the NGINX Ingress Controller [VirtualServer and VirtualServerRoute]({{< ref "/configuration/virtualserver-and-virtualserverroute-resources.md" >}}) custom resources, and Ingress resources. -- The [-enable-preview-policies](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-enable-preview-policies) cli argument has been deprecated and is no longer required for the usage of any Policy resource type. 
This argument will be removed completely in v2.6.0. +- The [-enable-preview-policies]({{< ref "/configuration/global-configuration/command-line-arguments.md#-enable-preview-policies" >}}) cli argument has been deprecated and is no longer required for the usage of any Policy resource type. This argument will be removed completely in v2.6.0. -- A new [-enable-oidc](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/command-line-arguments/#-enable-oidc) cli argument has been added to enable OIDC policies. Previously, this behaviour was achieved through the usage of the `-enable-preview-policies` cli argument. +- A new [-enable-oidc]({{< ref "/configuration/global-configuration/command-line-arguments.md#-enable-oidc" >}}) cli argument has been added to enable OIDC policies. Previously, this behaviour was achieved through the usage of the `-enable-preview-policies` cli argument. ### Features @@ -1353,7 +1352,7 @@ the documentation here - For NGINX, use the 2.2.0 images from our [DockerHub](https://hub.docker.com/r/nginx/nginx-ingress/tags?page=1&ordering=last_updated&name=2.2.0), [GitHub Container](https://github.com/nginx/kubernetes-ingress/pkgs/container/kubernetes-ingress) or [Amazon ECR Public Gallery](https://gallery.ecr.aws/nginx/nginx-ingress). - For NGINX Plus, use the 2.2.0 images from the F5 Container registry or the [AWS Marketplace](https://aws.amazon.com/marketplace/search/?CREATOR=741df81b-dfdc-4d36-b8da-945ea66b522c&FULFILLMENT_OPTION_TYPE=CONTAINER&filters=CREATOR%2CFULFILLMENT_OPTION_TYPE) or build your own image using the 2.2.0 source code. - For Helm, use version 0.13.0 of the chart. If you're using custom resources like VirtualServer and TransportServer (`controller.enableCustomResources` is set to `true`), after you run the `helm upgrade` command, the CRDs will not be upgraded. After running the `helm upgrade` command, run `kubectl apply -f deployments/helm-chart/crds` to upgrade the CRDs. -- When upgrading using the [manifests](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/), make sure to update the [ClusterRole](https://github.com/nginx/kubernetes-ingress/blob/v2.2.0/deployments/rbac/rbac.yaml). This is required to enable the cert-manager for VirtualServer resources integration. +- When upgrading using [Manifests]({{< ref "/installation/installing-nic/installation-with-manifests.md" >}}), make sure to update the [ClusterRole](https://github.com/nginx/kubernetes-ingress/blob/v2.2.0/deployments/rbac/rbac.yaml). This is required to enable the cert-manager for VirtualServer resources integration. - The -enable-preview-policies cli argument has been deprecated, and is no longer required for any Policy resources. - Enabling OIDC Policies now requires the use of -enable-oidc cli argument instead of the -enable-preview-policies cli argument. @@ -1429,11 +1428,11 @@ We will provide technical support for NGINX Ingress Controller on any Kubernetes - Support for NGINX App Protect Denial of Service protection with NGINX Ingress Controller. More information about [NGINX App Protect DoS](https://www.nginx.com/products/nginx-app-protect/denial-of-service/). Examples for configuring NGINX App Protect DoS with NGINX Ingress Controller can be found [here](https://github.com/nginx/kubernetes-ingress/tree/v2.1.1/examples/appprotect-dos). 
-- Full support for gRPC services using the NGINX Ingress Controller [VirtualServer and VirtualServerRoute](https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources) custom resource definitions. This makes configuring and supporting gRPC services much easier, giving a simple YAML configuration and removing the need for snippets. Resource definition examples for gRPC can be found [here](https://github.com/nginx/kubernetes-ingress/tree/v2.1.1/examples/custom-resources/grpc-upstreams).
+- Full support for gRPC services using the NGINX Ingress Controller [VirtualServer and VirtualServerRoute]({{< ref "/configuration/virtualserver-and-virtualserverroute-resources.md" >}}) custom resource definitions. This makes configuring and supporting gRPC services much easier, giving a simple YAML configuration and removing the need for snippets. Resource definition examples for gRPC can be found [here](https://github.com/nginx/kubernetes-ingress/tree/v2.1.1/examples/custom-resources/grpc-upstreams).
-- Implementation of NGINX mandatory and persistent health checks in VirtualServer and VirtualServerRoute to further reduce interruptions to your service traffic as configuration changes continuously happen in your dynamic Kubernetes environment(s). Health checks have been extended to include `mandatory` and `persistent` fields. Mandatory health checks ensures that a new upstream server starts receiving traffic only after the health check passes. Mandatory health checks can be marked as persistent, so that the previous state is remembered when the Ingress Controller reloads NGINX Plus configuration. When combined with the slow-start parameter, the mandatory health check give a new upstream server more time to connect to databases and “warm up” before being asked to handle their full share of traffic. See the settings [here](https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#upstreamhealthcheck). More about the [NGINX Plus mandatory and persistent health check features](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/#mandatory-health-checks).
+- Implementation of NGINX mandatory and persistent health checks in VirtualServer and VirtualServerRoute to further reduce interruptions to your service traffic as configuration changes continuously happen in your dynamic Kubernetes environment(s). Health checks have been extended to include `mandatory` and `persistent` fields. Mandatory health checks ensure that a new upstream server starts receiving traffic only after the health check passes. Mandatory health checks can be marked as persistent, so that the previous state is remembered when the Ingress Controller reloads NGINX Plus configuration. When combined with the slow-start parameter, the mandatory health check gives a new upstream server more time to connect to databases and “warm up” before being asked to handle their full share of traffic. See the settings [here]({{< ref "/configuration/virtualserver-and-virtualserverroute-resources.md#upstreamhealthcheck" >}}). More about the [NGINX Plus mandatory and persistent health check features](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/#mandatory-health-checks).
Mandatory health checks can be marked as persistent, so that the previous state is remembered when reloading configuration.
When combined with the slow-start parameter, it gives a new service pod more time to connect to databases and “warm up” before being asked to handle their full share of traffic. -See the settings [here](https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#upstreamhealthcheck). +See the settings [here]({{< ref "/configuration/virtualserver-and-virtualserverroute-resources.md#upstreamhealthcheck" >}}). More about the [NGINX Plus mandatory and persistent health check features](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/#mandatory-health-checks) ### Features @@ -1640,7 +1639,7 @@ You will find the complete changelog for release 2.0.0, including bug fixes, imp - For NGINX Plus, use the 2.0.0 from the F5 Container registry or build your own image using the 2.0.0 source code. - For Helm, use version 0.11.0 of the chart. -See the complete list of supported images for NGINX and NGINX Plus on the [Technical Specifications](https://docs.nginx.com/nginx-ingress-controller/technical-specifications/#supported-docker-images) page. +See the complete list of supported images for NGINX and NGINX Plus on the [Technical Specifications]({{< ref "/technical-specifications.md#supported-docker-images" >}}) page. INCREASED UPSTREAM ZONES @@ -1648,9 +1647,9 @@ We increased the default size of an upstream zone from 256K to 512K to accommoda The increase in the zone size is to prevent NGINX Plus configuration reload failures after an upgrade to release 1.13.0. Note that If a zone becomes full, NGINX Plus will fail to reload and fail to add more upstream servers via the API. -The new 512K default value will be able to hold ~270 upstream servers per upstream, similarly to how the old 256K value was able to hold the same number of upstream servers in the previous Ingress Controller releases. You can understand the utilization of the upstream zones via [NGINX Plus API](http://nginx.org/en/docs/http/ngx_http_api_module.html#slabs) and the [NGINX Plus dashboard](https://docs.nginx.com/nginx-ingress-controller/logging-and-monitoring/status-page/#accessing-live-activity-monitoring-dashboard) (the shared zones tab). +The new 512K default value will be able to hold ~270 upstream servers per upstream, similarly to how the old 256K value was able to hold the same number of upstream servers in the previous Ingress Controller releases. You can understand the utilization of the upstream zones via [NGINX Plus API](http://nginx.org/en/docs/http/ngx_http_api_module.html#slabs) and the [NGINX Plus dashboard]({{< ref "/logging-and-monitoring/status-page.md#accessing-live-activity-monitoring-dashboard" >}}) (the shared zones tab). -If you have a large number of upstream in the NGINX Plus configuration of the Ingress Controller, expect that after an upgrade NGINX Plus will consume more memory: +256K per upstream. If you don’t have upstreams with huge number of upstream serves and you’d like to reduce the memory usage of NGINX Plus, you can configure the `upstream-zone-size` [ConfigMap key](https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/configmap-resource/#backend-services-upstreams) with a lower value. 
Additionally, the Ingress resource supports `nginx.org/upstream-zone-size` [annotation](https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/#backend-services-upstreams) to configure zone sizes for the upstreams of an Ingress resource rather than globally. +If you have a large number of upstream in the NGINX Plus configuration of the Ingress Controller, expect that after an upgrade NGINX Plus will consume more memory: +256K per upstream. If you don’t have upstreams with huge number of upstream serves and you’d like to reduce the memory usage of NGINX Plus, you can configure the `upstream-zone-size` [ConfigMap key]({{< ref "/configuration/global-configuration/configmap-resource.md#backend-services-upstreams" >}}) with a lower value. Additionally, the Ingress resource supports `nginx.org/upstream-zone-size` [annotation]({{< ref "/configuration/ingress-resources/advanced-configuration-with-annotations.md#backend-services-upstreams" >}}) to configure zone sizes for the upstreams of an Ingress resource rather than globally. UPDATING POLICIES @@ -1658,11 +1657,11 @@ This section is only relevant if you’re running release 1.9.0 and planning to Release 1.10 removed the `k8s.nginx.org/v1alpha1 version` of the Policy resource and introduced the `k8s.nginx.org/v1` version. This means that to upgrade to release 1.10 users had to re-create v1alpha1 Policies with the v1 version, which caused downtime for their applications. Release 2.0.0 brings back the support for the v1alpha1 Policy, which makes it possible to upgrade from 1.9.0 to 2.0.0 release without causing downtime: -- If the Policy is marked as a preview feature in the [documentation](https://docs.nginx.com/nginx-ingress-controller/configuration/policy-resource/), make sure the -enable-preview-policies command-line argument is set in 2.0.0 Ingress Controller. +- If the Policy is marked as a preview feature in the [documentation]({{< ref "/configuration/policy-resource.md" >}}), make sure the -enable-preview-policies command-line argument is set in 2.0.0 Ingress Controller. - During the upgrade, the existing Policies will not be removed. - After the upgrade, make sure to update the Policy manifests to k8s.nginx.org/v1 version. -Please also read the [release 1.10 changelog](https://docs.nginx.com/nginx-ingress-controller/releases/#nginx-ingress-controller-1100) for the instructions on how to update Secret resources, which is also necessary since some of the Policies reference Secrets. +Please also read the [release 1.10 changelog](#1100) for the instructions on how to update Secret resources, which is also necessary since some of the Policies reference Secrets. Note that 2.1.0 will remove support for the v1alpha1 version of the Policy. @@ -1857,8 +1856,8 @@ You will find the complete changelog for release 1.11.0, including bug fixes, im - For Helm, use version 0.9.0 of the chart. - [1241](https://github.com/nginx/kubernetes-ingress/pull/1241) improved the Makefile. As a result, the commands for building the Ingress Controller image were changed. See the updated commands [here]({{< relref "/installation/build-nginx-ingress-controller.md" >}}). - [1241](https://github.com/nginx/kubernetes-ingress/pull/1241) also consolidated all Dockerfiles into a single Dockerfile. If you customized any of the Dockerfiles, make sure to port the changes to the new Dockerfile. -- [1288](https://github.com/nginx/kubernetes-ingress/pull/1288) further improved validation of Ingress annotations. 
See this [document](https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/#validation) to learn more about which annotations are validated. Note that the Ingress Controller will reject resources with invalid annotations, which means clients will see `404` responses from NGINX. Before upgrading, ensure the Ingress resources don't have annotations with invalid values. Otherwise, after the upgrade, the Ingress Controller will reject such resources. -- [1457](https://github.com/nginx/kubernetes-ingress/pull/1457) fixed the bug when an Ingress Controller pod could become ready before it generated the configuration for all relevant resources in the cluster. The fix also requires that the Ingress Controller can successfully list the relevant resources from the Kubernetes API. For example, if the `-enable-custom-resources` cli argument is `true` (which is the default), the VirtualServer, VirtualServerRoute, TransportServer, and Policy CRDs must be created in the cluster, so that the Ingress Controller can list them. This is similar to other custom resources -- see the list [here](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/#create-custom-resources). Thus, before upgrading, make sure that the CRDs are created in the cluster. Otherwise, the Ingress Controller pods will not become ready. +- [1288](https://github.com/nginx/kubernetes-ingress/pull/1288) further improved validation of Ingress annotations. See this [document]({{< ref "/configuration/ingress-resources/advanced-configuration-with-annotations.md#validation" >}}) to learn more about which annotations are validated. Note that the Ingress Controller will reject resources with invalid annotations, which means clients will see `404` responses from NGINX. Before upgrading, ensure the Ingress resources don't have annotations with invalid values. Otherwise, after the upgrade, the Ingress Controller will reject such resources. +- [1457](https://github.com/nginx/kubernetes-ingress/pull/1457) fixed the bug when an Ingress Controller pod could become ready before it generated the configuration for all relevant resources in the cluster. The fix also requires that the Ingress Controller can successfully list the relevant resources from the Kubernetes API. For example, if the `-enable-custom-resources` cli argument is `true` (which is the default), the VirtualServer, VirtualServerRoute, TransportServer, and Policy CRDs must be created in the cluster, so that the Ingress Controller can list them. This is similar to other custom resources -- see the list [here]({{< ref "/installation/installing-nic/installation-with-manifests.md#create-custom-resources" >}}). Thus, before upgrading, make sure that the CRDs are created in the cluster. Otherwise, the Ingress Controller pods will not become ready. ### Supported Platforms @@ -1937,7 +1936,7 @@ You will find the complete changelog for release 1.10.0, including bug fixes, im - For NGINX, use the 1.10.0 image from our DockerHub: `nginx/nginx-ingress:1.10.0`, `nginx/nginx-ingress:1.10.0-alpine` or `nginx-ingress:1.10.0-ubi` - For NGINX Plus, please build your own image using the 1.10.0 source code. - For Helm, use version 0.8.0 of the chart. 
-- As a result of [1270](https://github.com/nginx/kubernetes-ingress/pull/1270) and [1277](https://github.com/nginx/kubernetes-ingress/pull/1277), the Ingress Controller improved validation of Ingress annotations: more annotations are validated and validation errors are reported via events for Ingress resources. Additionally, the default behavior for invalid annotation values was changed: instead of using the default values, the Ingress Controller will reject a resource with an invalid annotation value, which will make clients see `404` responses from NGINX. See this [document](https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/#validation) to learn more. Before upgrading, ensure the Ingress resources don't have annotations with invalid values. Otherwise, after the upgrade, the Ingress Controller will reject such resources. +- As a result of [1270](https://github.com/nginx/kubernetes-ingress/pull/1270) and [1277](https://github.com/nginx/kubernetes-ingress/pull/1277), the Ingress Controller improved validation of Ingress annotations: more annotations are validated and validation errors are reported via events for Ingress resources. Additionally, the default behavior for invalid annotation values was changed: instead of using the default values, the Ingress Controller will reject a resource with an invalid annotation value, which will make clients see `404` responses from NGINX. See this [document]({{< ref "/configuration/ingress-resources/advanced-configuration-with-annotations.md#validation" >}}) to learn more. Before upgrading, ensure the Ingress resources don't have annotations with invalid values. Otherwise, after the upgrade, the Ingress Controller will reject such resources. - In [1232](https://github.com/nginx/kubernetes-ingress/pull/1232) `controller.serviceAccount.imagePullSecrets` was removed. Use the new `controller.serviceAccount.imagePullSecretName` instead. - The Policy resource was promoted to `v1`. If you used the `alpha1` version, the policies are needed to be recreated with the `v1` version. Before upgrading the Ingress Controller, run the following command to remove the `alpha1` policies CRD (that will also remove all existing `alpha1` policies): @@ -1945,7 +1944,7 @@ You will find the complete changelog for release 1.10.0, including bug fixes, im kubectl delete crd policies.k8s.nginx.org ``` - As part of the upgrade, make sure to create the `v1` policies CRD. See the corresponding instructions for the [manifests](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/#create-custom-resources) and [Helm](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/#upgrading-the-crds) installations. + As part of the upgrade, make sure to create the `v1` policies CRD. See the corresponding instructions for [Manifests]({{< ref "/installation/installing-nic/installation-with-manifests.md#create-custom-resources" >}}) and [Helm]({{< ref "/installation/installing-nic/installation-with-helm.md#upgrading-the-crds" >}}) installations. Also note that all policies except for `accessControl` are still in preview. To enable them, run the Ingress Controller with `- -enable-preview-policies` command-line argument (`controller.enablePreviewPolicies` Helm parameter). - It is necessary to update secret resources. See the section UPDATING SECRETS below. 
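For the preview Policies mentioned above, a minimal sketch of how the `-enable-preview-policies` command-line argument is typically supplied is shown below; the image tag and the neighbouring argument are illustrative and should match the manifests for the release you deploy.

```yaml
# Illustrative fragment of the Ingress Controller Deployment (or DaemonSet)
# manifest: -enable-preview-policies turns on the preview Policy types.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress
          image: nginx/nginx-ingress:1.10.0
          args:
            - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
            - -enable-preview-policies
```

When deploying with Helm, the equivalent is the `controller.enablePreviewPolicies` parameter noted above.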
@@ -2068,7 +2067,7 @@ You will find the complete changelog for release 1.9.0, including bug fixes, imp
- For NGINX Plus, please build your own image using the 1.9.0 source code.
- For Helm, use version 0.7.0 of the chart.
-For Kubernetes >= 1.18, when upgrading using the [manifests](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/), make sure to update the [ClusterRole](https://github.com/nginx/kubernetes-ingress/blob/v1.9.0/deployments/rbac/rbac.yaml) and create the [IngressClass resource](https://github.com/nginx/kubernetes-ingress/blob/v1.9.0/deployments/common/ingress-class.yaml), which is required for Kubernetes >= 1.18. Otherwise, the Ingress Controller will fail to start. If you run multiple NGINX Ingress Controllers in the cluster, each Ingress Controller must have its own IngressClass resource. As the `-use-ingress-class-only` argument is now ignored (see NOTES), make sure your Ingress resources have the `ingressClassName` field or the `kubernetes.io/ingress.class` annotation set to the name of the IngressClass resource. Otherwise, the Ingress Controller will ignore them.
+For Kubernetes >= 1.18, when upgrading using [Manifests]({{< ref "/installation/installing-nic/installation-with-manifests.md" >}}), make sure to update the [ClusterRole](https://github.com/nginx/kubernetes-ingress/blob/v1.9.0/deployments/rbac/rbac.yaml) and create the [IngressClass resource](https://github.com/nginx/kubernetes-ingress/blob/v1.9.0/deployments/common/ingress-class.yaml), which is required for Kubernetes >= 1.18. Otherwise, the Ingress Controller will fail to start. If you run multiple NGINX Ingress Controllers in the cluster, each Ingress Controller must have its own IngressClass resource. As the `-use-ingress-class-only` argument is now ignored (see NOTES), make sure your Ingress resources have the `ingressClassName` field or the `kubernetes.io/ingress.class` annotation set to the name of the IngressClass resource. Otherwise, the Ingress Controller will ignore them.
#### Helm Upgrade
@@ -2158,7 +2157,7 @@ If you're using custom resources like VirtualServer and TransportServer (`contro
### Notes
-- As part of installing a release, Helm will install the CRDs unless that step is disabled (see the [corresponding doc](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/)). The installed CRDs include the CRDs for all Ingress Controller features, including the ones disabled by default (like App Protect with `aplogconfs.appprotect.f5.com` and `appolicies.appprotect.f5.com` CRDs).
+- As part of installing a release, Helm will install the CRDs unless that step is disabled (see the [corresponding doc]({{< ref "/installation/installing-nic/installation-with-helm.md" >}})). The installed CRDs include the CRDs for all Ingress Controller features, including the ones disabled by default (like App Protect with `aplogconfs.appprotect.f5.com` and `appolicies.appprotect.f5.com` CRDs).
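For the IngressClass requirement described above, a minimal sketch is shown below; the IngressClass mirrors the example `ingress-class.yaml` linked in that note, while the Ingress name, host, and service are placeholders. On Kubernetes 1.18 the older `networking.k8s.io/v1beta1` API (with a slightly different Ingress schema) applies instead of `networking.k8s.io/v1`.

```yaml
# Illustrative IngressClass for the NGINX Ingress Controller, plus an Ingress
# that selects it via ingressClassName (the kubernetes.io/ingress.class
# annotation is the alternative mentioned above).
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: cafe.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cafe-svc
                port:
                  number: 80
```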