From f684545359cd484647f8253b7bf7eec4f9fb2f13 Mon Sep 17 00:00:00 2001 From: Jason Bouska <82831332+skabou@users.noreply.github.com> Date: Fri, 5 Apr 2024 13:05:59 -0400 Subject: [PATCH] Small content adjustments (#410) --- 02-ca-certificates.md | 6 +++--- 03-microsoft-entra-id.md | 2 +- 05-bootstrap-prep.md | 6 +++--- 06-aks-cluster.md | 8 ++++---- 07-bootstrap-validation.md | 6 +++--- 09-secret-management-and-ingress-controller.md | 2 +- 11-validation.md | 4 ++-- 12-cleanup.md | 2 +- README.md | 6 +++--- 9 files changed, 21 insertions(+), 21 deletions(-) diff --git a/02-ca-certificates.md b/02-ca-certificates.md index 62f9bd13..3d948eec 100644 --- a/02-ca-certificates.md +++ b/02-ca-certificates.md @@ -1,6 +1,6 @@ # Generate your client-facing and AKS ingress controller TLS certificates -Now that you have the [prerequisites](./01-prerequisites.md) met, follow these steps to create the TLS certificates that Azure Application Gateway will serve for clients connecting to your web app as well as the AKS ingress controller. If you already have access to an appropriate certificates, or can procure them from your organization, consider doing so and skipping the certificate generation steps. The following will describe using a self-signed certs for instructive purposes only. +Now that you have the [prerequisites](./01-prerequisites.md) met, follow these steps to create the TLS certificates that Azure Application Gateway will serve for clients connecting to your web app as well as the AKS ingress controller. If you already have access to appropriate certificates, or can procure them from your organization, consider doing so and skipping the certificate generation steps. The following will describe using self-signed certs for instructive purposes only. ## Steps @@ -14,7 +14,7 @@ Now that you have the [prerequisites](./01-prerequisites.md) met, follow these s > :book: Contoso Bicycle needs to procure a CA certificate for the web site. As this is going to be a user-facing site, they purchase an EV cert from their CA. This will serve in front of the Azure Application Gateway. They will also procure another one, a standard cert, to be used with the AKS Ingress Controller. This one is not EV, as it will not be user facing. - :warning: Do not use the certificate created by this script for actual deployments. The use of self-signed certificates are provided for ease of illustration purposes only. For your cluster, use your organization's requirements for procurement and lifetime management of TLS certificates, *even for development purposes*. + :warning: Do not use the certificate created by this script for actual deployments. The use of self-signed certificates is for demonstration purposes only. For your cluster, use your organization's requirements for procurement and lifetime management of TLS certificates, *even for development purposes*. Create the certificate that will be presented to web clients by Azure Application Gateway for your domain. @@ -25,7 +25,7 @@ Now that you have the [prerequisites](./01-prerequisites.md) met, follow these s 1. Base64 encode the client-facing certificate. - :bulb: No matter if you used a certificate from your organization or you generated one from above, you'll need the certificate (as `.pfx`) to be Base64 encoded for proper storage in Key Vault later. + :bulb: No matter if you used a certificate from your organization or one generated from above, you'll need the certificate (as `.pfx`) to be Base64 encoded for proper storage in Key Vault later. 
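> :bulb: Optionally, you can sanity-check the PFX before encoding it. The following is a sketch only (it assumes `appgw.pfx` sits in your working directory, as in the command below, and was exported without a password); it prints the certificate's subject and expiry so you can confirm you are encoding the right file.

```bash
# Optional check (sketch): confirm the subject and expiry of the cert inside appgw.pfx
# before Base64-encoding it for Key Vault. Assumes the PFX has no export password.
openssl pkcs12 -in appgw.pfx -nokeys -clcerts -passin pass: | openssl x509 -noout -subject -enddate
```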
```bash export APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE=$(cat appgw.pfx | base64 | tr -d '\n') diff --git a/03-microsoft-entra-id.md b/03-microsoft-entra-id.md index c5348198..25e27741 100644 --- a/03-microsoft-entra-id.md +++ b/03-microsoft-entra-id.md @@ -93,7 +93,7 @@ AKS supports backing Kubernetes with Microsoft Entra ID in two different modalit ### Azure RBAC *[Preferred]* -If you are using a single tenant for this walk-through, the cluster deployment step later will take care of the necessary role assignments for the groups created above. Specifically, in the above steps, you created the Microsoft Entra security group `cluster-ns-a0008-readers-bu0001a000800` that is going to be a namespace reader in namespace `a0008` and the Microsoft Entra security group `cluster-admins-bu0001a000800` is going to contain cluster admins. Those group Object IDs will be associated to the 'Azure Kubernetes Service RBAC Reader' and 'Azure Kubernetes Service RBAC Cluster Admin' RBAC role respectively, scoped to their proper level within the cluster. +If you are using a single tenant for this walk-through, the cluster deployment step later will take care of the necessary role assignments for the groups created above. Specifically, in the above steps, you created the Microsoft Entra security group `cluster-ns-a0008-readers-bu0001a000800` that is going to be a namespace reader in namespace `a0008` and the Microsoft Entra security group `cluster-admins-bu0001a000800` is going to contain cluster admins. Those group Object IDs will be associated with the 'Azure Kubernetes Service RBAC Reader' and 'Azure Kubernetes Service RBAC Cluster Admin' RBAC roles respectively, scoped to their proper level within the cluster. Using Azure RBAC as your authorization approach is ultimately preferred as it allows for the unified management and access control across Azure Resources, AKS, and Kubernetes resources. At the time of this writing there are four [Azure RBAC roles](https://learn.microsoft.com/azure/aks/manage-azure-rbac#create-role-assignments-for-users-to-access-cluster) that represent typical cluster access patterns. diff --git a/05-bootstrap-prep.md b/05-bootstrap-prep.md index bcc83ffd..4f300b0c 100644 --- a/05-bootstrap-prep.md +++ b/05-bootstrap-prep.md @@ -6,7 +6,7 @@ Now that the [hub-spoke network is provisioned](./04-networking.md), the next st Container registries often have a lifecycle that extends beyond the scope of a single cluster. They can be scoped broadly at organizational or business unit levels, or can be scoped at workload levels, but usually are not directly tied to the lifecycle of any specific cluster instance. For example, you may do blue/green *cluster instance* deployments, both using the same container registry. Even though clusters came and went, the registry stays intact. -- Azure Container Registry is deployed, and exposed as a private endpoint. +- Azure Container Registry is deployed and exposed as a private endpoint. - Azure Container Registry is populated with images your cluster will need as part of its bootstrapping process. - Log Analytics is deployed and Azure Container Registry platform logging is configured. This workspace will be used by your cluster as well. @@ -14,11 +14,11 @@ The role of this pre-existing Azure Container Registry instance is made more pro ### Bootstrapping method -We'll be bootstrapping this cluster with the Flux GitOps agent as installed as an AKS extension. 
This specific choice does not imply that Flux, or GitOps in general, is the only approach to bootstrapping. Consider your organizational familiarity and acceptance of tooling like this and decide whether cluster bootstrapping should be performed with GitOps or via your deployment pipelines. If you are running a fleet of clusters, a GitOps approach is highly recommended for uniformity and easier governance. When running only a few clusters, GitOps might be seen as "too much" and you might instead opt for integrating that process into one or more deployment pipelines to ensure bootstrapping takes place. No matter which way you go, you'll need your bootstrapping artifacts ready to go before you start your cluster deployment so that you can minimize the time between cluster deployment and bootstrapping. Using the Flux AKS extension allows your cluster to start already bootstrapped and sets you up with a solid management foundation going forward. +We'll be bootstrapping this cluster with the Flux GitOps agent installed as an AKS extension. This specific choice does not imply that Flux, or GitOps in general, is the only approach to bootstrapping. Consider your organizational familiarity and acceptance of tooling like this and decide whether cluster bootstrapping should be performed with GitOps or via your deployment pipelines. If you are running a fleet of clusters, a GitOps approach is highly recommended for uniformity and easier governance. When running only a few clusters, GitOps might be seen as "too much" and you might instead opt for integrating that process into one or more deployment pipelines to ensure bootstrapping takes place. No matter which way you go, you'll need your bootstrapping artifacts ready to go before you start your cluster deployment so that you can minimize the time between cluster deployment and bootstrapping. Using the Flux AKS extension allows your cluster to start already bootstrapped and sets you up with a solid management foundation going forward. ### Additional resources -In addition to Azure Container Registry being deployed to support bootstrapping, this is where any other resources that are considered not tied to the lifecycle of an individual cluster is deployed. Azure Container Registry is one example as talked about above. Another example could be an AKS Backup Vault and backup artifacts storage account which likely would exist prior to and after any individual AKS cluster's existence. When designing your pipelines, ensure to isolate components by their lifecycle watch for singletons in an architecture. These are typically resources like regional logging sinks, supporting global routing infrastructure, and so on. This is in contrast with potentially transient/replaceable components, like the AKS cluster itself. *This implementation does not represent a complete separation of stamp vs regional resources, but is fairly close. Deviations are strictly for ease of deployment in this walkthrough instead of as examples of guidance.* +In addition to Azure Container Registry being deployed to support bootstrapping, this is where any other resources that are considered not tied to the lifecycle of an individual cluster are deployed. Azure Container Registry is one example as talked about above. Another example could be an AKS Backup Vault and backup artifacts storage account which likely would exist prior to and after any individual AKS cluster's existence. 
When designing your pipelines, be sure to isolate components by their lifecycle and watch for singletons in an architecture. These are typically resources like regional logging sinks, supporting global routing infrastructure, and so on. This is in contrast with potentially transient/replaceable components, like the AKS cluster itself. *This implementation does not represent a complete separation of stamp vs regional resources but is fairly close. Deviations are strictly for ease of deployment in this walkthrough instead of as examples of guidance.* ## Steps diff --git a/06-aks-cluster.md b/06-aks-cluster.md index 40a2e96e..063f9df3 100644 --- a/06-aks-cluster.md +++ b/06-aks-cluster.md @@ -6,7 +6,7 @@ Now that your [Azure Container Registry instance is deployed and ready to suppor 1. Indicate your bootstrapping repo. - > If you cloned this repo, then the value will be the original mspnp GitHub organization's repo, which will mean that your cluster will be bootstrapped using public container images. If instead you forked this repo, then the GitOps repo will be your own repo, and your cluster will be bootstrapped using container images references based on the values in your repo's manifest files. On the prior instruction page you had the opportunity to update those manifests to use your Azure Container Registry instance. For guidance on using a private bootstrapping repo, see [Private bootstrapping repository](./cluster-manifests/README.md#private-bootstrapping-repository). + > If you cloned this repo, then the value will be the original mspnp GitHub organization's repo, which means that your cluster will be bootstrapped using public container images. If you forked this repo, then the GitOps repo will be your own repo, and your cluster will be bootstrapped using container image references based on the values in your repo's manifest files. On the prior instruction page, you had the opportunity to update those manifests to use your Azure Container Registry instance. For guidance on using a private bootstrapping repo, see [Private bootstrapping repository](./cluster-manifests/README.md#private-bootstrapping-repository). ```bash GITOPS_REPOURL=$(git config --get remote.origin.url) @@ -17,14 +17,14 @@ Now that your [Azure Container Registry instance is deployed and ready to suppor ``` 1. Deploy the cluster ARM template. - :exclamation: By default, this deployment will allow unrestricted access to your cluster's API Server. You can limit access to the API Server to a set of well-known IP addresses (I.,e. a jump box subnet (connected to by Azure Bastion), build agents, or any other networks you'll administer the cluster from) by setting the `clusterAuthorizedIPRanges` parameter in all deployment options. This setting will also affect traffic originating from within the cluster trying to use the API server, so you will also need to include *all* of the public IPs used by your egress Azure Firewall. For more information, see [Secure access to the API server using authorized IP address ranges](https://learn.microsoft.com/azure/aks/api-server-authorized-ip-ranges#create-an-aks-cluster-with-api-server-authorized-ip-ranges-enabled). + :exclamation: By default, this deployment will allow unrestricted access to your cluster's API Server. 
You can limit access to the API Server to a set of well-known IP addresses (i.e., a jump box subnet (connected to by Azure Bastion), build agents, or any other networks you'll administer the cluster from) by setting the `clusterAuthorizedIPRanges` parameter in all deployment options. This setting will also affect traffic originating from within the cluster trying to use the API server, so you will also need to include *all* of the public IPs used by your egress Azure Firewall. For more information, see [Secure access to the API server using authorized IP address ranges](https://learn.microsoft.com/azure/aks/api-server-authorized-ip-ranges#create-an-aks-cluster-with-api-server-authorized-ip-ranges-enabled). ```bash # [This takes about 18 minutes.] az deployment group create -g rg-bu0001a0008 -f cluster-stamp.bicep -p targetVnetResourceId=${RESOURCEID_VNET_CLUSTERSPOKE_AKS_BASELINE} clusterAdminMicrosoftEntraGroupObjectId=${MEIDOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE} a0008NamespaceReaderMicrosoftEntraGroupObjectId=${MEIDOBJECTID_GROUP_A0008_READER_AKS_BASELINE} k8sControlPlaneAuthorizationTenantId=${TENANTID_K8SRBAC_AKS_BASELINE} appGatewayListenerCertificate=${APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE} aksIngressControllerCertificate=${AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64_AKS_BASELINE} domainName=${DOMAIN_NAME_AKS_BASELINE} gitOpsBootstrappingRepoHttpsUrl=${GITOPS_REPOURL} gitOpsBootstrappingRepoBranch=${GITOPS_CURRENT_BRANCH_NAME} ``` - > Alteratively, you could have updated the [`azuredeploy.parameters.prod.json`](./azuredeploy.parameters.prod.json) file and deployed as above, using `-p "@azuredeploy.parameters.prod.json"` instead of providing the individual key-value pairs. + > Alternatively, you could have updated the [`azuredeploy.parameters.prod.json`](./azuredeploy.parameters.prod.json) file and deployed as above, using `-p "@azuredeploy.parameters.prod.json"` instead of providing the individual key-value pairs. ## Container registry note @@ -34,7 +34,7 @@ This deployment creates an SLA-backed Azure Container Registry for your cluster' ## Application Gateway placement -Azure Application Gateway, for this reference implementation, is placed in the same virtual network as the cluster nodes (isolated by subnets and related NSGs). This facilitates direct network line-of-sight from Application Gateway to the cluster's private load balancer and still allows for strong network boundary control. More importantly, this aligns with cluster operator team owning the point of ingress. Some organizations may instead use a perimeter network in which Application Gateway is managed centrally which resides in an entirely separated virtual network. That topology is also fine, but you'll need to ensure there is secure and limited routing between that perimeter network and your internal private load balancer for your cluster. Also, there will be additional coordination necessary between the cluster/workload operators and the team owning the Application Gateway. +Azure Application Gateway, for this reference implementation, is placed in the same virtual network as the cluster nodes (isolated by subnets and related NSGs). This facilitates direct network line-of-sight from Application Gateway to the cluster's private load balancer and still allows for strong network boundary control. More importantly, this aligns with cluster operator team owning the point of ingress. 
Some organizations may instead use a perimeter network in which Application Gateway is managed centrally, which resides in an entirely separated virtual network. That topology is also fine, but you'll need to ensure there is secure and limited routing between that perimeter network and your internal private load balancer for your cluster. Also, there will be additional coordination necessary between the cluster/workload operators and the team owning the Application Gateway. ### Next step diff --git a/07-bootstrap-validation.md b/07-bootstrap-validation.md index 9080c9f9..5d01aa87 100644 --- a/07-bootstrap-validation.md +++ b/07-bootstrap-validation.md @@ -4,7 +4,7 @@ Now that the [AKS cluster](./06-aks-cluster.md) has been deployed, the next step ## Steps -GitOps allows a team to author Kubernetes manifest files, persist them in their Git repo, and have them automatically apply to their cluster as changes occur. This reference implementation is focused on the baseline cluster, so Flux is managing cluster-level concerns. This is distinct from workload-level concerns, which would be possible as well to manage via Flux, and would typically be done by additional Flux configuration in the cluster. The namespace `cluster-baseline-settings` will be used to provide a logical division of the cluster bootstrap configuration from workload configuration. Examples of manifests that are applied: +GitOps allows a team to author Kubernetes manifest files, persist them in their Git repo, and have them automatically apply to their cluster as changes occur. This reference implementation is focused on the baseline cluster, so Flux is managing cluster-level concerns. This is distinct from workload-level concerns, which would be possible as well to manage via Flux and would typically be done by additional Flux configuration in the cluster. The namespace `cluster-baseline-settings` will be used to provide a logical division of the cluster bootstrap configuration from workload configuration. Examples of manifests that are applied: - Cluster role bindings for the AKS-managed Microsoft Entra ID integration - Cluster-wide configuration of Azure Monitor for Containers @@ -62,7 +62,7 @@ GitOps allows a team to author Kubernetes manifest files, persist them in their 1. Validate your cluster is bootstrapped. - The bootstrapping process that already happened due to the usage of the Flux extension for AKS has set up the following, amoung other things + The bootstrapping process that already happened due to the usage of the Flux extension for AKS has set up the following, among other things - the workload's namespace named `a0008` @@ -72,7 +72,7 @@ GitOps allows a team to author Kubernetes manifest files, persist them in their These commands will show you results that were due to the automatic bootstrapping process your cluster experienced due to the Flux GitOps extension. This content mirrors the content found in [`cluster-manifests`](./cluster-manifests), and commits made there will reflect in your cluster within minutes of making the change. -The end result of all of this is that `kubectl` was not required for any part of the bootstrapping process of a cluster. The usage of `kubectl`-based access should be reserved for emergency break-fix situations and not for day-to-day configuration operations on this cluster. Between templates for Azure Resource definitions, and the bootstrapping of manifests via the GitOps extension, all normal configuration activities can be performed without the need to use `kubectl`. 
You will however see us use it for the upcoming workload deployment. This is because the SDLC component of workloads are not in scope for this reference implementation, as this is focused the infrastructure and baseline configuration. +The result is that `kubectl` was not required for any part of the bootstrapping process of a cluster. The usage of `kubectl`-based access should be reserved for emergency break-fix situations and not for day-to-day configuration operations on this cluster. Between templates for Azure Resource definitions, and the bootstrapping of manifests via the GitOps extension, all normal configuration activities can be performed without the need to use `kubectl`. You will however see us use it for the upcoming workload deployment. This is because the SDLC component of workloads is not in scope for this reference implementation, as this is focused on the infrastructure and baseline configuration. ## Alternatives diff --git a/09-secret-management-and-ingress-controller.md b/09-secret-management-and-ingress-controller.md index 9e5c3877..542f7082 100644 --- a/09-secret-management-and-ingress-controller.md +++ b/09-secret-management-and-ingress-controller.md @@ -75,7 +75,7 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi 1. Wait for Traefik to be ready. - > During Traefik's pod creation process, Azure Key Vault will be accessed to get the required certs needed for pod volume mount (csi). This sometimes takes a bit of time, but will eventually succeed if properly configured. + > During Traefik's pod creation process, Azure Key Vault will be accessed to get the required certs needed for pod volume mount (csi). This sometimes takes a bit of time but will eventually succeed if properly configured. ```bash kubectl wait -n a0008 --for=condition=ready pod --selector=app.kubernetes.io/name=traefik-ingress-ilb --timeout=90s diff --git a/11-validation.md b/11-validation.md index 77498498..52ef965d 100644 --- a/11-validation.md +++ b/11-validation.md @@ -1,6 +1,6 @@ # End-to-end validation -Now that you have a workload deployed, the [ASP.NET Core sample web app](./10-workload.md), you can start validating and exploring this reference implementation of the [AKS baseline cluster](./). In addition to the workload, there are some observability validation you can perform as well. +Now that you have a workload deployed, the [ASP.NET Core sample web app](./10-workload.md), you can start validating and exploring this reference implementation of the [AKS baseline cluster](./). In addition to the workload, there is some observability validation you can perform as well. ## Validate the web app @@ -28,7 +28,7 @@ This section will help you to validate the workload is exposed correctly and res > :bulb: Remember to include the protocol prefix `https://` in the URL you type in the address bar of your browser. A TLS warning will be present due to using a self-signed certificate. You can ignore it or import the self-signed cert (`appgw.pfx`) to your user's trusted root store. - Refresh the web page a couple of times and observe the value `Host name` displayed at the bottom of the page. As the Traefik Ingress Controller balances the requests between the two pods hosting the web page, the host name will change from one pod name to the other throughtout your queries. + Refresh the web page a couple of times and observe the value `Host name` displayed at the bottom of the page. 
As the Traefik Ingress Controller balances the requests between the two pods hosting the web page, the host name will change from one pod name to the other throughout your queries. ## Validate reader access to the a0008 namespace. *Optional.* diff --git a/12-cleanup.md b/12-cleanup.md index 0e78515c..bfef4401 100644 --- a/12-cleanup.md +++ b/12-cleanup.md @@ -34,7 +34,7 @@ Before you can automate a process, it's important to experience the process in a Now that you understand the components involved and have identified the shared responsibilities between your team and your greater organization, you are encouraged to build repeatable deployment processes around your final infrastructure and cluster bootstrapping. Refer to the [AKS baseline automation guidance](https://github.com/Azure/aks-baseline-automation#aks-baseline-automation) to learn how GitHub Actions combined with Infrastructure as Code can be used to facilitate this automation. That guidance is based on the same architecture foundations you've walked through here. -> Note: The [AKS baseline automation guidance](https://github.com/Azure/aks-baseline-automation#aks-baseline-automation) implementation strives to stay in sync with this repo, but may slightly deviate in various decisions made, may introduce new features, or not yet have a feature that is used in this repo. The are functionally aligned by design, but not necessarily identical. Use that repo to explore the automation potential, while this repo is used for the core architectural guidance. +> Note: The [AKS baseline automation guidance](https://github.com/Azure/aks-baseline-automation#aks-baseline-automation) implementation strives to stay in sync with this repo, but may slightly deviate in various decisions made, may introduce new features, or not yet have a feature that is used in this repo. They are functionally aligned by design, but not necessarily identical. Use that repo to explore the automation potential, while this repo is used for the core architectural guidance. ### Next step diff --git a/README.md b/README.md index cad6f344..78716c68 100644 --- a/README.md +++ b/README.md @@ -51,7 +51,7 @@ Also do not forget to view the [detailed architecture diagram](/networking/aks-b ## Deploy the reference implementation -A deployment of AKS-hosted workloads typically experiences a separation of duties and lifecycle management in the area of prerequisites, the host network, the cluster infrastructure, and finally the workload itself. This reference implementation is similar. Also, be aware our primary purpose is to illustrate the topology and decisions of a baseline cluster. We feel a "step-by-step" flow will help you learn the pieces of the solution and give you insight into the relationship between them. Ultimately, lifecycle/SDLC management of your cluster and its dependencies will depend on your situation (team roles, organizational standards, and so on), and will be implemented as appropriate for your needs. +A deployment of AKS-hosted workloads typically experiences a separation of duties and lifecycle management in the areas of prerequisites, the host network, the cluster infrastructure, and finally the workload itself. This reference implementation is similar. Also, be aware our primary purpose is to illustrate the topology and decisions of a baseline cluster. We feel a "step-by-step" flow will help you learn the pieces of the solution and give you insight into the relationship between them. 
Ultimately, lifecycle/SDLC management of your cluster and its dependencies will depend on your situation (team roles, organizational standards, and so on), and will be implemented as appropriate for your needs. **Please start this learning journey in the *Preparing for the cluster* section.** If you follow this through to the end, you'll have our recommended baseline cluster installed, with an end-to-end sample workload running for you to reference in your own Azure subscription. @@ -65,7 +65,7 @@ There are considerations that must be addressed before you start deploying your ### 2. Build target network -Microsoft recommends AKS be deploy into a carefully planned network; sized appropriately for your needs and with proper network observability. Organizations typically favor a traditional hub-spoke model, which is reflected in this implementation. While this is a standard hub-spoke model, there are fundamental sizing and portioning considerations included that should be understood. +Microsoft recommends AKS be deployed into a carefully planned network; sized appropriately for your needs and with proper network observability. Organizations typically favor a traditional hub-spoke model, which is reflected in this implementation. While this is a standard hub-spoke model, there are fundamental sizing and portioning considerations included that should be understood. - [ ] [Build the hub-spoke network](./04-networking.md) @@ -101,7 +101,7 @@ Most of the Azure resources deployed in the prior steps will incur ongoing charg ## Preview and additional features -Kubernetes and, by extension, AKS are fast-evolving products. The [AKS roadmap](https://aka.ms/AKS/Roadmap) shows how quick the product is changing. This reference implementation does take dependencies on select preview features which the AKS team describes as "Shipped & Improving." The rational behind that is that many of the preview features stay in that state for only a few months before entering GA. If you are just artchitecting your cluster today, by the time you're ready for production, there is a good chance that many of the preview features are nearing or will have hit GA. +Kubernetes and, by extension, AKS are fast-evolving products. The [AKS roadmap](https://aka.ms/AKS/Roadmap) shows how quickly the product is changing. This reference implementation does take dependencies on select preview features which the AKS team describes as "Shipped & Improving." The rationale behind that is that many of the preview features stay in that state for only a few months before entering GA. If you are just architecting your cluster today, by the time you're ready for production, there is a good chance that many of the preview features are nearing or will have hit GA. This implementation will not include every preview feature, but instead only those that add significant value to a general-purpose cluster. There are some additional preview features you may wish to evaluate in preproduction clusters that augment your posture around security, manageability, and so on. As these features come out of preview, this reference implementation may be updated to incorporate them. Consider trying out and providing feedback on the following: