diff --git a/docs/design/get-logs.md b/docs/design/get-logs.md deleted file mode 100644 index 5167908c40..0000000000 --- a/docs/design/get-logs.md +++ /dev/null @@ -1,58 +0,0 @@ -# Logs Retrieval - -## Motivation - -People new to Kubernetes (or Linux) may find it challenging to retrieve the relevant logs needed to troubleshoot an AKS Engine operation failure. - -Hence, it would be convenient for that audience to embed log collection inside AKS Engine itself. - -## High level description - -Just for consistency, the command interface could be similar to `rotate-certs`’ interface: - -``` -aks-engine get-logs - --location ${AZURE_LOCATION} - --api-model _out/prefix/apimodel.json - --resource-group prefix - --output-directory _out/prefix/logs_${STAMP} - --client-id ${AZURE_CLIENT_ID} - --client-secret ${AZURE_CLIENT_SECRET} - --subscription-id ${AZURE_CLIENT_SUBSCRIPTION} - --ssh-private-key ~/.ssh/id_rsa - --apiserver prefix.location.cloudapp.azure.com -``` - -After a successful execution, the output directory could contain this (non-exhaustive) list of files: - -- Output of kubectl cluster-info dump -- State of the Azure resources in the resource group -- For each host - - Files in directory /var/log - - Files in directory /var/log/azure - - Files in directory /etc/kubernetes/manifests - - Files in directory /etc/kubernetes/addons - - apimodel.json (no creds) - - azure.json (no creds) - - kubelet journal - - container runtime journal - - etcd journal - - test DNS resolution - - test metadata endpoint response - -## Implementation - -Just like `rotate-certs`, this new command would SSH into each cluster host (Linux and/or Windows) and execute a “collect-logs” script. Additionally, we will have to write a small function that downloads, over SCP, a tar/zip archive containing the files produced by the log collection script. 
- -Because this would be expected to work in an air-gapped environment, we should either: - -- Bake the required script/s onto the VHD in a well-known location - - This is already done for Windows nodes: `c:\k\debug\collect-windows-logs.ps1` -- Add an extra flag to pass the required script/s to the CLI command - - `--log-collection-script ~/collectlogs.sh` - -In connected environments, AKS Engine or each host can potentially download the latest and greatest version of the log collection script. - -Files could be uploaded to a storage account container to simplify collaboration. - -Additionally, for this to work on Windows nodes we have to set up an SSH server by default. diff --git a/docs/topics/clusterdefinitions.md b/docs/topics/clusterdefinitions.md index cecab1683c..8db19d6086 100644 --- a/docs/topics/clusterdefinitions.md +++ b/docs/topics/clusterdefinitions.md @@ -128,6 +128,7 @@ $ aks-engine get-versions | [cilium](https://docs.cilium.io/en/v1.4/kubernetes/policy/#ciliumnetworkpolicy) | true if networkPolicy is "cilium"; currently validated against Kubernetes v1.13, v1.14, and v1.15 | 0 | A NetworkPolicy CRD implementation by the Cilium project (currently supports v1.4) | | [flannel](https://coreos.com/flannel/docs/0.8.0/index.html) | false | 0 | An addon that delivers flannel: a virtual network that gives a subnet to each host for use with container runtimes. If `networkPlugin` is set to `"flannel"` this addon will be enabled automatically. Not compatible with any other `networkPlugin` or `networkPolicy`. | | [csi-secrets-store](../../examples/addons/csi-secrets-store/README.md) | true (for 1.16+ clusters) | as many as linux agent nodes | Integrates secrets stores (Azure keyvault) via a [Container Storage Interface (CSI)](https://kubernetes-csi.github.io/docs/) volume. | +| [azure-arc-onboarding](../../examples/addons/azure-arc-onboarding/README.md) | false | 7 | Attaches the cluster to Azure Arc enabled Kubernetes. 
| To give a bit more info on the `addons` property: We've tried to expose the basic bits of data that allow useful configuration of these cluster features. Here are some example usage patterns that will unpack what `addons` provide: diff --git a/examples/addons/azure-arc-onboarding/README.md b/examples/addons/azure-arc-onboarding/README.md new file mode 100644 index 0000000000..947004509b --- /dev/null +++ b/examples/addons/azure-arc-onboarding/README.md @@ -0,0 +1,104 @@ +# Azure Arc enabled Kubernetes + +You can attach and configure Kubernetes clusters by using [Azure Arc enabled Kubernetes](https://docs.microsoft.com/azure/azure-arc/kubernetes/overview). +When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal. It will have an Azure Resource Manager ID and a managed identity. +Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource. + +To connect a Kubernetes cluster to Azure, the cluster administrator needs to deploy agents. These agents run in a Kubernetes namespace named `azure-arc` and are standard Kubernetes deployments. The agents are responsible for connectivity to Azure, collecting Azure Arc logs and metrics, and watching for configuration requests. + +You can deploy the Azure Arc agents either as part of the cluster creation process (by including the `azure-arc-onboarding` addon spec in your input `apimodel.json`) or manually using [azure-cli](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster). + +## Azure Arc enabled Kubernetes Addon + +The `azure-arc-onboarding` addon creates a Kubernetes job (in namespace `azure-arc-onboarding`) in charge of deploying the Azure Arc agents. +The following information is required in order to successfully onboard the new cluster. 
+ +| Name | Required | Description | +| ---------------- | -------- | -------------------------------------------------------------------------------------- | +| location | yes | Azure region where the `connectedCluster` ARM resource will be created | +| subscriptionID | yes | Subscription ID where the `connectedCluster` ARM resource will be created | +| tenantID | yes | Tenant ID that owns the specified Subscription | +| resourceGroup | yes | Existing resource group name where the `connectedCluster` ARM resource will be created | +| clusterName | yes | Unique cluster friendly name | +| clientID | yes | Service principal ID with permissions to create resources in target subscription/group | +| clientSecret | yes | Service principal secret | + +Example: + +```json +{ + "apiVersion": "vlabs", + "properties": { + "orchestratorProfile": { + "kubernetesConfig": { + "addons": [ + { + "name": "azure-arc-onboarding", + "enabled": true, + "config": { + "tenantID": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "subscriptionID": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "resourceGroup": "connectedClusters", + "clusterName": "clusterName", + "clientID": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "clientSecret": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "location": "eastus" + } + } + ] + } + } + } +} +``` + +### Validation / Troubleshooting + +To make sure that the onboarding process succeeded, you can either look for the new `connectedCluster` resource in the Azure portal +(ARM ID: `/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.Kubernetes/connectedClusters/{clusterName}`) +or check the status of the agent pods in the `azure-arc` namespace. 
+ +```bash +kubectl get pods -n azure-arc +``` + +If you notice something wrong, the first troubleshooting step would be to inspect the logs produced by the onboarding process: + +```bash +kubectl logs -l job-name=azure-arc-onboarding -n azure-arc-onboarding +``` + +#### Frequent issues + +Potential issues you may find by inspecting the job logs include: + +- Target resource group does not exist +- Cluster name is not unique +- Invalid service principal credentials +- Service principal does not have enough permissions to create resources in target subscription or resource group +- Azure Arc is not available in the desired Azure region + +### Clean up + +You are free to delete the resources that job `azure-arc-onboarding` created in namespace `azure-arc`. + +However, you won't be able to permanently delete the resources created in namespace `azure-arc-onboarding` +until file `arc-onboarding.yaml` is moved out of directory `/etc/kubernetes/addons` (on each control plane node's file system), +as `addon-manager` will otherwise re-create the resources in namespace `azure-arc-onboarding`. + +### Addon Reconfiguration + +There are two different ways to reconfigure the `azure-arc-onboarding` addon once the cluster is deployed. + +The safer, recommended approach is to update, on every control plane node, +the secret resource declared in the addon manifest (`/etc/kubernetes/addons/arc-onboarding.yaml`) +and re-trigger the onboarding process by deleting the `azure-arc-onboarding` namespace. + +A faster but more fragile alternative is to edit the secret using kubectl +(`kubectl edit secret azure-arc-onboarding -n azure-arc-onboarding`) +and re-trigger the onboarding process by deleting the onboarding job +(`kubectl delete job azure-arc-onboarding -n azure-arc-onboarding`). +Keep in mind that your changes will be lost if the secret resource is deleted at any point in the future, +as `addon-manager` will recreate it using the data in `arc-onboarding.yaml`. 
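Since the addon manifest stores each config value with `ContainerConfigBase64`, the values in the secret's `data` map are base64-encoded, so a replacement value must be encoded before it is pasted into the secret. A minimal sketch of encoding a new value (the secret value shown is a hypothetical placeholder, not a real credential):

```shell
# Secret `data` values must be base64-encoded.
# printf avoids the trailing newline that echo would append to the payload.
printf '%s' 'new-client-secret' | base64
```

The resulting string (`bmV3LWNsaWVudC1zZWNyZXQ=` for this placeholder) can then replace the corresponding value, e.g. `CLIENT_SECRET`, when editing the secret.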
+ +More information on how to edit a Kubernetes secret can be found [here](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually). diff --git a/examples/addons/azure-arc-onboarding/kubernetes-arc.json b/examples/addons/azure-arc-onboarding/kubernetes-arc.json new file mode 100644 index 0000000000..e967743bc1 --- /dev/null +++ b/examples/addons/azure-arc-onboarding/kubernetes-arc.json @@ -0,0 +1,50 @@ +{ + "apiVersion": "vlabs", + "properties": { + "orchestratorProfile": { + "orchestratorType": "Kubernetes", + "kubernetesConfig": { + "useManagedIdentity": true, + "addons": [ + { + "name": "azure-arc-onboarding", + "enabled": true, + "config": { + "tenantID": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "subscriptionID": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "resourceGroup": "connectedClusters", + "clusterName": "clusterName", + "clientID": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "clientSecret": "88e66958-71dd-48b9-8fed-99e13b5c0a59", + "location": "eastus" + } + } + ] + } + }, + "masterProfile": { + "count": 1, + "dnsPrefix": "", + "vmSize": "Standard_DS2_v2" + }, + "agentPoolProfiles": [ + { + "name": "agentpool", + "count": 1, + "vmSize": "Standard_DS2_v2", + "availabilityProfile": "VirtualMachineScaleSets", + "storageProfile": "ManagedDisks" + } + ], + "linuxProfile": { + "adminUsername": "azureuser", + "ssh": { + "publicKeys": [ + { + "keyData": "" + } + ] + } + } + } +} \ No newline at end of file diff --git a/parts/k8s/addons/arc-onboarding.yaml b/parts/k8s/addons/arc-onboarding.yaml new file mode 100644 index 0000000000..e69d1e1499 --- /dev/null +++ b/parts/k8s/addons/arc-onboarding.yaml @@ -0,0 +1,102 @@ +--- +apiVersion: v1 +kind: Namespace +metadata: + name: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +--- +apiVersion: v1 +kind: Secret +metadata: + name: azure-arc-onboarding + namespace: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +data: + TENANT_ID: 
{{ContainerConfigBase64 "tenantID"}} + SUBSCRIPTION_ID: {{ContainerConfigBase64 "subscriptionID"}} + RESOURCE_GROUP: {{ContainerConfigBase64 "resourceGroup"}} + CONNECTED_CLUSTER: {{ContainerConfigBase64 "clusterName"}} + LOCATION: {{ContainerConfigBase64 "location"}} + CLIENT_ID: {{ContainerConfigBase64 "clientID"}} + CLIENT_SECRET: {{ContainerConfigBase64 "clientSecret"}} +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: azure-arc-onboarding + namespace: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-admin +subjects: + - kind: ServiceAccount + name: azure-arc-onboarding + namespace: azure-arc-onboarding +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: azure-arc-onboarding + namespace: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +spec: + template: + spec: + serviceAccountName: azure-arc-onboarding + nodeSelector: + kubernetes.io/arch: amd64 + kubernetes.io/os: linux + containers: + - name: azure-arc-onboarding + image: {{ContainerImage "azure-arc-onboarding"}} + env: + - name: TENANT_ID + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: TENANT_ID + - name: SUBSCRIPTION_ID + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: SUBSCRIPTION_ID + - name: RESOURCE_GROUP + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: RESOURCE_GROUP + - name: CONNECTED_CLUSTER + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: CONNECTED_CLUSTER + - name: LOCATION + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: LOCATION + - name: CLIENT_ID + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: CLIENT_ID + - name: CLIENT_SECRET + valueFrom: + secretKeyRef: + name: 
azure-arc-onboarding + key: CLIENT_SECRET + restartPolicy: Never + backoffLimit: 4 diff --git a/pkg/api/addons.go b/pkg/api/addons.go index ba57230c38..fc929e314b 100644 --- a/pkg/api/addons.go +++ b/pkg/api/addons.go @@ -877,6 +877,17 @@ func (cs *ContainerService) setAddonsConfig(isUpgrade bool) { }, } + defaultAzureArcOnboardingAddonsConfig := KubernetesAddon{ + Name: common.AzureArcOnboardingAddonName, + Enabled: to.BoolPtr(DefaultAzureArcOnboardingAddonEnabled), + Containers: []KubernetesContainerSpec{ + { + Name: common.AzureArcOnboardingAddonName, + Image: k8sComponents[common.AzureArcOnboardingAddonName], + }, + }, + } + // Allow folks to simply enable kube-dns at cluster creation time without also requiring that coredns be explicitly disabled if !isUpgrade && o.KubernetesConfig.IsAddonEnabled(common.KubeDNSAddonName) { defaultCorednsAddonsConfig.Enabled = to.BoolPtr(false) @@ -917,6 +928,7 @@ func (cs *ContainerService) setAddonsConfig(isUpgrade bool) { defaultFlannelAddonsConfig, defaultScheduledMaintenanceAddonsConfig, defaultSecretsStoreCSIDriverAddonsConfig, + defaultAzureArcOnboardingAddonsConfig, } // Add default addons specification, if no user-provided spec exists if o.KubernetesConfig.Addons == nil { diff --git a/pkg/api/common/const.go b/pkg/api/common/const.go index de4b4a16a7..d3c2deb3df 100644 --- a/pkg/api/common/const.go +++ b/pkg/api/common/const.go @@ -286,6 +286,8 @@ const ( CSISecretsStoreDriverContainerName = "secrets-store" // CSISecretsStoreProviderAzureContainerName is the name of the provider-azure-installer container in csi-secrets-store addon CSISecretsStoreProviderAzureContainerName = "provider-azure-installer" + // AzureArcOnboardingAddonName is the name of the azure-arc-onboarding addon + AzureArcOnboardingAddonName = "azure-arc-onboarding" ) // Component name consts diff --git a/pkg/api/const.go b/pkg/api/const.go index fd815ed6da..773c031c96 100644 --- a/pkg/api/const.go +++ b/pkg/api/const.go @@ -186,6 +186,8 @@ const ( 
DefaultContainerMonitoringAddonEnabled = false // DefaultIPMasqAgentAddonEnabled enables the ip-masq-agent addon DefaultIPMasqAgentAddonEnabled = true + // DefaultAzureArcOnboardingAddonEnabled determines the aks-engine provided default for enabling the azure-arc-onboarding addon + DefaultAzureArcOnboardingAddonEnabled = false // DefaultPrivateClusterEnabled determines the aks-engine provided default for enabling kubernetes Private Cluster DefaultPrivateClusterEnabled = false // DefaultPrivateClusterHostsConfigAgentEnabled enables the hosts config agent for private cluster diff --git a/pkg/api/k8s_versions.go b/pkg/api/k8s_versions.go index a4183aa5ab..e074d52c3d 100644 --- a/pkg/api/k8s_versions.go +++ b/pkg/api/k8s_versions.go @@ -60,6 +60,7 @@ const ( csiSecretsStoreProviderAzureImageReference string = "k8s/csi/secrets-store/provider-azure:0.0.6" csiSecretsStoreDriverImageReference string = "k8s/csi/secrets-store/driver:v0.0.11" clusterProportionalAutoscalerImageReference string = "mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.7.1" + azureArcOnboardingImageReference string = "arck8sonboarding.azurecr.io/arck8sonboarding:v0.1.0" ) var kubernetesImageBaseDefaultImages = map[string]map[string]string{ @@ -519,6 +520,7 @@ func getK8sVersionComponents(version, kubernetesImageBaseType string, overrides common.NVIDIADevicePluginAddonName: nvidiaDevicePluginImageReference, common.CSISecretsStoreProviderAzureContainerName: csiSecretsStoreProviderAzureImageReference, common.CSISecretsStoreDriverContainerName: csiSecretsStoreDriverImageReference, + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, } case "1.18": ret = map[string]string{ @@ -602,6 +604,7 @@ func getK8sVersionComponents(version, kubernetesImageBaseType string, overrides common.NVIDIADevicePluginAddonName: nvidiaDevicePluginImageReference, common.CSISecretsStoreProviderAzureContainerName: csiSecretsStoreProviderAzureImageReference, common.CSISecretsStoreDriverContainerName: 
csiSecretsStoreDriverImageReference, + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, } case "1.17": ret = map[string]string{ @@ -683,6 +686,7 @@ func getK8sVersionComponents(version, kubernetesImageBaseType string, overrides common.NVIDIADevicePluginAddonName: nvidiaDevicePluginImageReference, common.CSISecretsStoreProviderAzureContainerName: csiSecretsStoreProviderAzureImageReference, common.CSISecretsStoreDriverContainerName: csiSecretsStoreDriverImageReference, + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, } case "1.16": ret = map[string]string{ @@ -760,6 +764,7 @@ func getK8sVersionComponents(version, kubernetesImageBaseType string, overrides common.NVIDIADevicePluginAddonName: nvidiaDevicePluginImageReference, common.CSISecretsStoreProviderAzureContainerName: csiSecretsStoreProviderAzureImageReference, common.CSISecretsStoreDriverContainerName: csiSecretsStoreDriverImageReference, + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, } case "1.15": ret = map[string]string{ @@ -833,6 +838,7 @@ func getK8sVersionComponents(version, kubernetesImageBaseType string, overrides "gchighthreshold": strconv.Itoa(DefaultKubernetesGCHighThreshold), "gclowthreshold": strconv.Itoa(DefaultKubernetesGCLowThreshold), common.NVIDIADevicePluginAddonName: nvidiaDevicePluginImageReference, + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, } case "1.14": ret = map[string]string{ diff --git a/pkg/api/k8s_versions_test.go b/pkg/api/k8s_versions_test.go index ef7e21b25b..795e3a5f28 100644 --- a/pkg/api/k8s_versions_test.go +++ b/pkg/api/k8s_versions_test.go @@ -42,6 +42,7 @@ func TestGetK8sVersionComponents(t *testing.T) { common.PauseComponentName: pauseImageReference, common.TillerAddonName: tillerImageReference, common.ReschedulerAddonName: getDefaultImage(common.ReschedulerAddonName, kubernetesImageBaseType), + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, 
common.ACIConnectorAddonName: virtualKubeletImageReference, common.AzureCNINetworkMonitorAddonName: azureCNINetworkMonitorImageReference, common.ClusterAutoscalerAddonName: k8sComponent[common.ClusterAutoscalerAddonName], @@ -131,6 +132,7 @@ func TestGetK8sVersionComponents(t *testing.T) { common.PauseComponentName: pauseImageReference, common.TillerAddonName: tillerImageReference, common.ReschedulerAddonName: getDefaultImage(common.ReschedulerAddonName, kubernetesImageBaseType), + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, common.ACIConnectorAddonName: virtualKubeletImageReference, common.AzureCNINetworkMonitorAddonName: azureCNINetworkMonitorImageReference, common.ClusterAutoscalerAddonName: k8sComponent[common.ClusterAutoscalerAddonName], @@ -216,6 +218,7 @@ func TestGetK8sVersionComponents(t *testing.T) { common.PauseComponentName: pauseImageReference, common.TillerAddonName: tillerImageReference, common.ReschedulerAddonName: getDefaultImage(common.ReschedulerAddonName, kubernetesImageBaseType), + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, common.ACIConnectorAddonName: virtualKubeletImageReference, common.AzureCNINetworkMonitorAddonName: azureCNINetworkMonitorImageReference, common.ClusterAutoscalerAddonName: k8sComponent[common.ClusterAutoscalerAddonName], @@ -298,6 +301,7 @@ func TestGetK8sVersionComponents(t *testing.T) { common.PauseComponentName: pauseImageReference, common.TillerAddonName: tillerImageReference, common.ReschedulerAddonName: getDefaultImage(common.ReschedulerAddonName, kubernetesImageBaseType), + common.AzureArcOnboardingAddonName: azureArcOnboardingImageReference, common.ACIConnectorAddonName: virtualKubeletImageReference, common.AzureCNINetworkMonitorAddonName: azureCNINetworkMonitorImageReference, common.ClusterAutoscalerAddonName: k8sComponent[common.ClusterAutoscalerAddonName], diff --git a/pkg/api/vlabs/validate.go b/pkg/api/vlabs/validate.go index 5420547114..e94fb58a04 100644 --- 
a/pkg/api/vlabs/validate.go +++ b/pkg/api/vlabs/validate.go @@ -851,6 +851,10 @@ func (a *Properties) validateAddons() error { if !common.IsKubernetesVersionGe(a.OrchestratorProfile.OrchestratorVersion, "1.16.0") { return errors.Errorf("%s add-on can only be used in 1.16+", addon.Name) } + case common.AzureArcOnboardingAddonName: + if err := addon.validateArcAddonConfig(); err != nil { + return err + } } } else { // Validation for addons if they are disabled @@ -2048,3 +2052,35 @@ func (a *Properties) validateAzureStackSupport() error { } return nil } + +func (a *KubernetesAddon) validateArcAddonConfig() error { + if a.Config == nil { + a.Config = make(map[string]string) + } + errs := []string{} + if a.Config["location"] == "" { + errs = append(errs, "azure-arc-onboarding addon configuration must have a 'location' property") + } + if a.Config["tenantID"] == "" { + errs = append(errs, "azure-arc-onboarding addon configuration must have a 'tenantID' property") + } + if a.Config["subscriptionID"] == "" { + errs = append(errs, "azure-arc-onboarding addon configuration must have a 'subscriptionID' property") + } + if a.Config["resourceGroup"] == "" { + errs = append(errs, "azure-arc-onboarding addon configuration must have a 'resourceGroup' property") + } + if a.Config["clusterName"] == "" { + errs = append(errs, "azure-arc-onboarding addon configuration must have a 'clusterName' property") + } + if a.Config["clientID"] == "" { + errs = append(errs, "azure-arc-onboarding addon configuration must have a 'clientID' property") + } + if a.Config["clientSecret"] == "" { + errs = append(errs, "azure-arc-onboarding addon configuration must have a 'clientSecret' property") + } + if len(errs) > 0 { + return fmt.Errorf("%s", strings.Join(errs, "; ")) + } + return nil +} diff --git a/pkg/api/vlabs/validate_test.go b/pkg/api/vlabs/validate_test.go index e78705858f..fb36ecab6b 100644 --- a/pkg/api/vlabs/validate_test.go +++ b/pkg/api/vlabs/validate_test.go @@ -5390,3 +5390,31 @@ func 
TestValidateContainerRuntimeConfig(t *testing.T) { }) } } + +func TestValidateConnectedClusterProfile(t *testing.T) { + addon := &KubernetesAddon{} + + t.Run("incomplete connected cluster profile", func(t *testing.T) { + err := addon.validateArcAddonConfig() + expected := errors.New("azure-arc-onboarding addon configuration must have a 'location' property; azure-arc-onboarding addon configuration must have a 'tenantID' property; azure-arc-onboarding addon configuration must have a 'subscriptionID' property; azure-arc-onboarding addon configuration must have a 'resourceGroup' property; azure-arc-onboarding addon configuration must have a 'clusterName' property; azure-arc-onboarding addon configuration must have a 'clientID' property; azure-arc-onboarding addon configuration must have a 'clientSecret' property") + if !helpers.EqualError(err, expected) { + t.Errorf("expected error: %v, got: %v", expected, err) + } + }) + + addon.Config = make(map[string]string) + addon.Config["location"] = "location" + addon.Config["tenantID"] = "tenantID" + addon.Config["subscriptionID"] = "subscriptionID" + addon.Config["resourceGroup"] = "resourceGroup" + addon.Config["clusterName"] = "clusterName" + addon.Config["clientID"] = "clientID" + addon.Config["clientSecret"] = "clientSecret" + + t.Run("complete connected cluster profile", func(t *testing.T) { + err := addon.validateArcAddonConfig() + if err != nil { + t.Errorf("error not expected, got: %v", err) + } + }) +} diff --git a/pkg/engine/artifacts.go b/pkg/engine/artifacts.go index 356a10de25..556294113a 100644 --- a/pkg/engine/artifacts.go +++ b/pkg/engine/artifacts.go @@ -235,6 +235,11 @@ func kubernetesAddonSettingsInit(p *api.Properties) map[string]kubernetesCompone base64Data: k.GetAddonScript(common.SecretsStoreCSIDriverAddonName), destinationFile: secretsStoreCSIDriverAddonDestinationFileName, }, + common.AzureArcOnboardingAddonName: { + sourceFile: connectedClusterAddonSourceFilename, + base64Data: 
k.GetAddonScript(common.AzureArcOnboardingAddonName), + destinationFile: connectedClusterAddonDestinationFilename, + }, } } diff --git a/pkg/engine/const.go b/pkg/engine/const.go index a34188df03..5cc6249f20 100644 --- a/pkg/engine/const.go +++ b/pkg/engine/const.go @@ -259,6 +259,8 @@ const ( scheduledMaintenanceAddonDestinationFilename string = "scheduled-maintenance-deployment.yaml" secretsStoreCSIDriverAddonSourceFileName string = "secrets-store-csi-driver.yaml" secretsStoreCSIDriverAddonDestinationFileName string = "secrets-store-csi-driver.yaml" + connectedClusterAddonSourceFilename string = "arc-onboarding.yaml" + connectedClusterAddonDestinationFilename string = "arc-onboarding.yaml" ) // components source and destination file references diff --git a/pkg/engine/engine.go b/pkg/engine/engine.go index d9401ae170..4f7b666c6f 100644 --- a/pkg/engine/engine.go +++ b/pkg/engine/engine.go @@ -743,6 +743,9 @@ func getAddonFuncMap(addon api.KubernetesAddon, cs *api.ContainerService) templa "ContainerConfig": func(name string) string { return addon.Config[name] }, + "ContainerConfigBase64": func(name string) string { + return base64.StdEncoding.EncodeToString([]byte(addon.Config[name])) + }, "HasWindows": func() bool { return cs.Properties.HasWindows() }, diff --git a/pkg/engine/templates_generated.go b/pkg/engine/templates_generated.go index 818da87156..cd83981832 100644 --- a/pkg/engine/templates_generated.go +++ b/pkg/engine/templates_generated.go @@ -37,6 +37,7 @@ // ../../parts/k8s/addons/aad-pod-identity.yaml // ../../parts/k8s/addons/aci-connector.yaml // ../../parts/k8s/addons/antrea.yaml +// ../../parts/k8s/addons/arc-onboarding.yaml // ../../parts/k8s/addons/audit-policy.yaml // ../../parts/k8s/addons/azure-cloud-provider.yaml // ../../parts/k8s/addons/azure-cni-networkmonitor.yaml @@ -8416,6 +8417,125 @@ func k8sAddonsAntreaYaml() (*asset, error) { return a, nil } +var _k8sAddonsArcOnboardingYaml = []byte(`--- +apiVersion: v1 +kind: Namespace +metadata: + 
name: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +--- +apiVersion: v1 +kind: Secret +metadata: + name: azure-arc-onboarding + namespace: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +data: + TENANT_ID: {{ContainerConfigBase64 "tenantID"}} + SUBSCRIPTION_ID: {{ContainerConfigBase64 "subscriptionID"}} + RESOURCE_GROUP: {{ContainerConfigBase64 "resourceGroup"}} + CONNECTED_CLUSTER: {{ContainerConfigBase64 "clusterName"}} + LOCATION: {{ContainerConfigBase64 "location"}} + CLIENT_ID: {{ContainerConfigBase64 "clientID"}} + CLIENT_SECRET: {{ContainerConfigBase64 "clientSecret"}} +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: azure-arc-onboarding + namespace: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-admin +subjects: + - kind: ServiceAccount + name: azure-arc-onboarding + namespace: azure-arc-onboarding +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: azure-arc-onboarding + namespace: azure-arc-onboarding + labels: + addonmanager.kubernetes.io/mode: "EnsureExists" +spec: + template: + spec: + serviceAccountName: azure-arc-onboarding + nodeSelector: + kubernetes.io/arch: amd64 + kubernetes.io/os: linux + containers: + - name: azure-arc-onboarding + image: {{ContainerImage "azure-arc-onboarding"}} + env: + - name: TENANT_ID + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: TENANT_ID + - name: SUBSCRIPTION_ID + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: SUBSCRIPTION_ID + - name: RESOURCE_GROUP + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: RESOURCE_GROUP + - name: CONNECTED_CLUSTER + valueFrom: + secretKeyRef: + name: azure-arc-onboarding 
+ key: CONNECTED_CLUSTER + - name: LOCATION + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: LOCATION + - name: CLIENT_ID + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: CLIENT_ID + - name: CLIENT_SECRET + valueFrom: + secretKeyRef: + name: azure-arc-onboarding + key: CLIENT_SECRET + restartPolicy: Never + backoffLimit: 4 +`) + +func k8sAddonsArcOnboardingYamlBytes() ([]byte, error) { + return _k8sAddonsArcOnboardingYaml, nil +} + +func k8sAddonsArcOnboardingYaml() (*asset, error) { + bytes, err := k8sAddonsArcOnboardingYamlBytes() + if err != nil { + return nil, err + } + + info := bindataFileInfo{name: "k8s/addons/arc-onboarding.yaml", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)} + a := &asset{bytes: bytes, info: info} + return a, nil +} + var _k8sAddonsAuditPolicyYaml = []byte(`apiVersion: audit.k8s.io/v1{{ if not (IsKubernetesVersionGe "1.16.0")}}beta1{{end}} kind: Policy omitStages: @@ -28789,6 +28909,7 @@ var _bindata = map[string]func() (*asset, error){ "k8s/addons/aad-pod-identity.yaml": k8sAddonsAadPodIdentityYaml, "k8s/addons/aci-connector.yaml": k8sAddonsAciConnectorYaml, "k8s/addons/antrea.yaml": k8sAddonsAntreaYaml, + "k8s/addons/arc-onboarding.yaml": k8sAddonsArcOnboardingYaml, "k8s/addons/audit-policy.yaml": k8sAddonsAuditPolicyYaml, "k8s/addons/azure-cloud-provider.yaml": k8sAddonsAzureCloudProviderYaml, "k8s/addons/azure-cni-networkmonitor.yaml": k8sAddonsAzureCniNetworkmonitorYaml, @@ -28983,6 +29104,7 @@ var _bintree = &bintree{nil, map[string]*bintree{ "aad-pod-identity.yaml": {k8sAddonsAadPodIdentityYaml, map[string]*bintree{}}, "aci-connector.yaml": {k8sAddonsAciConnectorYaml, map[string]*bintree{}}, "antrea.yaml": {k8sAddonsAntreaYaml, map[string]*bintree{}}, + "arc-onboarding.yaml": {k8sAddonsArcOnboardingYaml, map[string]*bintree{}}, "audit-policy.yaml": {k8sAddonsAuditPolicyYaml, map[string]*bintree{}}, "azure-cloud-provider.yaml": {k8sAddonsAzureCloudProviderYaml, map[string]*bintree{}}, 
"azure-cni-networkmonitor.yaml": {k8sAddonsAzureCniNetworkmonitorYaml, map[string]*bintree{}}, diff --git a/test/e2e/azure/cli.go b/test/e2e/azure/cli.go index 9c3c213d13..34c4fb1d4d 100644 --- a/test/e2e/azure/cli.go +++ b/test/e2e/azure/cli.go @@ -186,14 +186,6 @@ func (a *Account) CreateGroup(name, location string) error { log.Printf("Output:%s\n", out) return err } - r := ResourceGroup{ - Name: name, - Location: location, - Tags: map[string]string{ - "now": now, - }, - } - a.ResourceGroup = r return nil } diff --git a/test/e2e/cluster.sh b/test/e2e/cluster.sh index b259d57cd5..b0912063fd 100755 --- a/test/e2e/cluster.sh +++ b/test/e2e/cluster.sh @@ -9,6 +9,7 @@ WORK_DIR="/aks-engine" MASTER_VM_UPGRADE_SKU="${MASTER_VM_UPGRADE_SKU:-Standard_D4_v3}" AZURE_ENV="${AZURE_ENV:-AzurePublicCloud}" IDENTITY_SYSTEM="${IDENTITY_SYSTEM:-azure_ad}" +ARC_LOCATION="eastus" mkdir -p _output || exit 1 # Assumes we're running from the git root of aks-engine @@ -135,6 +136,10 @@ docker run --rm \ -e SERVICE_MANAGEMENT_VM_DNS_SUFFIX="${SERVICE_MANAGEMENT_VM_DNS_SUFFIX}" \ -e RESOURCE_MANAGER_VM_DNS_SUFFIX="${RESOURCE_MANAGER_VM_DNS_SUFFIX}" \ -e STABILITY_ITERATIONS=${STABILITY_ITERATIONS} \ +-e ARC_CLIENT_ID=${ARC_CLIENT_ID:-$AZURE_CLIENT_ID} \ +-e ARC_CLIENT_SECRET=${ARC_CLIENT_SECRET:-$AZURE_CLIENT_SECRET} \ +-e ARC_SUBSCRIPTION_ID=${ARC_SUBSCRIPTION_ID:-$AZURE_SUBSCRIPTION_ID} \ +-e ARC_LOCATION=${ARC_LOCATION:-$LOCATION} \ "${DEV_IMAGE}" make test-kubernetes || exit 1 if [ "${UPGRADE_CLUSTER}" = "true" ] || [ "${SCALE_CLUSTER}" = "true" ] || [ -n "$ADD_NODE_POOL_INPUT" ] || [ "${GET_CLUSTER_LOGS}" = "true" ]; then @@ -261,6 +266,10 @@ if [ -n "$ADD_NODE_POOL_INPUT" ]; then -e SERVICE_MANAGEMENT_VM_DNS_SUFFIX="${SERVICE_MANAGEMENT_VM_DNS_SUFFIX}" \ -e RESOURCE_MANAGER_VM_DNS_SUFFIX="${RESOURCE_MANAGER_VM_DNS_SUFFIX}" \ -e STABILITY_ITERATIONS=${STABILITY_ITERATIONS} \ + -e ARC_CLIENT_ID=${ARC_CLIENT_ID:-$AZURE_CLIENT_ID} \ + -e 
ARC_CLIENT_SECRET=${ARC_CLIENT_SECRET:-$AZURE_CLIENT_SECRET} \ + -e ARC_SUBSCRIPTION_ID=${ARC_SUBSCRIPTION_ID:-$AZURE_SUBSCRIPTION_ID} \ + -e ARC_LOCATION=${ARC_LOCATION:-$LOCATION} \ ${DEV_IMAGE} make test-kubernetes || exit 1 fi @@ -331,6 +340,10 @@ if [ "${SCALE_CLUSTER}" = "true" ]; then -e SERVICE_MANAGEMENT_VM_DNS_SUFFIX="${SERVICE_MANAGEMENT_VM_DNS_SUFFIX}" \ -e RESOURCE_MANAGER_VM_DNS_SUFFIX="${RESOURCE_MANAGER_VM_DNS_SUFFIX}" \ -e STABILITY_ITERATIONS=${STABILITY_ITERATIONS} \ + -e ARC_CLIENT_ID=${ARC_CLIENT_ID:-$AZURE_CLIENT_ID} \ + -e ARC_CLIENT_SECRET=${ARC_CLIENT_SECRET:-$AZURE_CLIENT_SECRET} \ + -e ARC_SUBSCRIPTION_ID=${ARC_SUBSCRIPTION_ID:-$AZURE_SUBSCRIPTION_ID} \ + -e ARC_LOCATION=${ARC_LOCATION:-$LOCATION} \ ${DEV_IMAGE} make test-kubernetes || exit 1 fi @@ -413,6 +426,10 @@ if [ "${UPGRADE_CLUSTER}" = "true" ]; then -e SERVICE_MANAGEMENT_VM_DNS_SUFFIX="${SERVICE_MANAGEMENT_VM_DNS_SUFFIX}" \ -e RESOURCE_MANAGER_VM_DNS_SUFFIX="${RESOURCE_MANAGER_VM_DNS_SUFFIX}" \ -e STABILITY_ITERATIONS=${STABILITY_ITERATIONS} \ + -e ARC_CLIENT_ID=${ARC_CLIENT_ID:-$AZURE_CLIENT_ID} \ + -e ARC_CLIENT_SECRET=${ARC_CLIENT_SECRET:-$AZURE_CLIENT_SECRET} \ + -e ARC_SUBSCRIPTION_ID=${ARC_SUBSCRIPTION_ID:-$AZURE_SUBSCRIPTION_ID} \ + -e ARC_LOCATION=${ARC_LOCATION:-$LOCATION} \ ${DEV_IMAGE} make test-kubernetes || exit 1 done fi @@ -484,5 +501,9 @@ if [ "${SCALE_CLUSTER}" = "true" ]; then -e SERVICE_MANAGEMENT_VM_DNS_SUFFIX="${SERVICE_MANAGEMENT_VM_DNS_SUFFIX}" \ -e RESOURCE_MANAGER_VM_DNS_SUFFIX="${RESOURCE_MANAGER_VM_DNS_SUFFIX}" \ -e STABILITY_ITERATIONS=${STABILITY_ITERATIONS} \ + -e ARC_CLIENT_ID=${ARC_CLIENT_ID:-$AZURE_CLIENT_ID} \ + -e ARC_CLIENT_SECRET=${ARC_CLIENT_SECRET:-$AZURE_CLIENT_SECRET} \ + -e ARC_SUBSCRIPTION_ID=${ARC_SUBSCRIPTION_ID:-$AZURE_SUBSCRIPTION_ID} \ + -e ARC_LOCATION=${ARC_LOCATION:-$LOCATION} \ ${DEV_IMAGE} make test-kubernetes || exit 1 fi diff --git a/test/e2e/config/config.go b/test/e2e/config/config.go index c09935c3fb..0c8c57c0a3 100644 --- 
a/test/e2e/config/config.go +++ b/test/e2e/config/config.go @@ -55,6 +55,7 @@ type Config struct { SubscriptionID string `envconfig:"SUBSCRIPTION_ID"` ClientID string `envconfig:"CLIENT_ID"` ClientSecret string `envconfig:"CLIENT_SECRET"` + *ArcOnboardingConfig } // CustomCloudConfig holds configurations for custom cloud @@ -80,6 +81,15 @@ type CustomCloudConfig struct { CustomCloudName string `envconfig:"CUSTOM_CLOUD_NAME"` } +// ArcOnboardingConfig holds the azure arc onboarding addon configuration +type ArcOnboardingConfig struct { + ClientID string `envconfig:"ARC_CLIENT_ID" default:""` + ClientSecret string `envconfig:"ARC_CLIENT_SECRET" default:""` + SubscriptionID string `envconfig:"ARC_SUBSCRIPTION_ID" default:""` + Location string `envconfig:"ARC_LOCATION" default:""` + TenantID string `envconfig:"TENANT_ID"` +} + const ( kubernetesOrchestrator = "kubernetes" ) diff --git a/test/e2e/engine/template.go b/test/e2e/engine/template.go index abaf050756..6c872bd960 100644 --- a/test/e2e/engine/template.go +++ b/test/e2e/engine/template.go @@ -68,6 +68,7 @@ type Config struct { EnableTelemetry bool `envconfig:"ENABLE_TELEMETRY" default:"true"` KubernetesImageBase string `envconfig:"KUBERNETES_IMAGE_BASE" default:""` KubernetesImageBaseType string `envconfig:"KUBERNETES_IMAGE_BASE_TYPE" default:""` + *ArcOnboardingConfig ClusterDefinitionPath string // The original template we want to use to build the cluster from. 
ClusterDefinitionTemplate string // This is the template after we splice in the environment variables @@ -85,6 +86,15 @@ type Engine struct { ExpandedDefinition *api.ContainerService // Holds the expanded ClusterDefinition } +// ArcOnboardingConfig holds the azure arc onboarding addon configuration +type ArcOnboardingConfig struct { + ClientID string `envconfig:"ARC_CLIENT_ID" default:""` + ClientSecret string `envconfig:"ARC_CLIENT_SECRET" default:""` + SubscriptionID string `envconfig:"ARC_SUBSCRIPTION_ID" default:""` + Location string `envconfig:"ARC_LOCATION" default:""` + TenantID string `envconfig:"TENANT_ID"` +} + // ParseConfig will return a new engine config struct taking values from env vars func ParseConfig(cwd, clusterDefinition, name string) (*Config, error) { c := new(Config) @@ -296,6 +306,26 @@ func Build(cfg *config.Config, masterSubnetID string, agentSubnetIDs []string, i } } + if len(prop.OrchestratorProfile.KubernetesConfig.Addons) > 0 { + for _, addon := range prop.OrchestratorProfile.KubernetesConfig.Addons { + if addon.Name == common.AzureArcOnboardingAddonName && to.Bool(addon.Enabled) { + if addon.Config == nil { + addon.Config = make(map[string]string) + } + if config.ArcOnboardingConfig != nil { + addon.Config["tenantID"] = config.ArcOnboardingConfig.TenantID + addon.Config["subscriptionID"] = config.ArcOnboardingConfig.SubscriptionID + addon.Config["clientID"] = config.ArcOnboardingConfig.ClientID + addon.Config["clientSecret"] = config.ArcOnboardingConfig.ClientSecret + addon.Config["location"] = config.ArcOnboardingConfig.Location + } + addon.Config["clusterName"] = cfg.Name + addon.Config["resourceGroup"] = fmt.Sprintf("%s-arc", cfg.Name) // set to cfg.Name once Arc is supported in all regions + break + } + } + } + if config.CustomHyperKubeImage != "" { prop.OrchestratorProfile.KubernetesConfig.CustomHyperkubeImage = config.CustomHyperKubeImage } diff --git a/test/e2e/kubernetes/kubernetes_test.go
b/test/e2e/kubernetes/kubernetes_test.go index 8390c23d7b..3405fdd6fe 100644 --- a/test/e2e/kubernetes/kubernetes_test.go +++ b/test/e2e/kubernetes/kubernetes_test.go @@ -2351,7 +2351,7 @@ var _ = Describe("Azure Container Cluster using the Kubernetes Orchestrator", fu }) }) - Describe("after the cluster has been up for awhile", func() { + Describe("after the cluster has been up for a while", func() { It("dns-liveness pod should not have any restarts", func() { pod, err := pod.Get("dns-liveness", "default", podLookupRetries) Expect(err).NotTo(HaveOccurred()) @@ -2613,5 +2613,27 @@ var _ = Describe("Azure Container Cluster using the Kubernetes Orchestrator", fu } } }) + + It("should have arc agents running", func() { + if hasArc, _ := eng.HasAddon(common.AzureArcOnboardingAddonName); hasArc { + By("Checking the onboarding job succeeded") + succeeded, err := job.WaitOnSucceeded("azure-arc-onboarding", "azure-arc-onboarding", 30*time.Second, cfg.Timeout) + Expect(err).NotTo(HaveOccurred()) + Expect(succeeded).To(Equal(true)) + + By("Checking ready status of each pod in namespace azure-arc") + pods, err := pod.GetAll("azure-arc") + Expect(err).NotTo(HaveOccurred()) + Expect(len(pods.Pods)).ToNot(BeZero()) + for _, currentPod := range pods.Pods { + log.Printf("Checking %s - ready: %t, restarts: %d", currentPod.Metadata.Name, currentPod.Status.ContainerStatuses[0].Ready, currentPod.Status.ContainerStatuses[0].RestartCount) + Expect(currentPod.Status.ContainerStatuses[0].Ready).To(BeTrue()) + tooManyRestarts := 5 + Expect(currentPod.Status.ContainerStatuses[0].RestartCount).To(BeNumerically("<", tooManyRestarts)) + } + } else { + Skip("Onboarding connected cluster was not requested") + } + }) }) }) diff --git a/test/e2e/runner.go b/test/e2e/runner.go index e8cb2a7692..ca29516e57 100644 --- a/test/e2e/runner.go +++ b/test/e2e/runner.go @@ -288,8 +288,13 @@ func teardown() { } if cfg.CleanUpOnExit { for _, rg := range rgs { - log.Printf("Deleting Group:%s\n", rg) + 
log.Printf("Deleting Group: %s\n", rg) acct.DeleteGroup(rg, false) } + // Delete once we reuse the cluster group for the connectedCluster resource + if cfg.ArcOnboardingConfig != nil { + log.Printf("Deleting Arc Group: %s\n", fmt.Sprintf("%s-arc", cfg.Name)) + acct.DeleteGroup(fmt.Sprintf("%s-arc", cfg.Name), false) + } } } diff --git a/test/e2e/runner/cli_provisioner.go b/test/e2e/runner/cli_provisioner.go index 24e7b770f6..3416e67a64 100644 --- a/test/e2e/runner/cli_provisioner.go +++ b/test/e2e/runner/cli_provisioner.go @@ -128,6 +128,13 @@ func (cli *CLIProvisioner) provision() error { if err != nil { return errors.Wrap(err, "Error while trying to create resource group") } + cli.Account.ResourceGroup = azure.ResourceGroup{ + Name: cli.Config.Name, + Location: cli.Config.Location, + Tags: map[string]string{ + "now": fmt.Sprintf("now=%v", time.Now().Unix()), + }, + } err = cli.Account.ShowGroupWithRetry(cli.Account.ResourceGroup.Name, 10*time.Second, cli.Config.Timeout) if err != nil { return errors.Wrap(err, "Unable to successfully get the resource group using the az CLI") @@ -205,6 +212,8 @@ func (cli *CLIProvisioner) provision() error { } cli.Engine = eng + cli.EnsureArcResourceGroup() + err = cli.Engine.Write() if err != nil { return errors.Wrap(err, "Error while trying to write Engine Template to disk:%s") @@ -435,3 +444,17 @@ func (cli *CLIProvisioner) FetchActivityLog(acct *azure.Account, logPath string) } return nil } + +// EnsureArcResourceGroup creates the resource group for the connected cluster resource +// Once Arc is supported in all regions, we should delete this method and reuse the cluster resource group +// https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/overview#supported-regions +func (cli *CLIProvisioner) EnsureArcResourceGroup() error { + for _, addon := range cli.Engine.ClusterDefinition.Properties.OrchestratorProfile.KubernetesConfig.Addons { + if addon.Name == common.AzureArcOnboardingAddonName && to.Bool(addon.Enabled) { + if 
err := cli.Account.CreateGroupWithRetry(addon.Config["resourceGroup"], addon.Config["location"], 30*time.Second, cli.Config.Timeout); err != nil { + return errors.Wrapf(err, "Error while trying to create Azure Arc resource group: %s", addon.Config["resourceGroup"]) + } + } + } + return nil +} diff --git a/test/e2e/test_cluster_configs/arc/kubernetes-arc.json b/test/e2e/test_cluster_configs/arc/kubernetes-arc.json new file mode 100644 index 0000000000..047581fe9e --- /dev/null +++ b/test/e2e/test_cluster_configs/arc/kubernetes-arc.json @@ -0,0 +1,66 @@ +{ + "env": {}, + "apiModel": { + "apiVersion": "vlabs", + "properties": { + "orchestratorProfile": { + "orchestratorType": "Kubernetes", + "kubernetesConfig": { + "addons": [ + { + "name": "azure-arc-onboarding", + "enabled": true, + "config": { + "tenantID": "", + "subscriptionID": "", + "resourceGroup": "", + "clusterName": "", + "clientID": "", + "clientSecret": "", + "location": "eastus" + } + } + ] + } + }, + "masterProfile": { + "count": 1, + "dnsPrefix": "", + "vmSize": "Standard_D2_v3" + }, + "agentPoolProfiles": [ + { + "name": "linuxpool1", + "count": 2, + "vmSize": "Standard_D2_v3", + "availabilityProfile": "AvailabilitySet" + }, + { + "name": "windowspool2", + "count": 2, + "vmSize": "Standard_D2_v3", + "availabilityProfile": "AvailabilitySet", + "osType": "Windows" + } + ], + "windowsProfile": { + "adminUsername": "azureuser", + "adminPassword": "replacepassword1234$" + }, + "linuxProfile": { + "adminUsername": "azureuser", + "ssh": { + "publicKeys": [ + { + "keyData": "" + } + ] + } + }, + "servicePrincipalProfile": { + "clientId": "", + "secret": "" + } + } + } +} \ No newline at end of file diff --git a/test/e2e/test_cluster_configs/everything.json b/test/e2e/test_cluster_configs/everything.json index c99c49c768..c6a9885ea1 100644 --- a/test/e2e/test_cluster_configs/everything.json +++ b/test/e2e/test_cluster_configs/everything.json @@ -64,6 +64,19 @@ "min-replicas": "1", "nodes-per-replica": "8" } + 
}, + { + "name": "azure-arc-onboarding", + "enabled": true, + "config": { + "tenantID": "", + "subscriptionID": "", + "resourceGroup": "", + "clusterName": "", + "clientID": "", + "clientSecret": "", + "location": "" + } } ], "components": [
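Reviewer note: the addon-config wiring added to `test/e2e/engine/template.go` above can be summarized by this standalone sketch. The helper name, sample cluster name, and sample values are illustrative only and are not part of the diff; the behavior mirrored is that credentials and location come from the ARC_*-backed config, while the connected-cluster name and its dedicated `<name>-arc` resource group derive from the test cluster name.

```go
package main

import "fmt"

// ArcOnboardingConfig mirrors the struct this PR adds to test/e2e/config/config.go.
type ArcOnboardingConfig struct {
	ClientID       string
	ClientSecret   string
	SubscriptionID string
	Location       string
	TenantID       string
}

// populateArcAddonConfig is an illustrative helper (not part of the diff) that
// reproduces how Build() fills the azure-arc-onboarding addon config map.
func populateArcAddonConfig(addonConfig map[string]string, arc *ArcOnboardingConfig, clusterName string) map[string]string {
	if addonConfig == nil {
		addonConfig = make(map[string]string)
	}
	if arc != nil {
		addonConfig["tenantID"] = arc.TenantID
		addonConfig["subscriptionID"] = arc.SubscriptionID
		addonConfig["clientID"] = arc.ClientID
		addonConfig["clientSecret"] = arc.ClientSecret
		addonConfig["location"] = arc.Location
	}
	addonConfig["clusterName"] = clusterName
	// A dedicated "<name>-arc" group is used until Arc is supported in every region.
	addonConfig["resourceGroup"] = fmt.Sprintf("%s-arc", clusterName)
	return addonConfig
}

func main() {
	arc := &ArcOnboardingConfig{Location: "eastus"}
	cfg := populateArcAddonConfig(nil, arc, "kube-demo")
	fmt.Println(cfg["resourceGroup"], cfg["location"]) // kube-demo-arc eastus
}
```

The nil check on the config pointer matters because the embedded `*ArcOnboardingConfig` may be absent when no ARC_* environment variables are set.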