diff --git a/LICENSE-CODE b/LICENSE-CODE
deleted file mode 100644
index b17b032a4..000000000
--- a/LICENSE-CODE
+++ /dev/null
@@ -1,17 +0,0 @@
-The MIT License (MIT)
-Copyright (c) Microsoft Corporation
-
-Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
-associated documentation files (the "Software"), to deal in the Software without restriction,
-including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
-and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
-subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all copies or substantial
-portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT
-NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
-WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
\ No newline at end of file
diff --git a/README.md b/README.md
index c8bb6ab26..de4f3cb74 100644
--- a/README.md
+++ b/README.md
@@ -26,10 +26,10 @@ I'll update the highlighted section with the clarified information about command
 Not all documentation is suitable for conversion to Exec Docs. Use these filters to determine if a document can be effectively converted:
 
 1. **Command Execution Limitations**
-   - **Supported scenarios:**
+   - **Supported:**
    - Any command that can run in a BASH terminal (e.g. azurecli, azure-cli-interactive, azurecli-interactive commands)
 
-   - **Not supported currently:**
+   - **Not supported:**
    - PowerShell scripts
    - GUI-based instructions
    - Commands requiring `sudo` privileges
diff --git a/scenarios/FixFstabIssuesRepairVM/fix-fstab-issues-repair-vm.md b/scenarios/FixFstabIssuesRepairVM/fix-fstab-issues-repair-vm.md
new file mode 100644
index 000000000..81e5392f1
--- /dev/null
+++ b/scenarios/FixFstabIssuesRepairVM/fix-fstab-issues-repair-vm.md
@@ -0,0 +1,88 @@
---
title: Troubleshoot Linux VM boot issues due to fstab errors | Microsoft Learn
description: Explains why a Linux VM cannot start and how to solve the problem.
services: virtual-machines
documentationcenter: ''
author: divargas-msft
ms.author: divargas
manager: dcscontentpm
tags: ''
ms.custom: sap:My VM is not booting, linux-related-content, devx-track-azurecli, mode-api, innovation-engine
ms.service: azure-virtual-machines
ms.collection: linux
ms.topic: troubleshooting
ms.workload: infrastructure-services
ms.tgt_pltfrm: vm-linux
ms.devlang: azurecli
ms.date: 02/25/2025
---

# Troubleshoot Linux VM boot issues due to fstab errors

**Applies to:** :heavy_check_mark: Linux VMs

The Linux filesystem table, fstab, is a configuration table that defines how and where specific file systems are detected and mounted during the system boot process. This article discusses multiple conditions in which a wrong fstab configuration can lead to boot issues and provides troubleshooting guidance.

A few common reasons that can lead to virtual machine boot issues due to fstab misconfiguration are listed below:

* A traditional filesystem name is used instead of the Universally Unique Identifier (UUID) of the filesystem.
* An incorrect UUID is used.
* An entry exists for an unattached device without the `nofail` option within the fstab configuration.
* An incorrect entry exists within the fstab configuration.
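
As an illustration, the following hypothetical `/etc/fstab` contains two entries of this kind. The device name, UUID, and mount points are examples only:

```output
# A device name is used instead of a UUID - the mount breaks if the disk is re-lettered at boot.
/dev/sdc1   /data1   ext4   defaults   0 0

# An entry for an unattached disk lacks the nofail option - boot hangs if the disk is absent.
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   /data2   ext4   defaults   0 0
```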

## Identify fstab issues

Check the current boot state of the VM in the serial log within the [Boot diagnostics](/azure/virtual-machines/boot-diagnostics#boot-diagnostics-view) blade in the Azure portal. The VM will be in emergency mode. You see log entries that resemble the following example leading up to the emergency mode state:

```output
[K[[1;31m TIME [0m] Timed out waiting for device dev-incorrect.device.
[[1;33mDEPEND[0m] Dependency failed for /data.
[[1;33mDEPEND[0m] Dependency failed for Local File Systems.
...
Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" to try again to boot into default mode.
Give root password for maintenance
(or type Control-D to continue)
```

> [!NOTE]
> "/data" is an example of a mount point. Dependency failures for filesystem mount points will differ based on the names used.

## Resolution

There are two ways to resolve the issue:

* Repair the VM online
  * Use the Serial Console
* Repair the VM offline
  * [Use Azure Linux Auto Repair (ALAR)](#use-azure-linux-auto-repair-alar)
  * Use the manual method

#### Use Azure Linux Auto Repair (ALAR)

Azure Linux Auto Repair (ALAR) scripts are part of the VM repair extension described in [Repair a Linux VM by using the Azure Virtual Machine repair commands](./repair-linux-vm-using-azure-virtual-machine-repair-commands.md). ALAR covers the automation of multiple repair scenarios, including `/etc/fstab` issues.

The ALAR scripts can be invoked through the repair extension `run` command and its `--run-id` option (the script ID for the automated recovery is **linux-alar2**), or through the `repair-button` command shown below. Implement the following steps to repair fstab errors via the offline ALAR approach:

```azurecli-interactive
# Install or refresh the vm-repair extension, then trigger the ALAR fstab repair.
output=$(az extension add -n vm-repair; az extension update -n vm-repair; az vm repair repair-button --button-command 'fstab' --verbose --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME)
# Extract and print the message returned by the repair command.
value=$(echo "$output" | jq -r '.message')
echo "$value"
```

> [!NOTE]
> The fstab repair script takes a backup of the original file and strips off any lines in the `/etc/fstab` file that aren't needed to boot the system. After the OS starts successfully, edit the fstab again and correct any errors that previously prevented the system from booting.

[!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]
\ No newline at end of file
diff --git a/scenarios/KernelBootIssuesRepairVM/kernel-related-boot-issues-repairvm.md b/scenarios/KernelBootIssuesRepairVM/kernel-related-boot-issues-repairvm.md
new file mode 100644
index 000000000..3b230795c
--- /dev/null
+++ b/scenarios/KernelBootIssuesRepairVM/kernel-related-boot-issues-repairvm.md
@@ -0,0 +1,84 @@
---
title: Recover Azure Linux VM from kernel panic due to missing initramfs
description: Provides solutions to an issue in which a Linux virtual machine (VM) can't boot after applying kernel changes.
author: divargas-msft
ms.author: divargas
ms.date: 02/25/2025
ms.reviewer: jofrance
ms.service: azure-virtual-machines
ms.custom: sap:Cannot start or stop my VM, devx-track-azurecli, mode-api, innovation-engine, linux-related-content
ms.workload: infrastructure-services
ms.tgt_pltfrm: vm-linux
ms.collection: linux
ms.topic: troubleshooting
---

# Azure Linux virtual machine fails to boot after applying kernel changes

**Applies to:** :heavy_check_mark: Linux VMs

## Prerequisites

Make sure the [serial console](serial-console-linux.md) is enabled and functional in the Linux VM.

## Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

This error occurs because of a recent system update (kernel). It's most commonly seen in RHEL-based distributions. You can identify this issue from the Azure serial console. You'll see one of the following error messages:

1. "Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)"

    ```output
    [ 301.026129] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
    [ 301.027122] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G ------------ T 3.10.0-1160.36.2.el7.x86_64 #1
    [ 301.027122] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008 12/07/2018
    [ 301.027122] Call Trace:
    [ 301.027122] [] dump_stack+0x19/0x1b
    [ 301.027122] [] panic+0xe8/0x21f
    [ 301.027122] [] mount_block_root+0x291/0x2a0
    [ 301.027122] [] mount_root+0x53/0x56
    [ 301.027122] [] prepare_namespace+0x13c/0x174
    [ 301.027122] [] kernel_init_freeable+0x222/0x249
    [ 301.027122] [] ? initcall_blcklist+0xb0/0xb0
    [ 301.027122] [] ? rest_init+0x80/0x80
    [ 301.027122] [] kernel_init+0xe/0x100
    [ 301.027122] [] ret_from_fork_nospec_begin+0x21/0x21
    [ 301.027122] [] ? rest_init+0x80/0x80
    [ 301.027122] Kernel Offset: 0xc00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
    ```

2. "error: file '/initramfs-*.img' not found"

    > error: file '/initramfs-3.10.0-1160.36.2.el7.x86_64.img' not found.

This kind of error indicates that the initramfs file wasn't generated, that the GRUB configuration file is missing the initrd entry after a patching process, or that GRUB was manually misconfigured.

### Regenerate missing initramfs by using Azure Repair VM ALAR scripts

1. Create a repair VM by running the following Bash command line with [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Use Azure Linux Auto Repair (ALAR) to fix a Linux VM - initrd option](repair-linux-vm-using-ALAR.md#initrd). This command regenerates the initrd/initramfs image, regenerates the GRUB configuration file if it's missing the initrd entry, and swaps the OS disk.

```azurecli-interactive
# Install or refresh the vm-repair extension, then trigger the ALAR initrd repair.
output=$(az extension add -n vm-repair; az extension update -n vm-repair; az vm repair repair-button --button-command 'initrd' --verbose --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME)
# Extract and print the message returned by the repair command.
value=$(echo "$output" | jq -r '.message')
echo "$value"
```

2. Once the repair VM command has been executed, restart the original VM and validate that it's able to boot up.
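
A minimal sketch of that validation from the CLI, assuming the same variables as above and that boot diagnostics is enabled on the VM:

```azurecli-interactive
# Restart the VM, then inspect the tail of the serial log for a clean boot.
az vm restart --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME
az vm boot-diagnostics get-boot-log --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME | tail -20
```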

## Next steps

If the specific boot error isn't a kernel-related boot issue, see [Troubleshoot Azure Linux Virtual Machines boot errors](./boot-error-troubleshoot-linux.md) for further troubleshooting options.

[!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]
\ No newline at end of file
diff --git a/scenarios/ObtainPerformanceMetricsLinuxSustem/obtain-performance-metrics-linux-system.md b/scenarios/ObtainPerformanceMetricsLinuxSustem/obtain-performance-metrics-linux-system.md
index 7e8499928..2424ff0dd 100644
--- a/scenarios/ObtainPerformanceMetricsLinuxSustem/obtain-performance-metrics-linux-system.md
+++ b/scenarios/ObtainPerformanceMetricsLinuxSustem/obtain-performance-metrics-linux-system.md
@@ -50,7 +50,10 @@ export MY_VM_NAME="myVM89f292"
 The full command for installation of the `sysstat` package on some popular Distros is:
 
 ```bash
-az vm run-command invoke --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --command-id RunShellScript --scripts "/bin/bash -c 'OS=\$(cat /etc/os-release|grep NAME|head -1|cut -d= -f2 | sed \"s/\\\"//g\"); if [[ \$OS =~ \"Ubuntu\" ]] || [[ \$OS =~ \"Debian\" ]]; then sudo apt install sysstat -y; elif [[ \$OS =~ \"Red Hat\" ]]; then sudo dnf install sysstat -y; elif [[ \$OS =~ \"SUSE\" ]]; then sudo zypper install sysstat --non-interactive; else echo \"Unknown distribution\"; fi'"
+output=$(az vm run-command invoke --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --command-id RunShellScript --scripts "/bin/bash -c 'OS=\$(cat /etc/os-release|grep NAME|head -1|cut -d= -f2 | sed \"s/\\\"//g\"); if [[ \$OS =~ \"Ubuntu\" ]] || [[ \$OS =~ \"Debian\" ]]; then sudo apt install sysstat -y; elif [[ \$OS =~ \"Red Hat\" ]]; then sudo dnf install sysstat -y; elif [[ \$OS =~ \"SUSE\" ]]; then sudo zypper install sysstat --non-interactive; else echo \"Unknown distribution\"; fi'")
+value=$(echo "$output" | jq -r '.value[0].message')
+extracted=$(echo "$value" | awk '/\[stdout\]/,/\[stderr\]/' | sed '/\[stdout\]/d' | sed '/\[stderr\]/d')
+echo "$extracted"
 ```
 
 ## CPU
diff --git a/scenarios/TroubleshootVMGrubError/troubleshoot-vm-grub-error-repairvm.md b/scenarios/TroubleshootVMGrubError/troubleshoot-vm-grub-error-repairvm.md
new file mode 100644
index 000000000..2bbc7dc15
--- /dev/null
+++ b/scenarios/TroubleshootVMGrubError/troubleshoot-vm-grub-error-repairvm.md
@@ -0,0 +1,104 @@
---
title: Linux VM boots to GRUB rescue
description: Provides troubleshooting guidance for GRUB rescue issues with Linux virtual machines.
services: virtual-machines
documentationcenter: ''
author: divargas
ms.service: azure-virtual-machines
ms.collection: linux
ms.workload: infrastructure-services
ms.tgt_pltfrm: vm-linux
ms.custom: sap:My VM is not booting, linux-related-content
ms.topic: troubleshooting
ms.date: 02/25/2025
ms.author: divargas
ms.reviewer: ekpathak, v-leedennis, v-weizhu
---

# Linux virtual machine boots to GRUB rescue

**Applies to:** :heavy_check_mark: Linux VMs

This article discusses multiple conditions that cause GRUB rescue issues and provides troubleshooting guidance.

During the boot process, the boot loader tries to locate the Linux kernel and hand off the boot control. If this handoff can't be performed, the virtual machine (VM) enters a GRUB rescue console. The GRUB rescue console prompt isn't shown in the Azure serial console log, but it can be shown in the [Azure boot diagnostics screenshot](/azure/virtual-machines/boot-diagnostics#boot-diagnostics-view).

## Identify GRUB rescue issue

[View a boot diagnostics screenshot](/azure/virtual-machines/boot-diagnostics#boot-diagnostics-view) in the VM **Boot diagnostics** page of the Azure portal. This screenshot helps diagnose the GRUB rescue issue and determine whether a boot error caused it.

The following text is an example of a GRUB rescue issue:

```output
error: file '/boot/grub2/i386-pc/normal.mod' not found.
Entering rescue mode...
grub rescue>
```

## Troubleshoot GRUB rescue issue offline

1. To troubleshoot a GRUB rescue issue, a rescue/repair VM is required. Use [VM repair commands](repair-linux-vm-using-azure-virtual-machine-repair-commands.md) to create a repair VM that has a copy of the affected VM's OS disk attached. Mount the copy of the OS file systems in the repair VM by using [chroot](chroot-environment-linux.md).

    > [!NOTE]
    > Alternatively, you can create a rescue VM manually by using the Azure portal. For more information, see [Troubleshoot a Linux VM by attaching the OS disk to a recovery VM using the Azure portal](troubleshoot-recovery-disks-portal-linux.md).

2. [Identify the GRUB rescue issue](#identify-grub-rescue-issue). Common GRUB rescue errors include:

    * Error: unknown filesystem
    * Error 15: File not found
    * Error: file '/boot/grub2/i386-pc/normal.mod' not found
    * Error: no such partition
    * Error: symbol 'grub_efi_get_secure_boot' not found

    The [ALAR-based repair](#reinstall-grub-and-regenerate-grub-configuration-file-using-auto-repair-alar) described in the next section resolves these and other common GRUB rescue errors.

3. After the GRUB rescue issue is resolved, perform the following actions:

    1. Unmount the copy of the file systems from the rescue/repair VM.

    2. Run the `az vm repair restore` command to swap the repaired OS disk with the original OS disk of the VM. For more information, see Step 5 in [Repair a Linux VM by using the Azure Virtual Machine repair commands](repair-linux-vm-using-azure-virtual-machine-repair-commands.md).

    3. Check whether the VM can start by taking a look at the Azure serial console or by trying to connect to the VM.

4. If the entire /boot partition or other important contents are missing and can't be recovered, we recommend restoring the VM from a backup. For more information, see [How to restore Azure VM data in Azure portal](/azure/backup/backup-azure-arm-restore-vms).

See the following section for the detailed repair steps.

> [!NOTE]
> In the commands mentioned in the following sections, replace `/dev/sdX` with the corresponding Operating System (OS) disk device.

### Reinstall GRUB and regenerate GRUB configuration file using Auto Repair (ALAR)

Azure Linux Auto Repair (ALAR) scripts are part of the VM repair extension described in [Use Azure Linux Auto Repair (ALAR) to fix a Linux VM](./repair-linux-vm-using-alar.md). ALAR covers the automation of multiple repair scenarios, including GRUB rescue issues.

The ALAR scripts use the repair extension `repair-button` command to fix GRUB issues by specifying `--button-command grubfix` for Generation 1 VMs, or `--button-command efifix` for Generation 2 VMs. This parameter triggers the automated recovery. Implement the following step to automatically fix common GRUB errors by reinstalling GRUB and regenerating the corresponding configuration file:

```azurecli-interactive
# Determine the VM generation: Gen 2 (UEFI) uses efifix, Gen 1 (BIOS) uses grubfix.
GEN=$(az vm get-instance-view --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --query "instanceView.hyperVGeneration" --output tsv)
if [[ "$GEN" =~ [Vv]?2 ]]; then ALAR="efifix"; else ALAR="grubfix"; fi
# Install or refresh the vm-repair extension, then trigger the matching ALAR repair.
output=$(az extension add -n vm-repair; az extension update -n vm-repair; az vm repair repair-button --button-command $ALAR --verbose --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME)
value=$(echo "$output" | jq -r '.message')
echo "$value"
```

(The regex in the generation check is unquoted so that Bash's `=~` operator treats `[Vv]?2` as a pattern rather than a literal string.)

The repair VM script, in conjunction with the ALAR script, temporarily creates a resource group, a repair VM, and a copy of the affected VM's OS disk. It reinstalls GRUB, regenerates the corresponding GRUB configuration file, and then swaps the OS disk of the broken VM with the copied fixed disk. Finally, the `repair-button` script automatically deletes the resource group containing the temporary repair VM.
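
To confirm that the disk swap took place, you can compare the VM's OS disk name before and after the repair (a quick check; the repaired copy receives a generated name):

```azurecli-interactive
az vm show --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --query "storageProfile.osDisk.name" --output tsv
```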

## Next steps

If the specific boot error isn't a GRUB rescue issue, refer to [Troubleshoot Azure Linux Virtual Machines boot errors](boot-error-troubleshoot-linux.md) for further troubleshooting options.

[!INCLUDE [Third-party disclaimer](../../../includes/third-party-disclaimer.md)]

[!INCLUDE [Third-party contact disclaimer](../../../includes/third-party-contact-disclaimer.md)]
\ No newline at end of file
diff --git a/scenarios/azure-aks-docs/articles/aks/auto-upgrade-cluster.md b/scenarios/azure-aks-docs/articles/aks/auto-upgrade-cluster.md
new file mode 100644
index 000000000..98416b5bc
--- /dev/null
+++ b/scenarios/azure-aks-docs/articles/aks/auto-upgrade-cluster.md
@@ -0,0 +1,170 @@
---
title: Automatically upgrade an Azure Kubernetes Service (AKS) cluster
description: Learn how to automatically upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.
ms.topic: how-to
ms.author: nickoman
author: nickomang
ms.subservice: aks-upgrade
ms.date: 05/01/2023
ms.custom: aks-upgrade, automation, innovation-engine
---

# Automatically upgrade an Azure Kubernetes Service (AKS) cluster

Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important to apply the latest security releases and upgrades to get the latest features. Before learning about auto-upgrade, make sure you understand the [AKS cluster upgrade fundamentals][upgrade-aks-cluster].

> [!NOTE]
> Any upgrade operation, whether performed manually or automatically, upgrades the node image version if it's not already on the latest version. The latest version is contingent on a full AKS release and can be determined by visiting the [AKS release tracker][release-tracker].
>
> Auto-upgrade first upgrades the control plane, and then upgrades agent pools one by one.

## Why use cluster auto-upgrade

Cluster auto-upgrade provides a "set once and forget" mechanism that yields tangible time and operational cost benefits. You don't need to stop your workloads, redeploy your workloads, or create a new AKS cluster. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest features or patches from AKS and upstream Kubernetes.

AKS follows a strict supportability versioning window. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions].

## Customer versus AKS-initiated auto-upgrades

You can configure cluster auto-upgrade using the following guidance. Upgrades occur based on your specified cadence and help keep clusters on supported Kubernetes versions.

AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform-supported cluster to a supported version is enabled by default. Stopped node pools are upgraded during an auto-upgrade operation. The upgrade applies to nodes when the node pool is started. To minimize disruptions, set up [maintenance windows][planned-maintenance].

## Cluster auto-upgrade limitations

If you're using cluster auto-upgrade, you can no longer upgrade the control plane first and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. You can't upgrade the control plane only. Running the `az aks upgrade --control-plane-only` command raises the following error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`

If using the `node-image` (legacy; not recommended) cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.

## Cluster auto-upgrade channels

Automatically completed upgrades are functionally the same as manual upgrades. The [selected auto-upgrade channel][planned-maintenance] determines the timing of upgrades. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more information, see [Upgrade an AKS cluster][upgrade-aks-cluster].

The following upgrade channels are available:

|Channel| Action | Example|
|---|---|---|
| `none`| disables auto-upgrades and keeps the cluster at its current version of Kubernetes.| Default setting if left unchanged.|
| `patch`| automatically upgrades the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster runs version *1.17.7*, and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.17.9*.|
| `stable`| automatically upgrades the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.18.6*.|
| `rapid`| automatically upgrades the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster's Kubernetes version is an *N-2* minor version, where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on the *N-1* minor version. For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster first upgrades to *1.18.6*, then upgrades to *1.19.1*.|
| `node-image`(legacy)| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new node images frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. Node image upgrades work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported. This channel is no longer recommended and is planned for deprecation in the future. For an option that can automatically upgrade node images, see the `NodeImage` channel in [node image auto-upgrade][node-image-auto-upgrade]. |

> [!NOTE]
>
> Keep the following information in mind when using cluster auto-upgrade:
>
> * Cluster auto-upgrade only updates to GA versions of Kubernetes and doesn't update to preview versions.
>
> * With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. To learn more, see [AKS support window][supported-kubernetes-versions].
>
> * Auto-upgrade requires the cluster's Kubernetes version to be within the [AKS support window][supported-kubernetes-versions], even if using the `node-image` channel.
>
> * If you're using the preview API `11-02-preview` or later, and you select the `node-image` cluster auto-upgrade channel, the [node image auto-upgrade channel][node-image-auto-upgrade] automatically sets to `NodeImage`.
>
> * Each cluster can only be associated with a single auto-upgrade channel. This is because your specified channel determines the Kubernetes version that runs on the cluster.
>
> * If your cluster has no auto-upgrade channel and you enable it for LTS *(Long-Term Support)*, it will default to a `patch` auto-upgrade channel.

## Use cluster auto-upgrade with a new AKS cluster

### [Azure CLI](#tab/azure-cli)

* Set the auto-upgrade channel when creating a new cluster using the [`az aks create`][az-aks-create] command and the `auto-upgrade-channel` parameter.

```text
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP="myResourceGroup$RANDOM_SUFFIX"
export AKS_CLUSTER_NAME="myAKSCluster"
az aks create --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --auto-upgrade-channel stable --generate-ssh-keys
```

### [Azure portal](#tab/azure-portal)

1. In the Azure portal, select **Create a resource** > **Containers** > **Azure Kubernetes Service (AKS)**.
2. In the **Basics** tab, under **Cluster details**, select the desired auto-upgrade channel from the **Automatic upgrade** dropdown. We recommend selecting the **Enabled with patch (recommended)** option.

    :::image type="content" source="./media/auto-upgrade-cluster/portal-autoupgrade-new-cluster.png" alt-text="The screenshot of the create blade for an AKS cluster in the Azure portal. The automatic upgrade field shows 'Enabled with patch (recommended)' selected.":::

3. Complete the remaining steps to create the cluster.

---
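
After the cluster is created, you can confirm the configured channel by checking the cluster's auto-upgrade profile (a quick check; the exact shape of the returned object can vary by API version):

```azurecli-interactive
az aks show --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "autoUpgradeProfile"
```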

## Use cluster auto-upgrade with an existing AKS cluster

### [Azure CLI](#tab/azure-cli)

* Set the auto-upgrade channel on an existing cluster using the [`az aks update`][az-aks-update] command with the `auto-upgrade-channel` parameter.

```azurecli-interactive
az aks update --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --auto-upgrade-channel stable
```

Results:

```JSON
{
  "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myResourceGroupabc123/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
  "properties": {
    "autoUpgradeChannel": "stable",
    "provisioningState": "Succeeded"
  }
}
```

### [Azure portal](#tab/azure-portal)

1. In the Azure portal, navigate to your AKS cluster.
2. In the service menu, under **Settings**, select **Cluster configuration**.
3. Under **Upgrade** > **Kubernetes version**, select **Upgrade version**.

    :::image type="content" source="./media/auto-upgrade-cluster/portal-autoupgrade-existing-cluster.png" alt-text="The screenshot of the upgrade blade for an AKS cluster in the Azure portal.":::

4. On the **Upgrade Kubernetes version** page, select the desired auto-upgrade channel from the **Automatic upgrade** dropdown. We recommend selecting the **Enabled with patch (recommended)** option.

    :::image type="content" source="./media/auto-upgrade-cluster/portal-autoupgrade-upgrade-page-existing-cluster.png" alt-text="The screenshot of the Upgrade Kubernetes page for an AKS cluster in the Azure portal.":::

5. Select **Save**.

---

## Use auto-upgrade with Planned Maintenance

If using Planned Maintenance and cluster auto-upgrade, your upgrade starts during your specified maintenance window.

> [!NOTE]
> To ensure proper functionality, use a maintenance window of *four hours or more*.

For more information on how to set a maintenance window with Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
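
A minimal sketch of creating such a window from the CLI, assuming the `aksManagedAutoUpgradeSchedule` configuration name reserved for cluster auto-upgrade (exact flags can vary by CLI version):

```azurecli-interactive
# Create a weekly four-hour maintenance window starting Sundays at 00:00 UTC.
az aks maintenanceconfiguration add \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $AKS_CLUSTER_NAME \
    --config-name aksManagedAutoUpgradeSchedule \
    --schedule-type Weekly \
    --day-of-week Sunday \
    --interval-weeks 1 \
    --start-time 00:00 \
    --duration 4
```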

## Best practices for cluster auto-upgrade

Use the following best practices to help maximize your success when using auto-upgrade:

* To ensure your cluster is always in a supported version (i.e., within the N-2 rule), choose either the `stable` or `rapid` channel.
* If you're interested in getting the latest patches as soon as possible, use the `patch` channel. The `node-image` channel is a good fit if you want your agent pools to always run the most recent node images.
* To automatically upgrade node images while using a different cluster upgrade channel, consider using the [node image auto-upgrade][node-image-auto-upgrade] `NodeImage` channel.
* Follow [Operator best practices][operator-best-practices-scheduler].
* Follow [PDB best practices][pdb-best-practices].
* For upgrade troubleshooting information, see the [AKS troubleshooting documentation][aks-troubleshoot-docs].

For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].

[supported-kubernetes-versions]: ./supported-kubernetes-versions.md
[upgrade-aks-cluster]: ./upgrade-cluster.md
[planned-maintenance]: ./planned-maintenance.md
[operator-best-practices-scheduler]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets
[node-image-auto-upgrade]: auto-upgrade-node-image.md
[az-aks-create]: /cli/azure/aks#az_aks_create
[az-aks-update]: /cli/azure/aks#az_aks_update
[aks-troubleshoot-docs]: /support/azure/azure-kubernetes/welcome-azure-kubernetes
[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices

[pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
[release-tracker]: release-tracker.md
[k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
\ No newline at end of file
diff --git a/scenarios/azure-aks-docs/articles/aks/auto-upgrade-node-os-image.md b/scenarios/azure-aks-docs/articles/aks/auto-upgrade-node-os-image.md
new file mode 100644
index 000000000..f0635a4ec
--- /dev/null
+++ b/scenarios/azure-aks-docs/articles/aks/auto-upgrade-node-os-image.md
@@ -0,0 +1,232 @@
---
title: Autoupgrade node OS images
description: Learn how to choose an upgrade channel that best supports your cluster's node OS security and maintenance needs.
ms.topic: how-to
ms.custom: build-2023, devx-track-azurecli, innovation-engine
ms.author: kaarthis
author: kaarthis
ms.subservice: aks-upgrade
ms.date: 05/10/2024
---

# Autoupgrade node OS images

AKS provides multiple autoupgrade channels dedicated to timely node-level OS security updates. These channels are different from cluster-level Kubernetes version upgrades and supersede the legacy `node-image` cluster autoupgrade channel.

## Interactions between node OS autoupgrade and cluster autoupgrade

Node-level OS security updates are released at a faster rate than Kubernetes patch or minor version updates. The node OS autoupgrade channel grants you flexibility and enables a customized strategy for node-level OS security updates. Then, you can choose a separate plan for cluster-level Kubernetes version [autoupgrades][Autoupgrade].
It's best to use both cluster-level [autoupgrades][Autoupgrade] and the node OS autoupgrade channel together. Scheduling can be fine-tuned by applying two separate sets of [maintenance windows][planned-maintenance] - `aksManagedAutoUpgradeSchedule` for the cluster [autoupgrade][Autoupgrade] channel and `aksManagedNodeOSUpgradeSchedule` for the node OS autoupgrade channel.

## Channels for node OS image upgrades

The selected channel determines the timing of upgrades. When making changes to node OS autoupgrade channels, allow up to 24 hours for the changes to take effect.

> [!NOTE]
> - Once you change from one channel to another channel, **a reimage is triggered, leading to rolling nodes**.
> - Node OS image autoupgrade won't affect the cluster's Kubernetes version. Starting with API version 2023-06-01, the default for any new cluster created is `NodeImage`.

The following upgrade channels are available. You're allowed to choose one of these options:

|Channel|Description|OS-specific behavior|
|---|---|---|
| `None`| Your nodes don't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A|
| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially. The OS's infrastructure patches them at some point.|Ubuntu and Azure Linux (CPU node pools) apply security patches through unattended upgrade/dnf-automatic roughly once per day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`. You need to manage the reboot process by using a tool like [kured][kured].|
| `SecurityPatch`|OS security patches, which are AKS-tested, fully managed, and applied with safe deployment practices. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There might be disruptions when the security patches are applied to the nodes. However, AKS limits disruptions by reimaging your nodes only when necessary, such as for certain kernel security packages. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. If AKS decides that reimaging nodes isn't necessary, it patches nodes live without draining pods and performs no VHD update. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs. `SecurityPatch` works on Kubernetes patch versions that are deprecated, so long as the minor Kubernetes version is still supported.|
| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. Node image upgrades are supported as long as the cluster's Kubernetes minor version is still in support. Node images are AKS-tested, fully managed, and applied with safe deployment practices.|N/A|

## What to choose - SecurityPatch Channel or NodeImage Channel?

There are two important considerations when choosing between the `SecurityPatch` and `NodeImage` channels:

|Property|NodeImage Channel|SecurityPatch Channel|Recommended Channel|
|---|---|---|---|
| `Speed of shipping`|The typical build, test, release, and rollout timelines for a new VHD can take approximately 2 weeks following safe deployment practices, although in the event of CVEs, accelerated rollouts can occur on a case-by-case basis. The exact timing of when a new VHD hits a region can be monitored via the [release-tracker].| `SecurityPatch` releases are relatively faster than `NodeImage`, even with safe deployment practices. `SecurityPatch` has the advantage of live-patching in Linux environments, where patching leads to selective reimaging rather than a reimage every time a patch is applied. Reimaging, when it happens, is controlled by maintenance windows.|`SecurityPatch`|
| `Bugfixes`| Carries bug fixes in addition to security fixes.| Strictly carries only security fixes.| `NodeImage`|

## Set the node OS autoupgrade channel on a new cluster

### [Azure CLI](#tab/azure-cli)

* Set the node OS autoupgrade channel on a new cluster using the [`az aks create`][az-aks-create] command with the `--node-os-upgrade-channel` parameter. The following example sets the node OS autoupgrade channel to `SecurityPatch`.

```text
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP="myResourceGroup$RANDOM_SUFFIX"
export AKS_CLUSTER="myAKSCluster$RANDOM_SUFFIX"
az aks create \
    --resource-group $RESOURCE_GROUP \
    --name $AKS_CLUSTER \
    --node-os-upgrade-channel SecurityPatch \
    --generate-ssh-keys
```

### [Azure portal](#tab/azure-portal)

1. In the Azure portal, select **Create a resource** > **Containers** > **Azure Kubernetes Service (AKS)**.
2. In the **Basics** tab, under **Cluster details**, select the desired channel type from the **Node security channel type** dropdown.

    :::image type="content" source="./media/auto-upgrade-node-os-image/set-nodeimage-channel-portal.png" alt-text="A screenshot of the Azure portal showing the node security channel type option in the Basics tab of the AKS cluster creation page.":::

3. Select **Security channel scheduler** and choose the desired maintenance window using the [Planned Maintenance feature](./planned-maintenance.md). We recommend selecting the default option **Every week on Sunday (recommended)**.

    :::image type="content" source="./media/auto-upgrade-node-os-image/set-nodeimage-maintenance-window-portal.png" alt-text="A screenshot of the Azure portal showing the security channel scheduler option in the Basics tab of the AKS cluster creation page.":::

4. Complete the remaining steps to create the cluster.

---

## Set the node OS autoupgrade channel on an existing cluster

### [Azure CLI](#tab/azure-cli)

* Set the node OS autoupgrade channel on an existing cluster using the [`az aks update`][az-aks-update] command with the `--node-os-upgrade-channel` parameter. The following example sets the node OS autoupgrade channel to `SecurityPatch`.

```azurecli-interactive
az aks update --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER --node-os-upgrade-channel SecurityPatch
```

Results:

```JSON
{
  "autoUpgradeProfile": {
    "nodeOsUpgradeChannel": "SecurityPatch"
  }
}
```

### [Azure portal](#tab/azure-portal)

1. In the Azure portal, navigate to your AKS cluster.
2. In the **Settings** section, select **Cluster configuration**.
3. Under **Security updates**, select the desired channel type from the **Node security channel type** dropdown.

    :::image type="content" source="./media/auto-upgrade-node-os-image/set-nodeimage-channel-portal-existing.png" alt-text="A screenshot of the Azure portal showing the node security channel type option in the Cluster configuration page of an existing AKS cluster.":::

4. For **Security channel scheduler**, select **Add schedule**.
5. On the **Add maintenance schedule** page, configure the following maintenance window settings using the [Planned Maintenance feature](./planned-maintenance.md):

    * **Repeats**: Select the desired frequency for the maintenance window. We recommend selecting **Weekly**.
    * **Frequency**: Select the desired day of the week for the maintenance window. We recommend selecting **Sunday**.
    * **Maintenance start date**: Select the desired start date for the maintenance window.
    * **Maintenance start time**: Select the desired start time for the maintenance window.
    * **UTC offset**: Select the desired UTC offset for the maintenance window. If not set, the default is **+00:00**.

    :::image type="content" source="./media/auto-upgrade-node-os-image/set-nodeimage-maintenance-window-portal-existing.png" alt-text="A screenshot of the Azure portal showing the maintenance schedule configuration options in the Add maintenance schedule page of an existing AKS cluster.":::

6. Select **Save** > **Apply**.

---

## Update ownership and schedule

The default cadence means there's no planned maintenance window applied.

|Channel|Updates Ownership|Default cadence|
|---|---|---|
| `Unmanaged`|OS-driven security updates. AKS has no control over these updates.|Nightly around 6AM UTC for Ubuntu and Azure Linux. Monthly for Windows.|
| `SecurityPatch`|AKS-tested, fully managed, and applied with safe deployment practices. For more information, see [Increased security and resiliency of Canonical workloads on Azure][Blog].|Typically faster than weekly, AKS-determined cadence.|
| `NodeImage`|AKS-tested, fully managed, and applied with safe deployment practices. For more real-time information on releases, look up [AKS Node Images in Release tracker][release-tracker].|Weekly.|

> [!NOTE]
> While Windows security updates are released on a monthly basis, using the `Unmanaged` channel will not automatically apply these updates to Windows nodes. If you choose the `Unmanaged` channel, you need to manage the reboot process for Windows nodes.

## Node channel known limitations

- Currently, when you set the [cluster autoupgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS autoupgrade channel to `NodeImage`. You can't change the node OS autoupgrade channel value if your cluster autoupgrade channel is `node-image`. In order to set the node OS autoupgrade channel value, check that the [cluster autoupgrade channel][Autoupgrade] value isn't `node-image`.

- The `SecurityPatch` channel isn't supported on Windows OS node pools.

  > [!NOTE]
  > Use CLI version 2.61.0 or above for the `SecurityPatch` channel.

## Node OS planned maintenance windows

Planned maintenance for the node OS autoupgrade starts at your specified maintenance window.

> [!NOTE]
> To ensure proper functionality, use a maintenance window of four hours or more.

For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].

## Node OS autoupgrades FAQ

### How can I check the current nodeOsUpgradeChannel value on a cluster?

Run the `az aks show` command and check the "autoUpgradeProfile" to determine what value the `nodeOsUpgradeChannel` is set to:

```azurecli-interactive
az aks show --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER --query "autoUpgradeProfile"
```

Results:

```JSON
{
  "nodeOsUpgradeChannel": "SecurityPatch"
}
```

### How can I monitor the status of node OS autoupgrades?

To view the status of your node OS autoupgrades, look up [activity logs][monitor-aks] on your cluster. You can also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid].
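
For example, you can skim recent AKS operations recorded in the activity log from the CLI (a sketch; adjust the offset and filter to your needs):

```azurecli-interactive
az monitor activity-log list --resource-group $RESOURCE_GROUP --offset 7d \
    --query "[?contains(operationName.value, 'ContainerService')].{operation:operationName.value, status:status.value, time:eventTimestamp}" \
    --output table
```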

### Can I change the node OS autoupgrade channel value if my cluster autoupgrade channel is set to `node-image`?

No. Currently, when you set the [cluster autoupgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS autoupgrade channel to `NodeImage`. You can't change the node OS autoupgrade channel value if your cluster autoupgrade channel is `node-image`. In order to be able to change the node OS autoupgrade channel values, make sure the [cluster autoupgrade channel][Autoupgrade] isn't `node-image`.

### Why is `SecurityPatch` recommended over the `Unmanaged` channel?

On the `Unmanaged` channel, AKS has no control over how and when the security updates are delivered. With `SecurityPatch`, the security updates are fully tested and follow safe deployment practices. `SecurityPatch` also honors maintenance windows. For more details, see [Increased security and resiliency of Canonical workloads on Azure][Blog].

### Does `SecurityPatch` always lead to a reimage of my nodes?

AKS limits reimages to when they're absolutely necessary, such as for certain kernel packages that may require a reimage to be fully applied. `SecurityPatch` is designed to minimize disruptions as much as possible. If AKS decides reimaging nodes isn't necessary, it patches nodes live without draining pods, and no VHD update is performed in such cases.

### Why does the `SecurityPatch` channel require access to the `snapshot.ubuntu.com` endpoint?

With the `SecurityPatch` channel, the Linux cluster nodes have to download the required security patches and updates from the Ubuntu snapshot service, as described in [ubuntu-snapshots-on-azure-ensuring-predictability-and-consistency-in-cloud-deployments](https://ubuntu.com/blog/ubuntu-snapshots-on-azure-ensuring-predictability-and-consistency-in-cloud-deployments).

### How do I know if a `SecurityPatch` or `NodeImage` upgrade is applied on my node?

Run the `kubectl get nodes --show-labels` command to list the nodes in your cluster and their labels.
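
For example, to filter directly for the relevant label (assuming `kubectl` access to the cluster):

```bash
kubectl get nodes --show-labels | grep node-image-version
```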

Among the returned labels, you should see a line similar to the following output:

```output
kubernetes.azure.com/node-image-version=AKSUbuntu-2204gen2containerd-202410.27.0-2024.12.01
```

Here, the base node image version is `AKSUbuntu-2204gen2containerd-202410.27.0`. If applicable, the security patch version typically follows. In the above example, it's `2024.12.01`.

The same details can also be looked up in the Azure portal under the node label view:

:::image type="content" source="./media/auto-upgrade-node-os-image/nodeimage-securitypatch-inline.png" alt-text="A screenshot of the nodes page for an AKS cluster in the Azure portal. The label for node image version clearly shows the base node image and the latest applied security patch date." lightbox="./media/auto-upgrade-node-os-image/nodeimage-securitypatch.png":::

## Next steps

For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].

[planned-maintenance]: planned-maintenance.md
[release-tracker]: release-tracker.md
[az-provider-register]: /cli/azure/provider#az-provider-register
[az-feature-register]: /cli/azure/feature#az-feature-register
[az-feature-show]: /cli/azure/feature#az-feature-show
[upgrade-aks-cluster]: upgrade-cluster.md
[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
[Autoupgrade]: auto-upgrade-cluster.md
[kured]: node-updates-kured.md
[supported]: ./support-policies.md
[monitor-aks]: ./monitor-aks-reference.md
[aks-eventgrid]: ./quickstart-event-grid.md
[aks-upgrade]: ./upgrade-cluster.md
[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
[az-aks-create]: /cli/azure/aks#az-aks-create
[az-aks-update]: /cli/azure/aks#az-aks-update

[Blog]: https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/increased-security-and-resiliency-of-canonical-workloads-on/ba-p/3970623
diff --git a/scenarios/azure-aks-docs/articles/aks/azure-cni-powered-by-cilium.md b/scenarios/azure-aks-docs/articles/aks/azure-cni-powered-by-cilium.md
new file mode 100644
index 000000000..d1a7f8651
--- /dev/null
+++ b/scenarios/azure-aks-docs/articles/aks/azure-cni-powered-by-cilium.md
@@ -0,0 +1,229 @@
---
title: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)
description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Azure CNI Powered by Cilium.
ms.topic: how-to
ms.date: 02/12/2024
author: asudbring
ms.author: allensu
ms.subservice: aks-networking
ms.custom: references_regions, devx-track-azurecli, build-2023, innovation-engine
---

# Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)

Azure CNI Powered by Cilium combines the robust control plane of Azure CNI with the data plane of [Cilium](https://cilium.io/) to provide high-performance networking and security.

By making use of eBPF programs loaded into the Linux kernel and a more efficient API object structure, Azure CNI Powered by Cilium provides the following benefits:

- Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins

- Improved Service routing

- More efficient network policy enforcement

- Better observability of cluster traffic

- Support for larger clusters (more nodes, pods, and services)

## IP Address Management (IPAM) with Azure CNI Powered by Cilium

Azure CNI Powered by Cilium can be deployed using two different methods for assigning pod IPs:

- Assign IP addresses from an overlay network (similar to Azure CNI Overlay mode)

- Assign IP addresses from a virtual network (similar to existing Azure CNI with Dynamic Pod IP Assignment)

If you aren't sure which option to select, read ["Choosing a network model to use."](./azure-cni-overlay.md#choosing-a-network-model-to-use)

## Versions

| Kubernetes Version | Cilium Version |
|--------------------|----------------|
| 1.27 (LTS) | 1.13.18 |
| 1.28 (End of Life) | 1.13.18 |
| 1.29 | 1.14.19 |
| 1.30 (LTS) | 1.14.19 |
| 1.31 | 1.16.6 |
| 1.32 | 1.17.0 |

See [Supported Kubernetes Versions](./supported-kubernetes-versions.md) for more information on AKS versioning and release timelines.

## Network Policy Enforcement

Cilium enforces [network policies to allow or deny traffic between pods](./operator-best-practices-network.md#control-traffic-flow-with-network-policies). With Cilium, you don't need to install a separate network policy engine such as Azure Network Policy Manager or Calico.
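
For example, a standard Kubernetes `NetworkPolicy` like the following (a minimal illustration; the namespace and labels are hypothetical) is enforced directly by Cilium:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  # Only pods labeled app=frontend may send traffic to pods labeled app=backend.
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```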

## Limitations

Azure CNI powered by Cilium currently has the following limitations:

* Available only for Linux and not for Windows.

* Cilium L7 policy enforcement is disabled.

* Network policies can't use `ipBlock` to allow access to node or pod IPs. See [frequently asked questions](#frequently-asked-questions) for details and the recommended workaround.

* Multiple Kubernetes services can't use the same host port with different protocols (for example, TCP or UDP) ([Cilium issue #14287](https://github.com/cilium/cilium/issues/14287)).

* Network policies may be enforced on reply packets when a pod connects to itself via service cluster IP ([Cilium issue #19406](https://github.com/cilium/cilium/issues/19406)).

* Network policies aren't applied to pods using host networking (`spec.hostNetwork: true`) because these pods use the host identity instead of having individual identities.

## Prerequisites

* Azure CLI version 2.48.1 or later. Run `az --version` to see the currently installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).

* If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.

> [!NOTE]
> Previous AKS API versions (2022-09-02preview to 2023-01-02preview) used the field [`networkProfile.ebpfDataplane=cilium`](https://github.com/Azure/azure-rest-api-specs/blob/06dbe269f7d9c709cc225c92358b38c3c2b74d60/specification/containerservice/resource-manager/Microsoft.ContainerService/aks/preview/2022-09-02-preview/managedClusters.json#L6939-L6955). AKS API versions since 2023-02-02preview use the field [`networkProfile.networkDataplane=cilium`](https://github.com/Azure/azure-rest-api-specs/blob/06dbe269f7d9c709cc225c92358b38c3c2b74d60/specification/containerservice/resource-manager/Microsoft.ContainerService/aks/preview/2023-02-02-preview/managedClusters.json#L7152-L7173) to enable Azure CNI Powered by Cilium.

## Create a new AKS Cluster with Azure CNI Powered by Cilium

### Create a Resource Group

Use the following command to create a resource group. Environment variables are declared and used below to replace placeholders.

```azurecli-interactive
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP="myResourceGroup$RANDOM_SUFFIX"
export REGION="EastUS2"

az group create \
    --name $RESOURCE_GROUP \
    --location $REGION
```

Results:

```JSON
{
  "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myResourceGroupxxx",
  "location": "EastUS2",
  "name": "myResourceGroupxxx",
  "provisioningState": "Succeeded"
}
```

### Assign IP addresses from an overlay network

Use the following commands to create a cluster with an overlay network and Cilium. Environment variables are declared and used below to replace placeholders.

```azurecli-interactive
export CLUSTER_NAME="myAKSCluster$RANDOM_SUFFIX"

az aks create \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP \
    --location $REGION \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
    --network-dataplane cilium \
    --generate-ssh-keys
```

Results:

```JSON
{
  "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.ContainerService/managedClusters/myAKSClusterxxx",
  "location": "EastUS2",
  "name": "myAKSClusterxxx",
  "provisioningState": "Succeeded"
}
```

> [!NOTE]
> The `--network-dataplane cilium` flag replaces the deprecated `--enable-ebpf-dataplane` flag used in earlier versions of the aks-preview CLI extension.

## Frequently asked questions

- **Can I customize Cilium configuration?**

  No, AKS manages the Cilium configuration and it can't be modified. We recommend that customers who require more control use [AKS BYO CNI](./use-byo-cni.md) and install Cilium manually.

- **Can I use `CiliumNetworkPolicy` custom resources instead of Kubernetes `NetworkPolicy` resources?**

  `CiliumNetworkPolicy` custom resources are partially supported. Customers may use FQDN filtering as part of the [Advanced Container Networking Services](./advanced-container-networking-services-overview.md) feature bundle.

  This `CiliumNetworkPolicy` example demonstrates a sample matching pattern for services that match the specified label.

  ```yaml
  apiVersion: "cilium.io/v2"
  kind: CiliumNetworkPolicy
  metadata:
    name: "example-fqdn"
  spec:
    endpointSelector:
      matchLabels:
        foo: bar
    egress:
    - toFQDNs:
      - matchPattern: "*.example.com"
  ```

- **Why is traffic being blocked when the `NetworkPolicy` has an `ipBlock` that allows the IP address?**

  A limitation of Azure CNI Powered by Cilium is that a `NetworkPolicy`'s `ipBlock` can't select pod or node IPs.

  For example, this `NetworkPolicy` has an `ipBlock` that allows all egress to `0.0.0.0/0`:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: example-ipblock
  spec:
    podSelector: {}
    policyTypes:
    - Egress
    egress:
    - to:
      - ipBlock:
          cidr: 0.0.0.0/0 # This will still block pod and node IPs.
  ```

  However, when this `NetworkPolicy` is applied, Cilium blocks egress to pod and node IPs even though the IPs are within the `ipBlock` CIDR.

  As a workaround, you can add `namespaceSelector` and `podSelector` to select pods. This example selects all pods in all namespaces:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: example-ipblock
  spec:
    podSelector: {}
    policyTypes:
    - Egress
    egress:
    - to:
      - ipBlock:
          cidr: 0.0.0.0/0
      - namespaceSelector: {}
      - podSelector: {}
  ```

  > [!NOTE]
  > It isn't currently possible to specify a `NetworkPolicy` with an `ipBlock` to allow traffic to node IPs.

- **Does AKS configure CPU or memory limits on the Cilium `daemonset`?**

  No, AKS doesn't configure CPU or memory limits on the Cilium `daemonset` because Cilium is a critical system component for pod networking and network policy enforcement.

- **Does Azure CNI powered by Cilium use Kube-Proxy?**

  No, AKS clusters created with network dataplane as Cilium don't use Kube-Proxy. If the AKS clusters are on [Azure CNI Overlay](./azure-cni-overlay.md) or [Azure CNI with dynamic IP allocation](./configure-azure-cni-dynamic-ip-allocation.md) and are upgraded to AKS clusters running Azure CNI powered by Cilium, new node workloads are created without kube-proxy. Older workloads are also migrated to run without kube-proxy as part of this upgrade process.
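
  One quick way to verify this on a running cluster (a sketch; output varies by cluster configuration):

  ```bash
  # Expect a cilium DaemonSet and no kube-proxy DaemonSet in kube-system.
  kubectl get daemonsets --namespace kube-system
  ```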
+  If the AKS clusters are on [Azure CNI Overlay](./azure-cni-overlay.md) or [Azure CNI with dynamic IP allocation](./configure-azure-cni-dynamic-ip-allocation.md) and are upgraded to AKS clusters running Azure CNI powered by Cilium, workloads on new nodes are created without kube-proxy. Older workloads are also migrated to run without kube-proxy as a part of this upgrade process.
+
+## Next steps
+
+Learn more about networking in AKS in the following articles:
+
+* [Upgrade Azure CNI IPAM modes and Dataplane Technology](upgrade-azure-cni.md)
+
+* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
+
+* [Use an internal load balancer with Azure Kubernetes Service (AKS)](internal-lb.md)
+
+* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
+
+
+[aks-ingress-basic]: ingress-basic.md
\ No newline at end of file
diff --git a/scenarios/azure-aks-docs/articles/aks/cost-analysis.md b/scenarios/azure-aks-docs/articles/aks/cost-analysis.md
new file mode 100644
index 000000000..4c152e43b
--- /dev/null
+++ b/scenarios/azure-aks-docs/articles/aks/cost-analysis.md
@@ -0,0 +1,154 @@
+---
+title: Azure Kubernetes Service (AKS) cost analysis
+description: Learn how to use cost analysis to surface granular cost allocation data for your Azure Kubernetes Service (AKS) cluster.
+author: schaffererin
+ms.author: schaffererin
+ms.service: azure-kubernetes-service
+ms.subservice: aks-monitoring
+ms.topic: how-to
+ms.date: 06/17/2024
+---
+
+# Azure Kubernetes Service (AKS) cost analysis
+
+In this article, you learn how to enable cost analysis on Azure Kubernetes Service (AKS) to view detailed cost data for cluster resources.
+
+## About cost analysis
+
+AKS clusters rely on Azure resources, such as virtual machines (VMs), virtual disks, load balancers, and public IP addresses. Multiple applications can use these resources. The resource consumption patterns often differ for each application, so their contribution toward the total cluster resource cost might also vary. Some applications might have footprints across multiple clusters, which can pose a challenge when performing cost attribution and cost management.
+
+When you enable cost analysis on your AKS cluster, you can view detailed cost allocation scoped to Kubernetes constructs, such as clusters and namespaces, and Azure Compute, Network, and Storage resources. The add-on is built on top of [OpenCost](https://www.opencost.io/), an open-source Cloud Native Computing Foundation Incubating project for usage data collection. Usage data is reconciled with your Azure invoice data to provide a comprehensive view of your AKS cluster costs directly in the Azure portal Cost Management views.
+
+For more information on Microsoft Cost Management, see [Start analyzing costs in Azure](/azure/cost-management-billing/costs/quick-acm-cost-analysis).
+
+After enabling the cost analysis add-on and allowing time for data to be collected, you can use the information in [Understand AKS usage and costs](./understand-aks-costs.md) to help you understand your data.
+
+## Prerequisites
+
+* Your cluster must use the `Standard` or `Premium` tier, not the `Free` tier.
+* To view cost analysis information, you must have one of the following roles on the subscription hosting the cluster: `Owner`, `Contributor`, `Reader`, `Cost Management Contributor`, or `Cost Management Reader`.
+* [Microsoft Entra Workload ID](./workload-identity-overview.md) configured on your cluster.
+* If using the Azure CLI, you need version `2.61.0` or later installed. +* Once you have enabled cost analysis, you can't downgrade your cluster to the `Free` tier without first disabling cost analysis. +* Access to the Azure API including Azure Resource Manager (ARM) API. For a list of fully qualified domain names (FQDNs) required, see [AKS Cost Analysis required FQDN](./outbound-rules-control-egress.md#aks-cost-analysis-add-on). + +## Limitations + +* Kubernetes cost views are only available for the *Enterprise Agreement* and *Microsoft Customer Agreement* Microsoft Azure offer types. For more information, see [Supported Microsoft Azure offers](/azure/cost-management-billing/costs/understand-cost-mgt-data#supported-microsoft-azure-offers). +* Currently, virtual nodes aren't supported. + +## Enable cost analysis on your AKS cluster + +You can enable the cost analysis with the `--enable-cost-analysis` flag during one of the following operations: + +* Creating a `Standard` or `Premium` tier AKS cluster. +* Updating an existing `Standard` or `Premium` tier AKS cluster. +* Upgrading a `Free` cluster to `Standard` or `Premium`. +* Upgrading a `Standard` cluster to `Premium`. +* Downgrading a `Premium` cluster to `Standard` tier. + +### Enable cost analysis on a new cluster + +Enable cost analysis on a new cluster using the [`az aks create`][az-aks-create] command with the `--enable-cost-analysis` flag. The following example creates a new AKS cluster in the `Standard` tier with cost analysis enabled: + +```text +export RANDOM_SUFFIX=$(openssl rand -hex 3) +export RESOURCE_GROUP="AKSCostRG$RANDOM_SUFFIX" +export CLUSTER_NAME="AKSCostCluster$RANDOM_SUFFIX" +export LOCATION="WestUS2" +az aks create --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --location $LOCATION --enable-managed-identity --generate-ssh-keys --tier standard --enable-cost-analysis +``` + +Results: + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/AKSCostRGxxxx", + "location": "WestUS2", + "name": "AKSCostClusterxxxx", + "properties": { + "provisioningState": "Succeeded" + }, + "tags": null, + "type": "Microsoft.ContainerService/managedClusters" +} +``` + +### Enable cost analysis on an existing cluster + +Enable cost analysis on an existing cluster using the [`az aks update`][az-aks-update] command with the `--enable-cost-analysis` flag. The following example updates an existing AKS cluster in the `Standard` tier to enable cost analysis: + +```azurecli-interactive +az aks update --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --enable-cost-analysis +``` + +Results: + + + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/AKSCostRGxxxx", + "name": "AKSCostClusterxxxx", + "properties": { + "provisioningState": "Succeeded" + } +} +``` + +> [!NOTE] +> An agent is deployed to the cluster when you enable the add-on. The agent consumes a small amount of CPU and Memory resources. + +> [!WARNING] +> The AKS cost analysis add-on Memory usage is dependent on the number of containers deployed. You can roughly approximate Memory consumption using *200 MB + 0.5 MB per container*. The current Memory limit is set to *4 GB*, which supports approximately *7000 containers per cluster*. These estimates are subject to change. + +## Disable cost analysis on your AKS cluster + +Disable cost analysis using the [`az aks update`][az-aks-update] command with the `--disable-cost-analysis` flag. 
+ +```text +az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --disable-cost-analysis +``` + +Results: + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/AKSCostRGxxxx", + "name": "AKSCostClusterxxxx", + "properties": { + "provisioningState": "Succeeded" + } +} +``` + +> [!NOTE] +> If you want to downgrade your cluster from the `Standard` or `Premium` tier to the `Free` tier while cost analysis is enabled, you must first disable cost analysis. + +## View the cost data + +You can view cost allocation data in the Azure portal. For more information, see [View AKS costs in Microsoft Cost Management](/azure/cost-management-billing/costs/view-kubernetes-costs). + +### Cost definitions + +In the Kubernetes namespaces and assets views, you might see any of the following charges: + +* **Idle charges** represent the cost of available resource capacity that isn't used by any workloads. +* **Service charges** represent the charges associated with the service, like Uptime SLA, Microsoft Defender for Containers, etc. +* **System charges** represent the cost of capacity reserved by AKS on each node to run system processes required by the cluster, including the kubelet and container runtime. [Learn more](./concepts-clusters-workloads.md#resource-reservations). +* **Unallocated charges** represent the cost of resources that couldn't be allocated to namespaces. + +> [!NOTE] +> It might take *up to one day* for data to finalize. After 24 hours, any fluctuations in costs for the previous day will have stabilized. + +## Troubleshooting + +If you're experiencing issues, such as the `cost-agent` pod getting `OOMKilled` or stuck in a `Pending` state, see [Troubleshoot AKS cost analysis add-on issues](/troubleshoot/azure/azure-kubernetes/aks-cost-analysis-add-on-issues). + +## Next steps + +For more information on cost in AKS, see [Understand Azure Kubernetes Service (AKS) usage and costs](./understand-aks-costs.md). + + +[az-aks-create]: /cli/azure/aks#az-aks-create +[az-aks-update]: /cli/azure/aks#az-aks-update \ No newline at end of file diff --git a/scenarios/azure-aks-docs/articles/aks/istio-deploy-addon.md b/scenarios/azure-aks-docs/articles/aks/istio-deploy-addon.md new file mode 100644 index 000000000..41d2e9b1a --- /dev/null +++ b/scenarios/azure-aks-docs/articles/aks/istio-deploy-addon.md @@ -0,0 +1,237 @@ +--- +title: Deploy Istio-based service mesh add-on for Azure Kubernetes Service +description: Deploy Istio-based service mesh add-on for Azure Kubernetes Service +ms.topic: how-to +ms.custom: devx-track-azurecli, innovation-engine +ms.service: azure-kubernetes-service +ms.date: 03/28/2024 +ms.author: shasb +author: shashankbarsin +--- + +# Deploy Istio-based service mesh add-on for Azure Kubernetes Service + +This article shows you how to install the Istio-based service mesh add-on for Azure Kubernetes Service (AKS) cluster. + +For more information on Istio and the service mesh add-on, see [Istio-based service mesh add-on for Azure Kubernetes Service][istio-about]. + +## Before you begin + +* The add-on requires Azure CLI version 2.57.0 or later installed. You can run `az --version` to verify version. To install or upgrade, see [Install Azure CLI][azure-cli-install]. 
+* To find information about which Istio add-on revisions are available in a region and their compatibility with AKS cluster versions, use the command [`az aks mesh get-revisions`][az-aks-mesh-get-revisions]:
+
+    ```azurecli-interactive
+    az aks mesh get-revisions --location EastUS2 -o table
+    ```
+* In some cases, Istio CRDs from previous installations may not be automatically cleaned up on uninstall. Ensure existing Istio CRDs are deleted:
+
+    ```text
+    kubectl delete crd $(kubectl get crd -A | grep "istio.io" | awk '{print $1}')
+    ```
+    We also recommend cleaning up other resources from self-managed installations of Istio, such as ClusterRoles, MutatingWebhookConfigurations, and ValidatingWebhookConfigurations.
+
+* Note that if you choose to use any `istioctl` CLI commands, you will need to include a flag to point to the add-on installation of Istio: `--istioNamespace aks-istio-system`
+
+## Install Istio add-on
+
+This section includes steps to install the Istio add-on during cluster creation or enable it for an existing cluster using the Azure CLI. If you want to install the add-on using Bicep, see the guide for [installing an AKS cluster with the Istio service mesh add-on using Bicep][install-aks-cluster-istio-bicep]. To learn more about the Bicep resource definition for an AKS cluster, see [Bicep managedCluster reference][bicep-aks-resource-definition].
+
+### Revision selection
+
+If you enable the add-on without specifying a revision, a default supported revision is installed for you.
+
+To specify a revision, perform the following steps.
+
+1. Use the [`az aks mesh get-revisions`][az-aks-mesh-get-revisions] command to check which revisions are available for different AKS cluster versions in a region.
+1. Based on the available revisions, you can include the `--revision asm-X-Y` (ex: `--revision asm-1-20`) flag in the enable command you use for mesh installation.
+
+### Install mesh during cluster creation
+
+To install the Istio add-on when creating the cluster, use the `--enable-azure-service-mesh` or `--enable-asm` parameter.
+
+```text
+az group create --name ${RESOURCE_GROUP} --location ${LOCATION}
+```
+
+```text
+az aks create \
+    --resource-group ${RESOURCE_GROUP} \
+    --name ${CLUSTER} \
+    --enable-asm \
+    --generate-ssh-keys
+```
+
+### Install mesh for existing cluster
+
+The following example enables the Istio add-on for an existing AKS cluster:
+
+> [!IMPORTANT]
+> You can't enable the Istio add-on on an existing cluster if an OSM add-on is already on your cluster. Uninstall the OSM add-on before installing the Istio add-on.
+> For more information, see [uninstall the OSM add-on from your AKS cluster][uninstall-osm-addon].
+> The Istio add-on can only be enabled on AKS clusters of version >= 1.23.
+
+```bash
+az aks mesh enable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+## Verify successful installation
+
+To verify the Istio add-on is installed on your cluster, run the following command:
+
+```azurecli-interactive
+az aks show --resource-group ${RESOURCE_GROUP} --name ${CLUSTER} --query 'serviceMeshProfile.mode'
+```
+
+Confirm the output shows `Istio`.
+
+Use `az aks get-credentials` to retrieve the credentials for your AKS cluster:
+
+```azurecli-interactive
+az aks get-credentials --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+Use `kubectl` to verify that `istiod` (Istio control plane) pods are running successfully:
+
+```bash
+kubectl get pods -n aks-istio-system
+```
+
+Confirm the `istiod` pod has a status of `Running`.
For example:
+
+```output
+NAME                               READY   STATUS    RESTARTS   AGE
+istiod-asm-1-18-74f7f7c46c-xfdtl   1/1     Running   0          2m
+istiod-asm-1-18-74f7f7c46c-4nt2v   1/1     Running   0          2m
+```
+
+## Enable sidecar injection
+
+To automatically inject sidecars into any new pods, you need to label your namespaces with the revision label corresponding to the control plane revision currently installed.
+
+If you're unsure which revision is installed, use:
+
+```azurecli-interactive
+az aks show --resource-group ${RESOURCE_GROUP} --name ${CLUSTER} --query 'serviceMeshProfile.istio.revisions'
+```
+
+Apply the revision label:
+
+```bash
+kubectl label namespace default istio.io/rev=asm-X-Y
+```
+
+> [!IMPORTANT]
+> The default `istio-injection=enabled` labeling doesn't work. Explicit versioning matching the control plane revision (ex: `istio.io/rev=asm-1-18`) is required.
+
+For manual injection of the sidecar using `istioctl kube-inject`, you need to specify extra parameters for `istioNamespace` (`-i`) and `revision` (`-r`). For example:
+
+```text
+kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r asm-X-Y) -n foo
+```
+
+## Trigger sidecar injection
+
+You can either deploy the sample application provided for testing, or trigger sidecar injection for existing workloads.
+
+### Existing applications
+
+If you have existing applications to be added to the mesh, ensure their namespaces are labeled as in the previous step, and then restart their deployments to trigger sidecar injection:
+
+```text
+kubectl rollout restart deployment/<deployment-name> -n <namespace>
+```
+
+Verify that sidecar injection succeeded by ensuring all containers are ready and looking for the `istio-proxy` container in the `kubectl describe` output, for example:
+
+```text
+kubectl describe pod <pod-name> -n <namespace>
+```
+
+The `istio-proxy` container is the Envoy sidecar. Your application is now part of the data plane.
+
+### Deploy sample application
+
+Use `kubectl apply` to deploy the sample application on the cluster:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/bookinfo/platform/kube/bookinfo.yaml
+```
+
+> [!NOTE]
+> Clusters using an HTTP proxy for outbound internet access will need to set up a Service Entry. For setup instructions, see [HTTP proxy support in Azure Kubernetes Service](./http-proxy.md#istio-add-on-http-proxy-for-external-services).
+
+Confirm several deployments and services are created on your cluster.
For example: + +```output +service/details created +serviceaccount/bookinfo-details created +deployment.apps/details-v1 created +service/ratings created +serviceaccount/bookinfo-ratings created +deployment.apps/ratings-v1 created +service/reviews created +serviceaccount/bookinfo-reviews created +deployment.apps/reviews-v1 created +deployment.apps/reviews-v2 created +deployment.apps/reviews-v3 created +service/productpage created +serviceaccount/bookinfo-productpage created +deployment.apps/productpage-v1 created +``` + +Use `kubectl get services` to verify that the services were created successfully: + +```bash +kubectl get services +``` + +Confirm the following services were deployed: + +```output +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +details ClusterIP 10.0.180.193 9080/TCP 87s +kubernetes ClusterIP 10.0.0.1 443/TCP 15m +productpage ClusterIP 10.0.112.238 9080/TCP 86s +ratings ClusterIP 10.0.15.201 9080/TCP 86s +reviews ClusterIP 10.0.73.95 9080/TCP 86s +``` + +```bash +kubectl get pods +``` + +```output +NAME READY STATUS RESTARTS AGE +details-v1-558b8b4b76-2llld 2/2 Running 0 2m41s +productpage-v1-6987489c74-lpkgl 2/2 Running 0 2m40s +ratings-v1-7dc98c7588-vzftc 2/2 Running 0 2m41s +reviews-v1-7f99cc4496-gdxfn 2/2 Running 0 2m41s +reviews-v2-7d79d5bd5d-8zzqd 2/2 Running 0 2m41s +reviews-v3-7dbcdcbc56-m8dph 2/2 Running 0 2m41s +``` + +Confirm that all the pods have status of `Running` with two containers in the `READY` column. The second container (`istio-proxy`) added to each pod is the Envoy sidecar injected by Istio, and the other is the application container. + +To test this sample application against ingress, check out [next-steps](#next-steps). + +## Next steps + +* [Deploy external or internal ingresses for Istio service mesh add-on][istio-deploy-ingress] +* [Scale istiod and ingress gateway HPA][istio-scaling-guide] +* [Collect metrics for Istio service mesh add-on workloads in Azure Managed Prometheus][istio-metrics-managed-prometheus] + + +[install-aks-cluster-istio-bicep]: https://github.com/Azure-Samples/aks-istio-addon-bicep +[uninstall-istio-oss]: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio + + +[istio-about]: istio-about.md +[azure-cli-install]: /cli/azure/install-azure-cli +[az-feature-register]: /cli/azure/feature#az-feature-register +[az-feature-show]: /cli/azure/feature#az-feature-show +[az-provider-register]: /cli/azure/provider#az-provider-register +[uninstall-osm-addon]: open-service-mesh-uninstall-add-on.md +[istio-deploy-ingress]: istio-deploy-ingress.md +[az-aks-mesh-get-revisions]: /cli/azure/aks/mesh#az-aks-mesh-get-revisions(aks-preview) +[bicep-aks-resource-definition]: /azure/templates/microsoft.containerservice/managedclusters +[istio-scaling-guide]: istio-scale.md#scaling +[istio-metrics-managed-prometheus]: istio-metrics-managed-prometheus.md \ No newline at end of file diff --git a/scenarios/azure-aks-docs/articles/aks/learn/quick-windows-container-deploy-cli.md b/scenarios/azure-aks-docs/articles/aks/learn/quick-windows-container-deploy-cli.md new file mode 100644 index 000000000..bb683c6b4 --- /dev/null +++ b/scenarios/azure-aks-docs/articles/aks/learn/quick-windows-container-deploy-cli.md @@ -0,0 +1,354 @@ +--- +title: Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using Azure CLI +description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using Azure CLI. 
+ms.topic: quickstart
+ms.custom: devx-track-azurecli, innovation-engine
+ms.date: 01/11/2024
+author: schaffererin
+ms.author: schaffererin
+---
+
+# Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using Azure CLI
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you use Azure CLI to deploy an AKS cluster that runs Windows Server containers. You also deploy an ASP.NET sample application in a Windows Server container to the cluster.
+
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+
+## Before you begin
+
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
+
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
+
+[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+
+- This article requires version 2.0.64 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed there.
+- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account set](/cli/azure/account#az-account-set) command. For more information, see [How to manage Azure subscriptions – Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli?tabs=bash#change-the-active-subscription).
+
+## Create a resource group
+
+An [Azure resource group](/azure/azure-resource-manager/management/overview) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're asked to specify a location. This location is where resource group metadata is stored and where your resources run in Azure if you don't specify another region during resource creation.
+
+- Create a resource group using the [az group create][az-group-create] command. The following example creates a resource group with a randomized name based on *myAKSResourceGroup* in the *canadacentral* region.
Enter this command and other commands in this article into a BASH shell:
+
+```bash
+export RANDOM_SUFFIX=$(openssl rand -hex 3)
+export REGION="canadacentral"
+export MY_RESOURCE_GROUP_NAME="myAKSResourceGroup$RANDOM_SUFFIX"
+az group create --name $MY_RESOURCE_GROUP_NAME --location $REGION
+```
+
+Results:
+
+```JSON
+{
+  "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myAKSResourceGroupxxxxx",
+  "location": "canadacentral",
+  "managedBy": null,
+  "name": "myAKSResourceGroupxxxxx",
+  "properties": {
+    "provisioningState": "Succeeded"
+  },
+  "tags": null,
+  "type": "Microsoft.Resources/resourceGroups"
+}
+```
+
+## Create an AKS cluster
+
+In this section, we create an AKS cluster with the following configuration:
+
+- The cluster is configured with two nodes to ensure it operates reliably. A [node](../concepts-clusters-workloads.md#nodes) is an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime.
+- The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password].
+- The node pool uses `VirtualMachineScaleSets`.
+
+To create the AKS cluster with Azure CLI, follow these steps:
+
+1. Create a username to use as administrator credentials for the Windows Server nodes on your cluster. (The original example prompted for input; in this Exec Doc, the environment variable is set non-interactively.)
+
+```bash
+export WINDOWS_USERNAME="winadmin"
+```
+
+2. Create a password for the administrator username you created in the previous step. The password must be a minimum of 14 characters and meet the [Windows Server password complexity requirements][windows-server-password].
+
+```bash
+export WINDOWS_PASSWORD=$(echo "P@ssw0rd$(openssl rand -base64 10 | tr -dc 'A-Za-z0-9!@#$%^&*()' | cut -c1-6)")
+```
+
+3. Create your cluster using the [az aks create][az-aks-create] command and specify the `--windows-admin-username` and `--windows-admin-password` parameters. The following example command creates a cluster using the values from *WINDOWS_USERNAME* and *WINDOWS_PASSWORD* you set in the previous commands. A random suffix is appended to the cluster name for uniqueness.
+
+```bash
+export MY_AKS_CLUSTER="myAKSCluster$RANDOM_SUFFIX"
+az aks create \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --name $MY_AKS_CLUSTER \
+    --node-count 2 \
+    --enable-addons monitoring \
+    --generate-ssh-keys \
+    --windows-admin-username $WINDOWS_USERNAME \
+    --windows-admin-password $WINDOWS_PASSWORD \
+    --vm-set-type VirtualMachineScaleSets \
+    --network-plugin azure
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally, provisioning can take longer than a few minutes; allow up to 10 minutes.
+
+If you get a password validation error, and the password that you set meets the length and complexity requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
+
+If you don't specify an administrator username and password when creating the node pool, the username is set to *azureuser* and the password is set to a random value. For more information, see the [Windows Server FAQ](../windows-faq.yml).
+
+The administrator username can't be changed, but you can change the administrator password that your AKS cluster uses for Windows Server nodes using `az aks update`.
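+For example, the following sketch rotates the password. The `$NEW_WINDOWS_PASSWORD` variable and its placeholder value are illustrative assumptions; the new password must meet the same length and complexity requirements:
+
+```text
+export NEW_WINDOWS_PASSWORD="<new-password-meeting-requirements>"
+az aks update \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --name $MY_AKS_CLUSTER \
+    --windows-admin-password $NEW_WINDOWS_PASSWORD
+```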
For more information, see [Windows Server FAQ](../windows-faq.yml). + +To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI (advanced)][azure-cni] network plugin. The `--network-plugin azure` parameter specifies Azure CNI. + +## Add a node pool + +By default, an AKS cluster is created with a node pool that can run Linux containers. You must add another node pool that can run Windows Server containers alongside the Linux node pool. + +Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. If you don't specify a particular OS SKU, Azure creates the new node pool with the default SKU for the version of Kubernetes used by the cluster. + +### [Windows node pool (default SKU)](#tab/add-windows-node-pool) + +To use the default OS SKU, create the node pool without specifying an OS SKU. The node pool is configured for the default operating system based on the Kubernetes version of the cluster. + +Add a Windows node pool using the `az aks nodepool add` command. The following command creates a new node pool named *npwin* and adds it to *myAKSCluster*. The command also uses the default subnet in the default virtual network created when running `az aks create`. An OS SKU isn't specified, so the node pool is set to the default operating system based on the Kubernetes version of the cluster: + +```text +az aks nodepool add \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --cluster-name $MY_AKS_CLUSTER \ + --os-type Windows \ + --name npwin \ + --node-count 1 +``` + +### [Windows Server 2022 node pool](#tab/add-windows-server-2022-node-pool) + +To use Windows Server 2022, specify the following parameters: + +- `os-type` set to `Windows` +- `os-sku` set to `Windows2022` + +> [!NOTE] +> Windows Server 2022 requires Kubernetes version 1.23.0 or higher. Windows Server 2022 is being retired after Kubernetes version 1.34 reaches its end of support. Windows Server 2022 will not be supported in Kubernetes version 1.35 and above. For more information about this retirement, see the [AKS release notes][aks-release-notes]. + +Add a Windows Server 2022 node pool using the `az aks nodepool add` command: + +```text +az aks nodepool add \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --cluster-name $MY_AKS_CLUSTER \ + --os-type Windows \ + --os-sku Windows2022 \ + --name npwin \ + --node-count 1 +``` + +### [Windows Server 2019 node pool](#tab/add-windows-server-2019-node-pool) + +To use Windows Server 2019, specify the following parameters: + +- `os-type` set to `Windows` +- `os-sku` set to `Windows2019` + +> [!NOTE] +> Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of support. Windows Server 2019 will not be supported in Kubernetes version 1.33 and above. For more information about this retirement, see the [AKS release notes][aks-release-notes]. + +Add a Windows Server 2019 node pool using the `az aks nodepool add` command: + +```text +az aks nodepool add \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --cluster-name $MY_AKS_CLUSTER \ + --os-type Windows \ + --os-sku Windows2019 \ + --name npwin \ + --node-count 1 +``` + +## Connect to the cluster + +You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. 
If you want to install and run `kubectl` locally, call the [az aks install-cli][az-aks-install-cli] command. + +1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. + +```bash +az aks get-credentials --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_AKS_CLUSTER +``` + +2. Verify the connection to your cluster using the [kubectl get][kubectl-get] command, which returns a list of the cluster nodes. + +```bash +kubectl get nodes -o wide +``` + +The following sample output shows all nodes in the cluster. Make sure the status of all nodes is *Ready*: + + + +```text +NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME +aks-nodepool1-20786768-vmss000000 Ready agent 22h v1.27.7 10.224.0.4 Ubuntu 22.04.3 LTS 5.15.0-1052-azure containerd://1.7.5-1 +aks-nodepool1-20786768-vmss000001 Ready agent 22h v1.27.7 10.224.0.33 Ubuntu 22.04.3 LTS 5.15.0-1052-azure containerd://1.7.5-1 +aksnpwin000000 Ready agent 20h v1.27.7 10.224.0.62 Windows Server 2022 Datacenter 10.0.20348.2159 containerd://1.6.21+azure +``` + +> [!NOTE] +> The container runtime for each node pool is shown under *CONTAINER-RUNTIME*. The container runtime values begin with `containerd://`, which means that they each use `containerd` for the container runtime. + +## Deploy the application + +A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. In this article, you use a manifest to create all objects needed to run the ASP.NET sample application in a Windows Server container. This manifest includes a [Kubernetes deployment][kubernetes-deployment] for the ASP.NET sample application and an external [Kubernetes service][kubernetes-service] to access the application from the internet. + +The ASP.NET sample application is provided as part of the [.NET Framework Samples][dotnet-samples] and runs in a Windows Server container. AKS requires Windows Server containers to be based on images of *Windows Server 2019* or greater. The Kubernetes manifest file must also define a [node selector][node-selector] to tell your AKS cluster to run your ASP.NET sample application's pod on a node that can run Windows Server containers. + +1. Create a file named `sample.yaml` and copy in the following YAML definition. + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: sample + labels: + app: sample +spec: + replicas: 1 + template: + metadata: + name: sample + labels: + app: sample + spec: + nodeSelector: + "kubernetes.io/os": windows + containers: + - name: sample + image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp + resources: + limits: + cpu: 1 + memory: 800M + ports: + - containerPort: 80 + selector: + matchLabels: + app: sample +--- +apiVersion: v1 +kind: Service +metadata: + name: sample +spec: + type: LoadBalancer + ports: + - protocol: TCP + port: 80 + selector: + app: sample +``` + +For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + +If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system. + +2. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest. 
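+Optionally, you can first validate the manifest with a client-side dry run, which parses the manifest without creating any resources:
+
+```text
+kubectl apply -f sample.yaml --dry-run=client
+```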
+
+```bash
+kubectl apply -f sample.yaml
+```
+
+The following sample output shows the deployment and service created successfully:
+
+```output
+deployment.apps/sample created
+service/sample created
+```
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete; allow up to 10 minutes for provisioning.
+
+1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+
+```bash
+kubectl get pods
+```
+
+2. Monitor progress by polling the [kubectl get service][kubectl-get] command until an external IP address is assigned. The following loop checks every five seconds and prints the service details once the IP is available.
+
+```bash
+while true; do
+  export EXTERNAL_IP=$(kubectl get service sample -o jsonpath="{.status.loadBalancer.ingress[0].ip}" 2>/dev/null)
+  if [[ -n "$EXTERNAL_IP" && "$EXTERNAL_IP" != "" ]]; then
+    kubectl get service sample
+    break
+  fi
+  echo "Still waiting for external IP assignment..."
+  sleep 5
+done
+```
+
+Initially, the output shows the *EXTERNAL-IP* for the sample service as *pending*:
+
+```text
+NAME     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)       AGE
+sample   LoadBalancer   xx.xx.xx.xx   pending       xx:xxxx/TCP   2m
+```
+
+When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, the loop prints the service details and exits. The following sample output shows a valid public IP address assigned to the service:
+
+```output
+NAME     TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
+sample   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
+```
+
+See the sample app in action by opening a web browser to the external IP address of your service after a few minutes.
+
+:::image type="content" source="media/quick-windows-container-deploy-cli/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application." lightbox="media/quick-windows-container-deploy-cli/asp-net-sample-app.png":::
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then deployed an ASP.NET sample application in a Windows Server container to it. This sample application is for demo purposes only and doesn't represent all the best practices for Kubernetes applications. For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
+
+To learn more about AKS, and to walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
+ +> [!div class="nextstepaction"] +> [AKS tutorial][aks-tutorial] + + +[kubectl]: https://kubernetes.io/docs/reference/kubectl/ +[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply +[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get +[node-selector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ +[dotnet-samples]: https://hub.docker.com/_/microsoft-dotnet-framework-samples/ +[azure-cni]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md +[aks-release-notes]: https://github.com/Azure/AKS/releases + + +[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md +[az-aks-create]: /cli/azure/aks#az_aks_create +[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials +[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli +[az-group-create]: /cli/azure/group#az_group_create +[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json +[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests +[kubernetes-service]: ../concepts-network-services.md +[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference +[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json \ No newline at end of file diff --git a/scenarios/azure-aks-docs/articles/aks/learn/sample.yaml b/scenarios/azure-aks-docs/articles/aks/learn/sample.yaml new file mode 100644 index 000000000..926ff7496 --- /dev/null +++ b/scenarios/azure-aks-docs/articles/aks/learn/sample.yaml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: sample + labels: + app: sample +spec: + replicas: 1 + template: + metadata: + name: sample + labels: + app: sample + spec: + nodeSelector: + "kubernetes.io/os": windows + containers: + - name: sample + image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp + resources: + limits: + cpu: 1 + memory: 800M + ports: + - containerPort: 80 + selector: + matchLabels: + app: sample +--- +apiVersion: v1 +kind: Service +metadata: + name: sample +spec: + type: LoadBalancer + ports: + - protocol: TCP + port: 80 + selector: + app: sample \ No newline at end of file diff --git a/scenarios/azure-aks-docs/articles/aks/node-image-upgrade.md b/scenarios/azure-aks-docs/articles/aks/node-image-upgrade.md new file mode 100644 index 000000000..b92230640 --- /dev/null +++ b/scenarios/azure-aks-docs/articles/aks/node-image-upgrade.md @@ -0,0 +1,173 @@ +--- +title: Upgrade Azure Kubernetes Service (AKS) node images +description: Learn how to upgrade the images on AKS cluster nodes and node pools. +ms.topic: how-to +ms.custom: devx-track-azurecli, innovation-engine +ms.subservice: aks-upgrade +ms.service: azure-kubernetes-service +ms.date: 09/20/2024 +author: schaffererin +ms.author: schaffererin +--- + +# Upgrade Azure Kubernetes Service (AKS) node images + +Azure Kubernetes Service (AKS) regularly provides new node images, so it's beneficial to upgrade your node images frequently to use the latest AKS features. Linux node images are updated weekly, and Windows node images are updated monthly. 
Image upgrade announcements are included in the [AKS release notes](https://github.com/Azure/AKS/releases), and it can take up to a week for these updates to be rolled out across all regions. You can also perform node image upgrades automatically and schedule them using planned maintenance. For more information, see [Automatically upgrade node images][auto-upgrade-node-image]. + +This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the Kubernetes version. For information on upgrading the Kubernetes version for your cluster, see [Upgrade an AKS cluster][upgrade-cluster]. + +> [!NOTE] +> The AKS cluster must use virtual machine scale sets for the nodes. +> +> It's not possible to downgrade a node image version (for example *AKSUbuntu-2204 to AKSUbuntu-1804*, or *AKSUbuntu-2204-202308.01.0 to AKSUbuntu-2204-202307.27.0*). + + +## Connect to your AKS cluster + +1. Connect to your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. + + ```azurecli-interactive + az aks get-credentials \ + --resource-group $AKS_RESOURCE_GROUP \ + --name $AKS_CLUSTER + ``` +## Check for available node image upgrades + +1. Check for available node image upgrades using the [`az aks nodepool get-upgrades`][az-aks-nodepool-get-upgrades] command. + + ```azurecli-interactive + az aks nodepool get-upgrades \ + --nodepool-name $AKS_NODEPOOL \ + --cluster-name $AKS_CLUSTER \ + --resource-group $AKS_RESOURCE_GROUP + ``` + +1. In the output, find and make note of the `latestNodeImageVersion` value. This value is the latest node image version available for your node pool. +1. Check your current node image version to compare with the latest version using the [`az aks nodepool show`][az-aks-nodepool-show] command. + + ```azurecli-interactive + az aks nodepool show \ + --resource-group $AKS_RESOURCE_GROUP \ + --cluster-name $AKS_CLUSTER \ + --name $AKS_NODEPOOL \ + --query nodeImageVersion + ``` + +1. If the `nodeImageVersion` value is different from the `latestNodeImageVersion`, you can upgrade your node image. + +## Upgrade all node images in all node pools + +1. Upgrade all node images in all node pools in your cluster using the [`az aks upgrade`][az-aks-upgrade] command with the `--node-image-only` flag. + + ```text + az aks upgrade \ + --resource-group $AKS_RESOURCE_GROUP \ + --name $AKS_CLUSTER \ + --node-image-only \ + --yes + ``` + +1. You can check the status of the node images using the `kubectl get nodes` command. + + > [!NOTE] + > This command might differ slightly depending on the shell you use. For more information on Windows and PowerShell environments, see the [Kubernetes JSONPath documentation][kubernetes-json-path]. + + ```bash + kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}' + ``` + +1. When the upgrade completes, use the [`az aks show`][az-aks-show] command to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property. + + ```azurecli-interactive + az aks show \ + --resource-group $AKS_RESOURCE_GROUP \ + --name $AKS_CLUSTER + ``` + +## Upgrade a specific node pool + +1. Update the OS image of a node pool without doing a Kubernetes cluster upgrade using the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command with the `--node-image-only` flag. 
+ + ```azurecli-interactive + az aks nodepool upgrade \ + --resource-group $AKS_RESOURCE_GROUP \ + --cluster-name $AKS_CLUSTER \ + --name $AKS_NODEPOOL \ + --node-image-only + ``` + +1. You can check the status of the node images with the `kubectl get nodes` command. + + > [!NOTE] + > This command may differ slightly depending on the shell you use. For more information on Windows and PowerShell environments, see the [Kubernetes JSONPath documentation][kubernetes-json-path]. + + ```bash + kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}' + ``` + +1. When the upgrade completes, use the [`az aks nodepool show`][az-aks-nodepool-show] command to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property. + + ```azurecli-interactive + az aks nodepool show \ + --resource-group $AKS_RESOURCE_GROUP \ + --cluster-name $AKS_CLUSTER \ + --name $AKS_NODEPOOL + ``` + +## Upgrade node images with node surge + +To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value. By default, AKS uses one extra node to configure upgrades. + +1. Upgrade node images with node surge using the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--max-surge` flag to configure the number of nodes used for upgrades. + + > [!NOTE] + > To learn more about the trade-offs of various `--max-surge` settings, see [Customize node surge upgrade][max-surge]. + + ```azurecli-interactive + az aks nodepool update \ + --resource-group $AKS_RESOURCE_GROUP \ + --cluster-name $AKS_CLUSTER \ + --name $AKS_NODEPOOL \ + --max-surge 33% \ + --no-wait + ``` + +1. You can check the status of the node images with the `kubectl get nodes` command. + + ```bash + kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}' + ``` + +1. Get the updated node pool details using the [`az aks nodepool show`][az-aks-nodepool-show] command. The current node image is shown in the `nodeImageVersion` property. + + ```azurecli-interactive + az aks nodepool show \ + --resource-group $AKS_RESOURCE_GROUP \ + --cluster-name $AKS_CLUSTER \ + --name $AKS_NODEPOOL + ``` + +## Next steps + +- For information about the latest node images, see the [AKS release notes](https://github.com/Azure/AKS/releases). +- Learn how to upgrade the Kubernetes version with [Upgrade an AKS cluster][upgrade-cluster]. +- [Automatically apply cluster and node pool upgrades with GitHub Actions][github-schedule]. +- Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools]. +- Learn about upgrading best practices with [AKS patch and upgrade guidance][upgrade-operators-guide]. 
+ + +[kubernetes-json-path]: https://kubernetes.io/docs/reference/kubectl/jsonpath/ + + +[upgrade-cluster]: upgrade-aks-cluster.md +[github-schedule]: node-upgrade-github-actions.md +[use-multiple-node-pools]: create-node-pools.md +[max-surge]: upgrade-aks-cluster.md#customize-node-surge-upgrade +[auto-upgrade-node-image]: auto-upgrade-node-image.md +[az-aks-nodepool-get-upgrades]: /cli/azure/aks/nodepool#az_aks_nodepool_get_upgrades +[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show +[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade +[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update +[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade +[az-aks-show]: /cli/azure/aks#az_aks_show +[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices \ No newline at end of file diff --git a/scenarios/azure-aks-docs/articles/aks/spot-node-pool.md b/scenarios/azure-aks-docs/articles/aks/spot-node-pool.md new file mode 100644 index 000000000..e093dcd16 --- /dev/null +++ b/scenarios/azure-aks-docs/articles/aks/spot-node-pool.md @@ -0,0 +1,240 @@ +--- +title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster +description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster. +ms.topic: how-to +ms.date: 03/29/2023 +author: schaffererin +ms.author: schaffererin +ms.subservice: aks-nodes +--- + +# Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster + +In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster. + +A Spot node pool is a node pool backed by an [Azure Spot Virtual Machine scale set][vmss-spot]. With Spot VMs in your AKS cluster, you can take advantage of unutilized Azure capacity with significant cost savings. The amount of available unutilized capacity varies based on many factors, such as node size, region, and time of day. + +When you deploy a Spot node pool, Azure allocates the Spot nodes if there's capacity available and deploys a Spot scale set that backs the Spot node pool in a single default domain. There's no SLA for the Spot nodes. There are no high availability guarantees. If Azure needs capacity back, the Azure infrastructure evicts the Spot nodes. + +Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads might be good candidates to schedule on a Spot node pool. + +## Before you begin + +* This article assumes a basic understanding of Kubernetes and Azure Load Balancer concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. +* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +* When you create a cluster to use a Spot node pool, the cluster must use Virtual Machine Scale Sets for node pools and the *Standard* SKU load balancer. You must also add another node pool after you create your cluster, which is covered in this tutorial. +* This article requires that you're running the Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. 
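+The commands that follow reference the cluster through `$RESOURCE_GROUP` and `$AKS_CLUSTER` environment variables. The following is a minimal sketch of creating a compatible cluster, using illustrative names and region (recent Azure CLI versions create Virtual Machine Scale Sets node pools and a *Standard* SKU load balancer by default):
+
+```text
+export RESOURCE_GROUP="mySpotResourceGroup"
+export AKS_CLUSTER="mySpotAKSCluster"
+
+az group create --name $RESOURCE_GROUP --location eastus2
+az aks create \
+    --resource-group $RESOURCE_GROUP \
+    --name $AKS_CLUSTER \
+    --node-count 1 \
+    --generate-ssh-keys
+```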
+ +### Limitations + +The following limitations apply when you create and manage AKS clusters with a Spot node pool: + +* A Spot node pool can't be a default node pool, it can only be used as a secondary pool. +* You can't upgrade the control plane and node pools at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time. +* A Spot node pool must use Virtual Machine Scale Sets. +* You can't change `ScaleSetPriority` or `SpotMaxPrice` after creation. +* When setting `SpotMaxPrice`, the value must be *-1* or a *positive value with up to five decimal places*. +* A Spot node pool has the `kubernetes.azure.com/scalesetpriority:spot` label, the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint, and the system pods have anti-affinity. +* You must add a [corresponding toleration][spot-toleration] and affinity to schedule workloads on a Spot node pool. + +## Add a Spot node pool to an AKS cluster + +When adding a Spot node pool to an existing cluster, it must be a cluster with multiple node pools enabled. When you create an AKS cluster with multiple node pools enabled, you create a node pool with a `priority` of `Regular` by default. To add a Spot node pool, you must specify `Spot` as the value for `priority`. For more details on creating an AKS cluster with multiple node pools, see [use multiple node pools][use-multiple-node-pools]. + +* Create a node pool with a `priority` of `Spot` using the [`az aks nodepool add`][az-aks-nodepool-add] command. + +```azurecli-interactive +export SPOT_NODEPOOL="spotnodepool" + +az aks nodepool add \ + --resource-group $RESOURCE_GROUP \ + --cluster-name $AKS_CLUSTER \ + --name $SPOT_NODEPOOL \ + --priority Spot \ + --eviction-policy Delete \ + --spot-max-price -1 \ + --enable-cluster-autoscaler \ + --min-count 1 \ + --max-count 3 \ + --no-wait +``` + +In the previous command, the `priority` of `Spot` makes the node pool a Spot node pool. The `eviction-policy` parameter is set to `Delete`, which is the default value. When you set the [eviction policy][eviction-policy] to `Delete`, nodes in the underlying scale set of the node pool are deleted when they're evicted. + +You can also set the eviction policy to `Deallocate`, which means that the nodes in the underlying scale set are set to the *stopped-deallocated* state upon eviction. Nodes in the *stopped-deallocated* state count against your compute quota and can cause issues with cluster scaling or upgrading. The `priority` and `eviction-policy` values can only be set during node pool creation. Those values can't be updated later. + +The previous command also enables the [cluster autoscaler][cluster-autoscaler], which we recommend using with Spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales the number of nodes up and down. For Spot node pools, the cluster autoscaler will scale up the number of nodes after an eviction if more nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the `maxCount` value associated with the cluster autoscaler. If you don't use a cluster autoscaler, upon eviction, the Spot pool will eventually decrease to *0* and require manual operation to receive any additional Spot nodes. + +> [!IMPORTANT] +> Only schedule workloads on Spot node pools that can handle interruptions, such as batch processing jobs and testing environments. 
We recommend you set up [taints and tolerations][taints-tolerations] on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on a Spot node pool. For example, the above command adds a taint of `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, so only pods with a corresponding toleration are scheduled on this node. + +## Verify the Spot node pool + +* Verify your node pool was added using the [`az aks nodepool show`][az-aks-nodepool-show] command and confirming the `scaleSetPriority` is `Spot`. + +```azurecli-interactive +az aks nodepool show --resource-group $RESOURCE_GROUP --cluster-name $AKS_CLUSTER --name $SPOT_NODEPOOL +``` + +Results: + + + +```JSON +{ + "artifactStreamingProfile": null, + "availabilityZones": null, + "capacityReservationGroupId": null, + "count": 3, + "creationData": null, + "currentOrchestratorVersion": "1.30.10", + "eTag": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", + "enableAutoScaling": true, + "enableCustomCaTrust": false, + "enableEncryptionAtHost": false, + "enableFips": false, + "enableNodePublicIp": false, + "enableUltraSsd": false, + "gatewayProfile": null, + "gpuInstanceProfile": null, + "gpuProfile": null, + "hostGroupId": null, + "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/xxxxxxxxxxxxxxxx/providers/Microsoft.ContainerService/managedClusters/xxxxxxxxxxxxxxxx/agentPools/xxxxxxxxxxxx", + "kubeletConfig": null, + "kubeletDiskType": "OS", + "linuxOsConfig": null, + "maxCount": 3, + "maxPods": 30, + "messageOfTheDay": null, + "minCount": 1, + "mode": "User", + "name": "xxxxxxxxxxxx", + "networkProfile": { + "allowedHostPorts": null, + "applicationSecurityGroups": null, + "nodePublicIpTags": null + }, + "nodeImageVersion": "AKSUbuntu-2204gen2containerd-xxxxxxxx.xx.x", + "nodeInitializationTaints": null, + "nodeLabels": { + "kubernetes.azure.com/scalesetpriority": "spot" + }, + "nodePublicIpPrefixId": null, + "nodeTaints": [ + "kubernetes.azure.com/scalesetpriority=spot:NoSchedule" + ], + "orchestratorVersion": "x.xx.xx", + "osDiskSizeGb": 128, + "osDiskType": "Managed", + "osSku": "Ubuntu", + "osType": "Linux", + "podIpAllocationMode": null, + "podSubnetId": null, + "powerState": { + "code": "Running" + }, + "provisioningState": "Creating", + "proximityPlacementGroupId": null, + "resourceGroup": "xxxxxxxxxxxxxxxx", + "scaleDownMode": "Delete", + "scaleSetEvictionPolicy": "Delete", + "scaleSetPriority": "Spot", + "securityProfile": { + "enableSecureBoot": false, + "enableVtpm": false, + "sshAccess": "LocalUser" + }, + "spotMaxPrice": -1.0, + "status": null, + "tags": null, + "type": "Microsoft.ContainerService/managedClusters/agentPools", + "typePropertiesType": "VirtualMachineScaleSets", + "upgradeSettings": { + "drainTimeoutInMinutes": null, + "maxSurge": null, + "maxUnavailable": null, + "nodeSoakDurationInMinutes": null, + "undrainableNodeBehavior": null + }, + "virtualMachineNodesStatus": null, + "virtualMachinesProfile": null, + "vmSize": "Standard_DS2_v2", + "vnetSubnetId": null, + "windowsProfile": null, + "workloadRuntime": "OCIContainer" +} +``` + +## Schedule a pod to run on the Spot node + +To schedule a pod to run on a Spot node, you can add a toleration and node affinity that corresponds to the taint applied to your Spot node. 
+ +The following example shows a portion of a YAML file that defines a toleration corresponding to the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint and a node affinity corresponding to the `kubernetes.azure.com/scalesetpriority=spot` label used in the previous step with `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution` node affinity rules: + +```yaml +spec: + containers: + - name: spot-example + tolerations: + - key: "kubernetes.azure.com/scalesetpriority" + operator: "Equal" + value: "spot" + effect: "NoSchedule" + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: "kubernetes.azure.com/scalesetpriority" + operator: In + values: + - "spot" + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: another-node-label-key + operator: In + values: + - another-node-label-value +``` + +When you deploy a pod with this toleration and node affinity, Kubernetes successfully schedules the pod on the nodes with the taint and label applied. In this example, the following rules apply: + +* The node *must* have a label with the key `kubernetes.azure.com/scalesetpriority`, and the value of that label *must* be `spot`. +* The node *preferably* has a label with the key `another-node-label-key`, and the value of that label *must* be `another-node-label-value`. + +For more information, see [Assigning pods to nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity). + +## Upgrade a Spot node pool + +When you upgrade a Spot node pool, AKS internally issues a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for Spot node pool upgrades. Outside of these changes, the behavior when upgrading Spot node pools is consistent with that of other node pool types. + +For more information on upgrading, see [Upgrade an AKS cluster][upgrade-cluster]. + +## Max price for a Spot pool + +[Pricing for Spot instances is variable][pricing-spot], based on region and SKU. For more information, see pricing information for [Linux][pricing-linux] and [Windows][pricing-windows]. + +With variable pricing, you have the option to set a max price, in US dollars (USD) using up to five decimal places. For example, the value *0.98765* would be a max price of *$0.98765 USD per hour*. If you set the max price to *-1*, the instance won't be evicted based on price. As long as there's capacity and quota available, the price for the instance will be the lower price of either the current price for a Spot instance or for a standard instance. + +## Next steps + +In this article, you learned how to add a Spot node pool to an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler]. 
+ + +[azure-cli-install]: /cli/azure/install-azure-cli +[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add +[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show +[cluster-autoscaler]: cluster-autoscaler.md +[eviction-policy]: /azure/virtual-machine-scale-sets/use-spot#eviction-policy +[kubernetes-concepts]: concepts-clusters-workloads.md +[operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md +[pricing-linux]: https://azure.microsoft.com/pricing/details/virtual-machine-scale-sets/linux/ +[pricing-spot]: /azure/virtual-machine-scale-sets/use-spot#pricing +[pricing-windows]: https://azure.microsoft.com/pricing/details/virtual-machine-scale-sets/windows/ +[spot-toleration]: #verify-the-spot-node-pool +[taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations +[use-multiple-node-pools]: create-node-pools.md +[vmss-spot]: /azure/virtual-machine-scale-sets/use-spot +[upgrade-cluster]: upgrade-cluster.md \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/container-instances/container-instances-vnet.md b/scenarios/azure-compute-docs/articles/container-instances/container-instances-vnet.md new file mode 100644 index 000000000..ec4f35ee9 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/container-instances/container-instances-vnet.md @@ -0,0 +1,410 @@ +--- +title: Deploy container group to Azure virtual network +description: Learn how to deploy a container group to a new or existing Azure virtual network via the Azure CLI. +ms.topic: how-to +ms.author: tomcassidy +author: tomvcassidy +ms.service: azure-container-instances +services: container-instances +ms.date: 09/09/2024 +ms.custom: devx-track-azurecli, innovation-engine +--- + +# Deploy container instances into an Azure virtual network + +[Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) provides secure, private networking for your Azure and on-premises resources. By deploying container groups into an Azure virtual network, your containers can communicate securely with other resources in the virtual network. + +This article shows how to use the [az container create][az-container-create] command in the Azure CLI to deploy container groups to either a new virtual network or an existing virtual network. + +> [!IMPORTANT] +> * Subnets must be delegated before using a virtual network +> * Before deploying container groups in virtual networks, we suggest checking the limitation first. For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md). +> * Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [available-regions][available-regions]. + +[!INCLUDE [network profile callout](./includes/network-profile-callout.md)] + +Examples in this article are formatted for the Bash shell. If you prefer another shell such as PowerShell or Command Prompt, adjust the line continuation characters accordingly. + +## Prerequisites + +### Define environment variables + +The automated deployment pathway uses the following environment variables and resource names throughout this guide. Users proceeding through the guide manually can use their own variables and names as preferred. 
```azurecli-interactive
export RANDOM_ID="$(openssl rand -hex 3)"
export MY_RESOURCE_GROUP_NAME="myACIResourceGroup$RANDOM_ID"
export MY_VNET_NAME="aci-vnet"
export MY_SUBNET_NAME="aci-subnet"
export MY_SUBNET_ID="/subscriptions/$(az account show --query id --output tsv)/resourceGroups/$MY_RESOURCE_GROUP_NAME/providers/Microsoft.Network/virtualNetworks/$MY_VNET_NAME/subnets/$MY_SUBNET_NAME"
export MY_APP_CONTAINER_NAME="appcontainer"
export MY_COMM_CHECKER_NAME="commchecker"
export MY_YAML_APP_CONTAINER_NAME="appcontaineryaml"
export MY_REGION="eastus2"
```

### Create a resource group

You need a resource group to manage all the resources used in the following examples. To create a resource group, use [az group create][az-group-create]:

```azurecli-interactive
az group create --name $MY_RESOURCE_GROUP_NAME --location $MY_REGION
```

A successful operation should produce output similar to the following JSON:

Results:

```json
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/resourceGroups/myACIResourceGroup123abc",
  "location": "eastus2",
  "managedBy": null,
  "name": "myACIResourceGroup123abc",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
```

## Deploy to new virtual network

> [!NOTE]
> A subnet IP range of /29 provides only three usable IP addresses, so we recommend always going one range larger (never smaller). For example, a /28 subnet gives you at least one spare IP address per container group. This buffer helps you avoid container groups that get stuck and can't start, restart, or stop.

To deploy to a new virtual network and have Azure create the network resources for you automatically, specify the following when you execute [az container create][az-container-create]:

* Virtual network name
* Virtual network address prefix in CIDR format
* Subnet name
* Subnet address prefix in CIDR format

The virtual network and subnet address prefixes specify the address spaces for the virtual network and subnet, respectively. These values are represented in Classless Inter-Domain Routing (CIDR) notation, for example `10.0.0.0/16`. For more information about working with subnets, see [Add, change, or delete a virtual network subnet](/azure/virtual-network/virtual-network-manage-subnet).

Once you deploy your first container group with this method, you can deploy to the same subnet by specifying the virtual network and subnet names, or the network profile that Azure automatically creates for you. Because Azure delegates the subnet to Azure Container Instances, you can deploy *only* container groups to the subnet.

### Example

The following [az container create][az-container-create] command specifies settings for a new virtual network and subnet. Provide the name of a resource group that was created in a region where container group deployments in a virtual network are [available](container-instances-region-availability.md). This command deploys the public Microsoft aci-helloworld container that runs a small Node.js web server serving a static web page. In the next section, you'll deploy a second container group to the same subnet and test communication between the two container instances.
+ +```azurecli-interactive +az container create \ + --name $MY_APP_CONTAINER_NAME \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --image mcr.microsoft.com/azuredocs/aci-helloworld \ + --vnet $MY_VNET_NAME \ + --vnet-address-prefix 10.0.0.0/16 \ + --subnet $MY_SUBNET_NAME \ + --subnet-address-prefix 10.0.0.0/24 \ + --os-type Linux \ + --cpu 1.0 \ + --memory 1.5 +``` + +A successful operation should produce output similar to the following JSON: + +Results: + + + +```json +{ + "confidentialComputeProperties": null, + "containers": [ + { + "command": null, + "environmentVariables": [], + "image": "mcr.microsoft.com/azuredocs/aci-helloworld", + "instanceView": { + "currentState": { + "detailStatus": "", + "exitCode": null, + "finishTime": null, + "startTime": "0000-00-00T00:00:00.000000+00:00", + "state": "Running" + }, + "events": [ + { + "count": 1, + "firstTimestamp": "0000-00-00T00:00:00+00:00", + "lastTimestamp": "0000-00-00T00:00:00+00:00", + "message": "Successfully pulled image \"mcr.microsoft.com/azuredocs/aci-helloworld@sha256:0000000000000000000000000000000000000000000000000000000000000000\"", + "name": "Pulled", + "type": "Normal" + }, + { + "count": 1, + "firstTimestamp": "0000-00-00T00:00:00+00:00", + "lastTimestamp": "0000-00-00T00:00:00+00:00", + "message": "pulling image \"mcr.microsoft.com/azuredocs/aci-helloworld@sha256:0000000000000000000000000000000000000000000000000000000000000000\"", + "name": "Pulling", + "type": "Normal" + }, + { + "count": 1, + "firstTimestamp": "0000-00-00T00:00:00+00:00", + "lastTimestamp": "0000-00-00T00:00:00+00:00", + "message": "Started container", + "name": "Started", + "type": "Normal" + } + ], + "previousState": null, + "restartCount": 0 + }, + "livenessProbe": null, + "name": "appcontainer", + "ports": [ + { + "port": 80, + "protocol": "TCP" + } + ], + "readinessProbe": null, + "resources": { + "limits": null, + "requests": { + "cpu": 1.0, + "gpu": null, + "memoryInGb": 1.5 + } + }, + "securityContext": null, + "volumeMounts": null + } + ], + "diagnostics": null, + "dnsConfig": null, + "encryptionProperties": null, + "extensions": null, + "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/resourceGroups/myACIResourceGroup123/providers/Microsoft.ContainerInstance/containerGroups/appcontainer", + "identity": null, + "imageRegistryCredentials": null, + "initContainers": [], + "instanceView": { + "events": [], + "state": "Running" + }, + "ipAddress": { + "autoGeneratedDomainNameLabelScope": null, + "dnsNameLabel": null, + "fqdn": null, + "ip": "10.0.0.4", + "ports": [ + { + "port": 80, + "protocol": "TCP" + } + ], + "type": "Private" + }, + "location": "eastus", + "name": "appcontainer", + "osType": "Linux", + "priority": null, + "provisioningState": "Succeeded", + "resourceGroup": "myACIResourceGroup123abc", + "restartPolicy": "Always", + "sku": "Standard", + "subnetIds": [ + { + "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/resourceGroups/myACIResourceGroup123/providers/Microsoft.Network/virtualNetworks/aci-vnet/subnets/aci-subnet", + "name": null, + "resourceGroup": "myACIResourceGroup123abc" + } + ], + "tags": {}, + "type": "Microsoft.ContainerInstance/containerGroups", + "volumes": null, + "zones": null +} +``` + +When you deploy to a new virtual network by using this method, the deployment can take a few minutes while the network resources are created. After the initial deployment, further container group deployments to the same subnet complete more quickly. 
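Before moving on, you can optionally confirm that Azure delegated the new subnet to Azure Container Instances. The following check is a sketch that reuses the environment variables defined earlier; the exact shape of the output can vary by API version.

```azurecli
az network vnet subnet show \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --vnet-name $MY_VNET_NAME \
  --name $MY_SUBNET_NAME \
  --query "delegations[].serviceName" \
  --output tsv
```

A delegated subnet returns `Microsoft.ContainerInstance/containerGroups`, which is why only container groups can be deployed to it.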
+ +## Deploy to existing virtual network + +To deploy a container group to an existing virtual network: + +1. Create a subnet within your existing virtual network, use an existing subnet in which a container group is already deployed, or use an existing subnet emptied of *all* other resources and configuration. The subnet that you use for container groups can contain only container groups. Before you deploy a container group to a subnet, you must explicitly delegate the subnet before provisioning. Once delegated, the subnet can be used only for container groups. If you attempt to deploy resources other than container groups to a delegated subnet, the operation fails. +1. Deploy a container group with [az container create][az-container-create] and specify one of the following: + * Virtual network name and subnet name + * Virtual network resource ID and subnet resource ID, which allows using a virtual network from a different resource group + +### Deploy using a YAML file + +You can also deploy a container group to an existing virtual network by using a YAML file, a [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), or another programmatic method such as with the Python SDK. + +For example, when using a YAML file, you can deploy to a virtual network with a subnet delegated to Azure Container Instances. Specify the following properties: + +* `ipAddress`: The private IP address settings for the container group. + * `ports`: The ports to open, if any. + * `protocol`: The protocol (TCP or UDP) for the opened port. +* `subnetIds`: The resource IDs of the subnets to be deployed to + * `id`: The resource ID of the subnet + * `name`: The name of the subnet + +This YAML creates a container group in your virtual network. Enter your container group name in the name fields and your subnet ID in the subnet ID field. We use *appcontaineryaml* for the name. If you need to find your subnet ID and no longer have access to previous outputs, you can use the [az container show][az-container-show] command to view it. Look for the `id` field under `subnetIds`. + +```YAML +apiVersion: '2021-07-01' +location: eastus +name: appcontaineryaml +properties: + containers: + - name: appcontaineryaml + properties: + image: mcr.microsoft.com/azuredocs/aci-helloworld + ports: + - port: 80 + protocol: TCP + resources: + requests: + cpu: 1.0 + memoryInGB: 1.5 + ipAddress: + type: Private + ports: + - protocol: tcp + port: '80' + osType: Linux + restartPolicy: Always + subnetIds: + - id: + name: default +tags: null +type: Microsoft.ContainerInstance/containerGroups +``` + +The following Bash command is for the automated deployment pathway. 
```bash
echo -e "apiVersion: '2021-07-01'\nlocation: $MY_REGION\nname: $MY_YAML_APP_CONTAINER_NAME\nproperties:\n  containers:\n  - name: $MY_YAML_APP_CONTAINER_NAME\n    properties:\n      image: mcr.microsoft.com/azuredocs/aci-helloworld\n      ports:\n      - port: 80\n        protocol: TCP\n      resources:\n        requests:\n          cpu: 1.0\n          memoryInGB: 1.5\n  ipAddress:\n    type: Private\n    ports:\n    - protocol: tcp\n      port: '80'\n  osType: Linux\n  restartPolicy: Always\n  subnetIds:\n  - id: $MY_SUBNET_ID\n    name: default\ntags: null\ntype: Microsoft.ContainerInstance/containerGroups" > container-instances-vnet.yaml
```

Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name for the `--file` parameter:

```azurecli-interactive
az container create --resource-group $MY_RESOURCE_GROUP_NAME \
  --file container-instances-vnet.yaml \
  --os-type Linux
```

The following Bash command is for the automated deployment pathway.

```bash
rm container-instances-vnet.yaml
```

Once the deployment completes, run the [az container list](/cli/azure/container#az_container_list) command to display the status of both container groups:

```azurecli-interactive
az container list --resource-group $MY_RESOURCE_GROUP_NAME --output table
```

The output should resemble the sample below:

Results:

```output
Name              ResourceGroup             Status     Image                                       IP:ports        Network    CPU/Memory       OsType    Location
----------------  ------------------------  ---------  ------------------------------------------  --------------  ---------  ---------------  --------  ----------
appcontainer      myACIResourceGroup123abc  Succeeded  mcr.microsoft.com/azuredocs/aci-helloworld  10.0.0.4:80,80  Private    1.0 core/1.5 gb  Linux     abcdef
appcontaineryaml  myACIResourceGroup123abc  Succeeded  mcr.microsoft.com/azuredocs/aci-helloworld  10.0.0.5:80,80  Private    1.0 core/1.5 gb  Linux     abcdef
```

### Demonstrate communication between container instances

The following example deploys a third container group to the same subnet created previously. Using an Alpine Linux image, it verifies communication between itself and the first container instance.

> [!NOTE]
> Due to rate limiting in effect for pulling public Docker images like the Alpine Linux one used here, you may receive an error in the form:
>
> (RegistryErrorResponse) An error response is received from the docker registry 'index.docker.io'. Please retry later.
> Code: RegistryErrorResponse
> Message: An error response is received from the docker registry 'index.docker.io'. Please retry later.

The following Bash command is for the automated deployment pathway.

```bash
echo -e "Due to rate limiting in effect for pulling public Docker images like the Alpine Linux one used here, you may receive an error in the form:\n\n(RegistryErrorResponse) An error response is received from the docker registry 'index.docker.io'. Please retry later.\nCode: RegistryErrorResponse\nMessage: An error response is received from the docker registry 'index.docker.io'. Please retry later.\n\nIf this occurs, the automated deployment will exit. You can try again or go to the end of the guide to see instructions for cleaning up your resources."
```

First, get the IP address of the first container group you deployed, the *appcontainer*:

```azurecli-interactive
az container show --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $MY_APP_CONTAINER_NAME \
  --query ipAddress.ip --output tsv
```

The output displays the IP address of the container group in the private subnet.
For example:

Results:

```output
10.0.0.4
```

In this deployment, the first container group receives the private IP address *10.0.0.4*. Now execute the following `az container create` command. This second container, *commchecker*, runs an Alpine Linux-based image and executes `wget` against the first container group's private IP address. If the IP address you retrieved differs, update the address in the `--command-line` argument to match.

```azurecli-interactive
az container create \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $MY_COMM_CHECKER_NAME \
  --image mcr.microsoft.com/devcontainers/base:alpine \
  --command-line "wget 10.0.0.4" \
  --restart-policy never \
  --vnet $MY_VNET_NAME \
  --subnet $MY_SUBNET_NAME \
  --os-type Linux \
  --cpu 1.0 \
  --memory 1.5
```

After this second container deployment completes, pull its logs so you can see the output of the `wget` command it executed:

```azurecli-interactive
az container logs --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_COMM_CHECKER_NAME
```

If the second container communicated successfully with the first, output is similar to:

```output
Connecting to 10.0.0.4 (10.0.0.4:80)
index.html           100% |*******************************|  1663   0:00:00 ETA
```

The log output should show that `wget` was able to connect and download the index file from the first container using its private IP address on the local subnet. Network traffic between the two container groups remained within the virtual network.

## Clean up resources

If you don't plan to continue using these resources, you can delete them to avoid Azure charges. You can clean up all the resources you used in this guide by deleting the resource group with the [az group delete][az-group-delete] command. Once deleted, **these resources are unrecoverable**.

## Next steps

* To deploy a new virtual network, subnet, network profile, and container group using a Resource Manager template, see [Create an Azure container group with virtual network](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet).

* To deploy Azure Container Instances that can pull images from an Azure Container Registry through a private endpoint, see [Deploy to Azure Container Instances from Azure Container Registry using a managed identity](../container-instances/using-azure-container-registry-mi.md).
[aci-vnet-01]: ./media/container-instances-vnet/aci-vnet-01.png

[aci-helloworld]: https://hub.docker.com/_/microsoft-azuredocs-aci-helloworld

[az-group-create]: /cli/azure/group#az-group-create
[az-container-create]: /cli/azure/container#az_container_create
[az-container-show]: /cli/azure/container#az_container_show
[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create
[az-group-delete]: /cli/azure/group#az-group-delete
[available-regions]: https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=container-instances
\ No newline at end of file
diff --git a/scenarios/azure-docs/articles/virtual-machine-scale-sets/.openpublishing.redirection.virtual-machine-scale-sets.json b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/.openpublishing.redirection.virtual-machine-scale-sets.json
similarity index 100%
rename from scenarios/azure-docs/articles/virtual-machine-scale-sets/.openpublishing.redirection.virtual-machine-scale-sets.json
rename to scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/.openpublishing.redirection.virtual-machine-scale-sets.json
diff --git a/scenarios/azure-docs/articles/virtual-machine-scale-sets/TOC.yml b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/TOC.yml
similarity index 100%
rename from scenarios/azure-docs/articles/virtual-machine-scale-sets/TOC.yml
rename to scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/TOC.yml
diff --git a/scenarios/azure-docs/articles/virtual-machine-scale-sets/breadcrumb/toc.yml b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/breadcrumb/toc.yml
similarity index 100%
rename from scenarios/azure-docs/articles/virtual-machine-scale-sets/breadcrumb/toc.yml
rename to scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/breadcrumb/toc.yml
diff --git a/scenarios/azure-docs/articles/virtual-machine-scale-sets/index.yml b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/index.yml
similarity index 100%
rename from scenarios/azure-docs/articles/virtual-machine-scale-sets/index.yml
rename to scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/index.yml
diff --git a/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
new file mode 100644
index 000000000..f3cc966c0
--- /dev/null
+++ b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
@@ -0,0 +1,147 @@
---
title: Tutorial - Autoscale a scale set with the Azure CLI
description: Learn how to use the Azure CLI to automatically scale a Virtual Machine Scale Set as CPU demand increases and decreases
author: ju-shim
ms.author: jushiman
ms.topic: tutorial
ms.service: azure-virtual-machine-scale-sets
ms.subservice: autoscale
ms.date: 06/14/2024
ms.reviewer: mimckitt
ms.custom: avverma, devx-track-azurecli, linux-related-content, innovation-engine
---

# Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI

When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app.
In this tutorial you learn how to: + +> [!div class="checklist"] +> * Use autoscale with a scale set +> * Create and use autoscale rules +> * Simulate CPU load to trigger autoscale rules +> * Monitor autoscale actions as demand changes + +[!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] + +[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] + +- This tutorial requires version 2.0.32 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. + +## Create a scale set +Create a resource group with [az group create](/cli/azure/group). + +```azurecli-interactive +export RANDOM_SUFFIX=$(openssl rand -hex 3) +export REGION="WestUS2" +export MY_RESOURCE_GROUP_NAME="myResourceGroup$RANDOM_SUFFIX" +az group create --name $MY_RESOURCE_GROUP_NAME --location $REGION +``` + +Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). The following example creates a scale set with an instance count of 2, generates SSH keys if they don't exist, and uses a valid image *Ubuntu2204*. + +```azurecli-interactive +export MY_SCALE_SET_NAME="myScaleSet$RANDOM_SUFFIX" +az vmss create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $MY_SCALE_SET_NAME \ + --image Ubuntu2204 \ + --orchestration-mode Flexible \ + --instance-count 2 \ + --admin-username azureuser \ + --generate-ssh-keys +``` + +## Define an autoscale profile +To enable autoscale on a scale set, you first define an autoscale profile. This profile defines the default, minimum, and maximum scale set capacity. These limits let you control cost by not continually creating VM instances, and balance acceptable performance with a minimum number of instances that remain in a scale-in event. Create an autoscale profile with [az monitor autoscale create](/cli/azure/monitor/autoscale#az-monitor-autoscale-create). The following example sets the default and minimum capacity of 2 VM instances, and a maximum of 10: + +```azurecli-interactive +az monitor autoscale create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --resource $MY_SCALE_SET_NAME \ + --resource-type Microsoft.Compute/virtualMachineScaleSets \ + --name autoscale \ + --min-count 2 \ + --max-count 10 \ + --count 2 +``` + +## Create a rule to autoscale out +If your application demand increases, the load on the VM instances in your scale set increases. If this increased load is consistent, rather than just a brief demand, you can configure autoscale rules to increase the number of VM instances. When these instances are created and your application is deployed, the scale set starts to distribute traffic to them through the load balancer. You control which metrics to monitor, how long the load must meet a given threshold, and how many VM instances to add. + +Create a rule with [az monitor autoscale rule create](/cli/azure/monitor/autoscale/rule#az-monitor-autoscale-rule-create) that increases the number of VM instances when the average CPU load is greater than 70% over a 5-minute period. When the rule triggers, the number of VM instances is increased by three. + +```azurecli-interactive +az monitor autoscale rule create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --autoscale-name autoscale \ + --condition "Percentage CPU > 70 avg 5m" \ + --scale out 3 +``` + +## Create a rule to autoscale in +When application demand decreases, the load on the VM instances drops. 
If this decreased load persists over a period of time, you can configure autoscale rules to decrease the number of VM instances in the scale set. This scale-in action helps reduce costs by running only the number of instances required to meet current demand.

Create another rule with [az monitor autoscale rule create](/cli/azure/monitor/autoscale/rule#az-monitor-autoscale-rule-create) that decreases the number of VM instances when the average CPU load drops below 30% over a 5-minute period. The following example scales in the number of VM instances by one.

```azurecli-interactive
az monitor autoscale rule create \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --autoscale-name autoscale \
  --condition "Percentage CPU < 30 avg 5m" \
  --scale in 1
```

## Simulate CPU load on scale set
To test the autoscale rules, simulate sustained CPU load on the VM instances in the scale set. This minimalist approach avoids installing additional packages by using the built-in `yes` command to generate CPU load. The following command starts three background processes that continuously write output to `/dev/null` for 60 seconds and then terminates them. Note that for the autoscale rules to react, the load must run on the scale set's VM instances themselves (for example, over an SSH session to each instance); load generated in your local shell doesn't affect the scale set's CPU metrics.

```bash
for i in {1..3}; do
  yes > /dev/null &
done
sleep 60
pkill yes
```

This approach simulates CPU load without installing any additional packages.

## Monitor the active autoscale rules
To monitor the number of VM instances in your scale set, run the following command periodically, or wrap it with `watch` and exit with *Ctrl+C* when you're done. It may take up to 5 minutes for the autoscale rules to begin the scale-out process in response to the CPU load.

Once scale-out begins, the scale set automatically increases the number of VM instances to meet the demand. The following command shows the list of VM instances in the scale set:

```azurecli-interactive
az vmss list-instances \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $MY_SCALE_SET_NAME \
  --output table
```

Once the CPU threshold has been met, the autoscale rules increase the number of VM instances in the scale set. The output shows the list of VM instances as new ones are created:

```output
  InstanceId  LatestModelApplied    Location    Name            ProvisioningState    ResourceGroup         VmId
------------  --------------------  ----------  --------------  -------------------  --------------------  ------------------------------------
           1  True                  WestUS2     myScaleSet_1    Succeeded            myResourceGroupxxxxx  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
           2  True                  WestUS2     myScaleSet_2    Succeeded            myResourceGroupxxxxx  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
           4  True                  WestUS2     myScaleSet_4    Creating             myResourceGroupxxxxx  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
           5  True                  WestUS2     myScaleSet_5    Creating             myResourceGroupxxxxx  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
           6  True                  WestUS2     myScaleSet_6    Creating             myResourceGroupxxxxx  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Once the CPU load subsides, the average CPU load returns to normal. After another 5 minutes, the autoscale rules then scale in the number of VM instances. Scale-in actions remove VM instances with the highest IDs first. When a scale set uses Availability Sets or Availability Zones, scale-in actions are evenly distributed across the VM instances.
The following sample output shows one VM instance being deleted as the scale set autoscales in:

```output
           6  True                  WestUS2     myScaleSet_6    Deleting             myResourceGroupxxxxx  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

## Clean up resources
To remove your scale set and associated resources, delete the resource group, for example with the [az group delete](/cli/azure/group#az-group-delete) command or through the Azure portal.

## Next steps
In this tutorial, you learned how to automatically scale in or out a scale set with the Azure CLI:

> [!div class="checklist"]
> * Use autoscale with a scale set
> * Create and use autoscale rules
> * Simulate CPU load to trigger autoscale rules
> * Monitor autoscale actions as demand changes
\ No newline at end of file
diff --git a/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md
new file mode 100644
index 000000000..94c5a5c89
--- /dev/null
+++ b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md
@@ -0,0 +1,438 @@
---
title: Modify an Azure Virtual Machine Scale Set using Azure CLI
description: Learn how to modify and update an Azure Virtual Machine Scale Set using Azure CLI
author: ju-shim
ms.author: jushiman
ms.topic: how-to
ms.service: azure-virtual-machine-scale-sets
ms.date: 06/14/2024
ms.reviewer: mimckitt
ms.custom: mimckitt, devx-track-azurecli, linux-related-content, innovation-engine
---

# Tutorial: Modify a Virtual Machine Scale Set using Azure CLI
Throughout the lifecycle of your applications, you may need to modify or update your Virtual Machine Scale Set. These updates might include changing the configuration of the scale set or changing the application configuration. This article describes how to modify an existing scale set using the Azure CLI.

Below, we declare environment variables that will be used throughout this document. A random suffix is appended to resource names that need to be unique for each deployment. The `REGION` is set to *WestUS2*.

## Set up the resource group
Before proceeding, ensure the resource group exists; the following command creates it if it doesn't already exist.

```bash
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export MY_RESOURCE_GROUP_NAME="myResourceGroup$RANDOM_SUFFIX"
export REGION="WestUS2"
az group create --name $MY_RESOURCE_GROUP_NAME --location $REGION
```

```JSON
{
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx",
  "location": "WestUS2",
  "managedBy": null,
  "name": "myResourceGroupxxx",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
```

## Create the Virtual Machine Scale Set
To ensure that subsequent update and query commands have a valid resource to work on, create a Virtual Machine Scale Set. In this step, we deploy a basic scale set using the *Ubuntu2204* image and set the instance count to 5 so that instance-specific updates can target an existing instance ID.
+ +```azurecli-interactive +export SCALE_SET_NAME="myScaleSet$RANDOM_SUFFIX" +az vmss create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $SCALE_SET_NAME \ + --image Ubuntu2204 \ + --upgrade-policy-mode manual \ + --instance-count 5 \ + --admin-username azureuser \ + --generate-ssh-keys +``` + + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSetxxx", + "location": "WestUS2", + "name": "myScaleSetxxx", + "provisioningState": "Succeeded" +} +``` + +## Update the scale set model +A scale set has a "scale set model" that captures the *desired* state of the scale set as a whole. To query the model for a scale set, you can use [az vmss show](/cli/azure/vmss#az-vmss-show): + +```azurecli +az vmss show --resource-group $MY_RESOURCE_GROUP_NAME --name $SCALE_SET_NAME +``` + +The exact presentation of the output depends on the options you provide to the command. The following example shows condensed sample output from the Azure CLI: + +```output +{ + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSetxxx", + "location": "WestUS2", + "name": "myScaleSetxxx", + "orchestrationMode": "Flexible", + "platformFaultDomainCount": 1, + "resourceGroup": "myResourceGroupxxx", + "sku": { + "capacity": 5, + "name": "Standard_DS1_v2", + "tier": "Standard" + }, + "timeCreated": "2022-11-29T22:16:43.250912+00:00", + "type": "Microsoft.Compute/virtualMachineScaleSets", + "networkProfile": { + "networkApiVersion": "2020-11-01", + "networkInterfaceConfigurations": [ + { + "deleteOption": "Delete", + "disableTcpStateTracking": false, + "dnsSettings": { + "dnsServers": [] + }, + "enableIpForwarding": false, + "ipConfigurations": [ + { + "applicationGatewayBackendAddressPools": [], + "applicationSecurityGroups": [], + "loadBalancerBackendAddressPools": [ + { + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Network/loadBalancers/myScaleSetLB/backendAddressPools/myScaleSetLBBEPool", + "resourceGroup": "myResourceGroupxxx" + } + ], + "name": "mysca2215IPConfig", + "privateIpAddressVersion": "IPv4", + "subnet": { + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Network/virtualNetworks/myScaleSetVNET/subnets/myScaleSetSubnet", + "resourceGroup": "myResourceGroupxxx" + } + } + ], + "name": "mysca2215Nic", + "networkSecurityGroup": { + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Network/networkSecurityGroups/myScaleSetNSG", + "resourceGroup": "myResourceGroupxxx" + }, + "primary": true + } + ] + }, + "osProfile": { + "allowExtensionOperations": true, + "computerNamePrefix": "myScaleS", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "enableVmAgentPlatformUpdates": false, + "patchSettings": { + "assessmentMode": "ImageDefault", + "patchMode": "ImageDefault" + }, + "provisionVmAgent": true + } + }, + "storageProfile": { + "imageReference": { + "offer": "UbuntuServer", + "publisher": "Canonical", + "sku": "22_04-lts", + "version": "latest" + }, + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "deleteOption": "Delete", + "diskSizeGb": 30, + "managedDisk": { + "storageAccountType": "Premium_LRS" + }, + "osType": "Linux" + } + } +} +``` + +You can use [az vmss update](/cli/azure/vmss#az-vmss-update) to update various properties of your scale set. 
For example, you can update the license type or an individual VM's instance protection policy. Because this scale set runs Linux, the allowed license type value here is *RHEL_BYOS* rather than *Windows_Server*.

```azurecli-interactive
az vmss update --name $SCALE_SET_NAME --resource-group $MY_RESOURCE_GROUP_NAME --license-type RHEL_BYOS
```

```azurecli-interactive
export INSTANCE_ID=$(az vmss list-instances \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $SCALE_SET_NAME \
  --query "[0].instanceId" \
  -o tsv)

az vmss update \
  --name $SCALE_SET_NAME \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --instance-id "$INSTANCE_ID" \
  --protect-from-scale-set-actions False \
  --protect-from-scale-in
```

Additionally, if you previously deployed the scale set with the `az vmss create` command, you can run the `az vmss create` command again to update the scale set. Make sure that all properties in the `az vmss create` command are the same as before, except for the properties that you wish to modify. For example, the following command changes the image and increases the OS disk size while keeping the instance count at five.

> [!IMPORTANT]
>Starting November 2023, VM scale sets created using PowerShell and Azure CLI will default to Flexible Orchestration Mode if no orchestration mode is specified. For more information about this change and what actions you should take, go to [Breaking Change for VMSS PowerShell/CLI Customers - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/azure-compute-blog/breaking-change-for-vmss-powershell-cli-customers/ba-p/3818295)

```azurecli-interactive
az vmss create \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $SCALE_SET_NAME \
  --orchestration-mode flexible \
  --image RHELRaw8LVMGen2 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --instance-count 5 \
  --os-disk-size-gb 64
```

## Updating individual VM instances in a scale set
Similar to how a scale set has a model view, each VM instance in the scale set has its own model view. To query the model view for a particular VM instance in a scale set, you can use [az vm show](/cli/azure/vm#az-vm-show).

```azurecli
export INSTANCE_NAME=$(az vmss list-instances \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $SCALE_SET_NAME \
  --query "[0].name" \
  -o tsv)

az vm show --resource-group $MY_RESOURCE_GROUP_NAME --name $INSTANCE_NAME
```

The exact presentation of the output depends on the options you provide to the command.
The following example shows condensed sample output from the Azure CLI: + +```output +{ + "hardwareProfile": { + "vmSize": "Standard_DS1_v2" + }, + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/virtualMachines/myScaleSet_Instance1", + "location": "WestUS2", + "name": "myScaleSet_Instance1", + "networkProfile": { + "networkInterfaces": [ + { + "deleteOption": "Delete", + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Network/networkInterfaces/mysca2215Nic-5cf164f7", + "primary": true, + "resourceGroup": "myResourceGroupxxx" + } + ] + }, + "osProfile": { + "allowExtensionOperations": true, + "computerName": "myScaleset_Computer1", + "linuxConfiguration": { + "disablePasswordAuthentication": true, + "enableVmAgentPlatformUpdates": false, + "patchSettings": { + "assessmentMode": "ImageDefault", + "patchMode": "ImageDefault" + }, + "provisionVmAgent": true + } + }, + "provisioningState": "Succeeded", + "resourceGroup": "myResourceGroupxxx", + "storageProfile": { + "dataDisks": [], + "imageReference": { + "exactVersion": "22.04.202204200", + "offer": "0001-com-ubuntu-server-jammy", + "publisher": "Canonical", + "sku": "22_04-lts", + "version": "latest" + }, + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "deleteOption": "Delete", + "diskSizeGb": 30, + "managedDisk": { + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/disks/myScaleSet_Instance1_disk1_xxx", + "resourceGroup": "myResourceGroupxxx", + "storageAccountType": "Premium_LRS" + }, + "name": "myScaleSet_Instance1_disk1_xxx", + "osType": "Linux" + } + }, + "timeCreated": "2022-11-29T22:16:44.500895+00:00", + "type": "Microsoft.Compute/virtualMachines", + "virtualMachineScaleSet": { + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSetxxx", + "resourceGroup": "myResourceGroupxxx" + } +} +``` + +These properties describe the configuration of a VM instance within a scale set, not the configuration of the scale set as a whole. + +You can perform updates to individual VM instances in a scale set just like you would a standalone VM. For example, attaching a new data disk to instance 1: + +```azurecli-interactive +az vm disk attach --resource-group $MY_RESOURCE_GROUP_NAME --vm-name $INSTANCE_NAME --name disk_name1 --new +``` + +Running [az vm show](/cli/azure/vm#az-vm-show) again, we now will see that the VM instance has the new disk attached. + +```output +{ + "storageProfile": { + "dataDisks": [ + { + "caching": "None", + "createOption": "Empty", + "deleteOption": "Detach", + "diskSizeGb": 1023, + "lun": 0, + "managedDisk": { + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/disks/disk_name1", + "resourceGroup": "myResourceGroupxxx", + "storageAccountType": "Premium_LRS" + }, + "name": "disk_name1", + "toBeDetached": false + } + ] + } +} +``` + +## Add an Instance to your scale set +There are times where you might want to add a new VM to your scale set but want different configuration options than those listed in the scale set model. VMs can be added to a scale set during creation by using the [az vm create](/cli/azure/vmss#az-vmss-create) command and specifying the scale set name you want the instance added to. 
```azurecli-interactive
export NEW_INSTANCE_NAME="myNewInstance$RANDOM_SUFFIX"
az vm create --name $NEW_INSTANCE_NAME --resource-group $MY_RESOURCE_GROUP_NAME --vmss $SCALE_SET_NAME --image RHELRaw8LVMGen2
```

```output
{
  "fqdns": "",
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/virtualMachines/myNewInstancexxx",
  "location": "WestUS2",
  "macAddress": "60-45-BD-D7-13-DD",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.6",
  "publicIpAddress": "20.172.144.96",
  "resourceGroup": "myResourceGroupxxx",
  "zones": ""
}
```

If we then check our scale set, we'll see the new instance added.

```azurecli-interactive
az vm list --resource-group $MY_RESOURCE_GROUP_NAME --output table
```

```output
Name                  ResourceGroup       Location
--------------------  ------------------  ----------
myNewInstancexxx      myResourceGroupxxx  WestUS2
myScaleSet_Instance1  myResourceGroupxxx  WestUS2
myScaleSet_Instance2  myResourceGroupxxx  WestUS2
```

## Bring VMs up-to-date with the latest scale set model

> [!NOTE]
> Upgrade modes are not currently supported on Virtual Machine Scale Sets using Flexible orchestration mode.

Scale sets have an "upgrade policy" that determines how VMs are brought up-to-date with the latest scale set model. The three modes for the upgrade policy are:

- **Automatic** - In this mode, the scale set makes no guarantees about the order of VMs being brought down. The scale set may take down all VMs at the same time.
- **Rolling** - In this mode, the scale set rolls out the update in batches with an optional pause time between batches.
- **Manual** - In this mode, when you update the scale set model, nothing happens to existing VMs until a manual update is triggered.

If your scale set is set to manual upgrades, you can trigger a manual upgrade of all instances using [az vmss update-instances](/cli/azure/vmss#az-vmss-update-instances).

```azurecli
az vmss update-instances --resource-group $MY_RESOURCE_GROUP_NAME --name $SCALE_SET_NAME --instance-ids "*"
```

>[!NOTE]
> Service Fabric clusters can only use *Automatic* mode, but the update is handled differently. For more information, see [Service Fabric application upgrades](../service-fabric/service-fabric-application-upgrade.md).

## Reimage a scale set
Virtual Machine Scale Sets generate a unique name for each VM in the scale set. The naming convention differs by orchestration mode:

- Flexible orchestration mode: {scale-set-name}_{8-char-guid}
- Uniform orchestration mode: {scale-set-name}_{instance-id}

In the cases where you need to reimage a specific instance, use [az vmss reimage](/cli/azure/vmss#az-vmss-reimage) and specify the instance ID. Another option is to use [az vm redeploy](/cli/azure/vm#az-vm-redeploy) to redeploy the VM directly. This command is useful if you want to refresh a VM without having to look up its instance ID.

```azurecli
# Get the VM name first
VM_NAME=$(az vmss list-instances \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $SCALE_SET_NAME \
  --query "[0].name" \
  -o tsv)

# Redeploy the VM directly
az vm redeploy \
  --resource-group $MY_RESOURCE_GROUP_NAME \
  --name $VM_NAME
```

## Update the OS image for your scale set
You may have a scale set that runs an old version of Ubuntu and want to update to a newer version, such as the latest. The image reference version property isn't part of a list, so you can directly modify this property using [az vmss update](/cli/azure/vmss#az-vmss-update).
```azurecli
az vmss update --resource-group $MY_RESOURCE_GROUP_NAME --name $SCALE_SET_NAME --set virtualMachineProfile.storageProfile.imageReference.version=latest
```

Alternatively, you may want to change the image your scale set uses, for example, a custom image. Because the image reference ID property isn't part of a list, you can modify it directly with [az vmss update](/cli/azure/vmss#az-vmss-update).

If you use Azure platform images, you can update the image by modifying the *imageReference* (for more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesets/createorupdate)).

>[!NOTE]
> With platform images, it is common to specify "latest" for the image reference version. When you create, scale out, and reimage, VMs are created with the latest available version. However, it **does not** mean that the OS image is automatically updated over time as new image versions are released. A separate feature provides automatic OS upgrades. For more information, see the [Automatic OS Upgrades documentation](virtual-machine-scale-sets-automatic-upgrade.md).

If you use custom images, you can update the image by updating the *imageReference* ID (for more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesets/createorupdate)).

## Update the load balancer for your scale set
Let's say you have a scale set with an Azure Load Balancer, and you want to replace the Azure Load Balancer with an Azure Application Gateway. The load balancer and Application Gateway properties for a scale set are part of a list, so you can use commands that remove or add list elements instead of modifying the properties directly.

```text
# Remove the load balancer backend pool from the scale set model
az vmss update --resource-group $MY_RESOURCE_GROUP_NAME --name $SCALE_SET_NAME --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools 0

# Remove the load balancer inbound NAT pool from the scale set model; only necessary if you have NAT pools configured on the scale set
az vmss update --resource-group $MY_RESOURCE_GROUP_NAME --name $SCALE_SET_NAME --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools 0

# Add the application gateway backend pool to the scale set model
az vmss update --resource-group $MY_RESOURCE_GROUP_NAME --name $SCALE_SET_NAME --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].ApplicationGatewayBackendAddressPools '{"id": "/subscriptions/xxxxx/resourceGroups/'$MY_RESOURCE_GROUP_NAME'/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/backendAddressPools/{applicationGatewayBackendPoolName}"}'
```

>[!NOTE]
> These commands assume there is only one IP configuration and load balancer on the scale set. If there are multiple, you may need to use a list index other than *0*.

## Next steps
In this tutorial, you learned how to modify various aspects of your scale set and individual instances.
+ +> [!div class="checklist"] +> * Update the scale set model +> * Update an individual VM instance in a scale set +> * Add an instance to your scale set +> * Bring VMs up-to-date with the latest scale set model +> * Reimage a scale set +> * Update the OS image for your scale set +> * Update the load balancer for your scale set + +> [!div class="nextstepaction"] +> [Use data disks with scale sets](tutorial-use-disks-powershell.md) \ No newline at end of file diff --git a/scenarios/azure-docs/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml b/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml similarity index 100% rename from scenarios/azure-docs/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml rename to scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/disks-enable-performance.md b/scenarios/azure-compute-docs/articles/virtual-machines/disks-enable-performance.md new file mode 100644 index 000000000..cb1c2373e --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/disks-enable-performance.md @@ -0,0 +1,241 @@ +--- +title: Preview - Increase performance of Premium SSDs and Standard SSD/HDDs +description: Increase the performance of Azure Premium SSDs and Standard SSD/HDDs using performance plus. +author: roygara +ms.service: azure-disk-storage +ms.topic: how-to +ms.date: 12/09/2024 +ms.author: rogarana +ms.custom: devx-track-azurepowershell +--- + +# Preview - Increase IOPS and throughput limits for Azure Premium SSDs and Standard SSD/HDDs + +The Input/Output Operations Per Second (IOPS) and throughput limits for Azure Premium solid-state drives (SSD), Standard SSDs, and Standard hard disk drives (HDD) that are 513 GiB and larger can be increased by enabling performance plus. Enabling performance plus (preview) improves the experience for workloads that require high IOPS and throughput, such as database and transactional workloads. There's no extra charge for enabling performance plus on a disk. + +Once enabled, the IOPS and throughput limits for an eligible disk increase to the higher maximum limits. To see the new IOPS and throughput limits for eligible disks, consult the columns that begin with "*Expanded" in the [Scalability and performance targets for VM disks](disks-scalability-targets.md) article. + +## Limitations + +- Can only be enabled on Standard HDD, Standard SSD, and Premium SSD managed disks that are 513 GiB or larger. +- Can only be enabled on new disks. + - To work around this, create a snapshot of your disk, then create a new disk from the snapshot. +- Not supported for disks recovered with Azure Site Recovery or Azure Backup. +- Can't be enabled in the Azure portal. + +## Prerequisites + +Either use the Azure Cloud Shell to run your commands or install a version of the [Azure PowerShell module](/powershell/azure/install-azure-powershell) 9.5 or newer, or a version of the [Azure CLI](/cli/azure/install-azure-cli) that is 2.44.0 or newer. + +## Enable performance plus + +You need to create a new disk to use performance plus. The following script creates a disk that has performance plus enabled and attach it to a VM: + +# [Azure CLI](#tab/azure-cli) + +### Create a resource group + +This step creates a resource group with a unique name. 
```azurecli
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export MY_RG="PerfPlusRG$RANDOM_SUFFIX"
export REGION="WestUS2"
az group create -g $MY_RG -l $REGION
```

Results:

```JSON
{
  "id": "/subscriptions/xxxxx/resourceGroups/PerfPlusRGxxx",
  "location": "WestUS2",
  "name": "PerfPlusRGxxx",
  "properties": {
    "provisioningState": "Succeeded"
  }
}
```

### Create a new disk with performance plus enabled

This step creates a new disk of 513 GiB (or larger) with performance plus enabled, using a supported SKU (here, *Premium_LRS*).

```azurecli
export MY_DISK="PerfPlusDisk$RANDOM_SUFFIX"
export SKU="Premium_LRS"
export DISK_SIZE=513
az disk create -g $MY_RG -n $MY_DISK --size-gb $DISK_SIZE --sku $SKU -l $REGION --performance-plus true
```

Results:

```JSON
{
  "id": "/subscriptions/xxxxx/resourceGroups/PerfPlusRGxxx/providers/Microsoft.Compute/disks/PerfPlusDiskxxx",
  "location": "WestUS2",
  "name": "PerfPlusDiskxxx",
  "properties": {
    "provisioningState": "Succeeded",
    "diskSizeGb": 513,
    "sku": "Premium_LRS",
    "performancePlus": true
  },
  "type": "Microsoft.Compute/disks"
}
```

### Attempt to attach the disk to a VM

This optional step attempts to attach the disk to an existing VM. It first checks whether the VM exists and then proceeds accordingly.

```azurecli
export MY_VM="NonExistentVM"
if az vm show -g $MY_RG -n $MY_VM --query "name" --output tsv >/dev/null 2>&1; then
  az vm disk attach --vm-name $MY_VM --name $MY_DISK --resource-group $MY_RG
else
  echo "VM $MY_VM not found. Skipping disk attachment."
fi
```

Results:

```text
VM NonExistentVM not found. Skipping disk attachment.
```

### Create a new disk from an existing disk or snapshot with performance plus enabled

This series of steps creates a separate resource group and then creates a new disk from an existing disk or snapshot. The example below takes a snapshot of the disk created earlier and uses its resource ID as the source; the source must belong to the same region (WestUS2) as the new disk.

#### Create a resource group for migration

```azurecli
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export MY_MIG_RG="PerfPlusMigrRG$RANDOM_SUFFIX"
export REGION="WestUS2"
az group create -g $MY_MIG_RG -l $REGION
```

Results:

```JSON
{
  "id": "/subscriptions/xxxxx/resourceGroups/PerfPlusMigrRGxxx",
  "location": "WestUS2",
  "name": "PerfPlusMigrRGxxx",
  "properties": {
    "provisioningState": "Succeeded"
  }
}
```

#### Create the disk from an existing snapshot or disk

```azurecli
# Create a snapshot from the original disk
export MY_SNAPSHOT_NAME="PerfPlusSnapshot$RANDOM_SUFFIX"
echo "Creating snapshot from original disk..."
+az snapshot create \ + --name $MY_SNAPSHOT_NAME \ + --resource-group $MY_RG \ + --source $MY_DISK + +# Get the snapshot ID for use as source +SNAPSHOT_ID=$(az snapshot show \ + --name $MY_SNAPSHOT_NAME \ + --resource-group $MY_RG \ + --query id \ + --output tsv) + +echo "Using snapshot ID: $SNAPSHOT_ID" + +# Create the new disk using the snapshot as source +export MY_MIG_DISK="PerfPlusMigrDisk$RANDOM_SUFFIX" +export SKU="Premium_LRS" +export DISK_SIZE=513 + +az disk create \ + --name $MY_MIG_DISK \ + --resource-group $MY_MIG_RG \ + --size-gb $DISK_SIZE \ + --performance-plus true \ + --sku $SKU \ + --source $SNAPSHOT_ID \ + --location $REGION +``` + +Results: + + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/PerfPlusMigrRGxxx/providers/Microsoft.Compute/disks/PerfPlusMigrDiskxxx", + "location": "WestUS2", + "name": "PerfPlusMigrDiskxxx", + "properties": { + "provisioningState": "Succeeded", + "diskSizeGb": 513, + "sku": "Premium_LRS", + "performancePlus": true, + "source": "https://examplestorageaccount.blob.core.windows.net/snapshots/sample-westus2.vhd" + }, + "type": "Microsoft.Compute/disks" +} +``` + +# [Azure PowerShell](#tab/azure-powershell) + +You need to create a new disk to use performance plus. The following script creates a disk that has performance plus enabled and attach it to a VM: + +```azurepowershell +$myRG=yourResourceGroupName +$myDisk=yourDiskName +$myVM=yourVMName +$region=desiredRegion +# Valid values are Premium_LRS, Premium_ZRS, StandardSSD_LRS, StandardSSD_ZRS, or Standard_LRS +$sku=desiredSKU +#Size must be 513 or larger +$size=513 +$lun=desiredLun + +Set-AzContext -SubscriptionName + +$diskConfig = New-AzDiskConfig -Location $region -CreateOption Empty -DiskSizeGB $size -SkuName $sku -PerformancePlus $true + +$dataDisk = New-AzDisk -ResourceGroupName $myRG -DiskName $myDisk -Disk $diskConfig + +Add-AzVMDataDisk -VMName $myVM -ResourceGroupName $myRG -DiskName $myDisk -Lun $lun -CreateOption Empty -ManagedDiskId $dataDisk.Id +``` + +To migrate data from an existing disk or snapshot to a new disk with performance plus enabled, use the following script: + +```azurepowershell +$myDisk=yourDiskOrSnapshotName +$myVM=yourVMName +$region=desiredRegion +# Valid values are Premium_LRS, Premium_ZRS, StandardSSD_LRS, StandardSSD_ZRS, or Standard_LRS +$sku=desiredSKU +#Size must be 513 or larger +$size=513 +$sourceURI=diskOrSnapshotURI +$lun=desiredLun + +Set-AzContext -SubscriptionName <> + +$diskConfig = New-AzDiskConfig -Location $region -CreateOption Copy -DiskSizeGB $size -SkuName $sku -PerformancePlus $true -SourceResourceID $sourceURI + +$dataDisk = New-AzDisk -ResourceGroupName $myRG -DiskName $myDisk -Disk $diskconfig +Add-AzVMDataDisk -VMName $myVM -ResourceGroupName $myRG -DiskName $myDisk -Lun $lun -CreateOption Empty -ManagedDiskId $dataDisk.Id +``` +--- + +## Next steps + +- [Create an incremental snapshot for managed disks](disks-incremental-snapshots.md) +- [Expand virtual hard disks on a Linux VM](linux/expand-disks.md) +- [How to expand virtual hard disks attached to a Windows virtual machine](windows/expand-os-disk.md) \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/cloud-init.txt b/scenarios/azure-compute-docs/articles/virtual-machines/linux/cloud-init.txt new file mode 100644 index 000000000..6f0566319 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/cloud-init.txt @@ -0,0 +1,41 @@ +#cloud-config +package_upgrade: true +packages: + - nginx + - nodejs + - npm 
+write_files: + - owner: www-data:www-data + path: /etc/nginx/sites-available/default + defer: true + content: | + server { + listen 80; + location / { + proxy_pass http://localhost:3000; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection keep-alive; + proxy_set_header Host $host; + proxy_cache_bypass $http_upgrade; + } + } + - owner: azureuser:azureuser + path: /home/azureuser/myapp/index.js + defer: true + content: | + var express = require('express') + var app = express() + var os = require('os'); + app.get('/', function (req, res) { + res.send('Hello World from host ' + os.hostname() + '!') + }) + app.listen(3000, function () { + console.log('Hello world app listening on port 3000!') + }) +runcmd: + - service nginx restart + - cd "/home/azureuser/myapp" + - npm init + - npm install express -y + - nodejs index.js \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/multiple-nics.md b/scenarios/azure-compute-docs/articles/virtual-machines/linux/multiple-nics.md new file mode 100644 index 000000000..8f02ee1a8 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/multiple-nics.md @@ -0,0 +1,268 @@ +--- +title: Create a Linux VM in Azure with multiple NICs +description: Learn how to create a Linux VM with multiple NICs attached to it using the Azure CLI or Resource Manager templates. +author: mattmcinnes +ms.service: azure-virtual-machines +ms.subservice: networking +ms.topic: how-to +ms.custom: devx-track-azurecli, linux-related-content, innovation-engine +ms.date: 04/06/2023 +ms.author: mattmcinnes +ms.reviewer: cynthn +--- + +# How to create a Linux virtual machine in Azure with multiple network interface cards + +**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets + +This article details how to create a VM with multiple NICs with the Azure CLI. + +## Create supporting resources +Install the latest [Azure CLI](/cli/azure/install-az-cli2) and log in to an Azure account using [az login](/cli/azure/reference-index). + +In the following examples, replace example parameter names with your own values. Example parameter names included *myResourceGroup*, *mystorageaccount*, and *myVM*. + +First, create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *eastus* location. In these examples, we declare environment variables as they are used and add a random suffix to unique resource names. + +```azurecli +export RANDOM_SUFFIX=$(openssl rand -hex 3) +export MY_RESOURCE_GROUP_NAME="myResourceGroup$RANDOM_SUFFIX" +export REGION="WestUS2" +az group create --name $MY_RESOURCE_GROUP_NAME --location $REGION +``` + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx", + "location": "WestUS2", + "managedBy": null, + "name": "myResourceGroupxxx", + "properties": { + "provisioningState": "Succeeded" + }, + "tags": null, + "type": "Microsoft.Resources/resourceGroups" +} +``` + +Create the virtual network with [az network vnet create](/cli/azure/network/vnet). 
The following example creates a virtual network named *myVnet* and subnet named *mySubnetFrontEnd*: + +```azurecli +export VNET_NAME="myVnet" +export FRONTEND_SUBNET="mySubnetFrontEnd" +az network vnet create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $VNET_NAME \ + --address-prefix 10.0.0.0/16 \ + --subnet-name $FRONTEND_SUBNET \ + --subnet-prefix 10.0.1.0/24 +``` + +Create a subnet for the back-end traffic with [az network vnet subnet create](/cli/azure/network/vnet/subnet). The following example creates a subnet named *mySubnetBackEnd*: + +```azurecli +export BACKEND_SUBNET="mySubnetBackEnd" +az network vnet subnet create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --vnet-name $VNET_NAME \ + --name $BACKEND_SUBNET \ + --address-prefix 10.0.2.0/24 +``` + +Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *myNetworkSecurityGroup*: + +```azurecli +export NSG_NAME="myNetworkSecurityGroup" +az network nsg create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $NSG_NAME +``` + +## Create and configure multiple NICs +Create two NICs with [az network nic create](/cli/azure/network/nic). The following example creates two NICs, named *myNic1* and *myNic2*, connected to the network security group, with one NIC connecting to each subnet: + +```azurecli +export NIC1="myNic1" +export NIC2="myNic2" +az network nic create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $NIC1 \ + --vnet-name $VNET_NAME \ + --subnet $FRONTEND_SUBNET \ + --network-security-group $NSG_NAME +az network nic create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $NIC2 \ + --vnet-name $VNET_NAME \ + --subnet $BACKEND_SUBNET \ + --network-security-group $NSG_NAME +``` + +## Create a VM and attach the NICs +When you create the VM, specify the NICs you created with --nics. You also need to take care when you select the VM size. There are limits for the total number of NICs that you can add to a VM. Read more about [Linux VM sizes](../sizes.md). + +Create a VM with [az vm create](/cli/azure/vm). The following example creates a VM named *myVM*: + +```azurecli +export VM_NAME="myVM" +az vm create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $VM_NAME \ + --image Ubuntu2204 \ + --size Standard_DS3_v2 \ + --admin-username azureuser \ + --generate-ssh-keys \ + --nics $NIC1 $NIC2 +``` + +Add routing tables to the guest OS by completing the steps in [Configure the guest OS for multiple NICs](#configure-guest-os-for-multiple-nics). + +## Add a NIC to a VM +The previous steps created a VM with multiple NICs. You can also add NICs to an existing VM with the Azure CLI. Different [VM sizes](../sizes.md) support a varying number of NICs, so size your VM accordingly. If needed, you can [resize a VM](../resize-vm.md). + +Create another NIC with [az network nic create](/cli/azure/network/nic). The following example creates a NIC named *myNic3* connected to the back-end subnet and network security group created in the previous steps: + +```azurecli +export NIC3="myNic3" +az network nic create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $NIC3 \ + --vnet-name $VNET_NAME \ + --subnet $BACKEND_SUBNET \ + --network-security-group $NSG_NAME +``` + +To add a NIC to an existing VM, first deallocate the VM with [az vm deallocate](/cli/azure/vm). 
The following example deallocates the VM named *myVM*:
+
+```azurecli
+az vm deallocate --resource-group $MY_RESOURCE_GROUP_NAME --name $VM_NAME
+```
+
+Add the NIC with [az vm nic add](/cli/azure/vm/nic). The following example adds *myNic3* to *myVM*:
+
+```azurecli
+az vm nic add \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --vm-name $VM_NAME \
+    --nics $NIC3
+```
+
+Start the VM with [az vm start](/cli/azure/vm):
+
+```azurecli
+az vm start --resource-group $MY_RESOURCE_GROUP_NAME --name $VM_NAME
+```
+
+Add routing tables to the guest OS by completing the steps in [Configure the guest OS for multiple NICs](#configure-guest-os-for-multiple-nics).
+
+## Remove a NIC from a VM
+To remove a NIC from an existing VM, first deallocate the VM with [az vm deallocate](/cli/azure/vm). The following example deallocates the VM named *myVM*:
+
+```azurecli
+az vm deallocate --resource-group $MY_RESOURCE_GROUP_NAME --name $VM_NAME
+```
+
+Remove the NIC with [az vm nic remove](/cli/azure/vm/nic). The following example removes *myNic3* from *myVM*:
+
+```azurecli
+az vm nic remove \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --vm-name $VM_NAME \
+    --nics $NIC3
+```
+
+Start the VM with [az vm start](/cli/azure/vm):
+
+```azurecli
+az vm start --resource-group $MY_RESOURCE_GROUP_NAME --name $VM_NAME
+```
+
+## Create multiple NICs using Resource Manager templates
+Azure Resource Manager templates use declarative JSON files to define your environment. You can read an [overview of Azure Resource Manager](/azure/azure-resource-manager/management/overview). Resource Manager templates provide a way to create multiple instances of a resource during deployment, such as creating multiple NICs. You use *copy* to specify the number of instances to create:
+
+```json
+"copy": {
+    "name": "multiplenics",
+    "count": "[parameters('count')]"
+}
+```
+
+Read more about [creating multiple instances using *copy*](/azure/azure-resource-manager/templates/copy-resources).
+
+You can also use copyIndex() to append a number to a resource name, which allows you to create myNic1, myNic2, and so on. The following shows an example of appending the index value:
+
+```json
+"name": "[concat('myNic', copyIndex())]",
+```
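+
+Putting the two snippets together, a trimmed NIC resource with a copy loop might look like the following sketch. This is illustrative only; the `apiVersion` and the `subnetId` parameter are assumptions, not part of the original template:
+
+```json
+{
+  "type": "Microsoft.Network/networkInterfaces",
+  "apiVersion": "2023-04-01",
+  "name": "[concat('myNic', copyIndex())]",
+  "location": "[resourceGroup().location]",
+  "copy": {
+    "name": "multiplenics",
+    "count": "[parameters('count')]"
+  },
+  "properties": {
+    "ipConfigurations": [
+      {
+        "name": "ipconfig1",
+        "properties": {
+          "subnet": { "id": "[parameters('subnetId')]" },
+          "privateIPAllocationMethod": "Dynamic"
+        }
+      }
+    ]
+  }
+}
+```
+
+You can read a complete example of [creating multiple NICs using Resource Manager templates](/azure/virtual-network/template-samples).
+
+Add routing tables to the guest OS by completing the steps in [Configure the guest OS for multiple NICs](#configure-guest-os-for-multiple-nics).
+
+## Configure guest OS for multiple NICs
+
+The previous steps created a virtual network and subnet, attached NICs, then created a VM. A public IP address and network security group rules that allow SSH traffic were not created. To configure the guest OS for multiple NICs, you need to allow remote connections and run commands locally on the VM.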
+
+To allow SSH traffic, create a network security group rule with [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) as follows:
+
+```azurecli
+az network nsg rule create \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --nsg-name $NSG_NAME \
+    --name allow_ssh \
+    --priority 101 \
+    --destination-port-ranges 22
+```
+
+Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) and assign it to the first NIC with [az network nic ip-config update](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-update):
+
+```azurecli
+export PUBLIC_IP_NAME="myPublicIP"
+az network public-ip create --resource-group $MY_RESOURCE_GROUP_NAME --name $PUBLIC_IP_NAME
+
+az network nic ip-config update \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --nic-name $NIC1 \
+    --name ipconfig1 \
+    --public-ip $PUBLIC_IP_NAME
+```
+
+To view the public IP address of the VM, use [az vm show](/cli/azure/vm#az-vm-show) as follows:
+
+```azurecli
+az vm show --resource-group $MY_RESOURCE_GROUP_NAME --name $VM_NAME -d --query publicIps -o tsv
+```
+
+```TEXT
+x.x.x.x
+```
+
+Now SSH to the public IP address of your VM. The default username provided in a previous step was *azureuser*. Provide your own username and public IP address:
+
+```bash
+export IP_ADDRESS=$(az vm show --resource-group $MY_RESOURCE_GROUP_NAME --name $VM_NAME -d --query publicIps -o tsv)
+ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS
+```
+
+To send traffic to or from a secondary network interface, you have to manually add persistent routes to the operating system for each secondary network interface. In this article, *eth1* is the secondary interface. Instructions for adding persistent routes to the operating system vary by distro; see the documentation for your distro. An example for Ubuntu follows the route-table discussion below.
+
+When adding the route to the operating system, the gateway address is the first address of the subnet the network interface is in. For example, if the subnet has been assigned the range 10.0.2.0/24, the gateway you specify for the route is 10.0.2.1; if the subnet has been assigned the range 10.0.2.128/25, the gateway you specify for the route is 10.0.2.129. You can define a specific network for the route's destination, or specify a destination of 0.0.0.0 if you want all traffic for the interface to go through the specified gateway. The gateway for each subnet is managed by the virtual network.
+
+Once you've added the route for a secondary interface, verify that the route is in your route table with `route -n`. The following example output is for the route table that has the two network interfaces added to the VM in this article:
+
+```output
+Kernel IP routing table
+Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
+0.0.0.0         10.0.1.1        0.0.0.0         UG    0      0        0 eth0
+0.0.0.0         10.0.2.1        0.0.0.0         UG    0      0        0 eth1
+10.0.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
+10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
+168.63.129.16   10.0.1.1        255.255.255.255 UGH   0      0        0 eth0
+169.254.169.254 10.0.1.1        255.255.255.255 UGH   0      0        0 eth0
+```
+
+Confirm that the route you added persists across reboots by checking your route table again after a reboot. To test connectivity, you can enter the following command, for example, where *eth1* is the name of a secondary network interface: `ping bing.com -c 4 -I eth1`
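+
+For Ubuntu VMs that use netplan, one way to persist a route for *eth1* is a drop-in configuration file. This is a minimal sketch only; the file name, gateway (the back-end subnet's first address from this article), and metric are illustrative, and your distro's documentation remains the authoritative reference:
+
+```yaml
+# /etc/netplan/60-eth1-routes.yaml (hypothetical file name)
+network:
+  version: 2
+  ethernets:
+    eth1:
+      dhcp4: true
+      dhcp4-overrides:
+        use-routes: false   # ignore DHCP-supplied routes for eth1
+      routes:
+        - to: 0.0.0.0/0
+          via: 10.0.2.1     # first address of the back-end subnet
+          metric: 200       # keep eth0 as the preferred default route
+```
+
+Apply it with `sudo netplan apply`, then verify with `route -n` as shown above.
+
+## Next steps
+Review [Linux VM sizes](../sizes.md) when trying to create a VM with multiple NICs. Pay attention to the maximum number of NICs each VM size supports.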
+ +To further secure your VMs, use just in time VM access. This feature opens network security group rules for SSH traffic when needed, and for a defined period of time. For more information, see [Manage virtual machine access using just in time](/azure/security-center/security-center-just-in-time). \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/main.tf b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/main.tf new file mode 100644 index 000000000..9482a95fa --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/main.tf @@ -0,0 +1,124 @@ +resource "random_pet" "rg_name" { + prefix = var.resource_group_name_prefix +} + +resource "azurerm_resource_group" "rg" { + location = var.resource_group_location + name = random_pet.rg_name.id +} + +# Create virtual network +resource "azurerm_virtual_network" "my_terraform_network" { + name = "myVnet" + address_space = ["10.0.0.0/16"] + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name +} + +# Create subnet +resource "azurerm_subnet" "my_terraform_subnet" { + name = "mySubnet" + resource_group_name = azurerm_resource_group.rg.name + virtual_network_name = azurerm_virtual_network.my_terraform_network.name + address_prefixes = ["10.0.1.0/24"] +} + +# Create public IPs +resource "azurerm_public_ip" "my_terraform_public_ip" { + name = "myPublicIP" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + allocation_method = "Dynamic" +} + +# Create Network Security Group and rule +resource "azurerm_network_security_group" "my_terraform_nsg" { + name = "myNetworkSecurityGroup" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + + security_rule { + name = "SSH" + priority = 1001 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "22" + source_address_prefix = "*" + destination_address_prefix = "*" + } +} + +# Create network interface +resource "azurerm_network_interface" "my_terraform_nic" { + name = "myNIC" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + + ip_configuration { + name = "my_nic_configuration" + subnet_id = azurerm_subnet.my_terraform_subnet.id + private_ip_address_allocation = "Dynamic" + public_ip_address_id = azurerm_public_ip.my_terraform_public_ip.id + } +} + +# Connect the security group to the network interface +resource "azurerm_network_interface_security_group_association" "example" { + network_interface_id = azurerm_network_interface.my_terraform_nic.id + network_security_group_id = azurerm_network_security_group.my_terraform_nsg.id +} + +# Generate random text for a unique storage account name +resource "random_id" "random_id" { + keepers = { + # Generate a new ID only when a new resource group is defined + resource_group = azurerm_resource_group.rg.name + } + + byte_length = 8 +} + +# Create storage account for boot diagnostics +resource "azurerm_storage_account" "my_storage_account" { + name = "diag${random_id.random_id.hex}" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + account_tier = "Standard" + account_replication_type = "LRS" +} + +# Create virtual machine +resource "azurerm_linux_virtual_machine" "my_terraform_vm" { + name = "myVM" + location 
= azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + network_interface_ids = [azurerm_network_interface.my_terraform_nic.id] + size = "Standard_DS1_v2" + + os_disk { + name = "myOsDisk" + caching = "ReadWrite" + storage_account_type = "Premium_LRS" + } + + source_image_reference { + publisher = "Canonical" + offer = "0001-com-ubuntu-server-jammy" + sku = "22_04-lts-gen2" + version = "latest" + } + + computer_name = "hostname" + admin_username = var.username + + admin_ssh_key { + username = var.username + public_key = azapi_resource_action.ssh_public_key_gen.output.publicKey + } + + boot_diagnostics { + storage_account_uri = azurerm_storage_account.my_storage_account.primary_blob_endpoint + } +} \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/outputs.tf b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/outputs.tf new file mode 100644 index 000000000..f7d0c3184 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/outputs.tf @@ -0,0 +1,7 @@ +output "resource_group_name" { + value = azurerm_resource_group.rg.name +} + +output "public_ip_address" { + value = azurerm_linux_virtual_machine.my_terraform_vm.public_ip_address +} \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/providers.tf b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/providers.tf new file mode 100644 index 000000000..158b40408 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/providers.tf @@ -0,0 +1,22 @@ +terraform { + required_version = ">=0.12" + + required_providers { + azapi = { + source = "azure/azapi" + version = "~>1.5" + } + azurerm = { + source = "hashicorp/azurerm" + version = "~>3.0" + } + random = { + source = "hashicorp/random" + version = "~>3.0" + } + } +} + +provider "azurerm" { + features {} +} \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/quick-create-terraform.md b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/quick-create-terraform.md new file mode 100644 index 000000000..d6e92dc62 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/quick-create-terraform.md @@ -0,0 +1,367 @@ +--- +title: 'Quickstart: Use Terraform to create a Linux VM' +description: In this quickstart, you learn how to use Terraform to create a Linux virtual machine +author: tomarchermsft +ms.service: azure-virtual-machines +ms.collection: linux +ms.topic: quickstart +ms.date: 07/24/2023 +ms.author: tarcher +ms.custom: devx-track-terraform, linux-related-content, innovation-engine +ai-usage: ai-assisted +--- + +# Quickstart: Use Terraform to create a Linux VM + +**Applies to:** :heavy_check_mark: Linux VMs + +Article tested with the following Terraform and Terraform provider versions: + +This article shows you how to create a complete Linux environment and supporting resources with Terraform. Those resources include a virtual network, subnet, public IP address, and more. 
+ +[!INCLUDE [Terraform abstract](~/azure-dev-docs-pr/articles/terraform/includes/abstract.md)] + +In this article, you learn how to: +> [!div class="checklist"] +> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet). +> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group). +> * Create a virtual network (VNET) using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network). +> * Create a subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet). +> * Create a public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip). +> * Create a network security group using [azurerm_network_security_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_security_group). +> * Create a network interface using [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface). +> * Create an association between the network security group and the network interface using [azurerm_network_interface_security_group_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_security_group_association). +> * Generate a random value for a unique storage account name using [random_id](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id). +> * Create a storage account for boot diagnostics using [azurerm_storage_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account). +> * Create a Linux VM using [azurerm_linux_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine). +> * Create an AzAPI resource using [azapi_resource](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource). +> * Create an AzAPI resource to generate an SSH key pair using [azapi_resource_action](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource_action). + +## Prerequisites + +- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure) + +## Implement the Terraform code + +> [!NOTE] +> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-with-infrastructure). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-with-infrastructure/TestRecord.md). +> +> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform) + +1. Create a directory in which to test the sample Terraform code and make it the current directory. + +1. Create a file named providers.tf and insert the following code: + +```text +terraform { + required_version = ">=0.12" + + required_providers { + azapi = { + source = "azure/azapi" + version = "~>1.5" + } + azurerm = { + source = "hashicorp/azurerm" + version = "~>3.0" + } + random = { + source = "hashicorp/random" + version = "~>3.0" + } + } +} + +provider "azurerm" { + features {} +} +``` + +1. 
Create a file named ssh.tf and insert the following code: + +```text +resource "random_pet" "ssh_key_name" { + prefix = "ssh" + separator = "" +} + +resource "azapi_resource_action" "ssh_public_key_gen" { + type = "Microsoft.Compute/sshPublicKeys@2022-11-01" + resource_id = azapi_resource.ssh_public_key.id + action = "generateKeyPair" + method = "POST" + + response_export_values = ["publicKey", "privateKey"] +} + +resource "azapi_resource" "ssh_public_key" { + type = "Microsoft.Compute/sshPublicKeys@2022-11-01" + name = random_pet.ssh_key_name.id + location = azurerm_resource_group.rg.location + parent_id = azurerm_resource_group.rg.id +} + +output "key_data" { + value = azapi_resource_action.ssh_public_key_gen.output.publicKey +} +``` + +1. Create a file named main.tf and insert the following code: + +```text +resource "random_pet" "rg_name" { + prefix = var.resource_group_name_prefix +} + +resource "azurerm_resource_group" "rg" { + location = var.resource_group_location + name = random_pet.rg_name.id +} + +# Create virtual network +resource "azurerm_virtual_network" "my_terraform_network" { + name = "myVnet" + address_space = ["10.0.0.0/16"] + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name +} + +# Create subnet +resource "azurerm_subnet" "my_terraform_subnet" { + name = "mySubnet" + resource_group_name = azurerm_resource_group.rg.name + virtual_network_name = azurerm_virtual_network.my_terraform_network.name + address_prefixes = ["10.0.1.0/24"] +} + +# Create public IPs +resource "azurerm_public_ip" "my_terraform_public_ip" { + name = "myPublicIP" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + allocation_method = "Dynamic" +} + +# Create Network Security Group and rule +resource "azurerm_network_security_group" "my_terraform_nsg" { + name = "myNetworkSecurityGroup" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + + security_rule { + name = "SSH" + priority = 1001 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "22" + source_address_prefix = "*" + destination_address_prefix = "*" + } +} + +# Create network interface +resource "azurerm_network_interface" "my_terraform_nic" { + name = "myNIC" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + + ip_configuration { + name = "my_nic_configuration" + subnet_id = azurerm_subnet.my_terraform_subnet.id + private_ip_address_allocation = "Dynamic" + public_ip_address_id = azurerm_public_ip.my_terraform_public_ip.id + } +} + +# Connect the security group to the network interface +resource "azurerm_network_interface_security_group_association" "example" { + network_interface_id = azurerm_network_interface.my_terraform_nic.id + network_security_group_id = azurerm_network_security_group.my_terraform_nsg.id +} + +# Generate random text for a unique storage account name +resource "random_id" "random_id" { + keepers = { + # Generate a new ID only when a new resource group is defined + resource_group = azurerm_resource_group.rg.name + } + + byte_length = 8 +} + +# Create storage account for boot diagnostics +resource "azurerm_storage_account" "my_storage_account" { + name = "diag${random_id.random_id.hex}" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + account_tier = "Standard" + account_replication_type = "LRS" +} + 
+# Create virtual machine +resource "azurerm_linux_virtual_machine" "my_terraform_vm" { + name = "myVM" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + network_interface_ids = [azurerm_network_interface.my_terraform_nic.id] + size = "Standard_DS1_v2" + + os_disk { + name = "myOsDisk" + caching = "ReadWrite" + storage_account_type = "Premium_LRS" + } + + source_image_reference { + publisher = "Canonical" + offer = "0001-com-ubuntu-server-jammy" + sku = "22_04-lts-gen2" + version = "latest" + } + + computer_name = "hostname" + admin_username = var.username + + admin_ssh_key { + username = var.username + public_key = azapi_resource_action.ssh_public_key_gen.output.publicKey + } + + boot_diagnostics { + storage_account_uri = azurerm_storage_account.my_storage_account.primary_blob_endpoint + } +} +``` + +1. Create a file named variables.tf and insert the following code: + +```text +variable "resource_group_location" { + type = string + default = "eastus2" + description = "Location of the resource group." +} + +variable "resource_group_name_prefix" { + type = string + default = "rg" + description = "Prefix of the resource group name that's combined with a random ID so name is unique in your Azure subscription." +} + +variable "username" { + type = string + description = "The username for the local account that will be created on the new VM." + default = "azureadmin" +} +``` + +1. Create a file named outputs.tf and insert the following code: + +```text +output "resource_group_name" { + value = azurerm_resource_group.rg.name +} + +output "public_ip_address" { + value = azurerm_linux_virtual_machine.my_terraform_vm.public_ip_address +} +``` + +## Initialize Terraform + +In this section, Terraform is initialized; this command downloads the Azure provider required to manage your Azure resources. Before running the command, ensure you are in the directory where you created the Terraform files. You can set any necessary environment variables here. + +```bash +# Set your preferred Azure region (defaults to eastus2 if not specified) +export TF_VAR_resource_group_location="eastus2" +export TERRAFORM_DIR=$(pwd) +terraform init -upgrade +``` + +Key points: + +- The -upgrade parameter upgrades the necessary provider plugins to the newest version that complies with the configuration's version constraints. + +## Create a Terraform execution plan + +This step creates an execution plan but does not execute it. It shows what actions are necessary to create the configuration specified in your files. + +```bash +terraform plan -out main.tfplan +``` + +Key points: + +- The terraform plan command creates an execution plan, allowing you to verify whether it matches your expectations before applying any changes. +- The optional -out parameter writes the plan to a file so that the exact plan can be applied later. + +## Apply a Terraform execution plan + +Apply the previously created execution plan to deploy the infrastructure to your cloud. + +```bash +terraform apply main.tfplan +``` + +Key points: + +- This command applies the plan created with terraform plan -out main.tfplan. +- If you used a different filename for the -out parameter, use that same filename with terraform apply. +- If the -out parameter wasn’t used, run terraform apply without any parameters. + +Cost information isn't presented during the virtual machine creation process for Terraform like it is for the [Azure portal](quick-create-portal.md). 
If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md). + +## Verify the results + +#### [Azure CLI](#tab/azure-cli) + +1. Get the Azure resource group name. + +```bash +export RESOURCE_GROUP_NAME=$(terraform output -raw resource_group_name) +``` + +1. Run az vm list with a JMESPath query to display the names of the virtual machines created in the resource group. + +```azurecli +az vm list \ + --resource-group $RESOURCE_GROUP_NAME \ + --query "[].{\"VM Name\":name}" -o table +``` + +Results: + + + +```console +VM Name +----------- +myVM +``` + +#### [Azure PowerShell](#tab/azure-powershell) + +1. Get the Azure resource group name. + +```console +$resource_group_name=$(terraform output -raw resource_group_name) +``` + +1. Run Get-AzVm to display the names of all the virtual machines in the resource group. + +```azurepowershell +Get-AzVm -ResourceGroupName $resource_group_name +``` + +## Troubleshoot Terraform on Azure + +[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot) + +## Next steps + +In this quickstart, you deployed a simple virtual machine using Terraform. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs. + +> [!div class="nextstepaction"] +> [Azure Linux virtual machine tutorials](./tutorial-manage-vm.md) \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/ssh.tf b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/ssh.tf new file mode 100644 index 000000000..11de7c0a4 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/ssh.tf @@ -0,0 +1,25 @@ +resource "random_pet" "ssh_key_name" { + prefix = "ssh" + separator = "" +} + +resource "azapi_resource_action" "ssh_public_key_gen" { + type = "Microsoft.Compute/sshPublicKeys@2022-11-01" + resource_id = azapi_resource.ssh_public_key.id + action = "generateKeyPair" + method = "POST" + + response_export_values = ["publicKey", "privateKey"] +} + +resource "azapi_resource" "ssh_public_key" { + type = "Microsoft.Compute/sshPublicKeys@2022-11-01" + name = random_pet.ssh_key_name.id + location = azurerm_resource_group.rg.location + parent_id = azurerm_resource_group.rg.id +} + +output "key_data" { + value = azapi_resource_action.ssh_public_key_gen.output.publicKey + sensitive = true +} \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/variables.tf b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/variables.tf new file mode 100644 index 000000000..37a12b1f4 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/variables.tf @@ -0,0 +1,17 @@ +variable "resource_group_location" { + type = string + default = "eastus2" + description = "Location of the resource group." +} + +variable "resource_group_name_prefix" { + type = string + default = "rg" + description = "Prefix of the resource group name that's combined with a random ID so name is unique in your Azure subscription." +} + +variable "username" { + type = string + description = "The username for the local account that will be created on the new VM." 
+  default     = "azureadmin"
+}
\ No newline at end of file
diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
new file mode 100644
index 000000000..5b46a9fd8
--- /dev/null
+++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
@@ -0,0 +1,193 @@
+---
+title: Tutorial - Customize a Linux VM with cloud-init in Azure
+description: In this tutorial, you learn how to use cloud-init and Key Vault to customize Linux VMs the first time they boot in Azure
+author: ju-shim
+ms.service: azure-virtual-machines
+ms.collection: linux
+ms.topic: tutorial
+ms.date: 10/18/2023
+ms.author: jushiman
+ms.reviewer: mattmcinnes
+ms.custom: mvc, devx-track-azurecli, linux-related-content, innovation-engine
+---
+
+# Tutorial - How to use cloud-init to customize a Linux virtual machine in Azure on first boot
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+In a previous tutorial, you learned how to SSH to a virtual machine (VM) and manually install NGINX. To create VMs in a quick and consistent manner, some form of automation is typically desired. A common approach to customize a VM on first boot is to use [cloud-init](https://cloudinit.readthedocs.io). In this tutorial you learn how to:
+
+> [!div class="checklist"]
+> * Create a cloud-init config file
+> * Create a VM that uses a cloud-init file
+> * View a running Node.js app after the VM is created
+> * Use Key Vault to securely store certificates
+> * Automate secure deployments of NGINX with cloud-init
+
+If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+## Cloud-init overview
+
+[Cloud-init](https://cloudinit.readthedocs.io) is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to install packages and write files, or to configure users and security. Because cloud-init runs during the initial boot process, no additional steps or agents are required to apply your configuration.
+
+Cloud-init also works across distributions. For example, you don't use **apt-get install** or **yum install** to install a package. Instead, you define a list of packages to install, and cloud-init automatically uses the native package management tool for the distro you select.
+
+We are working with our partners to get cloud-init included and working in the images that they provide to Azure. For detailed information about cloud-init support for each distribution, see [Cloud-init support for VMs in Azure](using-cloud-init.md).
+
+## Create cloud-init config file
+
+To see cloud-init in action, create a VM that installs NGINX and runs a simple 'Hello World' Node.js app. The following cloud-init configuration installs the required packages, creates a Node.js app, then initializes and starts the app.
+
+At your bash prompt or in the Cloud Shell, create a file named *cloud-init.txt* and paste the following configuration. For example, type `sensible-editor cloud-init.txt` to create the file and see a list of available editors.
Make sure that the whole cloud-init file is copied correctly, especially the first line: + +```yaml +#cloud-config +package_upgrade: true +packages: + - nginx + - nodejs + - npm +write_files: + - owner: www-data:www-data + path: /etc/nginx/sites-available/default + defer: true + content: | + server { + listen 80; + location / { + proxy_pass http://localhost:3000; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection keep-alive; + proxy_set_header Host $host; + proxy_cache_bypass $http_upgrade; + } + } + - owner: azureuser:azureuser + path: /home/azureuser/myapp/index.js + defer: true + content: | + var express = require('express') + var app = express() + var os = require('os'); + app.get('/', function (req, res) { + res.send('Hello World from host ' + os.hostname() + '!') + }) + app.listen(3000, function () { + console.log('Hello world app listening on port 3000!') + }) +runcmd: + - service nginx restart + - cd "/home/azureuser/myapp" + - npm init + - npm install express -y + - nodejs index.js +``` + +For more information about cloud-init configuration options, see [cloud-init config examples](https://cloudinit.readthedocs.io/en/latest/topics/examples.html). + +## Create virtual machine + +Before you can create a VM, create a resource group with [az group create](/cli/azure/group#az-group-create). The following example creates a resource group. In these commands, a random suffix is appended to the resource group and VM names to prevent name collisions during repeated deployments. + +```bash +export RANDOM_SUFFIX=$(openssl rand -hex 3) +export RESOURCE_GROUP="myResourceGroupAutomate$RANDOM_SUFFIX" +export REGION="eastus2" +az group create --name $RESOURCE_GROUP --location $REGION +``` + +Results: + + +```JSON +{ + "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myResourceGroupAutomatexxx", + "location": "eastus", + "managedBy": null, + "name": "myResourceGroupAutomatexxx", + "properties": { + "provisioningState": "Succeeded" + }, + "tags": null, + "type": "Microsoft.Resources/resourceGroups" +} +``` + +Now create a VM with [az vm create](/cli/azure/vm#az-vm-create). Use the `--custom-data` parameter to pass in your cloud-init config file. Provide the full path to the *cloud-init.txt* config if you saved the file outside of your present working directory. The following example creates a VM; note that the VM name is also appended with the random suffix. + +```bash +export VM_NAME="myAutomatedVM$RANDOM_SUFFIX" +az vm create \ + --resource-group $RESOURCE_GROUP \ + --name $VM_NAME \ + --image Ubuntu2204 \ + --admin-username azureuser \ + --generate-ssh-keys \ + --custom-data cloud-init.txt +``` + +Results: + + +```JSON +{ + "fqdns": "", + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupAutomatexxx/providers/Microsoft.Compute/virtualMachines/myAutomatedVMxxx", + "location": "eastus", + "name": "myAutomatedVMxxx", + "powerState": "VM running", + "publicIpAddress": "x.x.x.x", + "resourceGroup": "myResourceGroupAutomatexxx", + "zones": "" +} +``` + +It takes a few minutes for the VM to be created, the packages to install, and the app to start. There are background tasks that continue to run after the Azure CLI returns you to the prompt. It may be another couple of minutes before you can access the app. When the VM has been created, take note of the `publicIpAddress` displayed by the Azure CLI. This address is used to access the Node.js app via a web browser. 
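+
+It takes a few minutes for cloud-init to finish. If you want to watch it complete before you continue, the following optional check waits until cloud-init reports that it's done. Replace `<publicIpAddress>` with the address from the previous output:
+
+```bash
+# Wait until cloud-init finishes applying the configuration (optional)
+ssh -o StrictHostKeyChecking=no azureuser@<publicIpAddress> "cloud-init status --wait"
+```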
+
+To allow web traffic to reach your VM, open port 80 from the Internet with [az vm open-port](/cli/azure/vm#az-vm-open-port):
+
+```bash
+az vm open-port --port 80 --resource-group $RESOURCE_GROUP --name $VM_NAME
+```
+
+Results:
+
+
+```JSON
+{
+  "endpoints": [
+    {
+      "name": "80",
+      "protocol": "tcp",
+      "publicPort": 80,
+      "privatePort": 80
+    }
+  ],
+  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupAutomatexxx/providers/Microsoft.Compute/virtualMachines/myAutomatedVMxxx",
+  "location": "eastus",
+  "name": "myAutomatedVMxxx"
+}
+```
+
+## Test web app
+
+Now you can open a web browser and enter `http://<publicIpAddress>` in the address bar. Provide your own public IP address from the VM create process. Your Node.js app is displayed as shown in the following example:
+
+![View running NGINX site](./media/tutorial-automate-vm-deployment/nginx.png)
+
+## Next steps
+
+In this tutorial, you configured VMs on first boot with cloud-init. You learned how to:
+
+> [!div class="checklist"]
+> * Create a cloud-init config file
+> * Create a VM that uses a cloud-init file
+> * View a running Node.js app after the VM is created
+> * Use Key Vault to securely store certificates
+> * Automate secure deployments of NGINX with cloud-init
+
+Advance to the next tutorial to learn how to create custom VM images.
+
+> [!div class="nextstepaction"]
+> [Create custom VM images](./tutorial-custom-images.md)
\ No newline at end of file
diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md
new file mode 100644
index 000000000..1bcd70639
--- /dev/null
+++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md
@@ -0,0 +1,304 @@
+---
+title: Deploy ElasticSearch on a development virtual machine in Azure
+description: Install the Elastic Stack (ELK) onto a development Linux VM in Azure
+services: virtual-machines
+author: rloutlaw
+manager: justhe
+ms.service: azure-virtual-machines
+ms.collection: linux
+ms.devlang: azurecli
+ms.custom: devx-track-azurecli, linux-related-content, innovation-engine
+ms.topic: how-to
+ms.date: 10/11/2017
+ms.author: routlaw
+---
+
+# Install the Elastic Stack (ELK) on an Azure VM
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+This article walks you through how to deploy [Elasticsearch](https://www.elastic.co/products/elasticsearch), [Logstash](https://www.elastic.co/products/logstash), and [Kibana](https://www.elastic.co/products/kibana) on an Ubuntu VM in Azure. To see the Elastic Stack in action, you can optionally connect to Kibana and work with some sample logging data.
+
+Additionally, you can follow the [Deploy Elastic on Azure Virtual Machines](/training/modules/deploy-elastic-azure-virtual-machines/) module for a more guided tutorial on deploying Elastic on Azure Virtual Machines.
+
+In this tutorial you learn how to:
+
+> [!div class="checklist"]
+> * Create an Ubuntu VM in an Azure resource group
+> * Install Elasticsearch, Logstash, and Kibana on the VM
+> * Send sample data to Elasticsearch with Logstash
+> * Open ports and work with data in the Kibana console
+
+This deployment is suitable for basic development with the Elastic Stack. For more on the Elastic Stack, including recommendations for a production environment, see the [Elastic documentation](https://www.elastic.co/guide/index.html) and the [Azure Architecture Center](/azure/architecture/elasticsearch/).
+
+[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
+
+- This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
+
+In this section, environment variables are declared for use in subsequent commands. A random suffix is appended to resource names for uniqueness.
+
+```bash
+export RANDOM_SUFFIX=$(openssl rand -hex 3)
+export RESOURCE_GROUP="myResourceGroup$RANDOM_SUFFIX"
+export REGION="eastus2"
+az group create --name $RESOURCE_GROUP --location $REGION
+```
+
+Results:
+
+
+```JSON
+{
+  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxxxxx",
+  "location": "eastus",
+  "managedBy": null,
+  "name": "myResourceGroupxxxxxx",
+  "properties": {
+    "provisioningState": "Succeeded"
+  },
+  "tags": null,
+  "type": "Microsoft.Resources/resourceGroups"
+}
+```
+
+## Create a virtual machine
+
+This section creates a VM with a unique name, while also generating SSH keys if they do not already exist. A random suffix is appended to ensure uniqueness.
+
+```bash
+export VM_NAME="myVM$RANDOM_SUFFIX"
+az vm create \
+    --resource-group $RESOURCE_GROUP \
+    --name $VM_NAME \
+    --image Ubuntu2204 \
+    --admin-username azureuser \
+    --generate-ssh-keys
+```
+
+When the VM has been created, the Azure CLI shows information similar to the following example. Take note of the publicIpAddress. This address is used to access the VM.
+
+Results:
+
+
+```JSON
+{
+  "fqdns": "",
+  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxxxxx/providers/Microsoft.Compute/virtualMachines/myVMxxxxxx",
+  "location": "eastus",
+  "macAddress": "xx:xx:xx:xx:xx:xx",
+  "powerState": "VM running",
+  "privateIpAddress": "10.0.0.4",
+  "publicIpAddress": "x.x.x.x",
+  "resourceGroup": "$RESOURCE_GROUP"
+}
+```
+
+## SSH into your VM
+
+If you don't already know the public IP address of your VM, run the following command to list it:
+
+```azurecli-interactive
+az network public-ip list --resource-group $RESOURCE_GROUP --query [].ipAddress
+```
+
+The following command captures the VM's public IP address in an environment variable. Later commands use it to run the installation steps over SSH:
+
+```bash
+export PUBLIC_IP_ADDRESS=$(az network public-ip list --resource-group $RESOURCE_GROUP --query [].ipAddress -o tsv)
+```
+
+## Install the Elastic Stack
+
+In this section, you import the Elasticsearch signing key and update your APT sources list to include the Elastic package repository. This is followed by installing the Java runtime environment, which the Elastic Stack components require. The repository version below matches the 7.x packages installed later in this article:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
+echo \"deb https://artifacts.elastic.co/packages/7.x/apt stable main\" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
+"
+```
+
+Install the Java Virtual Machine on the VM and configure the JAVA_HOME variable:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+sudo apt update
+sudo apt install -y openjdk-8-jre-headless
+export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+"
+```
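+
+Optionally, confirm the Java runtime is available before installing the stack:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "java -version"
+```
+
+Run the following command to update Ubuntu package sources and install Elasticsearch, Kibana, and Logstash.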
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+  wget -qO elasticsearch.gpg https://artifacts.elastic.co/GPG-KEY-elasticsearch
+  sudo mv elasticsearch.gpg /etc/apt/trusted.gpg.d/
+
+  echo \"deb https://artifacts.elastic.co/packages/7.x/apt stable main\" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
+
+  sudo apt update
+
+  # Now install the ELK stack
+  sudo apt install -y elasticsearch kibana logstash
+"
+```
+
+> [!NOTE]
+> Detailed installation instructions, including directory layouts and initial configuration, are maintained in [Elastic's documentation](https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html).
+
+## Start Elasticsearch
+
+Start Elasticsearch on your VM with the following command:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+sudo systemctl start elasticsearch.service
+"
+```
+
+This command produces no output, so verify that Elasticsearch is running on the VM with this curl command:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+sleep 11
+sudo curl -XGET 'localhost:9200/'
+"
+```
+
+If Elasticsearch is running, you see output like the following (the exact version details depend on the release that was installed):
+
+Results:
+
+
+```json
+{
+  "name" : "w6Z4NwR",
+  "cluster_name" : "elasticsearch",
+  "cluster_uuid" : "SDzCajBoSK2EkXmHvJVaDQ",
+  "version" : {
+    "number" : "5.6.3",
+    "build_hash" : "1a2f265",
+    "build_date" : "2017-10-06T20:33:39.012Z",
+    "build_snapshot" : false,
+    "lucene_version" : "6.6.1"
+  },
+  "tagline" : "You Know, for Search"
+}
+```
+
+## Start Logstash and add data to Elasticsearch
+
+Start Logstash with the following command:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+sudo systemctl start logstash.service
+"
+```
+
+Test Logstash to make sure it's working correctly:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+# Time-limited test with file input instead of stdin
+sudo timeout 11s /usr/share/logstash/bin/logstash -e 'input { file { path => \"/var/log/syslog\" start_position => \"end\" sincedb_path => \"/dev/null\" stat_interval => \"1 second\" } } output { stdout { codec => json } }' || echo \"Logstash test completed\"
+"
+```
+
+This is a basic Logstash [pipeline](https://www.elastic.co/guide/en/logstash/5.6/pipeline.html) that tails the syslog file and echoes new entries to standard output.
+
+Set up Logstash to forward the kernel messages from this VM to Elasticsearch. To create the Logstash configuration file, run the following command, which writes the configuration to a new file called vm-syslog-logstash.conf:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+cat << 'EOF' > vm-syslog-logstash.conf
+input {
+  stdin {
+    type => \"stdin-type\"
+  }
+
+  file {
+    type => \"syslog\"
+    path => [ \"/var/log/*.log\", \"/var/log/*/*.log\", \"/var/log/messages\", \"/var/log/syslog\" ]
+    start_position => \"beginning\"
+  }
+}
+
+output {
+
+  stdout {
+    codec => rubydebug
+  }
+  elasticsearch {
+    hosts => \"localhost:9200\"
+  }
+}
+EOF
+"
+```
+
+Test this configuration and send the syslog data to Elasticsearch:
+
+```bash
+# Run Logstash with the configuration for 60 seconds
+sudo timeout 60s /usr/share/logstash/bin/logstash -f vm-syslog-logstash.conf &
+LOGSTASH_PID=$!
+
+# Wait for data to be processed
+echo "Processing logs for 60 seconds..."
+sleep 65
+
+# Verify data was sent to Elasticsearch with proper error handling
+echo "Verifying data in Elasticsearch..."
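+# NOTE: This verification block is assumed to run on the VM itself (for
+# example, inside an SSH session), since it invokes Logstash locally and
+# queries Elasticsearch at localhost:9200.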
+ES_COUNT=$(sudo curl -s -XGET 'localhost:9200/_cat/count?v' | tail -n 1 | awk '{print $3}' 2>/dev/null || echo "0")
+
+# Make sure ES_COUNT is a number or default to 0
+if ! [[ "$ES_COUNT" =~ ^[0-9]+$ ]]; then
+  ES_COUNT=0
+  echo "Warning: Could not get valid document count from Elasticsearch"
+fi
+
+echo "Found $ES_COUNT documents in Elasticsearch"
+
+if [ "$ES_COUNT" -gt 0 ]; then
+  echo "✅ Logstash successfully sent data to Elasticsearch"
+else
+  echo "❌ No data found in Elasticsearch, there might be an issue with Logstash configuration"
+fi
+```
+
+The syslog entries are echoed to your terminal as they're sent to Elasticsearch. The test run exits on its own after the timeout, so there's no need to stop Logstash manually.
+
+## Start Kibana and visualize the data in Elasticsearch
+
+Edit the Kibana configuration file (/etc/kibana/kibana.yml) and change the IP address Kibana listens on so you can access it from your web browser:
+
+```text
+server.host: "0.0.0.0"
+```
+
+Start Kibana with the following command:
+
+```bash
+ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
+sudo systemctl start kibana.service
+"
+```
+
+Open port 5601 from the Azure CLI to allow remote access to the Kibana console:
+
+```azurecli-interactive
+az vm open-port --port 5601 --resource-group $RESOURCE_GROUP --name $VM_NAME
+```
+
+## Next steps
+
+In this tutorial, you deployed the Elastic Stack into a development VM in Azure. You learned how to:
+
+> [!div class="checklist"]
+> * Create an Ubuntu VM in an Azure resource group
+> * Install Elasticsearch, Logstash, and Kibana on the VM
+> * Send sample data to Elasticsearch from Logstash
+> * Open ports and work with data in the Kibana console
\ No newline at end of file
diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-lamp-stack.md b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-lamp-stack.md
new file mode 100644
index 000000000..a318871e8
--- /dev/null
+++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-lamp-stack.md
@@ -0,0 +1,186 @@
+---
+title: Tutorial - Deploy LAMP and WordPress on a VM
+description: In this tutorial, you learn how to install the LAMP stack, and WordPress, on a Linux virtual machine in Azure.
+author: ju-shim
+ms.collection: linux
+ms.service: azure-virtual-machines
+ms.devlang: azurecli
+ms.custom: linux-related-content, innovation-engine
+ms.topic: tutorial
+ms.date: 4/4/2023
+ms.author: mattmcinnes
+ms.reviewer: cynthn
+#Customer intent: As an IT administrator, I want to learn how to install the LAMP stack so that I can quickly prepare a Linux VM to run web applications.
+---
+
+# Tutorial: Install a LAMP stack on an Azure Linux VM
+
+**Applies to:** :heavy_check_mark: Linux VMs
+
+This article walks you through how to deploy an Apache web server, MySQL, and PHP (the LAMP stack) on an Ubuntu VM in Azure. To see the LAMP server in action, you can optionally install and configure a WordPress site. In this tutorial you learn how to:
+
+> [!div class="checklist"]
+> * Create an Ubuntu VM
+> * Open port 80 for web traffic
+> * Install Apache, MySQL, and PHP
+> * Verify installation and configuration
+> * Install WordPress
+
+This setup is for quick tests or proof of concept. For more on the LAMP stack, including recommendations for a production environment, see the [Ubuntu documentation](https://help.ubuntu.com/community/ApacheMySQLPHP).
+ +This tutorial uses the CLI within the [Azure Cloud Shell](/azure/cloud-shell/overview), which is constantly updated to the latest version. To open the Cloud Shell, select **Try it** from the top of any code block. + +If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). + +## Create a resource group + +Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. + +The following example creates a resource group using environment variables and appends a random suffix to ensure uniqueness. + +```azurecli-interactive +export REGION="eastus2" +export RANDOM_SUFFIX="$(openssl rand -hex 3)" +export MY_RESOURCE_GROUP_NAME="myResourceGroup${RANDOM_SUFFIX}" +az group create --name "${MY_RESOURCE_GROUP_NAME}" --location $REGION +``` + +Results: + + + +```JSON +{ + "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myResourceGroupxxxxx", + "location": "eastus", + "name": "myResourceGroupxxxxx", + "properties": { + "provisioningState": "Succeeded" + } +} +``` + +## Create a virtual machine + +Create a VM with the [az vm create](/cli/azure/vm) command. + +The following example creates a VM using environment variables. It creates a VM named *myVM* and creates SSH keys if they don't already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option. The command also sets *azureuser* as an administrator user name. You use this name later to connect to the VM. + +```azurecli-interactive +export MY_VM_NAME="myVM${RANDOM_SUFFIX}" +export IMAGE="Ubuntu2204" +export ADMIN_USERNAME="azureuser" +az vm create \ + --resource-group "${MY_RESOURCE_GROUP_NAME}" \ + --name $MY_VM_NAME \ + --image $IMAGE \ + --admin-username $ADMIN_USERNAME \ + --generate-ssh-keys +``` + +When the VM has been created, the Azure CLI shows information similar to the following example. Take note of the `publicIpAddress`. This address is used to access the VM in later steps. + +```output +{ + "fqdns": "", + "id": "/subscriptions//resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM", + "location": "eastus", + "macAddress": "00-0D-3A-23-9A-49", + "powerState": "VM running", + "privateIpAddress": "10.0.0.4", + "publicIpAddress": "40.68.254.142", + "resourceGroup": "myResourceGroup" +} +``` + +## Open port 80 for web traffic + +By default, only SSH connections are allowed into Linux VMs deployed in Azure. Because this VM is going to be a web server, you need to open port 80 from the internet. Use the [az vm open-port](/cli/azure/vm) command to open the desired port. + +```azurecli-interactive +az vm open-port --port 80 --resource-group "${MY_RESOURCE_GROUP_NAME}" --name $MY_VM_NAME +``` + +For more information about opening ports to your VM, see [Open ports](nsg-quickstart.md). + +## SSH into your VM + +If you don't already know the public IP address of your VM, run the [az network public-ip list](/cli/azure/network/public-ip) command. You need this IP address for several later steps. + +```azurecli-interactive +export PUBLIC_IP=$(az network public-ip list --resource-group "${MY_RESOURCE_GROUP_NAME}" --query [].ipAddress -o tsv) +``` + +Use the `ssh` command to create an SSH session with the virtual machine. 
If you connect from a different machine, substitute the public IP address of your virtual machine for the `PUBLIC_IP` value.
+
+## Install Apache, MySQL, and PHP
+
+Run the following command to update Ubuntu package sources and install Apache, MySQL, and PHP. Note the caret (^) at the end of the command, which is part of the `lamp-server^` package name.
+
+```bash
+ssh -o StrictHostKeyChecking=no azureuser@$PUBLIC_IP "sudo apt-get update && sudo DEBIAN_FRONTEND=noninteractive apt-get -y install lamp-server^"
+```
+
+You're prompted to install the packages and other dependencies. This process installs the minimum required PHP extensions needed to use PHP with MySQL.
+
+## Verify Apache
+
+Check the version of Apache with the following command:
+
+```bash
+ssh -o StrictHostKeyChecking=no azureuser@$PUBLIC_IP "apache2 -v"
+```
+
+With Apache installed, and port 80 open to your VM, the web server can now be accessed from the internet. To view the Apache2 Ubuntu Default Page, open a web browser, and enter the public IP address of the VM. Use the public IP address you used to SSH to the VM:
+
+![Apache default page][3]
+
+## Verify and secure MySQL
+
+Check the version of MySQL with the following command (note the capital `V` parameter):
+
+```bash
+ssh -o StrictHostKeyChecking=no azureuser@$PUBLIC_IP "mysql -V"
+```
+
+To help secure the installation of MySQL, including setting a root password, run the `sudo mysql_secure_installation` command. This command prompts you to answer several questions to help secure your MySQL installation.
+
+You can optionally set up the Validate Password Plugin (recommended). Then, set a password for the MySQL root user, and configure the remaining security settings for your environment. We recommend that you answer "Y" (yes) to all questions.
+
+If you want to try MySQL features (create a MySQL database, add users, or change configuration settings), log in to MySQL. This step isn't required to complete this tutorial. To do so, run `sudo mysql -u root -p` in an SSH session on the VM and enter your root password when prompted. The command launches the MySQL command-line client as the root user.
+
+When done, exit the mysql prompt by typing `\q`.
+
+## Verify PHP
+
+Check the version of PHP with the following command:
+
+```bash
+ssh -o StrictHostKeyChecking=no azureuser@$PUBLIC_IP "php -v"
+```
+
+If you want to test further, you can create a quick PHP info page to view in a browser. The following command creates the PHP info page:
+
+```bash
+ssh -o StrictHostKeyChecking=no azureuser@$PUBLIC_IP "sudo sh -c 'echo \"<?php phpinfo(); ?>\" > /var/www/html/info.php'"
+```
+
+Now you can check the PHP info page you created. Open a browser and go to `http://yourPublicIPAddress/info.php`. Substitute the public IP address of your VM. It should look similar to this image.
+
+![PHP info page][2]
+
+[!INCLUDE [virtual-machines-linux-tutorial-wordpress.md](../includes/virtual-machines-linux-tutorial-wordpress.md)]
+
+## Next steps
+
+In this tutorial, you deployed a LAMP server in Azure. You learned how to:
+
+> [!div class="checklist"]
+> * Create an Ubuntu VM
+> * Open port 80 for web traffic
+> * Install Apache, MySQL, and PHP
+> * Verify installation and configuration
+> * Install WordPress on the LAMP server
+
+Advance to the next tutorial to learn how to secure web servers with TLS/SSL certificates.
+ +> [!div class="nextstepaction"] +> [Secure web server with TLS](tutorial-secure-web-server.md) + +[2]: ./media/tutorial-lamp-stack/phpsuccesspage.png +[3]: ./media/tutorial-lamp-stack/apachesuccesspage.png \ No newline at end of file diff --git a/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-manage-vm.md b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-manage-vm.md new file mode 100644 index 000000000..08dd74bc9 --- /dev/null +++ b/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-manage-vm.md @@ -0,0 +1,332 @@ +--- +title: Tutorial - Create and manage Linux VMs with the Azure CLI +description: In this tutorial, you learn how to use the Azure CLI to create and manage Linux VMs in Azure +author: ju-shim +ms.service: azure-virtual-machines +ms.collection: linux +ms.topic: tutorial +ms.date: 03/23/2023 +ms.author: jushiman +ms.custom: mvc, devx-track-azurecli, linux-related-content, innovation-engine +#Customer intent: As an IT administrator, I want to learn about common maintenance tasks so that I can create and manage Linux VMs in Azure +--- + +# Tutorial: Create and Manage Linux VMs with the Azure CLI + +**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets + +Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic Azure virtual machine deployment items such as selecting a VM size, selecting a VM image, and deploying a VM. You learn how to: + +> [!div class="checklist"] +> * Create and connect to a VM +> * Select and use VM images +> * View and use specific VM sizes +> * Resize a VM +> * View and understand VM state + +This tutorial uses the CLI within the [Azure Cloud Shell](/azure/cloud-shell/overview), which is constantly updated to the latest version. + +If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). + +## Create resource group + +Below, we declare environment variables. A random suffix is appended to resource names that need to be unique for each deployment. + +```bash +export RANDOM_SUFFIX=$(openssl rand -hex 3) +export REGION="eastus2" +export MY_RESOURCE_GROUP_NAME="myResourceGroupVM$RANDOM_SUFFIX" +az group create --name $MY_RESOURCE_GROUP_NAME --location $REGION +``` + +Results: + + + +```JSON +{ + "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myResourceGroupVMxxx", + "location": "eastus2", + "name": "myResourceGroupVMxxx", + "properties": { + "provisioningState": "Succeeded" + } +} +``` + +An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a virtual machine. In this example, a resource group named *myResourceGroupVM* is created in the *eastus2* region. + +The resource group is specified when creating or modifying a VM, which can be seen throughout this tutorial. + +## Create virtual machine + +When you create a virtual machine, several options are available such as operating system image, disk sizing, and administrative credentials. The following example creates a VM named *myVM* that runs SUSE Linux Enterprise Server (SLES). A user account named *azureuser* is created on the VM, and SSH keys are generated if they do not exist in the default key location (*~/.ssh*). 
+
+```bash
+export MY_VM_NAME="myVM$RANDOM_SUFFIX"
+az vm create \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --name $MY_VM_NAME \
+    --image SuseSles15SP5 \
+    --public-ip-sku Standard \
+    --admin-username azureuser \
+    --generate-ssh-keys
+```
+
+It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information about the VM. Take note of the `publicIpAddress`; this address can be used to access the virtual machine.
+
+```JSON
+{
+  "fqdns": "",
+  "id": "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/myResourceGroupVMxxx/providers/Microsoft.Compute/virtualMachines/myVMxxx",
+  "location": "eastus2",
+  "macAddress": "00-0D-3A-23-9A-49",
+  "powerState": "VM running",
+  "privateIpAddress": "10.0.0.4",
+  "publicIpAddress": "52.174.34.95",
+  "resourceGroup": "myResourceGroupVMxxx"
+}
+```
+
+## Connect to VM
+
+You can now connect to the VM with SSH in the Azure Cloud Shell or from your local computer.
+
+To connect to the VM, first retrieve the public IP address using the Azure CLI and store it in a variable:
+
+```bash
+export IP_ADDRESS=$(az vm show --show-details --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --query publicIps --output tsv)
+```
+
+Once you have the IP address, use SSH to connect to the VM. The following command connects to the VM using the `azureuser` account and the retrieved IP address:
+
+```bash
+ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS
+```
+
+## Understand VM images
+
+The Azure Marketplace includes many images that can be used to create VMs. In the previous steps, a virtual machine was created using a SUSE image. In this step, the Azure CLI is used to search the marketplace for an Ubuntu image, which is then used to deploy a second virtual machine.
+
+To see a list of the most commonly used images, use the [az vm image list](/cli/azure/vm/image) command.
+
+```bash
+az vm image list --output table
+```
+
+The command output returns the most popular VM images on Azure.
+
+```output
+Architecture    Offer                         Publisher               Sku                                 Urn                                                                              UrnAlias                 Version
+--------------  ----------------------------  ----------------------  ----------------------------------  -------------------------------------------------------------------------------  -----------------------  ---------
+x64             debian-10                     Debian                  10                                  Debian:debian-10:10:latest                                                       Debian                   latest
+x64             flatcar-container-linux-free  kinvolk                 stable                              kinvolk:flatcar-container-linux-free:stable:latest                               Flatcar                  latest
+x64             opensuse-leap-15-3            SUSE                    gen2                                SUSE:opensuse-leap-15-3:gen2:latest                                              openSUSE-Leap            latest
+x64             RHEL                          RedHat                  7-LVM                               RedHat:RHEL:7-LVM:latest                                                         RHEL                     latest
+x64             sles-15-sp3                   SUSE                    gen2                                SUSE:sles-15-sp3:gen2:latest                                                     SLES                     latest
+x64             UbuntuServer                  Canonical               18.04-LTS                           Canonical:UbuntuServer:18.04-LTS:latest                                          UbuntuLTS                latest
+x64             WindowsServer                 MicrosoftWindowsServer  2022-Datacenter                     MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest                      Win2022Datacenter        latest
+x64             WindowsServer                 MicrosoftWindowsServer  2022-datacenter-azure-edition-core  MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest   Win2022AzureEditionCore  latest
+x64             WindowsServer                 MicrosoftWindowsServer  2019-Datacenter                     MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest                      Win2019Datacenter        latest
+x64             WindowsServer                 MicrosoftWindowsServer  2016-Datacenter                     MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest                      Win2016Datacenter        latest
+x64             WindowsServer                 MicrosoftWindowsServer  2012-R2-Datacenter                  MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest                   Win2012R2Datacenter      latest
+x64             WindowsServer                 MicrosoftWindowsServer  2012-Datacenter                     MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest                      Win2012Datacenter        latest
+x64             WindowsServer                 MicrosoftWindowsServer  2008-R2-SP1                         MicrosoftWindowsServer:WindowsServer:2008-R2-SP1:latest                          Win2008R2SP1             latest
+```
+
+A full list can be seen by adding the `--all` parameter. The image list can also be filtered by `--publisher` or `--offer`. In this example, the list is filtered for all images, published by Canonical, with an offer that matches *0001-com-ubuntu-server-jammy*.
+
+```bash
+az vm image list --offer 0001-com-ubuntu-server-jammy --publisher Canonical --all --output table
+```
+
+Example partial output:
+
+```output
+Architecture    Offer                         Publisher    Sku        Urn                                                                Version
+--------------  ----------------------------  -----------  ---------  -----------------------------------------------------------------  ---------------
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202204200   22.04.202204200
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202205060   22.04.202205060
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202205280   22.04.202205280
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206040   22.04.202206040
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206090   22.04.202206090
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206160   22.04.202206160
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206220   22.04.202206220
+x64             0001-com-ubuntu-server-jammy  Canonical    22_04-lts  Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202207060   22.04.202207060
+```
+
+> [!NOTE]
+> Canonical has changed the **Offer** names they use for the most recent versions. Before Ubuntu 20.04, the **Offer** name is UbuntuServer. For Ubuntu 20.04 the **Offer** name is `0001-com-ubuntu-server-focal` and for Ubuntu 22.04 it's `0001-com-ubuntu-server-jammy`.
+
+To deploy a VM using a specific image, take note of the value in the *Urn* column, which consists of the publisher, offer, SKU, and optionally a version number to [identify](cli-ps-findimage.md#terminology) the image. When specifying the image, the image version number can be replaced with `latest`, which selects the latest version of the distribution. In this example, the `--image` parameter is used to specify the latest version of an Ubuntu 22.04 LTS image.
+
+```bash
+export MY_VM2_NAME="myVM2$RANDOM_SUFFIX"
+az vm create --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM2_NAME --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest --generate-ssh-keys
+```
+
+## Understand VM sizes
+
+A virtual machine size determines the amount of compute resources such as CPU, GPU, and memory that are made available to the virtual machine. Virtual machines need to be sized appropriately for the expected workload. If the workload increases, an existing virtual machine can be resized.
+
+### VM Sizes
+
+The following table categorizes sizes into use cases.
+
+| Type                     | Description |
+|--------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| [General purpose](../sizes-general.md)         | Balanced CPU-to-memory. Ideal for dev / test and small to medium applications and data solutions.  |
+| [Compute optimized](../sizes-compute.md)   | High CPU-to-memory. Good for medium traffic applications, network appliances, and batch processes. |
+| [Memory optimized](../sizes-memory.md)    | High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics.                 |
+| [Storage optimized](../sizes-storage.md)      | High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
| +| [GPU](../sizes-gpu.md) | Specialized VMs targeted for heavy graphic rendering and video editing. | +| [High performance](../sizes-hpc.md) | Our most powerful CPU VMs with optional high-throughput network interfaces (RDMA). | + +### Find available VM sizes + +To see a list of VM sizes available in a particular region, use the [az vm list-sizes](/cli/azure/vm) command. + +```bash +az vm list-sizes --location $REGION --output table +``` + +Example partial output: + +```output + MaxDataDiskCount MemoryInMb Name NumberOfCores OsDiskSizeInMb ResourceDiskSizeInMb +------------------ ------------ ---------------------- --------------- ---------------- ---------------------- +4 8192 Standard_D2ds_v4 2 1047552 76800 +8 16384 Standard_D4ds_v4 4 1047552 153600 +16 32768 Standard_D8ds_v4 8 1047552 307200 +32 65536 Standard_D16ds_v4 16 1047552 614400 +32 131072 Standard_D32ds_v4 32 1047552 1228800 +32 196608 Standard_D48ds_v4 48 1047552 1843200 +32 262144 Standard_D64ds_v4 64 1047552 2457600 +4 8192 Standard_D2ds_v5 2 1047552 76800 +8 16384 Standard_D4ds_v5 4 1047552 153600 +16 32768 Standard_D8ds_v5 8 1047552 307200 +32 65536 Standard_D16ds_v5 16 1047552 614400 +32 131072 Standard_D32ds_v5 32 1047552 1228800 +32 196608 Standard_D48ds_v5 48 1047552 1843200 +32 262144 Standard_D64ds_v5 64 1047552 2457600 +32 393216 Standard_D96ds_v5 96 1047552 3686400 +``` + +### Create VM with specific size + +In the previous VM creation example, a size was not provided, which results in a default size. A VM size can be selected at creation time using [az vm create](/cli/azure/vm) and the `--size` parameter. + +```bash +export MY_VM3_NAME="myVM3$RANDOM_SUFFIX" +az vm create \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --name $MY_VM3_NAME \ + --image SuseSles15SP5 \ + --size Standard_D2ds_v4 \ + --generate-ssh-keys +``` + +### Resize a VM + +After a VM has been deployed, it can be resized to increase or decrease resource allocation. You can view the current size of a VM with [az vm show](/cli/azure/vm): + +```bash +az vm show --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --query hardwareProfile.vmSize +``` + +Before resizing a VM, check if the desired size is available on the current Azure cluster. The [az vm list-vm-resize-options](/cli/azure/vm) command returns the list of sizes. + +```bash +az vm list-vm-resize-options --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --query [].name +``` + +If the desired size is available, the VM can be resized from a powered-on state, although it will be rebooted during the operation. Use the [az vm resize]( /cli/azure/vm) command to perform the resize. + +```bash +az vm resize --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --size Standard_D4s_v3 +``` + +If the desired size is not available on the current cluster, the VM needs to be deallocated before the resize operation can occur. Use the [az vm deallocate]( /cli/azure/vm) command to stop and deallocate the VM. Note that when the VM is powered back on, any data on the temporary disk may be removed. The public IP address also changes unless a static IP address is being used. Once deallocated, the resize can occur. + +After the resize, the VM can be started. + +```bash +az vm start --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME +``` + +## VM power states + +An Azure VM can have one of many power states. This state represents the current state of the VM from the standpoint of the hypervisor. 
+ +### Power states + +| Power State | Description | +|-------------|-------------| +| Starting | Indicates the virtual machine is being started. | +| Running | Indicates that the virtual machine is running. | +| Stopping | Indicates that the virtual machine is being stopped. | +| Stopped | Indicates that the virtual machine is stopped. Virtual machines in the stopped state still incur compute charges. | +| Deallocating| Indicates that the virtual machine is being deallocated. | +| Deallocated | Indicates that the virtual machine is removed from the hypervisor but still available in the control plane. Virtual machines in the Deallocated state do not incur compute charges. | +| - | Indicates that the power state of the virtual machine is unknown. | + +### Find the power state + +To retrieve the state of a particular VM, use the [az vm get-instance-view](/cli/azure/vm) command. Be sure to specify a valid name for a virtual machine and resource group. + +```bash +az vm get-instance-view \ + --name $MY_VM_NAME \ + --resource-group $MY_RESOURCE_GROUP_NAME \ + --query instanceView.statuses[1] --output table +``` + +Output: + +```output +Code Level DisplayStatus +------------------ ------- --------------- +PowerState/running Info VM running +``` + +To retrieve the power state of all the VMs in your subscription, use the [Virtual Machines - List All API](/rest/api/compute/virtualmachines/listall) with parameter **statusOnly** set to *true*. + +## Management tasks + +During the life-cycle of a virtual machine, you may want to run management tasks such as starting, stopping, or deleting a virtual machine. Additionally, you may want to create scripts to automate repetitive or complex tasks. Using the Azure CLI, many common management tasks can be run from the command line or in scripts. + +### Get IP address + +This command returns the private and public IP addresses of a virtual machine. + +```bash +az vm list-ip-addresses --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --output table +``` + +### Stop virtual machine + +```bash +az vm stop --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME +``` + +### Start virtual machine + +```bash +az vm start --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME +``` + +### Deleting VM resources + +Depending on how you delete a VM, it may only delete the VM resource, not the networking and disk resources. You can change the default behavior to delete other resources when you delete the VM. For more information, see [Delete a VM and attached resources](../delete.md). + +Deleting a resource group also deletes all resources in the resource group, like the VM, virtual network, and disk. The `--no-wait` parameter returns control to the prompt without waiting for the operation to complete. The `--yes` parameter confirms that you wish to delete the resources without an additional prompt to do so. + +## Next steps + +In this tutorial, you learned about basic VM creation and management such as how to: + +> [!div class="checklist"] +> * Create and connect to a VM +> * Select and use VM images +> * View and use specific VM sizes +> * Resize a VM +> * View and understand VM state + +Advance to the next tutorial to learn about VM disks. 
+
+> [!div class="nextstepaction"]
+> [Create and Manage VM disks](./tutorial-manage-disks.md)
\ No newline at end of file
diff --git a/scenarios/azure-dev-docs/articles/ansible/vm-configure.md b/scenarios/azure-dev-docs/articles/ansible/vm-configure.md
new file mode 100644
index 000000000..e785a1230
--- /dev/null
+++ b/scenarios/azure-dev-docs/articles/ansible/vm-configure.md
@@ -0,0 +1,138 @@
+---
+title: Create a Linux virtual machine in Azure using Ansible
+description: Learn how to create a Linux virtual machine in Azure using Ansible
+keywords: ansible, azure, devops, virtual machine
+ms.topic: tutorial
+ms.date: 08/14/2024
+ms.custom: devx-track-ansible, linux-related-content
+---
+
+# Create a Linux virtual machine in Azure using Ansible
+
+This article presents a sample Ansible playbook for configuring a Linux virtual machine.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a resource group
+> * Create a virtual network
+> * Create a public IP address
+> * Create a network security group
+> * Create a virtual network interface card
+> * Create a virtual machine
+
+## 1. Configure your environment
+
+[!INCLUDE [open-source-devops-prereqs-azure-sub.md](../includes/open-source-devops-prereqs-azure-subscription.md)]
+[!INCLUDE [ansible-prereqs-cloudshell-use-or-vm-creation1.md](includes/ansible-prereqs-cloudshell-use-or-vm-creation1.md)]
+
+## 2. Create an SSH key pair
+
+1. Run the following command. When prompted, specify the files to be created in the following directory: `/home/azureuser/.ssh/authorized_keys`.
+
+    ```bash
+    ssh-keygen -m PEM -t rsa -b 4096
+    ```
+
+1. Copy the contents of the public key file. By default, the public key file is named `id_rsa.pub`. The value is a long string starting with "ssh-rsa ". You'll need this value in the next step.
+
+## 3. Implement the Ansible playbook
+
+1. Create a directory in which to test and run the sample Ansible code and make it the current directory.
+
+1. Create a file named `main.yml` and insert the following code. Replace the `<key_data>` placeholder with the public key value from the previous step.
+
+    ```yaml
+    - name: Create Azure VM
+      hosts: localhost
+      connection: local
+      tasks:
+        - name: Create resource group
+          azure_rm_resourcegroup:
+            name: myResourceGroup
+            location: eastus
+        - name: Create virtual network
+          azure_rm_virtualnetwork:
+            resource_group: myResourceGroup
+            name: myVnet
+            address_prefixes: "10.0.0.0/16"
+        - name: Add subnet
+          azure_rm_subnet:
+            resource_group: myResourceGroup
+            name: mySubnet
+            address_prefix: "10.0.1.0/24"
+            virtual_network: myVnet
+        - name: Create public IP address
+          azure_rm_publicipaddress:
+            resource_group: myResourceGroup
+            allocation_method: Static
+            name: myPublicIP
+          # Capture the module output so the allocated IP can be printed below
+          register: output_ip_address
+        - name: Public IP of VM
+          debug:
+            msg: "The public IP is {{ output_ip_address.state.ip_address }}."
+        - name: Create Network Security Group that allows SSH
+          azure_rm_securitygroup:
+            resource_group: myResourceGroup
+            name: myNetworkSecurityGroup
+            rules:
+              - name: SSH
+                protocol: Tcp
+                destination_port_range: 22
+                access: Allow
+                priority: 1001
+                direction: Inbound
+        - name: Create virtual network interface card
+          azure_rm_networkinterface:
+            resource_group: myResourceGroup
+            name: myNIC
+            virtual_network: myVnet
+            subnet: mySubnet
+            public_ip_name: myPublicIP
+            security_group: myNetworkSecurityGroup
+        - name: Create VM
+          azure_rm_virtualmachine:
+            resource_group: myResourceGroup
+            name: myVM
+            vm_size: Standard_DS1_v2
+            admin_username: azureuser
+            ssh_password_enabled: false
+            ssh_public_keys:
+              - path: /home/azureuser/.ssh/authorized_keys
+                key_data: "<key_data>"
+            network_interfaces: myNIC
+            image:
+              offer: 0001-com-ubuntu-server-jammy
+              publisher: Canonical
+              sku: 22_04-lts
+              version: latest
+    ```
+
+## 4. Run the playbook
+
+[!INCLUDE [ansible-playbook.md](includes/ansible-playbook.md)]
+
+## 5. Verify the results
+
+Run [az vm list](/cli/azure/vm#az-vm-list) to verify the VM was created.
+
+```azurecli
+az vm list -d -o table --query "[?name=='myVM']"
+```
+
+## 6. Connect to the VM
+
+Run the SSH command to connect to your new Linux VM. Replace the `<ip-address>` placeholder with the IP address from the previous step.
+
+```bash
+ssh azureuser@<ip-address> -i /home/azureuser/.ssh/authorized_keys/id_rsa
+```
+
+## Clean up resources
+
+[!INCLUDE [ansible-delete-resource-group.md](includes/ansible-delete-resource-group.md)]
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Manage a Linux virtual machine in Azure using Ansible](./vm-manage.md)
\ No newline at end of file
diff --git a/scenarios/azure-docs/articles/batch/quick-create-cli.md b/scenarios/azure-docs/articles/batch/quick-create-cli.md
new file mode 100644
index 000000000..b0b86f1f4
--- /dev/null
+++ b/scenarios/azure-docs/articles/batch/quick-create-cli.md
@@ -0,0 +1,247 @@
+---
+title: 'Quickstart: Use the Azure CLI to create a Batch account and run a job'
+description: Follow this quickstart to use the Azure CLI to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.
+ms.topic: quickstart
+ms.date: 03/19/2025
+ms.custom: mvc, devx-track-azurecli, mode-api, linux-related-content, innovation-engine
+author: padmalathas
+ms.author: padmalathas
+---
+
+# Quickstart: Use the Azure CLI to create a Batch account and run a job
+
+This quickstart shows you how to get started with Azure Batch by using Azure CLI commands and scripts to create and manage Batch resources. You create a Batch account that has a pool of virtual machines, or compute nodes. You then create and run a job with tasks that run on the pool nodes.
+
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
+
+## Prerequisites
+
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
+
+- Azure Cloud Shell or Azure CLI.
+
+  You can run the Azure CLI commands in this quickstart interactively in Azure Cloud Shell. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also [run Cloud Shell from within the Azure portal](https://shell.azure.com).
Cloud Shell always uses the latest version of the Azure CLI. + + Alternatively, you can [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. The steps in this article require Azure CLI version 2.0.20 or later. Run [az version](/cli/azure/reference-index?#az-version) to see your installed version and dependent libraries, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) to upgrade. If you use a local installation, sign in to Azure by using the appropriate command. + +>[!NOTE] +>For some regions and subscription types, quota restrictions might cause Batch account or node creation to fail or not complete. In this situation, you can request a quota increase at no charge. For more information, see [Batch service quotas and limits](batch-quota-limit.md). + +## Create a resource group + +Run the following [az group create](/cli/azure/group#az-group-create) command to create an Azure resource group. The resource group is a logical container that holds the Azure resources for this quickstart. + +```azurecli-interactive +export RANDOM_SUFFIX=$(openssl rand -hex 3) +export REGION="canadacentral" +export RESOURCE_GROUP="qsBatch$RANDOM_SUFFIX" + +az group create \ + --name $RESOURCE_GROUP \ + --location $REGION +``` + +Results: + + + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/qsBatchxxx", + "location": "eastus2", + "managedBy": null, + "name": "qsBatchxxx", + "properties": { + "provisioningState": "Succeeded" + }, + "tags": null, + "type": "Microsoft.Resources/resourceGroups" +} +``` + +## Create a storage account + +Use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command to create an Azure Storage account to link to your Batch account. Although this quickstart doesn't use the storage account, most real-world Batch workloads use a linked storage account to deploy applications and store input and output data. + +Run the following command to create a Standard_LRS SKU storage account in your resource group: + +```azurecli-interactive +export STORAGE_ACCOUNT="mybatchstorage$RANDOM_SUFFIX" + +az storage account create \ + --resource-group $RESOURCE_GROUP \ + --name $STORAGE_ACCOUNT \ + --location $REGION \ + --sku Standard_LRS +``` + +## Create a Batch account + +Run the following [az batch account create](/cli/azure/batch/account#az-batch-account-create) command to create a Batch account in your resource group and link it with the storage account. + +```azurecli-interactive +export BATCH_ACCOUNT="mybatchaccount$RANDOM_SUFFIX" + +az batch account create \ + --name $BATCH_ACCOUNT \ + --storage-account $STORAGE_ACCOUNT \ + --resource-group $RESOURCE_GROUP \ + --location $REGION +``` + +Sign in to the new Batch account by running the [az batch account login](/cli/azure/batch/account#az-batch-account-login) command. Once you authenticate your account with Batch, subsequent `az batch` commands in this session use this account context. + +```azurecli-interactive +az batch account login \ + --name $BATCH_ACCOUNT \ + --resource-group $RESOURCE_GROUP \ + --shared-key-auth +``` + +## Create a pool of compute nodes + +Run the [az batch pool create](/cli/azure/batch/pool#az-batch-pool-create) command to create a pool of Linux compute nodes in your Batch account. The following example creates a pool that consists of two Standard_A1_v2 size VMs running Ubuntu 20.04 LTS OS. This node size offers a good balance of performance versus cost for this quickstart example. 
+ +```azurecli-interactive +export POOL_ID="myPool$RANDOM_SUFFIX" + +az batch pool create \ + --id $POOL_ID \ + --image canonical:0001-com-ubuntu-server-focal:20_04-lts \ + --node-agent-sku-id "batch.node.ubuntu 20.04" \ + --target-dedicated-nodes 2 \ + --vm-size Standard_A1_v2 +``` + +Batch creates the pool immediately, but takes a few minutes to allocate and start the compute nodes. To see the pool status, use the [az batch pool show](/cli/azure/batch/pool#az-batch-pool-show) command. This command shows all the properties of the pool, and you can query for specific properties. The following command queries for the pool allocation state: + +```azurecli-interactive +az batch pool show --pool-id $POOL_ID \ + --query "{allocationState: allocationState}" +``` + +Results: + + + +```JSON +{ + "allocationState": "resizing" +} +``` + +While Batch allocates and starts the nodes, the pool is in the `resizing` state. You can create a job and tasks while the pool state is still `resizing`. The pool is ready to run tasks when the allocation state is `steady` and all the nodes are running. + +## Create a job + +Use the [az batch job create](/cli/azure/batch/job#az-batch-job-create) command to create a Batch job to run on your pool. A Batch job is a logical group of one or more tasks. The job includes settings common to the tasks, such as the pool to run on. The following example creates a job that initially has no tasks. + +```azurecli-interactive +export JOB_ID="myJob$RANDOM_SUFFIX" + +az batch job create \ + --id $JOB_ID \ + --pool-id $POOL_ID +``` + +## Create job tasks + +Batch provides several ways to deploy apps and scripts to compute nodes. Use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create tasks to run in the job. Each task has a command line that specifies an app or script. + +The following Bash script creates four identical, parallel tasks called `myTask1` through `myTask4`. The task command line displays the Batch environment variables on the compute node, and then waits 90 seconds. + +```azurecli-interactive +for i in {1..4} +do + az batch task create \ + --task-id myTask$i \ + --job-id $JOB_ID \ + --command-line "/bin/bash -c 'printenv | grep AZ_BATCH; sleep 90s'" +done +``` + +Batch distributes the tasks to the compute nodes. + +## View task status + +After you create the tasks, Batch queues them to run on the pool. Once a node is available, a task runs on the node. + +Use the [az batch task show](/cli/azure/batch/task#az-batch-task-show) command to view the status of Batch tasks. The following example shows details about the status of `myTask1`: + +```azurecli-interactive +az batch task show \ + --job-id $JOB_ID \ + --task-id myTask1 +``` + +The command output includes many details. For example, an `exitCode` of `0` indicates that the task command completed successfully. The `nodeId` shows the name of the pool node that ran the task. + +## View task output + +Use the [az batch task file list](/cli/azure/batch/task#az-batch-task-file-show) command to list the files a task created on a node. The following command lists the files that `myTask1` created: + +```azurecli-interactive +# Wait for task to complete before downloading output +echo "Waiting for task to complete..." 
+while true; do
+    STATUS=$(az batch task show --job-id $JOB_ID --task-id myTask1 --query "state" -o tsv)
+    if [ "$STATUS" == "completed" ]; then
+        break
+    fi
+    sleep 10
+done
+
+az batch task file list --job-id $JOB_ID --task-id myTask1 --output table
+```
+
+Results are similar to the following output:
+
+```output
+Name        URL                                                                                        Is Directory      Content Length
+----------  -----------------------------------------------------------------------------------------  --------------  ----------------
+stdout.txt  https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/stdout.txt   False                        695
+certs       https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/certs        True
+wd          https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/wd           True
+stderr.txt  https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/stderr.txt   False                          0
+```
+
+The [az batch task file download](/cli/azure/batch/task#az-batch-task-file-download) command downloads output files to a local directory. Run the following example to download the *stdout.txt* file:
+
+```azurecli-interactive
+az batch task file download \
+    --job-id $JOB_ID \
+    --task-id myTask1 \
+    --file-path stdout.txt \
+    --destination ./stdout.txt
+```
+
+You can view the contents of the standard output file in a text editor. The following example shows a typical *stdout.txt* file. The standard output from this task shows the Azure Batch environment variables that are set on the node. You can refer to these environment variables in your Batch job task command lines, and in the apps and scripts the command lines run.
+
+```text
+AZ_BATCH_TASK_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1
+AZ_BATCH_NODE_STARTUP_DIR=/mnt/batch/tasks/startup
+AZ_BATCH_CERTIFICATES_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1/certs
+AZ_BATCH_ACCOUNT_URL=https://mybatchaccount.eastus2.batch.azure.com/
+AZ_BATCH_TASK_WORKING_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1/wd
+AZ_BATCH_NODE_SHARED_DIR=/mnt/batch/tasks/shared
+AZ_BATCH_TASK_USER=_azbatch
+AZ_BATCH_NODE_ROOT_DIR=/mnt/batch/tasks
+AZ_BATCH_JOB_ID=myJob
+AZ_BATCH_NODE_IS_DEDICATED=true
+AZ_BATCH_NODE_ID=tvm-257509324_2-20180703t215033z
+AZ_BATCH_POOL_ID=myPool
+AZ_BATCH_TASK_ID=myTask1
+AZ_BATCH_ACCOUNT_NAME=mybatchaccount
+AZ_BATCH_TASK_USER_IDENTITY=PoolNonAdmin
+```
+
+## Next steps
+
+In this quickstart, you created a Batch account and pool, created and ran a Batch job and tasks, and viewed task output from the nodes. Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch, continue to the Azure Batch tutorials.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Run a parallel workload with Azure Batch](./tutorial-parallel-python.md)
\ No newline at end of file
diff --git a/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-add-nodepool.md b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-add-nodepool.md
new file mode 100644
index 000000000..f88c2f19b
--- /dev/null
+++ b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-add-nodepool.md
@@ -0,0 +1,140 @@
+---
+title: Azure Linux Container Host for AKS tutorial - Add an Azure Linux node pool to your existing AKS cluster
+description: In this Azure Linux Container Host for AKS tutorial, you learn how to add an Azure Linux node pool to your existing cluster.
+author: suhuruli
+ms.author: suhuruli
+ms.service: microsoft-linux
+ms.custom: linux-related-content, innovation-engine
+ms.topic: tutorial
+ms.date: 06/06/2023
+---
+
+# Tutorial: Add an Azure Linux node pool to your existing AKS cluster
+
+In AKS, nodes with the same configurations are grouped together into node pools. Each pool contains the VMs that run your applications. In the previous tutorial, you created an Azure Linux Container Host cluster with a single node pool. To meet the varying compute or storage requirements of your applications, you can create additional user node pools.
+
+In this tutorial, part two of five, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Add an Azure Linux node pool.
+> * Check the status of your node pools.
+
+In later tutorials, you learn how to migrate nodes to Azure Linux and enable telemetry to monitor your clusters.
+
+## Prerequisites
+
+* In the previous tutorial, you created and deployed an Azure Linux Container Host cluster. If you haven't done these steps and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md).
+* You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+## Add an Azure Linux node pool
+
+To add an Azure Linux node pool into your existing cluster, use the `az aks nodepool add` command and specify `--os-sku AzureLinux`. The following example creates a node pool with a name of the form *np\<suffix\>* that runs three nodes in the cluster from the previous tutorial. A random suffix is appended to the node pool name to ensure uniqueness, and the `$RESOURCE_GROUP` and `$CLUSTER_NAME` environment variables carry over from the previous tutorial.
+
+```azurecli-interactive
+export RANDOM_SUFFIX=$(openssl rand -hex 3)
+export NODEPOOL_NAME="np$RANDOM_SUFFIX"
+
+az aks nodepool add \
+    --resource-group $RESOURCE_GROUP \
+    --cluster-name $CLUSTER_NAME \
+    --name $NODEPOOL_NAME \
+    --node-count 3 \
+    --os-sku AzureLinux
+```
+
+```JSON
+{
+  "agentPoolType": "VirtualMachineScaleSets",
+  "count": 3,
+  "name": "npxxxxxx",
+  "osType": "Linux",
+  "provisioningState": "Succeeded",
+  "resourceGroup": "testAzureLinuxResourceGroupxxxxx",
+  "type": "Microsoft.ContainerService/managedClusters/agentPools"
+}
+```
+
+> [!NOTE]
+> The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters. For Linux node pools the length must be between one and 12 characters.
+
+## Check the node pool status
+
+To see the status of your node pools, use the `az aks nodepool list` command and specify your resource group and cluster name. The same environment variable values declared earlier are used here.
+ +```azurecli-interactive +az aks nodepool list --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME +``` + + +```output +[ + { + "agentPoolType": "VirtualMachineScaleSets", + "availabilityZones": null, + "count": 1, + "enableAutoScaling": false, + "enableEncryptionAtHost": false, + "enableFips": false, + "enableNodePublicIp": false, + "id": "/subscriptions/REDACTED/resourcegroups/myAKSResourceGroupxxxxx/providers/Microsoft.ContainerService/managedClusters/myAKSClusterxxxxx/agentPools/nodepoolx", + "maxPods": 110, + "mode": "System", + "name": "nodepoolx", + "nodeImageVersion": "AKSUbuntu-1804gen2containerd-2023.06.06", + "orchestratorVersion": "1.25.6", + "osDiskSizeGb": 128, + "osDiskType": "Managed", + "osSku": "Ubuntu", + "osType": "Linux", + "powerState": { + "code": "Running" + }, + "provisioningState": "Succeeded", + "resourceGroup": "myAKSResourceGroupxxxxx", + "type": "Microsoft.ContainerService/managedClusters/agentPools", + "vmSize": "Standard_DS2_v2" + }, + { + "agentPoolType": "VirtualMachineScaleSets", + "availabilityZones": null, + "count": 3, + "enableAutoScaling": false, + "enableEncryptionAtHost": false, + "enableFips": false, + "enableNodePublicIp": false, + "id": "/subscriptions/REDACTED/resourcegroups/myAKSResourceGroupxxxxx/providers/Microsoft.ContainerService/managedClusters/myAKSClusterxxxxx/agentPools/npxxxxxx", + "maxPods": 110, + "mode": "User", + "name": "npxxxxxx", + "nodeImageVersion": "AzureLinuxContainerHost-2023.06.06", + "orchestratorVersion": "1.25.6", + "osDiskSizeGb": 128, + "osDiskType": "Managed", + "osSku": "AzureLinux", + "osType": "Linux", + "powerState": { + "code": "Running" + }, + "provisioningState": "Succeeded", + "resourceGroup": "myAKSResourceGroupxxxxx", + "type": "Microsoft.ContainerService/managedClusters/agentPools", + "vmSize": "Standard_DS2_v2" + } +] +``` + +## Next steps + +In this tutorial, you added an Azure Linux node pool to your existing cluster. You learned how to: + +> [!div class="checklist"] +> +> * Add an Azure Linux node pool. +> * Check the status of your node pools. + +In the next tutorial, you learn how to migrate existing nodes to Azure Linux. + +> [!div class="nextstepaction"] +> [Migrating to Azure Linux](./tutorial-azure-linux-migration.md) \ No newline at end of file diff --git a/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-create-cluster.md b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-create-cluster.md new file mode 100644 index 000000000..c9254eacf --- /dev/null +++ b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-create-cluster.md @@ -0,0 +1,120 @@ +--- +title: Azure Linux Container Host for AKS tutorial - Create a cluster +description: In this Azure Linux Container Host for AKS tutorial, you will learn how to create an AKS cluster with Azure Linux. +author: suhuruli +ms.author: suhuruli +ms.service: microsoft-linux +ms.custom: linux-related-content, innovation-engine +ms.topic: tutorial +ms.date: 04/18/2023 +--- + +# Tutorial: Create a cluster with the Azure Linux Container Host for AKS + +To create a cluster with the Azure Linux Container Host, you will use: +1. Azure resource groups, a logical container into which Azure resources are deployed and managed. +1. [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes), a hosted Kubernetes service that allows you to quickly create a production ready Kubernetes cluster. 
+
+In this tutorial, part one of five, you will learn how to:
+
+> [!div class="checklist"]
+> * Install the Kubernetes CLI, `kubectl`.
+> * Create an Azure resource group.
+> * Create and deploy an Azure Linux Container Host cluster.
+> * Configure `kubectl` to connect to your Azure Linux Container Host cluster.
+
+In later tutorials, you'll learn how to add an Azure Linux node pool to an existing cluster and migrate existing nodes to Azure Linux.
+
+## Prerequisites
+
+- You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+## Create a resource group
+
+When creating a resource group, it is required to specify a location. This location is:
+- The storage location of your resource group metadata.
+- Where your resources will run in Azure if you don't specify another region when creating a resource.
+
+Before running the command, environment variables are declared to ensure unique resource names for each deployment.
+
+```bash
+export RANDOM_SUFFIX=$(openssl rand -hex 3)
+export REGION="EastUS2"
+export RESOURCE_GROUP_NAME="testAzureLinuxResourceGroup$RANDOM_SUFFIX"
+export CLUSTER_NAME="testAzureLinuxCluster$RANDOM_SUFFIX"
+az group create --name $RESOURCE_GROUP_NAME --location $REGION
+```
+
+```JSON
+{
+  "id": "/subscriptions/xxxxx/resourceGroups/testAzureLinuxResourceGroupxxxxx",
+  "location": "EastUS2",
+  "managedBy": null,
+  "name": "testAzureLinuxResourceGroupxxxxx",
+  "properties": {
+    "provisioningState": "Succeeded"
+  },
+  "tags": null,
+  "type": "Microsoft.Resources/resourceGroups"
+}
+```
+
+> [!NOTE]
+> The above example uses *EastUS2*, but Azure Linux Container Host clusters are available in all regions.
+
+## Create an Azure Linux Container Host cluster
+
+Create an AKS cluster using the `az aks create` command with the `--os-sku` parameter to provision the Azure Linux Container Host with an Azure Linux image. The following example creates an Azure Linux Container Host cluster.
+
+```bash
+az aks create --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --os-sku AzureLinux
+```
+
+```JSON
+{
+  "id": "/subscriptions/xxxxx/resourceGroups/testAzureLinuxResourceGroupxxxxx/providers/Microsoft.ContainerService/managedClusters/testAzureLinuxClusterxxxxx",
+  "location": "EastUS2",
+  "name": "testAzureLinuxClusterxxxxx",
+  "properties": {
+    "provisioningState": "Succeeded"
+  },
+  "type": "Microsoft.ContainerService/managedClusters"
+}
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+## Connect to the cluster using kubectl
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the `az aks get-credentials` command. The following example gets credentials for the Azure Linux Container Host cluster using the resource group and cluster name created earlier:
+
+```azurecli
+az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME
+```
+
+To verify the connection to your cluster, run the [kubectl get nodes](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes:
+
+```azurecli-interactive
+kubectl get nodes
+```
+
+```text
+NAME                       STATUS   ROLES   AGE   VERSION
+aks-nodepool1-00000000-0   Ready    agent   10m   v1.20.7
+aks-nodepool1-00000000-1   Ready    agent   10m   v1.20.7
+```
+
+## Next steps
+
+In this tutorial, you created and deployed an Azure Linux Container Host cluster. You learned how to:
+
+> [!div class="checklist"]
+> * Install the Kubernetes CLI, `kubectl`.
+> * Create an Azure resource group.
+> * Create and deploy an Azure Linux Container Host cluster.
+> * Configure `kubectl` to connect to your Azure Linux Container Host cluster. + +In the next tutorial, you'll learn how to add an Azure Linux node pool to an existing cluster. + +> [!div class="nextstepaction"] +> [Add an Azure Linux node pool](./tutorial-azure-linux-add-nodepool.md) \ No newline at end of file diff --git a/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-migration.md b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-migration.md new file mode 100644 index 000000000..adc85d4a0 --- /dev/null +++ b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-migration.md @@ -0,0 +1,144 @@ +--- +title: Azure Linux Container Host for AKS tutorial - Migrating to Azure Linux +description: In this Azure Linux Container Host for AKS tutorial, you learn how to migrate your nodes to Azure Linux nodes. +author: suhuruli +ms.author: suhuruli +ms.reviewer: schaffererin +ms.service: microsoft-linux +ms.custom: devx-track-azurecli, linux-related-content, innovation-engine +ms.topic: tutorial +ms.date: 01/19/2024 +--- + +# Tutorial: Migrate nodes to Azure Linux + +In this tutorial, part three of five, you migrate your existing nodes to Azure Linux. You can migrate your existing nodes to Azure Linux using one of the following methods: + +* Remove existing node pools and add new Azure Linux node pools. +* In-place OS SKU migration. + +If you don't have any existing nodes to migrate to Azure Linux, skip to the [next tutorial](./tutorial-azure-linux-telemetry-monitor.md). In later tutorials, you learn how to enable telemetry and monitoring in your clusters and upgrade Azure Linux nodes. + +## Prerequisites + +* In previous tutorials, you created and deployed an Azure Linux Container Host for AKS cluster. To complete this tutorial, you need to add an Azure Linux node pool to your existing cluster. If you haven't done this step and would like to follow along, start with [Tutorial 2: Add an Azure Linux node pool to your existing AKS cluster](./tutorial-azure-linux-add-nodepool.md). + + > [!NOTE] + > When adding a new Azure Linux node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing node pool. + +* You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). + +## Add Azure Linux node pools and remove existing node pools + +1. Add a new Azure Linux node pool using the `az aks nodepool add` command. This command adds a new node pool to your cluster with the `--mode System` flag, which makes it a system node pool. System node pools are required for Azure Linux clusters. + +```azurecli-interactive +# Declare environment variables with a random suffix for uniqueness +export RANDOM_SUFFIX=$(openssl rand -hex 3) +export NODE_POOL_NAME="np$RANDOM_SUFFIX" +az aks nodepool add --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME --name $NODE_POOL_NAME --mode System --os-sku AzureLinux +``` + +Results: + + + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/nodePools/systempool", + "name": "systempool", + "provisioningState": "Succeeded" +} +``` + +2. Remove your existing nodes using the `az aks nodepool delete` command. 
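+
+For example, if the original Ubuntu node pool from the earlier tutorials is named *nodepool1* (a hypothetical name; run `az aks nodepool list` to confirm yours), a minimal sketch of the removal step looks like this:
+
+```azurecli-interactive
+az aks nodepool delete --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME --name nodepool1
+```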
+ +## In-place OS SKU migration + +You can now migrate your existing Ubuntu node pools to Azure Linux by changing the OS SKU of the node pool, which rolls the cluster through the standard node image upgrade process. This new feature doesn't require the creation of new node pools. + +### Limitations + +There are several settings that can block the OS SKU migration request. To ensure a successful migration, review the following guidelines and limitations: + +* The OS SKU migration feature isn't available through PowerShell or the Azure portal. +* The OS SKU migration feature isn't able to rename existing node pools. +* Ubuntu and Azure Linux are the only supported Linux OS SKU migration targets. +* An Ubuntu OS SKU with `UseGPUDedicatedVHD` enabled can't perform an OS SKU migration. +* An Ubuntu OS SKU with CVM 20.04 enabled can't perform an OS SKU migration. +* Node pools with Kata enabled can't perform an OS SKU migration. +* Windows OS SKU migration isn't supported. +* OS SKU migration from Mariner to Azure Linux is supported, but rolling back to Mariner is not supported. + +### Prerequisites + +* An existing AKS cluster with at least one Ubuntu node pool. +* We recommend that you ensure your workloads configure and run successfully on the Azure Linux container host before attempting to use the OS SKU migration feature by [deploying an Azure Linux cluster](./quickstart-azure-cli.md) in dev/prod and verifying your service remains healthy. +* Ensure the migration feature is working for you in test/dev before using the process on a production cluster. +* Ensure that your pods have enough [Pod Disruption Budget](/azure/aks/operator-best-practices-scheduler#plan-for-availability-using-pod-disruption-budgets) to allow AKS to move pods between VMs during the upgrade. +* You need Azure CLI version [2.61.0](/cli/azure/release-notes-azure-cli#may-21-2024) or higher. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +* If you are using Terraform, you must have [v3.111.0](https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v3.111.0) or greater of the AzureRM Terraform module. + +### [Azure CLI](#tab/azure-cli) + +#### Migrate the OS SKU of your Ubuntu node pool + +* Migrate the OS SKU of your node pool to Azure Linux using the `az aks nodepool update` command. This command updates the OS SKU for your node pool from Ubuntu to Azure Linux. The OS SKU change triggers an immediate upgrade operation, which takes several minutes to complete. + +```azurecli-interactive +az aks nodepool update --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME --name $NODE_POOL_NAME --os-sku AzureLinux +``` + +Results: + + + +```JSON +{ + "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/nodePools/nodepool1", + "name": "nodepool1", + "osSku": "AzureLinux", + "provisioningState": "Succeeded" +} +``` + +> [!NOTE] +> If you experience issues during the OS SKU migration, you can [roll back to your previous OS SKU](#rollback). + +### Verify the OS SKU migration + +Once the migration is complete on your test clusters, you should verify the following to ensure a successful migration: + +* If your migration target is Azure Linux, run the `kubectl get nodes -o wide` command. The output should show `CBL-Mariner/Linux` as your OS image and `.cm2` at the end of your kernel version. 
+* Run the `kubectl get pods -o wide -A` command to verify that all of your pods and daemonsets are running on the new node pool. +* Run the `kubectl get nodes --show-labels` command to verify that all of the node labels in your upgraded node pool are what you expect. + +> [!TIP] +> We recommend monitoring the health of your service for a couple weeks before migrating your production clusters. + +### Run the OS SKU migration on your production clusters + +1. Update your existing templates to set `OSSKU=AzureLinux`. In ARM templates, you use `"OSSKU": "AzureLinux"` in the `agentPoolProfile` section. In Bicep, you use `osSku: "AzureLinux"` in the `agentPoolProfile` section. Lastly, for Terraform, you use `os_sku = "AzureLinux"` in the `default_node_pool` section. Make sure that your `apiVersion` is set to `2023-07-01` or later. +2. Redeploy your ARM, Bicep, or Terraform template for the cluster to apply the new `OSSKU` setting. During this deploy, your cluster behaves as if it's taking a node image upgrade. Your cluster surges capacity, and then reboots your existing nodes one by one into the latest AKS image from your new OS SKU. + +### Rollback + +If you experience issues during the OS SKU migration, you can roll back to your previous OS SKU. To do this, you need to change the OS SKU field in your template and resubmit the deployment, which triggers another upgrade operation and restores the node pool to its previous OS SKU. + + > [!NOTE] + > + > OS SKU migration does not support rolling back to OS SKU Mariner. + +* Roll back to your previous OS SKU using the `az aks nodepool update` command. This command updates the OS SKU for your node pool from Azure Linux back to Ubuntu. + +## Next steps + +In this tutorial, you migrated existing nodes to Azure Linux using one of the following methods: + +* Remove existing node pools and add new Azure Linux node pools. +* In-place OS SKU migration. + +In the next tutorial, you learn how to enable telemetry to monitor your clusters. + +> [!div class="nextstepaction"] +> [Enable telemetry and monitoring](./tutorial-azure-linux-telemetry-monitor.md) \ No newline at end of file diff --git a/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-telemetry-monitor.md b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-telemetry-monitor.md new file mode 100644 index 000000000..926da4616 --- /dev/null +++ b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-telemetry-monitor.md @@ -0,0 +1,120 @@ +--- +title: Azure Linux Container Host for AKS tutorial - Enable telemetry and monitoring for the Azure Linux Container Host +description: In this Azure Linux Container Host for AKS tutorial, you'll learn how to enable telemetry and monitoring for the Azure Linux Container Host. +author: suhuruli +ms.author: suhuruli +ms.service: microsoft-linux +ms.custom: linux-related-content, innovation-engine +ms.topic: tutorial +ms.date: 03/26/2025 +--- + +# Tutorial: Enable telemetry and monitoring for your Azure Linux Container Host cluster + +In this tutorial, part four of five, you'll set up Container Insights to monitor an Azure Linux Container Host cluster. You'll learn how to: + +> [!div class="checklist"] +> * Enable monitoring for an existing cluster. +> * Verify that the agent is deployed successfully. +> * Verify that the solution is enabled. + +In the next and last tutorial, you'll learn how to upgrade your Azure Linux nodes. 
+
## Prerequisites

- In previous tutorials, you created and deployed an Azure Linux Container Host cluster. To complete this tutorial, you need an existing cluster. If you haven't done this step and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md).
- If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider).
- You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).

## Enable monitoring

### Connect to your cluster

Before enabling monitoring, it's important to ensure you're connected to the correct cluster. The following command retrieves the credentials for your Azure Linux Container Host cluster and configures kubectl to use them:

```azurecli
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
```

### Use a default Log Analytics workspace

The following step enables monitoring for your Azure Linux Container Host cluster using Azure CLI. In this example, you aren't required to precreate or specify an existing workspace. This command simplifies the process for you by creating a default workspace in the default resource group of the AKS cluster subscription. If one doesn't already exist in the region, the default workspace created will resemble the format *DefaultWorkspace-\<GUID\>-\<Region\>*.

```azurecli
# Check if monitoring addon is already enabled
MONITORING_ENABLED=$(az aks show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query "addonProfiles.omsagent.enabled" -o tsv)

if [ "$MONITORING_ENABLED" != "true" ]; then
  az aks enable-addons -a monitoring -n $CLUSTER_NAME -g $RESOURCE_GROUP
fi
```

### Specify a Log Analytics workspace

Alternatively, you can specify an existing Log Analytics workspace to enable monitoring of your Azure Linux Container Host cluster. The resource ID of the workspace has the form `/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>`. To enable monitoring with a specific workspace, pass its resource ID to the `--workspace-resource-id` parameter:

```azurecli
az aks enable-addons -a monitoring -n $CLUSTER_NAME -g $RESOURCE_GROUP --workspace-resource-id <workspace-resource-id>
```

## Verify agent and solution deployment

Run the following command to verify that the agent is deployed successfully.
+
```bash
kubectl get ds ama-logs --namespace=kube-system
```

The output should resemble the following example, which indicates that it was deployed properly:

```text
User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ama-logs   3         3         3       3            3           <none>          3m22s
```

To verify deployment of the solution, run the following command:

```bash
kubectl get deployment ama-logs-rs -n=kube-system
```

The output should resemble the following example, which indicates that it was deployed properly:

```text
User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ama-logs-rs   1         1         1            1           3h
```

## Verify solution configuration

Use the `az aks show` command to find out whether the solution is enabled or not, what the Log Analytics workspace resource ID is, and summary information about the cluster.

```azurecli
az aks show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query "addonProfiles.omsagent"
```

After a few minutes, the command completes and returns JSON-formatted information about the solution. The results of the command should show the monitoring add-on profile and resemble the following example output:

```JSON
{
  "config": {
    "logAnalyticsWorkspaceResourceID": "/subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.OperationalInsights/workspaces/xxxxx"
  },
  "enabled": true
}
```

## Next steps

In this tutorial, you enabled telemetry and monitoring for your Azure Linux Container Host cluster. You learned how to:

> [!div class="checklist"]
> * Enable monitoring for an existing cluster.
> * Verify that the agent is deployed successfully.
> * Verify that the solution is enabled.

In the next tutorial, you'll learn how to upgrade your Azure Linux nodes.

> [!div class="nextstepaction"]
> [Upgrade Azure Linux nodes](./tutorial-azure-linux-upgrade.md)
\ No newline at end of file
diff --git a/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-upgrade.md b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-upgrade.md
new file mode 100644
index 000000000..a0373ff2c
--- /dev/null
+++ b/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-upgrade.md
@@ -0,0 +1,108 @@
---
title: Azure Linux Container Host for AKS tutorial - Upgrade Azure Linux Container Host nodes
description: In this Azure Linux Container Host for AKS tutorial, you learn how to upgrade Azure Linux Container Host nodes.
author: suhuruli
ms.author: suhuruli
ms.service: microsoft-linux
ms.custom: linux-related-content, innovation-engine
ms.topic: tutorial
ms.date: 08/18/2024
---

# Tutorial: Upgrade Azure Linux Container Host nodes

The Azure Linux Container Host ships updates through two mechanisms: updated Azure Linux node images and automatic package updates.

As part of the application and cluster lifecycle, we recommend keeping your clusters up to date and secured by enabling upgrades for your cluster. You can enable automatic node-image upgrades to ensure your clusters use the latest Azure Linux Container Host image when it scales up. You can also manually upgrade the node-image on a cluster.

In this tutorial, part five of five, you learn how to:

> [!div class="checklist"]
>
> * Manually upgrade the node-image on a cluster.
> * Automatically upgrade an Azure Linux Container Host cluster.
> * Deploy Kured in an Azure Linux Container Host cluster.
+
> [!NOTE]
> Any upgrade operation, whether performed manually or automatically, upgrades the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker](/azure/aks/release-tracker).

## Prerequisites

* In previous tutorials, you created and deployed an Azure Linux Container Host cluster. To complete this tutorial, you need an existing cluster. If you haven't done this step and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md).
* You need the latest version of Azure CLI. Find the version using the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).

## Manually upgrade your cluster

To manually upgrade the node-image on a cluster, you can run the `az aks nodepool upgrade` command with the `--node-image-only` flag.

## Automatically upgrade your cluster

Auto-upgrade provides a set-once-and-forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest Azure Linux Container Host features or patches from AKS and upstream Kubernetes.

Automatically completed upgrades are functionally the same as manual upgrades. The selected channel determines the timing of upgrades. When making changes to auto-upgrade, allow 24 hours for the changes to take effect.

To set the auto-upgrade channel on an existing cluster, update the `--auto-upgrade-channel` parameter:

```bash
az aks update --resource-group $AZ_LINUX_RG --name $AZ_LINUX_CLUSTER --auto-upgrade-channel stable
```

```json
{
  "id": "/subscriptions/xxxxx/resourceGroups/testAzureLinuxResourceGroup",
  "location": "WestUS2",
  "name": "testAzureLinuxCluster",
  "properties": {
    "autoUpgradeChannel": "stable",
    "provisioningState": "Succeeded"
  }
}
```

For more information on upgrade channels, see [Using cluster auto-upgrade](/azure/aks/auto-upgrade-cluster).

## Enable automatic package upgrades

Similar to setting your clusters to auto-upgrade, you can use the same set-once-and-forget mechanism for package upgrades by enabling the node-os upgrade channel. If automatic package upgrades are enabled, the dnf-automatic systemd service runs daily and installs any updated packages that have been published.

To set the node-os upgrade channel on an existing cluster, update the `--node-os-upgrade-channel` parameter:

```bash
az aks update --resource-group $AZ_LINUX_RG --name $AZ_LINUX_CLUSTER --node-os-upgrade-channel Unmanaged
```

```json
{
  "id": "/subscriptions/xxxxx/resourceGroups/testAzureLinuxResourceGroup",
  "location": "WestUS2",
  "name": "testAzureLinuxCluster",
  "properties": {
    "nodeOsUpgradeChannel": "Unmanaged",
    "provisioningState": "Succeeded"
  }
}
```

## Enable an automatic reboot daemon

To protect your clusters, security updates are automatically applied to Azure Linux nodes. These updates include OS security fixes, kernel updates, and package upgrades. Some of these updates require a node reboot to complete the process. AKS doesn't automatically reboot these nodes to complete the update process.

We recommend enabling an automatic reboot daemon, such as [Kured](https://kured.dev/docs/), so that your cluster can reboot nodes that have taken kernel updates, as sketched below.
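The following is a minimal sketch of deploying Kured with Helm. The chart repository URL, release name, and default values are assumptions based on the community `kubereboot` chart; check the Kured documentation for the current details:

```bash
# Add the community Kured chart repository (URL assumed; see kured.dev for
# the current location) and refresh the local chart index.
helm repo add kubereboot https://kubereboot.github.io/charts
helm repo update

# Install the Kured DaemonSet into kube-system. With default settings, it
# watches each node for /var/run/reboot-required and then cordons, drains,
# and reboots nodes one at a time.
helm install kured kubereboot/kured --namespace kube-system
```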
To deploy the Kured DaemonSet in an Azure Linux Container Host cluster, see [Deploy Kured in an AKS cluster](/azure/aks/node-updates-kured#deploy-kured-in-an-aks-cluster).

## Clean up resources

As this tutorial is the last part of the series, you may want to delete your Azure Linux Container Host cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don't use the cluster. When you no longer need the cluster, you can remove it and its associated resources by deleting its resource group with the `az group delete` command.

## Next steps

In this tutorial, you upgraded your Azure Linux Container Host cluster. You learned how to:

> [!div class="checklist"]
>
> * Manually upgrade the node-image on a cluster.
> * Automatically upgrade an Azure Linux Container Host cluster.
> * Deploy Kured in an Azure Linux Container Host cluster.

For more information on the Azure Linux Container Host, see the [Azure Linux Container Host overview](./intro-azure-linux.md).
\ No newline at end of file
diff --git a/scenarios/azure-stack-docs/azure-stack/user/azure-stack-quick-create-vm-linux-cli.md b/scenarios/azure-stack-docs/azure-stack/user/azure-stack-quick-create-vm-linux-cli.md
new file mode 100644
index 000000000..e60b44bd3
--- /dev/null
+++ b/scenarios/azure-stack-docs/azure-stack/user/azure-stack-quick-create-vm-linux-cli.md
@@ -0,0 +1,188 @@
---
title: Create Linux VM with Azure CLI in Azure Stack Hub
description: Create a Linux virtual machine by using the Azure CLI in Azure Stack Hub.
author: sethmanheim
ms.topic: quickstart
ms.date: 03/06/2025
ms.author: sethm
ms.custom: mode-api, devx-track-azurecli, linux-related-content
---

# Quickstart: Create a Linux server VM by using the Azure CLI in Azure Stack Hub

You can create an Ubuntu Server 22.04 LTS virtual machine (VM) by using the Azure CLI. In this article, you create and use a virtual machine. This article also shows you how to:

* Connect to the virtual machine with a remote client.
* Install an NGINX web server and view the default home page.
* Clean up unused resources.

## Prerequisites

Before you begin, make sure you have the following prerequisites:

* A Linux image in the Azure Stack Hub Marketplace

  The Azure Stack Hub Marketplace doesn't contain a Linux image by default. Have the Azure Stack Hub operator provide the Ubuntu Server 22.04 LTS image you need. The operator can use the instructions in [Download Marketplace items from Azure to Azure Stack Hub](../operator/azure-stack-download-azure-marketplace-item.md).

* Azure Stack Hub requires a specific version of the Azure CLI to create and manage its resources. If you don't have the Azure CLI configured for Azure Stack Hub, sign in to a Windows-based external client if you're connected through VPN, and follow the instructions for [installing and configuring the Azure CLI](azure-stack-version-profiles-azurecli2.md).

* A public Secure Shell (SSH) key with the name id_rsa.pub saved in the **.ssh** directory of your Windows user profile. For more information about creating SSH keys, see [Use an SSH key pair with Azure Stack Hub](azure-stack-dev-start-howto-ssh-public-key.md).

## Create a resource group

A resource group is a logical container where you can deploy and manage Azure Stack Hub resources. From your Azure Stack Hub integrated system, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group.

> [!NOTE]
> We assigned values for all variables in the following code examples. However, you can assign your own values.
+
The following example creates a resource group named myResourceGroup (with a random suffix) in the location set by the `LOCATION` variable:

```azurecli
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP="myResourceGroup$RANDOM_SUFFIX"
export LOCATION="eastus2"
az group create --name $RESOURCE_GROUP --location $LOCATION
```

Results:

```JSON
{
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx",
  "location": "local",
  "managedBy": null,
  "name": "myResourceGroupxxx",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
```

## Create a virtual machine

Create a virtual machine by using the [az vm create](/cli/azure/vm#az-vm-create) command. The following example creates a VM named myVM with azureuser as the admin username. Change these values to something that's appropriate for your environment.

```azurecli
export VM_NAME="myVM$RANDOM_SUFFIX"
az vm create \
  --resource-group $RESOURCE_GROUP \
  --name $VM_NAME \
  --image "Ubuntu2204" \
  --admin-username "azureuser" \
  --assign-identity \
  --generate-ssh-keys \
  --public-ip-sku Standard \
  --location $LOCATION
```

Results:

```JSON
{
  "fqdns": "",
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxx/providers/Microsoft.Compute/virtualMachines/myVMxxx",
  "location": "local",
  "name": "myVMxxx",
  "osProfile": {
    "computerName": "myVMxxx",
    "adminUsername": "azureuser"
  },
  "publicIpAddress": "x.x.x.x",
  "powerState": "VM running",
  "provisioningState": "Succeeded"
}
```

The public IP address is returned in the `publicIpAddress` field. Note the address for later use with the virtual machine.

## Open port 80 for web traffic

Because this virtual machine will run the NGINX web server, you must open port 80 to internet traffic. To open the port, use the [az vm open-port](/cli/azure/vm) command:

```azurecli
az vm open-port --port 80 --resource-group $RESOURCE_GROUP --name $VM_NAME
```

Results:

```JSON
{
  "endPort": 80,
  "name": "openPort80",
  "port": 80,
  "protocol": "Tcp",
  "provisioningState": "Succeeded",
  "resourceGroup": "myResourceGroupxxx",
  "startPort": 80
}
```

## Use SSH to connect to the virtual machine

From a client computer with SSH installed, connect to the virtual machine. If you work on a Windows client, use [PuTTY](https://www.putty.org/) to create the connection. To connect to the virtual machine, you can use the `ssh` command, for example `ssh azureuser@<public IP address>`.

## Install the NGINX web server

To install the latest NGINX package on the virtual machine, run the following script:

```bash
output=$(az vm run-command invoke --resource-group $RESOURCE_GROUP --name $VM_NAME --command-id RunShellScript --scripts 'apt-get -y install nginx')
value=$(echo "$output" | jq -r '.value[0].message')
extracted=$(echo "$value" | awk '/\[stdout\]/,/\[stderr\]/' | sed '/\[stdout\]/d' | sed '/\[stderr\]/d')
echo "$extracted"
```

## View the NGINX welcome page

With the NGINX web server installed, and port 80 open on your virtual machine, you can access the web server by using the virtual machine's public IP address. To do so, open a browser, and go to `http://<public IP address>`.
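As a quick spot-check from your local shell (assuming it has the Azure CLI and `curl` available, and that your network allows outbound HTTP), you can request the page directly:

```bash
# Look up the VM's public IP address, then request the default page headers.
PUBLIC_IP=$(az vm show -d -g $RESOURCE_GROUP -n $VM_NAME --query publicIps -o tsv)
curl -I "http://$PUBLIC_IP"
```

A `200 OK` response with a `Server: nginx` header indicates the web server is reachable from outside the VM.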
Alternatively, you can run curl on the VM itself with `az vm run-command` to view the NGINX welcome page:

```bash
export PUBLIC_IP=$(az vm show -d -g $RESOURCE_GROUP -n $VM_NAME --query publicIps -o tsv)

output=$(az vm run-command invoke --resource-group $RESOURCE_GROUP --name $VM_NAME --command-id RunShellScript --scripts 'curl -v http://localhost')
value=$(echo "$output" | jq -r '.value[0].message')
extracted=$(echo "$value" | awk '/\[stdout\]/,/\[stderr\]/' | sed '/\[stdout\]/d' | sed '/\[stderr\]/d')
echo "$extracted"
```

Results:

```HTML
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
+ + +``` + +![The NGINX web server Welcome page](./media/azure-stack-quick-create-vm-linux-cli/nginx.png) + +## Next steps + +In this quickstart, you deployed a basic Linux server virtual machine with a web server. To learn more about Azure Stack Hub virtual machines, see [Considerations for virtual machines in Azure Stack Hub](azure-stack-vm-considerations.md). \ No newline at end of file diff --git a/scenarios/metadata.json b/scenarios/metadata.json index 3ed32c58b..cd2ab3685 100644 --- a/scenarios/metadata.json +++ b/scenarios/metadata.json @@ -5,7 +5,7 @@ "title": "Deploy an Azure Kubernetes Service (AKS) cluster", "description": "Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using Azure CLI", "stackDetails": "", - "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/learn/quick-kubernetes-deploy-cli.md", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-docs/articles/aks/learn/quick-kubernetes-deploy-cli.md", "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli", "nextSteps": [ { @@ -260,8 +260,8 @@ "title": "Deploy Inspektor Gadget in an Azure Kubernetes Service cluster", "description": "This tutorial shows how to deploy Inspektor Gadget in an AKS cluster", "stackDetails": "", - "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/DeployIGonAKS/README.md", - "documentationUrl": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/DeployIGonAKS/deploy-ig-on-aks.md", + "documentationUrl": "https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/logs/capture-system-insights-from-aks", "nextSteps": [ { "title": "Real-world scenarios where Inspektor Gadget can help you", @@ -392,7 +392,13 @@ "description": "Learn how to obtainer Performance metrics from a Linux system.", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/ObtainPerformanceMetricsLinuxSustem/obtain-performance-metrics-linux-system.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/linux/collect-performance-metrics-from-a-linux-system", + "nextSteps": [ + { + "title": "Create a Support Request for your VM", + "url": "https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview" + } + ], "configurations": { "permissions": [], "configurableParams": [ @@ -418,7 +424,7 @@ "description": "Create the infrastructure needed to deploy a highly available PostgreSQL database on AKS using the CloudNativePG operator.", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/create-postgresql-ha.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/create-postgresql-ha?tabs=helm", "nextSteps": [ { "title": "Deploy a highly available PostgreSQL database on AKS with Azure CLI", @@ -436,18 +442,14 @@ "description": "In this article, you deploy a highly available PostgreSQL database on AKS using the CloudNativePG operator.", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/deploy-postgresql-ha.md", - "documentationUrl": "", - "configurations": { - 
} - }, - { - "status": "inactive", - "key": "azure-aks-docs/articles/aks/postgresql-ha-overview.md", - "title": "Overview of deploying a highly available PostgreSQL database on AKS with Azure CLI", - "description": "Learn how to deploy a highly available PostgreSQL database on AKS using the CloudNativePG operator.", - "stackDetails": "", - "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/postgresql-ha-overview.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/deploy-postgresql-ha", + "nextSteps": [ + { + "title": "Deploy a highly available PostgreSQL database on AKS with Azure CLI", + "url": "https://learn.microsoft.com/en-us/azure/aks/deploy-postgresql-ha?tabs=helm" + } + + ], "configurations": { } }, @@ -458,7 +460,7 @@ "description": "This tutorial shows how to create a Container App leveraging Blob Store, SQL, and Computer Vision", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/CreateContainerAppDeploymentFromSource/create-container-app-deployment-from-source.md", - "documentationUrl": "", + "documentationUrl": "https://github.com/Azure/computer-vision-nextjs-webapp", "nextSteps": [ { "title": "Azure Container Apps documentation", @@ -480,10 +482,6 @@ "configurations": { } }, - { - "status": "inactive", - "key": "BlobVisionOnAKS/blob-vision-aks.md" - }, { "status": "inactive", "key": "DeployHAPGonARO/deploy-ha-pg-on-aro.md", @@ -492,11 +490,18 @@ "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/DeployHAPGonARO/deploy-ha-pg-aro.md", "documentationUrl": "", + "nextSteps": [ + { + "title": "Deploy a highly available PostgreSQL database on AKS with Azure CLI", + "url": "https://learn.microsoft.com/en-us/azure/aks/deploy-postgresql-ha?tabs=helm" + } + + ], "configurations": { } }, { - "status": "active", + "status": "inactive", "key": "AIChatApp/ai-chat-app.md", "title": "Create an Azure OpenAI, LangChain, ChromaDB, and Chainlit Chat App in Container Apps", "description": "", @@ -541,7 +546,7 @@ "description": "In this article, you create the infrastructure needed to deploy Apache Airflow on Azure Kubernetes Service (AKS) using Helm.", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/airflow-create-infrastructure.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/airflow-create-infrastructure", "nextSteps": [ { "title": "Deploy Apache Airflow on AKS", @@ -559,7 +564,7 @@ "description": "In this article, you create the infrastructure needed to deploy Apache Airflow on Azure Kubernetes Service (AKS) using Helm.", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/airflow-deploy.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/airflow-deploy", "nextSteps": [ { "title": "Deploy a MongoDB cluster on Azure Kubernetes Service (AKS)", @@ -639,7 +644,7 @@ "description": "Learn how to use the Azure CLI to create an Azure OpenAI resource and manage deployments with the Azure OpenAI Service.", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/CreateAOAIDeployment/create-aoai-deployment.md", - "documentationUrl": "", + 
"documentationUrl": "https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=cli", "nextSteps": [], "configurations": { "permissions": [] @@ -652,7 +657,7 @@ "description": "Learn how to enable the AI toolchain operator add-on on Azure Kubernetes Service (AKS) to simplify OSS AI model management and deployment", "stackDetails": "", "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/AksKaito/README.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/ai-toolchain-operator", "nextSteps": [ { "title": "Check out the KAITO GitHub repository", @@ -738,7 +743,7 @@ "description": "Learn how to create an Azure Kubernetes Service (AKS) cluster with enclave confidential containers a Hello World app by using the Azure CLI.", "stackDetails": [], "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-docs/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-enclave-nodes-aks-get-started", "nextSteps": [ { "title": "Samples to run Python, Node, and other applications through confidential containers", @@ -760,7 +765,7 @@ "description": "Learn how to quickly create an Azure Linux Container Host for AKS cluster using the Azure CLI.", "stackDetails": [], "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-management-docs/articles/azure-linux/quickstart-azure-cli.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/azure-linux/quickstart-azure-cli", "nextSteps": [ { "title": "Azure Linux Container Host tutorial", @@ -779,7 +784,7 @@ "description": "Learn how to use the Azure CLI to create a custom VM image that you can use to deploy a Virtual Machine Scale Set", "stackDetails": [], "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-docs/articles/virtual-machine-scale-sets/tutorial-use-custom-image-cli.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-custom-image-cli", "nextSteps": [ { "title": "Deploy applications to your scale sets", @@ -840,7 +845,7 @@ "description": "In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity.", "stackDetails": [], "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/workload-identity-migrate-from-pod-identity.md", - "documentationUrl": "", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/workload-identity-migrate-from-pod-identity", "nextSteps": [ { "title": "Use Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)", @@ -933,6 +938,258 @@ }, { "status": "active", + "key": "FixFstabIssuesRepairVM/fix-fstab-issues-repair-vm.md", + "title": "Troubleshoot Linux VM boot issues due to fstab errors", + "description": "Explains why Linux VM cannot start and how to solve the problem.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/FixFstabIssuesRepairVM/fix-fstab-issues-repair-vm.md", + "documentationUrl": 
"https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/linux/linux-virtual-machine-cannot-start-fstab-errors#use-azure-linux-auto-repair-alar", + "nextSteps": [ + { + "title": "Create a Support Request for your VM", + "url": "https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "MY_RESOURCE_GROUP_NAME", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "MY_VM_NAME", + "title": "VM Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "KernelBootIssuesRepairVM/kernel-related-boot-issues-repairvm.md", + "title": "Recover Azure Linux VM from kernel panic due to missing initramfs", + "description": "Provides solutions to an issue in which a Linux virtual machine (VM) can't boot after applying kernel changes", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/KernelBootIssuesRepairVM/kernel-related-boot-issues-repairvm.md", + "documentationUrl": "https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/linux/kernel-related-boot-issues#missing-initramfs-alar", + "nextSteps": [ + { + "title": "Create a Support Request for your VM", + "url": "https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "MY_RESOURCE_GROUP_NAME", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "MY_VM_NAME", + "title": "VM Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "TroubleshootVMGrubError/troubleshoot-vm-grub-error-repairvm.md", + "title": "Linux VM boots to GRUB rescue", + "description": "Provides troubleshooting guidance for GRUB rescue issues with Linux virtual machines.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/TroubleshootVMGrubError/troubleshoot-vm-grub-error-repairvm.md", + "documentationUrl": "https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/linux/troubleshoot-vm-boot-error", + "nextSteps": [ + { + "title": "Create a Support Request for your VM", + "url": "https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "MY_RESOURCE_GROUP_NAME", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "MY_VM_NAME", + "title": "VM Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-docs/articles/batch/quick-create-cli.md", + "title": "Quickstart: Use the Azure CLI to create a Batch account and run a job", + "description": "Follow this quickstart to use the Azure CLI to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-docs/articles/batch/quick-create-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/batch/quick-create-cli", + "nextSteps": [ + { + "title": "Tutorial: Run a parallel workload with Azure Batch", + 
"url": "https://learn.microsoft.com/en-us/azure/batch/tutorial-parallel-python" + } + ], + "configurations": { + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/linux/tutorial-manage-vm.md", + "title": "Tutorial - Create and manage Linux VMs with the Azure CLI", + "description": "In this tutorial, you learn how to use the Azure CLI to create and manage Linux VMs in Azure", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-manage-vm.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-manage-vm", + "nextSteps": [ + { + "title": "Create and Manage VM Disks", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-manage-disks" + } + ], + "configurations": { + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md", + "title": "Tutorial - Autoscale a scale set with the Azure CLI", + "description": "Learn how to use the Azure CLI to automatically scale a Virtual Machine Scale Set as CPU demands increases and decreases", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-autoscale-cli?tabs=Ubuntu", + "nextSteps": [ + ], + "configurations": { + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md", + "title": "Modify an Azure Virtual Machine Scale Set using Azure CLI", + "description": "Learn how to modify and update an Azure Virtual Machine Scale Set using Azure CLI", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli", + "nextSteps": [ + { + "title": "Use data disks with scale sets", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-disks-powershell" + } + ], + "configurations": { + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/disks-enable-performance.md", + "title": "Preview - Increase performance of Premium SSDs and Standard SSD/HDDs", + "description": "Increase the performance of Azure Premium SSDs and Standard SSD/HDDs using performance plus.", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/disks-enable-performance.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/disks-enable-performance?tabs=azure-cli", + "nextSteps": [ + { + "title": "Create an incremental snapshot for managed disks", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/disks-incremental-snapshots" + }, + { + "title": "Expand virtual hard disks on a Linux VM", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks" + } + ], + "configurations": { + } + }, + { + "status": "active", + "key": 
"azure-compute-docs/articles/container-instances/container-instances-vnet.md", + "title": "Deploy container group to Azure virtual network", + "description": "Learn how to deploy a container group to a new or existing Azure virtual network via the Azure CLI.", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/container-instances/container-instances-vnet.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet", + "nextSteps": [ + { + "title": "Create an Azure container group with virtual network", + "url": "https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet" + }, + { + "title": " Deploy to Azure Container Instances from Azure Container Registry using a managed identity", + "url": "https://learn.microsoft.com/en-us/azure/container-instances/using-azure-container-registry-mi" + } + ], + "configurations": { + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/linux/multiple-nics.md", + "title": "Create a Linux VM in Azure with multiple NICs", + "description": "Learn how to create a Linux VM with multiple NICs attached to it using the Azure CLI or Resource Manager templates.", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/multiple-nics.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/multiple-nics", + "nextSteps": [ + { + "title": "Review Linux VM Sizes", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/sizes" + }, + { + "title": "Manage virtual machine access using just in time", + "url": "https://learn.microsoft.com/en-us/azure/security-center/security-center-just-in-time" + } + ], + "configurations": { + } + }, + { + "status": "inactive", + "key": "azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/quick-create-terraform.md", + "title": "Quickstart: Use Terraform to create a Linux VM", + "description": "In this quickstart, you learn how to use Terraform to create a Linux virtual machine.", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/quick-create-terraform/quick-create-terraform.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-terraform?tabs=azure-cli", + "nextSteps": [ + { + "title": "Troubleshoot common problems when using Terraform on Azure", + "url": "https://learn.microsoft.com/en-us/azure/developer/terraform/troubleshoot" + }, + { + "title": "Azure Linux Virtual Machine Tutorials", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-manage-vm" + } + ], + "configurations": { + } + }, + { + "status": "inactive", "key": "AksOpenAiTerraform/README.md", "title": "How to deploy and run an Azure OpenAI ChatGPT application on AKS via Terraform", "description": "This article shows how to deploy an AKS cluster and Azure OpenAI Service via Terraform and how to deploy a ChatGPT-like application in Python.", @@ -943,5 +1200,655 @@ "configurations": { "permissions": [] } + }, + { + "status": "active", + "key": "upstream/FlatcarOnAzure/flatcar-on-azure.md", + "title": "Running Flatcar Container Linux on Microsoft Azure", + "description": 
"Deploy Flatcar Container Linux in Microsoft Azure by creating resource groups and using official marketplace images.", + "stackDetails": [ + ], + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/upstream/FlatcarOnAzure/flatcar-on-azure.md", + "documentationUrl": "https://www.flatcar.org/docs/latest/installing/cloud/azure/", + "configurations": { + } + }, + { + "status": "active", + "key": "azure-management-docs/articles/azure-linux/tutorial-azure-linux-migration.md", + "title": "Azure Linux Container Host for AKS tutorial - Migrating to Azure Linux", + "description": "In this Azure Linux Container Host for AKS tutorial, you learn how to migrate your nodes to Azure Linux nodes.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-migration.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/azure-linux/tutorial-azure-linux-migration?tabs=azure-cli", + "nextSteps": [ + { + "title": "Enable telemetry and monitoring", + "url": "https://github.com/MicrosoftDocs/azure-management-docs/blob/main/articles/azure-linux/tutorial-azure-linux-telemetry-monitor.md" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "CLUSTER_NAME", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-management-docs/articles/azure-linux/tutorial-azure-linux-create-cluster.md", + "title": "Azure Linux Container Host for AKS tutorial - Create a cluster", + "description": "In this Azure Linux Container Host for AKS tutorial, you will learn how to create an AKS cluster with Azure Linux.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-create-cluster.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/azure-linux/tutorial-azure-linux-create-cluster", + "nextSteps": [ + { + "title": "Add an Azure Linux node pool", + "url": "https://learn.microsoft.com/en-us/azure/azure-linux/tutorial-azure-linux-add-nodepool" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP_NAME", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "CLUSTER_NAME", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-management-docs/articles/azure-linux/tutorial-azure-linux-add-nodepool.md", + "title": "Azure Linux Container Host for AKS tutorial - Add an Azure Linux node pool to your existing AKS cluster", + "description": "In this Azure Linux Container Host for AKS tutorial, you learn how to add an Azure Linux node pool to your existing cluster.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-add-nodepool.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/azure-linux/tutorial-azure-linux-add-nodepool", + "nextSteps": [ + { + "title": "Migrating to Azure Linux", + "url": 
"https://learn.microsoft.com/en-us/azure/azure-linux/tutorial-azure-linux-migration" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "CLUSTER_NAME", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-management-docs/articles/azure-linux/tutorial-azure-linux-upgrade.md", + "title": "Azure Linux Container Host for AKS tutorial - Upgrade Azure Linux Container Host nodes", + "description": "In this Azure Linux Container Host for AKS tutorial, you learn how to upgrade Azure Linux Container Host nodes.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-upgrade.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/azure-linux/tutorial-azure-linux-upgrade", + "nextSteps": [ + { + "title": "Azure Linux Container Host Overview", + "url": "https://learn.microsoft.com/en-us/azure/azure-linux/intro-azure-linux" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "AZ_LINUX_RG", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "AZ_LINUX_CLUSTER", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-management-docs/articles/azure-linux/tutorial-azure-linux-telemetry-monitor.md", + "title": "Azure Linux Container Host for AKS tutorial - Enable telemetry and monitoring for the Azure Linux Container Host", + "description": "In this Azure Linux Container Host for AKS tutorial, you'll learn how to enable telemetry and monitoring for the Azure Linux Container Host.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-management-docs/articles/azure-linux/tutorial-azure-linux-telemetry-monitor.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/azure-linux/tutorial-azure-linux-telemetry-monitor", + "nextSteps": [ + { + "title": "Upgrade Azure Linux Nodes", + "url": "https://github.com/MicrosoftDocs/azure-management-docs/blob/main/articles/azure-linux/tutorial-azure-linux-upgrade.md" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "CLUSTER_NAME", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-stack-docs/azure-stack/user/azure-stack-quick-create-vm-linux-cli.md", + "title": "Create Linux VM with Azure CLI in Azure Stack Hub", + "description": "Create a Linux virtual machine by using the Azure CLI in Azure Stack Hub.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-stack-docs/azure-stack/user/azure-stack-quick-create-vm-linux-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure-stack/user/azure-stack-quick-create-vm-linux-cli?view=azs-2501", + "nextSteps": [ + { + "title": "Considerations for virtual machines in Azure Stack Hub", + "url": 
"https://github.com/MicrosoftDocs/azure-stack-docs/blob/main/azure-stack/user/azure-stack-vm-considerations.md" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/azure-cni-powered-by-cilium.md", + "title": "Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)", + "description": "Learn how to create an Azure Kubernetes Service (AKS) cluster with Azure CNI Powered by Cilium.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/azure-cni-powered-by-cilium.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium", + "nextSteps": [ + { + "title": "Upgrade Azure CNI IPAM modes and Dataplane Technology.", + "url": "https://learn.microsoft.com/en-us/azure/aks/upgrade-azure-cni" + }, + { + "title": "Use a static IP address with the Azure Kubernetes Service (AKS) load balancer", + "url": "https://learn.microsoft.com/en-us/azure/aks/static-ip" + }, + { + "title": "Use an internal load balancer with Azure Container Service (AKS)", + "url": "https://learn.microsoft.com/en-us/azure/aks/internal-lb" + }, + { + "title": "Create a basic ingress controller with external network connectivity", + "url": "https://learn.microsoft.com/en-us/azure/aks/ingress-basic" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md", + "title": "Tutorial - Customize a Linux VM with cloud-init in Azure", + "description": "In this tutorial, you learn how to use cloud-init and Key Vault to customize Linux VMs the first time they boot in Azure", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-automate-vm-deployment", + "nextSteps": [ + { + "title": "Create custom VM images", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-custom-images" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/linux/multiple-nics.md", + "title": "Create a Linux VM in Azure with multiple NICs", + "description": "Learn how to create a Linux VM with multiple NICs attached to it using the Azure CLI or Resource Manager templates.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/multiple-nics.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/multiple-nics", + "nextSteps": [ + { + "title": "Review Linux VM Sizes", + "url": "https://github.com/MicrosoftDocs/azure-compute-docs/blob/main/articles/virtual-machines/sizes.md" + }, + { + "title": " Manage virtual machine access using just in time", + "url": "https://github.com/MicrosoftDocs/azure-compute-docs/blob/main/azure/security-center/security-center-just-in-time" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": 
"azure-compute-docs/articles/virtual-machines/disks-enable-performance.md", + "title": "Preview - Increase performance of Premium SSDs and Standard SSD/HDDs", + "description": "Increase the performance of Azure Premium SSDs and Standard SSD/HDDs using performance plus.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/disks-enable-performance.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/disks-enable-performance?tabs=azure-cli", + "nextSteps": [ + { + "title": "Create an incremental snapshot for managed disks", + "url": "https://github.com/MicrosoftDocs/azure-compute-docs/blob/main/articles/virtual-machines/disks-incremental-snapshots.md" + }, + { + "title": "Expand virtual hard disks on a Linux VM", + "url": "https://github.com/MicrosoftDocs/azure-compute-docs/blob/main/articles/virtual-machines/linux/expand-disks.md" + }, + { + "title": "How to expand virtual hard disks attached to a Windows virtual machine", + "url": "https://github.com/MicrosoftDocs/azure-compute-docs/blob/main/articles/virtual-machines/windows/expand-os-disk.md" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md", + "title": "Modify an Azure Virtual Machine Scale Set using Azure CLI", + "description": "Learn how to modify and update an Azure Virtual Machine Scale Set using Azure CLI.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli", + "nextSteps": [ + { + "title": "Use data disks with scale sets", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-disks-powershell" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md", + "title": "Tutorial - Autoscale a scale set with the Azure CLI", + "description": "Learn how to use the Azure CLI to automatically scale a Virtual Machine Scale Set as CPU demands increases and decreases", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-autoscale-cli", + "nextSteps": [ + { + "title": "Learn about scale set instance protection", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/linux/tutorial-manage-vm.md", + "title": "Tutorial - Create and manage Linux VMs with the Azure CLI", + "description": "In this tutorial, you learn how to use the Azure CLI to create and manage Linux VMs in Azure", + "stackDetails": "", + "sourceUrl": 
"https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-manage-vm.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-manage-vm", + "nextSteps": [ + { + "title": "Create and Manage VM Disks", + "url": "https://github.com/MicrosoftDocs/azure-compute-docs/blob/main/articles/virtual-machines/linux/tutorial-manage-disks.md" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/linux/tutorial-lamp-stack.md", + "title": "Tutorial - Deploy LAMP and WordPress on a VM", + "description": "In this tutorial, you learn how to install the LAMP stack, and WordPress, on a Linux virtual machine in Azure.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-lamp-stack.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-lamp-stack", + "nextSteps": [ + { + "title": "Secure web server with TLS", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-secure-web-server" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-docs/articles/batch/quick-create-cli.md", + "title": "Quickstart: Use the Azure CLI to create a Batch account and run a job", + "description": "Follow this quickstart to use the Azure CLI to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-docs/articles/batch/quick-create-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/batch/quick-create-cli", + "nextSteps": [ + { + "title": "Tutorial: Run a parallel workload with Azure Batch", + "url": "https://learn.microsoft.com/en-us/azure/batch/tutorial-parallel-python" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/node-image-upgrade.md", + "title": "Upgrade Azure Kubernetes Service (AKS) node images", + "description": "Learn how to upgrade the images on AKS cluster nodes and node pools.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/node-image-upgrade.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/node-image-upgrade", + "nextSteps": [ + { + "title": "For information about the latest node images, see the AKS release notes", + "url": "https://github.com/Azure/AKS/releases" + }, + { + "title": "Learn how to upgrade the Kubernetes version with Upgrade an AKS cluster", + "url": "https://learn.microsoft.com/en-us/azure/aks/upgrade-aks-cluster" + }, + { + "title": "Automatically apply cluster and node pool upgrades with GitHub Actions", + "url": "https://learn.microsoft.com/en-us/azure/aks/node-upgrade-github-actions" + }, + { + "title": "Learn more about multiple node pools with Create multiple node pools", + "url": "https://learn.microsoft.com/en-us/azure/aks/create-node-pools" + }, + { + "title": "Learn about upgrading best practices with AKS patch and upgrade guidance", + "url": 
"https://learn.microsoft.com/en-us/azure/architecture/operator-guides/aks/aks-upgrade-practices" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "AKS_RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "AKS_CLUSTER", + "title": "AKS Cluster Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "AKS_NODEPOOL", + "title": "AKS Node Pool Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md", + "title": "Deploy ElasticSearch on a development virtual machine in Azure", + "description": "Install the Elastic Stack (ELK) onto a development Linux VM in Azure", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-elasticsearch", + "nextSteps": [ + { + "title": "Create a Linux VM with the Azure CLI", + "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-cli" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/learn/quick-windows-container-deploy-cli.md", + "title": "Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using Azure CLI", + "description": "Learn how to quickly deploy a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using Azure CLI.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/learn/quick-windows-container-deploy-cli.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/learn/quick-windows-container-deploy-cli?tabs=add-windows-node-pool", + "nextSteps": [ + { + "title": "AKS solution guidance", + "url": "https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json" + }, + { + "title": "AKS tutorial", + "url": "https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/spot-node-pool.md", + "title": "Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster", + "description": "Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/spot-node-pool.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/spot-node-pool", + "nextSteps": [ + { + "title": "Best practices for advanced scheduler features in AKS", + "url": "https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-advanced-scheduler" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "AKS_CLUSTER", + "title": 
"AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/auto-upgrade-cluster.md", + "title": "Automatically upgrade an Azure Kubernetes Service (AKS) cluster", + "description": "Learn how to automatically upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/auto-upgrade-cluster.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/auto-upgrade-cluster?tabs=azure-cli", + "nextSteps": [ + { + "title": "AKS Patch and Upgrade Guidance", + "url": "https://learn.microsoft.com/en-us/azure/architecture/operator-guides/aks/aks-upgrade-practices" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "AKS_CLUSTER_NAME", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/auto-upgrade-node-os-image.md", + "title": "autoupgrade Node OS Images", + "description": "Learn how to automatically upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/auto-upgrade-node-os-image.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/auto-upgrade-node-os-image?tabs=azure-cli", + "nextSteps": [ + { + "title": "AKS Patch and Upgrade Guidance", + "url": "https://learn.microsoft.com/en-us/azure/architecture/operator-guides/aks/aks-upgrade-practices" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "AKS_CLUSTER", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/cost-analysis.md", + "title": "Azure Kubernetes Service (AKS) cost analysis", + "description": "Learn how to use cost analysis to surface granular cost allocation data for your Azure Kubernetes Service (AKS) cluster.", + "stackDetails": "", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/cost-analysis.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/cost-analysis", + "nextSteps": [ + { + "title": "Understand Azure Kubernetes Service (AKS) usage and costs", + "url": "https://learn.microsoft.com/en-us/azure/aks/understand-aks-costs" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "CLUSTER_NAME", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } + }, + { + "status": "active", + "key": "azure-aks-docs/articles/aks/istio-deploy-addon.md", + "title": "Deploy Istio-based service mesh add-on for Azure Kubernetes Service", + "description": "Deploy Istio-based service mesh add-on for Azure Kubernetes Service", + "stackDetails": 
"", + "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-aks-docs/articles/aks/istio-deploy-addon.md", + "documentationUrl": "https://learn.microsoft.com/en-us/azure/aks/istio-deploy-addon", + "nextSteps": [ + { + "title": "Deploy external or internal ingresses for Istio service mesh add-on", + "url": "https://github.com/MicrosoftDocs/azure-aks-docs/blob/main/articles/aks/istio-deploy-ingress.md" + }, + { + "title": "Scale istiod and ingress gateway HPA", + "url": "https://github.com/MicrosoftDocs/azure-aks-docs/blob/main/articles/aks/istio-scale.md#scaling" + }, + { + "title": "Collect metrics for Istio service mesh add-on workloads in Azure Managed Prometheus", + "url": "https://github.com/MicrosoftDocs/azure-aks-docs/blob/main/articles/aks/istio-metrics-managed-prometheus.md" + } + ], + "configurations": { + "permissions": [], + "configurableParams": [ + { + "inputType": "textInput", + "commandKey": "RESOURCE_GROUP", + "title": "Resource Group Name", + "defaultValue": "" + }, + { + "inputType": "textInput", + "commandKey": "CLUSTER", + "title": "AKS Cluster Name", + "defaultValue": "" + } + ] + } } -] +] \ No newline at end of file diff --git a/scenarios/sql-docs/docs/linux/quickstart-install-connect-docker.md b/scenarios/sql-docs/docs/linux/quickstart-install-connect-docker.md new file mode 100644 index 000000000..66b00cb3d --- /dev/null +++ b/scenarios/sql-docs/docs/linux/quickstart-install-connect-docker.md @@ -0,0 +1,1245 @@ +--- +title: "Docker: Install Containers for SQL Server on Linux" +description: This quickstart shows how to use Docker to run the SQL Server Linux container images. You connect to a database and run a query. +author: amitkh-msft +ms.author: amitkh +ms.reviewer: vanto, randolphwest +ms.date: 11/18/2024 +ms.service: sql +ms.subservice: linux +ms.topic: quickstart +ms.custom: + - intro-quickstart + - kr2b-contr-experiment + - linux-related-content +zone_pivot_groups: cs1-command-shell +monikerRange: ">=sql-server-linux-2017 || >=sql-server-2017" +--- +# Quickstart: Run SQL Server Linux container images with Docker + +[!INCLUDE [SQL Server - Linux](../includes/applies-to-version/sql-linux.md)] + + +::: moniker range="=sql-server-linux-2017 || =sql-server-2017" + +In this quickstart, you use Docker to pull and run the [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] Linux container image, [mssql-server-linux](https://mcr.microsoft.com/product/mssql/server/about). Then you can connect with **sqlcmd** to create your first database and run queries. + +For more information on supported platforms, see [Release notes for SQL Server 2017 on Linux](sql-server-linux-release-notes-2017.md). + +> [!WARNING] +> When you stop and remove a container, your [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] data in the container is permanently deleted. For more information on preserving your data, [create and copy a backup file out of the container](tutorial-restore-backup-in-sql-server-container.md) or use a [container data persistence technique](sql-server-linux-docker-container-configure.md#persist). + +This quickstart creates [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] containers. 
If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql19-md](../includes/sssql19-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver15&preserve-view=true#pullandrun2019) or [[!INCLUDE [sssql22-md](../includes/sssql22-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver16&preserve-view=true#pullandrun2022) versions of this article. + +::: moniker-end + + +::: moniker range="=sql-server-linux-ver15 || =sql-server-ver15" + +In this quickstart, you use Docker to pull and run the [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] Linux container image, [mssql-server-linux](https://mcr.microsoft.com/product/mssql/server/about). Then you can connect with **sqlcmd** to create your first database and run queries. + +For more information on supported platforms, see [Release notes for SQL Server 2019 on Linux](sql-server-linux-release-notes-2019.md). + +> [!WARNING] +> When you stop and remove a container, your [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] data in the container is permanently deleted. For more information on preserving your data, [create and copy a backup file out of the container](tutorial-restore-backup-in-sql-server-container.md) or use a [container data persistence technique](sql-server-linux-docker-container-configure.md#persist). + +This quickstart creates [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] containers. If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql17-md](../includes/sssql17-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-2017&preserve-view=true#pullandrun2017) or [[!INCLUDE [sssql22-md](../includes/sssql22-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver16&preserve-view=true#pullandrun2022) versions of this article. + +::: moniker-end + + +::: moniker range=">= sql-server-linux-ver16 || >= sql-server-ver16" + +In this quickstart, you use Docker to pull and run the [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] Linux container image, [mssql-server-linux](https://mcr.microsoft.com/product/mssql/server/about). Then you can connect with **sqlcmd** to create your first database and run queries. + +For more information on supported platforms, see [Release notes for SQL Server 2022 on Linux](sql-server-linux-release-notes-2022.md). + +> [!WARNING] +> When you stop and remove a container, your [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] data in the container is permanently deleted. For more information on preserving your data, [create and copy a backup file out of the container](tutorial-restore-backup-in-sql-server-container.md) or use a [container data persistence technique](sql-server-linux-docker-container-configure.md#persist). + +This quickstart creates [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] containers. If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql17-md](../includes/sssql17-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-2017&preserve-view=true#pullandrun2017) or [[!INCLUDE [sssql19-md](../includes/sssql19-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver15&preserve-view=true#pullandrun2019) versions of this article.
+ +::: moniker-end + +This image consists of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] running on Linux based on Ubuntu. It can be used with the Docker Engine 1.8+ on Linux. + +Starting with [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] CU 14 and [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] CU 28, the container images include the [new mssql-tools18](sql-server-linux-setup-tools.md#install-tools-on-linux) package. The previous directory `/opt/mssql-tools/bin` is being phased out. The new directory for Microsoft ODBC 18 tools is `/opt/mssql-tools18/bin`, aligning with the latest tools offering. For more information about changes and security enhancements, see [ODBC Driver 18.0 for SQL Server Released](https://techcommunity.microsoft.com/blog/sqlserver/odbc-driver-18-0-for-sql-server-released/3169228). + +The examples in this article use the `docker` command. However, most of these commands also work with Podman. Podman provides a command-line interface similar to the Docker Engine. You can [find out more about Podman](https://docs.podman.io/en/latest). + +> [!IMPORTANT] +> **sqlcmd** doesn't currently support the `MSSQL_PID` parameter when creating containers. If you use the **sqlcmd** instructions in this quickstart, you create a container with the Developer edition of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)]. Use the command line interface (CLI) instructions to create a container using the license of your choice. For more information, see [Deploy and connect to SQL Server Linux containers](sql-server-linux-docker-container-deployment.md). + + + +## Prerequisites + +- Docker Engine 1.8+ on any supported Linux distribution. For more information, see [Install Docker](https://docs.docker.com/engine/installation/). + + +::: moniker range="=sql-server-linux-2017 || =sql-server-2017" + +- For more information on hardware requirements and processor support, see [SQL Server 2016 and 2017: Hardware and software requirements](../sql-server/install/hardware-and-software-requirements-for-installing-sql-server.md) + +::: moniker-end + + +::: moniker range="=sql-server-linux-ver15 || =sql-server-ver15" + +- For more information on hardware requirements and processor support, see [SQL Server 2019: Hardware and software requirements](../sql-server/install/hardware-and-software-requirements-for-installing-sql-server-2019.md) + +::: moniker-end + + +::: moniker range=">= sql-server-linux-ver16 || >= sql-server-ver16" + +- For more information on hardware requirements and processor support, see [SQL Server 2022: Hardware and software requirements](../sql-server/install/hardware-and-software-requirements-for-installing-sql-server-2022.md) + +::: moniker-end + +- Docker `overlay2` storage driver. This driver is the default for most users. If you aren't using this storage provider and need to change, see the instructions and warnings in the [Docker documentation for configuring overlay2](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/#configure-docker-with-the-overlay-or-overlay2-storage-driver). + +- Install the latest **[sqlcmd](../tools/sqlcmd/sqlcmd-utility.md?&tabs=go)** on your Docker host. + +- At least 2 GB of disk space. + +- At least 2 GB of RAM. + +- [System requirements for SQL Server on Linux](sql-server-linux-setup.md#system). 
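+ +Before you continue, you can confirm the storage driver and resource prerequisites from the list above directly on the Docker host (a minimal sketch using standard Docker and Linux commands; `/var/lib/docker` is assumed to be the default Docker data directory): + +```bash +# Print the storage driver the Docker daemon is using; "overlay2" is expected +docker info --format '{{.Driver}}' + +# Check available RAM and free disk space (at least 2 GB of each) +free -h +df -h /var/lib/docker +```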
+ + +::: moniker range="=sql-server-linux-2017 || =sql-server-2017" + +<a id="pullandrun2017"></a> + +## Pull and run the SQL Server Linux container image + +Before starting the following steps, make sure that you select your preferred shell (**bash**, **PowerShell**, or **cmd**) at the top of this article. + +::: zone pivot="cs1-bash" +For the bash commands in this article, `sudo` is used. If you don't want to use `sudo` to run Docker, you can configure a `docker` group and add users to that group. For more information, see [Post-installation steps for Linux](https://docs.docker.com/engine/install/linux-postinstall). +::: zone-end + +## [CLI](#tab/cli) + +### Pull the container image from the registry + +Pull the [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] Linux container image from the Microsoft Container Registry. + +::: zone pivot="cs1-bash" + +```bash +sudo docker pull mcr.microsoft.com/mssql/server:2017-latest +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +docker pull mcr.microsoft.com/mssql/server:2017-latest +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +docker pull mcr.microsoft.com/mssql/server:2017-latest +``` + +::: zone-end + +This quickstart creates [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] containers. If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql19-md](../includes/sssql19-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver15&preserve-view=true#pullandrun2019) or [[!INCLUDE [sssql22-md](../includes/sssql22-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver16&preserve-view=true#pullandrun2022) versions of this article. + +The previous command pulls the latest [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] Linux container image. If you want to pull a specific image, you add a colon and the tag name, such as `mcr.microsoft.com/mssql/server:2017-GA-ubuntu`. To see all available images, see the [Microsoft Artifact Registry](https://mcr.microsoft.com/product/mssql/server/about). + +### Run the container + +To run the Linux container image with Docker, you can use the following command from a bash shell or elevated PowerShell command prompt. + +> [!IMPORTANT] +> The `SA_PASSWORD` environment variable is deprecated. Use `MSSQL_SA_PASSWORD` instead. + +::: zone pivot="cs1-bash" + +```bash +sudo docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" \ + -p 1433:1433 --name sql1 --hostname sql1 \ + -d \ + mcr.microsoft.com/mssql/server:2017-latest +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +If you're using PowerShell Core, replace the double quotes with single quotes. + +```powershell +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" ` + -p 1433:1433 --name sql1 --hostname sql1 ` + -d ` + mcr.microsoft.com/mssql/server:2017-latest +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" ^ + -p 1433:1433 --name sql1 --hostname sql1 ^ + -d ^ + mcr.microsoft.com/mssql/server:2017-latest +``` + +::: zone-end + +> [!CAUTION] +> [!INCLUDE [password-complexity](includes/password-complexity.md)] If you don't follow these password requirements, the container can't set up [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], and stops working. You can examine the error log by using the [`docker logs`](https://docs.docker.com/reference/cli/docker/container/logs) command.
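+ +For example, if the container named `sql1` exits immediately after `docker run` because the password was rejected, the following commands print the container output so you can see the reported error (a minimal sketch that assumes the container name used in this quickstart): + +```bash +# Print the logs of the container created in this quickstart, even if it has exited +docker logs sql1 + +# Follow the log output live while the container starts up +docker logs --follow sql1 +```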
+ +By default, this quickstart creates a container with the Developer edition of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)]. The process for running production editions in containers is slightly different. For more information, see [Run production container images](./sql-server-linux-docker-container-deployment.md#production). + +The following table provides a description of the parameters in the previous `docker run` example: + +| Parameter | Description | +| --- | --- | +| `-e "ACCEPT_EULA=Y"` | Set the `ACCEPT_EULA` variable to any value to confirm your acceptance of the End-User Licensing Agreement. Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `-e "MSSQL_SA_PASSWORD=<password>"` | Specify your own strong password that is at least eight characters and meets the [Password Policy](../relational-databases/security/password-policy.md). Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `-e "MSSQL_COLLATION=<collation>"` | Specify a custom [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] collation, instead of the default `SQL_Latin1_General_CP1_CI_AS`. | +| `-p 1433:1433` | Map a TCP port on the host environment (first value) with a TCP port in the container (second value). In this example, [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is listening on TCP 1433 in the container and this container port is then exposed to TCP port 1433 on the host. | +| `--name sql1` | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. | +| `--hostname sql1` | Used to explicitly set the container hostname. If you don't specify the hostname, it defaults to the container ID, which is a randomly generated system GUID. | +| `-d` | Run the container in the background (daemon). | +| `mcr.microsoft.com/mssql/server:2017-latest` | The [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image. | + +## [sqlcmd](#tab/sqlcmd) + +### Pull and run the container + +Pull and run the [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] Linux container image from the Microsoft Container Registry. + +::: zone pivot="cs1-bash" + +```bash +sudo sqlcmd create mssql --tag 2017-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd create mssql --tag 2017-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd create mssql --tag 2017-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +This quickstart creates [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] containers. If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql19-md](../includes/sssql19-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver15&preserve-view=true#pullandrun2019) or [[!INCLUDE [sssql22-md](../includes/sssql22-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver16&preserve-view=true#pullandrun2022) versions of this article. + +The previous command uses the latest [!INCLUDE [sssql17-md](../includes/sssql17-md.md)] Linux container image. If you want to pull a specific image, change the tag name, such as `2017-GA-ubuntu`.
To see all available images, run the following command: + +::: zone pivot="cs1-bash" + +```bash +sudo sqlcmd create mssql get-tags +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd create mssql get-tags +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd create mssql get-tags +``` + +::: zone-end + +The following table provides a description of the parameters in the previous `sqlcmd create mssql` example: + +| Parameter | Description | +| --- | --- | +| `--accept-eula` | Include the `--accept-eula` flag to confirm your acceptance of the End-User Licensing Agreement. Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `--port 1433` | Map a TCP port on the host environment and a TCP port in the container. In this example, [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is listening on TCP 1433 in the container and this container port is then exposed to TCP port 1433 on the host. | +| `--name sql1` | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. | +| `--hostname sql1` | Used to explicitly set the container hostname. If you don't specify the hostname, it defaults to the container ID, which is a randomly generated system GUID. | +| `--tag 2017-latest` | The [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image. | + +--- + +### View list of containers + +1. To view your Docker containers, use the `docker ps` command. + + ::: zone pivot="cs1-bash" + + ```bash + sudo docker ps -a + ``` + + ::: zone-end + + ::: zone pivot="cs1-powershell" + + ```powershell + docker ps -a + ``` + + ::: zone-end + + ::: zone pivot="cs1-cmd" + + ```cmd + docker ps -a + ``` + + ::: zone-end + + You should see output similar to the following example: + + ```output + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + d4a1999ef83e mcr.microsoft.com/mssql/server:2017-latest "/opt/mssql/bin/perm..." 2 minutes ago Up 2 minutes 0.0.0.0:1433->1433/tcp, :::1433->1433/tcp sql1 + ``` + +1. If the `STATUS` column shows a status of `Up`, then [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is running in the container and listening on the port specified in the `PORTS` column. If the `STATUS` column for your [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] container shows `Exited`, see [Troubleshoot SQL Server Docker containers](sql-server-linux-docker-container-troubleshooting.md). The server is ready for connections once the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] error logs display the message: `SQL Server is now ready for client connections. This is an informational message; no user action is required`. You can review the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] error log inside the container using the command: + + ```bash + sudo docker exec -t sql1 cat /var/opt/mssql/log/errorlog | grep connection + ``` + + The `--hostname` parameter, as discussed previously, changes the internal name of the container to a custom value. This value is the name you see returned in the following Transact-SQL query: + + ```sql + SELECT @@SERVERNAME, + SERVERPROPERTY('ComputerNamePhysicalNetBIOS'), + SERVERPROPERTY('MachineName'), + SERVERPROPERTY('ServerName'); + ``` + + Setting `--hostname` and `--name` to the same value is a good way to easily identify the target container. + +1. 
As a final step, [change your SA password](#sapassword) in a production environment, because the `MSSQL_SA_PASSWORD` is visible in `ps -eax` output and stored in the environment variable of the same name. + +::: moniker-end + + +::: moniker range="=sql-server-linux-ver15 || =sql-server-ver15" + +<a id="pullandrun2019"></a> + +## Pull and run the SQL Server Linux container image + +Before starting the following steps, make sure that you select your preferred shell (**bash**, **PowerShell**, or **cmd**) at the top of this article. + +::: zone pivot="cs1-bash" +For the bash commands in this article, `sudo` is used. If you don't want to use `sudo` to run Docker, you can configure a `docker` group and add users to that group. For more information, see [Post-installation steps for Linux](https://docs.docker.com/engine/install/linux-postinstall). +::: zone-end + +## [CLI](#tab/cli) + +### Pull the container image from the registry + +Pull the [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] Linux container image from the Microsoft Container Registry. + +::: zone pivot="cs1-bash" + +```bash +docker pull mcr.microsoft.com/mssql/server:2019-latest +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +docker pull mcr.microsoft.com/mssql/server:2019-latest +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +docker pull mcr.microsoft.com/mssql/server:2019-latest +``` + +::: zone-end + +This quickstart creates [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] containers. If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql17-md](../includes/sssql17-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-2017&preserve-view=true#pullandrun2017) or [[!INCLUDE [sssql22-md](../includes/sssql22-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver16&preserve-view=true#pullandrun2022) versions of this article. + +The previous command pulls the latest [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] Linux container image. If you want to pull a specific image, you add a colon and the tag name, such as `mcr.microsoft.com/mssql/server:2019-GA-ubuntu`. To see all available images, see the [Microsoft Artifact Registry](https://mcr.microsoft.com/product/mssql/server/about). + +### Run the container + +To run the Linux container image with Docker, you can use the following command from a bash shell or elevated PowerShell command prompt. + +> [!IMPORTANT] +> The `SA_PASSWORD` environment variable is deprecated. Use `MSSQL_SA_PASSWORD` instead. + +::: zone pivot="cs1-bash" + +```bash +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" \ + -p 1433:1433 --name sql1 --hostname sql1 \ + -d \ + mcr.microsoft.com/mssql/server:2019-latest +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +If you're using PowerShell Core, replace the double quotes with single quotes.
+ +```powershell +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" ` + -p 1433:1433 --name sql1 --hostname sql1 ` + -d ` + mcr.microsoft.com/mssql/server:2019-latest +``` + +> [!CAUTION] +> [!INCLUDE [password-complexity](includes/password-complexity.md)] + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" ^ + -p 1433:1433 --name sql1 --hostname sql1 ^ + -d ^ + mcr.microsoft.com/mssql/server:2019-latest +``` + +::: zone-end + +> [!CAUTION] +> [!INCLUDE [password-complexity](includes/password-complexity.md)] If you don't follow these password requirements, the container can't set up [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], and stops working. You can examine the error log by using the [`docker logs`](https://docs.docker.com/reference/cli/docker/container/logs) command. + +By default, this quickstart creates a container with the Developer edition of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)]. The process for running production editions in containers is slightly different. For more information, see [Run production container images](./sql-server-linux-docker-container-deployment.md#production). + +The following table provides a description of the parameters in the previous `docker run` example: + +| Parameter | Description | +| --- | --- | +| `-e "ACCEPT_EULA=Y"` | Set the `ACCEPT_EULA` variable to any value to confirm your acceptance of the End-User Licensing Agreement. Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `-e "MSSQL_SA_PASSWORD=<password>"` | Specify your own strong password that is at least eight characters and meets the [Password Policy](../relational-databases/security/password-policy.md). Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `-e "MSSQL_COLLATION=<collation>"` | Specify a custom [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] collation, instead of the default `SQL_Latin1_General_CP1_CI_AS`. | +| `-p 1433:1433` | Map a TCP port on the host environment (first value) with a TCP port in the container (second value). In this example, [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is listening on TCP 1433 in the container and this container port is then exposed to TCP port 1433 on the host. | +| `--name sql1` | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. | +| `--hostname sql1` | Used to explicitly set the container hostname. If you don't specify the hostname, it defaults to the container ID, which is a randomly generated system GUID. | +| `-d` | Run the container in the background (daemon). | +| `mcr.microsoft.com/mssql/server:2019-latest` | The [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image. | + +## [sqlcmd](#tab/sqlcmd) + +### Pull and run the container + +Pull and run the [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] Linux container image from the Microsoft Container Registry.
+ +::: zone pivot="cs1-bash" + +```bash +sudo sqlcmd create mssql --tag 2019-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd create mssql --tag 2019-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd create mssql --tag 2019-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +This quickstart creates [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] containers. If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql17-md](../includes/sssql17-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-2017&preserve-view=true#pullandrun2017) or [[!INCLUDE [sssql22-md](../includes/sssql22-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver16&preserve-view=true#pullandrun2022) versions of this article. + +The previous command pulls the latest [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] Linux container image. If you want to pull a specific image, change the tag name, such as `2019-GA-ubuntu-16.04`. To see all available images, run the following command: + +::: zone pivot="cs1-bash" + +```bash +sudo sqlcmd create mssql get-tags +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd create mssql get-tags +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd create mssql get-tags +``` + +::: zone-end + +By default, this quickstart creates a container with the Developer edition of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)]. The process for running production editions in containers is slightly different. For more information, see [Run production container images](./sql-server-linux-docker-container-deployment.md#production). + +The following table provides a description of the parameters in the previous `sqlcmd create mssql` example: + +| Parameter | Description | +| --- | --- | +| `--accept-eula` | Include the `--accept-eula` flag to confirm your acceptance of the End-User Licensing Agreement. Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `--port 1433` | Map a TCP port on the host environment and a TCP port in the container. In this example, [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is listening on TCP 1433 in the container and this container port is then exposed to TCP port 1433 on the host. | +| `--name sql1` | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. | +| `--hostname sql1` | Used to explicitly set the container hostname. If you don't specify the hostname, it defaults to the container ID, which is a randomly generated system GUID. | +| `--tag 2019-latest` | The [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image. | + +--- + +### View list of containers + +1. To view your Docker containers, use the `docker ps` command.
+ + ::: zone pivot="cs1-bash" + + ```bash + docker ps -a + ``` + + ::: zone-end + + ::: zone pivot="cs1-powershell" + + ```powershell + docker ps -a + ``` + + ::: zone-end + + ::: zone pivot="cs1-cmd" + + ```cmd + docker ps -a + ``` + + ::: zone-end + + You should see output similar to the following example: + + ```output + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + d4a1999ef83e mcr.microsoft.com/mssql/server:2019-latest "/opt/mssql/bin/perm..." 2 minutes ago Up 2 minutes 0.0.0.0:1433->1433/tcp, :::1433->1433/tcp sql1 + ``` + +1. If the `STATUS` column shows a status of `Up`, then [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is running in the container and listening on the port specified in the `PORTS` column. If the `STATUS` column for your [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] container shows `Exited`, see [Troubleshoot SQL Server Docker containers](sql-server-linux-docker-container-troubleshooting.md). The server is ready for connections once the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] error logs display the message: `SQL Server is now ready for client connections. This is an informational message; no user action is required`. You can review the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] error log inside the container using the command: + + ```bash + docker exec -t sql1 cat /var/opt/mssql/log/errorlog | grep connection + ``` + + The `--hostname` parameter, as discussed previously, changes the internal name of the container to a custom value. This value is the name you see returned in the following Transact-SQL query: + + ```sql + SELECT @@SERVERNAME, + SERVERPROPERTY('ComputerNamePhysicalNetBIOS'), + SERVERPROPERTY('MachineName'), + SERVERPROPERTY('ServerName'); + ``` + + Setting `--hostname` and `--name` to the same value is a good way to easily identify the target container. + +1. As a final step, [change your SA password](#sapassword) in a production environment, because the `MSSQL_SA_PASSWORD` is visible in `ps -eax` output and stored in the environment variable of the same name. + +::: moniker-end + + +::: moniker range=">= sql-server-linux-ver16 || >= sql-server-ver16" + + + +## Pull and run the SQL Server Linux container image + +Before starting the following steps, make sure that you select your preferred shell (**bash**, **PowerShell**, or **cmd**) at the top of this article. + +::: zone pivot="cs1-bash" +For the bash commands in this article, `sudo` is used. If you don't want to use `sudo` to run Docker, you can configure a `docker` group and add users to that group. For more information, see [Post-installation steps for Linux](https://docs.docker.com/engine/install/linux-postinstall). +::: zone-end + +## [CLI](#tab/cli) + +### Pull the container image from the registry + +Pull the [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] Linux container image from the Microsoft Container Registry. + +::: zone pivot="cs1-bash" + +```bash +docker pull mcr.microsoft.com/mssql/server:2022-latest +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +docker pull mcr.microsoft.com/mssql/server:2022-latest +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +docker pull mcr.microsoft.com/mssql/server:2022-latest +``` + +::: zone-end + +This quickstart creates [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] containers. 
If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql17-md](../includes/sssql17-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-2017&preserve-view=true#pullandrun2017) or [[!INCLUDE [sssql19-md](../includes/sssql19-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver15&preserve-view=true#pullandrun2019) versions of this article. + +The previous command pulls the latest [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] Linux container image. If you want to pull a specific image, you add a colon and the tag name, such as `mcr.microsoft.com/mssql/server:2022-GA-ubuntu`. To see all available images, see the [Microsoft Artifact Registry](https://mcr.microsoft.com/product/mssql/server/about). + +### Run the container + +To run the Linux container image with Docker, you can use the following command from a bash shell or elevated PowerShell command prompt. + +> [!IMPORTANT] +> The `SA_PASSWORD` environment variable is deprecated. Use `MSSQL_SA_PASSWORD` instead. + +::: zone pivot="cs1-bash" + +```bash +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" \ + -p 1433:1433 --name sql1 --hostname sql1 \ + -d \ + mcr.microsoft.com/mssql/server:2022-latest +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +If you're using PowerShell Core, replace the double quotes with single quotes. + +```powershell +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" ` + -p 1433:1433 --name sql1 --hostname sql1 ` + -d ` + mcr.microsoft.com/mssql/server:2022-latest +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" ^ + -p 1433:1433 --name sql1 --hostname sql1 ^ + -d ^ + mcr.microsoft.com/mssql/server:2022-latest +``` + +::: zone-end + +> [!CAUTION] +> [!INCLUDE [password-complexity](includes/password-complexity.md)] If you don't follow these password requirements, the container can't set up [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], and stops working. You can examine the error log by using the [`docker logs`](https://docs.docker.com/reference/cli/docker/container/logs) command. + +By default, this quickstart creates a container with the Developer edition of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)]. The process for running production editions in containers is slightly different. For more information, see [Run production container images](./sql-server-linux-docker-container-deployment.md#production). + +The following table provides a description of the parameters in the previous `docker run` example: + +| Parameter | Description | +| --- | --- | +| `-e "ACCEPT_EULA=Y"` | Set the `ACCEPT_EULA` variable to any value to confirm your acceptance of the End-User Licensing Agreement. Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `-e "MSSQL_SA_PASSWORD=<password>"` | Specify your own strong password that is at least eight characters and meets the [Password Policy](../relational-databases/security/password-policy.md). Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `-e "MSSQL_COLLATION=<collation>"` | Specify a custom [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] collation, instead of the default `SQL_Latin1_General_CP1_CI_AS`. | +| `-p 1433:1433` | Map a TCP port on the host environment (first value) with a TCP port in the container (second value). 
In this example, [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is listening on TCP 1433 in the container and this container port is then exposed to TCP port 1433 on the host. | +| `--name sql1` | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. | +| `--hostname sql1` | Used to explicitly set the container hostname. If you don't specify the hostname, it defaults to the container ID, which is a randomly generated system GUID. | +| `-d` | Run the container in the background (daemon). | +| `mcr.microsoft.com/mssql/server:2022-latest` | The [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image. | + +<a id="sapassword"></a> + +## Change the system administrator password + +The system administrator (`sa`) account is created on the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] instance during setup. After you create your [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] container, the `MSSQL_SA_PASSWORD` environment variable you specified is discoverable by running `echo $MSSQL_SA_PASSWORD` in the container. For security purposes, you should change your `sa` password in a production environment. + +1. Choose a strong password to use for the `sa` account. [!INCLUDE [password-complexity](includes/password-complexity.md)] + +1. Use `docker exec` to run **sqlcmd** to change the password using Transact-SQL. In the following bash example, the old and new passwords are read from user input. In the PowerShell and cmd examples, replace `<old-password>` and `<new-password>` with your values. + + ::: zone pivot="cs1-bash" + + ```bash + docker exec -it sql1 /opt/mssql-tools18/bin/sqlcmd \ + -S localhost -U sa \ + -P "$(read -sp "Enter current SA password: "; echo "${REPLY}")" \ + -Q "ALTER LOGIN sa WITH PASSWORD=\"$(read -sp "Enter new SA password: "; echo "${REPLY}")\"" + ``` + + ::: zone-end + + ::: zone pivot="cs1-powershell" + + ```powershell + docker exec -it sql1 /opt/mssql-tools18/bin/sqlcmd ` + -S localhost -U sa -P "<old-password>" ` + -Q "ALTER LOGIN sa WITH PASSWORD='<new-password>'" + ``` + + ::: zone-end + + ::: zone pivot="cs1-cmd" + + ```cmd + docker exec -it sql1 /opt/mssql-tools18/bin/sqlcmd ^ + -S localhost -U sa -P "<old-password>" ^ + -Q "ALTER LOGIN sa WITH PASSWORD='<new-password>'" + ``` + + ::: zone-end + + > [!CAUTION] + > [!INCLUDE [password-complexity](includes/password-complexity.md)] + + Recent versions of **sqlcmd** are secure by default. For more information about connection encryption, see [sqlcmd utility](../tools/sqlcmd/sqlcmd-utility.md) for Windows, and [Connecting with sqlcmd](../connect/odbc/linux-mac/connecting-with-sqlcmd.md) for Linux and macOS. If the connection doesn't succeed, you can add the `-No` option to **sqlcmd** to specify that encryption is optional, not mandatory. + +## Disable the SA account as a best practice + +> [!IMPORTANT] +> You'll need these credentials for later steps. Be sure to write down the user ID and password that you enter here. + +[!INCLUDE [connect-with-sa](includes/connect-with-sa.md)] + +## [sqlcmd](#tab/sqlcmd) + +### Pull and run the container + +Pull and run the [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] Linux container image from the Microsoft Container Registry.
+ +::: zone pivot="cs1-bash" + +```bash +sudo sqlcmd create mssql --tag 2022-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd create mssql --tag 2022-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd create mssql --tag 2022-latest --hostname sql1 --name sql1 --port 1433 --accept-eula +``` + +::: zone-end + +This quickstart creates [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] containers. If you prefer to create Linux containers for different versions of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)], see the [[!INCLUDE [sssql17-md](../includes/sssql17-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-2017&preserve-view=true#pullandrun2017) or [[!INCLUDE [sssql19-md](../includes/sssql19-md.md)]](quickstart-install-connect-docker.md?view=sql-server-linux-ver15&preserve-view=true#pullandrun2019) versions of this article. + +The previous command pulls the latest [!INCLUDE [sssql22-md](../includes/sssql22-md.md)] Linux container image. If you want to pull a specific image, change the tag name, such as `2022-CU11-ubuntu-22.04`. To see all available images, run the following command: + +::: zone pivot="cs1-bash" + +```bash +sudo sqlcmd create mssql get-tags +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd create mssql get-tags +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd create mssql get-tags +``` + +::: zone-end + +By default, this quickstart creates a container with the Developer edition of [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)]. The process for running production editions in containers is slightly different. For more information, see [Run production container images](./sql-server-linux-docker-container-deployment.md#production). + +The following table provides a description of the parameters in the previous `sqlcmd create mssql` example: + +| Parameter | Description | +| --- | --- | +| `--accept-eula` | Include the `--accept-eula` flag to confirm your acceptance of the End-User Licensing Agreement. Required setting for the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] image. | +| `--port 1433` | Map a TCP port on the host environment and a TCP port in the container. In this example, [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is listening on TCP 1433 in the container and this container port is then exposed to TCP port 1433 on the host. | +| `--name sql1` | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. | +| `--hostname sql1` | Used to explicitly set the container hostname. If you don't specify the hostname, it defaults to the container ID, which is a randomly generated system GUID. | +| `--tag 2022-latest` | The [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image. | + +**sqlcmd** disables the `sa` password and creates a new login based on the current user when it creates a container. Use the following command to view your login information. You need it in later steps.
+ +::: zone pivot="cs1-bash" + +```bash +sudo sqlcmd config view --raw +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd config view --raw +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd config view --raw +``` + +::: zone-end + +--- + +### View list of containers + +1. To view your Docker containers, use the `docker ps` command. + + ::: zone pivot="cs1-bash" + + ```bash + docker ps -a + ``` + + ::: zone-end + + ::: zone pivot="cs1-powershell" + + ```powershell + docker ps -a + ``` + + ::: zone-end + + ::: zone pivot="cs1-cmd" + + ```cmd + docker ps -a + ``` + + ::: zone-end + + You should see output similar to the following example: + + ```output + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + d4a1999ef83e mcr.microsoft.com/mssql/server:2022-latest "/opt/mssql/bin/perm..." 2 minutes ago Up 2 minutes 0.0.0.0:1433->1433/tcp, :::1433->1433/tcp sql1 + ``` + +1. If the `STATUS` column shows a status of `Up`, then [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] is running in the container and listening on the port specified in the `PORTS` column. If the `STATUS` column for your [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] container shows `Exited`, see [Troubleshoot SQL Server Docker containers](sql-server-linux-docker-container-troubleshooting.md). The server is ready for connections once the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] error logs display the message: `SQL Server is now ready for client connections. This is an informational message; no user action is required`. You can review the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] error log inside the container using the command: + + ```bash + docker exec -t sql1 cat /var/opt/mssql/log/errorlog | grep connection + ``` + + The `--hostname` parameter, as discussed previously, changes the internal name of the container to a custom value. This value is the name you see returned in the following Transact-SQL query: + + ```sql + SELECT @@SERVERNAME, + SERVERPROPERTY('ComputerNamePhysicalNetBIOS'), + SERVERPROPERTY('MachineName'), + SERVERPROPERTY('ServerName'); + ``` + + Setting `--hostname` and `--name` to the same value is a good way to easily identify the target container. + +::: moniker-end + +## Connect to SQL Server + +The following steps use the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] command-line tool, [sqlcmd utility](../tools/sqlcmd/sqlcmd-utility.md), inside the container to connect to [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)]. + +1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example, `sql1` is the name specified by the `--name` parameter when you created the container. + + ::: zone pivot="cs1-bash" + + ```bash + docker exec -it sql1 "bash" + ``` + + ::: zone-end + + ::: zone pivot="cs1-powershell" + + ```powershell + docker exec -it sql1 "bash" + ``` + + ::: zone-end + + ::: zone pivot="cs1-cmd" + + ```cmd + docker exec -it sql1 "bash" + ``` + + ::: zone-end + + +::: moniker range="=sql-server-linux-2017 || =sql-server-2017" + +1. Once inside the container, connect locally with **sqlcmd**, using its full path. + + ```bash + /opt/mssql-tools/bin/sqlcmd -S localhost -U <userid> -P "<password>" + ``` + + Recent versions of **sqlcmd** are secure by default.
For more information about connection encryption, see [sqlcmd utility](../tools/sqlcmd/sqlcmd-utility.md) for Windows, and [Connecting with sqlcmd](../connect/odbc/linux-mac/connecting-with-sqlcmd.md) for Linux and macOS. If the connection doesn't succeed, you can add the `-No` option to **sqlcmd** to specify that encryption is optional, not mandatory. + + You can omit the password on the command line to be prompted to enter it. For example: + + ```bash + /opt/mssql-tools/bin/sqlcmd -S localhost -U <userid> + ``` + +::: moniker-end + + +::: moniker range="=sql-server-linux-ver15 || =sql-server-ver15" + +1. Once inside the container, connect locally with **sqlcmd**, using its full path. + + ```bash + /opt/mssql-tools18/bin/sqlcmd -S localhost -U <userid> -P "<password>" + ``` + + Recent versions of **sqlcmd** are secure by default. For more information about connection encryption, see [sqlcmd utility](../tools/sqlcmd/sqlcmd-utility.md) for Windows, and [Connecting with sqlcmd](../connect/odbc/linux-mac/connecting-with-sqlcmd.md) for Linux and macOS. If the connection doesn't succeed, you can add the `-No` option to **sqlcmd** to specify that encryption is optional, not mandatory. + + You can omit the password on the command line to be prompted to enter it. For example: + + ```bash + /opt/mssql-tools18/bin/sqlcmd -S localhost -U <userid> + ``` + +::: moniker-end + + +::: moniker range="= sql-server-linux-ver16 || = sql-server-ver16" + +1. Once inside the container, connect locally with **sqlcmd**, using its full path. + + ```bash + /opt/mssql-tools18/bin/sqlcmd -S localhost -U <userid> -P "<password>" + ``` + + Recent versions of **sqlcmd** are secure by default. For more information about connection encryption, see [sqlcmd utility](../tools/sqlcmd/sqlcmd-utility.md) for Windows, and [Connecting with sqlcmd](../connect/odbc/linux-mac/connecting-with-sqlcmd.md) for Linux and macOS. If the connection doesn't succeed, you can add the `-No` option to **sqlcmd** to specify that encryption is optional, not mandatory. + + You can omit the password on the command line to be prompted to enter it. For example: + + ```bash + /opt/mssql-tools18/bin/sqlcmd -S localhost -U <userid> + ``` + +::: moniker-end + +1. If successful, you should get to a **sqlcmd** command prompt: `1>`. + +## Create and query data + +The following sections walk you through using **sqlcmd** and Transact-SQL to create a new database, add data, and run a query. + +### Create a new database + +The following steps create a new database named `TestDB`. + +1. From the **sqlcmd** command prompt, paste the following Transact-SQL command to create a test database: + + ```sql + CREATE DATABASE TestDB; + ``` + +1. On the next line, write a query to return the names of all of the databases on your server: + + ```sql + SELECT name + FROM sys.databases; + ``` + +1. The previous two commands weren't run immediately. Type `GO` on a new line to run the previous commands: + + ```sql + GO + ``` + +### Insert data + +Next, create a new table, `Inventory`, and insert two new rows. + +1. From the **sqlcmd** command prompt, switch context to the new `TestDB` database: + + ```sql + USE TestDB; + ``` + +1. Create a new table named `Inventory`: + + ```sql + CREATE TABLE Inventory + ( + id INT, + name NVARCHAR (50), + quantity INT + ); + ``` + +1. Insert data into the new table: + + ```sql + INSERT INTO Inventory + VALUES (1, 'banana', 150); + + INSERT INTO Inventory + VALUES (2, 'orange', 154); + ``` + +1. 
Type `GO` to run the previous commands: + + ```sql + GO + ``` + +### Select data + +Now, run a query to return data from the `Inventory` table. + +1. From the **sqlcmd** command prompt, enter a query that returns rows from the `Inventory` table where the quantity is greater than 152: + + ```sql + SELECT * + FROM Inventory + WHERE quantity > 152; + ``` + +1. Run the command: + + ```sql + GO + ``` + +### Exit the sqlcmd command prompt + +1. To end your **sqlcmd** session, type `QUIT`: + + ```sql + QUIT + ``` + +1. To exit the interactive command prompt in your container, type `exit`. Your container continues to run after you exit the interactive bash shell. + + + +## Connect from outside the container + +## [CLI](#tab/cli) + +You can also connect to the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] instance on your Docker machine from any external Linux, Windows, or macOS tool that supports SQL connections. The external tool uses the IP address for the host machine. + +The following steps use **sqlcmd** outside of your container to connect to [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] running in the container. These steps assume that you already have the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] command-line tools installed outside of your container. The same principles apply when using other tools, but the process of connecting is unique to each tool. + +1. Find the IP address for your container's host machine, using `ifconfig` or `ip addr`. + +1. For this example, install the **sqlcmd** tool on your client machine. For more information, see [sqlcmd utility](../tools/sqlcmd/sqlcmd-utility.md) or [Install the SQL Server command-line tools sqlcmd and bcp on Linux](sql-server-linux-setup-tools.md). + +1. Run **sqlcmd** specifying the IP address and the port mapped to port 1433 in your container. In this example, the port is the same as port 1433 on the host machine. If you specified a different mapped port on the host machine, you would use it here. You also need to open the appropriate inbound port on your firewall to allow the connection. + + Recent versions of **sqlcmd** are secure by default. If the connection doesn't succeed, and you're using version 18 or higher, you can add the `-No` option to **sqlcmd** to specify that encryption is optional, not mandatory. + + ::: zone pivot="cs1-bash" + + ```text + sudo sqlcmd -S <ip_address>,1433 -U <userid> -P "<password>" + ``` + + ::: zone-end + + ::: zone pivot="cs1-powershell" + + ```powershell + sqlcmd -S <ip_address>,1433 -U <userid> -P "<password>" + ``` + + ::: zone-end + + ::: zone pivot="cs1-cmd" + + ```cmd + sqlcmd -S <ip_address>,1433 -U <userid> -P "<password>" + ``` + + ::: zone-end + + > [!CAUTION] + > [!INCLUDE [password-complexity](includes/password-complexity.md)] + +1. Run Transact-SQL commands. When finished, type `QUIT`. + +## [sqlcmd](#tab/sqlcmd) + +You can also connect to the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] instance on your Docker machine from any external Linux, Windows, or macOS tool that supports SQL connections. The external tool uses the IP address for the host machine. + +The following steps use **sqlcmd** outside of your container to connect to [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] running in the container. The same principles apply when using other tools, but the process of connecting is unique to each tool. + +1. Run **sqlcmd** in the same session you used to create your container. It keeps track of the connection information via contexts so you can easily connect at any time.
`sqlcmd config view` can be used to view your available contexts. + + ::: zone pivot="cs1-bash" + + ```text + sudo sqlcmd + ``` + + ::: zone-end + + ::: zone pivot="cs1-powershell" + + ```powershell + sqlcmd query + ``` + + ::: zone-end + + ::: zone pivot="cs1-cmd" + + ```cmd + sqlcmd query + ``` + + ::: zone-end + +1. Run Transact-SQL commands. When finished, type `QUIT`. + +--- + +Other common tools to connect to [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] include: + +- [SQL Server extension for Visual Studio Code](../tools/visual-studio-code/sql-server-develop-use-vscode.md) +- [Use SQL Server Management Studio on Windows to manage SQL Server on Linux](sql-server-linux-manage-ssms.md) +- [What is Azure Data Studio?](/azure-data-studio/what-is-azure-data-studio) +- [mssql-cli (Preview)](https://github.com/dbcli/mssql-cli/blob/master/doc/usage_guide.md) +- [Manage SQL Server on Linux with PowerShell Core](sql-server-linux-manage-powershell-core.md) + +## Remove your container + +## [CLI](#tab/cli) + +If you want to remove the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] container used in this tutorial, run the following commands: + +::: zone pivot="cs1-bash" + +```text +docker stop sql1 +docker rm sql1 +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +docker stop sql1 +docker rm sql1 +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +docker stop sql1 +docker rm sql1 +``` + +::: zone-end + +## [sqlcmd](#tab/sqlcmd) + +If you want to remove the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] container used in this tutorial, run the following command: + +::: zone pivot="cs1-bash" + +```text +sudo sqlcmd delete --force +``` + +::: zone-end + +::: zone pivot="cs1-powershell" + +```powershell +sqlcmd delete --force +``` + +::: zone-end + +::: zone pivot="cs1-cmd" + +```cmd +sqlcmd delete --force +``` + +::: zone-end + +--- + +## Docker demo + +After you finish using the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image for Docker, you might want to know how Docker is used to improve development and testing. The following video shows how Docker can be used in a continuous integration and deployment scenario. + +> [!VIDEO https://channel9.msdn.com/Events/Connect/2017/T152/player] + +## Related tasks + +- [Run multiple SQL Server containers](sql-server-linux-docker-container-deployment.md#multiple) +- [Persist your data](sql-server-linux-docker-container-configure.md#persist) + +## Related content + +- [Restore a SQL Server database in a Linux container](tutorial-restore-backup-in-sql-server-container.md) +- [Troubleshoot SQL Server Docker containers](sql-server-linux-docker-container-troubleshooting.md) +- [mssql-docker GitHub repository](https://github.com/microsoft/mssql-docker) + +[!INCLUDE [contribute-to-content](../includes/paragraph-content/contribute-to-content.md)] \ No newline at end of file diff --git a/scenarios/upstream/FlatcarOnAzure/flatcar-on-azure.md b/scenarios/upstream/FlatcarOnAzure/flatcar-on-azure.md new file mode 100644 index 000000000..aaaf474a2 --- /dev/null +++ b/scenarios/upstream/FlatcarOnAzure/flatcar-on-azure.md @@ -0,0 +1,187 @@ +--- +title: 'Running Flatcar Container Linux on Microsoft Azure' +description: 'Deploy Flatcar Container Linux in Microsoft Azure by creating resource groups and using official marketplace images.' 
## Docker demo

After you finish using the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.md)] Linux container image for Docker, you might want to know how Docker is used to improve development and testing. The following video shows how Docker can be used in a continuous integration and deployment scenario.

> [!VIDEO https://channel9.msdn.com/Events/Connect/2017/T152/player]

## Related tasks

- [Run multiple SQL Server containers](sql-server-linux-docker-container-deployment.md#multiple)
- [Persist your data](sql-server-linux-docker-container-configure.md#persist)

## Related content

- [Restore a SQL Server database in a Linux container](tutorial-restore-backup-in-sql-server-container.md)
- [Troubleshoot SQL Server Docker containers](sql-server-linux-docker-container-troubleshooting.md)
- [mssql-docker GitHub repository](https://github.com/microsoft/mssql-docker)

[!INCLUDE [contribute-to-content](../includes/paragraph-content/contribute-to-content.md)]
\ No newline at end of file
diff --git a/scenarios/upstream/FlatcarOnAzure/flatcar-on-azure.md b/scenarios/upstream/FlatcarOnAzure/flatcar-on-azure.md
new file mode 100644
index 000000000..aaaf474a2
--- /dev/null
+++ b/scenarios/upstream/FlatcarOnAzure/flatcar-on-azure.md
@@ -0,0 +1,187 @@
---
title: 'Running Flatcar Container Linux on Microsoft Azure'
description: 'Deploy Flatcar Container Linux in Microsoft Azure by creating resource groups and using official marketplace images.'
ms.topic: article
ms.date: 03/17/2025
author: naman-msft
ms.author: namanparikh
ms.custom: innovation-engine, azure, flatcar
---

## Creating a resource group via the Microsoft Azure CLI

Follow the [installation and configuration guides][azure-cli] for the Microsoft Azure CLI to set up your local installation.

Instances on Microsoft Azure must be created within a resource group. Create a new resource group with the following command:

```bash
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP_NAME="group-1$RANDOM_SUFFIX"
export REGION="WestUS2"
az group create --name $RESOURCE_GROUP_NAME --location $REGION
```

Results:

```json
{
  "id": "/subscriptions/xxxxx/resourceGroups/group-1xxx",
  "location": "WestUS2",
  "managedBy": null,
  "name": "group-1xxx",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
```

Now that you have a resource group, you can choose a channel of Flatcar Container Linux you would like to install.

## Using the official image from the Marketplace

Official Flatcar Container Linux images for all channels are available in the Marketplace, published by the `kinvolk` publisher. Flatcar Container Linux is designed to be [updated automatically][update-docs], with different schedules per channel. Updating can be [disabled][reboot-docs], although doing so isn't recommended. The [release notes][release-notes] contain information about specific features and bug fixes.

The following command queries for the latest image URN specifier through the Azure CLI:

```bash
az vm image list --all -p kinvolk -f flatcar -s stable-gen2 --query '[-1]'
```

Results:

```json
{
  "architecture": "x64",
  "offer": "flatcar-container-linux-free",
  "publisher": "kinvolk",
  "sku": "stable-gen2",
  "urn": "kinvolk:flatcar-container-linux-free:stable-gen2:3815.2.0",
  "version": "3815.2.0"
}
```

Use the offer named `flatcar-container-linux-free`; there is also a legacy offer called `flatcar-container-linux` with the same contents. The SKU, the third element of the image URN, identifies the release channel and whether the image targets Hyper-V Generation 1 or Generation 2 VMs. Generation 2 instance types use UEFI boot and should be preferred; their SKUs match the pattern `<channel>-gen2`: `alpha-gen2`, `beta-gen2`, or `stable-gen2`. For Generation 1 instance types, drop the `-gen2` suffix: `alpha`, `beta`, or `stable`.

Note: _The `az vm image list -s` flag matches parts of the SKU, which means that `-s stable` returns both the `stable` and `stable-gen2` SKUs._
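If you want to script against the newest release, you can capture the URN from the query above instead of copying it by hand. A minimal sketch; `FLATCAR_URN` is an illustrative variable name:

```bash
# Capture the URN of the newest stable-gen2 image for later use.
FLATCAR_URN=$(az vm image list --all -p kinvolk -f flatcar-container-linux-free -s stable-gen2 \
  --query '[-1].urn' --output tsv)
echo "$FLATCAR_URN"  # for example: kinvolk:flatcar-container-linux-free:stable-gen2:3815.2.0
```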
Before you can use the offers, you might need to accept the legal terms once, which the following commands demonstrate for `flatcar-container-linux-free` and `stable-gen2`:

```bash
az vm image terms show --publisher kinvolk --offer flatcar-container-linux-free --plan stable-gen2
az vm image terms accept --publisher kinvolk --offer flatcar-container-linux-free --plan stable-gen2
```

For quick tests, the official Azure CLI also supports an alias for the latest Flatcar stable image:

```bash
az vm create --name node-1 --resource-group $RESOURCE_GROUP_NAME --admin-username core --image FlatcarLinuxFreeGen2 --generate-ssh-keys
```

Results:

```json
{
  "fqdns": null,
  "id": "/subscriptions/xxxxx/resourceGroups/group-1xxx/providers/Microsoft.Compute/virtualMachines/node-1",
  "location": "WestUS2",
  "name": "node-1",
  "powerState": "VM running",
  "provisioningState": "Succeeded",
  "resourceGroup": "group-1xxx",
  "zones": null
}
```

### CoreVM

Flatcar images are also published under an offer called `flatcar-container-linux-corevm-amd64`. This offer does not require accepting image terms and does not require specifying plan information when creating instances or building derived images. The content of the images matches the other offers.

```bash
az vm image list --all -p kinvolk -f flatcar-container-linux-corevm-amd64 -s stable-gen2 --query '[-1]'
```

Results:

```json
{
  "architecture": "x64",
  "offer": "flatcar-container-linux-corevm-amd64",
  "publisher": "kinvolk",
  "sku": "stable-gen2",
  "urn": "kinvolk:flatcar-container-linux-corevm-amd64:stable-gen2:3815.2.0",
  "version": "3815.2.0"
}
```

### ARM64

Arm64 images are published under the offer called `flatcar-container-linux-corevm`. These are Generation 2 images, the only supported option on Azure for Arm64 instances, so the SKU contains only the release channel name without the `-gen2` suffix: `alpha`, `beta`, or `stable`. This offer has the same properties as the `CoreVM` offer described above.

```bash
az vm image list --all --architecture arm64 -p kinvolk -f flatcar -s stable --query '[-1]'
```

Results:

```json
{
  "architecture": "Arm64",
  "offer": "flatcar-container-linux-corevm",
  "publisher": "kinvolk",
  "sku": "stable",
  "urn": "kinvolk:flatcar-container-linux-corevm:stable:3815.2.0",
  "version": "3815.2.0"
}
```

### Flatcar Pro Images

Flatcar Pro images were paid Marketplace images that came with commercial support and extra features. All the features that used to be exclusive to Flatcar Pro, such as support for NVIDIA GPUs, are now available to all users in the standard Flatcar Marketplace images.

### Plan information for building your image from the Marketplace image

When building an image based on the Marketplace image, you sometimes need to specify the original plan. The plan name is the image SKU (for example, `stable`), the plan product is the image offer (for example, `flatcar-container-linux-free`), and the plan publisher is the same as the image publisher (`kinvolk`).

## Community Shared Image Gallery

While the Marketplace images are recommended, it sometimes might be easier, or even required, to use Shared Image Galleries, for example when using Packer to build Kubernetes CAPI images.

A public Shared Image Gallery hosts recent Flatcar Stable images for amd64. Here is how to list the image definitions (for now you will only find `flatcar-stable-amd64`) and the image versions they provide:

```bash
az sig image-definition list-community --public-gallery-name flatcar-23485951-527a-48d6-9d11-6931ff0afc2e --location westeurope
az sig image-version list-community --public-gallery-name flatcar-23485951-527a-48d6-9d11-6931ff0afc2e --gallery-image-definition flatcar-stable-amd64 --location westeurope
```
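Once you have picked an image version from the listing, you can reference it by its community gallery image ID when creating a VM. A minimal sketch, assuming the version shown is still published and `$RESOURCE_GROUP_NAME` is set as earlier:

```bash
# Create a VM from a community gallery image; the version segment is illustrative.
az vm create \
  --resource-group $RESOURCE_GROUP_NAME \
  --name node-gallery \
  --admin-username core \
  --generate-ssh-keys \
  --image /CommunityGalleries/flatcar-23485951-527a-48d6-9d11-6931ff0afc2e/Images/flatcar-stable-amd64/Versions/3815.2.0
```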
A second gallery, `flatcar4capi-742ef0cb-dcaa-4ecb-9cb0-bfd2e43dccc0`, exists for prebuilt Kubernetes CAPI images. It has image definitions for each CAPI version; for example, `flatcar-stable-amd64-capi-v1.26.3` provides recent Flatcar Stable versions.

[flatcar-user]: https://groups.google.com/forum/#!forum/flatcar-linux-user
[etcd-docs]: https://etcd.io/docs
[quickstart]: ../
[reboot-docs]: ../../setup/releases/update-strategies
[azure-cli]: https://docs.microsoft.com/en-us/cli/azure/overview
[butane-configs]: ../../provisioning/config-transpiler
[irc]: irc://irc.freenode.org:6667/#flatcar
[docs]: ../../
[resource-group]: https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions#naming-rules-and-restrictions
[storage-account]: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts
[azure-flatcar-image-upload]: https://github.com/flatcar/flatcar-cloud-image-uploader
[release-notes]: https://flatcar.org/releases
[update-docs]: ../../setup/releases/update-strategies
\ No newline at end of file