---
title: "Quickstart: Create an internal load balancer - Azure CLI"
titleSuffix: Azure Load Balancer
description: This quickstart shows how to create an internal load balancer using the Azure CLI
services: load-balancer
documentationcenter: na
author: asudbring
manager: KumudD
tags: azure-resource-manager
# Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
ms.service: load-balancer
ms.devlang: na
ms.topic: quickstart
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 08/20/2020
ms.author: allensu
ms.custom: mvc, devx-track-js, devx-track-azurecli
---
Get started with Azure Load Balancer by using the Azure CLI to create an internal load balancer and two virtual machines.
- An Azure account with an active subscription. Create an account for free.
- Azure CLI installed locally or Azure Cloud Shell.
[!INCLUDE cloud-shell-try-it.md]
If you choose to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.28 or later. To find the version, run `az --version`. If you need to install or upgrade, see Install the Azure CLI.
An Azure resource group is a logical container into which Azure resources are deployed and managed.
Create a resource group with az group create:
- Named myResourceGroupLB.
- In the eastus location.
```azurecli
az group create \
  --name myResourceGroupLB \
  --location eastus
```
Note
Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure Load Balancer SKUs.
Before you deploy VMs and your load balancer, create the supporting virtual network resources.
Create a virtual network using az network vnet create:
- Named myVNet.
- Address prefix of 10.1.0.0/16.
- Subnet named myBackendSubnet.
- Subnet prefix of 10.1.0.0/24.
- In the myResourceGroupLB resource group.
- Location of eastus.
```azurecli
az network vnet create \
  --resource-group myResourceGroupLB \
  --location eastus \
  --name myVNet \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name myBackendSubnet \
  --subnet-prefixes 10.1.0.0/24
```
For a standard load balancer, the VMs in the backend address pool are required to have network interfaces that belong to a network security group.
Create a network security group using az network nsg create:
- Named myNSG.
- In resource group myResourceGroupLB.
```azurecli
az network nsg create \
  --resource-group myResourceGroupLB \
  --name myNSG
```
Create a network security group rule using az network nsg rule create:
- Named myNSGRuleHTTP.
- In the network security group you created in the previous step, myNSG.
- In resource group myResourceGroupLB.
- Protocol (*).
- Direction Inbound.
- Source (*).
- Destination (*).
- Destination port Port 80.
- Access Allow.
- Priority 200.
```azurecli
az network nsg rule create \
  --resource-group myResourceGroupLB \
  --nsg-name myNSG \
  --name myNSGRuleHTTP \
  --protocol '*' \
  --direction inbound \
  --source-address-prefix '*' \
  --source-port-range '*' \
  --destination-address-prefix '*' \
  --destination-port-range 80 \
  --access allow \
  --priority 200
```
Create two network interfaces with az network nic create:
- Named myNicVM1.
- In resource group myResourceGroupLB.
- In virtual network myVNet.
- In subnet myBackendSubnet.
- In network security group myNSG.
```azurecli
az network nic create \
  --resource-group myResourceGroupLB \
  --name myNicVM1 \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --network-security-group myNSG
```
- Named myNicVM2.
- In resource group myResourceGroupLB.
- In virtual network myVNet.
- In subnet myBackendSubnet.
- In network security group myNSG.
```azurecli
az network nic create \
  --resource-group myResourceGroupLB \
  --name myNicVM2 \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --network-security-group myNSG
```
In this section, you create:
- A cloud configuration file named cloud-init.txt for the server configuration.
- Two virtual machines to be used as backend servers for the load balancer.
Use a cloud-init configuration file to install NGINX and run a 'Hello World' Node.js app on a Linux virtual machine.
In your current shell, create a file named cloud-init.txt. Copy and paste the following configuration into the shell. Ensure that you copy the whole cloud-init file correctly, especially the first line:
```yaml
#cloud-config
package_upgrade: true
packages:
  - nginx
  - nodejs
  - npm
write_files:
  - owner: www-data:www-data
    path: /etc/nginx/sites-available/default
    content: |
      server {
        listen 80;
        location / {
          proxy_pass http://localhost:3000;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection keep-alive;
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
        }
      }
  - owner: azureuser:azureuser
    path: /home/azureuser/myapp/index.js
    content: |
      var express = require('express')
      var app = express()
      var os = require('os');
      app.get('/', function (req, res) {
        res.send('Hello World from host ' + os.hostname() + '!')
      })
      app.listen(3000, function () {
        console.log('Hello world app listening on port 3000!')
      })
runcmd:
  - service nginx restart
  - cd "/home/azureuser/myapp"
  - npm init -y
  - npm install express
  - nodejs index.js
```
Create the virtual machines with az vm create:
- Named myVM1.
- In resource group myResourceGroupLB.
- Attached to network interface myNicVM1.
- Virtual machine image UbuntuLTS.
- Configuration file cloud-init.txt that you created in the previous step.
- In Zone 1.
```azurecli
az vm create \
  --resource-group myResourceGroupLB \
  --name myVM1 \
  --nics myNicVM1 \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys \
  --custom-data cloud-init.txt \
  --zone 1 \
  --no-wait
```
- Named myVM2.
- In resource group myResourceGroupLB.
- Attached to network interface myNicVM2.
- Virtual machine image UbuntuLTS.
- Configuration file cloud-init.txt that you created in the previous step.
- In Zone 2.
```azurecli
az vm create \
  --resource-group myResourceGroupLB \
  --name myVM2 \
  --nics myNicVM2 \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys \
  --custom-data cloud-init.txt \
  --zone 2 \
  --no-wait
```
It may take a few minutes for the VMs to deploy.
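Because the commands above use `--no-wait`, the CLI returns before provisioning finishes. To check progress, a query like the following should work; the `provisioningState` property is standard, though the exact output shape can vary by CLI version:

```azurecli
az vm list \
  --resource-group myResourceGroupLB \
  --query "[].{Name:name, State:provisioningState}" \
  --output table
```

Both VMs are ready once their state reports `Succeeded`.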
This section details how you can create and configure the following components of the load balancer:
- A frontend IP pool that receives the incoming network traffic on the load balancer.
- A backend IP pool where the frontend pool sends the load balanced network traffic.
- A health probe that determines health of the backend VM instances.
- A load balancer rule that defines how traffic is distributed to the VMs.
Create an internal load balancer with az network lb create:
- Named myLoadBalancer.
- A frontend pool named myFrontEnd.
- A backend pool named myBackEndPool.
- Associated with the virtual network myVNet.
- Associated with the backend subnet myBackendSubnet.
```azurecli
az network lb create \
  --resource-group myResourceGroupLB \
  --name myLoadBalancer \
  --sku Standard \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool
```
A health probe checks all virtual machine instances to ensure they can receive network traffic.
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
Create a health probe with az network lb probe create:
- Monitors the health of the virtual machines.
- Named myHealthProbe.
- Protocol TCP.
- Monitoring Port 80.
```azurecli
az network lb probe create \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
  --port 80
```
A load balancer rule defines:
- Frontend IP configuration for the incoming traffic.
- The backend IP pool to receive the traffic.
- The required source and destination port.
Create a load balancer rule with az network lb rule create:
- Named myHTTPRule
- Listening on Port 80 in the frontend pool myFrontEnd.
- Sending load-balanced network traffic to the backend address pool myBackEndPool using Port 80.
- Using health probe myHealthProbe.
- Protocol TCP.
- Disable outbound source network address translation (SNAT).
```azurecli
az network lb rule create \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe \
  --disable-outbound-snat true
```
Note
The virtual machines in the backend pool will not have outbound internet connectivity with this configuration. For more information, see Outbound connections in Azure. Options for providing outbound connectivity:
- Outbound-only load balancer configuration
- What is Virtual Network NAT?
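As one possible sketch of the NAT option: associate a NAT gateway with the backend subnet. The resource names myNATGatewayIP and myNATGateway below are illustrative only and aren't used elsewhere in this quickstart:

```azurecli
# Create a public IP for the NAT gateway (illustrative name).
az network public-ip create \
  --resource-group myResourceGroupLB \
  --name myNATGatewayIP \
  --sku Standard

# Create the NAT gateway (illustrative name).
az network nat gateway create \
  --resource-group myResourceGroupLB \
  --name myNATGateway \
  --public-ip-addresses myNATGatewayIP \
  --idle-timeout 10

# Associate the NAT gateway with the backend subnet.
az network vnet subnet update \
  --resource-group myResourceGroupLB \
  --vnet-name myVNet \
  --name myBackendSubnet \
  --nat-gateway myNATGateway
```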
Add the virtual machines to the backend pool with az network nic ip-config address-pool add:
- In backend address pool myBackEndPool.
- In resource group myResourceGroupLB.
- Associated with network interface myNicVM1 and ipconfig1.
- Associated with load balancer myLoadBalancer.
```azurecli
az network nic ip-config address-pool add \
  --address-pool myBackEndPool \
  --ip-config-name ipconfig1 \
  --nic-name myNicVM1 \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer
```
- In backend address pool myBackEndPool.
- In resource group myResourceGroupLB.
- Associated with network interface myNicVM2 and ipconfig1.
- Associated with load balancer myLoadBalancer.
```azurecli
az network nic ip-config address-pool add \
  --address-pool myBackEndPool \
  --ip-config-name ipconfig1 \
  --nic-name myNicVM2 \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer
```
Note
Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure Load Balancer SKUs.
Before you deploy VMs and your load balancer, create the supporting virtual network resources.
Create a virtual network using az network vnet create:
- Named myVNet.
- Address prefix of 10.1.0.0/16.
- Subnet named myBackendSubnet.
- Subnet prefix of 10.1.0.0/24.
- In the myResourceGroupLB resource group.
- Location of eastus.
```azurecli
az network vnet create \
  --resource-group myResourceGroupLB \
  --location eastus \
  --name myVNet \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name myBackendSubnet \
  --subnet-prefixes 10.1.0.0/24
```
For a standard load balancer, the VMs in the backend address pool are required to have network interfaces that belong to a network security group.
Create a network security group using az network nsg create:
- Named myNSG.
- In resource group myResourceGroupLB.
```azurecli
az network nsg create \
  --resource-group myResourceGroupLB \
  --name myNSG
```
Create a network security group rule using az network nsg rule create:
- Named myNSGRuleHTTP.
- In the network security group you created in the previous step, myNSG.
- In resource group myResourceGroupLB.
- Protocol (*).
- Direction Inbound.
- Source (*).
- Destination (*).
- Destination port Port 80.
- Access Allow.
- Priority 200.
```azurecli
az network nsg rule create \
  --resource-group myResourceGroupLB \
  --nsg-name myNSG \
  --name myNSGRuleHTTP \
  --protocol '*' \
  --direction inbound \
  --source-address-prefix '*' \
  --source-port-range '*' \
  --destination-address-prefix '*' \
  --destination-port-range 80 \
  --access allow \
  --priority 200
```
Create two network interfaces with az network nic create:
- Named myNicVM1.
- In resource group myResourceGroupLB.
- In virtual network myVNet.
- In subnet myBackendSubnet.
- In network security group myNSG.
```azurecli
az network nic create \
  --resource-group myResourceGroupLB \
  --name myNicVM1 \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --network-security-group myNSG
```
- Named myNicVM2.
- In resource group myResourceGroupLB.
- In virtual network myVNet.
- In subnet myBackendSubnet.
- In network security group myNSG.
```azurecli
az network nic create \
  --resource-group myResourceGroupLB \
  --name myNicVM2 \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --network-security-group myNSG
```
In this section, you create:
- A cloud configuration file named cloud-init.txt for the server configuration.
- An availability set for the virtual machines.
- Two virtual machines to be used as backend servers for the load balancer.
To verify that the load balancer was successfully created, you install NGINX on the virtual machines.
Use a cloud-init configuration file to install NGINX and run a 'Hello World' Node.js app on a Linux virtual machine.
In your current shell, create a file named cloud-init.txt. Copy and paste the following configuration into the shell. Ensure that you copy the whole cloud-init file correctly, especially the first line:
```yaml
#cloud-config
package_upgrade: true
packages:
  - nginx
  - nodejs
  - npm
write_files:
  - owner: www-data:www-data
    path: /etc/nginx/sites-available/default
    content: |
      server {
        listen 80;
        location / {
          proxy_pass http://localhost:3000;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection keep-alive;
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
        }
      }
  - owner: azureuser:azureuser
    path: /home/azureuser/myapp/index.js
    content: |
      var express = require('express')
      var app = express()
      var os = require('os');
      app.get('/', function (req, res) {
        res.send('Hello World from host ' + os.hostname() + '!')
      })
      app.listen(3000, function () {
        console.log('Hello world app listening on port 3000!')
      })
runcmd:
  - service nginx restart
  - cd "/home/azureuser/myapp"
  - npm init -y
  - npm install express
  - nodejs index.js
```
Create the availability set with az vm availability-set create:
- Named myAvSet.
- In resource group myResourceGroupLB.
- Location eastus.
```azurecli
az vm availability-set create \
  --name myAvSet \
  --resource-group myResourceGroupLB \
  --location eastus
```
Create the virtual machines with az vm create:
- Named myVM1.
- In resource group myResourceGroupLB.
- Attached to network interface myNicVM1.
- Virtual machine image UbuntuLTS.
- Configuration file cloud-init.txt that you created in the previous step.
- In availability set myAvSet.
```azurecli
az vm create \
  --resource-group myResourceGroupLB \
  --name myVM1 \
  --nics myNicVM1 \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys \
  --custom-data cloud-init.txt \
  --availability-set myAvSet \
  --no-wait
```
- Named myVM2.
- In resource group myResourceGroupLB.
- Attached to network interface myNicVM2.
- Virtual machine image UbuntuLTS.
- Configuration file cloud-init.txt that you created in the previous step.
- In availability set myAvSet.
```azurecli
az vm create \
  --resource-group myResourceGroupLB \
  --name myVM2 \
  --nics myNicVM2 \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys \
  --custom-data cloud-init.txt \
  --availability-set myAvSet \
  --no-wait
```
It may take a few minutes for the VMs to deploy.
This section details how you can create and configure the following components of the load balancer:
- A frontend IP pool that receives the incoming network traffic on the load balancer.
- A backend IP pool where the frontend pool sends the load balanced network traffic.
- A health probe that determines health of the backend VM instances.
- A load balancer rule that defines how traffic is distributed to the VMs.
Create an internal load balancer with az network lb create:
- Named myLoadBalancer.
- A frontend pool named myFrontEnd.
- A backend pool named myBackEndPool.
- Associated with the virtual network myVNet.
- Associated with the backend subnet myBackendSubnet.
```azurecli
az network lb create \
  --resource-group myResourceGroupLB \
  --name myLoadBalancer \
  --sku Basic \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool
```
A health probe checks all virtual machine instances to ensure they can receive network traffic.
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
Create a health probe with az network lb probe create:
- Monitors the health of the virtual machines.
- Named myHealthProbe.
- Protocol TCP.
- Monitoring Port 80.
```azurecli
az network lb probe create \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
  --port 80
```
A load balancer rule defines:
- Frontend IP configuration for the incoming traffic.
- The backend IP pool to receive the traffic.
- The required source and destination port.
Create a load balancer rule with az network lb rule create:
- Named myHTTPRule
- Listening on Port 80 in the frontend pool myFrontEnd.
- Sending load-balanced network traffic to the backend address pool myBackEndPool using Port 80.
- Using health probe myHealthProbe.
- Protocol TCP.
```azurecli
az network lb rule create \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe
```
Add the virtual machines to the backend pool with az network nic ip-config address-pool add:
- In backend address pool myBackEndPool.
- In resource group myResourceGroupLB.
- Associated with network interface myNicVM1 and ipconfig1.
- Associated with load balancer myLoadBalancer.
```azurecli
az network nic ip-config address-pool add \
  --address-pool myBackEndPool \
  --ip-config-name ipconfig1 \
  --nic-name myNicVM1 \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer
```
- In backend address pool myBackEndPool.
- In resource group myResourceGroupLB.
- Associated with network interface myNicVM2 and ipconfig1.
- Associated with load balancer myLoadBalancer.
```azurecli
az network nic ip-config address-pool add \
  --address-pool myBackEndPool \
  --ip-config-name ipconfig1 \
  --nic-name myNicVM2 \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer
```
Use az network public-ip create to create a public IP address for the bastion host:
- Create a standard zone redundant public IP address named myBastionIP.
- In myResourceGroupLB.
```azurecli
az network public-ip create \
  --resource-group myResourceGroupLB \
  --name myBastionIP \
  --sku Standard
```
Use az network vnet subnet create to create a subnet:
- Named AzureBastionSubnet.
- Address prefix of 10.1.1.0/24.
- In virtual network myVNet.
- In resource group myResourceGroupLB.
```azurecli
az network vnet subnet create \
  --resource-group myResourceGroupLB \
  --name AzureBastionSubnet \
  --vnet-name myVNet \
  --address-prefixes 10.1.1.0/24
```
Use az network bastion create to create a bastion host:
- Named myBastionHost
- In myResourceGroupLB
- Associated with public IP myBastionIP.
- Associated with virtual network myVNet.
- In eastus location.
```azurecli
az network bastion create \
  --resource-group myResourceGroupLB \
  --name myBastionHost \
  --public-ip-address myBastionIP \
  --vnet-name myVNet \
  --location eastus
```
It will take a few minutes for the bastion host to deploy.
Create the network interface with az network nic create:
- Named myNicTestVM.
- In resource group myResourceGroupLB.
- In virtual network myVNet.
- In subnet myBackendSubnet.
- In network security group myNSG.
```azurecli
az network nic create \
  --resource-group myResourceGroupLB \
  --name myNicTestVM \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --network-security-group myNSG
```
Create the virtual machine with az vm create:
- Named myTestVM.
- In resource group myResourceGroupLB.
- Attached to network interface myNicTestVM.
- Virtual machine image Win2019Datacenter.
- Choose values for <adminpass> and <adminuser>.
```azurecli
az vm create \
  --resource-group myResourceGroupLB \
  --name myTestVM \
  --nics myNicTestVM \
  --image Win2019Datacenter \
  --admin-username <adminuser> \
  --admin-password <adminpass> \
  --no-wait
```
It can take a few minutes for the virtual machine to deploy.
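If you prefer to stay in the CLI rather than the portal, the load balancer's private frontend IP address can also be retrieved with a query like this, assuming the names used earlier in this quickstart:

```azurecli
az network lb frontend-ip show \
  --resource-group myResourceGroupLB \
  --lb-name myLoadBalancer \
  --name myFrontEnd \
  --query privateIpAddress \
  --output tsv
```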
1. Sign in to the Azure portal.
2. Find the private IP address for the load balancer on the Overview screen. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer.
3. Make note of or copy the address next to Private IP Address in the Overview of myLoadBalancer.
4. Select All services in the left-hand menu, select All resources, and then from the resources list, select myTestVM in the myResourceGroupLB resource group.
5. On the Overview page, select Connect, then Bastion.
6. Enter the username and password that you entered during VM creation.
7. Open Internet Explorer on myTestVM.
8. Enter the IP address from the previous step into the address bar of the browser. The page served by the NGINX web server on one of the backend VMs is displayed in the browser.
:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Create a standard internal load balancer" border="true":::
To see the load balancer distribute traffic across both VMs, force-refresh your web browser from the test VM. Each response includes the hostname of the backend VM that served it.
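From a shell inside the virtual network, a short loop can also show the distribution. Replace the `<load-balancer-private-ip>` placeholder with the address you noted earlier:

```bash
# Request the page several times; the hostname in each
# response shows which backend VM handled the request.
for i in {1..6}; do
  curl -s http://<load-balancer-private-ip>/
  echo
done
```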
When no longer needed, use the az group delete command to remove the resource group, load balancer, and all related resources.
```azurecli
az group delete \
  --name myResourceGroupLB
```
In this quickstart:
- You created a standard or basic internal load balancer.
- Attached virtual machines.
- Configured the load balancer traffic rule and health probe.
- Tested the load balancer.
To learn more about Azure Load Balancer, continue to What is Azure Load Balancer? and Load Balancer frequently asked questions.
Learn more about Load Balancer and Availability zones.