title | description | services | documentationcenter | author | ms.service | ms.devlang | ms.topic | ms.tgt_pltfrm | ms.workload | ms.date | ms.author |
---|---|---|---|---|---|---|---|---|---|---|---|
Azure Load Balancer concepts | Overview of Azure Load Balancer concepts | load-balancer | na | asudbring | load-balancer | na | conceptual | na | infrastructure-services | 07/13/2020 | allensu |
Azure Load Balancer provides several capabilities for both UDP and TCP applications.
You can create a load-balancing rule to distribute traffic that arrives at the frontend to a backend pool. Azure Load Balancer uses a hashing algorithm to distribute inbound flows (not bytes) and rewrites the headers of flows to backend pool instances. A server is available to receive new flows when a health probe indicates a healthy backend endpoint.
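To make the probe's role concrete, the following is a minimal sketch of an HTTP health endpoint that a backend VM could expose. It's illustrative only: the port (8080) and path (/health) are assumptions and must match whatever probe you configure on the load balancer.

```python
# Minimal sketch of an HTTP health-probe endpoint on a backend VM.
# The port (8080) and path (/health) are illustrative assumptions;
# configure them to match the probe defined on your load balancer.
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # A 200 response signals that this instance can receive new flows.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            # Any non-200 status causes the probe to mark the instance unhealthy.
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```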
By default, Azure Load Balancer uses a five-tuple hash.
The hash includes:
- Source IP address
- Source port
- Destination IP address
- Destination port
- IP protocol number

The hash is used to map flows to available servers.
Affinity to a source IP address is created by using a two-tuple or three-tuple hash. Packets of the same flow arrive on the same instance behind the load-balanced front end.
The source port changes when a client starts a new flow from the same source IP. As a result, the five-tuple hash might cause the traffic to go to a different backend endpoint. For more information, see Configure the distribution mode for Azure Load Balancer.
The following image displays the hash-based distribution:
Figure: Hash-based distribution
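The hashing algorithm itself is internal to the platform, but the mapping idea can be sketched conceptually. In the sketch below, the hash function, the tuple encoding, and the backend names are all assumptions used only for illustration; it shows why a five-tuple hash can send flows with different source ports to different instances, while a two-tuple hash pins every flow from one source IP to the same instance.

```python
# Conceptual sketch only: Azure Load Balancer's actual hashing algorithm is
# internal to the platform. This illustrates how hashing a flow tuple maps
# flows to backend instances, and how dropping fields creates affinity.
import hashlib

backends = ["vm-0", "vm-1", "vm-2"]  # hypothetical backend pool


def pick_backend(*tuple_fields: str) -> str:
    """Hash the chosen tuple fields and map the result to a backend instance."""
    digest = hashlib.sha256("|".join(tuple_fields).encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]


# Five-tuple (default): source IP, source port, destination IP, destination
# port, protocol. A new source port means the flow may land on a different VM.
print(pick_backend("203.0.113.10", "50001", "10.0.0.4", "80", "TCP"))
print(pick_backend("203.0.113.10", "50002", "10.0.0.4", "80", "TCP"))

# Two-tuple (source IP affinity): only source IP and destination IP are
# hashed, so every flow from the same client IP maps to the same instance.
print(pick_backend("203.0.113.10", "10.0.0.4"))
print(pick_backend("203.0.113.10", "10.0.0.4"))
```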
Azure Load Balancer doesn't directly interact with TCP or UDP or the application layer, so any TCP or UDP application scenario can be supported. The load balancer doesn't close or originate flows or interact with the payload of the flow, and it doesn't provide application layer gateway functionality. Protocol handshakes always occur directly between the client and the backend pool instance. A response to an inbound flow is always a response from a virtual machine. When the flow arrives on the virtual machine, the original source IP address is also preserved.
- Every endpoint is answered by a VM. For example, a TCP handshake occurs between the client and the selected backend VM. A response to a request to a frontend is a response generated by a backend VM. When you successfully validate connectivity to a frontend, you're validating connectivity end to end to at least one backend virtual machine.
- Application payloads are transparent to the load balancer. Any UDP or TCP application can be supported.
- Because the load balancer doesn't interact with the TCP payload or provide TLS offload, you can build comprehensive encrypted scenarios. Using Load Balancer gains large scale-out for TLS applications by terminating the TLS connection on the VM itself. For example, your TLS session keying capacity is only limited by the type and number of VMs you add to the backend pool. A minimal sketch of terminating TLS on the VM follows this list.
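Because the load balancer only forwards the encrypted byte stream, the TLS handshake and session are handled by the backend VM itself. The following sketch illustrates that idea with Python's standard library; the certificate paths and the port are hypothetical placeholders, not values Azure requires.

```python
# Minimal sketch: TLS is terminated on the backend VM itself, because the
# load balancer passes the encrypted byte stream through without touching it.
# Certificate/key paths and the port (8443) are hypothetical placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        while True:
            # The TLS handshake happens here, directly between the client
            # and this VM; the load balancer only forwarded the packets.
            conn, addr = tls_server.accept()
            with conn:
                conn.recv(1024)  # read the client's request (ignored in this sketch)
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK")
```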
- See Create a public Standard Load Balancer to get started with using a Load Balancer: create one, create VMs with a custom IIS extension installed, and load balance the web app between the VMs.
- Learn about Azure Load Balancer outbound connections.
- Learn more about Azure Load Balancer.
- Learn about Health Probes.
- Learn about Standard Load Balancer Diagnostics.
- Learn more about Network Security Groups.