Troubleshoot Azure Load Balancer
In this article, we will learn how to troubleshoot common questions about Basic and Standard Azure Load Balancer. When Load Balancer connectivity is unavailable, the most common symptoms are as follows:
- Firstly, VMs behind the Load Balancer are not responding to health probes
- Secondly, VMs behind the Load Balancer are not responding to the traffic on the configured port
Symptom: No outbound connectivity from Standard internal Load Balancers (ILB)
Validation and resolution
Standard ILBs are secure by default, whereas Basic ILBs allowed outbound connectivity to the internet via a hidden public IP address. If you recently moved from a Basic ILB to a Standard ILB, you should create a public IP explicitly through an outbound-only configuration, which locks down the IP via NSGs. You can also use a NAT Gateway on your subnet.
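For example, one way to restore outbound connectivity is to attach a NAT Gateway to the backend subnet. The following Azure PowerShell sketch assumes hypothetical names ("myResourceGroup", "myVNet", "mySubnet", the eastus region) and an existing 10.0.0.0/24 subnet prefix; adjust them to your environment.

```powershell
# Sketch only: create a NAT gateway with its own public IP and attach it to the backend subnet
# so that Standard ILB backend VMs regain outbound internet connectivity.
$pip = New-AzPublicIpAddress -ResourceGroupName "myResourceGroup" -Name "myNatPublicIP" `
    -Location "eastus" -Sku Standard -AllocationMethod Static

$natGw = New-AzNatGateway -ResourceGroupName "myResourceGroup" -Name "myNatGateway" `
    -Location "eastus" -Sku Standard -PublicIpAddress $pip -IdleTimeoutInMinutes 10

# Associate the NAT gateway with the backend subnet and push the change.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVNet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
    -AddressPrefix "10.0.0.0/24" -NatGateway $natGw
$vnet | Set-AzVirtualNetwork
```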
Symptom: VMs behind the Load Balancer are not responding to health probes
The Load Balancer backend pool VMs may not be responding to the probes due to any of the following reasons:
- Firstly, Load Balancer backend pool VM is unhealthy
- Secondly, Load Balancer backend pool VM is not listening on the probe port
- Thirdly, Firewall, or a network security group is blocking the port on the Load Balancer backend pool VMs
- Lastly, Other misconfigurations in Load Balancer
Cause 1: Load Balancer backend pool VM is unhealthy
Validation and resolution
To resolve this issue, log in to the participating VMs and check whether the VM state is healthy and whether it can respond to PsPing or TCPing from another VM in the pool. If the VM is unhealthy or unable to respond to the probe, you must rectify the issue and return the VM to a healthy state before it can participate in load balancing.
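If PsPing or TCPing is not available, the built-in Test-NetConnection cmdlet gives the same quick check; 10.0.0.4 and 3389 below are placeholders for the target VM's private IP and probe port.

```powershell
# Run from another VM in the same virtual network; TcpTestSucceeded : True
# means the target VM is up and reachable on the probe port.
Test-NetConnection -ComputerName 10.0.0.4 -Port 3389
```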
Cause 2: Load Balancer backend pool VM is not listening on the probe port
If the VM is healthy, but is not responding to the probe, then one possible reason could be that the probe port is not open on the participating VM, or the VM is not listening on that port.
Validation and resolution
- Firstly, Log in to the backend VM.
- Secondly, Open a command prompt and run the following command to validate there is an application listening on the probe port:
netstat -an
- Then, if the port state is not listed as LISTENING, configure the proper port (a quick way to check is sketched after this list).
- Alternatively, select another port that is listed as LISTENING and update the Load Balancer configuration accordingly.
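As a minimal sketch, assuming the probe port is 80, either of the following confirms whether something is listening on it.

```powershell
# Show only sockets in the LISTEN state on the assumed probe port (80).
Get-NetTCPConnection -State Listen -LocalPort 80 -ErrorAction SilentlyContinue

# Or narrow the netstat output to listening sockets and look for the probe port.
netstat -an | findstr "LISTENING"
```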
Cause 3: Firewall, or a network security group is blocking the port on the load balancer backend pool VMs
If the firewall on the VM is blocking the probe port, or if one or more network security groups configured on the subnet or on the VM are not allowing the probe to reach the port, the VM is unable to respond to the health probe.
Validation and resolution
- Firstly, if the firewall is on, check if it is configured to allow the probe port. If not, configure the firewall to allow traffic on the probe port, and test again.
- Secondly, from the list of network security groups, check whether anything is blocking the incoming or outgoing traffic on the probe port. Also, check whether a Deny All network security group rule on the NIC of the VM or on the subnet has a higher priority than the default rule that allows Load Balancer probes and traffic (network security groups must allow the Load Balancer IP address 168.63.129.16). See the sketch after this list for both checks.
- Then, if any of these rules are blocking the probe traffic, remove and reconfigure the rules to allow the probe traffic.
- Lastly, test if the VM has now started responding to the health probes.
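The sketch below combines both checks, assuming a probe port of 80 and placeholder resource names ("myNsg", "myResourceGroup"); run the firewall command on the backend VM and the NSG commands from a machine with the Az PowerShell module.

```powershell
# On the backend VM: allow the assumed probe port (80) through Windows Firewall.
New-NetFirewallRule -DisplayName "Allow LB health probe" -Direction Inbound `
    -Protocol TCP -LocalPort 80 -Action Allow

# From a management machine: list the NSG rules so a higher-priority Deny rule is easy to spot.
$nsg = Get-AzNetworkSecurityGroup -Name "myNsg" -ResourceGroupName "myResourceGroup"
Get-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg |
    Select-Object Name, Priority, Direction, Access, DestinationPortRange
```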
Cause 4: Other misconfigurations in Load Balancer
If all the preceding causes have been validated and resolved correctly, and the backend VM still does not respond to the health probe, manually test for connectivity and collect some traces to understand the connectivity.
Validation and resolution
- Firstly, Use Psping from one of the other VMs within the VNet to test the probe port response (example: .\psping.exe -t 10.0.0.4:3389) and record results.
- Secondly, Use TCPing from one of the other VMs within the VNet to test the probe port response (example: .\tcping.exe 10.0.0.4 3389) and record results.
- Then, if no response is received in these ping tests:
- Run a simultaneous Netsh trace on the target backend pool VM and another test VM from the same VNet.
- Analyze the network capture and see if there are both incoming and outgoing packets related to the ping query.
- Verify if the probe packets are being forced to another destination (possibly via UDR settings) before reaching the load balancer. This can cause the traffic to never reach the backend VM.
- Lastly, change the probe type (for example, from HTTP to TCP), and configure the corresponding port in the network security group ACLs and the firewall to validate whether the issue is with the configuration of the probe response. For more information about health probe configuration, see Endpoint Load Balancing health probe configuration.
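For the trace and the UDR check, a rough sketch follows; "myVmNic" and "myResourceGroup" are placeholder names and the trace file path is arbitrary.

```powershell
# On the backend VM: capture a trace while the probe test is reproduced, then stop it.
netsh trace start capture=yes tracefile=C:\temp\probe.etl
netsh trace stop

# From a management machine: inspect the effective routes on the VM's NIC to see
# whether a UDR is diverting probe traffic before it reaches the VM.
Get-AzEffectiveRouteTable -NetworkInterfaceName "myVmNic" -ResourceGroupName "myResourceGroup" |
    Format-Table Name, AddressPrefix, NextHopType, NextHopIpAddress
```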
Symptom: VMs behind Load Balancer are not responding to traffic on the configured data port
If a backend pool VM is listed as healthy and responds to the health probes, but is still not participating in the Load Balancing, or is not responding to the data traffic, it may be due to any of the following reasons:
- Firstly, Load Balancer Backend pool VM is not listening on the data port
- Secondly, Network security group is blocking the port on the Load Balancer backend pool VM
- Thirdly, Accessing the Load Balancer from the same VM and NIC
- Lastly, Accessing the internal Load Balancer frontend from the participating Load Balancer backend pool VM
Cause 1: Load Balancer backend pool VM is not listening on the data port
If a VM does not respond to the data traffic, it may be because either the target port is not open on the participating VM, or, the VM is not listening on that port.
Validation and resolution
- Firstly, Log in to the backend VM.
- Then, Open a command prompt and run the following command to validate there is an application listening on the data port: netstat -an
- Thirdly, if the port is not listed with state “LISTENING”, configure the proper listener port.
- Lastly, If the port is marked as Listening, then check the target application on that port for any possible issues.
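As a small illustration, assuming a data port of 443, the following shows whether anything is listening on the port and which process owns the listener, so the target application can be investigated.

```powershell
# List listeners on the assumed data port (443) and resolve the owning process.
Get-NetTCPConnection -State Listen -LocalPort 443 -ErrorAction SilentlyContinue |
    ForEach-Object { Get-Process -Id $_.OwningProcess }
```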
Cause 2: Network security group is blocking the port on the Load Balancer backend pool VM
For the public load balancer, the IP address of the Internet clients will be used for communication between the clients and the load balancer backend VMs. Make sure the IP addresses of the clients are allowed in the backend VM's network security group.
- Firstly, list the network security groups configured on the backend VM. For more information, see Manage network security groups
- Secondly, from the list of network security groups, check if:
- anything is blocking the incoming or outgoing traffic on the data port.
- a Deny All network security group rule on the NIC of the VM or on the subnet has a higher priority than the default rule that allows Load Balancer probes and traffic.
- Then, if any of the rules are blocking the traffic, remove and reconfigure those rules to allow the data traffic.
- Lastly, test if the VM has now started to respond to the health probes.
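Because NIC-level and subnet-level NSGs combine, it can be easier to query the effective rules in one step; "myVmNic" and "myResourceGroup" below are placeholder names, and the VM must be running.

```powershell
# Show the combined (NIC + subnet) security rules actually applied to the backend VM's NIC.
Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName "myVmNic" -ResourceGroupName "myResourceGroup"
```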
Cause 3: Accessing the Load Balancer from the same VM and Network interface
If your application hosted in the backend VM of a Load Balancer is trying to access another application hosted in the same backend VM over the same Network Interface, it is an unsupported scenario and will fail.
Resolution
You can resolve this issue via one of the following methods:
- Firstly, configure separate backend pool VMs per application.
- Secondly, configure the application in dual-NIC VMs so that each application uses its own network interface and IP address.
Cause 4: Accessing the internal Load Balancer frontend from the participating Load Balancer backend pool VM
If an internal Load Balancer is configured inside a VNet, and one of the participant backend VMs is trying to access the internal Load Balancer frontend, failures can occur when the flow is mapped to the originating VM. This scenario is not supported.
Resolution
There are several ways to unblock this scenario, including using a proxy. Evaluate Application Gateway or other third-party proxies (for example, nginx or haproxy). For more information about Application Gateway, see Overview of Application Gateway.
Details
Internal Load Balancers don’t translate outbound originated connections to the front end of an internal Load Balancer because both are in private IP address space. Public Load Balancers provide outbound connections from private IP addresses inside the virtual network to public IP addresses. For internal Load Balancers, this approach avoids potential SNAT port exhaustion inside a unique internal IP address space, where translation isn’t required.
Symptom: Cannot change the backend port for an existing LB rule of a load balancer that has a VM Scale Set deployed in the backend pool.
Cause: The backend port cannot be modified for a load balancing rule that’s used by a health probe for a load balancer referenced by a VM Scale Set.
Resolution
To change the port, remove the health probe by updating the VM Scale Set, update the port, and then configure the health probe again.
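A rough sketch of that sequence with Azure PowerShell is shown below; "myLB", "myProbe", "myResourceGroup", and port 8081 are placeholders, the rule change itself is done in the portal or with Set-AzLoadBalancerRuleConfig, and an Update-AzVmss may also be needed to push the model change to the scale set instances.

```powershell
$lb = Get-AzLoadBalancer -Name "myLB" -ResourceGroupName "myResourceGroup"

# 1. Remove the health probe from the load balancer model and push the change.
Remove-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "myProbe"
$lb | Set-AzLoadBalancer

# 2. After updating the backend port on the load balancing rule, re-create the probe
#    and push the change again.
Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "myProbe" -Protocol Tcp `
    -Port 8081 -IntervalInSeconds 15 -ProbeCount 2
$lb | Set-AzLoadBalancer
```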
Symptom: A small amount of traffic is still going through the load balancer after removing VMs from its backend pool.
Cause: VMs removed from the backend pool should no longer receive traffic. The small amount of network traffic could be related to storage, DNS, and other functions within Azure.
To verify, you can conduct a network trace. The FQDNs used for your blob storage accounts are listed within the properties of each storage account. From a virtual machine within your Azure subscription, you can perform an nslookup to determine the Azure IP assigned to that storage account.
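For example, assuming a placeholder storage account named "mystorageaccount", the lookup from a VM in the subscription looks like this:

```powershell
# Resolve the blob endpoint FQDN to see the Azure IP currently assigned to it.
Resolve-DnsName mystorageaccount.blob.core.windows.net

# Classic equivalent.
nslookup mystorageaccount.blob.core.windows.net
```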
Reference: Microsoft Documentation