---
title: Optimize network interrupt handling on Arm servers

draft: true
cascade:
    draft: true


minutes_to_complete: 20

who_is_this_for: This is an introductory topic for developers and performance engineers who are interested in understanding how network interrupt patterns can impact performance on cloud servers.

learning_objectives:
- Analyze the current interrupt request (IRQ) layout on an Arm Linux system
- Experiment with different interrupt options and patterns to improve performance
- Configure optimal IRQ distribution strategies for your workload
- Implement persistent IRQ management solutions

prerequisites:
- An Arm computer running Linux

further_reading:
    - resource:
        title: Perf for Linux on Arm (LinuxPerf)
        link: https://learn.arm.com/install-guides/perf/
        type: website
    - resource:
        title: Tune network workloads on Arm-based bare-metal instances
        link: /learning-paths/servers-and-cloud-computing/tune-network-workloads-on-bare-metal/
        type: learning-path
    - resource:
        title: Get started with Arm-based cloud instances
        link: /learning-paths/servers-and-cloud-computing/csp/
        type: learning-path
    - resource:
        title: Linux kernel IRQ subsystem documentation
        link: https://www.kernel.org/doc/html/latest/core-api/irq/index.html
        type: website
    - resource:
        title: Microbenchmark and tune network performance with iPerf3
        link: /learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/
        type: learning-path

### FIXED, DO NOT MODIFY
# ================================================================================
weight: 21 # Set to always be larger than the content in this path
title: "Next Steps" # Always the same, html page title.
layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
---


---
title: Understand and analyze network IRQ configuration
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

In modern cloud environments, network performance is critical to overall system efficiency. Network interface cards (NICs) generate interrupt requests (IRQs) to notify the CPU when data packets arrive or need to be sent. These interrupts temporarily pause normal processing, allowing the system to handle network traffic.

By default, Linux distributes these network interrupts across available CPU cores. However, this distribution is not always optimal for performance, for the following reasons:

- High interrupt rates: in busy servers, network cards can generate thousands of interrupts per second
- CPU cache locality: processing related network operations on the same CPU core improves cache efficiency
- Resource contention: when network IRQs compete with application workloads for the same CPU resources, both can suffer
- Power efficiency: IRQ management can help reduce unnecessary CPU wake-ups, improving energy efficiency

Understanding and optimizing IRQ assignment allows you to balance network processing loads, reduce latency, and maximize throughput for your specific workloads.

## Identifying IRQs on your system

To get started, display all IRQs on your system and their CPU assignments:

```bash
grep '' /proc/irq/*/smp_affinity_list | while IFS=: read path cpus; do
  # Extract the IRQ number from the path (/proc/irq/<n>/smp_affinity_list)
  irq=$(basename "$(dirname "$path")")
  # Each IRQ directory contains a subdirectory named after the owning device, if any
  device=$(find /proc/irq/"$irq" -mindepth 1 -maxdepth 1 -type d -printf '%f\n' 2>/dev/null | head -1)
  echo "IRQ $irq -> CPUs $cpus -> Device ${device:-none}"
done
```

The output is long and looks similar to:

```output
IRQ 104 -> CPUs 12 -> Device ens34-Tx-Rx-5
IRQ 26 -> CPUs 0-15 -> Device ACPI:Ged
```

## How to identify network IRQs

Network-related IRQs can be identified by looking at the **Device** column in the output.

You can identify network interfaces using the command:

```bash
ip link show
```

Look for common interface naming patterns in the output. Traditional ethernet interfaces use names like `eth0`, while wireless interfaces typically appear as `wlan0`. Modern Linux systems often use the predictable naming scheme, which creates names like `enP3p3s0f0` and `ens5-Tx-Rx-0`.

The predictable naming scheme encodes the interface's physical location in its name. For example, `enP3p3s0f0` breaks down as: `en` for ethernet, `P3` for PCI domain 3, `p3` for PCI bus 3, `s0` for PCI slot 0, and `f0` for function 0. Because the name is derived from the hardware location, it stays consistent across reboots.
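
Multi-queue NICs typically register one IRQ per Tx/Rx queue, so a quick count tells you how many network IRQs you need to manage. A minimal check, assuming the interface from the earlier output is named `ens34`:

```bash
# Count the Tx/Rx queue IRQs registered for the interface
grep -c 'ens34-Tx-Rx' /proc/interrupts
```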

## Improve performance

Once you've identified the network IRQs, you can adjust their CPU assignments to improve performance.

Identify the NIC (network interface card) IRQs, experiment with different CPU assignments, and measure whether performance improves.

You might notice that some NIC IRQs are assigned to the same CPU cores by default, creating duplicate assignments.

For example:

```output
IRQ 101 -> CPUs 12 -> Device ens34-Tx-Rx-2
IRQ 104 -> CPUs 12 -> Device ens34-Tx-Rx-5
IRQ 106 -> CPUs 10 -> Device ens34-Tx-Rx-7
```
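
To spot duplicates quickly, you can list any CPU that serves more than one of the interface's IRQs. This is a small sketch, assuming the interface is named `ens34`:

```bash
# Print CPU assignments that appear more than once among the NIC's IRQs
grep ens34 /proc/interrupts | awk '{sub(":","",$1); print $1}' | while read irq; do
  cat /proc/irq/$irq/smp_affinity_list
done | sort | uniq -d
```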

## Understanding IRQ performance impact

When network IRQs are assigned to the same CPU cores (as shown in the example above where IRQ 101 and 104 both use CPU 12), this can potentially degrade performance as multiple interrupts compete for the same resources, while other cores remain underutilized.

By optimizing IRQ distribution, you can achieve more balanced processing and improved throughput. This optimization is especially important for high-traffic servers where network performance is critical.

{{% notice Note %}} There are suggestions for experiments in the next section. {{% /notice %}}

## How can I reset my IRQs if performance gets worse?

If your experiments reduce performance, you can return the IRQs to their default behavior using the following commands:

```bash
sudo systemctl unmask irqbalance
sudo systemctl enable --now irqbalance
```
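
To confirm the daemon is active again, check its status:

```bash
systemctl status irqbalance --no-pager
```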

If needed, install `irqbalance` on your system.

For Debian-based systems, run:

```bash
sudo apt install irqbalance
```

## Saving the changes

Any changes you make to IRQs are reset at reboot. You will need to change your system's settings to make your changes permanent.
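
One approach is a oneshot systemd service that reapplies your affinity settings at boot. The sketch below is illustrative: the unit name and the `/usr/local/bin/set-irq-affinity.sh` script it calls are assumptions, and that script would contain the affinity commands you settled on:

```bash
# Create a hypothetical oneshot unit that runs an affinity script at boot
sudo tee /etc/systemd/system/irq-affinity.service > /dev/null <<'EOF'
[Unit]
Description=Apply custom network IRQ affinity
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/set-irq-affinity.sh

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable irq-affinity.service
```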
layout: learningpathall
---

## Optimal IRQ management strategies

Performance testing across multiple cloud platforms shows that IRQ management effectiveness depends heavily on system size and workload characteristics. While no single approach works optimally in all scenarios, clear patterns emerged during testing under heavy network loads.

## Recommendations for systems with 16 vCPUs or fewer

For smaller systems with 16 or fewer vCPUs, concentrated IRQ assignment proves most effective:

- Concentrate network IRQs on just one or two CPU cores rather than spreading them across all available cores.
- Use the `smp_affinity` range assignment pattern with a limited core range (example: `0-1`).
- This approach works best when the number of NIC IRQs exceeds the number of available vCPUs.
- Focus on high-throughput network workloads where concentrated IRQ handling delivers the most significant performance improvements.

Performance improves significantly when network IRQs are concentrated rather than dispersed across all available cores on smaller systems. This concentration reduces context switching overhead and improves cache locality for interrupt handling.
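
As a concrete illustration, the sketch below concentrates every NIC IRQ on cores 0-1; the interface name `ens5` is an assumption to replace with your own:

```bash
# Pin all ens5 queue IRQs to cores 0-1 on a small instance
for irq in $(awk '/ens5/ {sub(":","",$1); print $1}' /proc/interrupts); do
  echo 0-1 | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
done
```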

## Recommendations for systems with more than 16 vCPUs

For larger systems with more than 16 vCPUs, a lighter-touch approach works better:

- Default IRQ distribution typically delivers good performance.
- Focus on preventing multiple network IRQs from sharing the same CPU core.
- Use the diagnostic scripts from the previous section to identify and resolve overlapping IRQ assignments.
- Apply the paired core pattern to ensure balanced distribution across the system.

On larger systems, interrupt handling overhead becomes less significant relative to total processing capacity. The primary performance issue occurs when high-frequency network interrupts compete for the same core, creating bottlenecks.
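
One way to check for this contention is to watch how much CPU time each core spends servicing interrupts; `mpstat` from the sysstat package reports this in the `%irq` column:

```bash
# Sample per-CPU interrupt time once per second (requires the sysstat package)
mpstat -P ALL 1
```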

## Implementation considerations

When implementing these IRQ management strategies, several factors influence your success:

- Consider your workload type first, as CPU-bound applications can benefit from different IRQ patterns than I/O-bound applications. Always benchmark your specific workload with different IRQ patterns rather than assuming one approach works universally.
- For real-time monitoring, use `watch -n1 'grep . /proc/interrupts'` to observe IRQ distribution as it happens. This helps you verify your changes are working as expected.
- On multi-socket systems, NUMA effects become important. Keep IRQs on cores close to the PCIe devices generating them to minimize cross-node memory access latency; a quick way to check a NIC's NUMA node is shown below. Additionally, ensure your IRQ affinity settings persist across reboots by adding them to `/etc/rc.local` or creating a systemd service file.
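
To find which NUMA node a NIC is attached to, read its sysfs entry; the interface name `ens5` here is an assumption:

```bash
# Prints the device's NUMA node number, or -1 on single-node systems
cat /sys/class/net/ens5/device/numa_node
```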

As workloads and hardware evolve, revisiting and adjusting IRQ management strategies might be necessary to maintain optimal performance. What works well today might need refinement as your application scales or changes.

## Next steps

You have successfully learned how to optimize network interrupt handling on Arm servers. You can now analyze IRQ distributions, implement different management patterns, and configure persistent solutions for your workloads.

Different IRQ management patterns can significantly impact network performance and overall system efficiency.

Network interrupt requests (IRQs) can be distributed across CPU cores in various ways, each with potential benefits depending on your workload characteristics and system configuration. By strategically assigning network IRQs to specific cores, you can improve cache locality, reduce contention, and potentially boost overall system performance.

The following patterns have been tested on various systems and can be implemented using the provided scripts. An optimal pattern is suggested at the conclusion of this Learning Path, but your specific workload might benefit from a different approach.

## Common IRQ distribution patterns

Four main distribution strategies offer different performance characteristics:

- Default: uses the IRQ pattern provided at boot time by the Linux kernel
- Random: assigns all IRQs to cores without overlap with network IRQs
- Housekeeping: assigns all non-network IRQs to specific dedicated cores
- NIC-focused: assigns network IRQs to single or multiple ranges of cores, including pairs

## Scripts to implement IRQ management patterns

The scripts below demonstrate how to implement different IRQ management patterns on your system. Each script targets a specific distribution strategy. Before running these scripts, identify your network interface name using `ip link show` and determine your system's CPU topology with `lscpu`. Always test these changes in a non-production environment first, as improper IRQ assignment can impact system stability.
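
For example, you can gather both pieces of information up front; the grep patterns below are only for trimming `lscpu` output:

```bash
# List network interfaces, then summarize core and NUMA layout
ip link show
lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA node\(s\)):'
```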

## Housekeeping pattern

The housekeeping pattern isolates non-network IRQs to dedicated cores, reducing interference with your primary workloads.

Replace `#core range here` with your desired CPU range (for example: "0,3"):

```bash
HOUSEKEEP=#core range here (example: "0,3")
for irq in $(awk '/ACPI:Ged/ {sub(":","",$1); print $1}' /proc/interrupts); do
  # Pin each housekeeping (non-network) IRQ to the chosen core range
  echo $HOUSEKEEP | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
done
```

## Paired core pattern

The paired core assignment pattern distributes network IRQs across CPU core pairs for better cache coherency.

This example works for a 16 vCPU machine. Replace `#interface name` with your network interface (for example: "ens5"):

```bash
IFACE=#interface name (example: "ens5")
# Gather the NIC queue IRQ numbers from /proc/interrupts
irqs=($(awk '/'$IFACE'/ {sub(":","",$1); print $1}' /proc/interrupts))
i=0
for irq in "${irqs[@]}"; do
  # Assign each queue IRQ to a core pair: 0-1, 2-3, ..., 14-15 on a 16 vCPU machine
  start=$(( (i * 2) % 16 ))
  echo "$start-$((start + 1))" | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
  i=$((i + 1))
done
```

## Range assignment pattern

The range assignment pattern assigns network IRQs to a specific range of cores, providing dedicated network processing capacity.

Replace `#interface name` with your network interface (for example: "ens5") and `#core range here` with the target cores (for example: "0-1"):

```bash
IFACE=#interface name (example: "ens5")
RANGE=#core range here (example: "0-1")
for irq in $(awk '/'$IFACE'/ {sub(":","",$1); print $1}' /proc/interrupts); do
  # Pin every NIC IRQ to the chosen core range
  echo $RANGE | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
done
```

Each pattern offers different performance characteristics depending on your workload. The housekeeping pattern reduces system noise, paired cores optimize cache usage, and range assignment provides dedicated network processing capacity. Improper configuration can degrade performance or stability, so always test these patterns in a non-production environment to determine which provides the best results for your specific use case.
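
A simple way to compare patterns is to rerun the same throughput test after each change, for example with `iperf3`; the server address and stream count below are placeholders:

```bash
# On the system under test
iperf3 -s

# From a separate load generator: 8 parallel streams for 30 seconds
iperf3 -c <server-ip> -P 8 -t 30
```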

Continue to the next section for additional guidance.