{{DISPLAYTITLE:High-Performance Packet Capture with DPDK}}
'''This is an expert-level guide for configuring the VoIPmonitor sensor to use the Data Plane Development Kit (DPDK) for ultra-high-performance packet capture. This setup is intended for multi-gigabit traffic loads where the standard Linux network stack becomes a bottleneck.'''
== What is DPDK and Why Use It? ==
The '''Data Plane Development Kit (DPDK)''' is a set of libraries and drivers that allows an application, such as the VoIPmonitor sensor, to bypass the operating system's kernel and interact directly with the network card hardware.
*'''Standard Kernel Method:''' In a normal setup, every incoming packet (or group of packets) triggers a CPU interrupt (IRQ), telling the kernel to process it. This interrupt-driven model is reliable but creates significant overhead; because it is limited by the performance of a single CPU core, it typically maxes out around 2-3 Gbit/s on a 10Gbit NIC.
*'''DPDK Method:''' DPDK uses '''poll-mode drivers'''. A dedicated CPU core constantly polls the network card for new packets, avoiding the overhead of kernel interrupts and context switching entirely. This allows much higher packet throughput, enabling VoIPmonitor to handle 6 Gbit/s or more on a single server.
The trade-off is that this setup requires careful system tuning and dedicated CPU cores to avoid scheduler delays, which can cause packet drops in a high-throughput environment.
== Step 1: System and Hardware Prerequisites ==
* '''Supported NIC:''' You must use a network card supported by DPDK. A list of compatible hardware can be found on the [https://core.dpdk.org/supported/ DPDK supported hardware page]. Intel 10-Gigabit cards (such as the X540 or X710 series) are a common choice.
* '''BIOS/UEFI Settings:''' '''VT-d''' (for Intel) or '''AMD-Vi''' (for AMD) virtualization technology must be '''enabled''' in your server's BIOS/UEFI. IOMMU must also be enabled.
* '''DPDK Version:''' VoIPmonitor requires DPDK version 21.08.0 or newer. It is recommended to download the latest stable release from the [https://core.dpdk.org/download/ official DPDK website].
== Step 2: System Preparation (HugePages & IOMMU) ==
DPDK requires specific kernel features and pre-allocated memory to function.
=== A. Configure HugePages ===
DPDK uses large, contiguous blocks of memory called HugePages for its packet buffers (mbufs).
;To temporarily allocate 16GB of 1G-sized HugePages on NUMA node 0:
<pre>
echo 16 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
</pre>
''Note: You must allocate HugePages on the same NUMA node (CPU socket) that your network card is physically connected to.''
;To make this permanent, edit the GRUB configuration file (`/etc/default/grub`):
<pre>
# This example allocates 16 1GB pages at boot
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1G hugepagesz=1G hugepages=16"
</pre>
After editing, run `update-grub` and reboot.
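As a sanity check on the arithmetic above, the value written into `nr_hugepages` is simply the desired reservation divided by the page size. The helper below is a hypothetical illustration (not part of DPDK or VoIPmonitor); the `1048576` default matches the `hugepages-1048576kB` sysfs directory name for 1 GiB pages.

```python
def nr_hugepages(total_gib: int, page_size_kib: int = 1048576) -> int:
    """Number of HugePages of the given size needed to reserve total_gib GiB.

    page_size_kib matches the sysfs directory name, e.g. 1048576 for
    hugepages-1048576kB (1 GiB pages) or 2048 for hugepages-2048kB (2 MiB pages).
    """
    total_kib = total_gib * 1024 * 1024
    if total_kib % page_size_kib:
        raise ValueError("reservation is not a whole number of pages")
    return total_kib // page_size_kib

# The example above: 16 GiB of 1 GiB pages -> echo 16
print(nr_hugepages(16))        # 16
# The same reservation with 2 MiB pages would need far more entries:
print(nr_hugepages(16, 2048))  # 8192
```

This also shows why 1G pages are preferred here: one sysfs write of `16` instead of `8192` small pages, and far less TLB pressure for DPDK's large contiguous buffers.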
=== B. Enable IOMMU ===
The IOMMU (Input-Output Memory Management Unit) is required for the VFIO driver used to bind the NIC to DPDK.
;Edit `/etc/default/grub` and add the following to `GRUB_CMDLINE_LINUX_DEFAULT`:
<pre>
# For Intel CPUs
GRUB_CMDLINE_LINUX_DEFAULT="... iommu=pt intel_iommu=on"
# For AMD CPUs, AMD-Vi is typically enabled by default; keep iommu=pt
</pre>
After editing, run `update-grub` and reboot. After rebooting, verify that the `/sys/kernel/iommu_groups/` directory is populated with subdirectories.
== Step 3: Bind the Network Interface to DPDK ==
Once the system is prepared, you must unbind the network interface you want to use for sniffing from its kernel driver and bind it to a DPDK-compatible driver. The OS will then no longer see or be able to use this interface (e.g., it will not appear in `ifconfig` or `ip a`).
;1. Find the PCI address of your network card:
<pre>
# This script is included with the DPDK source package
dpdk-devbind.py -s

Network devices using kernel driver
===================================
0000:1f:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=ens3f0 drv=ixgbe unused=
0000:1f:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=ens3f1 drv=ixgbe unused=
</pre>
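If you need to pick the PCI address out of this listing programmatically (e.g. in a provisioning script), the `dpdk-devbind.py -s` output is line-oriented and easy to parse. The sketch below is illustrative only; the regular expression is an assumption based on the sample output above, not a documented output format.

```python
import re

# Matches lines like:
# 0000:1f:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=ens3f0 drv=ixgbe unused=
DEVBIND_LINE = re.compile(
    r"^(?P<pci>[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.\d)\s+'(?P<desc>[^']*)'"
    r".*?\bdrv=(?P<drv>\S+)"
)

def parse_devbind(output: str):
    """Return a list of (pci_address, description, kernel_driver) tuples."""
    devices = []
    for line in output.splitlines():
        m = DEVBIND_LINE.match(line.strip())
        if m:
            devices.append((m.group("pci"), m.group("desc"), m.group("drv")))
    return devices

sample = """
Network devices using kernel driver
===================================
0000:1f:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=ens3f0 drv=ixgbe unused=
0000:1f:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=ens3f1 drv=ixgbe unused=
"""
for pci, desc, drv in parse_devbind(sample):
    print(pci, drv)
```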
;2. Load the VFIO-PCI driver:
<pre>modprobe vfio-pci</pre>
;3. Bind the interface to the driver using its PCI address:
<pre>
# This example binds both ports of the X540 card
dpdk-devbind.py -b vfio-pci 0000:1f:00.0 0000:1f:00.1
</pre>
;To unbind a port and return control to the kernel:
<pre>
dpdk-devbind.py -u 0000:1f:00.1
dpdk-devbind.py -b ixgbe 0000:1f:00.1
</pre>
''Note: On some systems, `vfio-pci` may not work correctly. An alternative is the `igb_uio` driver, which may need to be compiled manually. See the [https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html official DPDK driver documentation] for more details.''
== Step 4: Configure VoIPmonitor ==
Finally, configure your `voipmonitor.conf` file to use the DPDK interface.
=== Mandatory Parameters ===
<pre>
# /etc/voipmonitor.conf

# Enable DPDK mode
dpdk = yes

# Tell the sniffer to use the DPDK interface instead of a kernel interface like eth0
interface = dpdk:0

# The PCI address of the network card to use for sniffing
dpdk_pci_device = 0000:1f:00.0

# Assign dedicated CPU cores for the DPDK polling threads.
# These cores should be on the same NUMA node as the NIC.
dpdk_read_thread_cpu_affinity = 2
dpdk_worker_thread_cpu_affinity = 30
</pre>
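A typo in these parameters (most commonly a wrong PCI address) silently leaves the sniffer with no traffic, so it can be worth linting the fragment before restarting the service. The checker below is a hypothetical sketch, assuming only that `voipmonitor.conf` uses simple `key = value` lines with `#` comments; it is not an official validation tool.

```python
import re

REQUIRED = ("dpdk", "interface", "dpdk_pci_device",
            "dpdk_read_thread_cpu_affinity", "dpdk_worker_thread_cpu_affinity")
PCI_RE = re.compile(r"^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.\d$")

def check_dpdk_conf(text: str):
    """Return a list of human-readable problems found in a config fragment."""
    conf = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        if "=" in line:
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip()
    problems = [f"missing parameter: {k}" for k in REQUIRED if k not in conf]
    if conf.get("dpdk") not in (None, "yes"):
        problems.append("dpdk must be set to 'yes'")
    pci = conf.get("dpdk_pci_device")
    if pci and not PCI_RE.match(pci):
        problems.append(f"malformed PCI address: {pci}")
    return problems
```

For example, feeding it the fragment above returns an empty list, while truncating the PCI address to `1f:00.0` reports a malformed address.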
=== Optional Performance Parameters ===
<pre>
# Number of receive queues on the NIC. Default is 2.
dpdk_nb_rxq = 4

# Number of packets to read in a single burst. Default is 32.
# Do not change unless advised by support.
dpdk_pkt_burst = 32

# Number of mbuf segments (x1024) in the memory pool between reader and worker threads.
# The default of 1024 allocates about 2GB of RAM. Increase for >5Gbit traffic.
dpdk_nb_mbufs = 4096

# Restrict all other voipmonitor threads to specific cores, leaving the
# DPDK cores isolated. By default, voipmonitor automatically uses all
# cores EXCEPT those assigned to the DPDK reader/worker.
thread_affinity = 1,3-29,31-59
</pre>
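The whole point of the `thread_affinity` value above is that its expansion must not contain the DPDK cores (2 and 30 in this guide). A small illustrative expander for the `1,3-29,31-59` range notation makes that easy to verify (this is a sketch of the common Linux CPU-list syntax, not code taken from VoIPmonitor):

```python
def expand_cpu_list(spec: str) -> set:
    """Expand a CPU list like '1,3-29,31-59' into a set of core numbers."""
    cores = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cores.update(range(lo, hi + 1))
        else:
            cores.add(int(part))
    return cores

cores = expand_cpu_list("1,3-29,31-59")
print(sorted(cores & {2, 30}))  # [] -- the DPDK cores are left free
print(len(cores))               # 57 cores for all other voipmonitor threads
```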
== Advanced OS Tuning for Maximum Performance ==
For the most demanding environments (6 Gbit/s and above), isolating the dedicated cores from the Linux scheduler is critical.
;1. Isolate CPU Cores:
: Edit `/etc/default/grub` and add `isolcpus=2,30` to the kernel command line. This tells the Linux scheduler not to schedule any general tasks on cores 2 and 30, reserving them exclusively for the DPDK threads.
;2. Enable Tickless Kernel (`NOHZ_FULL`):
: To prevent even periodic timer interrupts from disturbing the polling threads, you can configure the kernel to be "tickless". This is an advanced option that may require compiling a custom kernel with `CONFIG_NO_HZ_FULL=y`. Add the following to your GRUB configuration:
: `nohz=on nohz_full=2,30 rcu_nocbs=2,30`
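Note that `isolcpus`, `nohz_full`, and `rcu_nocbs` must all name exactly the same cores as the DPDK affinity settings; lists that drift apart over time are an easy-to-miss source of packet drops. A small, hypothetical generator can keep the kernel command-line additions in sync with one list of cores:

```python
def grub_isolation_params(dpdk_cores) -> str:
    """Build consistent kernel command-line additions for a set of DPDK cores."""
    cpu_list = ",".join(str(c) for c in sorted(set(dpdk_cores)))
    return f"isolcpus={cpu_list} nohz=on nohz_full={cpu_list} rcu_nocbs={cpu_list}"

print(grub_isolation_params([2, 30]))
# isolcpus=2,30 nohz=on nohz_full=2,30 rcu_nocbs=2,30
```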
== AI Summary for RAG ==
'''Summary:''' This guide provides an expert-level walkthrough for configuring VoIPmonitor with DPDK (Data Plane Development Kit) for high-performance packet capture on multi-gigabit networks. It explains that DPDK bypasses the standard, interrupt-driven Linux kernel stack and uses dedicated CPU cores in "poll-mode" to read packets directly from the NIC, achieving significantly higher throughput. The guide details a four-step process: 1) Ensuring system prerequisites are met (supported NIC, BIOS settings). 2) Preparing the OS by allocating HugePages and enabling IOMMU via GRUB parameters. 3) Using the `dpdk-devbind.py` script to unbind the target network interface from its kernel driver and bind it to a DPDK driver like `vfio-pci`. 4) Configuring `voipmonitor.conf` with mandatory parameters, including `dpdk=yes`, `interface=dpdk:0`, `dpdk_pci_device`, and setting dedicated CPU cores with `dpdk_read_thread_cpu_affinity`. The article also covers advanced OS tuning, such as isolating CPU cores (`isolcpus`) and using a tickless kernel (`nohz_full`) for maximum, jitter-free performance.
'''Keywords:''' dpdk, performance, high throughput, packet capture, kernel bypass, poll-mode, `dpdk-devbind`, vfio-pci, igb_uio, hugepages, iommu, cpu affinity, `isolcpus`, nohz_full, tickless kernel, `dpdk_pci_device`, `dpdk_read_thread_cpu_affinity`, `t0CPU`, packet loss
'''Key Questions:'''
* How can I capture more than 3 Gbit/s of traffic with VoIPmonitor?
* What is DPDK and why should I use it?
* How do I configure DPDK for VoIPmonitor?
* What are HugePages and how do I configure them?
* How do I bind a network card to the DPDK driver?
* What does the `dpdk-devbind.py` script do?
* What are the mandatory `voipmonitor.conf` settings for DPDK?
* How do I isolate CPU cores for maximum packet capture performance?
* What is a "tickless kernel" (`nohz_full`)?