This is an expert-level guide for configuring the VoIPmonitor sensor to use the Data Plane Development Kit (DPDK) for ultra-high-performance packet capture. This setup is intended for multi-gigabit traffic loads where the standard Linux network stack becomes a bottleneck.
== What is DPDK and Why Use It? ==
The '''Data Plane Development Kit (DPDK)''' is a set of libraries and drivers that allows an application to bypass the operating system's kernel and interact directly with the network card hardware.
* '''Standard Kernel Method:''' Every incoming packet triggers a CPU interrupt (IRQ), telling the kernel to process it. This interrupt-driven model is reliable but creates significant overhead, typically maxing out around 2-3 Gbit/s on a 10Gbit NIC.
* '''DPDK Method:''' Uses '''poll-mode drivers'''. A dedicated CPU core constantly polls the NIC for new packets, completely avoiding interrupt overhead and context switching. This enables throughput of 6+ Gbit/s on a single server.
<kroki lang="mermaid">
</kroki>
{{Note|1=The trade-off is that DPDK requires careful system tuning and dedicated CPU cores. Scheduler delays on polling cores cause packet drops.}}
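A quick back-of-envelope budget shows why interrupts become the bottleneck at these rates. This is illustrative arithmetic only; the 500-byte average packet size is an assumption, not a measured value:

<syntaxhighlight lang="bash">
# Time budget per packet at 10 Gbit/s, assuming 500-byte average packets.
bits_per_sec=$((10 * 1000 * 1000 * 1000))
pkt_bits=$((500 * 8))
pps=$((bits_per_sec / pkt_bits))      # packets per second
ns_per_pkt=$((1000000000 / pps))      # nanoseconds available per packet
echo "${pps} pps, ~${ns_per_pkt} ns per packet"
</syntaxhighlight>

At 2.5 million packets per second there are only ~400 ns to handle each packet; a single interrupt plus context switch can easily cost more than that, which is why a busy-polling core wins at these rates.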
== Step 1: Hardware and System Prerequisites ==
{| class="wikitable"
|-
! Requirement !! Details
|-
| '''Supported NIC''' || Must be [https://core.dpdk.org/supported/ DPDK-compatible]. Intel X540/X710 series are common choices.
|-
| '''BIOS/UEFI''' || Enable '''VT-d''' (Intel) or '''AMD-Vi''' (AMD) virtualization technology. Enable '''IOMMU'''.
|-
| '''DPDK Version''' || VoIPmonitor requires DPDK 21.08.0 or newer. Download from [https://core.dpdk.org/download/ dpdk.org].
|}
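To confirm an installed DPDK meets the 21.08.0 minimum, a <code>sort -V</code> comparison works on any GNU system. The <code>version_ge</code> helper below is local to this snippet (not part of DPDK); <code>pkg-config --modversion libdpdk</code> is the standard way to read the installed version on DPDK 20.11+:

<syntaxhighlight lang="bash">
# Returns success if $1 >= $2 in dotted-version order.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

dpdk_ver="21.11.2"   # on a real host: dpdk_ver=$(pkg-config --modversion libdpdk)
if version_ge "$dpdk_ver" "21.08.0"; then
  echo "DPDK $dpdk_ver meets the 21.08.0 minimum"
else
  echo "DPDK $dpdk_ver is too old for VoIPmonitor"
fi
</syntaxhighlight>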
== Step 2: System Preparation (GRUB Configuration) ==
DPDK requires HugePages for memory allocation and IOMMU for the VFIO driver.
=== Complete GRUB Configuration ===
Edit <code>/etc/default/grub</code> and add all required parameters:
<syntaxhighlight lang="bash">
# For Intel CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on"
# For AMD CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt amd_iommu=on"
</syntaxhighlight>
Apply and reboot:
<syntaxhighlight lang="bash">
update-grub
reboot
</syntaxhighlight>
{{Warning|1=The <code>transparent_hugepage=never</code> parameter is '''critical'''. Without it, DPDK may fail with "Permission denied" errors after reboot.}}
=== Verify Configuration ===
After reboot, verify:
<syntaxhighlight lang="bash">
# Check HugePages are allocated
cat /proc/meminfo | grep HugePages
# Check IOMMU is active (should show subdirectories)
ls /sys/kernel/iommu_groups/
</syntaxhighlight>
{{Note|1=Allocate HugePages on the same NUMA node as your NIC. For runtime allocation: <code>echo 16 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages</code>}}
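The HugePages counters can also be checked programmatically. The snippet below inlines a sample <code>/proc/meminfo</code> excerpt so the parsing is reproducible; on a real host, run the same <code>awk</code> lines against <code>/proc/meminfo</code> itself:

<syntaxhighlight lang="bash">
# Sample /proc/meminfo excerpt (values match the 16 pages this guide allocates).
meminfo='HugePages_Total:      16
HugePages_Free:       16
Hugepagesize:    1048576 kB'

total=$(printf '%s\n' "$meminfo" | awk '/HugePages_Total/ {print $2}')
free_hp=$(printf '%s\n' "$meminfo" | awk '/HugePages_Free/ {print $2}')
echo "allocated=$total free=$free_hp"
# On a freshly booted system all pages should be free; if Total is 0,
# the GRUB change did not take effect.
</syntaxhighlight>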
== Step 3: Bind the Network Interface to DPDK ==
Once the system is prepared, unbind the NIC from the kernel driver and bind it to DPDK. The interface will no longer be visible to the OS.
<syntaxhighlight lang="bash">
# 1. Find the PCI address of your NIC
dpdk-devbind.py -s
# Output example:
# 0000:1f:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f0 drv=ixgbe unused=

# 2. Load the VFIO-PCI driver
modprobe vfio-pci

# 3. Bind the NIC to DPDK
dpdk-devbind.py -b vfio-pci 0000:1f:00.0

# To unbind and return to kernel:
dpdk-devbind.py -u 0000:1f:00.0
dpdk-devbind.py -b ixgbe 0000:1f:00.0
</syntaxhighlight>
{{Note|1=If <code>vfio-pci</code> doesn't work, try the <code>igb_uio</code> driver (may require manual compilation). See [https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html DPDK driver documentation].}}
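Scripted setups often need the PCI address and current driver extracted from the <code>dpdk-devbind.py -s</code> status output. The parsing below runs against the example line from above (inlined so it is reproducible); on a live host pipe real <code>dpdk-devbind.py -s</code> output through the same commands:

<syntaxhighlight lang="bash">
line="0000:1f:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f0 drv=ixgbe unused="
pci=$(printf '%s\n' "$line" | awk '{print $1}')
drv=$(printf '%s\n' "$line" | grep -o 'drv=[^ ]*' | cut -d= -f2)
echo "pci=$pci driver=$drv"
</syntaxhighlight>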
== Step 4: Configure VoIPmonitor ==
=== Mandatory Parameters ===
<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf
dpdk = yes
interface = dpdk:0
dpdk_pci_device = 0000:1f:00.0
# CPU cores must be on the same NUMA node as the NIC
dpdk_read_thread_cpu_affinity = 2
dpdk_worker_thread_cpu_affinity = 30
</syntaxhighlight>
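Picking affinity cores on the wrong NUMA node silently costs throughput, so it is worth checking. The snippet inlines sample <code>lscpu -p=CPU,NODE</code> output for a hypothetical two-node box; on a real host substitute <code>lscpu_out=$(lscpu -p=CPU,NODE | grep -v '^#')</code> and read the NIC's node from sysfs as shown in the comment:

<syntaxhighlight lang="bash">
# Sample lscpu -p=CPU,NODE output: cpu,node pairs (hypothetical topology).
lscpu_out='2,0
3,1
30,0'
nic_node=0   # on a real host: cat /sys/bus/pci/devices/0000:1f:00.0/numa_node

ok=1
for cpu in 2 30; do   # the read/worker affinity cores configured above
  node=$(printf '%s\n' "$lscpu_out" | awk -F, -v c="$cpu" '$1==c {print $2}')
  [ "$node" = "$nic_node" ] || { echo "cpu $cpu is on node $node, not $nic_node"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "affinity cores match NIC NUMA node $nic_node"
</syntaxhighlight>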
=== Performance Tuning ===
<syntaxhighlight lang="ini">
# Increase ring buffer to reduce imissed drops
dpdk_nb_rx = 16384
# Larger burst size for efficiency
dpdk_pkt_burst = 512
# Enable RSS for multi-queue distribution
dpdk_nb_rxq = 4
dpdk_nb_rxq_rss = yes
# Increase memory pool for >5Gbit traffic
dpdk_nb_mbufs = 4096
# Restrict other threads to non-DPDK cores
thread_affinity = 1,3-29,31-59
</syntaxhighlight>
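Because <code>dpdk_nb_mbufs</code> is a multiplier (x1024 segments), raising it has a large memory cost that must fit inside the allocated HugePages. Assuming roughly 2 KB per mbuf (a typical segment size; the exact figure depends on the build), the footprint can be estimated:

<syntaxhighlight lang="bash">
nb_mbufs=4096          # dpdk_nb_mbufs value from the config above
mbuf_bytes=2048        # assumed ~2 KB per mbuf
total_mb=$(( nb_mbufs * 1024 * mbuf_bytes / 1024 / 1024 ))
echo "~${total_mb} MB of HugePage-backed pool memory"
# ~8 GB fits comfortably inside the 16 x 1GB HugePages allocated in Step 2.
</syntaxhighlight>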
== Troubleshooting: imissed Packet Drops ==
The <code>imissed</code> counter indicates packets dropped by the NIC because they couldn't be read fast enough.
{{Warning|1=<code>imissed</code> drops occur at the NIC level, not in VoIPmonitor. The NIC's hardware buffer fills faster than packets are read.}}
'''Solutions:'''
{| class="wikitable"
|-
! Cause !! Solution
|-
| Ring buffer too small || <code>dpdk_nb_rx = 16384</code>
|-
| Burst size too low || <code>dpdk_pkt_burst = 512</code>
|-
| Poor packet distribution || Enable <code>dpdk_nb_rxq_rss = yes</code>
|-
| CPU scheduler interference || Use <code>isolcpus</code> (see below)
|}
Monitor with: <code>tail -f /var/log/voipmonitor.log | grep imissed</code>
== Advanced OS Tuning ==
For 6+ Gbit/s environments, isolate DPDK cores from the Linux scheduler:
<syntaxhighlight lang="bash">
# Add to /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2,30 nohz=on nohz_full=2,30 rcu_nocbs=2,30"
</syntaxhighlight>
* <code>isolcpus</code>: Prevents scheduler from using these cores
* <code>nohz_full</code>: Disables timer interrupts on isolated cores (tickless)
* <code>rcu_nocbs</code>: Moves RCU callbacks off isolated cores
{{Note|1=<code>nohz_full</code> may require kernel compiled with <code>CONFIG_NO_HZ_FULL=y</code>.}}
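After reboot, it is worth confirming the kernel actually accepted the isolation parameters. The check below parses a sample <code>/proc/cmdline</code> string (inlined for reproducibility); on a live host read the real files as noted in the comments:

<syntaxhighlight lang="bash">
# Sample kernel command line; on a real host: cmdline=$(cat /proc/cmdline)
cmdline="BOOT_IMAGE=/vmlinuz root=/dev/sda1 isolcpus=2,30 nohz_full=2,30 rcu_nocbs=2,30"
isolated=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^isolcpus=//p')
echo "isolcpus requested: $isolated"
# The authoritative runtime view: cat /sys/devices/system/cpu/isolated
</syntaxhighlight>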
== Parameter Quick Reference ==
{| class="wikitable"
|-
! Parameter !! Default !! Description
|-
| <code>dpdk</code> || no || Enable DPDK mode
|-
| <code>interface</code> || - || Set to <code>dpdk:0</code> for DPDK
|-
| <code>dpdk_pci_device</code> || - || PCI address of NIC (e.g., 0000:1f:00.0)
|-
| <code>dpdk_read_thread_cpu_affinity</code> || - || CPU core for polling thread
|-
| <code>dpdk_worker_thread_cpu_affinity</code> || - || CPU core for worker thread
|-
| <code>dpdk_nb_rx</code> || 512 || Receive ring buffer size (increase to 16384 for high traffic)
|-
| <code>dpdk_pkt_burst</code> || 32 || Packets per burst (increase to 512 for efficiency)
|-
| <code>dpdk_nb_rxq</code> || 2 || Number of receive queues
|-
| <code>dpdk_nb_rxq_rss</code> || no || Enable RSS for multi-queue distribution
|-
| <code>dpdk_nb_mbufs</code> || 1024 || Memory pool size (x1024 segments, ~2GB default)
|}
== See Also ==
* [[Scaling]] - General performance tuning guide
* [[Napatech]] - Alternative high-performance capture using Napatech SmartNICs
* [[Sniffer_configuration]] - Complete sniffer configuration reference
== AI Summary for RAG ==
'''Summary:''' This guide provides an expert-level walkthrough for configuring VoIPmonitor with DPDK (Data Plane Development Kit) for high-performance packet capture on multi-gigabit networks (6+ Gbit/s). DPDK bypasses the interrupt-driven Linux kernel stack using dedicated CPU cores in "poll-mode" to read packets directly from the NIC. The setup requires: (1) DPDK-compatible NIC with VT-d/AMD-Vi enabled in BIOS; (2) GRUB configuration for HugePages (<code>hugepages=16</code>), IOMMU (<code>intel_iommu=on</code>), and critically <code>transparent_hugepage=never</code> to prevent permission errors; (3) Binding NIC to DPDK using <code>dpdk-devbind.py -b vfio-pci</code>; (4) VoIPmonitor config with <code>dpdk=yes</code>, <code>interface=dpdk:0</code>, <code>dpdk_pci_device</code>, and CPU affinity settings. For <code>imissed</code> packet drops, increase <code>dpdk_nb_rx=16384</code> and <code>dpdk_pkt_burst=512</code>. Advanced tuning uses <code>isolcpus</code> and <code>nohz_full</code> for complete CPU isolation.
'''Keywords:''' dpdk, performance, high throughput, packet capture, kernel bypass, poll-mode, dpdk-devbind, vfio-pci, igb_uio, hugepages, transparent_hugepage, iommu, cpu affinity, isolcpus, nohz_full, tickless kernel, dpdk_pci_device, dpdk_read_thread_cpu_affinity, dpdk_nb_rx, dpdk_pkt_burst, dpdk_nb_rxq_rss, imissed, packet loss, 10gbit, multi-gigabit
'''Key Questions:'''
* How can I capture more than 3 Gbit/s of traffic with VoIPmonitor?
* What is DPDK and why should I use it?
* How do I configure DPDK for VoIPmonitor?
* What are HugePages and how do I configure them for DPDK?
* How do I bind a network card to the DPDK driver?
* What does the dpdk-devbind.py script do?
* What are the mandatory voipmonitor.conf settings for DPDK?
* How do I fix imissed packet drops with DPDK?
* What does transparent_hugepage=never do and why is it required?
* How do I isolate CPU cores for maximum packet capture performance?
* What is a tickless kernel (nohz_full)?
* What is the difference between vfio-pci and igb_uio drivers?
[[Category:Configuration]]
[[Category:Installation]]
[[Category:Performance]]