This is an expert-level guide for configuring the VoIPmonitor sensor to use the Data Plane Development Kit (DPDK) for ultra-high-performance packet capture. This setup is intended for multi-gigabit traffic loads where the standard Linux network stack becomes a bottleneck.

== What is DPDK and Why Use It? ==

The '''Data Plane Development Kit (DPDK)''' is a set of libraries and drivers that allows an application to bypass the operating system's kernel and interact directly with the network card hardware.

* '''Standard Kernel Method:''' Every incoming packet triggers a CPU interrupt (IRQ), telling the kernel to process it. This interrupt-driven model is reliable but creates significant overhead, typically maxing out around 2-3 Gbit/s on a 10Gbit NIC (see the quick check after this list).
* '''DPDK Method:''' Uses '''poll-mode drivers'''. A dedicated CPU core constantly polls the NIC for new packets, completely avoiding interrupt overhead and context switching. This enables throughput of 6+ Gbit/s on a single server.
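
To see the interrupt-driven path for yourself, you can watch the IRQ counters of a kernel-managed interface before it is handed over to DPDK; the interface name <code>ens3f0</code> is simply the example used later in this guide:

<syntaxhighlight lang="bash">
# Per-queue interrupt counters for a kernel-managed NIC (example name: ens3f0)
grep ens3f0 /proc/interrupts

# Watch the counters increase in real time under traffic
watch -n 1 "grep ens3f0 /proc/interrupts"
</syntaxhighlight>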




{{Note|1=The trade-off is that DPDK requires careful system tuning and dedicated CPU cores. Scheduler delays on polling cores cause packet drops.}}


== Step 1: Hardware and System Prerequisites ==


{| class="wikitable"
|-
! Requirement !! Details
|-
| '''Supported NIC''' || Must be [https://core.dpdk.org/supported/ DPDK-compatible]. Intel X540/X710 series are common choices.
|-
| '''BIOS/UEFI''' || Enable '''VT-d''' (Intel) or '''AMD-Vi''' (AMD) virtualization technology. Enable '''IOMMU'''.
|-
| '''DPDK Version''' || VoIPmonitor requires DPDK 21.08.0 or newer. Download from [https://core.dpdk.org/download/ dpdk.org].
|}
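
A quick way to sanity-check these requirements from a shell, assuming the standard <code>lspci</code>, <code>lscpu</code> and <code>dmesg</code> tools are available (exact output varies by distribution):

<syntaxhighlight lang="bash">
# List installed NICs and their models (compare against the DPDK supported-hardware list)
lspci | grep -i ethernet

# Confirm the CPU exposes virtualization support (VT-x/VT-d on Intel, AMD-V/AMD-Vi on AMD)
lscpu | grep -i virtualization

# Look for IOMMU / VT-d initialization messages in the kernel log
dmesg | grep -i -e dmar -e iommu
</syntaxhighlight>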


== Step 2: System Preparation (GRUB Configuration) ==


DPDK requires HugePages for memory allocation and IOMMU for the VFIO driver.


=== Complete GRUB Configuration ===


Edit <code>/etc/default/grub</code> and add all required parameters:


<syntaxhighlight lang="bash">
# For Intel CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on"

# For AMD CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt amd_iommu=on"
</syntaxhighlight>


Apply and reboot:

<syntaxhighlight lang="bash">
update-grub
reboot
</syntaxhighlight>

{{Warning|1=The <code>transparent_hugepage=never</code> parameter is '''critical'''. Without it, DPDK may fail with "Permission denied" errors after reboot.}}


=== Verify Configuration ===


After reboot, verify:

<syntaxhighlight lang="bash">
# Check HugePages are allocated
cat /proc/meminfo | grep HugePages

# Check IOMMU is active (should show subdirectories)
ls /sys/kernel/iommu_groups/
</syntaxhighlight>

{{Note|1=Allocate HugePages on the same NUMA node as your NIC. For runtime allocation: <code>echo 16 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages</code>}}
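
To confirm which NUMA node the capture NIC is attached to (so HugePages and CPU affinity end up on the correct socket), query sysfs. The PCI address below is the example address used throughout this guide:

<syntaxhighlight lang="bash">
# NUMA node of the NIC (prints -1 on single-socket systems)
cat /sys/bus/pci/devices/0000:1f:00.0/numa_node

# 1G HugePages currently allocated on each NUMA node
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
</syntaxhighlight>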


== Step 3: Bind the Network Interface to DPDK ==


Once the system is prepared, unbind the NIC from the kernel driver and bind it to DPDK. The interface will no longer be visible to the OS (e.g., it will not appear in <code>ip a</code>).


<syntaxhighlight lang="bash">
# 1. Find the PCI address of your NIC
dpdk-devbind.py -s

# Output example:
# 0000:1f:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f0 drv=ixgbe unused=

# 2. Load the VFIO-PCI driver
modprobe vfio-pci

# 3. Bind the NIC to DPDK
dpdk-devbind.py -b vfio-pci 0000:1f:00.0

# To unbind and return to kernel:
dpdk-devbind.py -u 0000:1f:00.0
dpdk-devbind.py -b ixgbe 0000:1f:00.0
</syntaxhighlight>

{{Note|1=If <code>vfio-pci</code> doesn't work, try the <code>igb_uio</code> driver (may require manual compilation). See [https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html DPDK driver documentation].}}
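
Bindings made with <code>dpdk-devbind.py</code> do not persist across reboots. The unit file below is a minimal sketch of one way to re-apply the binding at boot on a systemd-based system; the unit name, the path to <code>dpdk-devbind.py</code> and the <code>voipmonitor.service</code> ordering are assumptions to adapt to your environment:

<syntaxhighlight lang="ini">
# /etc/systemd/system/dpdk-bind.service (example sketch)
[Unit]
Description=Bind capture NIC to vfio-pci for DPDK
# Ensure the NIC is bound before the sensor starts (adjust if your sensor unit is named differently)
Before=voipmonitor.service
After=network-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe vfio-pci
ExecStart=/usr/local/bin/dpdk-devbind.py -b vfio-pci 0000:1f:00.0

[Install]
WantedBy=multi-user.target
</syntaxhighlight>

Enable it with <code>systemctl enable dpdk-bind.service</code> and check it with <code>systemctl status dpdk-bind.service</code> after the next reboot.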


== Step 4: Configure VoIPmonitor ==


=== Mandatory Parameters ===

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf

dpdk = yes
interface = dpdk:0
dpdk_pci_device = 0000:1f:00.0

# CPU cores must be on the same NUMA node as the NIC
dpdk_read_thread_cpu_affinity = 2
dpdk_worker_thread_cpu_affinity = 30
</syntaxhighlight>
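
To choose affinity cores on the same socket as the NIC, list which CPU cores belong to each NUMA node and match them against the NIC's node (see the sysfs check in Step 2):

<syntaxhighlight lang="bash">
# CPU core ranges grouped by NUMA node
lscpu | grep -i numa

# Alternative per-node view
cat /sys/devices/system/node/node0/cpulist
</syntaxhighlight>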


=== Performance Tuning ===


<syntaxhighlight lang="ini">
# Increase ring buffer to reduce imissed drops
dpdk_nb_rx = 16384

# Larger burst size for efficiency
dpdk_pkt_burst = 512

# Enable RSS for multi-queue distribution
dpdk_nb_rxq = 4
dpdk_nb_rxq_rss = yes

# Increase memory pool for >5Gbit traffic
dpdk_nb_mbufs = 4096

# Restrict other threads to non-DPDK cores
thread_affinity = 1,3-29,31-59
</syntaxhighlight>
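
Once the sensor is running, you can verify where its threads actually land using standard <code>ps</code> fields (<code>psr</code> is the CPU a thread last ran on); the process name <code>voipmonitor</code> is assumed here, and the DPDK reader/worker threads should stay on the cores configured above:

<syntaxhighlight lang="bash">
# Show each voipmonitor thread and the CPU it last ran on
ps -C voipmonitor -L -o pid,tid,psr,comm
</syntaxhighlight>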


== Troubleshooting: imissed Packet Drops ==


The <code>imissed</code> counter indicates packets dropped by the NIC because they couldn't be read fast enough.


{{Warning|1=<code>imissed</code> drops occur at the NIC level, not in VoIPmonitor. The NIC's hardware buffer fills faster than packets are read.}}


'''Solutions:'''


{| class="wikitable"
|-
! Cause !! Solution
|-
| Ring buffer too small || <code>dpdk_nb_rx = 16384</code>
|-
| Burst size too low || <code>dpdk_pkt_burst = 512</code>
|-
| Poor packet distribution || Enable <code>dpdk_nb_rxq_rss = yes</code>
|-
| CPU scheduler interference || Use <code>isolcpus</code> (see below)
|}


Monitor with: <code>tail -f /var/log/voipmonitor.log | grep imissed</code>

== Advanced OS Tuning ==

For 6+ Gbit/s environments, isolate DPDK cores from the Linux scheduler:

<syntaxhighlight lang="bash">
# Add to /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2,30 nohz=on nohz_full=2,30 rcu_nocbs=2,30"
</syntaxhighlight>

* <code>isolcpus</code>: Prevents the scheduler from using these cores
* <code>nohz_full</code>: Disables timer interrupts on isolated cores (tickless)
* <code>rcu_nocbs</code>: Moves RCU callbacks off isolated cores

{{Note|1=<code>nohz_full</code> may require a kernel compiled with <code>CONFIG_NO_HZ_FULL=y</code>.}}
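
After rebooting with these parameters, a few read-only checks (standard procfs/sysfs entries) confirm that the isolation actually took effect:

<syntaxhighlight lang="bash">
# Kernel command line actually in use
cat /proc/cmdline

# Cores isolated from the scheduler (should list 2,30 in this example)
cat /sys/devices/system/cpu/isolated

# Cores running tickless (empty if CONFIG_NO_HZ_FULL is not enabled)
cat /sys/devices/system/cpu/nohz_full
</syntaxhighlight>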


== Parameter Quick Reference ==

{| class="wikitable"
|-
! Parameter !! Default !! Description
|-
| <code>dpdk</code> || no || Enable DPDK mode
|-
| <code>interface</code> || - || Set to <code>dpdk:0</code> for DPDK
|-
| <code>dpdk_pci_device</code> || - || PCI address of NIC (e.g., 0000:1f:00.0)
|-
| <code>dpdk_read_thread_cpu_affinity</code> || - || CPU core for polling thread
|-
| <code>dpdk_worker_thread_cpu_affinity</code> || - || CPU core for worker thread
|-
| <code>dpdk_nb_rx</code> || 512 || Receive ring buffer size (increase to 16384 for high traffic)
|-
| <code>dpdk_pkt_burst</code> || 32 || Packets per burst (increase to 512 for efficiency)
|-
| <code>dpdk_nb_rxq</code> || 2 || Number of receive queues
|-
| <code>dpdk_nb_rxq_rss</code> || no || Enable RSS for multi-queue distribution
|-
| <code>dpdk_nb_mbufs</code> || 1024 || Memory pool size (x1024 segments, ~2GB default)
|}


== See Also ==

* [[Scaling]] - General performance tuning guide
* [[Napatech]] - Alternative high-performance capture using Napatech SmartNICs
* [[Sniffer_configuration]] - Complete sniffer configuration reference


== AI Summary for RAG ==


'''Summary:''' This guide provides an expert-level walkthrough for configuring VoIPmonitor with DPDK (Data Plane Development Kit) for high-performance packet capture on multi-gigabit networks (6+ Gbit/s). DPDK bypasses the interrupt-driven Linux kernel stack using dedicated CPU cores in "poll-mode" to read packets directly from the NIC. The setup requires: (1) DPDK-compatible NIC with VT-d/AMD-Vi enabled in BIOS; (2) GRUB configuration for HugePages (<code>hugepages=16</code>), IOMMU (<code>intel_iommu=on</code>), and critically <code>transparent_hugepage=never</code> to prevent permission errors; (3) Binding NIC to DPDK using <code>dpdk-devbind.py -b vfio-pci</code>; (4) VoIPmonitor config with <code>dpdk=yes</code>, <code>interface=dpdk:0</code>, <code>dpdk_pci_device</code>, and CPU affinity settings. For <code>imissed</code> packet drops, increase <code>dpdk_nb_rx=16384</code> and <code>dpdk_pkt_burst=512</code>. Advanced tuning uses <code>isolcpus</code> and <code>nohz_full</code> for complete CPU isolation.


'''Keywords:''' dpdk, performance, high throughput, packet capture, kernel bypass, poll-mode, dpdk-devbind, vfio-pci, igb_uio, hugepages, transparent_hugepage, iommu, cpu affinity, isolcpus, nohz_full, tickless kernel, dpdk_pci_device, dpdk_read_thread_cpu_affinity, dpdk_nb_rx, dpdk_pkt_burst, dpdk_nb_rxq_rss, imissed, packet loss, 10gbit, multi-gigabit


'''Key Questions:'''

* How can I capture more than 3 Gbit/s of traffic with VoIPmonitor?
* What is DPDK and why should I use it?
* How do I configure DPDK for VoIPmonitor?
* What are HugePages and how do I configure them for DPDK?
* How do I bind a network card to the DPDK driver?
* What does the dpdk-devbind.py script do?
* What are the mandatory voipmonitor.conf settings for DPDK?
* How do I fix imissed packet drops with DPDK?
* What does transparent_hugepage=never do and why is it required?
* How do I isolate CPU cores for maximum packet capture performance?
* What is a tickless kernel (nohz_full)?
* What is the difference between vfio-pci and igb_uio drivers?


[[Category:Configuration]]
[[Category:Installation]]
[[Category:Performance]]
