{{DISPLAYTITLE:High-Performance Packet Capture with DPDK}}

'''This is an expert-level guide for configuring the VoIPmonitor sensor to use the Data Plane Development Kit (DPDK) for ultra-high-performance packet capture. This setup is intended for multi-gigabit traffic loads where the standard Linux network stack becomes a bottleneck.'''


== What is DPDK and Why Use It? ==
The '''Data Plane Development Kit (DPDK)''' is a set of libraries and poll-mode drivers that allows an application, like the VoIPmonitor sensor, to bypass the operating system's kernel and interact directly with the network card hardware. It runs on x86, POWER and ARM processors.

*'''Standard Kernel Method:''' In a normal setup, every incoming packet (or group of packets) triggers a CPU interrupt (IRQ), telling the kernel to process it. This interrupt-driven model is reliable but creates significant overhead, typically maxing out around 2-3 Gbit/s on a 10Gbit NIC, as it is limited by the performance of a single CPU core.
*'''DPDK Method:''' DPDK uses '''poll-mode drivers'''. A dedicated CPU core is assigned to constantly poll the network card for new packets, completely avoiding the overhead of kernel interrupts and context switching. This allows for much higher packet throughput, enabling VoIPmonitor to handle 6 Gbit/s (around 3,000,000 packets per second) or more on a single server.

The trade-off is that this setup requires careful system tuning (CPU affinity, HugePages and optionally a tickless kernel) and dedicated CPU cores: the polling thread is sensitive to any scheduler delay, and on an overloaded or misconfigured system even a slight delay can cause packet drops.


== Step 1: System and Hardware Prerequisites ==
* '''Supported NIC:''' You must use a network card supported by DPDK. A list of compatible hardware can be found on the [https://core.dpdk.org/supported/ DPDK supported hardware page]. Intel 10-Gigabit cards (like the X540 or X710 series) are a common choice.
* '''BIOS/UEFI Settings:''' '''VT-d''' (for Intel) or '''AMD-Vi''' (for AMD) virtualization technology must be '''enabled''' in your server's BIOS/UEFI. IOMMU must also be enabled (a quick way to verify what the kernel sees is shown below).
* '''DPDK Version:''' VoIPmonitor requires DPDK version 21.08.0 or newer. It is recommended to download the latest stable release from the [https://core.dpdk.org/download/ official DPDK website].
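Before going further, you can quickly confirm that the kernel sees the IOMMU hardware and identify your NIC's PCI address. This is only a sanity check using standard tools; the exact output differs per platform:
<pre>
# List Ethernet controllers and their PCI addresses
lspci | grep -i ethernet

# Check that the kernel detected VT-d / AMD-Vi; expect DMAR or IOMMU lines here
dmesg | grep -i -e dmar -e iommu
</pre>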


== Step 2: System Preparation (HugePages & IOMMU) ==
DPDK requires specific kernel features and pre-allocated memory to function.

=== A. Configure HugePages ===
DPDK uses large, contiguous blocks of memory called HugePages for its packet buffers (mbufs).
;To temporarily allocate 16 GB of 1G-sized HugePages on NUMA node 0:
<pre>
echo 16 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
</pre>
'''Note:''' Allocate the HugePages on the same NUMA node (CPU socket) that your network card is physically connected to. DPDK only needs them for its mbuf pool, so there is no need to spread them across all NUMA nodes.
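To confirm the allocation succeeded, you can read back the per-node sysfs counter and the system-wide counters. This is a quick check assuming 1 GB pages on node 0, as in the example above:
<pre>
# Pages actually reserved on node 0
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

# System-wide HugePages counters
grep -i huge /proc/meminfo
</pre>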


;To make this permanent, edit the GRUB configuration file (`/etc/default/grub`):
<pre>
# This example allocates 16 x 1GB pages at boot
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1G hugepagesz=1G hugepages=16"
</pre>
After editing, run `update-grub` and reboot. Note that boot-time allocation spreads the HugePages evenly across all NUMA nodes, which you may not need; the runtime method above lets you target only the node that handles the NIC.

;Also make sure hugetlbfs is mounted; you should see something like this:
<pre>
root@voipmon:~# mount | grep -i huge
hugetlbfs on /dev/hugepages type hugetlbfs (rw,nosuid,nodev,relatime,pagesize=2M)
</pre>

=== B. Enable IOMMU ===
The IOMMU (Input-Output Memory Management Unit) is required for the VFIO driver used to bind the NIC to DPDK.
;Edit `/etc/default/grub` and add the following to `GRUB_CMDLINE_LINUX_DEFAULT`:
<pre>
# For Intel CPUs
GRUB_CMDLINE_LINUX_DEFAULT="... iommu=pt intel_iommu=on"

# For AMD CPUs, adjust the IOMMU options accordingly
</pre>
After editing, run `update-grub` and reboot. When the IOMMU is working correctly, the `/sys/kernel/iommu_groups/` directory is populated with one numbered subdirectory per IOMMU group:
<pre>
root@voipmon:~# ls /sys/kernel/iommu_groups/
0  1  2  3  4  5  6  7  8  9  10  11  12  ...
</pre>
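If the directory is empty, first verify that the new options were actually applied at boot (a quick check; the full command line will differ per system), and re-check that VT-d/AMD-Vi is enabled in the BIOS/UEFI:
<pre>
# The iommu options added above should appear here after the reboot
cat /proc/cmdline
</pre>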


== Step 3: Bind the Network Interface to DPDK ==
Once the system is prepared, you must unbind the network interface you want to use for sniffing from its kernel driver and bind it to a DPDK-compatible driver. DPDK addresses the port by its PCI address rather than by an interface name, and the OS will no longer see or be able to use the interface (e.g., it will not appear in `ifconfig` or `ip a`) until you bind it back to the kernel driver.

;1. Find the PCI address of your network card:
<pre>
# This script is included with the DPDK source package
dpdk-devbind.py -s

Network devices using kernel driver
===================================
0000:1f:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=ens3f0 drv=ixgbe unused=
0000:1f:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=ens3f1 drv=ixgbe unused=
</pre>

;2. Load the VFIO-PCI driver (available out of the box on Debian 10/11, no special driver package is needed):
<pre>modprobe vfio-pci</pre>

;3. Bind the interface to the driver using its PCI address:
<pre>
# This example binds both ports of the X540 card
dpdk-devbind.py -b vfio-pci 0000:1f:00.0 0000:1f:00.1
</pre>

;To unbind a port from DPDK and return control to the kernel (here the ixgbe driver), so it appears in the OS again:
<pre>
dpdk-devbind.py -u 0000:1f:00.1
dpdk-devbind.py -b ixgbe 0000:1f:00.1
</pre>

'''Troubleshooting the bind:''' On some systems `vfio-pci` does not work for 10Gbit cards and the probe fails with kernel messages like:
<pre>
Jul 23 07:51:02 voipmon kernel: [267244.930194] vfio-pci: probe of 0000:43:00.0 failed with error -22
Jul 23 07:51:06 voipmon kernel: [267248.595082] vfio-pci: probe of 0000:43:00.1 failed with error -22
</pre>
In that case, make sure VT-d is enabled in the BIOS and that the IOMMU kernel options from Step 2 are present (`iommu=pt intel_iommu=on` on Intel hosts; adjust accordingly for AMD), or fall back to the `igb_uio` driver (for Intel cards), which may need to be compiled manually and loaded with `modprobe igb_uio`. See the [https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html official DPDK driver documentation] for more details.
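Several of the later settings (HugePages placement, CPU affinity) depend on which NUMA node the NIC sits on. One way to find out, assuming the PCI address from the example above, is via sysfs:
<pre>
# Prints the NUMA node of the NIC (-1 on single-node systems)
cat /sys/bus/pci/devices/0000:1f:00.0/numa_node
</pre>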


== Step 4: Configure VoIPmonitor ==
Finally, configure your `voipmonitor.conf` file to use the DPDK interface.

=== Mandatory Parameters ===
<pre>
# /etc/voipmonitor.conf

# Enable DPDK mode
dpdk = yes

# Tell the sniffer to use the DPDK interface instead of a kernel interface like eth0
interface = dpdk:0

# The PCI address of the network card to use for sniffing
dpdk_pci_device = 0000:1f:00.0

# Assign dedicated CPU cores for the DPDK polling threads.
# These cores must be on the same NUMA node as the NIC; the worker core should
# ideally be the hyper-thread sibling of the reader core.
dpdk_read_thread_cpu_affinity = 2
dpdk_worker_thread_cpu_affinity = 30
</pre>
Pinning the reader and worker threads to particular cores also keeps the sniffer from running any of its other threads there. One physical core plus its hyper-thread sibling (cores 2 and 30 in this example) was proven to handle 6 Gbit of traffic on an Intel Xeon E5-2680 v4 @ 2.40GHz; for higher traffic or a less powerful CPU you may need to place the reader and worker threads on two separate physical cores.
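To pick suitable cores, you can list which logical CPUs share a physical core and which NUMA node each belongs to. A quick check with standard tools (core numbering differs between machines, so verify on your own host):
<pre>
# NUMA node and physical core of every logical CPU
lscpu -e=CPU,NODE,CORE

# Hyper-thread sibling(s) of CPU 2
cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list
</pre>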


=== Optional Performance Parameters ===
By default you do not need to change any of these; adjust them only when the defaults do not perform well.
<pre>
# Number of receive queues where packets from the DPDK interface are stored. Default is 2.
dpdk_nb_rxq = 4

# Number of packets processed in one burst from the DPDK interface. Default is 32.
# Do not change unless advised by support; values >= 1024 automatically enable the
# second worker thread.
dpdk_pkt_burst = 32

# Number of mbuf segments (x1024) in the memory pool between the reader and worker
# threads. Each packet buffer is around 2kB, so the default of 1024 allocates about
# 2GB of RAM. A higher value (4096) is recommended for >=5Gbit traffic.
dpdk_nb_mbufs = 4096

# Enable a second worker thread. Default is no; when enabled, dpdk_pkt_burst is set to 2048.
dpdk_worker_slave_thread = yes

# RX ring buffer on the NIC port. Default is 4096, which is also the maximum for the
# Intel X540; other cards may allow more (check with "ethtool -g eth1").
dpdk_nb_rx = 4096

# TX ring buffer. Sending is not used, but DPDK requires the ring. Default is 1024.
dpdk_nb_tx = 1024

# Cache size of the DPDK mempool. Do not change this unless you know exactly what you are doing.
dpdk_mempool_cache_size = 512

# Number of memory bank channels. If not specified, DPDK uses its default value.
dpdk_memory_channels = 4

# If your CPU supports AVX-512 and DPDK was compiled with AVX-512 support, you can
# try setting this to 512. Not set by default.
dpdk_force_max_simd_bitwidth = 512

# Experimental: number of packets (x1024) in the ring buffer holding mbuf references
# between the worker thread and voipmonitor's packet buffer. If not specified it
# equals dpdk_nb_mbufs.
#dpdk_ring_size =

# Restrict all other voipmonitor threads to specific cores. By default voipmonitor
# automatically uses all cores EXCEPT those assigned to the DPDK reader/worker, so
# set this only to reserve cores for other processes or to keep the sniffer on a
# particular NUMA node.
thread_affinity = 1,3-29,31-59
</pre>
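Before raising `dpdk_nb_rx`, you can query the real ring-size limit of your card while it is still bound to the kernel driver. This is only an illustrative check; the interface name and limits depend on your hardware:
<pre>
# "Pre-set maximums" in the output shows the largest supported RX ring size
ethtool -g ens3f0
</pre>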


== Advanced OS Tuning for Maximum Performance ==
For the most demanding environments (6Gbit+), isolating the dedicated cores from the Linux scheduler is critical.

;1. Isolate CPU Cores:
: Edit `/etc/default/grub` and add `isolcpus=2,30` to the kernel command line. This tells the Linux scheduler not to schedule any general tasks on cores 2 and 30 (in this example one physical core and its hyper-thread sibling), reserving them exclusively for the DPDK reader and worker threads configured in Step 4.

;2. Enable Tickless Kernel (`NOHZ_FULL`):
: To prevent even periodic timer interrupts from disturbing the polling threads, you can run the isolated cores in tickless mode (kernel 4.4 or newer). A periodic timer interrupt invalidates the L1 cache on the core and can degrade dataplane performance by an estimated 1-3%; running tickless typically means 1 timer interrupt per second instead of 1000. This requires a kernel built with `CONFIG_NO_HZ_FULL=y` (default Debian kernels do not have this option). Stable, loss-free capture at 6 Gbit/s (3,000,000 packets/s) has been achieved without it, but it may be needed for higher traffic. Add the following to the kernel command line:
: `nohz=on nohz_full=2,30 rcu_nocbs=2,30 rcu_nocb_poll clocksource=tsc`

;3. Memory tuning:
: On servers with more than one physical CPU, turn off automatic NUMA balancing, which causes memory latency, and disable transparent huge pages, which can cause latency and high TLB shootdowns:
<pre>
echo 0 > /proc/sys/kernel/numa_balancing
echo never > /sys/kernel/mm/transparent_hugepage/enabled
</pre>
: To make the transparent huge pages setting permanent, add `transparent_hugepage=never` to the kernel command line in `/etc/default/grub`.

;4. Other kernel parameters:
: The following options were recommended during DPDK implementation and testing, although we are not sure they have any measurable impact (be aware that `mitigations=off` turns off the security patches for discovered CPU security flaws):
<pre>
cpuidle.off=1 skew_tick=1 acpi_irq_nobalance idle=poll transparent_hugepage=never audit=0 nosoftlockup mce=ignore_ce mitigations=off selinux=0 nmi_watchdog=0
</pre>
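After rebooting with the new parameters, the isolation can be verified from sysfs. These files exist on stock kernels; `nohz_full` only lists cores when the kernel was built with `CONFIG_NO_HZ_FULL`:
<pre>
# Cores removed from the general scheduler (isolcpus)
cat /sys/devices/system/cpu/isolated

# Cores running in tickless mode (empty unless CONFIG_NO_HZ_FULL is enabled)
cat /sys/devices/system/cpu/nohz_full
</pre>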
== AI Summary for RAG ==
'''Summary:''' This guide provides an expert-level walkthrough for configuring VoIPmonitor with DPDK (Data Plane Development Kit) for high-performance packet capture on multi-gigabit networks. It explains that DPDK bypasses the standard, interrupt-driven Linux kernel stack and uses dedicated CPU cores in "poll-mode" to read packets directly from the NIC, achieving significantly higher throughput. The guide details a four-step process: 1) Ensuring system prerequisites are met (supported NIC, BIOS settings). 2) Preparing the OS by allocating HugePages and enabling IOMMU via GRUB parameters. 3) Using the `dpdk-devbind.py` script to unbind the target network interface from its kernel driver and bind it to a DPDK driver like `vfio-pci`. 4) Configuring `voipmonitor.conf` with mandatory parameters, including `dpdk=yes`, `interface=dpdk:0`, `dpdk_pci_device`, and setting dedicated CPU cores with `dpdk_read_thread_cpu_affinity`. The article also covers advanced OS tuning, such as isolating CPU cores (`isolcpus`) and using a tickless kernel (`nohz_full`) for maximum, jitter-free performance.
'''Keywords:''' dpdk, performance, high throughput, packet capture, kernel bypass, poll-mode, `dpdk-devbind`, vfio-pci, igb_uio, hugepages, iommu, cpu affinity, `isolcpus`, nohz_full, tickless kernel, `dpdk_pci_device`, `dpdk_read_thread_cpu_affinity`, packet loss
'''Key Questions:'''
* How can I capture more than 3 Gbit/s of traffic with VoIPmonitor?
* What is DPDK and why should I use it?
* How do I configure DPDK for VoIPmonitor?
* What are HugePages and how do I configure them?
* How do I bind a network card to the DPDK driver?
* What does the `dpdk-devbind.py` script do?
* What are the mandatory `voipmonitor.conf` settings for DPDK?
* How do I isolate CPU cores for maximum packet capture performance?
* What is a "tickless kernel" (`nohz_full`)?
