= Introduction =
'''This guide provides a comprehensive overview of performance tuning and scaling for VoIPmonitor. It covers the three primary system bottlenecks and offers practical, expert-level advice for optimizing your deployment for high traffic loads.'''


In addition to running on a single server, VoIPmonitor can run in distributed mode, where multiple remote sensors write to one central database and a single GUI provides access to the data from all sensors.
== Understanding Performance Bottlenecks ==
A VoIPmonitor deployment's maximum capacity is determined by three potential bottlenecks. Identifying and addressing the correct one is key to achieving high performance.
# '''Packet Capturing (CPU & Network Stack):''' The ability of a single CPU core to read packets from the network interface. This is often the first limit you will encounter.
# '''Disk I/O (Storage):''' The speed at which the sensor can write PCAP files to disk. This is critical when call recording is enabled.
# '''Database Performance (MySQL/MariaDB):''' The rate at which the database can ingest Call Detail Records (CDRs) and serve data to the GUI.


VoIPmonitor spreads its work across all available CPU cores. On a modern, well-tuned server (e.g., 24-core Xeon, 10Gbit NIC), a single instance can handle up to '''10,000 concurrent calls''' with full RTP analysis and packet capture to disk (around 1.6 Gbit/s of VoIP traffic), or over '''60,000 concurrent calls''' with SIP-only analysis. If you need help sizing or deploying a sensor, do not hesitate to contact support@voipmonitor.org.


== 1. Optimizing Packet Capturing (CPU & Network) ==
The most performance-critical task is the initial packet capture, which is handled by a single, highly optimized thread (t0). If this thread's CPU usage (`t0CPU` in logs) approaches 100%, you are hitting the capture limit. Here are the primary methods to optimize it, from easiest to most advanced.


=== A. Use a Modern Linux Kernel & VoIPmonitor Build ===
Modern Linux kernels (3.2+) and VoIPmonitor builds include '''TPACKET_V3''' support, a high-speed packet capture mechanism. This is the single most important factor for high performance.
* '''Recommendation:''' Always use a recent Linux distribution (like AlmaLinux, Rocky Linux, or Debian) and the latest VoIPmonitor static binary. With this combination, a standard Intel 10Gbit NIC can often handle up to 2 Gbit/s of VoIP traffic without special drivers.
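A quick way to verify that a host meets these recommendations is to check the running kernel and the NIC driver in use; this is only an informational check (replace `eth0` with your capture interface):
<pre>
uname -r          # kernel should be 3.2 or newer for TPACKET_V3
ethtool -i eth0   # shows the NIC driver, its version and the firmware version
</pre>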


=== B. Network Stack & Driver Tuning ===
For high-traffic environments (>500 Mbit/s), fine-tuning the network driver and kernel parameters is essential.


==== NIC Ring Buffer ====
The ring buffer is a queue between the network card driver and the VoIPmonitor application. A larger buffer prevents packet loss during short CPU usage spikes.
# '''Check maximum size:'''
#<pre>ethtool -g eth0</pre>
# '''Set to maximum (e.g., 16384):'''
#<pre>ethtool -G eth0 rx 16384</pre>
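The first command reports both the hardware maximum and the current setting; its output typically looks something like the following (the exact values depend on the NIC and driver):
<pre>
Ring parameters for eth0:
Pre-set maximums:
RX:    4096
TX:    4096
Current hardware settings:
RX:    512
TX:    512
</pre>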


==== Interrupt Coalescing ====
This setting batches multiple hardware interrupts into one, reducing CPU overhead.
#<pre>ethtool -C eth0 rx-usecs 1022</pre>


==== Applying Settings Persistently ====
To make these settings permanent, add them to your network configuration. For Debian/Ubuntu using `/etc/network/interfaces`:
<pre>
auto eth0
iface eth0 inet manual
    up ip link set $IFACE up
    up ip link set $IFACE promisc on
    up ethtool -G $IFACE rx 16384
    up ethtool -C $IFACE rx-usecs 1022
</pre>
''Note: Modern systems using NetworkManager or systemd-networkd require different configuration methods.''
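For example, on a system managed by NetworkManager, the same tuning can be applied from a dispatcher script; the following is a minimal sketch that assumes the capture interface is `eth0`:
<pre>
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/30-ethtool  (must be executable and owned by root)
# NetworkManager passes the interface name as $1 and the event as $2
if [ "$1" = "eth0" ] && [ "$2" = "up" ]; then
    ethtool -G "$1" rx 16384        # enlarge the RX ring buffer
    ethtool -C "$1" rx-usecs 1022   # batch hardware interrupts
fi
</pre>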


==== IRQ Balancing and NIC Queues ====
For high throughput (>500 Mbit/s), especially on kernels older than 3.x, RX interrupts from the NIC are not balanced across CPU cores by default. If syslog shows CPU0 running at 100%, check `/proc/interrupts` to verify that the RX queues are not all bound to CPU0; the simplest fix is to upgrade to a 3.x or newer kernel, which spreads the IRQs across cores automatically. It also helps to limit the number of RX/TX queues on the card: the default is one queue per CPU core, which adds unnecessary overhead, and six queues are sufficient for up to 2 Gbit/s of traffic on a 10Gbit Intel card. For the ixgbe driver:
 modprobe ixgbe DCA=6,6 RSS=6,6
To make this the default, create `/etc/modprobe.d/ixgbe.conf` containing:
 options ixgbe DCA=6,6 RSS=6,6
=== C. Advanced Offloading and Kernel-Bypass Solutions ===
If kernel and driver tuning are insufficient, you can offload the capture process entirely by bypassing the kernel's network stack.


* '''DPDK (Data Plane Development Kit):''' DPDK is a set of libraries and drivers for fast packet processing. VoIPmonitor can leverage DPDK to read packets directly from the network card, completely bypassing the kernel and significantly reducing CPU overhead. This is a powerful, open-source solution for achieving multi-gigabit capture rates on commodity hardware. For detailed installation and configuration instructions, see the [[DPDK|official DPDK guide]]; a rough sketch of the host preparation appears after this list.


* '''PF_RING ZC/DNA:''' A commercial software driver from ntop.org that also dramatically reduces CPU load by bypassing the kernel. In tests, it can reduce CPU usage from 90% to as low as 20% for the same traffic load.


* '''Napatech SmartNICs:''' Specialized hardware acceleration cards that deliver packets to VoIPmonitor with near-zero CPU overhead (<3% CPU for 10 Gbit/s traffic). This is the ultimate solution for extreme performance requirements.
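To give a sense of what kernel-bypass setup involves, preparing a host for DPDK typically means reserving hugepages and binding the capture NIC to a userspace driver. The commands below are a generic sketch only (the PCI address `0000:01:00.0` is an example); the VoIPmonitor-specific configuration is covered in the [[DPDK]] guide:
<pre>
# Reserve 2MB hugepages for the packet pool (size is an example)
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Load a userspace I/O driver and bind the capture NIC to it
modprobe vfio-pci
dpdk-devbind.py --status                      # list NICs and their current drivers
dpdk-devbind.py --bind=vfio-pci 0000:01:00.0  # the NIC disappears from the kernel
</pre>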


== 2. Optimizing Disk I/O ==
VoIPmonitor's modern storage engine is highly optimized to minimize random disk access, which is the primary cause of I/O bottlenecks.


=== VoIPmonitor Storage Strategy ===
Instead of writing a separate PCAP file for each call (which causes massive random I/O), VoIPmonitor groups all calls starting within the same minute into a single compressed `.tar` archive. This changes the I/O pattern from thousands of small, random writes to a few large, sequential writes, reducing IOPS (I/O Operations Per Second) by a factor of 10 or more: around 2,000 concurrent calls with SIP, RTP and graph recording drop from roughly 250 to 40 IOPS, and a SIP-only sensor with 60,000 concurrent calls drops from roughly 4,000 to 10 IOPS. Since a single 7,200 RPM SATA drive delivers only about 300 IOPS, it can typically handle up to 2,000 concurrent calls with full recording.
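To see how many write operations the spool disk is actually doing, you can watch it with `iostat` from the sysstat package (replace `sda` with your spool device; the `w/s` column shows write IOPS):
<pre>
iostat -x 5 /dev/sda
</pre>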


=== Filesystem Tuning (ext4) ===
For the spool directory (`/var/spool/voipmonitor`), using an optimized ext4 filesystem can improve performance.
*'''Example setup:'''
<pre>
# Format partition without a journal (use with caution, requires battery-backed RAID controller)
mke2fs -t ext4 -O ^has_journal /dev/sda2


# Add to /etc/fstab for optimal performance
/dev/sda2  /var/spool/voipmonitor  ext4    errors=remount-ro,noatime,data=writeback,barrier=0 0 0
</pre>
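If the spool partition already exists and you prefer not to reformat it, the same tweaks can be applied to the '''unmounted''' filesystem with `tune2fs` (again assuming `/dev/sda2`):
<pre>
tune2fs -O ^has_journal /dev/sda2
tune2fs -o journal_data_writeback /dev/sda2
</pre>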


=== RAID Controller Cache Policy ===
A misconfigured RAID controller is a common bottleneck. For database and spool workloads, the cache policy should always be set to '''WriteBack''', not WriteThrough. This requires a healthy Battery Backup Unit (BBU). If the BBU is dead or missing, you may need to force this setting.
* ''The specific commands vary by vendor: `megacli` for LSI MegaRAID, `hpacucli`/`ssacli` for HP Smart Array, `perccli` for Dell PERC. Typical examples for LSI and HP controllers are shown below; consult the vendor documentation for other models.''
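For example, on LSI MegaRAID controllers the cache policy can be checked and changed with `megacli`, and on HP Smart Array controllers with `hpacucli` (the adapter, logical-drive and slot numbers below are examples; repeat the commands for every logical drive):
<pre>
# LSI MegaRAID: check the current cache policy of logical drive 0 on adapter 0
megacli -LDGetProp -Cache -L0 -a0
#   Adapter 0-VD 0(target id: 0): Cache Policy:WriteThrough, ReadAheadNone, Direct, No Write Cache if bad BBU

# Switch to WriteBack; if the BBU is bad or missing, allow the write cache first
megacli -LDSetProp -WB -L0 -a0
megacli -LDSetProp CachedBadBBU -Lall -aAll
megacli -LDSetProp -WB -L0 -a0

# HP Smart Array: show the controller, enable the cache and tune the read/write ratio
hpacucli ctrl slot=0 show
hpacucli ctrl slot=0 ld all modify arrayaccelerator=enable
hpacucli ctrl slot=0 modify dwc=enable
hpacucli ctrl slot=0 modify cacheratio=50/50
</pre>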


== 3. Optimizing Database Performance (MySQL/MariaDB) ==
A well-tuned database is critical for both data ingestion from the sensor and responsiveness of the GUI.


=== Key Configuration Parameters ===
These settings should be placed in your `my.cnf` or a file in `/etc/mysql/mariadb.conf.d/`; a consolidated example follows the list below.


* '''`innodb_buffer_pool_size`''': '''This is the most important setting.''' It defines the amount of memory InnoDB uses to cache both data and indexes. A good starting point is 50-70% of the server's total available RAM. For a dedicated database server, 8GB is a good minimum, with 32GB or more being optimal for large databases.


* '''`innodb_flush_log_at_trx_commit = 2`''': The default value of `1` forces a log flush to disk for every single transaction, which is very slow without a high-end, battery-backed RAID controller. Setting it to `2` relaxes this: logs are flushed to the OS cache on commit and written to disk once per second, which dramatically improves write performance with a minimal risk of data loss (at most 1-2 seconds) in case of a full OS crash. When bulk-importing CDRs or altering the `cdr` table, it is worth temporarily setting the value to `0` and disabling the binlog.


* '''`innodb_file_per_table = 1`''': This instructs InnoDB to store each table and its indexes in its own `.ibd` file, rather than in one giant, monolithic `ibdata1` file. This is essential for performance, management, and features like table compression and partitioning.


* '''LZ4 Compression:''' For modern MySQL/MariaDB versions, using LZ4 for page compression offers a great balance of reduced storage size and minimal CPU overhead.
<pre>
# In my.cnf [mysqld] section
innodb_compression_algorithm=lz4
# For MariaDB, you may also need to set default table formats:
# mysqlcompress_type = PAGE_COMPRESSED=1 in voipmonitor.conf
</pre>
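Putting the parameters above together, a minimal `[mysqld]` section could look like the following sketch (adjust `innodb_buffer_pool_size` to 50-70% of the RAM available to the database):
<pre>
[mysqld]
innodb_buffer_pool_size        = 8G
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
innodb_compression_algorithm   = lz4
</pre>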


=== Database Partitioning ===
Partitioning is a feature where VoIPmonitor automatically splits large tables (such as `cdr`) into smaller, more manageable pieces, typically one partition per day. It is enabled by default and is '''highly recommended'''; it can be disabled with `cdr_partition = no` in voipmonitor.conf. Note that there is no conversion procedure for an existing non-partitioned database: to benefit from partitioning, start with a freshly created database. A quick way to inspect the partitions is shown after the list below.
* '''Benefits:'''
** Massively improves query performance in the GUI, because only the partitions relevant to the selected time range need to be scanned.
** Allows old data to be deleted instantly by dropping a partition, which is thousands of times faster than running a `DELETE` query on millions of rows.
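A quick, read-only way to confirm that daily partitions exist and see how they fill up is to query the information schema (this assumes the default `voipmonitor` database name):
<pre>
SELECT PARTITION_NAME, TABLE_ROWS
  FROM information_schema.PARTITIONS
 WHERE TABLE_SCHEMA = 'voipmonitor' AND TABLE_NAME = 'cdr'
 ORDER BY PARTITION_NAME;
</pre>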


== 4. Monitoring Live Performance ==
VoIPmonitor logs detailed performance metrics to syslog every 10 seconds (the sniffer must run with verbosity `-v 1` or higher). You can watch them live:
<pre>
tail -f /var/log/syslog  # Debian/Ubuntu
tail -f /var/log/messages # CentOS/RHEL
</pre>
A sample log line:
<code>voipmonitor[15567]: calls[315][355] PS[C:4 S:29/29 R:6354 A:6484] SQLq[C:0 M:0 Cl:0] heap[0|0|0] comp[54] [12.6Mb/s] tarQ[1865] tarCPU[12.0|9.2|3.4|18.4%] t0CPU[5.2%] t1CPU[1.2%] t2CPU[0.9%] tacCPU[4.6|3.0|3.7|4.5%] RSS/VSZ[323|752]MB</code>


Packets flow from the NIC driver into a kernel ring buffer (controlled by the `ringbuffer` option, 50 MB by default, up to 2000 MB), which absorbs short spikes when the sniffer does not get enough CPU time; if it overflows, the loss is reported to syslog and the buffer should be enlarged. From there packets are copied into VoIPmonitor's internal heap (`packetbuffer_total_maxheap`, 200 MB by default), compressed with the fast snappy algorithm (roughly a 50% ratio for G.711 calls), handed to the SIP/RTP processing threads, and finally either discarded or passed to the write queue. The fields in the status line are:
*'''`voipmonitor[15567]`''' - 15567 is the PID of the sniffer process.
*'''`calls[X][Y]`''' - X is the number of calls currently in memory; Y is the total number of calls in memory (active calls plus the queue buffer), including SIP registrations.
*'''`PS[C: S: R: A:]`''' - per-second counters. C: new calls per second; S: X/Y, where X is valid SIP packets per second on SIP ports and Y is all packets per second on SIP ports; R: RTP packets per second belonging to calls tracked by VoIPmonitor; A: all packets per second.
*'''`SQLq[...]`''' - the number of SQL statements (INSERTs) waiting to be written to MySQL. If this number keeps growing, the database cannot keep up; see the database tuning section above.
*'''`heap[A|B|C]`''' - A: % of the heap memory in use (at 100% VoIPmonitor can no longer process packets in real time due to CPU or I/O); B: % of the packet buffer in use; C: % of the asynchronous write buffers in use (at 100% I/O is blocking, the heap will grow, the ring buffer will fill and packet loss will follow).
*'''`hoverruns`''' - increments whenever the heap buffer fills completely. While the heap is full the primary thread stops reading from the ring buffer; if the ring buffer also fills, packets are lost and the loss is logged to syslog.
*'''`comp[54]`''' - compression buffer ratio (if compression is enabled).
*'''`[12.6Mb/s]`''' - total network throughput.
*'''`tarQ[1865]`''' - number of open tar files (with `tar = yes`, the default since sniffer version 11).
*'''`tarCPU[12.0|9.2|3.4|18.4%]`''' - CPU utilization of the tar compression threads; the maximum number of threads is set by `tar_maxthreads` (4 by default).
*'''`t0CPU[5.2%]`''' - '''the most important CPU metric.''' Thread 0 reads packets from the kernel ring buffer and cannot be parallelized. If it consistently exceeds 90-95%, the server is at its packet capture limit; contact support@voipmonitor.org if you hit this limit.
*'''`t1CPU[1.2%]`''' - thread 1 reads packets from thread 0, adds them to the packet buffer and compresses them (if compression is enabled).
*'''`t2CPU[0.9%]`''' - at start there is a single processing thread (pb). If it exceeds 50%, three additional threads are spawned (hash control, hash computation, and a thread that moves packets to the RTP threads); if pb remains above 50% a further thread (d) is created, if d exceeds 50% a SIP pre-processing thread (s) is created, and if s exceeds 50% an extended thread (e) that searches for and creates Call structures is created.
*'''`tacCPU[N0|N1|...]`''' - CPU utilization of the threads compressing pcap files (or internal memory when `tar = yes`, the default); the number of threads grows automatically.
*'''`RSS/VSZ[323|752]MB`''' - RSS (resident set size) is the physical memory the sniffer actually uses; VSZ (virtual size) additionally includes mapped files, shared libraries and memory shared with other processes.
See [[Logging]] for more details.


[[File:kernelstandarddiagram.png]]

A good tool for watching per-thread CPU usage is [http://htop.sourceforge.net/ htop]. See also [[Pcap_worksheet]].

[[File:ntop.png]]


== AI Summary for RAG ==
 
'''Summary:''' This article is an expert guide to scaling and performance tuning VoIPmonitor for high-traffic environments. It identifies the three main system bottlenecks: Packet Capturing (CPU-bound), Disk I/O (for PCAP storage), and Database Performance (MySQL/MariaDB). For packet capture, it details tuning network card drivers with `ethtool`, the benefits of modern kernels with `TPACKET_V3`, and advanced kernel-bypass solutions like DPDK, PF_RING, and Napatech SmartNICs. For I/O, it explains VoIPmonitor's efficient TAR-based storage and provides tuning tips for ext4 filesystems and RAID controller cache policies. The largest section focuses on MySQL/MariaDB tuning, emphasizing the importance of `innodb_buffer_pool_size`, `innodb_flush_log_at_trx_commit`, `innodb_file_per_table`, and native LZ4 compression. It also explains the critical role of database partitioning for query performance and data retention. Finally, it details how to interpret live performance statistics from the syslog to diagnose bottlenecks.
'''Keywords:''' performance tuning, scaling, high throughput, bottleneck, CPU bound, t0CPU, packet capture, TPACKET_V3, DPDK, ethtool, ring buffer, interrupt coalescing, PF_RING, Napatech, I/O bottleneck, IOPS, filesystem tuning, ext4, RAID cache, WriteBack, megacli, ssacli, MySQL performance, MariaDB tuning, innodb_buffer_pool_size, innodb_flush_log_at_trx_commit, innodb_file_per_table, LZ4 compression, database partitioning, syslog, monitoring, high calls per second, CPS
 
'''Key Questions:'''
* How do I scale VoIPmonitor for thousands of concurrent calls?
* What are the main performance bottlenecks in VoIPmonitor?
* How can I fix high t0CPU usage?
* What is DPDK and when should I use it?
* What are the best `my.cnf` settings for a high-performance VoIPmonitor database?
* How does database partitioning work and why is it important?
* My sniffer is dropping packets, how do I fix it?
* How do I interpret the performance metrics in the syslog?
* Should I use a dedicated database server for VoIPmonitor?
 