Sniffer detailed architecture
This document describes the internal architecture of the VoIPmonitor sensor (sniffer). It covers the threading model, buffer architecture, packet processing pipeline, and database write mechanisms. Understanding these internals helps administrators diagnose performance issues and tune the sensor for optimal performance.
For deployment topology and configuration, see:
- Deployment & Topology Guide - Where to deploy sensors
- Configuration Reference - All config parameters
- Scaling Guide - Performance tuning
- Troubleshooting Guide - Common issues
Architecture Overview
The VoIPmonitor sniffer uses a multi-stage pipeline architecture:
- Packet Capture (t0) - Single thread reads packets from kernel ring buffer
- Packet Buffer - User-space queue for packet distribution
- Preprocessing - Multiple threads parse SIP/RTP headers
- Call Assembly - Correlates packets into calls, calculates metrics
- Output - Parallel threads write PCAPs to disk and CDRs to database
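Viewed end to end, the stages form a linear flow (diagram derived from the list above):
```
NIC -> kernel ring buffer -> t0 (capture) -> packet buffer (user space)
    -> preprocessing threads -> call assembly / quality analysis
    -> PCAP writers (disk) + SQL writers (database)
```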
Threading Model
VoIPmonitor uses a multi-threaded architecture with specialized threads for different tasks. Understanding which thread is bottlenecked helps target optimizations.
Thread Types
| Thread | Function | Scaling | Monitor |
|---|---|---|---|
| t0 | Packet capture from kernel | Single thread (cannot scale) | `t0CPU` in logs |
| Preprocessing | SIP/RTP header parsing | Multiple threads | Thread count in logs |
| RTP Processing | Jitterbuffer simulation, MOS calculation | Per-call | CPU usage |
| PCAP Writers | Compress and write PCAP files | `pcap_dump_writethreads_max` | I/O wait |
| SQL Writers | Insert CDRs into database | `mysqlstore_max_threads_cdr` | `SQLq` in logs |
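To see which of these threads is saturated, inspect per-thread CPU usage with standard tools; a quick check (assuming a single voipmonitor process):
```
# Show per-thread CPU usage for the sniffer process
top -H -p $(pidof voipmonitor)
```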
The Critical t0 Thread
The t0 thread is the most critical component of the sniffer. It runs on a single CPU core and reads all packets from the network interface. If t0CPU approaches 100%, packets will be dropped.
⚠️ Warning: The t0 thread cannot be parallelized. If it becomes a bottleneck, you must either reduce load (filters, disable features) or use kernel-bypass solutions (DPDK, PF_RING, Napatech).
Monitoring t0CPU:
```
# View current t0CPU in real-time
journalctl -u voipmonitor -f | grep t0CPU

# Example output showing healthy t0CPU (23.4%):
# t0CPU[23.4%] t1CPU[0.7%] t2CPU[0.3%] rss/vsize[2.1G/14.6G]
```
Symptoms of t0 overload:
- `t0CPU > 90%` in logs
- Increasing packet drops: check with `ip -s link show eth0`
- Missing call legs or incomplete CDRs
Solutions:
- Use `interface_ip_filter` instead of a BPF `filter`
- Disable jitterbuffer analysis if not needed
- Upgrade to kernel-bypass: DPDK, Napatech
Buffer Architecture
VoIPmonitor uses multiple buffer layers to handle traffic bursts and prevent packet loss.
Ring Buffer (Kernel Space)
The ring buffer is a circular queue in kernel memory where the NIC driver places incoming packets. VoIPmonitor reads from this buffer using TPACKET_V3.
| Parameter | Default | Description |
|---|---|---|
| `ringbuffer` | 50 | Size in MB (per interface) |
```
# /etc/voipmonitor.conf
# Increase for high-traffic or bursty environments
ringbuffer = 200
```
💡 Tip: Increase ringbuffer if you see "ring buffer overflow" messages or during traffic spikes. Typical values: 50-500 MB depending on traffic volume.
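A rough sizing illustration: at a sustained 500 Mbps of captured traffic (~62.5 MB/s), `ringbuffer = 200` absorbs roughly 3 seconds of processing stall before the kernel starts dropping packets; at 1 Gbps (~125 MB/s) the same buffer covers only ~1.6 seconds.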
Packet Buffer (User Space)
After reading from the ring buffer, packets are queued in user-space memory for processing by worker threads.
| Parameter | Default | Description |
|---|---|---|
| `max_buffer_mem` | 2000 | Maximum memory in MB for packet buffering |
```
# /etc/voipmonitor.conf
# Increase for servers with ample RAM
max_buffer_mem = 4000
```
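Buffer fill can also be watched in the syslog status line; in recent sniffer versions the `heap[...]` field tracks packet-buffer usage (see Syslog_Status_Line). For example:
```
# Watch packet-buffer fill in the status line
journalctl -u voipmonitor -f | grep -o 'heap\[[^]]*\]'
```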
Symptoms of buffer exhaustion:
- Log message: `PACKETBUFFER: MEMORY IS FULL`
- Increasing packet drops
Solutions:
- Increase `max_buffer_mem`
- Add more preprocessing threads
- Investigate database bottleneck (see Database Write Pipeline)
Query Cache (Disk-based)
When the database cannot keep up with CDR inserts, VoIPmonitor queues SQL statements to disk files (`qoq*` files in the spool directory). This prevents data loss during database outages or slowdowns.
| Parameter | Default | Description |
|---|---|---|
| `query_cache` | no | Enable disk-based SQL queue |
```
# /etc/voipmonitor.conf
# CRITICAL: Enable for production systems
query_cache = yes
```
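When the cache is active, queued statements show up as files in the spool directory; a quick way to gauge backlog (default spool path assumed):
```
# Inspect disk-queued SQL files (default spool directory assumed)
ls -lh /var/spool/voipmonitor/qoq* 2>/dev/null | tail
```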
⚠️ Warning: With `query_cache` disabled, pending SQL statements are held only in memory; if the sniffer stops or crashes during a database outage, those queued CDRs are lost. Enable it on any production system.
Packet Processing Pipeline
Stage 1: Packet Capture
The t0 thread reads packets using Linux's high-performance TPACKET_V3 interface (or DPDK/Napatech if configured).
Capture sources supported:
- Standard Linux interfaces (eth0, bond0, etc.)
- VLAN-tagged traffic
- Tunneled traffic: GRE, ERSPAN, VXLAN, TZSP, HEP
- AudioCodes Debug Recording
- DPDK or Napatech for kernel bypass
Stage 2: Packet Classification
Packets are classified by protocol:
- SIP - Matched by port (`sipport` config) and content inspection
- RTP/RTCP - Matched by correlation with SIP SDP or heuristics
- Other - Tunneling protocols, management traffic
```
# /etc/voipmonitor.conf
# Define SIP ports (comma-separated or ranges)
sipport = 5060,5061,5080
```
Stage 3: Call Assembly
VoIPmonitor correlates packets into calls using multiple methods:
| Method | Used For | Identifier |
|---|---|---|
| Call-ID | SIP dialog correlation | Call-ID header |
| SSRC | RTP stream correlation | RTP SSRC field |
| SDP Ports | RTP-to-SIP binding | Ports from SDP offer/answer |
| Custom Headers | Multi-leg correlation | `matchheader` config |
For complex scenarios with multiple call legs, see Call Correlation Guide.
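To make the SDP-port method above concrete, here is a minimal Python sketch (hypothetical helper names, not VoIPmonitor's actual C++ implementation): each SIP dialog advertises its RTP endpoint in the SDP body, and later RTP packets are attributed to the call whose SDP announced that (ip, port) pair.
```
# Minimal sketch of SDP-port call correlation (illustrative only)
import re

rtp_endpoint_to_callid = {}  # (ip, port) -> SIP Call-ID

def register_sdp(call_id, sdp):
    """Record the RTP endpoint announced in an SDP offer/answer."""
    ip = re.search(r"^c=IN IP4 (\S+)", sdp, re.M)
    port = re.search(r"^m=audio (\d+)", sdp, re.M)
    if ip and port:
        rtp_endpoint_to_callid[(ip.group(1), int(port.group(1)))] = call_id

def classify_rtp(src, dst):
    """Return the Call-ID owning an RTP packet ((ip, port) tuples), or None."""
    return rtp_endpoint_to_callid.get(dst) or rtp_endpoint_to_callid.get(src)

register_sdp("abc123@pbx", "v=0\nc=IN IP4 10.0.0.5\nm=audio 10002 RTP/AVP 0\n")
print(classify_rtp(("10.0.0.9", 40000), ("10.0.0.5", 10002)))  # abc123@pbx
```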
Stage 4: Quality Analysis
For each RTP stream, VoIPmonitor calculates:
- Packet Loss - Missing sequence numbers
- Jitter - Packet delay variation (see the sketch after this list)
- MOS Score - Simulated Mean Opinion Score (three variants: F1, F2, adapt)
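The jitter figure follows the standard RTP estimator from RFC 3550; as a minimal sketch (a simplified illustration, not VoIPmonitor's exact jitterbuffer code):
```
# Minimal sketch of RFC 3550 interarrival jitter (section 6.4.1)
def update_jitter(jitter, transit, prev_transit):
    """transit = arrival_time - RTP timestamp, both in timestamp units.

    J(i) = J(i-1) + (|D(i-1,i)| - J(i-1)) / 16
    """
    d = abs(transit - prev_transit)   # |D(i-1, i)|
    return jitter + (d - jitter) / 16.0
```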
Stage 5: Output
Completed calls are written to:
- PCAP files - Raw packet captures (grouped into TAR archives per minute)
- Database - CDR records with all metadata and quality metrics
Database Write Pipeline
The database write pipeline is often the bottleneck in high-traffic deployments.
Key Parameters
| Parameter | Default | Description |
|---|---|---|
| `mysqlstore_max_threads_cdr` | 1 | Parallel CDR insert threads |
| `quick_save_cdr` | no | Faster CDR saving (yes/quick) |
| `query_cache` | no | Disk-based SQL queue |
| `cdr_partition` | yes | Daily table partitioning |
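A hedged starting point for a busy deployment might look like this (illustrative values, not a universal recommendation; tune against your own SQLq readings):
```
# /etc/voipmonitor.conf
# Illustrative values for a high-traffic system - adjust to your load
mysqlstore_max_threads_cdr = 3
quick_save_cdr = yes
query_cache = yes
cdr_partition = yes
```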
Monitoring SQL Queue
The SQLq metric in logs shows pending SQL statements:
```
# Monitor SQL queue in real-time
journalctl -u voipmonitor -f | grep SQLq

# Example output:
# SQLq[cdr: 0] SQLf[cdr: 0]      # Healthy - no backlog
# SQLq[cdr: 5234] SQLf[cdr: 12]  # Backlog - database slow
```
When SQLq is growing:
- Database cannot keep up with insert rate
- Check MySQL performance: `innodb_buffer_pool_size`, disk I/O
- Increase `mysqlstore_max_threads_cdr` (with caution)
- See Database Troubleshooting for detailed guidance
Manager API
The sniffer exposes a TCP management interface for GUI communication, scripting, and monitoring.
Main article: Manager_API
| Setting | Default | Description |
|---|---|---|
| `managerip` | 127.0.0.1 | Bind address |
| `managerport` | 5029 | TCP port |
| `managersocket` | (none) | Unix socket path (alternative) |
Quick examples:
```
# Get version (requires encryption disabled or socket)
echo 'sniffer_version' | nc -U /tmp/vm_manager_socket

# List active calls
echo 'listcalls' | nc -U /tmp/vm_manager_socket
```
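If encryption is disabled (for example, on sniffers older than 2024.02.2), the same commands can also be sent over the TCP port; an illustrative example:
```
# Query over TCP instead of the Unix socket (unencrypted manager only)
echo 'listcalls' | nc 127.0.0.1 5029
```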
ℹ️ Note: Since sniffer 2024.02.2, Manager API uses encryption by default. See Manager_API#Encryption for details.
Memory Management
VoIPmonitor's memory usage depends on:
- Number of concurrent calls
- Buffer sizes (ringbuffer, max_buffer_mem)
- Call recording settings
- SQL queue depth
Monitoring Memory
```
# Memory shown in logs (RSS = physical, VSZ = virtual)
journalctl -u voipmonitor -f | grep rss

# Example: rss/vsize[2.1G/14.6G]
```
For detailed explanation of all syslog metrics, see Syslog_Status_Line.
Preventing OOM
| Symptom | Cause | Solution |
|---|---|---|
| OOM killer terminates sniffer | Insufficient RAM | Add RAM or reduce `max_buffer_mem` |
| Memory grows continuously | SQL queue backlog | Fix database performance |
| High VSZ, normal RSS | Normal behavior | Virtual memory is pre-allocated, not consumed |
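To confirm the OOM killer was responsible for a termination, check the kernel log:
```
# Check whether the kernel OOM killer terminated voipmonitor
dmesg -T | grep -i -E 'out of memory|killed process'
```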
See Also
- Manager_API - Manager API reference and commands
- Syslog_Status_Line - Understanding the status line metrics
- Configuration Reference - All config parameters
- Scaling Guide - Performance tuning
- Database Troubleshooting - SQL queue issues
- DPDK Guide - Kernel bypass for high traffic
- Napatech Integration - Hardware acceleration
- Troubleshooting - Common issues
AI Summary for RAG
Summary: This document describes the internal architecture of the VoIPmonitor sniffer. The sniffer uses a multi-stage pipeline: (1) t0 thread captures packets from kernel ring buffer using TPACKET_V3, (2) packets are queued in user-space packet buffer (max_buffer_mem), (3) preprocessing threads parse SIP/RTP, (4) call assembly correlates packets into calls using Call-ID/SSRC/SDP, (5) parallel threads write PCAPs to disk and CDRs to database. Critical metrics: t0CPU (must stay below 90%), SQLq (database queue depth), rss/vsize (memory usage). Key buffers: ringbuffer (kernel, default 50MB), max_buffer_mem (user space, default 2000MB), query_cache (disk-based SQL queue for reliability). Manager API on port 5029 provides control interface for GUI and CLI tools.
Keywords: sniffer architecture, t0 thread, t0CPU, ringbuffer, max_buffer_mem, packet buffer, query_cache, SQLq, threading model, packet capture, TPACKET_V3, call assembly, RTP correlation, manager API, port 5029, memory management, OOM, database pipeline, mysqlstore_max_threads_cdr, quick_save_cdr
Key Questions:
- What is the t0 thread and why is it critical?
- How do I monitor t0CPU and what does high t0CPU mean?
- What is the ringbuffer and how do I size it?
- What is max_buffer_mem and when should I increase it?
- What does "PACKETBUFFER: MEMORY IS FULL" mean?
- What is query_cache and why should I enable it?
- How do I monitor the SQL queue (SQLq)?
- What is the manager API and what port does it use?
- How does VoIPmonitor correlate packets into calls?
- What causes OOM errors and how do I prevent them?
- How many threads does VoIPmonitor use?
- What is the difference between ringbuffer and max_buffer_mem?