Syslog Status Line

Every 10 seconds, VoIPmonitor outputs a status line to syslog containing real-time metrics about calls, CPU usage, memory, and queue sizes. This page explains each metric and what to do when values indicate problems.

Overview

The status line is logged to:

  • Debian/Ubuntu: /var/log/syslog
  • CentOS/RHEL: /var/log/messages

Example status line:

Nov 26 07:22:06 voipmonitor voipmonitor[2518]: calls[424][424] PS[C:4 S:41/41 R:13540 A:13583] SQLq[C:0 M:0 Cl:0] heap[0|0|0] comp[48] [25.6Mb/s] t0CPU[7.0%] t1CPU[2.5%] t2CPU[1.6%] tacCPU[8.0|8.0|6.8|6.9%] RSS/VSZ[365|1640]MB

Monitor in real-time:

journalctl -u voipmonitor -f
# or
tail -f /var/log/syslog | grep voipmonitor

Metric Reference

calls[X][Y] - Active Calls

calls[424][424]
      │    └── Total calls in memory (including finishing)
      └─────── Active calls (in progress)
Value              | Meaning                          | Action
-------------------|----------------------------------|----------------------
Both equal         | Normal operation                 | None
Second > First     | Calls finishing, cleanup pending | Normal
Very high numbers  | High traffic or stuck calls      | Check for call leaks
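
A quick way to watch this gauge over time is to pull the first bracket out of the log. A minimal sketch, assuming the Debian log path (adjust for /var/log/messages on CentOS/RHEL):

# Extract the active-call count from recent status lines
grep -oP 'calls\[\K[0-9]+' /var/log/syslog | tail -n 20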

PS[C:x S:y/z R:a A:b] - Packet Statistics

PS[C:4 S:41/41 R:13540 A:13583]
   │   │       │       └── A: All packets processed
   │   │       └────────── R: RTP packets
   │   └────────────────── S: SIP packets (current/total)
   └────────────────────── C: Control packets

💡 Tip: If R (RTP) is 0 but S (SIP) shows traffic, check that RTP ports are included in capture or NAT aliases are configured.
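
A minimal sketch of that check, using gawk to compare the S and R counters on each status line (log path as above):

# Flag intervals with SIP signalling but zero RTP
gawk '/voipmonitor/ && match($0, /S:([0-9]+)/, s) && match($0, /R:([0-9]+)/, r) {
    if (s[1]+0 > 0 && r[1]+0 == 0) print "SIP but no RTP:", $0
}' /var/log/syslog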

SQLq[C:x M:y Cl:z] - SQL Queue

Critical metric for database health.

SQLq[C:0 M:0 Cl:0]
     │   │    └── Cl: Cleanup queries pending
     │   └─────── M: Message/register queries pending
     └─────────── C: CDR queries pending
Value                | Status                          | Action
---------------------|---------------------------------|--------------------------
All zeros            | Healthy - DB keeping up         | None
Growing slowly       | Warning - DB slightly behind    | Monitor trend
Growing continuously | Critical - DB bottleneck        | See SQL Queue Growing
Stuck at high value  | Critical - DB connection issue  | Check MySQL connectivity

Alternative format (older versions):

SQLq[cdr: 5234] SQLf[cdr: 12]
     │               └── On-disk queue (query_cache files)
     └────────────────── In-memory queue
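
To watch the trend live, the C counter can be extracted as it is logged; a steadily climbing number means the database is falling behind. A sketch, assuming the Debian log path:

# Print the pending CDR-queue depth from each new status line
tail -f /var/log/syslog | grep --line-buffered -oP 'SQLq\[C:\K[0-9]+'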

heap[A|B|C] - Memory Buffers

heap[0|0|0]
     │ │ └── C: Processing heap usage %
     │ └──── B: Secondary buffer usage %
     └────── A: Primary packet buffer usage %
Value                | Status                   | Action
---------------------|--------------------------|------------------------------------------
All < 20%            | Healthy                  | None
Any > 50%            | Warning - buffer filling | Investigate bottleneck
Any approaching 100% | Critical                 | Increase max_buffer_mem or fix bottleneck

⚠️ Warning: If heap reaches 100%, you'll see PACKETBUFFER: MEMORY IS FULL and packets will be dropped.
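
A small watcher for this condition, assuming the Debian log path; it parses the three heap percentages and warns past 50%:

# Warn when any heap buffer exceeds 50% usage
tail -f /var/log/syslog | gawk 'match($0, /heap\[([0-9.]+)\|([0-9.]+)\|([0-9.]+)\]/, h) {
    if (h[1]+0 > 50 || h[2]+0 > 50 || h[3]+0 > 50)
        print "heap warning:", h[1] "|" h[2] "|" h[3]
}'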

comp[x] - Compression Threads

comp[48]
     └── Number of active compression threads for PCAP/audio files

High values indicate heavy disk write activity. If consistently maxed out, consider:

  • Increasing pcap_dump_writethreads_max
  • Faster storage (SSD/NVMe)

[X.X Mb/s] - Traffic Rate

[25.6Mb/s]
     └── Current network traffic rate being processed

Useful for capacity planning and detecting traffic spikes.
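
For example, the rate can be pulled out of the log history to build a simple baseline (log path is an assumption; adjust per distro):

# Collect recent traffic-rate samples in Mb/s for capacity planning
grep -oP '\[\K[0-9.]+(?=Mb/s\])' /var/log/syslog | tail -n 100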

t0CPU[X%] - Packet Capture Thread

Most critical CPU metric.

t0CPU[7.0%]
      └── CPU usage of the main packet capture thread
Value   | Status   | Action
--------|----------|--------------------------
< 50%   | Healthy  | None
50-80%  | Warning  | Plan capacity upgrade
> 90%   | Critical | Packets will be dropped!

⚠️ Warning: The t0 thread cannot be parallelized. If it hits 100%, you must reduce load (filters, disable features) or use kernel bypass (DPDK, Napatech).
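
A minimal alert on this threshold, assuming the Debian log path:

# Warn whenever the capture thread crosses 90% CPU
tail -f /var/log/syslog | gawk 'match($0, /t0CPU\[([0-9.]+)%\]/, c) {
    if (c[1]+0 > 90) print strftime("%F %T"), "t0CPU critical:", c[1] "%"
}'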

t1CPU[X%], t2CPU[X%] - Processing Threads

t1CPU[2.5%] t2CPU[1.6%]

Secondary processing threads. These can scale with traffic automatically.

t2CPU Detailed Breakdown

When t2CPU shows high usage, the detailed breakdown helps identify the bottleneck:

t2CPU[pb:10.5/ d:39.2/ s:24.6/ e:17.3/ c:6.8/ g:6.4/ r:7.3/ rm:24.6/ rh:16.7/ rd:19.3/]
Code | Function            | Description
-----|---------------------|------------------------------------
pb   | Packet buffer       | Output from packet buffer
d    | Dispatch            | Creating structures for processing
s    | SIP parsing         | Parsing SIP headers
e    | Entity lookup       | Finding/creating calls
c    | Call processing     | Processing call packets
g    | Register processing | Processing REGISTER packets
r    | RTP processing      | Processing RTP packets
rm   | RTP move            | Moving RTP packets for processing
rh   | RTP hash            | RTP hash table lookups
rd   | RTP dispatch        | Dispatching to RTP read queue

Thread auto-scaling:

  • If d > 50% → s thread starts (SIP parsing)
  • If s > 50% → e thread starts (entity lookup)
  • If e > 50% → c, g, r threads start
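
To read a breakdown at a glance, the code:value pairs can be split apart; a sketch using the example line above:

# Turn the t2CPU breakdown into one "code value" pair per line
echo 't2CPU[pb:10.5/ d:39.2/ s:24.6/ e:17.3/ c:6.8/ g:6.4/ r:7.3/ rm:24.6/ rh:16.7/ rd:19.3/]' |
    grep -oP '[a-z]+:[0-9.]+' | tr ':' ' '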

tacCPU[A|B|C|D%] - TAR Compression

tacCPU[8.0|8.0|6.8|6.9%]
       └── CPU usage per TAR archive compression thread (one value per thread)

High values indicate heavy PCAP archiving. Controlled by tar_maxthreads (default: 8).
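
For example, raising the cap in the config (the value 12 is illustrative, not a recommendation):

# /etc/voipmonitor.conf
tar_maxthreads = 12    # default is 8; raise if tac threads stay saturated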

RSS/VSZ[X|Y]MB - Memory Usage

RSS/VSZ[365|1640]MB
        │    └── VSZ: Virtual memory (pre-allocated, not all used)
        └─────── RSS: Resident Set Size (actual physical memory used)
Pattern                    | Meaning                         | Action
---------------------------|---------------------------------|--------------------------
RSS growing continuously   | Possible memory leak            | Investigate
High VSZ, normal RSS       | Normal - virtual pre-allocation | None
RSS approaching server RAM | OOM risk                        | Reduce buffers or add RAM
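
To check for a leak, the RSS samples can be pulled from the log and eyeballed for a steady climb; a sketch assuming the Debian log path:

# Extract RSS (MB) from each status line to spot an upward trend
grep -oP 'RSS/VSZ\[\K[0-9]+' /var/log/syslog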

Common Issues

SQL Queue Growing

Symptoms: SQLq[C:5234...] increasing over time

Diagnosis:

# Check MySQL CPU
top -p $(pgrep mysqld)

# Check disk I/O
iostat -x 1 5

Solutions:

Bottleneck           | Solution
---------------------|----------------------------------
I/O (high iowait)    | Upgrade to SSD/NVMe storage
CPU (mysqld at 100%) | Upgrade CPU or optimize queries
RAM (swapping)       | Increase innodb_buffer_pool_size

Immediate mitigations:

# /etc/voipmonitor.conf
query_cache = yes              # Prevent OOM by using disk queue
mysqlstore_max_threads_cdr = 8 # Parallel DB writes
quick_save_cdr = yes           # Faster CDR saving

See SQL_queue_is_growing_in_a_peaktime and Database_troubleshooting for detailed guidance.

High t0CPU

Symptoms: t0CPU[>90%]

Solutions (in order of preference):

  1. Use interface_ip_filter instead of BPF filter
  2. Disable unused features (jitterbuffer if not needed)
  3. Upgrade to kernel bypass: DPDK or Napatech

PACKETBUFFER FULL

Symptoms: heap[90|80|70] or log message "PACKETBUFFER: MEMORY IS FULL"

Solutions:

# /etc/voipmonitor.conf
max_buffer_mem = 4000   # Increase from default 2000 MB
ringbuffer = 200        # Increase kernel ring buffer

Also investigate downstream bottleneck (usually database).
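
To confirm how often the buffer actually overflowed, count the log message itself (path is an assumption):

# Count buffer-full events in the current log
grep -c 'PACKETBUFFER: MEMORY IS FULL' /var/log/syslog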

Accessing Status Programmatically

Via Manager API

The sniffer_stat command returns JSON including pbStatString:

echo 'sniffer_stat' | nc -U /tmp/vm_manager_socket | jq '.pbStatString'

See Manager_API for connection details.

Via GUI

Navigate to Settings → Sensors → Status to see real-time metrics for all connected sensors.

See Also