Sniffer distributed architecture: Difference between revisions
(Add documentation about sensor health check via management API (sniffer_stat command))
(Add natalias configuration guidance for distributed deployments - fix for RTP streams ending prematurely)
If probes still miss calls after ensuring <code>sipport</code> matches on both sides, check the [[Sniffer_troubleshooting|full troubleshooting guide]] for other potential issues such as network connectivity, firewall rules, or interface misconfiguration.
=== RTP Streams End Prematurely in Distributed Deployments ===
If RTP streams end prematurely in call recordings when using a remote sniffer with a central GUI, this is often caused by the <code>natalias</code> configuration being set on the wrong system.

'''The Problem:'''

When packets are forwarded from a remote sniffer to a central server (Packet Mirroring mode), the central server sees the packets with their original IP addresses as captured by the sniffer. If <code>natalias</code> is configured on the remote sniffer, the IP address substitution happens at capture time. This can cause the central server's RTP correlation logic to fail because the substituted addresses do not match what the central server sees in the SIP signaling.

'''The Solution:'''

Configure <code>natalias</code> only on the central server that receives and processes the packets, not on the remote sniffer that captures and forwards them.

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Critical: natalias Configuration Placement
|-
| style="vertical-align: top;" | '''Remote Sniffer (packet forwarding):'''
| Do NOT set <code>natalias</code> on the remote sensor. Let it forward packets with their original IP addresses.
|-
| style="vertical-align: top;" | '''Central Server (packet processing):'''
| Configure <code>natalias</code> on the central server that performs the analysis. The address substitution happens during correlation, at the point where SIP and RTP are matched.
|}

'''Configuration Example:'''
<syntaxhighlight lang="ini">
# WRONG: Do NOT configure natalias on the remote sniffer
# /etc/voipmonitor.conf on the REMOTE SENSOR
# natalias = 1.2.3.4 10.0.0.5   # DON'T DO THIS

# CORRECT: Configure natalias on the central server
# /etc/voipmonitor.conf on the CENTRAL SERVER
natalias = 1.2.3.4 10.0.0.5
server_bind = 0.0.0.0
server_bind_port = 60024
# ... other central server settings
</syntaxhighlight>

'''After Changing Configuration:'''
<syntaxhighlight lang="bash">
# Restart voipmonitor on BOTH systems
systemctl restart voipmonitor
</syntaxhighlight>

This ensures that RTP packets are correctly associated with their SIP dialogs on the central server, even when the traffic traverses NAT devices.
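To verify the placement after editing, a quick grep on each host can confirm whether an active (uncommented) <code>natalias</code> line is present. This is a generic shell sketch, not a VoIPmonitor tool; the <code>CONF</code> variable is only there so the path can be overridden:

```shell
# CONF defaults to the standard config path; override it when testing elsewhere
CONF=${CONF:-/etc/voipmonitor.conf}

# On the REMOTE SENSOR this should report no active natalias
# (commented-out lines are ignored by the pattern).
# On the CENTRAL SERVER the active natalias line should be printed.
grep -E '^[[:space:]]*natalias' "$CONF" || echo "no active natalias in $CONF"
```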
== Legacy: Mirror Mode ==
== AI Summary for RAG ==
'''Summary:''' VoIPmonitor v20+ uses Client-Server architecture for distributed deployments with encrypted TCP connections (default port 60024 with zstd compression, configurable via server_bind_port and server_destination_port). Two modes: Local Processing (<code>packetbuffer_sender=no</code>) analyzes locally and sends CDRs, Packet Mirroring (<code>packetbuffer_sender=yes</code>) forwards raw packets. NETWORK BANDWIDTH REQUIREMENTS: For Local Processing (PCAPs stored on sensors), network traffic consists mainly of CDR SQL data and a 1Gb connection between sensors and central server is generally sufficient. For Packet Mirroring, bandwidth consumption is roughly equivalent to VoIP traffic volume (use <code>server_type_compress=zstd</code> to reduce). Dashboard widgets for SIP/RTP/REGISTER counts: with Packet Mirroring, statistics appear only on central server (sender has empty widgets); with Local Processing, statistics appear on both sensor and central server. To enable local statistics on a forwarding sensor, set <code>packetbuffer_sender=no</code> (increases CPU/RAM usage). Supports failover with multiple server IPs. CDRs stored centrally; PCAPs on sensors (Local Processing) or centrally (Packet Mirroring). In Packet Mirroring mode, the <code>save*</code> options (savertp, savesip, saveaudio) configured on the CENTRAL SERVER control storage for packets received from sensors. When multiple sensors forward packets with the same Call-ID, VoIPmonitor automatically merges them into a single CDR. To keep records separate per sensor with same Call-ID, run multiple receiver instances on different ports with separate database tables. CRITICAL: A single sniffer instance MUST process both SIP signaling and RTP media for the same call. Splitting SIP and RTP across different sniffers creates incomplete call records that cannot be reconstructed. INTERMEDIATE SERVER: An intermediate server can receive traffic from multiple remote sensors and forward it to a central server. 
The intermediate server has both <code>server_bind</code> (to receive from sensors) and <code>server_destination</code> (to send to central server). The behavior is controlled by <code>packetbuffer_sender</code> on the intermediate server: if <code>packetbuffer_sender=no</code>, it processes traffic locally and sends CDRs to central server; if <code>packetbuffer_sender=yes</code>, it forwards raw packets to central server. In both cases, the original remote sensors must be manually added to the GUI Settings for visibility. This is supported because the intermediate server does NOT do local packet capture - it only acts as a relay. For custom port configuration: server_bind_port on central server MUST match server_destination_port on remote sensors. Common reasons for custom ports: firewall restrictions, multiple instances on same server, compliance requirements, avoiding port conflicts. SENSOR HEALTH CHECK VIA MANAGEMENT API: Each sensor exposes a TCP management API (default port 5029) that can be queried via netcat: `echo 'sniffer_stat' | nc <sensor_ip> <sensor_port>`. This returns JSON with sensor status including running state, version, uptime, active calls, total calls, packets per second, and packet drops. IMPORTANT: There is NO single command to check all sensors simultaneously - each must be queried individually. Scripting multiple sensors with a loop can provide a consolidated result with exit codes. In newer VoIPmonitor versions, management API communication may be encrypted, requiring encryption to be disabled or using VoIPmonitor-specific CLI tools. Firewall must allow TCP port 5029 access from monitoring host to sensors. LEGACY MIRROR MODE: Older mirror_destination/mirror_bind options exist but are less robust (no encryption, UDP) and Client-Server mode is recommended. Symptoms of mirror mode issues: all CDRs incorrectly associated with a single sensor after system updates. 
Migration involves: stop probes, remove old sensor records from GUI Settings, comment out mirror parameters (mirror_bind_ip, mirror_bind_port, mirror_destination_ip, mirror_destination_port), add server_bind/server_bind_port on central server and server_destination/server_destination_port on probes, set unique id_sensor per probe, choose packetbuffer_sender mode. Common migration issues: probes cannot connect (verify server_password, firewall allows TCP on server_bind_port), all CDRs show same sensor (old mirror config still active or id_sensor not set), PCAPs not accessible in Local Processing mode (central server must reach probes on TCP/5029). TROUBLESHOOTING: In distributed/probe setups with Packet Mirroring, if a probe is not detecting all calls on expected ports, the <code>sipport</code> configuration MUST match on BOTH the probe AND the central analysis host. If the network uses multiple SIP ports (e.g., 5060, 5061, 5080), both systems must have all ports listed in their <code>sipport</code> directive. Common symptom: Probe sees traffic via <code>tcpdump</code> but central server records incomplete CDRs. RTP STREAMS END PREMATURELY: If RTP streams end prematurely in call recordings when using a remote sniffer with a central GUI, this is often caused by incorrect <code>natalias</code> configuration placement. The <code>natalias</code> option must be configured ONLY on the central server that receives and processes packets, NOT on the remote sniffer that captures and forwards them. When packets are forwarded from a remote sniffer to a central server in Packet Mirroring mode, configuring <code>natalias</code> on the remote sniffer causes IP address substitution to happen at capture time, which causes the central server's RTP correlation logic to fail. Solution: Remove <code>natalias</code> from the remote sniffer's voipmonitor.conf, add it to the central server's voipmonitor.conf, then restart both services.
WEB GUI ACCESSIBLE BUT SENSORS CANNOT CONNECT: If the web portal is accessible but sensors cannot connect to the primary server, verify that the MySQL/MariaDB database service on the central server is running and responsive. The central VoIPmonitor service requires a functioning database connection to accept sensor data, even though the web interface (PHP) may remain accessible. Check MySQL service status (<code>systemctl status mariadb</code> or <code>systemctl status mysqld</code>) and inspect MySQL error logs (<code>/var/log/mariadb/mariadb.log</code> or <code>/var/log/mysql/error.log</code>) for critical errors. Restart the database service if needed.
'''Keywords:''' distributed architecture, client-server, network bandwidth, throughput, network requirements, 1Gb connection, bandwidth requirements, server_destination, server_bind, server_bind_port, server_destination_port, custom port, packetbuffer_sender, local processing, packet mirroring, remote sensors, failover, encrypted channel, zstd compression, dashboard widgets, statistics, empty dashboard, SIP RTP correlation, split sensors, single sniffer requirement, availability zone, savertp, savesip, saveaudio, centralized storage, packet storage control, call-id merging, multiple sensors same callid, separate records per sensor, receiver instances, mysqltableprefix, firewall, port configuration, connection troubleshooting, probe, central host, central server, sensor, sipport, missing calls, probe not detecting calls, tcpdump, configuration mismatch, mirror mode, migration, mirror_destination, mirror_bind, mirror_bind_ip, mirror_bind_port, mirror_destination_ip, mirror_destination_port, migrate from mirror mode, all CDRs same sensor, system update, upgrade, intermediate server, relay server, multi-sensor aggregation, hub and spoke, chained topology, sensor forwarding, mysql, mariadb, database service, web gui accessible, error logs, sensor health check, management API, sniffer_stat, TCP port 5029, manager_bind, nc netcat, sensor status, sensor monitoring, health status, exit code, consolidated result, check all sensors, encrypted API, encryption disabled, natalias, NAT alias configuration, RTP streams end prematurely, RTP correlation, IP address substitution, NAT traversal, remote sniffer configuration, central server configuration, natalias placement, incomplete recordings, call recordings cut off
'''Key Questions:'''
* How do I connect multiple VoIPmonitor sensors to a central server?
* How do I check if sensor management API is encrypted?
* How do I check the health of remote sensors in a distributed deployment?
* Why are RTP streams ending prematurely in call recordings when using a remote sniffer with central GUI?
* Where should I configure natalias in a distributed VoIPmonitor deployment?
* Do I need to configure natalias on the remote sensor or on the central server?
* What happens if natalias is configured on both the remote sensor and central server?
Revision as of 04:27, 6 January 2026
This guide explains how to deploy multiple VoIPmonitor sensors in a distributed architecture using the modern Client-Server mode.
Overview
VoIPmonitor v20+ uses a Client-Server architecture for distributed deployments. Remote sensors connect to a central server over an encrypted TCP channel.
| Mode | What is sent | Processing location | Use case |
|---|---|---|---|
| Local Processing | CDRs only | Remote sensor | Multiple sites, low bandwidth |
| Packet Mirroring | Raw packets | Central server | Centralized analysis, low-resource remotes |
The mode is controlled by a single option: packetbuffer_sender
For comprehensive deployment options including on-host vs dedicated sensors, traffic forwarding methods (SPAN, GRE, TZSP, VXLAN), and NFS/SSHFS alternatives, see VoIPmonitor Deployment & Topology Guide.
Client-Server Mode
Architecture
Configuration
Remote Sensor (client):
id_sensor = 2 # unique per sensor
server_destination = central.server.ip
server_destination_port = 60024
server_password = your_strong_password
# Choose one:
packetbuffer_sender = no # Local Processing: analyze locally, send CDRs
# packetbuffer_sender = yes # Packet Mirroring: send raw packets
interface = eth0
sipport = 5060
# No MySQL credentials needed on remote sensors
Important: Source IP Binding with manager_ip
For remote sensors with multiple IP addresses (e.g., in High Availability setups with a floating/virtual IP), use the manager_ip parameter to bind the outgoing connection to a specific static IP address. This ensures the central server sees a consistent source IP from each sensor, preventing connection issues during failover.
# On sensor with multiple interfaces (e.g., static IP + floating HA IP)
manager_ip = 10.0.0.5 # Bind to the static IP address
server_destination = 192.168.1.100
# The outgoing connection will use 10.0.0.5 as the source IP instead of the floating IP
Useful scenarios:
- HA pairs: Sensors use static IPs while floating IP is only for failover management
- Multiple VNICs: Explicit source IP selection on systems with multiple virtual interfaces
- Network ACLs: Ensure connections originate from whitelisted IP addresses
Central Server:
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = your_strong_password
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
# If receiving raw packets (packetbuffer_sender=yes on clients):
sipport = 5060
# ... other sniffer options
Custom Port Configuration
Critical: The server_bind_port on the central server must match the server_destination_port on each remote sensor. If these ports do not match, sensors cannot connect.
# Central Server (listening on custom port 50291)
server_bind = 0.0.0.0
server_bind_port = 50291 # Custom port (default is 60024)
server_password = your_strong_password
# Remote Sensor (must match the server's custom port)
server_destination = 45.249.9.2
server_destination_port = 50291 # MUST match server_bind_port
server_password = your_strong_password
Common reasons to use a custom port:
- Firewall restrictions that block the default port 60024
- Running multiple VoIPmonitor instances on the same server (each with a different port)
- Compliance requirements for non-standard ports
- Avoiding port conflicts with other services
Troubleshooting Connection Failures:
| Critical First Step: Check Traffic Rate Indicator | |
|---|---|
| IMPORTANT: | Before troubleshooting communication issues, check if the probe is receiving traffic. The traffic rate indicator in the sensor logs shows the current packet capture rate in the format [x.xMb/s] (e.g., [12.5Mb/s] or [0.0Mb/s]). |
| How to check: | Run journalctl -u voipmonitor -n 100 on the probe and look for the traffic rate indicator printed in the status logs. |
| If showing [0.0Mb/s]: | The issue is NOT communication or authentication. The problem is network configuration on the probe side. Common causes: incorrect SPAN/mirror port setup on the switch, wrong network interface selected in voipmonitor.conf, or the probe is not receiving any traffic at all. Fix the network configuration first. |
| If showing traffic (non-zero rate): | The probe IS receiving traffic from the network, so the handshake issue is with communication/authentication. Proceed with the steps below. |
If probes cannot connect to the server and the traffic rate indicator shows non-zero traffic:
1. Verify ports match on both sides:
# On central server - check which port it is listening on
ss -tulpn | grep voipmonitor
# Should show: voipmonitor LISTEN 0.0.0.0:50291
2. Test connectivity from remote sensor:
# Test TCP connection to the server's custom port
nc -zv 45.249.9.2 50291
# Success: "Connection to 45.249.9.2 50291 port [tcp/*] succeeded!"
# Timeout/Refused: Check firewall or misconfigured port
3. Ensure firewall allows the custom port:
# Allow inbound TCP on custom port (example for firewalld)
firewall-cmd --permanent --add-port=50291/tcp
firewall-cmd --reload
4. Check logs on both sides:
journalctl -u voipmonitor -f
# Look for: "connecting to server", "connection refused", or "timeout"
5. Verify MySQL database is accessible (if web GUI works but sensors cannot connect):
If the web portal is accessible but sensors cannot connect, verify that the MySQL/MariaDB database service on the primary server is running and responsive. The central VoIPmonitor service requires a functioning database connection to accept sensor data.
# Check if MySQL service is running
systemctl status mariadb
# or
systemctl status mysqld
# Check for database errors in MySQL error log
# Common locations:
tail -50 /var/log/mariadb/mariadb.log
tail -50 /var/log/mysql/error.log
# Look for critical errors that might prevent database connections
If MySQL is down or experiencing critical errors, the central VoIPmonitor server may not be able to accept sensor connections even though the web interface (PHP) remains accessible. Restart the database service if needed and monitor the logs for recurring errors.
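If the service reports as running but sensors still cannot connect, a quick responsiveness check can distinguish a hung daemon from a healthy one. This is a generic diagnostic, not a VoIPmonitor command; mysqladmin ships with both MySQL and MariaDB:

```shell
# Prints "mysqld is alive" when the database answers; hangs or errors otherwise
mysqladmin ping
```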
After changing port configuration, restart the service:
systemctl restart voipmonitor
Checking Sensor Health Status via Management API
Each VoIPmonitor sensor exposes a TCP management API (default port 5029) that can be used to query its operational status and health. This is useful for monitoring multiple sensors, especially in distributed deployments.
Important Notes:
- There is NO single command to check all sensors simultaneously
- Each sensor must be queried individually
- The `sniffer_stat` command returns JSON with sensor status information
- In newer VoIPmonitor versions, the sensor's management API communication may be encrypted
Basic Health Check Command
To check the status of a single sensor:
# Query sensor status via management port
echo 'sniffer_stat' | nc <sensor_ip> <sensor_port>
Replace:
- <sensor_ip> with the IP address of the sensor
- <sensor_port> with the management port (default: 5029)
Example Response
The command returns a JSON object with sensor status information:
{
"status": "running",
"version": "30.3-SVN.123",
"uptime": 86400,
"calls_active": 42,
"calls_total": 12345,
"packets_per_second": 1250.5,
"packets_dropped": 0
}
Scripting Multiple Sensors
To check multiple sensors and get a consolidated result, create a script that queries each sensor individually:
#!/bin/bash
# Check health of multiple sensors
SENSORS=("192.168.1.10:5029" "192.168.1.11:5029" "192.168.1.12:5029")
ALL_OK=true
for SENSOR in "${SENSORS[@]}"; do
IP=$(echo $SENSOR | cut -d: -f1)
PORT=$(echo $SENSOR | cut -d: -f2)
echo -n "Checking $IP:$PORT ... "
# Query sensor and check for running status
STATUS=$(echo 'sniffer_stat' | nc -w 2 "$IP" "$PORT" 2>/dev/null | grep -o '"status":[[:space:]]*"[^"]*"' | cut -d'"' -f4)
if [ "$STATUS" = "running" ]; then
echo "OK"
else
echo "FAILED (status: $STATUS)"
ALL_OK=false
fi
done
if [ "$ALL_OK" = true ]; then
echo "All sensors healthy"
exit 0
else
echo "One or more sensors unhealthy"
exit 1
fi
Troubleshooting Management API Access
If you cannot connect to the sensor management API:
1. Verify the management port is listening:
# On the sensor host
netstat -tlnp | grep 5029
# or
ss -tlnp | grep voipmonitor
2. Check firewall rules:
Ensure TCP port 5029 is allowed from the monitoring host to the sensor.
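For example, with firewalld on the sensor host (a sketch; the source address 203.0.113.10 is a placeholder for your monitoring host — adapt the commands to whatever firewall you run):

```shell
# Allow only the monitoring host to reach the management API on TCP/5029
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10" port port="5029" protocol="tcp" accept'
firewall-cmd --reload
```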
3. Test connectivity with netcat:
nc -zv <sensor_ip> 5029
4. Encrypted Communication (Newer Versions):
In newer VoIPmonitor versions, the sensor's API communication may be encrypted. If management API access fails with encryption errors:
- Check VoIPmonitor documentation for your version
- Encryption may need to be disabled for management API access
- Consult support for encrypted CLI tools if available
Encryption Considerations
If your sensors use encrypted management API (newer versions):
- The standard netcat command may not work with encrypted connections
- Check if `manager_bind` (default port 5029) has encryption enabled
- For encrypted connections, you may need VoIPmonitor-specific CLI tools
- Refer to your VoIPmonitor version documentation or contact support for encrypted API access
Connection Compression
The client-server channel supports compression to reduce bandwidth usage:
# On both client and server (default: zstd)
server_type_compress = zstd
Available options: zstd (default, recommended), gzip, lzo, none
High Availability (Failover)
Remote sensors can specify multiple central server IPs for automatic failover:
# Remote sensor configuration with failover
server_destination = 192.168.0.1, 192.168.0.2
If the primary server becomes unavailable, the sensor automatically connects to the next server in the list.
Local Processing vs Packet Mirroring
| | Local Processing | Packet Mirroring |
|---|---|---|
| packetbuffer_sender | no (default) | yes |
| Packet analysis | On remote sensor | On central server |
| PCAP storage | On remote sensor | On central server |
| WAN bandwidth | Low (CDRs only) | High (full packets) |
| Remote CPU load | Higher | Minimal |
| Use case | Standard multi-site | Low-resource remotes |
Network Bandwidth Requirements
The network bandwidth requirements between remote sensors and the central server depend on the selected operational mode:
| Bandwidth Guidelines | |
|---|---|
| Local Processing Mode (packetbuffer_sender=no): | PCAP files are stored locally on sensors. Network traffic consists mainly of CDR data (SQL queries). A 1Gb network connection between sensors and the central GUI/Database server is generally sufficient for most deployments. |
| Packet Mirroring Mode (packetbuffer_sender=yes): | Raw packet stream is forwarded to the central server. Bandwidth consumption is roughly equivalent to the VoIP traffic volume itself (minus Ethernet headers, plus compression overhead). Consider your expected VoIP traffic volume and network capacity. Use server_type_compress=zstd to reduce bandwidth usage. |
For optimal throughput in high-latency environments, see the server concatenation limit configuration in Sniffer Configuration: SQL Concatenation Throughput.
PCAP Access in Local Processing Mode
When using Local Processing, PCAPs are stored on remote sensors. The GUI retrieves them via the central server, which proxies requests to each sensor's management port (TCP/5029).
Firewall requirements:
- Central server must reach remote sensors on TCP/5029
- Remote sensors must reach central server on TCP/60024
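The two requirements above can be expressed as firewall rules. A firewalld sketch (run each rule on the host it applies to; adjust for your distribution's firewall):

```shell
# On each REMOTE SENSOR: the central server fetches PCAPs via the management port
firewall-cmd --permanent --add-port=5029/tcp
firewall-cmd --reload

# On the CENTRAL SERVER: sensors connect inbound on the client-server port
firewall-cmd --permanent --add-port=60024/tcp
firewall-cmd --reload
```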
Dashboard Statistics
Dashboard widgets (SIP/RTP/REGISTER counts) depend on where packet processing occurs:
| Configuration | Where statistics appear |
|---|---|
| packetbuffer_sender = yes (Packet Mirroring) | Central server only |
| packetbuffer_sender = no (Local Processing) | Both sensor and central server |
Note: If you are using Packet Mirroring mode (packetbuffer_sender=yes) and see empty dashboard widgets for the forwarding sensor, this is expected behavior. The sender sensor only captures and forwards raw packets - it does not create database records or statistics. The central server performs all processing.
Enabling Local Statistics on Forwarding Sensors
If you need local statistics on a sensor that was previously configured to forward packets:
# On the forwarding sensor
packetbuffer_sender = no
This disables packet forwarding and enables full local processing. Note that this increases CPU and RAM usage on the sensor since it must perform full SIP/RTP analysis.
Controlling Packet Storage in Packet Mirroring Mode
When using Packet Mirroring (packetbuffer_sender=yes), the central server processes raw packets received from sensors. The save* options on the central server control which packets are saved to disk.
# Central Server Configuration (receiving raw packets from sensors)
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = your_strong_password
# Database Configuration
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
# Sniffer options needed when receiving raw packets:
sipport = 5060
# CONTROL PACKET STORAGE HERE:
# These settings on the central server determine what gets saved:
savertp = yes # Save RTP packets
savesip = yes # Save SIP packets
saveaudio = wav # Export audio recordings (optional)
| Important: Central Server Controls Storage | |
|---|---|
| Key Point: | When sensors send raw packets to a central server, the storage is controlled by the savertp, savesip, and saveaudio options configured on the central server, not on the individual sensors. The sensors only forward raw packets; they do not make decisions about what to save unless you are using Local Processing mode. |
This centralized control allows you to:
- Enable/disable packet types (RTP, SIP, audio) from one location
- Adjust storage settings without touching each sensor
- Apply capture rules from the central server to filter traffic
Data Storage Summary
- CDRs: Always stored in MySQL on central server
- PCAPs:
- Local Processing → stored on each remote sensor
- Packet Mirroring → stored on central server
Handling Same Call-ID from Multiple Sensors
When a call passes through multiple sensors that see the same SIP Call-ID, VoIPmonitor automatically merges the SIP packets into a single CDR on the central server. This is expected behavior when using Packet Mirroring mode.
| Call-ID Merging Behavior | |
|---|---|
| What happens: | If Sensor A and Sensor B both forward packets for a call with the same Call-ID to the central server, VoIPmonitor creates a single CDR containing SIP packets from both sensors. The RTP packets are captured from whichever sensor processed the media. |
| Why: | VoIPmonitor uses the SIP Call-ID as the primary unique identifier. When multiple sensors forward packets with the same Call-ID to a central server, they are automatically treated as one call. |
| Is it a problem? | Usually not. For most deployments, combining records from multiple sensors for the same call (different call legs passing through different points in the network) is the desired behavior. |
Preventing Duplicate CDRs in Local Processing Mode
When using Local Processing mode (packetbuffer_sender=no), each remote probe processes its own packets and writes CDRs directly to a central database. If multiple probes capture the same call (e.g., redundant taps or overlapping SPAN ports), this creates duplicate CDR entries in the database.
To prevent duplicates in this scenario, use the cdr_check_exists_callid option on all probes:
| Setting | Result |
|---|---|
| cdr_check_exists_callid = no (default) | Each probe creates its own CDR row. Multiple probes capturing the same call result in duplicate entries with the same Call-ID but different id_sensor values. |
| cdr_check_exists_callid = yes | Probes check for an existing CDR with the same Call-ID before inserting. If found, they update the existing row instead of creating a new one. The final CDR will be associated with the id_sensor of the probe that last processed the call. |
Prerequisites:
- MySQL user must have UPDATE privileges on the cdr table
- All probes must be configured with this setting
# Add to voipmonitor.conf on each probe (Local Processing mode only)
[general]
cdr_check_exists_callid = yes
Note: This setting is only useful in Local Processing mode. In Packet Mirroring mode (packetbuffer_sender=yes), the central server automatically merges packets with the same Call-ID, so this option is not needed.
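The UPDATE privilege mentioned in the prerequisites can be granted like this (the user name, host mask, and database name are examples; adapt them to your environment):

```sql
GRANT UPDATE ON voipmonitor.cdr TO 'voipmonitor'@'10.0.0.%';
FLUSH PRIVILEGES;
```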
Keeping Records Separate Per Sensor
If you need to keep records completely separate when multiple sensors see the same Call-ID (e.g., each sensor should create its own independent CDR even for calls with overlapping Call-IDs), you must run multiple receiver instances on the central server.
# Receiver Instance 1 (for Sensor A)
[receiver_sensor_a]
server_bind = 0.0.0.0
server_bind_port = 60024
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = <password>
mysqltableprefix = sensor_a_ # Separate CDR tables
id_sensor = 2
# ... other options
# Receiver Instance 2 (for Sensor B)
[receiver_sensor_b]
server_bind = 0.0.0.0
server_bind_port = 60025 # Different port
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = <password>
mysqltableprefix = sensor_b_ # Separate CDR tables
id_sensor = 3
# ... other options
Each receiver instance runs as a separate process, listens on a different port, and can write to separate database tables (using mysqltableprefix). Configure each sensor to connect to its dedicated receiver port.
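One way to run the second receiver as its own service is a dedicated systemd unit (a sketch; the unit name, binary path, and config path are assumptions, and it presumes the sniffer binary accepts a --config-file argument):

```ini
# /etc/systemd/system/voipmonitor-sensor-b.service  (hypothetical unit)
[Unit]
Description=VoIPmonitor receiver instance for Sensor B
After=network.target

[Service]
ExecStart=/usr/local/sbin/voipmonitor --config-file /etc/voipmonitor-sensor-b.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it alongside the primary service with systemctl enable --now voipmonitor-sensor-b.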
For more details on correlating multiple call legs from the same call, see Merging_or_correlating_multiple_call_legs.
GUI Visibility
Remote sensors appear automatically when connected. To customize names or configure additional settings:
- Go to GUI → Settings → Sensors
- Sensors are identified by their id_sensor value
Troubleshooting Distributed Deployments
Probe Not Detecting All Calls on Expected Ports
If a remote sensor (probe) configured for packet mirroring is not detecting all calls on expected ports, check configuration on both the probe and the central analysis host.
| Critical: sipport Must Match in Distributed Deployments | |
|---|---|
| The Issue: | In distributed/probe setups using Packet Mirroring (packetbuffer_sender=yes), calls will be missing if the sipport configuration is not aligned between the probe and central server. Common symptom: the probe sees traffic via tcpdump but the central server records incomplete CDRs. |
| Configuration Requirement: | The probe and central host must have consistent sipport values. If your network uses SIP on multiple ports (e.g., 5060, 5061, 5080, 6060), ALL ports must be listed on both systems. |
The solution involves four steps:
- 1. Verify traffic reachability on the probe
Use tcpdump on the probe VM to confirm SIP packets for the missing calls are arriving on the expected ports.
# On the probe VM
tcpdump -i eth0 -n port 5060
- 2. Check the probe's voipmonitor.conf
Ensure the sipport directive on the probe includes all necessary SIP ports used in your network.
# /etc/voipmonitor.conf on the PROBE
sipport = 5060,5061,5080,6060
- 3. Check the central analysis host's voipmonitor.conf
This is the most common cause of missing calls in distributed setups. The central analysis host (specified by server_bind on the central server, or by server_destination configured on the probe) must also have the sipport directive configured with the same list of ports used by all probes.
# /etc/voipmonitor.conf on the CENTRAL HOST
sipport = 5060,5061,5080,6060
- 4. Restart both services
Apply the configuration changes:
# On both probe and central host
systemctl restart voipmonitor
| Why Both Systems Must Match | |
|---|---|
| Probe side: | The probe captures packets from the network interface. Its sipport setting determines which UDP ports it considers SIP traffic to capture and forward. |
| Central server side: | When receiving raw packets in Packet Mirroring mode, the central server analyzes the packets locally. Its sipport setting determines which ports it interprets as SIP during analysis. If a port is missing here, packets are captured but not recognized as SIP, resulting in missing CDRs. |
Quick Diagnosis Commands
On the probe:
# Check which sipport values are configured
grep -E "^sipport" /etc/voipmonitor.conf
# Verify traffic is arriving on expected ports
tcpdump -i eth0 -nn -c 10 port 5061
On the central server:
# Check which sipport values are configured
grep -E "^sipport" /etc/voipmonitor.conf
# Check syslog for analysis activity (should see processing packets)
tail -f /var/log/syslog | grep voipmonitor
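The two grep results can also be compared in one step (a sketch; fetching the central config over SSH and the file paths shown are assumptions):

```shell
# Compare normalized sipport lines from two config files; prints MATCH/MISMATCH
compare_sipport() {
  a=$(grep -E '^sipport' "$1" | tr -d '[:space:]')
  b=$(grep -E '^sipport' "$2" | tr -d '[:space:]')
  [ -n "$a" ] && [ "$a" = "$b" ] && echo MATCH || echo MISMATCH
}

# Typical use: copy the central server's config locally first, e.g.
#   scp central:/etc/voipmonitor.conf /tmp/central.conf
#   compare_sipport /etc/voipmonitor.conf /tmp/central.conf
```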
If probes still miss calls after ensuring sipport matches on both sides, check the full troubleshooting guide for other potential issues such as network connectivity, firewall rules, or interface misconfiguration.
RTP Streams End Prematurely in Distributed Deployments
If RTP streams end prematurely in call recordings when using a remote sniffer with a central GUI, this is often caused by the natalias configuration being set on the wrong system.
The Problem:
When packets are forwarded from a remote sniffer to a central server (Packet Mirroring mode), the central server sees the packets with their original IP addresses as captured by the sniffer. If natalias is configured on the remote sniffer, the IP address substitution happens at capture time. This can cause the central server's RTP correlation logic to fail because the substituted addresses do not match what the central server sees in the SIP signaling.
The Solution:
Configure natalias only on the central server that receives and processes the packets, not on the remote sniffer that captures and forwards them.
| Critical: natalias Configuration Placement | |
|---|---|
| Remote Sniffer (packet forwarding): | Do NOT set natalias on the remote sensor. Let it forward packets with their original IP addresses. |
| Central Server (packet processing): | Configure natalias on the central server that performs the analysis. The address substitution happens during correlation, at the point where SIP and RTP are matched. |
Configuration Example:
# WRONG: Do NOT configure natalias on remote sniffer
# /etc/voipmonitor.conf on REMOTE SENSOR
# natalias = 1.2.3.4 10.0.0.5 # DON'T DO THIS
# CORRECT: Configure natalias on central server
# /etc/voipmonitor.conf on CENTRAL SERVER
natalias = 1.2.3.4 10.0.0.5
server_bind = 0.0.0.0
server_bind_port = 60024
# ... other central server settings
After Changing Configuration:
# Restart voipmonitor on BOTH systems
systemctl restart voipmonitor
This ensures that RTP packets are correctly associated with their SIP dialogs on the central server, even when the network traverses NAT devices.
Legacy: Mirror Mode
Note: The older mirror_destination/mirror_bind options still exist, but the modern client-server approach (server_destination on probes, server_bind on the server) is preferred as it provides encryption and simpler management.
Migrating from Mirror Mode to Client-Server Mode
If your system uses the legacy mirror mode (mirror_destination on probes, mirror_bind on server), you should migrate to the modern client/server mode. Common symptoms of mirror mode issues include all CDRs being incorrectly associated with a single sensor after system updates.
| Why Migration is Recommended | |
|---|---|
| Mirror Mode Limitations: | No encryption (raw UDP traffic); less robust delivery; harder management |
| Client-Server Advantages: | Encrypted TCP connections; zstd compression; sensors register automatically in the GUI |
==== Prerequisites ====
- Central server hostname or IP address
- Port for client-server communication (default: 60024)
- Strong shared password for authentication
==== Migration Steps ====
- 1. Stop the voipmonitor sniffer service on all probe machines
# On each probe
systemctl stop voipmonitor
- 2. Update GUI Sensors list
- Log in to the VoIPmonitor GUI
- Navigate to Settings → Sensors
- Remove all old probe records, keeping only the server instance (e.g., localhost or the central server IP)
- 3. Configure the Central Server
Edit /etc/voipmonitor.conf on the central server:
# COMMENT OUT or remove mirror mode parameters:
# mirror_bind_ip = 1.2.3.4
# mirror_bind_port = 9000
# ADD client-server mode parameters:
server_bind = <server_ip> # Use 0.0.0.0 to listen on all interfaces
server_bind_port = <port> # Default is 60024
server_password = <a_strong_password>
# MySQL configuration remains unchanged
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = <your_db_password>
Restart the service on the central server:
# On central server
systemctl restart voipmonitor
Verify the server is listening:
# Check that voipmonitor is listening on the configured port
ss -tulpn | grep voipmonitor
# Should show: voipmonitor LISTEN 0.0.0.0:60024 (or your custom port)
- 4. Configure each Probe
Edit /etc/voipmonitor.conf on each remote probe:
# COMMENT OUT or remove mirror mode parameters:
# mirror_destination_ip = 1.2.3.4
# mirror_destination_port = 9000
# ADD client-server mode parameters:
id_sensor = <unique_id> # Must be unique per sensor
server_destination = <server_ip>
server_destination_port = <port> # Must match server_bind_port
server_password = <a_strong_password> # Same password used on server
# IMPORTANT: Set packet handling mode
packetbuffer_sender = no # Local Processing: analyze locally, send CDRs only
# OR
# packetbuffer_sender = yes # Packet Mirroring: send raw packets to server
# Capture settings remain unchanged
interface = eth0
sipport = 5060
# No MySQL credentials needed on remote sensors for Local Processing mode
Restart the service on each probe:
# On each probe
systemctl restart voipmonitor
- 5. Verify Connection in GUI
- Log in to the VoIPmonitor GUI
- Navigate to Settings → Sensors
- Verify that probes appear automatically with their configured id_sensor values
- Check the connection status (online/offline)
- 6. Test Data Flow
- Generate test traffic on a probe network (make a test call)
- Check CDR view in GUI
- Verify that new records show the correct id_sensor for that probe
- Confirm PCAP files are accessible (click the play button in CDR view)
==== Common Issues During Migration ====
| Troubleshooting Connection Problems | |
|---|---|
| Probes cannot connect: | Verify that server_password is identical on the server and all probes, and that the firewall allows TCP on server_bind_port. |
| All CDRs show same sensor: | This typically indicates the old mirror mode configuration is still active or id_sensor is not set on the probes. Double-check that all mirror_* parameters are commented out and that each probe has a unique id_sensor. |
| PCAP files not accessible: | In Local Processing mode (packetbuffer_sender=no), PCAPs are stored on probes and retrieved via TCP port 5029. Ensure the central server can reach each probe on TCP/5029. |
Critical Requirement: SIP and RTP must be captured by the same sniffer instance
VoIPmonitor cannot reconstruct a complete call record if SIP signaling and RTP media are captured by different sniffer instances.
| Important: Single sniffer requirement | |
|---|---|
| What does not work: | Sniffer A in Availability Zone 1 captures the SIP signaling while Sniffer B in Availability Zone 2 captures the RTP media for the same call. |
| Why: | Call correlation requires a single sniffer instance to process both SIP and RTP packets from the same call. The sniffer correlates SIP signaling (INVITE, BYE, etc.) with RTP media in real time during packet processing. If packets are split across multiple sniffers, the correlation cannot occur. |
| Solution: | Forward traffic so that one sniffer processes both SIP and RTP for each call, e.g., mirror (SPAN) both signaling and media to the same capture point, or place the sniffer where it sees both streams. |
Configuration parameters like receiver_check_id_sensor and cdr_check_exists_callid are for other scenarios (multipath routing, duplicate Call-ID handling) and do NOT enable split SIP/RTP correlation. Setting these parameters does not allow SIP from one sniffer to be merged with RTP from another sniffer.
Intermediate Server: Multi-Sensor Aggregation
An intermediate server can receive traffic from multiple remote sensors and forward it to a central server. This is useful for aggregating traffic from many locations before sending to a central data center.
Architecture
This is supported because the intermediate server does NOT do local packet capture - it only acts as a relay.
Intermediate Server Configuration
The intermediate server has both server_bind (to receive from remote sensors) and server_destination (to send to central server).
# On INTERMEDIATE SERVER
# Acts as server for remote sensors, client to central server
[general]
id_sensor = 100 # Unique ID for this intermediate server
# Receive from remote sensors (server role)
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = sensor_password
# Send to central server (client role)
server_destination = central.server.ip
server_destination_port = 60024
server_password = central_password
# CRITICAL: packetbuffer_sender controls what happens to forwarded traffic
# Option 1: Local Processing on intermediate server
packetbuffer_sender = no # Process locally, send CDRs to central
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
# OR Option 2: Forward raw packets to central server
# packetbuffer_sender = yes # Forward raw packets (no database needed here)
packetbuffer_sender on Intermediate Server
The packetbuffer_sender setting on the intermediate server determines how it handles traffic from remote sensors:
| Setting | What Happens | Storage Location |
|---|---|---|
| packetbuffer_sender=no | Intermediate server processes traffic (SIP/RTP analysis) and sends CDRs to the central server | PCAPs on intermediate server |
| packetbuffer_sender=yes | Intermediate server forwards raw packets to the central server, which processes them | PCAPs on central server |
In both cases, the original remote sensors must still be manually added to the GUI for visibility.
Original vs Intermediate Sensor Visibility
| Important: Manual Sensor Registration | |
|---|---|
| Behavior: | When using an intermediate server, the original remote sensors (A, B, C) are not automatically visible in the GUI Settings. Only the intermediate server itself appears. |
| Solution: | To view statistics and status for the original sensors, they must be manually added to the GUI Settings list with their id_sensor values, even though they connect to the intermediate server rather than directly to the central server. |
Example: Local Processing Mode
Remote sensors forward CDRs to intermediate server, which forwards them to central server:
# Remote Sensors (A, B, C)
id_sensor = 2 # Unique values: 2, 3, 4...
server_destination = intermediate.server.ip
server_destination_port = 60024
server_password = sensor_password
packetbuffer_sender = no # Local Processing: process here, send CDRs
interface = eth0
sipport = 5060
# Intermediate Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = sensor_password
server_destination = central.server.ip
server_destination_port = 60024
server_password = central_password
packetbuffer_sender = no # Process locally, send CDRs onward
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
# Central Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = central_password
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
Example: Packet Mirroring Mode
Remote sensors forward raw packets to intermediate server, which forwards them to central server:
# Remote Sensors (A, B, C)
id_sensor = 2 # Unique values: 2, 3, 4...
server_destination = intermediate.server.ip
server_destination_port = 60024
server_password = sensor_password
packetbuffer_sender = yes # Packet Mirroring: send raw packets
interface = eth0
sipport = 5060
# Intermediate Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = sensor_password
server_destination = central.server.ip
server_destination_port = 60024
server_password = central_password
packetbuffer_sender = yes # Forward raw packets onward
# No database configuration needed on intermediate server
# Central Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = central_password
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
# Processing and storage options (configured on central server)
sipport = 5060
savertp = yes
savesip = yes
Limitations
- All sensors must use the same server_password at each connection level (sensors→intermediate and intermediate→central)
- A single sniffer cannot do local packet capture AND act as both server and client simultaneously. The intermediate server configuration works because it does NOT capture from its own network interface - it only receives from sensors and forwards to the central server.
- Each sensor requires a unique id_sensor (< 65536)
- Time synchronization (NTP) is critical for correlating calls across sensors
- Maximum allowed time difference between client and server: 2 seconds (configurable via client_server_connect_maximum_time_diff_s)
For a complete reference of all client-server parameters, see Sniffer Configuration: Distributed Operation.
AI Summary for RAG
Summary: VoIPmonitor v20+ uses Client-Server architecture for distributed deployments with encrypted TCP connections (default port 60024 with zstd compression, configurable via server_bind_port and server_destination_port). Two modes: Local Processing (packetbuffer_sender=no) analyzes locally and sends CDRs, Packet Mirroring (packetbuffer_sender=yes) forwards raw packets. NETWORK BANDWIDTH REQUIREMENTS: For Local Processing (PCAPs stored on sensors), network traffic consists mainly of CDR SQL data and a 1Gb connection between sensors and central server is generally sufficient. For Packet Mirroring, bandwidth consumption is roughly equivalent to VoIP traffic volume (use server_type_compress=zstd to reduce). Dashboard widgets for SIP/RTP/REGISTER counts: with Packet Mirroring, statistics appear only on central server (sender has empty widgets); with Local Processing, statistics appear on both sensor and central server. To enable local statistics on a forwarding sensor, set packetbuffer_sender=no (increases CPU/RAM usage). Supports failover with multiple server IPs. CDRs stored centrally; PCAPs on sensors (Local Processing) or centrally (Packet Mirroring). In Packet Mirroring mode, the save* options (savertp, savesip, saveaudio) configured on the CENTRAL SERVER control storage for packets received from sensors. When multiple sensors forward packets with the same Call-ID, VoIPmonitor automatically merges them into a single CDR. To keep records separate per sensor with same Call-ID, run multiple receiver instances on different ports with separate database tables. CRITICAL: A single sniffer instance MUST process both SIP signaling and RTP media for the same call. Splitting SIP and RTP across different sniffers creates incomplete call records that cannot be reconstructed. INTERMEDIATE SERVER: An intermediate server can receive traffic from multiple remote sensors and forward it to a central server. 
The intermediate server has both server_bind (to receive from sensors) and server_destination (to send to central server). The behavior is controlled by packetbuffer_sender on the intermediate server: if packetbuffer_sender=no, it processes traffic locally and sends CDRs to central server; if packetbuffer_sender=yes, it forwards raw packets to central server. In both cases, the original remote sensors must be manually added to the GUI Settings for visibility. This is supported because the intermediate server does NOT do local packet capture - it only acts as a relay. For custom port configuration: server_bind_port on central server MUST match server_destination_port on remote sensors. Common reasons for custom ports: firewall restrictions, multiple instances on same server, compliance requirements, avoiding port conflicts. SENSOR HEALTH CHECK VIA MANAGEMENT API: Each sensor exposes a TCP management API (default port 5029) that can be queried via netcat: `echo 'sniffer_stat' | nc <sensor_ip> <sensor_port>`. This returns JSON with sensor status including running state, version, uptime, active calls, total calls, packets per second, and packet drops. IMPORTANT: There is NO single command to check all sensors simultaneously - each must be queried individually. Scripting multiple sensors with a loop can provide a consolidated result with exit codes. In newer VoIPmonitor versions, management API communication may be encrypted, requiring encryption to be disabled or using VoIPmonitor-specific CLI tools. Firewall must allow TCP port 5029 access from monitoring host to sensors. LEGACY MIRROR MODE: Older mirror_destination/mirror_bind options exist but are less robust (no encryption, UDP) and Client-Server mode is recommended. Symptoms of mirror mode issues: all CDRs incorrectly associated with a single sensor after system updates. 
Migration involves: stop probes, remove old sensor records from GUI Settings, comment out mirror parameters (mirror_bind_ip, mirror_bind_port, mirror_destination_ip, mirror_destination_port), add server_bind/server_bind_port on central server and server_destination/server_destination_port on probes, set unique id_sensor per probe, choose packetbuffer_sender mode. Common migration issues: probes cannot connect (verify server_password, firewall allows TCP on server_bind_port), all CDRs show same sensor (old mirror config still active or id_sensor not set), PCAPs not accessible in Local Processing mode (central server must reach probes on TCP/5029). TROUBLESHOOTING: In distributed/probe setups with Packet Mirroring, if a probe is not detecting all calls on expected ports, the sipport configuration MUST match on BOTH the probe AND the central analysis host. If the network uses multiple SIP ports (e.g., 5060, 5061, 5080), both systems must have all ports listed in their sipport directive. Common symptom: Probe sees traffic via tcpdump but central server records incomplete CDRs. RTP STREAMS END PREMATURELY: If RTP streams end prematurely in call recordings when using a remote sniffer with a central GUI, this is often caused by incorrect natalias configuration placement. The natalias option must be configured ONLY on the central server that receives and processes packets, NOT on the remote sniffer that captures and forwards them. When packets are forwarded from a remote sniffer to a central server in Packet Mirroring mode, configuring natalias on the remote sniffer causes IP address substitution to happen at capture time, which causes the central server's RTP correlation logic to fail. Solution: Remove natalias from the remote sniffer's voipmonitor.conf, add it to the central server's voipmonitor.conf, then restart both services.
WEB GUI ACCESSIBLE BUT SENSORS CANNOT CONNECT: If the web portal is accessible but sensors cannot connect to the primary server, verify that the MySQL/MariaDB database service on the central server is running and responsive. The central VoIPmonitor service requires a functioning database connection to accept sensor data, even though the web interface (PHP) may remain accessible. Check MySQL service status (systemctl status mariadb or systemctl status mysqld) and inspect MySQL error logs (/var/log/mariadb/mariadb.log or /var/log/mysql/error.log) for critical errors. Restart the database service if needed.
Keywords: distributed architecture, client-server, network bandwidth, throughput, network requirements, 1Gb connection, bandwidth requirements, server_destination, server_bind, server_bind_port, server_destination_port, custom port, packetbuffer_sender, local processing, packet mirroring, remote sensors, failover, encrypted channel, zstd compression, dashboard widgets, statistics, empty dashboard, SIP RTP correlation, split sensors, single sniffer requirement, availability zone, savertp, savesip, saveaudio, centralized storage, packet storage control, call-id merging, multiple sensors same callid, separate records per sensor, receiver instances, mysqltableprefix, firewall, port configuration, connection troubleshooting, probe, central host, central server, sensor, sipport, missing calls, probe not detecting calls, tcpdump, configuration mismatch, mirror mode, migration, mirror_destination, mirror_bind, mirror_bind_ip, mirror_bind_port, mirror_destination_ip, mirror_destination_port, migrate from mirror mode, all CDRs same sensor, system update, upgrade, intermediate server, relay server, multi-sensor aggregation, hub and spoke, chained topology, sensor forwarding, mysql, mariadb, database service, web gui accessible, error logs, sensor health check, management API, sniffer_stat, TCP port 5029, manager_bind, nc netcat, sensor status, sensor monitoring, health status, exit code, consolidated result, check all sensors, encrypted API, encryption disabled, natalias, NAT alias configuration, RTP streams end prematurely, RTP correlation, IP address substitution, NAT traversal, remote sniffer configuration, central server configuration, natalias placement, incomplete recordings, call recordings cut off
Key Questions:
- How do I connect multiple VoIPmonitor sensors to a central server?
- What is the expected network throughput between remote sensors and the central GUI/Database server?
- Is a 1Gb network connection sufficient for remote sensors in VoIPmonitor distributed deployment?
- What network bandwidth is required for Local Processing mode vs Packet Mirroring mode?
- What is the difference between Local Processing and Packet Mirroring?
- Where are CDRs and PCAP files stored in distributed mode?
- What is packetbuffer_sender and when should I use it?
- How do I configure failover for remote sensors?
- Why are dashboard widgets (SIP/RTP/REGISTER counts) empty for a sensor configured to forward packets?
- How do I enable local statistics on a forwarding sensor?
- Can a VoIPmonitor instance act as an intermediate server receiving from multiple sensors and forwarding to a central server?
- How does packetbuffer_sender control traffic forwarding on an intermediate server?
- Can a VoIPmonitor sniffer be both a server (listening for sensors) and a client (sending to central server)?
- What does "a single sniffer cannot be both server and client" mean, and what are the exceptions?
- How do I configure an intermediate server in a hub-and-spoke topology?
- Do I need to manually add remote sensors to the GUI when using an intermediate server?
- How does an intermediate server handle traffic from multiple remote sensors in Packet Mirroring mode?
- How does an intermediate server handle traffic from multiple remote sensors in Local Processing mode?
- Can VoIPmonitor reconstruct a call if SIP signaling is captured by one sniffer and RTP media by another?
- Why does receiver_check_id_sensor not allow merging SIP from one sensor with RTP from another?
- How do I control packet storage when sensors send raw packets to a central server?
- What happens when multiple sensors see the same Call-ID?
- How do I keep records separate when multiple sensors see the same Call-ID?
- How do I configure a custom port for client-server connections?
- What do I do if probes cannot connect to the VoIPmonitor server?
- Why is my remote sensor showing connection refused or timeout?
- Why is a voipmonitor sensor probe not detecting all calls on expected ports?
- Do I need to configure sipport on both the probe and central server in distributed setups?
- What happens if sipport configuration doesn't match between probe and central host?
- How do I migrate from mirror mode to client-server mode?
- Why are all CDRs incorrectly associated with a single sensor after a system update?
- What are the differences between mirror mode and client-server mode?
- How do I configure mirror_destination and server_destination?
- Why are sensors unable to connect to the VoIPMonitor primary server while the web portal remains accessible?
- What should I check if the web GUI works but sensors cannot connect to the central server?
- How do I verify MySQL or MariaDB database service is running on the primary server?
- Where are MySQL error logs located?
- How do I check the health status of a VoIPmonitor sensor?
- What is the command to query sensor status via the management API?
- How do I use sniffer_stat to check sensor health?
- Is there a single command to check all sensors at once?
- How do I check the status of multiple sensors and get a consolidated exit code?
- What is the default management API port for VoIPmonitor sensors?
- Why can I not connect to the sensor management API on TCP port 5029?
- How do I check if sensor management API is encrypted?
- How do I check the health of remote sensors in a distributed deployment?
- Why are RTP streams ending prematurely in call recordings when using a remote sniffer with central GUI?
- Where should I configure natalias in a distributed VoIPmonitor deployment?
- Do I need to configure natalias on the remote sensor or on the central server?
- What happens if natalias is configured on both the remote sensor and central server?