Sniffer distributed architecture



This guide explains how to deploy multiple VoIPmonitor sensors in a distributed architecture using the modern Client-Server mode.

Overview

VoIPmonitor v20+ uses a Client-Server architecture for distributed deployments. Remote sensors connect to a central server via encrypted TCP channel.

Mode               What is sent   Processing location   Use case
Local Processing   CDRs only      Remote sensor         Multiple sites, low bandwidth
Packet Mirroring   Raw packets    Central server        Centralized analysis, low-resource remotes

The mode is controlled by a single option: packetbuffer_sender

For comprehensive deployment options including on-host vs dedicated sensors, traffic forwarding methods (SPAN, GRE, TZSP, VXLAN), and NFS/SSHFS alternatives, see VoIPmonitor Deployment & Topology Guide.

Client-Server Mode

Architecture

Configuration

Remote Sensor (client):

id_sensor               = 2                    # unique per sensor
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_strong_password

# Choose one:
packetbuffer_sender     = no     # Local Processing: analyze locally, send CDRs
# packetbuffer_sender   = yes    # Packet Mirroring: send raw packets

interface               = eth0
sipport                 = 5060
# No MySQL credentials needed on remote sensors

Important: Source IP Binding with manager_ip

For remote sensors with multiple IP addresses (e.g., in High Availability setups with a floating/virtual IP), use the manager_ip parameter to bind the outgoing connection to a specific static IP address. This ensures the central server sees a consistent source IP from each sensor, preventing connection issues during failover.

# On sensor with multiple interfaces (e.g., static IP + floating HA IP)
manager_ip              = 10.0.0.5     # Bind to the static IP address
server_destination      = 192.168.1.100
# The outgoing connection will use 10.0.0.5 as the source IP instead of the floating IP

Useful scenarios:

  • HA pairs: Sensors use static IPs while floating IP is only for failover management
  • Multiple VNICs: Explicit source IP selection on systems with multiple virtual interfaces
  • Network ACLs: Ensure connections originate from whitelisted IP addresses
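
To confirm which source address the sensor actually uses, inspect the established connection on the sensor. The command below is a generic diagnostic and assumes the default port 60024; adjust it if you use a custom port:

# On the remote sensor: show the connection to the central server and its local (source) address
ss -tnp | grep ":60024"
# The local address column should show the manager_ip value (10.0.0.5 in the example above), not the floating IP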

Central Server:

server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = your_strong_password

mysqlhost               = localhost
mysqldb                 = voipmonitor
mysqluser               = voipmonitor
mysqlpassword           = db_password

# If receiving raw packets (packetbuffer_sender=yes on clients):
sipport                 = 5060
# ... other sniffer options

Custom Port Configuration

Critical: The server_bind_port on the central server must match the server_destination_port on each remote sensor. If these ports do not match, sensors cannot connect.

# Central Server (listening on custom port 50291)
server_bind             = 0.0.0.0
server_bind_port        = 50291      # Custom port (default is 60024)
server_password         = your_strong_password

# Remote Sensor (must match the server's custom port)
server_destination      = 45.249.9.2
server_destination_port = 50291      # MUST match server_bind_port
server_password         = your_strong_password

Common reasons to use a custom port:

  • Firewall restrictions that block the default port 60024
  • Running multiple VoIPmonitor instances on the same server (each with a different port)
  • Compliance requirements for non-standard ports
  • Avoiding port conflicts with other services

Troubleshooting Connection Failures:

If probes cannot connect to the server:

1. Verify ports match on both sides:

   # On central server - check which port it is listening on
   ss -tulpn | grep voipmonitor
   # Should show: voipmonitor LISTEN 0.0.0.0:50291

2. Test connectivity from remote sensor:

   # Test TCP connection to the server's custom port
   nc -zv 45.249.9.2 50291
   # Success: "Connection to 45.249.9.2 50291 port [tcp/*] succeeded!"
   # Timeout/Refused: Check firewall or misconfigured port

3. Ensure firewall allows the custom port:

   # Allow inbound TCP on custom port (example for firewalld)
   firewall-cmd --permanent --add-port=50291/tcp
   firewall-cmd --reload

4. Check logs on both sides:

   journalctl -u voipmonitor -f
   # Look for: "connecting to server", "connection refused", or "timeout"

5. Verify the MySQL database is accessible:

   If the web portal is accessible but sensors cannot connect, verify that the MySQL/MariaDB database service on the central server is running and responsive. The central VoIPmonitor service requires a functioning database connection to accept sensor data.

   # Check if MySQL service is running
   systemctl status mariadb
   # or
   systemctl status mysqld

   # Check for database errors in MySQL error log
   # Common locations:
   tail -50 /var/log/mariadb/mariadb.log
   tail -50 /var/log/mysql/error.log
   # Look for critical errors that might prevent database connections

If MySQL is down or experiencing critical errors, the central VoIPmonitor server may not be able to accept sensor connections even though the web interface (PHP) remains accessible. Restart the database service if needed and monitor the logs for recurring errors.

After changing port configuration, restart the service:

systemctl restart voipmonitor

Connection Compression

The client-server channel supports compression to reduce bandwidth usage:

# On both client and server (default: zstd)
server_type_compress = zstd

Available options: zstd (default, recommended), gzip, lzo, none

High Availability (Failover)

Remote sensors can specify multiple central server IPs for automatic failover:

# Remote sensor configuration with failover
server_destination = 192.168.0.1, 192.168.0.2

If the primary server becomes unavailable, the sensor automatically connects to the next server in the list.

Local Processing vs Packet Mirroring

                       Local Processing       Packet Mirroring
packetbuffer_sender    no (default)           yes
Packet analysis        On remote sensor       On central server
PCAP storage           On remote sensor       On central server
WAN bandwidth          Low (CDRs only)        High (full packets)
Remote CPU load        Higher                 Minimal
Use case               Standard multi-site    Low-resource remotes

PCAP Access in Local Processing Mode

When using Local Processing, PCAPs are stored on remote sensors. The GUI retrieves them via the central server, which proxies requests to each sensor's management port (TCP/5029).

Firewall requirements:

  • Central server must reach remote sensors on TCP/5029
  • Remote sensors must reach central server on TCP/60024
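
As an illustration, the following firewalld rules open the required ports (firewalld is assumed here; translate to iptables/nftables or your cloud security groups as needed):

# On each REMOTE SENSOR - allow the central server to fetch PCAPs via the management port
firewall-cmd --permanent --add-port=5029/tcp
firewall-cmd --reload

# On the CENTRAL SERVER - allow incoming sensor connections
firewall-cmd --permanent --add-port=60024/tcp
firewall-cmd --reload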

Dashboard Statistics

Dashboard widgets (SIP/RTP/REGISTER counts) depend on where packet processing occurs:

Configuration                                   Where statistics appear
packetbuffer_sender = yes (Packet Mirroring)    Central server only
packetbuffer_sender = no (Local Processing)     Both sensor and central server

Note: If you are using Packet Mirroring mode (packetbuffer_sender=yes) and see empty dashboard widgets for the forwarding sensor, this is expected behavior. The sender sensor only captures and forwards raw packets - it does not create database records or statistics. The central server performs all processing.

Enabling Local Statistics on Forwarding Sensors

If you need local statistics on a sensor that was previously configured to forward packets:

# On the forwarding sensor
packetbuffer_sender = no

This disables packet forwarding and enables full local processing. Note that this increases CPU and RAM usage on the sensor since it must perform full SIP/RTP analysis.
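
As with any configuration change, restart the sensor afterwards so the new mode takes effect:

systemctl restart voipmonitor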

Controlling Packet Storage in Packet Mirroring Mode

When using Packet Mirroring (packetbuffer_sender=yes), the central server processes raw packets received from sensors. The save* options on the central server control which packets are saved to disk.

# Central Server Configuration (receiving raw packets from sensors)
server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = your_strong_password

# Database Configuration
mysqlhost               = localhost
mysqldb                 = voipmonitor
mysqluser               = voipmonitor
mysqlpassword           = db_password

# Sniffer options needed when receiving raw packets:
sipport                 = 5060

# CONTROL PACKET STORAGE HERE:
# These settings on the central server determine what gets saved:
savertp                 = yes          # Save RTP packets
savesip                 = yes          # Save SIP packets
saveaudio               = wav          # Export audio recordings (optional)

Important: Central Server Controls Storage
Key Point: When sensors send raw packets to a central server, the storage is controlled by the savertp, savesip, and saveaudio options configured on the central server, not on the individual sensors. The sensors are only forwarding raw packets - they do not make decisions about what to save unless you are using Local Processing mode.

This centralized control allows you to:

  • Enable/disable packet types (RTP, SIP, audio) from one location
  • Adjust storage settings without touching each sensor
  • Apply capture rules from the central server to filter traffic

Data Storage Summary

  • CDRs: Always stored in MySQL on central server
  • PCAPs:
    • Local Processing → stored on each remote sensor
    • Packet Mirroring → stored on central server
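
To confirm where PCAPs actually land on a given host, check that host's spool directory (controlled by the spooldir option; /var/spool/voipmonitor is the usual default):

# On the host expected to hold the PCAPs (remote sensor or central server)
grep -E "^spooldir" /etc/voipmonitor.conf
ls /var/spool/voipmonitor/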

Handling Same Call-ID from Multiple Sensors

When a call passes through multiple sensors that see the same SIP Call-ID, VoIPmonitor automatically merges the SIP packets into a single CDR on the central server. This is expected behavior when using Packet Mirroring mode.

Call-ID Merging Behavior
What happens: If Sensor A and Sensor B both forward packets for a call with the same Call-ID to the central server, VoIPmonitor creates a single CDR containing SIP packets from both sensors. The RTP packets are captured from whichever sensor processed the media.
Why: VoIPmonitor uses the SIP Call-ID as the primary unique identifier. When multiple sensors forward packets with the same Call-ID to a central server, they are automatically treated as one call.
Is it a problem? Usually not. For most deployments, combining records from multiple sensors for the same call (different call legs passing through different points in the network) is the desired behavior.

Preventing Duplicate CDRs in Local Processing Mode

When using Local Processing mode (packetbuffer_sender=no), each remote probe processes its own packets and writes CDRs directly to a central database. If multiple probes capture the same call (e.g., redundant taps or overlapping SPAN ports), this creates duplicate CDR entries in the database.

To prevent duplicates in this scenario, use the cdr_check_exists_callid option on all probes:

cdr_check_exists_callid = no (default)
  Each probe creates its own CDR row. Multiple probes capturing the same call result in duplicate entries with the same Call-ID but different id_sensor values.

cdr_check_exists_callid = yes
  Probes check for an existing CDR with the same Call-ID before inserting. If found, they update the existing row instead of creating a new one. The final CDR will be associated with the id_sensor of the probe that last processed the call.

Prerequisites:

  • MySQL user must have UPDATE privileges on the cdr table (an example grant is shown below)
  • All probes must be configured with this setting

# Add to voipmonitor.conf on each probe (Local Processing mode only)
[general]
cdr_check_exists_callid = yes
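
If the probes' database user lacks UPDATE rights on the cdr table, a grant along the following lines can be applied on the central database server (user, host, and database names are examples; adjust to your environment):

# Run on the central MySQL/MariaDB server
mysql -u root -p -e "GRANT SELECT, INSERT, UPDATE ON voipmonitor.cdr TO 'voipmonitor'@'%'; FLUSH PRIVILEGES;"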

Note: This setting is only useful in Local Processing mode. In Packet Mirroring mode (packetbuffer_sender=yes), the central server automatically merges packets with the same Call-ID, so this option is not needed.

Keeping Records Separate Per Sensor

If you need to keep records completely separate when multiple sensors see the same Call-ID (e.g., each sensor should create its own independent CDR even for calls with overlapping Call-IDs), you must run multiple receiver instances on the central server.

# Receiver Instance 1 (for Sensor A)
[receiver_sensor_a]
server_bind             = 0.0.0.0
server_bind_port        = 60024
mysqlhost               = localhost
mysqldb                 = voipmonitor
mysqluser               = voipmonitor
mysqlpassword           = <password>
mysqltableprefix        = sensor_a_  # Separate CDR tables
id_sensor               = 2
# ... other options

# Receiver Instance 2 (for Sensor B)
[receiver_sensor_b]
server_bind             = 0.0.0.0
server_bind_port        = 60025  # Different port
mysqlhost               = localhost
mysqldb                 = voipmonitor
mysqluser               = voipmonitor
mysqlpassword           = <password>
mysqltableprefix        = sensor_b_  # Separate CDR tables
id_sensor               = 3
# ... other options

Each receiver instance runs as a separate process, listens on a different port, and can write to separate database tables (using mysqltableprefix). Configure each sensor to connect to its dedicated receiver port.
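
One possible way to run the two receivers side by side is to start the sniffer twice, each instance pointing at its own configuration file (the file names below are examples):

# Example layout: one configuration file per receiver instance
voipmonitor --config-file /etc/voipmonitor_sensor_a.conf
voipmonitor --config-file /etc/voipmonitor_sensor_b.conf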

For more details on correlating multiple call legs from the same call, see Merging_or_correlating_multiple_call_legs.

GUI Visibility

Remote sensors appear automatically when connected. To customize names or configure additional settings:

  1. Go to GUI → Settings → Sensors
  2. Sensors are identified by their id_sensor value

Troubleshooting Distributed Deployments

Probe Not Detecting All Calls on Expected Ports

If a remote sensor (probe) configured for packet mirroring is not detecting all calls on expected ports, check configuration on both the probe and the central analysis host.

Critical: sipport Must Match in Distributed Deployments
The Issue: In distributed/probe setups using Packet Mirroring (packetbuffer_sender=yes), calls will be missing if the sipport configuration is not aligned between the probe and central server. Common symptom: Probe sees traffic via tcpdump but central server records incomplete CDRs.
Configuration Requirement: The probe and central host must have consistent sipport values. If your network uses SIP on multiple ports (e.g., 5060, 5061, 5080, 6060), ALL ports must be listed on both systems.

The solution involves four steps:

1. Verify traffic reachability on the probe

Use tcpdump on the probe VM to confirm SIP packets for the missing calls are arriving on the expected ports.

# On the probe VM
tcpdump -i eth0 -n port 5060

2. Check the probe's voipmonitor.conf

Ensure the sipport directive on the probe includes all necessary SIP ports used in your network.

# /etc/voipmonitor.conf on the PROBE
sipport = 5060,5061,5080,6060

3. Check the central analysis host's voipmonitor.conf

This is the most common cause of missing calls in distributed setups. The central analysis host (specified by server_bind on the central server, or by server_destination configured on the probe) must also have the sipport directive configured with the same list of ports used by all probes.

# /etc/voipmonitor.conf on the CENTRAL HOST
sipport = 5060,5061,5080,6060

4. Restart both services

Apply the configuration changes:

# On both probe and central host
systemctl restart voipmonitor

Why Both Systems Must Match
Probe side: The probe captures packets from the network interface. Its sipport setting determines which UDP ports it considers as SIP traffic to capture and forward.
Central server side: When receiving raw packets in Packet Mirroring mode, the central server analyzes the packets locally. Its sipport setting determines which ports it interprets as SIP during analysis. If a port is missing here, packets are captured but not recognized as SIP, resulting in missing CDRs.

Quick Diagnosis Commands

On the probe:

# Check which sipport values are configured
grep -E "^sipport" /etc/voipmonitor.conf

# Verify traffic is arriving on expected ports
tcpdump -i eth0 -nn -c 10 port 5061

On the central server:

# Check which sipport values are configured
grep -E "^sipport" /etc/voipmonitor.conf

# Check syslog for analysis activity (should see processing packets)
tail -f /var/log/syslog | grep voipmonitor

If probes still miss calls after ensuring sipport matches on both sides, check the full troubleshooting guide for other potential issues such as network connectivity, firewall rules, or interface misconfiguration.

Legacy: Mirror Mode

Note: The older mirror_destination/mirror_bind options still exist, but the modern Client-Server approach with packetbuffer_sender=yes is preferred because it provides encryption and simpler management.

Critical Requirement: SIP and RTP must be captured by the same sniffer instance

VoIPmonitor cannot reconstruct a complete call record if SIP signaling and RTP media are captured by different sniffer instances.

Important: Single sniffer requirement
What does not work:
  • Sniffer A in Availability Zone 1 captures SIP signaling
  • Sniffer B in Availability Zone 2 captures RTP media
  • Result: Incomplete call record; the GUI cannot reconstruct the call
Why: Call correlation requires a single sniffer instance to process both SIP and RTP packets from the same call. The sniffer correlates SIP signaling (INVITE, BYE, etc.) with RTP media in real-time during packet processing. If packets are split across multiple sniffers, the correlation cannot occur.
Solution: Forward traffic so that one sniffer processes both SIP and RTP for each call. Options:
  • Route both SIP and RTP through the same Availability Zone for capture
  • Use Packet Mirroring mode to forward complete traffic (SIP+RTP) to a central server that processes everything
  • Configure network routers/firewalls to forward the required stream to the correct zone

Configuration parameters like receiver_check_id_sensor and cdr_check_exists_callid are for other scenarios (multipath routing, duplicate Call-ID handling) and do NOT enable split SIP/RTP correlation. Setting these parameters does not allow SIP from one sniffer to be merged with RTP from another sniffer.

Limitations

  • All sensors must use the same server_password
  • A single sniffer cannot be both server and client simultaneously
  • Each sensor requires a unique id_sensor (< 65536)
  • Time synchronization (NTP) is critical for correlating calls across sensors (a quick check is shown below)
  • Maximum allowed time difference between client and server: 2 seconds (configurable via client_server_connect_maximum_time_diff_s)
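
A quick way to verify clock synchronization on each sensor and on the central server (the exact command depends on which time daemon is installed):

# Check whether the system clock is NTP-synchronized
timedatectl | grep -i "synchronized"
# If chrony is used, show the current offset and sync source
chronyc tracking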

For a complete reference of all client-server parameters, see Sniffer Configuration: Distributed Operation.

AI Summary for RAG

Summary: VoIPmonitor v20+ uses Client-Server architecture for distributed deployments with encrypted TCP connections (default port 60024 with zstd compression, configurable via server_bind_port and server_destination_port). Two modes: Local Processing (packetbuffer_sender=no) analyzes locally and sends CDRs, Packet Mirroring (packetbuffer_sender=yes) forwards raw packets. Dashboard widgets for SIP/RTP/REGISTER counts: with Packet Mirroring, statistics appear only on central server (sender has empty widgets); with Local Processing, statistics appear on both sensor and central server. To enable local statistics on a forwarding sensor, set packetbuffer_sender=no (increases CPU/RAM usage). Supports failover with multiple server IPs. CDRs stored centrally; PCAPs on sensors (Local Processing) or centrally (Packet Mirroring). In Packet Mirroring mode, the save* options (savertp, savesip, saveaudio) configured on the CENTRAL SERVER control storage for packets received from sensors. When multiple sensors forward packets with the same Call-ID, VoIPmonitor automatically merges them into a single CDR. To keep records separate per sensor with same Call-ID, run multiple receiver instances on different ports with separate database tables. CRITICAL: A single sniffer instance MUST process both SIP signaling and RTP media for the same call. Splitting SIP and RTP across different sniffers creates incomplete call records that cannot be reconstructed. For custom port configuration: server_bind_port on central server MUST match server_destination_port on remote sensors. Common reasons for custom ports: firewall restrictions, multiple instances on same server, compliance requirements, avoiding port conflicts. TROUBLESHOOTING: In distributed/probe setups with Packet Mirroring, if a probe is not detecting all calls on expected ports, the sipport configuration MUST match on BOTH the probe AND the central analysis host. If the network uses multiple SIP ports (e.g., 5060, 5061, 5080), both systems must have all ports listed in their sipport directive. Common symptom: Probe sees traffic via tcpdump but central server records incomplete CDRs.

Keywords: distributed architecture, client-server, server_destination, server_bind, server_bind_port, server_destination_port, custom port, packetbuffer_sender, local processing, packet mirroring, remote sensors, failover, encrypted channel, zstd compression, dashboard widgets, statistics, empty dashboard, SIP RTP correlation, split sensors, single sniffer requirement, availability zone, savertp, savesip, saveaudio, centralized storage, packet storage control, call-id merging, multiple sensors same callid, separate records per sensor, receiver instances, mysqltableprefix, firewall, port configuration, connection troubleshooting, probe, central host, central server, sensor, sipport, missing calls, probe not detecting calls, tcpdump, configuration mismatch

Key Questions:

  • How do I connect multiple VoIPmonitor sensors to a central server?
  • What is the difference between Local Processing and Packet Mirroring?
  • Where are CDRs and PCAP files stored in distributed mode?
  • What is packetbuffer_sender and when should I use it?
  • How do I configure failover for remote sensors?
  • Why are dashboard widgets (SIP/RTP/REGISTER counts) empty for a sensor configured to forward packets?
  • How do I enable local statistics on a forwarding sensor?
  • Can VoIPmonitor reconstruct a call if SIP signaling is captured by one sniffer and RTP media by another?
  • Why does receiver_check_id_sensor not allow merging SIP from one sensor with RTP from another?
  • How do I control packet storage when sensors send raw packets to a central server?
  • What happens when multiple sensors see the same Call-ID?
  • How do I keep records separate when multiple sensors see the same Call-ID?
  • How do I configure a custom port for client-server connections?
  • What do I do if probes cannot connect to the VoIPmonitor server?
  • Why is my remote sensor showing connection refused or timeout?
  • Why is a voipmonitor sensor probe not detecting all calls on expected ports?
  • Do I need to configure sipport on both the probe and central server in distributed setups?
  • What happens if sipport configuration doesn't match between probe and central host?