Sniffer distributed architecture

From VoIPmonitor.org

{{DISPLAYTITLE:Distributed Architecture: Client-Server Mode}}


This guide covers deploying multiple VoIPmonitor sensors in a distributed architecture using Client-Server mode (v20+).


= Overview =

VoIPmonitor v20+ uses a '''Client-Server architecture''' for distributed deployments. Remote sensors connect to a central server over an encrypted TCP channel (default port 60024, zstd compression).


{| class="wikitable"
|-
! Mode !! <code>packetbuffer_sender</code> !! What is Sent !! Processing Location !! Use Case
|-
| '''Local Processing''' || <code>no</code> (default) || CDRs only || Remote sensor || Multi-site, low bandwidth
|-
| '''Packet Mirroring''' || <code>yes</code> || Raw packets || Central server || Centralized analysis, low-resource remotes
|}

For comprehensive deployment options including on-host vs dedicated sensors, traffic forwarding methods (SPAN, GRE, TZSP, VXLAN), and NFS/SSHFS alternatives, see [[Sniffing_modes|VoIPmonitor Deployment & Topology Guide]].


<kroki lang="plantuml">
@startuml
skinparam shadowing false
skinparam defaultFontName Arial

rectangle "Remote Sensor 1" as S1
rectangle "Remote Sensor 2" as S2
rectangle "Central Server\n(GUI + Database)" as CS
database "MySQL" as DB

S1 --> CS : TCP/60024
S2 --> CS : TCP/60024
CS --> DB
@enduml
</kroki>


== Use Cases ==
 
'''AWS VPC Traffic Mirroring Alternative:'''
If experiencing packet loss with AWS VPC Traffic Mirroring (VXLAN overhead, MTU fragmentation), use client-server mode instead:
* Install VoIPmonitor on each source EC2 instance
* Send via encrypted TCP to central server
* Eliminates VXLAN encapsulation and MTU issues
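
Under this approach each EC2 instance runs as an ordinary client sensor capturing its own traffic. A minimal sketch (the server address is a placeholder; <code>interface</code> and <code>id_sensor</code> are chosen per instance):

<syntaxhighlight lang="ini">
# voipmonitor.conf on an EC2 instance (on-host capture, no VXLAN mirror)
id_sensor               = 10            # unique per instance
interface               = eth0          # capture the instance's own traffic
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_strong_password
packetbuffer_sender     = yes           # forward raw packets to the central server
</syntaxhighlight>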
 
= Configuration =
 
== Remote Sensor (Client) ==


<syntaxhighlight lang="ini">
id_sensor               = 2                    # Unique per sensor (1-65535)
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_strong_password

# Choose mode:
packetbuffer_sender     = no    # Local Processing: analyze locally, send CDRs
# packetbuffer_sender   = yes   # Packet Mirroring: send raw packets
</syntaxhighlight>


{{Tip|1=For HA setups with floating IPs, use <code>manager_ip = 10.0.0.5</code> to bind outgoing connections to a static IP address, so the central server sees a consistent source IP from each sensor during failover.}}

<syntaxhighlight lang="ini">
# On a sensor with multiple interfaces (e.g., static IP + floating HA IP)
manager_ip              = 10.0.0.5    # Bind to the static IP address
server_destination      = 192.168.1.100
# Outgoing connections use 10.0.0.5 as the source IP instead of the floating IP
</syntaxhighlight>

== Central Server ==
<syntaxhighlight lang="ini">
server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = your_strong_password

# If receiving raw packets (packetbuffer_sender=yes on clients):
sipport                 = 5060
savertp                 = yes
savesip                 = yes
</syntaxhighlight>


{{Warning|1='''Critical:''' Exclude <code>server_bind_port</code> from <code>sipport</code> on the central server. Including it causes continuously increasing memory usage.

<syntaxhighlight lang="ini">
# WRONG - includes sensor communication port:
sipport = 1-65535
</syntaxhighlight>

<syntaxhighlight lang="ini">
# CORRECT - excludes port 60024:
sipport = 1-60023,60025-65535
</syntaxhighlight>}}


== Key Configuration Rules ==

{| class="wikitable"
|-
! Rule !! Applies To !! Why
|-
| <code>server_bind_port</code> must match <code>server_destination_port</code> || Both || Connection fails if mismatched
|-
| <code>sipport</code> must match on probe and central server || Packet Mirroring || Missing ports = missing calls
|-
| <code>natalias</code> only on central server || Packet Mirroring || Prevents RTP correlation issues
|-
| Each sensor needs unique <code>id_sensor</code> || All || Required for identification
|}
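
The <code>sipport</code> exclusion required above does not need to be typed by hand; it can be derived from the bind port. A small POSIX-shell sketch (the port value is an example):

```shell
# Derive a sipport range that excludes the server communication port
BIND_PORT=60024
SIPPORT_RANGE="1-$((BIND_PORT - 1)),$((BIND_PORT + 1))-65535"
echo "sipport = $SIPPORT_RANGE"
```

For the default port 60024 this prints <code>sipport = 1-60023,60025-65535</code>, matching the CORRECT example shown earlier.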


= Local Processing vs Packet Mirroring =
 
{| class="wikitable"
|-
! Aspect !! Local Processing !! Packet Mirroring
|-
| '''<code>packetbuffer_sender</code>''' || <code>no</code> (default) || <code>yes</code>
|-
| '''Processing location''' || Remote sensor || Central server
|-
| '''PCAP storage''' || Remote sensor || Central server
|-
| '''WAN bandwidth''' || Low (CDRs only, 1Gb sufficient) || High (full packets)
|-
| '''Remote CPU load''' || Higher || Minimal
|-
| '''Capture rules applied''' || On sensor || On central server only
|}


== PCAP Access in Local Processing Mode ==

PCAPs are stored on remote sensors. The GUI retrieves them through the central server, which proxies the request to the sensor '''over the existing TCP/60024 connection''' - the same persistent encrypted channel the sensor uses for sending CDRs. This connection is bidirectional; the central server does not open any separate connection back to the sensor.

'''Firewall requirements:'''

{| class="wikitable"
|-
! Direction !! Port !! Purpose
|-
| Remote sensors → Central server || TCP/60024 || Persistent encrypted channel (CDRs from sensor, PCAP requests from server - bidirectional)
|-
| GUI → Central server || TCP/5029 || Manager API (sensor status, active calls, configuration)
|-
| GUI → Central server || TCP/60024 || Server API (list connected sensors, proxy PCAP retrieval)
|}
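
If a firewall sits between the components, the directions above translate into a small rule set. An nftables sketch (addresses are hypothetical examples for a sensor network and a GUI host):

<syntaxhighlight lang="text">
# nftables sketch on the central server (inet filter input chain)
# 10.0.1.0/24 = sensor network, 10.0.0.10 = GUI host (example addresses)
ip saddr 10.0.1.0/24 tcp dport 60024 accept          # sensors -> server channel
ip saddr 10.0.0.10 tcp dport { 5029, 60024 } accept  # GUI -> manager/server APIs
</syntaxhighlight>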


{{Note|1=The central server does '''not''' initiate connections to remote sensors. All server↔sensor communication happens over the single TCP/60024 connection that the sensor established.}}

{{Tip|1=Packet Mirroring (<code>packetbuffer_sender=yes</code>) '''automatically deduplicates calls''' - the central server merges packets from all probes for the same Call-ID into a single unified CDR. This also ensures one logical call only consumes one license channel.}}

= Advanced Topics =

== High Availability (Failover) ==

Remote sensors can specify multiple central servers:

<syntaxhighlight lang="ini">
server_destination = 192.168.0.1, 192.168.0.2
</syntaxhighlight>

If the primary server is unavailable, the sensor automatically connects to the next server in the list.


== Connection Compression ==

<syntaxhighlight lang="ini">
# On both client and server (default: zstd)
server_type_compress = zstd  # Options: zstd, gzip, lzo, none
</syntaxhighlight>


== Intermediate Server (Hub-and-Spoke) ==

An intermediate server can receive from multiple sensors and forward to a central server:

<kroki lang="plantuml">
@startuml
skinparam shadowing false
skinparam defaultFontName Arial

rectangle "Remote Sensors" as RS
rectangle "Intermediate Server" as INT
rectangle "Central Server" as CS
database "MySQL" as DB

RS --> INT : TCP/60024
INT --> CS : TCP/60024
CS --> DB
@enduml
</kroki>


<syntaxhighlight lang="ini">
# On INTERMEDIATE SERVER
id_sensor               = 100

# Receive from remote sensors
server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = sensor_password

# Forward to central server
server_destination      = central.server.ip
server_destination_port = 60024
packetbuffer_sender     = no    # or yes, depending on desired mode
sipport                 = 5060
</syntaxhighlight>


{{Note|1=This works because the intermediate server does NOT do local packet capture - it only relays. The original remote sensors must be manually added to GUI Settings for visibility.}}
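
On the sensor side nothing special is required - each remote sensor simply points at the intermediate server as if it were the central one. A sketch (the address is a placeholder):

<syntaxhighlight lang="ini">
# voipmonitor.conf on a remote sensor behind the intermediate server
id_sensor               = 3
server_destination      = intermediate.server.ip   # not the central server
server_destination_port = 60024
server_password         = sensor_password
packetbuffer_sender     = yes
</syntaxhighlight>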


== Multiple Receivers for Packet Mirroring ==

{{Warning|1=Multiple sensors with <code>packetbuffer_sender=yes</code> sending to a '''single receiver instance''' can cause call processing conflicts (calls appear in Active Calls but are missing from CDRs).}}


'''Solution:''' Run separate receiver instances on different hosts, each dedicated to specific sensors:

<syntaxhighlight lang="ini">
# Receiver Instance 1 (Host 1, for Sensor A)
server_bind_port        = 60024
id_sensor               = 1

# Receiver Instance 2 (Host 2, for Sensor B)
server_bind_port        = 60024
id_sensor               = 2
</syntaxhighlight>


Alternatively, use '''Local Processing mode''' (<code>packetbuffer_sender=no</code>), which processes calls independently on each sensor.


== Preventing Duplicate CDRs (Local Processing) ==

When multiple probes capture the same call in Local Processing mode:


<syntaxhighlight lang="ini">
# On each probe
cdr_check_exists_callid = yes
</syntaxhighlight>

This checks for an existing CDR with the same Call-ID before inserting, and requires MySQL UPDATE privileges on the <code>cdr</code> table.
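
The required privilege can be granted up front. A sketch, assuming the default <code>voipmonitor</code> database and user names (adjust to your schema):

<syntaxhighlight lang="sql">
-- Allow probes to update existing CDR rows instead of inserting duplicates
GRANT UPDATE ON voipmonitor.cdr TO 'voipmonitor'@'%';
FLUSH PRIVILEGES;
</syntaxhighlight>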


== Critical: SIP and RTP Must Be Captured Together ==

VoIPmonitor cannot correlate SIP and RTP from different sniffer instances. A '''single sniffer must process both SIP and RTP''' for each call. Parameters like <code>cdr_check_exists_callid</code> do NOT enable split SIP/RTP correlation.


=== Split SIP/RTP with Packet Mirroring Mode ===

{{Note|1='''Exception for Packet Mirroring mode:''' The above limitation applies to '''Local Processing mode''' (<code>packetbuffer_sender=no</code>), where each sensor processes calls independently. In '''Packet Mirroring mode''' (<code>packetbuffer_sender=yes</code>), the central server receives raw packets from multiple remote sensors and processes them together. This allows scenarios where SIP and RTP are captured on separate nodes - configure both as packet senders and let the central server correlate them into single unified CDRs.}}


Example scenario: Separate SIP signaling node and RTP handling node:

<syntaxhighlight lang="ini">
# SIP Signaling Node (packet sender)
id_sensor               = 1
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password

# RTP Handling Node (packet sender)
id_sensor               = 2
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password
</syntaxhighlight>

The central server merges packets from both senders by Call-ID, creating unified CDRs with complete SIP and RTP data.


For more details on correlating multiple call legs from the same call, see [[Merging_or_correlating_multiple_call_legs]].


=== HEP Protocol in Client/Server Mode ===

VoIPmonitor supports receiving HEP-encapsulated traffic on sniffer clients and forwarding it to a central server. This enables distributed capture from HEP sources (Kamailio, OpenSIPS, rtpproxy, FreeSWITCH) in a client/server architecture.

'''Scenario:''' SIP proxy and RTP proxy at different locations sending HEP to remote sniffer clients:


<syntaxhighlight lang="ini">
# Remote Sniffer Client A (receives HEP from Kamailio)
id_sensor               = 1
hep                     = yes
hep_bind_port           = 9060
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password

# Remote Sniffer Client B (receives HEP from rtpproxy)
id_sensor               = 2
hep                     = yes
hep_bind_port           = 9060
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password
</syntaxhighlight>


{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
The central server receives packets from both clients and correlates them into unified CDRs using standard SIP Call-ID and IP:port from SDP.
|-
! colspan="2" style="background:#ffc107;" | Critical: sipport Must Match in Distributed Deployments
|-
| style="vertical-align: top;" | '''The Issue:'''
| In distributed/probe setups using Packet Mirroring (<code>packetbuffer_sender=yes</code>), calls will be missing if the <code>sipport</code> configuration is not aligned between the probe and central server. Common symptom: Probe sees traffic via <code>tcpdump</code> but central server records incomplete CDRs.
|-
| style="vertical-align: top;" | '''Configuration Requirement:'''
| The probe and central host must have consistent <code>sipport</code> values. If your network uses SIP on multiple ports (e.g., 5060, 5061, 5080, 6060), ALL ports must be listed on both systems.
|}


The solution involves three steps:
{{Note|1=This also works for IPFIX (Oracle SBCs) and RibbonSBC protocols forwarded via client/server mode.}}


;1. Verify traffic reachability on the probe:
'''Alternative: Direct HEP to single sniffer'''
Use <code>tcpdump</code> on the probe VM to confirm SIP packets for the missing calls are arriving on the expected ports.
<pre>
# On the probe VM
tcpdump -i eth0 -n port 5060
</pre>


;2. Check the probe's ''voipmonitor.conf'':
If both HEP sources can reach the same sniffer directly, no client/server setup is needed:
Ensure the <code>sipport</code> directive on the probe includes all necessary SIP ports used in your network.
<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the PROBE
sipport = 5060,5061,5080,6060
</syntaxhighlight>


;3. Check the central analysis host's ''voipmonitor.conf'':
'''This is the most common cause of missing calls in distributed setups.''' The central analysis host (specified by <code>server_bind</code> on the central server, or by <code>server_destination</code> configured on the probe) must also have the <code>sipport</code> directive configured with the same list of ports used by all probes.
<syntaxhighlight lang="ini">
<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the CENTRAL HOST
# Single sniffer receiving HEP from multiple sources
sipport = 5060,5061,5080,6060
hep                    = yes
hep_bind_port          = 9060
interface              = eth0  # Can also sniff locally if needed
</syntaxhighlight>
</syntaxhighlight>


;4. Restart both services:
Both Kamailio (SIP) and rtpproxy (RTP) send HEP to this sniffer on port 9060. The sniffer correlates them automatically based on Call-ID and SDP IP:port.
Apply the configuration changes:
= Sensor Health Monitoring =
<syntaxhighlight lang="bash">
# On both probe and central host
systemctl restart voipmonitor
</syntaxhighlight>


{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
== Management API ==
|-
! colspan="2" style="background:#4A90E2; color: white;" | Why Both Systems Must Match
|-
| style="vertical-align: top;" | '''Probe side:'''
| The probe captures packets from the network interface. Its <code>sipport</code> setting determines which UDP ports it considers as SIP traffic to capture and forward.
|-
| style="vertical-align: top;" | '''Central server side:'''
| When receiving raw packets in Packet Mirroring mode, the central server analyzes the packets locally. Its <code>sipport</code> setting determines which ports it interprets as SIP during analysis. If a port is missing here, packets are captured but not recognized as SIP, resulting in missing CDRs.
|}


=== Quick Diagnosis Commands ===
Query sensor status via TCP port 5029:


On the probe:
<syntaxhighlight lang="bash">
<syntaxhighlight lang="bash">
# Check which sipport values are configured
echo 'sniffer_stat' | nc <sensor_ip> 5029
grep -E "^sipport" /etc/voipmonitor.conf
</syntaxhighlight>


# Verify traffic is arriving on expected ports
Returns JSON with status, version, active calls, packets per second, etc.
tcpdump -i eth0 -nn -c 10 port 5061
 
</syntaxhighlight>
== Multi-Sensor Health Check Script ==


On the central server:
<syntaxhighlight lang="bash">
<syntaxhighlight lang="bash">
# Check which sipport values are configured
#!/bin/bash
grep -E "^sipport" /etc/voipmonitor.conf
SENSORS=("192.168.1.10:5029" "192.168.1.11:5029")
 
for SENSOR in "${SENSORS[@]}"; do
# Check syslog for analysis activity (should see processing packets)
    IP=$(echo $SENSOR | cut -d: -f1)
tail -f /var/log/syslog | grep voipmonitor
    PORT=$(echo $SENSOR | cut -d: -f2)
    STATUS=$(echo 'sniffer_stat' | nc -w 2 $IP $PORT 2>/dev/null | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
    echo "$IP: ${STATUS:-FAILED}"
done
</syntaxhighlight>
</syntaxhighlight>


If probes still miss calls after ensuring <code>sipport</code> matches on both sides, check the [[Sniffer_troubleshooting|full troubleshooting guide]] for other potential issues such as network connectivity, firewall rules, or interface misconfiguration.
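The JSON that <code>sniffer_stat</code> returns can be post-processed without extra tooling; a sketch using only grep/cut, where the response body and its field names are illustrative (the real reply shape may differ between sniffer versions):

```shell
# Pull simple "key":value pairs out of a flat JSON response.
# RESPONSE is a made-up example of a sniffer_stat reply.
RESPONSE='{"status":"running","version":"2024.11.0","calls":42}'

get_field() {
    # get_field <json> <key> - works only for flat, unnested JSON
    echo "$1" | grep -o "\"$2\":\"\?[^,\"}]*" | cut -d: -f2- | tr -d '"'
}

STATUS=$(get_field "$RESPONSE" status)
CALLS=$(get_field "$RESPONSE" calls)
echo "status=$STATUS calls=$CALLS"
```

For anything beyond flat key/value extraction, a real JSON parser (e.g. <code>jq</code>, if installed) is the better tool.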
= Version Compatibility =


{| class="wikitable"
|-
! Scenario !! Compatibility !! Notes
|-
| '''GUI ≥ Sniffer''' || ✅ Compatible || Recommended
|-
| '''GUI < Sniffer''' || ⚠️ Risk || Sensor may write to non-existent columns
|}

'''Best practice:''' Upgrade the GUI first (it applies schema changes), then upgrade the sensors.

If mixed versions must run temporarily, add this to the central server:
<syntaxhighlight lang="ini">
server_cp_store_simple_connect_response = yes  # Sniffer 2024.11.0+
</syntaxhighlight>
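The "GUI at least as new as the sniffer" rule can be checked mechanically; a sketch using GNU <code>sort -V</code>, assuming both components report simple dotted version strings (the version values below are illustrative):

```shell
# Compare two dotted version strings in natural version order.
version_ge() {
    # True if $1 >= $2 (relies on GNU sort -V)
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

GUI_VER="2024.12.1"      # illustrative values
SNIFFER_VER="2024.11.0"

if version_ge "$GUI_VER" "$SNIFFER_VER"; then
    echo "OK: GUI >= sniffer"
else
    echo "WARNING: sniffer is newer than GUI - upgrade the GUI first"
fi
```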


= Troubleshooting =

== Quick Diagnosis ==

{| class="wikitable"
|-
! Symptom !! First Check !! Likely Cause
|-
| Sensor not connecting || <code>journalctl -u voipmonitor -f</code> on the sensor || Check <code>server_destination</code>, password, firewall
|-
| Traffic rate <code>[0.0Mb/s]</code> || tcpdump on the sensor interface || Network/SPAN issue, not communication
|-
| High memory on central server || Check if <code>sipport</code> includes 60024 || Exclude the server port from sipport
|-
| Missing calls || Compare <code>sipport</code> on probe vs central || Must match on both sides
|-
| "Bad password" error || GUI → Settings → Sensors || Delete the stale sensor record, restart the sensor
|-
| "Connection refused (111)" after migration || Check <code>server_destination</code> in config || Points to the old server IP
|-
| RTP streams end prematurely || Check <code>natalias</code> location || Configure only on the central server
|-
| Time sync errors || <code>timedatectl status</code> || Fix NTP or increase the tolerance
|}

== Connection Testing ==

<syntaxhighlight lang="bash">
# Test connectivity from sensor to server
nc -zv <server_ip> 60024

# Verify the server is listening
ss -tulpn | grep voipmonitor

# Check sensor logs
journalctl -u voipmonitor -n 100 | grep -i "connect"
</syntaxhighlight>


== Time Synchronization Errors ==

If you see "different time between server and client" errors:

'''Immediate workaround:''' Increase the tolerance on both sides:
<syntaxhighlight lang="ini">
client_server_connect_maximum_time_diff_s = 30
receive_packetbuffer_maximum_time_diff_s  = 30
</syntaxhighlight>

'''Root cause fix:''' Ensure NTP is working:
<syntaxhighlight lang="bash">
timedatectl status           # Check sync status
chronyc tracking             # Check offset (Chrony)
ntpq -p                      # Check offset (NTP)
</syntaxhighlight>
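The tolerance check itself is simple arithmetic; a rough illustration comparing two epoch timestamps against a configured maximum difference (the timestamp values are made up):

```shell
# Compare two epoch timestamps against a skew tolerance in seconds,
# e.g. the value of client_server_connect_maximum_time_diff_s.
SERVER_TS=1700000000   # illustrative
CLIENT_TS=1700000025   # illustrative
TOLERANCE=30

DIFF=$(( SERVER_TS - CLIENT_TS ))
[ "$DIFF" -lt 0 ] && DIFF=$(( -DIFF ))   # absolute value

if [ "$DIFF" -le "$TOLERANCE" ]; then
    echo "within tolerance (${DIFF}s)"
else
    echo "clock skew too large (${DIFF}s)"
fi
```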


== Network Throughput Testing ==

If you see "packetbuffer: MEMORY IS FULL" errors, test the network with iperf3:

<syntaxhighlight lang="bash">
# On the central server
iperf3 -s

# On the probe
iperf3 -c <server_ip>
</syntaxhighlight>


{| class="wikitable"
|-
! Result !! Interpretation !! Action
|-
| Expected bandwidth (>900 Mbps on a 1 Gb link) || Network OK || Check local CPU/RAM
|-
| Low throughput || Network bottleneck || Check switches and cabling; consider Local Processing mode
|}


== Debugging SIP Traffic ==

<code>sngrep</code> does not work on the central server because traffic arrives encapsulated in the TCP tunnel.

'''Options:'''
* '''Live Sniffer:''' Use GUI → Live Sniffer to view SIP from remote sensors
* '''sngrep on sensor:''' Run <code>sngrep -i eth0</code> directly on the remote sensor

== Stale Sensor Records ==

If a new sensor fails with "bad password" despite correct credentials:

# Delete the sensor record from '''GUI → Settings → Sensors'''
# Restart voipmonitor on the sensor: <code>systemctl restart voipmonitor</code>
# The sensor will re-register automatically


= Legacy: Mirror Mode =

The older <code>mirror_destination</code>/<code>mirror_bind</code> options still work, but Client-Server mode is preferred (encryption, simpler management).

To migrate from mirror mode:
# Stop the sensors and comment out the <code>mirror_*</code> parameters
# Configure <code>server_bind</code> on the central server and <code>server_destination</code> on the sensors
# Restart all services

For mirror mode <code>id_sensor</code> attribution, use:
<syntaxhighlight lang="ini">
# On central receiver
mirror_bind_sensor_id_by_sender = yes
</syntaxhighlight>

= See Also =

* [[Sniffing_modes|Deployment & Topology Guide]] - Traffic forwarding methods
* [[Sniffer_configuration|Sniffer Configuration]] - All parameters reference
* [[Merging_or_correlating_multiple_call_legs|Call Correlation]] - Multi-leg call handling
* [[FAQ#One_GUI_for_multiple_sniffers|FAQ: One GUI for Multiple Sniffers]]

== Filtering Options in Packet Mirroring Mode ==

{{Note|1='''Important distinction:''' In Packet Mirroring mode (<code>packetbuffer_sender=yes</code>):


* '''Capture rules (GUI-based):''' Applied ONLY on the central server
* '''BPF filters / IP filters:''' CAN be applied on the remote sensor to reduce bandwidth

Use the following options on the '''remote sensor''' to filter traffic BEFORE sending to the central server:

<syntaxhighlight lang="ini">
# On REMOTE SENSOR (client)

# Option 1: BPF filter (tcpdump syntax) - most flexible
filter = not net 192.168.0.0/16 and not net 10.0.0.0/8

# Option 2: IP allow-list filter - CPU-efficient, no negation support
interface_ip_filter = 192.168.1.0/24
interface_ip_filter = 10.0.0.0/8
</syntaxhighlight>

<b>Benefits of filtering on the remote sensor:</b>
* Reduces WAN bandwidth usage between the sensor and the central server
* Reduces processing load on the central server
* Use <code>filter</code> for complex conditions (tcpdump/BPF syntax)
* Use <code>interface_ip_filter</code> for simple IP allow-lists (more efficient)

<b>Filtering approaches:</b>
* For <b>SIP header-based filtering</b>: Apply capture rules on the '''central server''' only
* For <b>IP/subnet filtering</b>: Use <code>filter</code> or <code>interface_ip_filter</code> on the '''remote sensor'''}}

== Supported Configuration Options in Packet Mirroring Mode ==

In Packet Mirroring mode (<code>packetbuffer_sender = yes</code>), the remote sensor forwards raw packets without processing them. This means many configuration options that manipulate packet behavior are '''unsupported''' on the remote sensor.
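Values intended for <code>interface_ip_filter</code> can be sanity-checked offline; a sketch in pure shell arithmetic (no dependency on voipmonitor, addresses are illustrative) that tests whether an IPv4 address falls inside a CIDR block:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1            # split on '.' into 4 positional params
    IFS=$old_ifs
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_cidr <ip> <network/prefix> - true if the address is in the block
in_cidr() {
    net=${2%/*}; bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 192.168.1.55 192.168.1.0/24 && echo "192.168.1.55 matches 192.168.1.0/24"
```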


== Supported Options on Remote Sensor (packetbuffer_sender) ==

The following options work correctly on the remote sensor in packet mirroring mode:


{| class="wikitable"
|-
! Parameter !! Description
|-
| <code>id_sensor</code> || Unique sensor identifier
|-
| <code>server_destination</code> || Central server address
|-
| <code>server_destination_port</code> || Central server port (default 60024)
|-
| <code>server_password</code> || Authentication password
|-
| <code>server_destination_timeout</code> || Connection timeout settings
|-
| <code>server_destination_reconnect</code> || Auto-reconnect behavior
|-
| <code>filter</code> || BPF filter to limit capture (use this to capture only SIP)
|-
| <code>interface_ip_filter</code> || IP-based packet filtering
|-
| <code>interface</code> || Capture interface
|-
| <code>sipport</code> || SIP ports to monitor
|-
| <code>promisc</code> || Promiscuous mode
|-
| <code>rrd</code> || RRD statistics
|-
| <code>spooldir</code> || Temporary packet buffer directory
|-
| <code>ringbuffer</code> || Ring buffer size for packet mirroring
|-
| <code>max_buffer_mem</code> || Maximum buffer memory
|-
| <code>packetbuffer_enable</code> || Enable packet buffering
|-
| <code>packetbuffer_compress</code> || Enable compression for forwarded packets
|-
| <code>packetbuffer_compress_ratio</code> || Compression ratio
|}


== Unsupported Options on Remote Sensor ==

The following options '''do NOT work''' on the remote sensor in packet mirroring mode because the sensor does not parse packets:

{| class="wikitable"
|-
! Parameter !! Reason
|-
| <code>natalias</code> || NAT alias handling (configure on the central server instead)
|-
| <code>rtp_check_both_sides_by_sdp</code> || RTP correlation requires packet parsing
|-
| <code>disable_process_sdp</code> || SDP processing happens on the central server
|-
| <code>save_sdp_ipport</code> || SDP extraction happens on the central server
|-
| <code>rtpfromsdp_onlysip</code> || RTP mapping requires packet parsing
|-
| <code>rtpip_find_endpoints</code> || Endpoint discovery requires packet parsing
|}
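A quick pre-deployment check can flag options a mirroring sensor will ignore; a sketch that greps a sensor config for the unsupported parameters (the sample config written to a temp file is invented):

```shell
# Flag options in a mirroring sensor's config that the sensor ignores.
UNSUPPORTED='natalias|rtp_check_both_sides_by_sdp|disable_process_sdp|save_sdp_ipport|rtpfromsdp_onlysip|rtpip_find_endpoints'

# Sample config for demonstration; point CONF at the real file instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
packetbuffer_sender = yes
natalias = 10.0.0.1 1.2.3.4
interface = eth0
EOF

MATCHES=$(grep -cE "^($UNSUPPORTED)" "$CONF")
if [ "$MATCHES" -gt 0 ]; then
    echo "warning: $MATCHES unsupported option(s) set on a mirroring sensor"
fi
rm -f "$CONF"
```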


{{Warning|1='''Critical: Storage options''' (<code>savesip</code>, <code>savertp</code>, <code>saveaudio</code>) '''must be configured on the CENTRAL SERVER''' in packet mirroring mode. The remote sensor only forwards packets and does not perform any storage operations.}}

== SIP-Only Capture Example ==

To capture and forward only SIP packets (excluding RTP/RTCP) for security or compliance:

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf - Remote Sensor
id_sensor               = 2
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_strong_password
packetbuffer_sender     = yes
interface               = eth0
sipport                 = 5060,5061

# Filter to capture ONLY SIP packets (exclude RTP/RTCP)
filter = port 5060 or port 5061
</syntaxhighlight>

{{Note|1=The <code>filter</code> parameter uses BPF syntax (tcpdump-compatible) and is the recommended way to filter packets at the source in packet mirroring mode. It reduces bandwidth by forwarding only SIP packets to the central server.}}
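When several SIP ports are in play, the matching BPF filter line can be generated rather than typed by hand; a small helper sketch (the <code>SIPPORTS</code> value is illustrative - substitute your own <code>sipport</code> list):

```shell
# Build the BPF 'filter =' line from a comma-separated port list.
SIPPORTS="5060,5061,5080"

# Turn "a,b,c" into "port a or port b or port c"
FILTER=$(echo "$SIPPORTS" | sed 's/,/ or port /g; s/^/port /')
echo "filter = $FILTER"
```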
 
== Limitations ==

* All sensors must use the same <code>server_password</code> at each connection level (sensors→intermediate and intermediate→central)
* A single sniffer cannot do local packet capture AND act as both server and client simultaneously; an intermediate server works only because it does not capture from its own interface and acts purely as a relay
* Each sensor requires a unique <code>id_sensor</code> (< 65536)
* Time synchronization (NTP) is critical for correlating calls across sensors
* The maximum allowed time difference between client and server is 2 seconds (configurable via <code>client_server_connect_maximum_time_diff_s</code>)

= AI Summary for RAG =

'''Summary:''' VoIPmonitor v20+ Client-Server architecture for distributed deployments using encrypted TCP (default port 60024, zstd compression). Two modes: '''Local Processing''' (<code>packetbuffer_sender=no</code>) analyzes locally and sends CDRs only (1Gb sufficient); '''Packet Mirroring''' (<code>packetbuffer_sender=yes</code>) forwards raw packets to central server. Critical requirements: (1) exclude server_bind_port from sipport on central server (prevents memory issues); (2) sipport must match on probe and central server; (3) single sniffer must process both SIP and RTP for same call; (4) natalias only on central server. Intermediate servers supported for hub-and-spoke topology. Use <code>manager_ip</code> to bind outgoing connections to specific IP on HA setups. Sensor health via management API port 5029: <code>echo 'sniffer_stat' | nc <ip> 5029</code>. Debug SIP using Live Sniffer in GUI or sngrep on remote sensor. Stale sensor records cause "bad password" errors - delete from GUI Settings → Sensors and restart. Time sync errors: fix NTP or increase <code>client_server_connect_maximum_time_diff_s</code>.

'''Keywords:''' distributed architecture, client-server, packetbuffer_sender, local processing, packet mirroring, server_destination, server_bind, sipport exclusion, AWS VPC Traffic Mirroring alternative, intermediate server, sensor health, sniffer_stat, Live Sniffer, natalias, version compatibility, time synchronization, NTP, stale sensor record, mirror mode migration, manager_ip, high availability
'''Keywords:''' distributed architecture, client-server, network bandwidth, throughput, network requirements, 1Gb connection, bandwidth requirements, server_destination, server_bind, server_bind_port, server_destination_port, custom port, packetbuffer_sender, local processing, packet mirroring, remote sensors, failover, encrypted channel, zstd compression, dashboard widgets, statistics, empty dashboard, SIP RTP correlation, split sensors, single sniffer requirement, availability zone, savertp, savesip, saveaudio, centralized storage, packet storage control, call-id merging, multiple sensors same callid, separate records per sensor, receiver instances, mysqltableprefix, firewall, port configuration, connection troubleshooting, probe, central host, central server, sensor, sipport, missing calls, probe not detecting calls, tcpdump, configuration mismatch, mirror mode, migration, mirror_destination, mirror_bind, mirror_bind_ip, mirror_bind_port, mirror_destination_ip, mirror_destination_port, migrate from mirror mode, all CDRs same sensor, system update, upgrade, intermediate server, relay server, multi-sensor aggregation, hub and spoke, chained topology, sensor forwarding, mysql, mariadb, database service, web gui accessible, error logs, sensor health check, management API, sniffer_stat, TCP port 5029, manager_bind, nc netcat, sensor status, sensor monitoring, health status, exit code, consolidated result, check all sensors, encrypted API, encryption disabled
'''Key Questions:'''
* How do I connect multiple VoIPmonitor sensors to a central server?
* What is the expected network throughput between remote sensors and the central GUI/Database server?
* What is the difference between Local Processing and Packet Mirroring mode?
* Is a 1Gb network connection sufficient for remote sensors in VoIPmonitor distributed deployment?
* Why is VoIPmonitor using high memory on the central server?
* What network bandwidth is required for Local Processing mode vs Packet Mirroring mode?
* Why is a remote probe not detecting all calls on expected ports?
* How do I check VoIPmonitor sensor health status?
* Where are CDRs and PCAP files stored in distributed mode?
* Why does a new sensor fail with "bad password" error?
* What is packetbuffer_sender and when should I use it?
* How do I configure failover for remote sensors?
* Why are dashboard widgets (SIP/RTP/REGISTER counts) empty for a sensor configured to forward packets?
* How do I enable local statistics on a forwarding sensor?
* Can a VoIPmonitor instance act as an intermediate server receiving from multiple sensors and forwarding to a central server?
* How does packetbuffer_sender control traffic forwarding on an intermediate server?
* Can a VoIPmonitor sniffer be both a server (listening for sensors) and a client (sending to central server)?
* What does "a single sniffer cannot be both server and client" mean, and what are the exceptions?
* How do I configure an intermediate server in a hub-and-spoke topology?
* Do I need to manually add remote sensors to the GUI when using an intermediate server?
* How does an intermediate server handle traffic from multiple remote sensors in Packet Mirroring mode?
* How does an intermediate server handle traffic from multiple remote sensors in Local Processing mode?
* Can VoIPmonitor reconstruct a call if SIP signaling is captured by one sniffer and RTP media by another?
* Why does receiver_check_id_sensor not allow merging SIP from one sensor with RTP from another?
* How do I control packet storage when sensors send raw packets to a central server?
* What happens when multiple sensors see the same Call-ID?
* How do I keep records separate when multiple sensors see the same Call-ID?
* How do I configure a custom port for client-server connections?
* What do I do if probes cannot connect to the VoIPmonitor server?
* Why is my remote sensor showing connection refused or timeout?
* Do I need to configure sipport on both the probe and central server in distributed setups?
* What happens if sipport configuration doesn't match between probe and central host?
* How do I migrate from mirror mode to client-server mode?
* Why are all CDRs incorrectly associated with a single sensor after a system update?
* What causes time synchronization errors between client and server?
* What are the differences between mirror mode and client-server mode?
* Where should natalias be configured in distributed deployments?
* How do I configure mirror_destination and server_destination?
* Can VoIPmonitor act as an intermediate server?
* Why are sensors unable to connect to the VoIPMonitor primary server while the web portal remains accessible?
* What is an alternative to AWS VPC Traffic Mirroring?
* What should I check if the web GUI works but sensors cannot connect to the central server?
* How do I verify MySQL or MariaDB database service is running on the primary server?
* Where are MySQL error logs located?
* How do I check the health status of a VoIPmonitor sensor?
* What is the command to query sensor status via the management API?
* How do I use sniffer_stat to check sensor health?
* Is there a single command to check all sensors at once?
* How do I check the status of multiple sensors and get a consolidated exit code?
* What is the default management API port for VoIPmonitor sensors?
* Why can I not connect to the sensor management API on TCP port 5029?
* How do I check if sensor management API is encrypted?
* How do I check the health of remote sensors in a distributed deployment?

Latest revision as of 20:48, 19 January 2026


This guide covers deploying multiple VoIPmonitor sensors in a distributed architecture using Client-Server mode (v20+).

For deployment options including on-host vs dedicated sensors and traffic forwarding methods (SPAN, GRE, TZSP, VXLAN), see VoIPmonitor Deployment & Topology Guide.

Overview

VoIPmonitor v20+ uses Client-Server architecture for distributed deployments. Remote sensors connect to a central server via encrypted TCP (default port 60024, zstd compression).

Mode             | packetbuffer_sender | What is Sent | Processing Location | Use Case
Local Processing | no (default)        | CDRs only    | Remote sensor       | Multi-site, low bandwidth
Packet Mirroring | yes                 | Raw packets  | Central server      | Centralized analysis, low-resource remotes

Use Cases

AWS VPC Traffic Mirroring Alternative: If experiencing packet loss with AWS VPC Traffic Mirroring (VXLAN overhead, MTU fragmentation), use client-server mode instead:

  • Install VoIPmonitor on each source EC2 instance
  • Send via encrypted TCP to central server
  • Eliminates VXLAN encapsulation and MTU issues

Configuration

Remote Sensor (Client)

id_sensor               = 2                    # Unique per sensor (1-65535)
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_strong_password

# Choose mode:
packetbuffer_sender     = no     # Local Processing: analyze locally, send CDRs
# packetbuffer_sender   = yes    # Packet Mirroring: send raw packets

interface               = eth0
sipport                 = 5060
# No MySQL credentials needed on remote sensors

💡 Tip: For HA setups with floating IPs, use manager_ip = 10.0.0.5 to bind outgoing connections to a static IP address.

Central Server

server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = your_strong_password

mysqlhost               = localhost
mysqldb                 = voipmonitor
mysqluser               = voipmonitor
mysqlpassword           = db_password

# If receiving raw packets (packetbuffer_sender=yes on clients):
sipport                 = 5060
savertp                 = yes
savesip                 = yes

⚠️ Warning: Critical: Exclude server_bind_port from sipport on the central server. Including it causes continuously increasing memory usage.

# WRONG - includes sensor communication port:
sipport = 1-65535

# CORRECT - excludes port 60024:
sipport = 1-60023,60025-65535
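The excluded range is simple arithmetic on the server port. The helper below is not a VoIPmonitor tool — just a hypothetical shell sketch for generating the correct `sipport` value for any `server_bind_port`:

```shell
# Hypothetical helper: build a sipport range that excludes one port.
# Pure shell arithmetic; the output is pasted into voipmonitor.conf.
exclude_port() {
  local p=$1
  printf '1-%d,%d-65535\n' "$((p - 1))" "$((p + 1))"
}

exclude_port 60024   # -> 1-60023,60025-65535
```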

Key Configuration Rules

Rule                                                | Applies To       | Why
server_bind_port must match server_destination_port | Both             | Connection fails if mismatched
sipport must match on probe and central server      | Packet Mirroring | Missing ports = missing calls
natalias only on central server                     | Packet Mirroring | Prevents RTP correlation issues
Each sensor needs unique id_sensor                  | All              | Required for identification
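A quick way to catch a probe/central sipport mismatch before it costs you calls is to compare the directive in both files. The file names below are illustrative stand-ins for the two voipmonitor.conf copies:

```shell
# Sketch: compare the sipport directive across two config files.
# The files are created here only for demonstration; point the
# variables at the real probe and central server configs instead.
printf 'sipport = 5060,5061\n' > probe.conf
printf 'sipport = 5060\n'      > central.conf

get_sipport() { sed -n 's/^sipport[[:space:]]*=[[:space:]]*//p' "$1"; }

P=$(get_sipport probe.conf)
C=$(get_sipport central.conf)
if [ "$P" = "$C" ]; then
  echo "sipport matches: $P"
else
  echo "MISMATCH: probe=$P central=$C"
fi
```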

Local Processing vs Packet Mirroring

                      | Local Processing                | Packet Mirroring
packetbuffer_sender   | no (default)                    | yes
Processing location   | Remote sensor                   | Central server
PCAP storage          | Remote sensor                   | Central server
WAN bandwidth         | Low (CDRs only, 1Gb sufficient) | High (full packets)
Remote CPU load       | Higher                          | Minimal
Capture rules applied | On sensor                       | On central server only

PCAP Access in Local Processing Mode

PCAPs are stored on remote sensors. The GUI retrieves them through the central server, which proxies the request to the sensor over the existing TCP/60024 connection - the same persistent encrypted channel the sensor uses for sending CDRs. This connection is bidirectional; the central server does not open any separate connection back to the sensor.

Firewall requirements:

Direction                       | Port      | Purpose
Remote sensors → Central server | TCP/60024 | Persistent encrypted channel (CDRs from sensor, PCAP requests from server - bidirectional)
GUI → Central server            | TCP/5029  | Manager API (sensor status, active calls, configuration)
GUI → Central server            | TCP/60024 | Server API (list connected sensors, proxy PCAP retrieval)

ℹ️ Note: The central server does not initiate connections to remote sensors. All server↔sensor communication happens over the single TCP/60024 connection that the sensor established.

💡 Tip: Packet Mirroring (packetbuffer_sender=yes) automatically deduplicates calls - the central server merges packets from all probes for the same Call-ID into a single unified CDR. This also ensures one logical call only consumes one license channel.

Advanced Topics

High Availability (Failover)

Remote sensors can specify multiple central servers:

server_destination = 192.168.0.1, 192.168.0.2

If primary is unavailable, the sensor automatically connects to the next server.
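The list is ordered: the sensor works through it top to bottom. The sketch below only demonstrates how the comma-separated value breaks down into an ordered priority list (the actual failover logic lives inside voipmonitor):

```shell
# Parse the ordered failover list from a server_destination value.
DESTS="192.168.0.1, 192.168.0.2"

IFS=',' read -r -a SERVERS <<< "$DESTS"
for i in "${!SERVERS[@]}"; do
  s="${SERVERS[$i]# }"            # trim the space that follows each comma
  echo "priority $((i + 1)): $s"
done
```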

Connection Compression

# On both client and server (default: zstd)
server_type_compress = zstd   # Options: zstd, gzip, lzo, none

Intermediate Server (Hub-and-Spoke)

An intermediate server can receive from multiple sensors and forward to a central server:

# On INTERMEDIATE SERVER
id_sensor               = 100

# Receive from remote sensors
server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = sensor_password

# Forward to central server
server_destination      = central.server.ip
server_destination_port = 60024

packetbuffer_sender     = no    # or yes, depending on desired mode

ℹ️ Note: This works because the intermediate server does NOT do local packet capture - it only relays. Original remote sensors must be manually added to GUI Settings for visibility.

Multiple Receivers for Packet Mirroring

⚠️ Warning: Multiple sensors with packetbuffer_sender=yes sending to a single receiver instance can cause call processing conflicts (calls appear in Active Calls but missing from CDRs).

Solution: Run separate receiver instances on different hosts, each dedicated to specific sensors:

# Receiver Instance 1 (Host 1, for Sensor A)
server_bind_port        = 60024
id_sensor               = 1

# Receiver Instance 2 (Host 2, for Sensor B)
server_bind_port        = 60024
id_sensor               = 2

Alternative: Use Local Processing mode (packetbuffer_sender=no) which processes calls independently on each sensor.

Preventing Duplicate CDRs (Local Processing)

When multiple probes capture the same call in Local Processing mode:

# On each probe
cdr_check_exists_callid = yes

This checks for existing CDRs before inserting. Requires MySQL UPDATE privileges.

Critical: SIP and RTP Must Be Captured Together

VoIPmonitor cannot correlate SIP and RTP from different sniffer instances. A single sniffer must process both SIP and RTP for each call. Parameters like cdr_check_exists_callid do NOT enable split SIP/RTP correlation.


Split SIP/RTP with Packet Mirroring Mode

ℹ️ Note: Exception for Packet Mirroring mode: The above limitation applies to Local Processing mode (packetbuffer_sender=no), where each sensor processes calls independently. In Packet Mirroring mode (packetbuffer_sender=yes), the central server receives raw packets from multiple remote sensors and processes them together. This allows scenarios where SIP and RTP are captured on separate nodes - configure both as packet senders and let the central server correlate them into single unified CDRs.

Example scenario: Separate SIP signaling node and RTP handling node:

# SIP Signaling Node (packet sender)
id_sensor               = 1
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password

# RTP Handling Node (packet sender)
id_sensor               = 2
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password

The central server merges packets from both senders by Call-ID, creating unified CDRs with complete SIP and RTP data.


HEP Protocol in Client/Server Mode

VoIPmonitor supports receiving HEP-encapsulated traffic on sniffer clients and forwarding it to a central server. This enables distributed capture from HEP sources (Kamailio, OpenSIPS, rtpproxy, FreeSWITCH) in a client/server architecture.

Scenario: SIP proxy and RTP proxy at different locations sending HEP to remote sniffer clients:

# Remote Sniffer Client A (receives HEP from Kamailio)
id_sensor               = 1
hep                     = yes
hep_bind_port           = 9060
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password

# Remote Sniffer Client B (receives HEP from rtpproxy)
id_sensor               = 2
hep                     = yes
hep_bind_port           = 9060
packetbuffer_sender     = yes
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_password

The central server receives packets from both clients and correlates them into unified CDRs using standard SIP Call-ID and IP:port from SDP.

ℹ️ Note: This also works for IPFIX (Oracle SBCs) and RibbonSBC protocols forwarded via client/server mode.

Alternative: Direct HEP to single sniffer

If both HEP sources can reach the same sniffer directly, no client/server setup is needed:

# Single sniffer receiving HEP from multiple sources
hep                     = yes
hep_bind_port           = 9060
interface               = eth0   # Can also sniff locally if needed

Both Kamailio (SIP) and rtpproxy (RTP) send HEP to this sniffer on port 9060. The sniffer correlates them automatically based on Call-ID and SDP IP:port.

Sensor Health Monitoring

Management API

Query sensor status via TCP port 5029:

echo 'sniffer_stat' | nc <sensor_ip> 5029

Returns JSON with status, version, active calls, packets per second, etc.
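The `status` field can be extracted from a captured response with grep/cut, the same technique the multi-sensor script below uses. The JSON sample here is illustrative only; the real field set varies by sniffer version:

```shell
# Extract the "status" field from a sniffer_stat JSON response.
# RESPONSE is an illustrative sample, not a verbatim sniffer reply.
RESPONSE='{"version":"2024.11.0","status":"running","calls_count":12}'

STATUS=$(printf '%s' "$RESPONSE" | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
echo "$STATUS"   # -> running
```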

Multi-Sensor Health Check Script

#!/bin/bash
# Query each sensor's management API; exit non-zero if any sensor fails,
# so the script can be used directly in monitoring checks.
SENSORS=("192.168.1.10:5029" "192.168.1.11:5029")
FAIL=0
for SENSOR in "${SENSORS[@]}"; do
    IP=${SENSOR%%:*}
    PORT=${SENSOR##*:}
    STATUS=$(echo 'sniffer_stat' | nc -w 2 "$IP" "$PORT" 2>/dev/null | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
    echo "$IP: ${STATUS:-FAILED}"
    [ -z "$STATUS" ] && FAIL=1
done
exit $FAIL

Version Compatibility

Scenario      | Compatibility | Notes
GUI ≥ Sniffer | ✅ Compatible | Recommended
GUI < Sniffer | ⚠️ Risk      | Sensor may write to non-existent columns

Best practice: Upgrade GUI first (applies schema changes), then upgrade sensors.
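When auditing a fleet, the "GUI ≥ sniffer" rule can be checked mechanically with version sort. The version strings below are made-up examples:

```shell
# Check the "GUI version >= sniffer version" rule with sort -V.
GUI_VER="2024.11.0"
SNIFFER_VER="2024.8.2"

# sort -V orders version strings numerically; the first line is the older one.
OLDER=$(printf '%s\n%s\n' "$GUI_VER" "$SNIFFER_VER" | sort -V | head -n 1)
if [ "$OLDER" = "$SNIFFER_VER" ]; then
  echo "OK: GUI ($GUI_VER) is same or newer than sniffer ($SNIFFER_VER)"
else
  echo "RISK: sniffer ($SNIFFER_VER) is newer than GUI ($GUI_VER)"
fi
```

Equal versions also take the OK branch, since `head -n 1` then returns the shared value.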

For mixed versions temporarily, add to central server:

server_cp_store_simple_connect_response = yes   # Sniffer 2024.11.0+

Troubleshooting

Quick Diagnosis

Symptom                                    | First Check                            | Likely Cause
Sensor not connecting                      | journalctl -u voipmonitor -f on sensor | Check server_destination, password, firewall
Traffic rate [0.0Mb/s]                     | tcpdump on sensor interface            | Network/SPAN issue, not communication
High memory on central server              | Check if sipport includes 60024        | Exclude server port from sipport
Missing calls                              | Compare sipport on probe vs central    | Must match on both sides
"Bad password" error                       | GUI → Settings → Sensors               | Delete stale sensor record, restart sensor
"Connection refused (111)" after migration | Check server_destination in config     | Points to old server IP
RTP streams end prematurely                | Check natalias location                | Configure only on central server
Time sync errors                           | timedatectl status                     | Fix NTP or increase tolerance

Connection Testing

# Test connectivity from sensor to server
nc -zv <server_ip> 60024

# Verify server is listening
ss -tulpn | grep voipmonitor

# Check sensor logs
journalctl -u voipmonitor -n 100 | grep -i "connect"

Time Synchronization Errors

If seeing "different time between server and client" errors:

Immediate workaround: Increase tolerance on both sides:

client_server_connect_maximum_time_diff_s = 30
receive_packetbuffer_maximum_time_diff_s = 30

Root cause fix: Ensure NTP is working:

timedatectl status           # Check sync status
chronyc tracking             # Check offset (Chrony)
ntpq -p                      # Check offset (NTP)

Network Throughput Testing

If experiencing "packetbuffer: MEMORY IS FULL" errors, test network with iperf3:

# On central server
iperf3 -s

# On probe
iperf3 -c <server_ip>

Result                                | Interpretation     | Action
Expected bandwidth (>900 Mbps on 1Gb) | Network OK         | Check local CPU/RAM
Low throughput                        | Network bottleneck | Check switches, cabling, consider Local Processing mode

Debugging SIP Traffic

sngrep does not work on the central server because traffic is encapsulated in the TCP tunnel.

Options:

  • Live Sniffer: Use GUI → Live Sniffer to view SIP from remote sensors
  • sngrep on sensor: Run sngrep -i eth0 directly on the remote sensor

Stale Sensor Records

If a new sensor fails with "bad password" despite correct credentials:

  1. Delete the sensor record from GUI → Settings → Sensors
  2. Restart voipmonitor on the sensor: systemctl restart voipmonitor
  3. The sensor will re-register automatically

Legacy: Mirror Mode

The older mirror_destination/mirror_bind options still work but Client-Server mode is preferred (encryption, simpler management).

To migrate from mirror mode:

  1. Stop sensors, comment out mirror_* parameters
  2. Configure server_bind on central, server_destination on sensors
  3. Restart all services
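Step 1 (commenting out the mirror_* parameters) can be scripted. The sketch below operates on a throwaway file so it is safe to run as-is; adapt the path to /etc/voipmonitor.conf (and back it up first) before using it for real:

```shell
# Comment out every mirror_* parameter in a config file.
CONF=voipmonitor.conf.migration-test     # illustrative throwaway path
printf '%s\n' \
  'mirror_bind_ip = 0.0.0.0' \
  'mirror_bind_port = 5030' \
  'interface = eth0' > "$CONF"

# Prefix mirror_* lines with '#' so they are ignored on restart.
sed -i 's/^[[:space:]]*mirror_/#&/' "$CONF"
cat "$CONF"
```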

For mirror mode id_sensor attribution, use:

# On central receiver
mirror_bind_sensor_id_by_sender = yes


Filtering Options in Packet Mirroring Mode

ℹ️ Note: Important distinction: In Packet Mirroring mode (packetbuffer_sender=yes):

  • Capture rules (GUI-based): Applied ONLY on the central server
  • BPF filters / IP filters: CAN be applied on the remote sensor to reduce bandwidth

Use the following options on the remote sensor to filter traffic BEFORE sending to the central server:

# On REMOTE SENSOR (client)

# Option 1: BPF filter (tcpdump syntax) - most flexible
filter = not net 192.168.0.0/16 and not net 10.0.0.0/8

# Option 2: IP allow-list filter - CPU-efficient, no negation support
interface_ip_filter = 192.168.1.0/24
interface_ip_filter = 10.0.0.0/8

Benefits of filtering on remote sensor:

  • Reduces WAN bandwidth usage between sensor and central server
  • Reduces processing load on central server
  • Use filter for complex conditions (tcpdump/BPF syntax)
  • Use interface_ip_filter for simple IP allow-lists (more efficient)

Filtering approaches:

  • For SIP header-based filtering: Apply capture rules on the central server only
  • For IP/subnet filtering: Use filter or interface_ip_filter on remote sensor

Supported Configuration Options in Packet Mirroring Mode

In Packet Mirroring mode (packetbuffer_sender = yes), the remote sensor forwards raw packets without processing them. This means many configuration options that manipulate packet behavior are unsupported on the remote sensor.

Supported Options on Remote Sensor (packetbuffer_sender)

The following options work correctly on the remote sensor in packet mirroring mode:

Parameter                    | Description
id_sensor                    | Unique sensor identifier
server_destination           | Central server address
server_destination_port      | Central server port (default 60024)
server_password              | Authentication password
server_destination_timeout   | Connection timeout settings
server_destination_reconnect | Auto-reconnect behavior
filter                       | BPF filter to limit capture (use this to capture only SIP)
interface_ip_filter          | IP-based packet filtering
interface                    | Capture interface
sipport                      | SIP ports to monitor
promisc                      | Promiscuous mode
rrd                          | RRD statistics
spooldir                     | Temporary packet buffer directory
ringbuffer                   | Ring buffer size for packet mirroring
max_buffer_mem               | Maximum buffer memory
packetbuffer_enable          | Enable packet buffering
packetbuffer_compress        | Enable compression for forwarded packets
packetbuffer_compress_ratio  | Compression ratio

Unsupported Options on Remote Sensor

The following options do NOT work on the remote sensor in packet mirroring mode because the sensor does not parse packets:

Parameter                   | Reason
natalias                    | NAT alias handling (configure on central server instead)
rtp_check_both_sides_by_sdp | RTP correlation requires packet parsing
disable_process_sdp         | SDP processing happens on central server
save_sdp_ipport             | SDP extraction happens on central server
rtpfromsdp_onlysip          | RTP mapping requires packet parsing
rtpip_find_endpoints        | Endpoint discovery requires packet parsing

⚠️ Warning: Critical: Storage options (savesip, savertp, saveaudio) must be configured on the CENTRAL SERVER in packet mirroring mode. The remote sensor only forwards packets and does not perform any storage operations.

SIP-Only Capture Example

To capture and forward only SIP packets (excluding RTP/RTCP) for security or compliance:

# /etc/voipmonitor.conf - Remote Sensor
id_sensor               = 2
server_destination      = central.server.ip
server_destination_port = 60024
server_password         = your_strong_password
packetbuffer_sender     = yes
interface               = eth0
sipport                 = 5060,5061

# Filter to capture ONLY SIP packets (exclude RTP/RTCP)
filter = port 5060 or port 5061

ℹ️ Note: The filter parameter using BPF syntax (tcpdump-compatible) is the recommended way to filter packets at the source in packet mirroring mode. This reduces bandwidth by forwarding only SIP packets to the central server.





AI Summary for RAG

Summary: VoIPmonitor v20+ Client-Server architecture for distributed deployments using encrypted TCP (default port 60024, zstd compression). Two modes: Local Processing (packetbuffer_sender=no) analyzes locally and sends CDRs only (1Gb sufficient); Packet Mirroring (packetbuffer_sender=yes) forwards raw packets to central server. Critical requirements: (1) exclude server_bind_port from sipport on central server (prevents memory issues); (2) sipport must match on probe and central server; (3) single sniffer must process both SIP and RTP for same call; (4) natalias only on central server. Intermediate servers supported for hub-and-spoke topology. Use manager_ip to bind outgoing connections to specific IP on HA setups. Sensor health via management API port 5029: echo 'sniffer_stat' | nc <ip> 5029. Debug SIP using Live Sniffer in GUI or sngrep on remote sensor. Stale sensor records cause "bad password" errors - delete from GUI Settings → Sensors and restart. Time sync errors: fix NTP or increase client_server_connect_maximum_time_diff_s.

Keywords: distributed architecture, client-server, packetbuffer_sender, local processing, packet mirroring, server_destination, server_bind, sipport exclusion, AWS VPC Traffic Mirroring alternative, intermediate server, sensor health, sniffer_stat, Live Sniffer, natalias, version compatibility, time synchronization, NTP, stale sensor record, mirror mode migration, manager_ip, high availability

Key Questions:

  • How do I connect multiple VoIPmonitor sensors to a central server?
  • What is the difference between Local Processing and Packet Mirroring mode?
  • Why is VoIPmonitor using high memory on the central server?
  • Why is a remote probe not detecting all calls on expected ports?
  • How do I check VoIPmonitor sensor health status?
  • Why does a new sensor fail with "bad password" error?
  • How do I migrate from mirror mode to client-server mode?
  • What causes time synchronization errors between client and server?
  • Where should natalias be configured in distributed deployments?
  • Can VoIPmonitor act as an intermediate server?
  • What is an alternative to AWS VPC Traffic Mirroring?