{{DISPLAYTITLE:Distributed Architecture: Client-Server Mode}}

This guide covers deploying multiple VoIPmonitor sensors in a distributed architecture using Client-Server mode (v20+).
For deployment options including on-host vs dedicated sensors and traffic forwarding methods (SPAN, GRE, TZSP, VXLAN), see [[Sniffing_modes|VoIPmonitor Deployment & Topology Guide]].

= Overview =

VoIPmonitor v20+ uses a '''Client-Server architecture''' for distributed deployments: remote sensors connect to a central server over an encrypted TCP channel (default port 60024, zstd compression).
{| class="wikitable"
|-
! Mode !! <code>packetbuffer_sender</code> !! What is Sent !! Processing Location !! Use Case
|-
| '''Local Processing''' || <code>no</code> (default) || CDRs only || Remote sensor || Multi-site, low bandwidth
|-
| '''Packet Mirroring''' || <code>yes</code> || Raw packets || Central server || Centralized analysis, low-resource remotes
|}
The mode is controlled by a single option: <code>packetbuffer_sender</code>.

== Architecture ==
<kroki lang="plantuml">
@startuml
skinparam shadowing false
skinparam defaultFontName Arial

rectangle "Remote Sensors" as RS
rectangle "Central Server" as CS
database "MySQL" as DB

RS --> CS : TCP/60024 (encrypted)
CS --> DB
@enduml
</kroki>
== Use Cases ==

'''AWS VPC Traffic Mirroring alternative:''' if you experience packet loss with AWS VPC Traffic Mirroring (VXLAN overhead, MTU fragmentation), use Client-Server mode instead:
* Install VoIPmonitor on each source EC2 instance
* Send traffic via encrypted TCP to the central server
* This eliminates VXLAN encapsulation and MTU issues

= Configuration =

== Remote Sensor (Client) ==
<syntaxhighlight lang="ini">
id_sensor = 2                  # Unique per sensor (1-65535)
server_destination = central.server.ip
server_destination_port = 60024
server_password = your_strong_password

# Choose mode:
packetbuffer_sender = no       # Local Processing: analyze locally, send CDRs
# packetbuffer_sender = yes    # Packet Mirroring: send raw packets
</syntaxhighlight>
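Because every sensor needs a unique <code>id_sensor</code>, it can help to audit the values across your fleet before rollout. A minimal sketch, assuming the sensor configs have been collected into one directory (the <code>/tmp</code> paths and sample files here are purely illustrative):

```shell
# Collect sensor configs into one directory, then check id_sensor uniqueness.
# Sample files stand in for real /etc/voipmonitor.conf copies.
mkdir -p /tmp/sensor-confs
printf 'id_sensor = 2\n' > /tmp/sensor-confs/siteA.conf
printf 'id_sensor = 3\n' > /tmp/sensor-confs/siteB.conf

# Extract every id_sensor value and report duplicates
dups=$(grep -h '^id_sensor' /tmp/sensor-confs/*.conf \
  | awk -F'=' '{gsub(/ /,"",$2); print $2}' | sort | uniq -d)

if [ -z "$dups" ]; then
  echo "all id_sensor values unique"
else
  echo "duplicate id_sensor: $dups"
fi
```

An empty duplicate list means the fleet is safe to connect; any value printed must be changed on one of the sensors.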
'''Source IP binding with <code>manager_ip</code>:''' on remote sensors with multiple IP addresses (e.g., High Availability setups with a floating/virtual IP), use the <code>manager_ip</code> parameter to bind the outgoing connection to a specific static IP address. The central server then sees a consistent source IP from each sensor, which prevents connection issues during failover.

<syntaxhighlight lang="ini">
# On a sensor with multiple interfaces (e.g., static IP + floating HA IP)
manager_ip = 10.0.0.5          # Bind to the static IP address
server_destination = 192.168.1.100
# The outgoing connection uses 10.0.0.5 as the source IP instead of the floating IP
</syntaxhighlight>

Useful scenarios:
* HA pairs: sensors use static IPs while the floating IP is reserved for failover management
* Multiple VNICs: explicit source IP selection on systems with several virtual interfaces
* Network ACLs: ensuring connections originate from whitelisted IP addresses

== Central Server ==
<syntaxhighlight lang="ini">
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = your_strong_password

# If receiving raw packets (packetbuffer_sender=yes on clients):
sipport = 5060
savertp = yes
savesip = yes
</syntaxhighlight>
{{Warning|1='''Critical:''' Exclude <code>server_bind_port</code> from <code>sipport</code> on the central server. If the sensor communication port (default 60024) is included in <code>sipport</code>, the sensor-to-server traffic itself is captured as SIP packets, causing high and continuously increasing memory usage - even when call volume is normal, and even after raising <code>max_buffer_mem</code>. This applies to central servers in both Local Processing and Packet Mirroring modes; remote sensors are clients and normally do not need the exclusion.

<syntaxhighlight lang="ini">
# WRONG - includes sensor communication port:
sipport = 1-65535

# CORRECT - excludes port 60024:
sipport = 1-60023,60025-65535
</syntaxhighlight>}}

== Custom Port Configuration ==

'''Critical:''' <code>server_bind_port</code> on the central server must match <code>server_destination_port</code> on each remote sensor. If these ports do not match, sensors cannot connect.

<syntaxhighlight lang="ini">
# Central server (listening on custom port 50291)
server_bind = 0.0.0.0
server_bind_port = 50291          # Custom port (default is 60024)
server_password = your_strong_password
</syntaxhighlight>

<syntaxhighlight lang="ini">
# Remote sensor (must match the server's custom port)
server_destination = 45.249.9.2
server_destination_port = 50291   # MUST match server_bind_port
server_password = your_strong_password
</syntaxhighlight>

Common reasons to use a custom port:
* Firewall restrictions that block the default port 60024
* Running multiple VoIPmonitor instances on the same server (each with a different port)
* Compliance requirements for non-standard ports
* Avoiding port conflicts with other services
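The sipport-exclusion rule above can be verified mechanically. A small sketch, assuming a plain-shell helper (the function name <code>port_excluded</code> is illustrative, not a VoIPmonitor tool):

```shell
# Check that a sipport range string excludes the sensor communication port.
port_excluded() {
  port=$1; ranges=$2
  old_ifs=$IFS; IFS=','
  for r in $ranges; do
    # Each element is either a single port or a lo-hi range
    case $r in
      *-*) lo=${r%-*}; hi=${r#*-} ;;
      *)   lo=$r; hi=$r ;;
    esac
    if [ "$port" -ge "$lo" ] && [ "$port" -le "$hi" ]; then
      IFS=$old_ifs; return 1      # port falls inside a range: NOT excluded
    fi
  done
  IFS=$old_ifs; return 0
}

port_excluded 60024 "1-60023,60025-65535" && echo "60024 excluded: OK"
port_excluded 60024 "1-65535" || echo "60024 included: FIX sipport"
```

Running this against the value of <code>sipport</code> from the central server's config catches the memory-growth misconfiguration before deployment.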
== Key Configuration Rules ==

{| class="wikitable"
|-
! Rule !! Applies To !! Why
|-
| <code>server_bind_port</code> must match <code>server_destination_port</code> || Both modes || Connection fails if mismatched
|-
| <code>sipport</code> must match on probe and central server || Packet Mirroring || Missing ports = missing calls
|-
| <code>natalias</code> only on central server || Packet Mirroring || Prevents RTP correlation issues
|-
| Each sensor needs a unique <code>id_sensor</code> || All || Required for identification
|}

== Troubleshooting Connection Failures ==

'''Critical first step: check the traffic rate indicator.''' Before troubleshooting communication issues, verify that the probe is receiving traffic at all. The traffic rate indicator in the sensor logs shows the current packet capture rate in the format <code>[x.xMb/s]</code> (e.g., <code>[12.5Mb/s]</code> or <code>[0.0Mb/s]</code>). Run <code>journalctl -u voipmonitor -n 100</code> on the probe and look for the indicator in the status logs.

* If it shows <code>[0.0Mb/s]</code>, the problem is NOT communication or authentication; it is network configuration on the probe side. Common causes: incorrect SPAN/mirror port setup on the switch, wrong network interface selected in <code>voipmonitor.conf</code>, or the probe receiving no traffic at all. Fix the network configuration first.
* If it shows a non-zero rate, the probe IS receiving traffic, so the issue is with communication/authentication. Proceed with the steps below.
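The rate indicator can be pulled out of a log line with standard text tools. A minimal sketch - the sample log line below is an assumption for illustration; check the exact format in your own logs:

```shell
# Extract the [x.xMb/s] traffic rate indicator from a status log line.
# The sample line is illustrative; real lines come from journalctl output.
line='voipmonitor[1234]: calls[42] SQLq[C:0] heap[0|0|0] [12.5Mb/s] t0CPU[5.2%]'

rate=$(printf '%s\n' "$line" | grep -o '\[[0-9.]*Mb/s\]' | tr -d '[]')
echo "capture rate: $rate"

if [ "$rate" = "0.0Mb/s" ]; then
  echo "no traffic reaching the probe - check SPAN/interface configuration"
fi
```

Piping <code>journalctl -u voipmonitor -n 100</code> through the same <code>grep -o</code> pattern gives the recent rates in one shot.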
If probes cannot connect to the server and the traffic rate indicator shows non-zero traffic:

1. '''Verify ports match on both sides:'''
<syntaxhighlight lang="bash">
# On the central server - check which port it is listening on
ss -tulpn | grep voipmonitor
# Should show: voipmonitor LISTEN 0.0.0.0:50291
</syntaxhighlight>

2. '''Test connectivity from the remote sensor:'''
<syntaxhighlight lang="bash">
# Test the TCP connection to the server's custom port
nc -zv 45.249.9.2 50291
# Success: "Connection to 45.249.9.2 50291 port [tcp/*] succeeded!"
# Timeout/refused: check the firewall or a misconfigured port
</syntaxhighlight>

3. '''Ensure the firewall allows the custom port:'''
<syntaxhighlight lang="bash">
# Allow inbound TCP on the custom port (example for firewalld)
firewall-cmd --permanent --add-port=50291/tcp
firewall-cmd --reload
</syntaxhighlight>

4. '''Check logs on both sides:'''
<syntaxhighlight lang="bash">
journalctl -u voipmonitor -f
# Look for: "connecting to server", "connection refused", or "timeout"
</syntaxhighlight>

5. '''Verify the MySQL database is accessible (if the web GUI works but sensors cannot connect):'''
If the web portal is accessible but sensors cannot connect, verify that the MySQL/MariaDB service on the primary server is running and responsive. The central VoIPmonitor service requires a working database connection to accept sensor data.
<syntaxhighlight lang="bash">
# Check whether the MySQL service is running
systemctl status mariadb
# or
systemctl status mysqld

# Check for database errors in the MySQL error log (common locations):
tail -50 /var/log/mariadb/mariadb.log
tail -50 /var/log/mysql/error.log
</syntaxhighlight>

If MySQL is down or logging critical errors, the central VoIPmonitor server may be unable to accept sensor connections even though the web interface (PHP) remains accessible. Restart the database service if needed and monitor the logs for recurring errors.

After changing port configuration, restart the service:

<syntaxhighlight lang="bash">
systemctl restart voipmonitor
</syntaxhighlight>
== Checking Sensor Health Status via the Management API ==

Each VoIPmonitor sensor exposes a TCP management API (default port 5029) that can be queried for operational status and health. This is useful for monitoring multiple sensors, especially in distributed deployments.

'''Important notes:'''
* There is no single command to check all sensors simultaneously; each sensor must be queried individually
* The <code>sniffer_stat</code> command returns JSON with sensor status information
* In newer VoIPmonitor versions, the sensor's management API communication may be encrypted
=== Basic Health Check Command ===

To check the status of a single sensor:

<syntaxhighlight lang="bash">
# Query sensor status via the management port
echo 'sniffer_stat' | nc <sensor_ip> <sensor_port>
</syntaxhighlight>

Replace:
* <code><sensor_ip></code> with the IP address of the sensor
* <code><sensor_port></code> with the management port (default: 5029)
=== Example Response ===

The command returns a JSON object with sensor status information:

<pre>
{
  "status": "running",
  "version": "30.3-SVN.123",
  "uptime": 86400,
  "calls_active": 42,
  "calls_total": 12345,
  "packets_per_second": 1250.5,
  "packets_dropped": 0
}
</pre>
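Individual fields of the response can be checked without a full JSON parser. A minimal sketch using a sample response string (the field name matches the example above; a real check would feed in the live <code>sniffer_stat</code> output):

```shell
# Flag a sensor whose sniffer_stat response reports dropped packets.
# The response string is a sample; pipe in the real API output instead.
response='{"status": "running", "calls_active": 42, "packets_dropped": 7}'

dropped=$(printf '%s\n' "$response" \
  | grep -o '"packets_dropped": *[0-9]*' | grep -o '[0-9]*$')

if [ "${dropped:-0}" -gt 0 ]; then
  echo "ALERT: $dropped packets dropped"
else
  echo "no packet loss reported"
fi
```

The same pattern works for <code>calls_active</code> or any other numeric field in the response.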
=== Scripting Multiple Sensors ===

To check multiple sensors and get a consolidated result, query each sensor individually in a loop:

<syntaxhighlight lang="bash">
#!/bin/bash
# Check the health of multiple sensors

SENSORS=("192.168.1.10:5029" "192.168.1.11:5029" "192.168.1.12:5029")
ALL_OK=true

for SENSOR in "${SENSORS[@]}"; do
    IP=$(echo "$SENSOR" | cut -d: -f1)
    PORT=$(echo "$SENSOR" | cut -d: -f2)

    echo -n "Checking $IP:$PORT ... "

    # Query the sensor and extract the "status" field from the JSON response
    STATUS=$(echo 'sniffer_stat' | nc -w 2 "$IP" "$PORT" 2>/dev/null | grep -o '"status": *"[^"]*"' | cut -d'"' -f4)

    if [ "$STATUS" = "running" ]; then
        echo "OK"
    else
        echo "FAILED (status: $STATUS)"
        ALL_OK=false
    fi
done

if [ "$ALL_OK" = true ]; then
    echo "All sensors healthy"
    exit 0
else
    echo "One or more sensors unhealthy"
    exit 1
fi
</syntaxhighlight>
=== Troubleshooting Management API Access ===

If you cannot connect to the sensor management API:

1. '''Verify the management port is listening:'''
<syntaxhighlight lang="bash">
# On the sensor host
netstat -tlnp | grep 5029
# or
ss -tlnp | grep voipmonitor
</syntaxhighlight>

2. '''Check firewall rules:'''
Ensure TCP port 5029 is allowed from the monitoring host to the sensor.

3. '''Test connectivity with netcat:'''
<syntaxhighlight lang="bash">
nc -zv <sensor_ip> 5029
</syntaxhighlight>

4. '''Encrypted communication (newer versions):'''
In newer VoIPmonitor versions, the sensor's API communication may be encrypted. If management API access fails with encryption errors:
* Check the VoIPmonitor documentation for your version
* Encryption may need to be disabled for plain management API access
* Consult support for encrypted CLI tools if available
=== Encryption Considerations ===

If your sensors use an encrypted management API (newer versions):

* The standard netcat command may not work with encrypted connections
* Check whether <code>manager_bind</code> (default port 5029) has encryption enabled
* Encrypted connections may require VoIPmonitor-specific CLI tools
* Refer to your version's documentation or contact support for encrypted API access
== Connection Compression ==

The client-server channel supports compression to reduce bandwidth usage:

<syntaxhighlight lang="ini">
# On both client and server (default: zstd)
server_type_compress = zstd
</syntaxhighlight>

Available options: <code>zstd</code> (default, recommended), <code>gzip</code>, <code>lzo</code>, <code>none</code>
== High Availability (Failover) ==

Remote sensors can specify multiple central server IPs for automatic failover:

<syntaxhighlight lang="ini">
# Remote sensor configuration with failover
server_destination = 192.168.0.1, 192.168.0.2
</syntaxhighlight>

If the primary server becomes unavailable, the sensor automatically connects to the next server in the list.
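The priority order implied by the comma-separated list can be illustrated with plain shell (the sensor performs this failover internally; this sketch only shows the order in which the configured servers are considered):

```shell
# Illustrate the failover order of a comma-separated server_destination list.
dests="192.168.0.1, 192.168.0.2"

# Strip spaces, split on commas, and list candidates in priority order
for s in $(printf '%s' "$dests" | tr -d ' ' | tr ',' ' '); do
  echo "candidate server: $s"
done
```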
= Local Processing vs Packet Mirroring =

{| class="wikitable"
|-
! !! Local Processing !! Packet Mirroring
|-
| '''<code>packetbuffer_sender</code>''' || <code>no</code> (default) || <code>yes</code>
|-
| '''Processing location''' || Remote sensor || Central server
|-
| '''PCAP storage''' || Remote sensor || Central server
|-
| '''WAN bandwidth''' || Low (CDRs only) || High (full packets)
|-
| '''Remote CPU load''' || Higher || Minimal
|-
| '''Capture rules applied''' || On sensor || On central server only
|}

== Network Bandwidth Requirements ==

Bandwidth requirements between remote sensors and the central server depend on the selected mode:

* '''Local Processing (<code>packetbuffer_sender=no</code>):''' PCAP files are stored locally on the sensors; network traffic consists mainly of CDR data (SQL queries). A 1Gb connection between sensors and the central GUI/database server is generally sufficient for most deployments.
* '''Packet Mirroring (<code>packetbuffer_sender=yes</code>):''' the raw packet stream is forwarded to the central server, so bandwidth consumption is roughly equivalent to the VoIP traffic volume itself (minus Ethernet headers, plus compression overhead). Size against your expected VoIP traffic volume, and use <code>server_type_compress=zstd</code> to reduce usage.
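A rough back-of-the-envelope sizing for Packet Mirroring mode can be done in one awk call. The ~87.2 kbit/s per-direction figure for G.711 (including IP/UDP/RTP headers) is an assumption for illustration - substitute the bitrate of your actual codec:

```shell
# Rough WAN bandwidth estimate for Packet Mirroring mode.
# Assumption: ~87.2 kbit/s per RTP direction for G.711 incl. IP/UDP/RTP headers.
CONCURRENT_CALLS=100

awk -v calls="$CONCURRENT_CALLS" 'BEGIN {
  per_call_kbps = 2 * 87.2            # both RTP directions of one call
  printf "~%.1f Mb/s for %d concurrent calls\n", calls * per_call_kbps / 1000, calls
}'
```

This ignores SIP signaling (negligible next to media) and any reduction from zstd compression, so treat the result as an upper-bound estimate.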
For optimal throughput in high-latency environments, see the server concatenation limit configuration in [[Sniffer_configuration#SQL_Concatenation_Throughput_Tuning|Sniffer Configuration: SQL Concatenation Throughput]].

== PCAP Access in Local Processing Mode ==

When using Local Processing, PCAPs are stored on the remote sensors. The GUI retrieves them through the central server, which proxies the request to the sensor '''over the existing TCP/60024 connection''' - the same persistent encrypted channel the sensor uses for sending CDRs. This connection is bidirectional; the central server does not open any separate connection back to the sensor.

'''Firewall requirements:'''
{| class="wikitable"
|-
! Direction !! Port !! Purpose
|-
| Remote sensors → Central server || TCP/60024 || Persistent encrypted channel (CDRs from the sensor, PCAP requests from the server - bidirectional)
|-
| GUI → Central server || TCP/5029 || Manager API (sensor status, active calls, configuration)
|-
| GUI → Central server || TCP/60024 || Server API (list connected sensors, proxy PCAP retrieval)
|}

{{Note|1=The central server does '''not''' initiate connections to remote sensors. All server↔sensor communication happens over the single TCP/60024 connection that the sensor established.}}

{{Tip|1=Packet Mirroring (<code>packetbuffer_sender=yes</code>) '''automatically deduplicates calls''' - the central server merges packets from all probes for the same Call-ID into a single unified CDR. This also ensures one logical call only consumes one license channel.}}

== Dashboard Statistics ==

Dashboard widgets (SIP/RTP/REGISTER counts) depend on where packet processing occurs:

{| class="wikitable"
|-
! Configuration !! Where statistics appear
|-
| '''<code>packetbuffer_sender = yes</code>''' (Packet Mirroring) || Central server only
|-
| '''<code>packetbuffer_sender = no</code>''' (Local Processing) || Both sensor and central server
|}

'''Note:''' empty dashboard widgets for a forwarding sensor in Packet Mirroring mode are expected behavior. The sender only captures and forwards raw packets - it does not create database records or statistics; the central server performs all processing.

= Advanced Topics =
== Enabling Local Statistics on Forwarding Sensors ==

If you need local statistics on a sensor that was previously configured to forward packets:

<syntaxhighlight lang="ini">
# On the forwarding sensor
packetbuffer_sender = no
</syntaxhighlight>

This disables packet forwarding and enables full local processing. Note that CPU and RAM usage on the sensor increases, since it must now perform full SIP/RTP analysis.
== Controlling Packet Storage in Packet Mirroring Mode ==

When using Packet Mirroring (<code>packetbuffer_sender=yes</code>), the central server processes the raw packets received from sensors. The <code>save*</code> options on the '''central server''' control which packets are saved to disk.

<syntaxhighlight lang="ini">
# Central server configuration (receiving raw packets from sensors)
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = your_strong_password

# Database configuration
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password

# Sniffer options needed when receiving raw packets:
sipport = 5060

# CONTROL PACKET STORAGE HERE:
savertp = yes        # Save RTP packets
savesip = yes        # Save SIP packets
saveaudio = wav      # Export audio recordings (optional)
</syntaxhighlight>

'''Key point:''' when sensors send raw packets to a central server, storage is controlled by the <code>savertp</code>, <code>savesip</code>, and <code>saveaudio</code> options configured on the '''central server''', not on the individual sensors. The sensors only forward raw packets - they make no decisions about what to save unless you are using Local Processing mode.

This centralized control allows you to:
* Enable or disable packet types (RTP, SIP, audio) from one location
* Adjust storage settings without touching each sensor
* Apply capture rules from the central server to filter traffic

== Intermediate Server (Hub-and-Spoke) ==

An intermediate server can receive from multiple sensors and forward to a central server:
<kroki lang="plantuml">
@startuml
skinparam shadowing false
skinparam defaultFontName Arial

rectangle "Remote Sensors" as RS
rectangle "Intermediate Server" as INT
rectangle "Central Server" as CS
database "MySQL" as DB

RS --> INT : TCP/60024
INT --> CS : TCP/60024
CS --> DB
@enduml
</kroki>

<syntaxhighlight lang="ini">
# On the INTERMEDIATE SERVER
id_sensor = 100

# Receive from remote sensors
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = sensor_password

# Forward to the central server
server_destination = central.server.ip
server_destination_port = 60024

packetbuffer_sender = no          # or yes, depending on desired mode
</syntaxhighlight>

{{Note|1=This works because the intermediate server does NOT do local packet capture - it only relays. The original remote sensors must be added manually in GUI Settings for visibility.}}

== Data Storage Summary ==

* '''CDRs''': always stored in MySQL on the central server
* '''PCAPs''':
** Local Processing → stored on each remote sensor
** Packet Mirroring → stored on the central server

== Handling the Same Call-ID from Multiple Sensors ==

When a call passes through multiple sensors that see the same SIP Call-ID, VoIPmonitor automatically merges the SIP packets into a single CDR on the central server. This is expected behavior in Packet Mirroring mode.

'''What happens:''' if Sensor A and Sensor B both forward packets for a call with the same Call-ID, the central server creates a single CDR containing SIP packets from both sensors. RTP packets are captured from whichever sensor processed the media.

'''Why:''' VoIPmonitor uses the SIP Call-ID as the primary unique identifier, so packets with the same Call-ID arriving from multiple sensors are automatically treated as one call.

'''Is it a problem?''' Usually not. For most deployments, combining records from multiple sensors for the same call (different call legs passing through different points in the network) is the desired behavior.
=== Preventing Duplicate CDRs in Local Processing Mode ===

In '''Local Processing mode''' (<code>packetbuffer_sender=no</code>), each remote probe processes its own packets and writes CDRs directly to the central database. If multiple probes capture the same call (e.g., redundant taps or overlapping SPAN ports), this creates '''duplicate CDR entries''' in the database.

To prevent duplicates in this scenario, use the <code>cdr_check_exists_callid</code> option on '''all probes''':

{| class="wikitable"
|-
! Setting !! Result
|-
| <code>cdr_check_exists_callid = no</code> (default) || Each probe creates its own CDR row. Multiple probes capturing the same call produce duplicate entries with the same Call-ID but different <code>id_sensor</code> values.
|-
| <code>cdr_check_exists_callid = yes</code> || Probes check for an existing CDR with the same Call-ID before inserting. If found, they update the existing row instead of creating a new one. The final CDR is associated with the <code>id_sensor</code> of the probe that last processed the call.
|}

'''Prerequisites:'''
* The MySQL user must have <code>UPDATE</code> privileges on the <code>cdr</code> table
* All probes must be configured with this setting

<syntaxhighlight lang="ini">
# Add to voipmonitor.conf on each probe (Local Processing mode only)
[general]
cdr_check_exists_callid = yes
</syntaxhighlight>

'''Note:''' this setting is only useful in Local Processing mode. In Packet Mirroring mode (<code>packetbuffer_sender=yes</code>), the central server automatically merges packets with the same Call-ID, so this option is not needed.
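The duplicate pattern this option prevents is easy to spot in an export of existing CDR rows. A sketch using sample data - the CSV file, its path, and the "callid,id_sensor" export format are all illustrative, not a VoIPmonitor output format:

```shell
# Find Call-IDs that were inserted by more than one probe, given a sample
# "callid,id_sensor" export (file and data are illustrative).
printf '%s\n' \
  'abc123@pbx,2' \
  'abc123@pbx,3' \
  'def456@pbx,2' > /tmp/cdr_sample.csv

# Any Call-ID printed here appears in multiple rows, i.e. was duplicated
cut -d, -f1 /tmp/cdr_sample.csv | sort | uniq -d
```

A non-empty result on real data means multiple probes wrote independent rows for the same call, which is exactly the case <code>cdr_check_exists_callid = yes</code> is meant to collapse.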
=== Keeping Records Separate Per Sensor ===

If you need to keep records completely separate when multiple sensors see the same Call-ID (i.e., each sensor should create its own independent CDR even for calls with overlapping Call-IDs), run '''multiple receiver instances on the central server''':

<syntaxhighlight lang="ini">
# Receiver instance 1 (for Sensor A)
[receiver_sensor_a]
server_bind = 0.0.0.0
server_bind_port = 60024
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = <password>
mysqltableprefix = sensor_a_      # Separate CDR tables
id_sensor = 2
# ... other options

# Receiver instance 2 (for Sensor B)
[receiver_sensor_b]
server_bind = 0.0.0.0
server_bind_port = 60025          # Different port
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = <password>
mysqltableprefix = sensor_b_      # Separate CDR tables
id_sensor = 3
# ... other options
</syntaxhighlight>

Each receiver instance runs as a separate process, listens on a different port, and can write to separate database tables (via <code>mysqltableprefix</code>). Configure each sensor to connect to its dedicated receiver port.
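Generating the per-receiver config files can be scripted so the port and table prefix never drift apart. A sketch - the <code>/tmp</code> output paths, instance naming, and the idea of templating are assumptions; adapt the paths and launch method (systemd units, init scripts) to your environment:

```shell
# Generate one config file per receiver instance from a name:port:sensor list.
# Output paths are illustrative; real configs normally live under /etc.
for inst in a:60024:2 b:60025:3; do
  name=${inst%%:*}
  rest=${inst#*:}
  port=${rest%%:*}
  sensor=${rest##*:}

  cat > "/tmp/voipmonitor-receiver-$name.conf" <<EOF
[general]
server_bind = 0.0.0.0
server_bind_port = $port
mysqltableprefix = sensor_${name}_
id_sensor = $sensor
EOF
done

# Confirm each instance got a distinct listening port
grep -H server_bind_port /tmp/voipmonitor-receiver-*.conf
```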
For more details on correlating multiple call legs from the same call, see [[Merging_or_correlating_multiple_call_legs]].

== GUI Visibility ==

Remote sensors appear automatically when connected. To customize names or configure additional settings:
# Go to '''GUI → Settings → Sensors'''
# Sensors are identified by their <code>id_sensor</code> value
== Troubleshooting Distributed Deployments ==

=== Multiple Receivers for Packet Mirroring ===

{{Warning|1=Multiple sensors with <code>packetbuffer_sender=yes</code> sending to a '''single receiver instance''' can cause call processing conflicts (calls appear in Active Calls but are missing from CDRs).}}

'''Solution:''' Run separate receiver instances on different hosts, each dedicated to specific sensors:

<syntaxhighlight lang="ini">
# Receiver Instance 1 (Host 1, for Sensor A)
server_bind_port = 60024
id_sensor = 1

# Receiver Instance 2 (Host 2, for Sensor B)
server_bind_port = 60024
id_sensor = 2
</syntaxhighlight>

Alternative: use '''Local Processing mode''' (<code>packetbuffer_sender=no</code>), which processes calls independently on each sensor.

=== Preventing Duplicate CDRs (Local Processing) ===

When multiple probes capture the same call in Local Processing mode:

<syntaxhighlight lang="ini">
# On each probe
cdr_check_exists_callid = yes
</syntaxhighlight>

This checks for an existing CDR with the same Call-ID before inserting. It requires MySQL UPDATE privileges.

=== Probe Not Detecting All Calls on Expected Ports ===

If a remote sensor (probe) configured for packet mirroring is not detecting all calls on expected ports, check the configuration on '''both''' the probe and the central analysis host.

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Critical: sipport Must Match in Distributed Deployments
|-
| style="vertical-align: top;" | '''The Issue:'''
| In distributed/probe setups using Packet Mirroring (<code>packetbuffer_sender=yes</code>), calls will be missing if the <code>sipport</code> configuration is not aligned between the probe and the central server. Common symptom: the probe sees traffic via <code>tcpdump</code>, but the central server records incomplete CDRs.
|-
| style="vertical-align: top;" | '''Configuration Requirement:'''
| The probe and central host must have consistent <code>sipport</code> values. If your network uses SIP on multiple ports (e.g., 5060, 5061, 5080, 6060), ALL ports must be listed on both systems.
|}

The solution involves four steps:

;1. Verify traffic reachability on the probe:
Use <code>tcpdump</code> on the probe VM to confirm that SIP packets for the missing calls are arriving on the expected ports.
<pre>
# On the probe VM
tcpdump -i eth0 -n port 5060
</pre>

;2. Check the probe's ''voipmonitor.conf'':
Ensure the <code>sipport</code> directive on the probe includes all SIP ports used in your network.
<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the PROBE
sipport = 5060,5061,5080,6060
</syntaxhighlight>

;3. Check the central analysis host's ''voipmonitor.conf'':
'''This is the most common cause of missing calls in distributed setups.''' The central analysis host (specified by <code>server_bind</code> on the central server, or by <code>server_destination</code> configured on the probe) must also have the <code>sipport</code> directive configured with the same list of ports used by all probes.
<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the CENTRAL HOST
sipport = 5060,5061,5080,6060
</syntaxhighlight>

;4. Restart both services:
Apply the configuration changes:
<syntaxhighlight lang="bash">
# On both probe and central host
systemctl restart voipmonitor
</syntaxhighlight>

{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
|-
! colspan="2" style="background:#4A90E2; color: white;" | Why Both Systems Must Match
|-
| style="vertical-align: top;" | '''Probe side:'''
| The probe captures packets from the network interface. Its <code>sipport</code> setting determines which UDP ports it treats as SIP traffic to capture and forward.
|-
| style="vertical-align: top;" | '''Central server side:'''
| When receiving raw packets in Packet Mirroring mode, the central server analyzes the packets locally. Its <code>sipport</code> setting determines which ports it interprets as SIP during analysis. If a port is missing here, packets are captured but not recognized as SIP, resulting in missing CDRs.
|}

=== Quick Diagnosis Commands ===

On the probe:
<syntaxhighlight lang="bash">
# Check which sipport values are configured
grep -E "^sipport" /etc/voipmonitor.conf

# Verify traffic is arriving on the expected ports
tcpdump -i eth0 -nn -c 10 port 5061
</syntaxhighlight>

On the central server:
<syntaxhighlight lang="bash">
# Check which sipport values are configured
grep -E "^sipport" /etc/voipmonitor.conf

# Check syslog for analysis activity (packets should be processed)
tail -f /var/log/syslog | grep voipmonitor
</syntaxhighlight>

If probes still miss calls after ensuring <code>sipport</code> matches on both sides, check the [[Sniffer_troubleshooting|full troubleshooting guide]] for other potential issues such as network connectivity, firewall rules, or interface misconfiguration.
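The <code>grep</code> checks above can be combined into a single comparison. The following sketch normalizes and compares the <code>sipport</code> lists of two configuration files; the file names are placeholders - copy the probe's config next to the central one (e.g., via <code>scp</code>) before running it:

<syntaxhighlight lang="bash">
#!/bin/sh
# Compare the effective "sipport" list of two voipmonitor.conf files.
PROBE_CONF="probe_voipmonitor.conf"        # placeholder path
CENTRAL_CONF="central_voipmonitor.conf"    # placeholder path

extract_sipports() {
    # Last active sipport line wins; strip spaces and sort the ports so
    # "5060,5080" and "5080, 5060" compare equal.
    grep -E '^sipport' "$1" 2>/dev/null | tail -n 1 | cut -d= -f2 \
        | tr -d ' ' | tr ',' '\n' | sort -n | paste -sd, -
}

probe_ports=$(extract_sipports "$PROBE_CONF")
central_ports=$(extract_sipports "$CENTRAL_CONF")

if [ "$probe_ports" = "$central_ports" ]; then
    echo "OK: sipport lists match ($probe_ports)"
else
    echo "MISMATCH: probe=[$probe_ports] central=[$central_ports]"
fi
</syntaxhighlight>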
=== Critical: SIP and RTP Must Be Captured Together ===

VoIPmonitor cannot correlate SIP and RTP from different sniffer instances. A '''single sniffer must process both SIP and RTP''' for each call. Parameters like <code>cdr_check_exists_callid</code> do NOT enable split SIP/RTP correlation.

==== Split SIP/RTP with Packet Mirroring Mode ====

{{Note|1='''Exception for Packet Mirroring mode:''' The above limitation applies to '''Local Processing mode''' (<code>packetbuffer_sender=no</code>), where each sensor processes calls independently. In '''Packet Mirroring mode''' (<code>packetbuffer_sender=yes</code>), the central server receives raw packets from multiple remote sensors and processes them together. This allows scenarios where SIP and RTP are captured on separate nodes: configure both as packet senders and let the central server correlate them into single unified CDRs.}}

Example scenario: separate SIP signaling and RTP handling nodes:
<syntaxhighlight lang="ini">
# SIP Signaling Node (packet sender)
id_sensor = 1
packetbuffer_sender = yes
server_destination = central.server.ip
server_destination_port = 60024
server_password = your_password

# RTP Handling Node (packet sender)
id_sensor = 2
packetbuffer_sender = yes
server_destination = central.server.ip
server_destination_port = 60024
server_password = your_password
</syntaxhighlight>

The central server merges packets from both senders by Call-ID, creating unified CDRs with complete SIP and RTP data.

==== HEP Protocol in Client/Server Mode ====

VoIPmonitor supports receiving HEP-encapsulated traffic on sniffer clients and forwarding it to a central server. This enables distributed capture from HEP sources (Kamailio, OpenSIPS, rtpproxy, FreeSWITCH) in a client/server architecture.

'''Scenario:''' a SIP proxy and an RTP proxy at different locations send HEP to remote sniffer clients:

<syntaxhighlight lang="ini">
# Remote Sniffer Client A (receives HEP from Kamailio)
id_sensor = 1
hep = yes
hep_bind_port = 9060
packetbuffer_sender = yes
server_destination = central.server.ip
server_destination_port = 60024
server_password = your_password

# Remote Sniffer Client B (receives HEP from rtpproxy)
id_sensor = 2
hep = yes
hep_bind_port = 9060
packetbuffer_sender = yes
server_destination = central.server.ip
server_destination_port = 60024
server_password = your_password
</syntaxhighlight>

The central server receives packets from both clients and correlates them into unified CDRs using the standard SIP Call-ID and the IP:port pairs from SDP.

{{Note|1=This also works for IPFIX (Oracle SBCs) and RibbonSBC protocols forwarded via client/server mode.}}

'''Alternative: direct HEP to a single sniffer'''

If both HEP sources can reach the same sniffer directly, no client/server setup is needed:

<syntaxhighlight lang="ini">
# Single sniffer receiving HEP from multiple sources
hep = yes
hep_bind_port = 9060
interface = eth0   # Can also sniff locally if needed
</syntaxhighlight>

Both Kamailio (SIP) and rtpproxy (RTP) send HEP to this sniffer on port 9060. The sniffer correlates them automatically based on Call-ID and SDP IP:port.

=== Sensor Registration Errors: Stale Database Records ===

If a new sensor fails to connect to the main server with the error '''"failed response from server - bad password"''' even when <code>server_password</code> is correctly configured in both files, the issue may be a stale sensor record in the GUI database.

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Important: Stale Sensor Records Cause Authentication Failures
|-
| style="vertical-align: top;" | '''The Issue:'''
| The GUI database retains a record of a previously deployed sensor. This stale record can prevent a new sensor with the same <code>id_sensor</code> from authenticating, even when the password matches exactly. The error is misleading; the problem is not a password mismatch but a database conflict.
|-
| style="vertical-align: top;" | '''Common Scenarios:'''
| * Replacing a failed sensor with new hardware<br>* Reinstalling or reconfiguring a sensor<br>* Changing sensor IDs and back<br>* Restoring from backups where sensor records are out of sync
|}

'''Resolution: Delete the Stale Sensor Record and Re-register'''

;1. Delete the problematic sensor record from the GUI:
Navigate to '''GUI → Settings → Sensors''' and delete the sensor entry that is causing issues. Click the delete icon (typically a trash can or × icon) next to the sensor record.

;2. Restart the voipmonitor service on the '''sensor/probe machine''':
<syntaxhighlight lang="bash">
# On the agent/probe machine only
systemctl restart voipmonitor
</syntaxhighlight>

;3. Verify network connectivity from the sensor to the server:
Test that the server is reachable on the configured port:
<syntaxhighlight lang="bash">
# From the sensor, test connectivity to the central server
# Replace <server_ip> with the actual server IP
# Replace 60024 with your server_bind_port if not using the default
telnet <server_ip> 60024
</syntaxhighlight>
If <code>telnet</code> shows "Connected to <server_ip>", the server is reachable. If it shows "Connection refused" or times out, check firewall rules and ensure the server service is running:
<syntaxhighlight lang="bash">
# On the central server, check if voipmonitor is listening
ss -tulpn | grep 60024
# or
netstat -tlnp | grep voipmonitor
</syntaxhighlight>

;4. Verify automatic re-registration:
After the service restart, the sensor automatically registers with the central server and appears as a new entry in '''GUI → Settings → Sensors'''. The connection status should show "Connected" or "Online".

{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
|-
! colspan="2" style="background:#4A90E2; color: white;" | How Sensor Registration Works
|-
| style="vertical-align: top;" | '''Automatic Process:'''
| Sensors automatically register with the central server when they start. The sensor sends a handshake packet with its <code>id_sensor</code> and authentication credentials. The central server checks:
* If a sensor with this <code>id_sensor</code> exists in the database, it validates the credentials
* If no such sensor exists, the server creates a database record automatically
|-
| style="vertical-align: top;" | '''Why Stale Records Matter:'''
| When a stale sensor record exists, the server validates the new sensor against the existing record. If the records are out of sync (e.g., different passwords, different manager IPs, corrupted state), authentication fails. Deleting the stale record allows the registration process to start fresh.
|}

'''Prevention: Keep the Sensor Database Clean'''

* When decommissioning a sensor, delete its record from '''GUI → Settings → Sensors'''
* When replacing sensor hardware, delete the old sensor record before bringing up the new one
* Verify sensor IDs are unique across your deployment (duplicate <code>id_sensor</code> values will cause conflicts)
* Use the [[Backing_Up_GUI_Configuration|GUI backup feature]] to maintain a clean baseline, then restore sensors selectively as needed

=== RTP Streams End Prematurely in Distributed Deployments ===

If RTP streams end prematurely in call recordings when using a remote sniffer with a central GUI, this is often caused by the <code>natalias</code> configuration being set on the wrong system.

'''The Problem:'''

When packets are forwarded from a remote sniffer to a central server (Packet Mirroring mode), the central server sees the packets with the original IP addresses as captured by the sniffer. If <code>natalias</code> is configured on the remote sniffer, the IP address substitution happens at capture time. This can cause the central server's RTP correlation logic to fail, because the substituted addresses no longer match what the central server sees in the SIP signaling.

'''The Solution:'''

Configure <code>natalias</code> only on the central server that receives and processes the packets, not on the remote sniffer that captures and forwards them.

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Critical: natalias Configuration Placement
|-
| style="vertical-align: top;" | '''Remote Sniffer (packet forwarding):'''
| Do NOT set <code>natalias</code> on the remote sensor. Let it forward packets with their original IP addresses.
|-
| style="vertical-align: top;" | '''Central Server (packet processing):'''
| Configure <code>natalias</code> on the central server that performs the analysis. The address substitution happens during correlation, at the point where SIP and RTP are matched.
|}

'''Configuration Example:'''

<syntaxhighlight lang="ini">
# WRONG: Do NOT configure natalias on the remote sniffer
# /etc/voipmonitor.conf on the REMOTE SENSOR
# natalias = 1.2.3.4 10.0.0.5   # DON'T DO THIS

# CORRECT: Configure natalias on the central server
# /etc/voipmonitor.conf on the CENTRAL SERVER
natalias = 1.2.3.4 10.0.0.5
server_bind = 0.0.0.0
server_bind_port = 60024
# ... other central server settings
</syntaxhighlight>

'''After Changing Configuration:'''

<syntaxhighlight lang="bash">
# Restart voipmonitor on BOTH systems
systemctl restart voipmonitor
</syntaxhighlight>

This ensures that RTP packets are correctly associated with their SIP dialogs on the central server, even when the traffic traverses NAT devices.

=== Sensor Health Monitoring ===

==== Management API ====

Query sensor status via TCP port 5029:

<syntaxhighlight lang="bash">
echo 'sniffer_stat' | nc <sensor_ip> 5029
</syntaxhighlight>

This returns JSON with status, version, active calls, packets per second, and other runtime counters.

==== Multi-Sensor Health Check Script ====

<syntaxhighlight lang="bash">
#!/bin/bash
SENSORS=("192.168.1.10:5029" "192.168.1.11:5029")
for SENSOR in "${SENSORS[@]}"; do
    IP=$(echo $SENSOR | cut -d: -f1)
    PORT=$(echo $SENSOR | cut -d: -f2)
    STATUS=$(echo 'sniffer_stat' | nc -w 2 $IP $PORT 2>/dev/null | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
    echo "$IP: ${STATUS:-FAILED}"
done
</syntaxhighlight>

=== Version Compatibility ===

{| class="wikitable"
|-
! Scenario !! Compatibility !! Notes
|-
| '''GUI ≥ Sniffer''' || ✅ Compatible || Recommended
|-
| '''GUI < Sniffer''' || ⚠️ Risk || The sensor may try to write to database columns that do not exist yet
|}

'''Best practice:''' Upgrade the GUI first (it applies the schema changes), then upgrade the sensors.

To run mixed versions temporarily, add to the central server:
<syntaxhighlight lang="ini">
server_cp_store_simple_connect_response = yes   # Sniffer 2024.11.0+
</syntaxhighlight>

=== Quick Diagnosis Table ===

{| class="wikitable"
|-
! Symptom !! First Check !! Likely Cause
|-
| Sensor not connecting || <code>journalctl -u voipmonitor -f</code> on the sensor || Check <code>server_destination</code>, password, firewall
|-
| Traffic rate <code>[0.0Mb/s]</code> || tcpdump on the sensor interface || Network/SPAN issue, not sensor-server communication
|-
| High memory on the central server || Check whether <code>sipport</code> includes 60024 || Exclude the server port from sipport
|-
| Missing calls || Compare <code>sipport</code> on probe vs central || Must match on both sides
|-
| "Bad password" error || GUI → Settings → Sensors || Delete the stale sensor record, restart the sensor
|-
| "Connection refused (111)" after migration || Check <code>server_destination</code> in the config || Points to the old server IP
|-
| RTP streams end prematurely || Check where <code>natalias</code> is set || Configure it only on the central server
|-
| Time sync errors || <code>timedatectl status</code> || Fix NTP or increase the tolerance
|}

=== Connection Testing ===

<syntaxhighlight lang="bash">
# Test connectivity from the sensor to the server
nc -zv <server_ip> 60024

# Verify the server is listening
ss -tulpn | grep voipmonitor

# Check sensor logs
journalctl -u voipmonitor -n 100 | grep -i "connect"
</syntaxhighlight>

=== Time Synchronization Errors ===

If you see "different time between server and client" errors:

'''Immediate workaround:''' increase the tolerance on both sides:
<syntaxhighlight lang="ini">
client_server_connect_maximum_time_diff_s = 30
receive_packetbuffer_maximum_time_diff_s = 30
</syntaxhighlight>

'''Root-cause fix:''' ensure NTP is working:
<syntaxhighlight lang="bash">
timedatectl status   # Check sync status
chronyc tracking     # Check offset (Chrony)
ntpq -p              # Check offset (NTP)
</syntaxhighlight>

=== Measuring Network Throughput Between Probe and Server ===

If you experience memory buffer issues ("packetbuffer: MEMORY IS FULL"), high packet loss, or slow CDR delivery in distributed deployments, the bottleneck may be insufficient network bandwidth between the probe and the central server. Before adding hardware, verify network performance using iperf3.

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Test Network Throughput Before Hardware Upgrades
|-
| style="vertical-align: top;" | '''When to test:'''
| * "packetbuffer: MEMORY IS FULL" errors on probes<br>* System load consistently high (e.g., 70-80% on an 8-core system)<br>* Slow CDR display<br>* Packet loss during peak traffic
|-
| style="vertical-align: top;" | '''Objective:'''
| Identify whether the bottleneck is network bandwidth or local resources (CPU/RAM/disk)
|}

==== Step 1: Install iperf3 on Both Systems ====

Install iperf3 on both the probe and the central server:

<syntaxhighlight lang="bash">
# On Debian/Ubuntu systems
sudo apt-get install iperf3

# On RHEL/CentOS systems
sudo yum install iperf3
</syntaxhighlight>

==== Step 2: Start the iperf3 Server on the Central Server ====

On the central server (listening side), run:

<syntaxhighlight lang="bash">
# Start the iperf3 server (listening on all interfaces)
iperf3 -s

# Or specify a specific port if needed
iperf3 -s -p 5201
</syntaxhighlight>

Leave the server running.

==== Step 3: Run the iperf3 Client on the Probe ====

On the remote probe (sending side), test the connection to the central server:

<syntaxhighlight lang="bash">
# Test TCP throughput to the central server
iperf3 -c <central_server_ip>

# Example:
iperf3 -c 192.168.1.100

# Test the reverse direction (server sends to client)
iperf3 -c <central_server_ip> -R

# Run for 60 seconds with multiple parallel streams
iperf3 -c <central_server_ip> -t 60 -P 4
</syntaxhighlight>

Replace <code><central_server_ip></code> with the IP address or hostname of your central server.

==== Step 4: Interpret Results ====

Analyze the iperf3 output to determine whether your network is a bottleneck:

{| class="wikitable"
|-
! Throughput !! Interpretation !! Recommended Action
|-
| '''Expected bandwidth''' (e.g., >900 Mbps on a 1 Gb link) || Network is NOT the bottleneck || Check local resources: CPU load, RAM, disk I/O
|-
| '''Significantly lower''' (e.g., 200-500 Mbps on a 1 Gb link) || Network is a bottleneck || Investigate network infrastructure: switch capacity, link quality, congestion
|-
| '''Very low''' (e.g., <50 Mbps on a 1 Gb link) || Severe network issue || Check for duplex mismatches, faulty cabling, switch configuration, ISP limitations
|}

==== Step 5: Decision Matrix After the iperf3 Test ====

Based on the network test results:

{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
|-
! colspan="2" style="background:#4A90E2; color: white;" | When Network is NOT the Bottleneck (High Throughput)
|-
| style="vertical-align: top;" | '''Symptom:'''
| iperf3 shows close to full bandwidth (e.g., >900 Mbps on 1 Gb), but the probe still reports MEMORY IS FULL or high CPU load
|-
| style="vertical-align: top;" | '''Root Cause:'''
| Local resource constraints on the probe machine
|-
| style="vertical-align: top;" | '''Solution:'''
| Check CPU usage (htop, uptime). If system load is consistently high (e.g., 70-80% on an 8-core system) and the probe cannot process packets fast enough, '''increase CPU cores on the probe machine''' by upgrading hardware or adding vCPUs in virtualized environments. Tuning options (rtpthreads, ringbuffer) are a temporary workaround; adding CPU cores addresses the root cause.
|-
| colspan="2" | See [[Hardware|Hardware Sizing Examples]] for CPU requirements based on concurrent call volume, or [[Scaling|Scaling and Performance Tuning]] for optimization guidance.
|}

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | When Network IS the Bottleneck (Low Throughput)
|-
| style="vertical-align: top;" | '''Symptom:'''
| iperf3 shows significantly lower bandwidth than the link capacity
|-
| style="vertical-align: top;" | '''Root Cause:'''
| Network infrastructure cannot handle the VoIP traffic volume between probe and server
|-
| style="vertical-align: top;" | '''Solution:'''
| * Inspect the network path: check switches, routers, and VLANs for congestion or misconfiguration<br>* Verify link speed and duplex settings (ethtool)<br>* Check for packet loss (ping with statistics) and latency (traceroute)<br>* Upgrade the network: 1GbE to 10GbE, dedicated links, or improved routing<br>* Consider switching to Local Processing mode (<code>packetbuffer_sender=no</code>) to reduce network traffic
|}

==== Step 6: Network Configuration Checks ====

If iperf3 shows low throughput, check these network configuration issues:

{| class="wikitable"
|-
! Check !! Command !! What to Look For
|-
| Link speed and duplex || <code>ethtool eth0</code> || "Speed: 1000Mb/s", "Duplex: Full"
|-
| Packet loss on the path || <code>ping -c 100 <central_server_ip></code> || "0% packet loss" is ideal. Loss >1% indicates network issues
|-
| Network latency || <code>traceroute <central_server_ip></code> || Consistent sub-millisecond hops are ideal. High variance indicates congestion
|-
| Interface errors || <code>ethtool -S eth0 &#124; grep -i error</code> || Should be zero or very low. High counts indicate hardware issues
|}

Example commands:

<syntaxhighlight lang="bash">
# Check network interface speed and duplex
ethtool eth0
# Look for: Speed: 1000Mb/s, Duplex: Full

# Test for packet loss
ping -c 100 192.168.1.100
# Look for: 0% packet loss

# Check network path latency
traceroute 192.168.1.100

# Check interface error counters
ethtool -S eth0 | grep -i error
</syntaxhighlight>

==== Considerations for Packet Mirroring vs Local Processing ====

When network bandwidth is constrained:

* '''Local Processing mode (<code>packetbuffer_sender=no</code>):''' Probes analyze packets locally and send only CDRs (SQL queries) to the central server. Network traffic is minimal (typically <1 Mbps even during high traffic).

* '''Packet Mirroring mode (<code>packetbuffer_sender=yes</code>):''' Probes forward raw packets to the central server. The network bandwidth requirement is of the same order as the captured VoIP traffic volume (reduced somewhat by the tunnel's zstd compression).

For deployments with limited network bandwidth between probes and the central server, Local Processing mode is generally preferred.

For more details on switching between modes, see [[Sniffer_distributed_architecture#Local_Processing_vs_Packet_Mirroring|Local Processing vs Packet Mirroring]].

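The throughput interpretation can also be scripted: iperf3 can emit JSON with <code>-J</code>, and the measured rate can be classified against the thresholds above. This is a sketch assuming a 1 Gb/s link (adjust <code>LINK_MBPS</code>); the commented JSON extraction is deliberately crude (it takes the last <code>bits_per_second</code> value), and <code><central_server_ip></code> is a placeholder:

<syntaxhighlight lang="bash">
#!/bin/sh
LINK_MBPS=1000

classify_mbps() {
    # $1 = measured throughput in Mbit/s
    awk -v mbps="$1" -v link="$LINK_MBPS" 'BEGIN {
        if (mbps > 0.9 * link)       print "network OK - check local CPU/RAM/disk instead"
        else if (mbps > 0.05 * link) print "network bottleneck - inspect switches/links"
        else                         print "severe network issue - check duplex/cabling"
    }'
}

# On the probe:
# mbps=$(iperf3 -c <central_server_ip> -J \
#        | awk -F'[:,]' '/bits_per_second/ {v=$2} END {printf "%.1f", v/1000000}')
# classify_mbps "$mbps"
</syntaxhighlight>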
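For capacity planning in Packet Mirroring mode, the required probe-to-server bandwidth can be roughly estimated from the concurrent call count. A back-of-the-envelope sketch: the 87.2 kbit/s per-direction figure assumes G.711 RTP at 20 ms packetization including IP/UDP/RTP overhead, and ignores signaling volume, Ethernet framing, and the tunnel's zstd compression:

<syntaxhighlight lang="bash">
#!/bin/sh
mirroring_bandwidth_mbps() {
    # $1 = concurrent calls; two RTP directions per call
    awk -v calls="$1" 'BEGIN { printf "%.1f", calls * 87.2 * 2 / 1000 }'
}

mirroring_bandwidth_mbps 1000   # prints 174.4 (Mbit/s for 1000 concurrent G.711 calls)
</syntaxhighlight>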
== Legacy: Mirror Mode ==

'''Note:''' The older <code>mirror_destination</code>/<code>mirror_bind</code> options still exist, but the modern Client-Server approach with <code>packetbuffer_sender=yes</code> is preferred because it provides encryption and simpler management.

=== Debugging SIP Traffic in Distributed Architecture ===

When using the Client-Server architecture (Packet Mirroring or Local Processing), standard packet capture tools like <code>sngrep</code> cannot see SIP packets on the central server, because the traffic is encapsulated inside the encrypted TCP tunnel between the sensor and the central server.

{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
|-
! colspan="2" style="background:#4A90E2; color: white;" | Why sngrep Does Not Work on the Central Server
|-
| style="vertical-align: top;" | '''The Issue:'''
| <code>sngrep</code> relies on capturing raw SIP packets directly from the network interface. In Client-Server mode, remote sensors forward traffic to the central server inside an encrypted TCP channel (default port 60024, with zstd compression). The SIP packets are wrapped inside this tunnel and cannot be inspected by standard tools.
|-
| style="vertical-align: top;" | '''What You See:'''
| <code>tcpdump</code> on the central server shows encrypted TCP packets on port 60024 (Packet Mirroring) or SQL traffic on port 3306 (Local Processing). No raw SIP packets (UDP 5060) are visible on the wire.
|}

To inspect SIP packets in distributed deployments, use one of these methods:

==== Live Sniffer (Recommended) ====

The '''Live Sniffer''' feature in the VoIPmonitor GUI provides a real-time SIP packet display from remote sensors. This is the preferred method for debugging call flows and network issues in distributed architectures.

To use the Live Sniffer:
# Open the VoIPmonitor GUI
# Navigate to "Live Sniffer"
# Select the remote sensor from the dropdown
# Click "Start" to view live SIP packets from that sensor

The Live Sniffer streams SIP packets from the sensor to the GUI in real time via the Manager API (TCP port 5029). Features include:
* Call coloring by Call-ID
* Packet detail inspection
* Call flow visualization
* Multi-user support

For setup and troubleshooting of the Live Sniffer, see the [[Live_sniffer|Live Sniffer documentation]].

==== sngrep on Remote Sensors ====

If you need to use <code>sngrep</code> directly, run it on the '''remote sensor machine''', where the traffic first arrives from the network interface before VoIPmonitor encapsulation occurs.

<syntaxhighlight lang="bash">
# SSH into the remote sensor
# Run sngrep on the interface connected to the SPAN/mirror port
sngrep -i eth0
</syntaxhighlight>

Replace <code>eth0</code> with the network interface on the sensor that receives the mirrored traffic from the switch.

==== Verify Connectivity on the Central Server ====

If you need to verify that data is flowing from sensors to the central server:

<syntaxhighlight lang="bash">
# Check for encrypted tunnel traffic (Packet Mirroring mode)
tcpdump -i any port 60024

# Check for SQL traffic (Local Processing mode)
tcpdump -i any port 3306

# Check sensor statistics via the Management API
echo 'sniffer_stat' | nc <sensor_ip> 5029
</syntaxhighlight>

| === Migrating from Mirror Mode to Client-Server Mode ===
| | {| class="wikitable" |
| | |
| If your system uses the legacy mirror mode (<code>mirror_destination</code> on probes, <code>mirror_bind</code> on server), you should migrate to the modern client/server mode. Common symptoms of mirror mode issues include all CDRs being incorrectly associated with a single sensor after system updates.
| |
| | |
| {| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;" | |
| |- | | |- |
| ! colspan="2" style="background:#ffc107;" | Why Migration is Recommended | | ! Result !! Interpretation !! Action |
| |- | | |- |
| | style="vertical-align: top;" | '''Mirror Mode Limitations:''' | | | Expected bandwidth (>900 Mbps on 1Gb) || Network OK || Check local CPU/RAM |
| | * No encryption (raw UDP traffic) | |
| * Complex firewall configuration (must open mirroring port)
| |
| * Less robust connection handling
| |
| * Configuration can be lost during OS upgrades
| |
| |- | | |- |
| | style="vertical-align: top;" | '''Client-Server Advantages:''' | | | Low throughput || Network bottleneck || Check switches, cabling, consider Local Processing mode |
| | * Encrypted TCP connections | |
| * Automatic reconnection with failover
| |
| * Centralized port configuration
| |
| * Better troubleshooting capabilities
| |
| |} | | |} |
==== Prerequisites ====

* Central server hostname or IP address
* Port for client-server communication (default: 60024)
* Strong shared password for authentication

==== Migration Steps ====

;1. Stop the voipmonitor sniffer service on all probe machines:
<syntaxhighlight lang="bash">
# On each probe
systemctl stop voipmonitor
</syntaxhighlight>
;2. Update the GUI Sensors list:
# Log in to the VoIPmonitor GUI
# Navigate to '''Settings → Sensors'''
# Remove all old probe records, keeping only the server instance (e.g., localhost or the central server IP)
;3. Configure the Central Server:
Edit <code>/etc/voipmonitor.conf</code> on the central server:
<syntaxhighlight lang="ini">
# COMMENT OUT or remove mirror mode parameters:
# mirror_bind_ip = 1.2.3.4
# mirror_bind_port = 9000

# ADD client-server mode parameters:
server_bind = <server_ip>        # Use 0.0.0.0 to listen on all interfaces
server_bind_port = <port>        # Default is 60024
server_password = <a_strong_password>

# MySQL configuration remains unchanged
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = <your_db_password>
</syntaxhighlight>
Restart the service on the central server:
<syntaxhighlight lang="bash">
# On central server
systemctl restart voipmonitor
</syntaxhighlight>

Verify the server is listening:
<syntaxhighlight lang="bash">
# Check that voipmonitor is listening on the configured port
ss -tulpn | grep voipmonitor
# Should show: voipmonitor LISTEN 0.0.0.0:60024 (or your custom port)
</syntaxhighlight>
;4. Configure each Probe:
Edit <code>/etc/voipmonitor.conf</code> on each remote probe:
<syntaxhighlight lang="ini">
# COMMENT OUT or remove mirror mode parameters:
# mirror_destination_ip = 1.2.3.4
# mirror_destination_port = 9000

# ADD client-server mode parameters:
id_sensor = <unique_id>                  # Must be unique per sensor
server_destination = <server_ip>
server_destination_port = <port>         # Must match server_bind_port
server_password = <a_strong_password>    # Same password used on server

# IMPORTANT: Set packet handling mode
packetbuffer_sender = no    # Local Processing: analyze locally, send CDRs only
# OR
# packetbuffer_sender = yes # Packet Mirroring: send raw packets to server

# Capture settings remain unchanged
interface = eth0
sipport = 5060
# No MySQL credentials needed on remote sensors for Local Processing mode
</syntaxhighlight>
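Because every probe must carry a unique <code>id_sensor</code>, it is worth checking the collected configs mechanically before restarting. A sketch, assuming you have copied each probe's <code>voipmonitor.conf</code> into one local directory (the path is a placeholder):

```shell
#!/usr/bin/env bash
# Print any id_sensor value that appears in more than one config file.
# An empty result means all collected probes have unique IDs.
find_duplicate_ids() {
  grep -h '^id_sensor' "$@" 2>/dev/null \
    | tr -d ' ' | cut -d= -f2 | sort | uniq -d
}

# Usage sketch (placeholder path):
# find_duplicate_ids /tmp/probe-configs/*.conf
```

Any value printed by the function must be changed on all but one probe before the services are restarted.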
Restart the service on each probe:
<syntaxhighlight lang="bash">
# On each probe
systemctl restart voipmonitor
</syntaxhighlight>
;5. Verify Connection in GUI:
# Log in to the VoIPmonitor GUI
# Navigate to '''Settings → Sensors'''
# Verify that probes appear automatically with their configured <code>id_sensor</code> values
# Check the connection status (online/offline)
;6. Test Data Flow:
# Generate test traffic on a probe network (make a test call)
# Check the CDR view in the GUI
# Verify that new records show the correct <code>id_sensor</code> for that probe
# Confirm PCAP files are accessible (click the play button in the CDR view)
==== Common Issues During Migration ====

{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
|-
! colspan="2" style="background:#4A90E2; color: white;" | Troubleshooting Connection Problems
|-
| style="vertical-align: top;" | '''Probes cannot connect:'''
|
* Verify <code>server_password</code> is identical on the server and all probes
* Check the firewall: allow incoming TCP on <code>server_bind_port</code> (default 60024) on the central server
* Verify network connectivity: <code>nc -zv <server_ip> <server_bind_port></code> from the probe
|-
| style="vertical-align: top;" | '''All CDRs show the same sensor:'''
|
This typically indicates the old mirror mode configuration is still active or <code>id_sensor</code> is not set on the probes. Double-check that:
* Mirror parameters are commented out on both sides
* Each probe has a unique <code>id_sensor</code> value
* Services were restarted after configuration changes
* If you must temporarily remain on legacy mirror mode, set <code>mirror_bind_sensor_id_by_sender = yes</code> on the central receiver so CDRs are attributed to the sending probe
|-
| style="vertical-align: top;" | '''PCAP files not accessible:'''
|
In Local Processing mode (<code>packetbuffer_sender=no</code>), PCAPs are stored on the probes and retrieved via TCP port 5029. Ensure:
* The central server can reach each probe on TCP/5029
* The firewall allows TCP/5029 from the central server to the probes
|}

==== Debugging SIP Traffic ====

<code>sngrep</code> does not work on the central server because traffic arrives encapsulated in the encrypted TCP tunnel. Options:
* '''Live Sniffer:''' use GUI → Live Sniffer to view SIP from remote sensors
* '''sngrep on the sensor:''' run <code>sngrep -i eth0</code> directly on the remote sensor, where traffic is still plain on the capture interface

==== Stale Sensor Records ====

If a new sensor fails with "bad password" despite correct credentials:

# Delete the sensor record from '''GUI → Settings → Sensors'''
# Restart voipmonitor on the sensor: <code>systemctl restart voipmonitor</code>
# The sensor will re-register automatically
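The connectivity check from the table above can also be scripted without netcat, using bash's built-in <code>/dev/tcp</code>. A sketch — the hostname and port are placeholders for your central server:

```shell
#!/usr/bin/env bash
# Check whether this probe can reach the central server's client-server port.
# Uses bash's built-in /dev/tcp, so it works even where nc is not installed.
check_port() {
  local host=$1 port=$2
  timeout 3 bash -c ">/dev/tcp/$host/$port" 2>/dev/null
}

# Placeholder target: replace with your server_destination / server_bind_port
if check_port central.server.ip 60024; then
  echo "reachable"
else
  echo "NOT reachable - check firewall and server_bind_port"
fi
```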
== Critical Requirement: SIP and RTP must be captured by the same sniffer instance ==

'''VoIPmonitor cannot reconstruct a complete call record if SIP signaling and RTP media are captured by different sniffer instances.'''

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Important: Single sniffer requirement
|-
| style="vertical-align: top;" | '''What does not work:'''
|
* Sniffer A in Availability Zone 1 captures SIP signaling
* Sniffer B in Availability Zone 2 captures RTP media
* Result: incomplete call record, the GUI cannot reconstruct the call
|-
| style="vertical-align: top;" | '''Why:'''
|
Call correlation requires a '''single sniffer instance to process both SIP and RTP packets from the same call'''. The sniffer correlates SIP signaling (INVITE, BYE, etc.) with RTP media in real time during packet processing. If packets are split across multiple sniffers, the correlation cannot occur.
|-
| style="vertical-align: top;" | '''Solution:'''
|
Forward traffic so that '''one sniffer processes both SIP and RTP for each call'''. Options:
* Route both SIP and RTP through the same Availability Zone for capture
* Use Packet Mirroring mode to forward complete traffic (SIP+RTP) to a central server that processes everything
* Configure network routers/firewalls to forward the required stream to the correct zone
|}

Configuration parameters like <code>receiver_check_id_sensor</code> and <code>cdr_check_exists_callid</code> are for other scenarios (multipath routing, duplicate Call-ID handling) and '''do NOT enable split SIP/RTP correlation'''. Setting these parameters does not allow SIP from one sniffer to be merged with RTP from another sniffer.
== Intermediate Server: Multi-Sensor Aggregation ==

An intermediate server can receive traffic from multiple remote sensors and forward it to a central server. This is useful for aggregating traffic from many locations before sending it to a central data center.

=== Architecture ===

<kroki lang="plantuml">
@startuml
skinparam shadowing false
skinparam defaultFontName Arial
skinparam rectangle {
  BorderColor #4A90E2
  BackgroundColor #FFFFFF
}

rectangle "Remote Sensor A" as RA
rectangle "Remote Sensor B" as RB
rectangle "Remote Sensor C" as RC
rectangle "Intermediate Server\n(server_bind + server_destination)" as INT
rectangle "Central Server\n(server_bind)" as CS
database "MySQL" as DB

RA --> INT : encrypted TCP
RB --> INT : encrypted TCP
RC --> INT : encrypted TCP

INT --> CS : encrypted TCP
CS --> DB : CDRs

note right of INT
  Behavior controlled by
  packetbuffer_sender on
  intermediate server:

  * packetbuffer_sender=no:
    Process traffic locally,
    send CDRs to central

  * packetbuffer_sender=yes:
    Forward raw packets to
    central server
end note
@enduml
</kroki>

This is supported because the intermediate server does NOT do local packet capture - it only acts as a relay.
=== Intermediate Server Configuration ===

The intermediate server has both <code>server_bind</code> (to receive from remote sensors) and <code>server_destination</code> (to send to the central server).

<syntaxhighlight lang="ini">
# On INTERMEDIATE SERVER
# Acts as server for remote sensors, client to central server

[general]
id_sensor = 100          # Unique ID for this intermediate server

# Receive from remote sensors (server role)
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = sensor_password

# Send to central server (client role)
server_destination = central.server.ip
server_destination_port = 60024
server_password = central_password

# CRITICAL: packetbuffer_sender controls what happens to forwarded traffic

# Option 1: Local Processing on intermediate server
packetbuffer_sender = no    # Process locally, send CDRs to central
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password

# OR Option 2: Forward raw packets to central server
# packetbuffer_sender = yes # Forward raw packets (no database needed here)
</syntaxhighlight>
=== <code>packetbuffer_sender</code> on Intermediate Server ===

The <code>packetbuffer_sender</code> setting on the intermediate server determines how it handles traffic from remote sensors:

{| class="wikitable"
|-
! Setting !! What Happens !! Storage Location
|-
| <code>packetbuffer_sender=no</code> || Intermediate server processes traffic (SIP/RTP analysis) and sends CDRs to the central server || PCAPs on intermediate server
|-
| <code>packetbuffer_sender=yes</code> || Intermediate server forwards raw packets to the central server, which processes them || PCAPs on central server
|}

In both cases, the '''original remote sensors must still be manually added to the GUI for visibility'''.

=== Original vs Intermediate Sensor Visibility ===

{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
|-
! colspan="2" style="background:#4A90E2; color: white;" | Important: Manual Sensor Registration
|-
| style="vertical-align: top;" | '''Behavior:'''
| When using an intermediate server, the original remote sensors (A, B, C) are not automatically visible in the GUI Settings. Only the intermediate server itself appears.
|-
| style="vertical-align: top;" | '''Solution:'''
| To view statistics and status for the original sensors, they must be manually added to the GUI Settings list with their <code>id_sensor</code> values, even though they connect to the intermediate server rather than directly to the central server.
|}
=== Example: Local Processing Mode ===

Remote sensors process traffic locally and send CDRs to the intermediate server, which forwards them to the central server:

<syntaxhighlight lang="ini">
# Remote Sensors (A, B, C)
id_sensor = 2               # Unique values: 2, 3, 4...
server_destination = intermediate.server.ip
server_destination_port = 60024
server_password = sensor_password

packetbuffer_sender = no    # Local Processing: process here, send CDRs
interface = eth0
sipport = 5060
</syntaxhighlight>

<syntaxhighlight lang="ini">
# Intermediate Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = sensor_password

server_destination = central.server.ip
server_destination_port = 60024
server_password = central_password

packetbuffer_sender = no    # Process locally, send CDRs onward
mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
</syntaxhighlight>

<syntaxhighlight lang="ini">
# Central Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = central_password

mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
</syntaxhighlight>
=== Example: Packet Mirroring Mode ===

Remote sensors forward raw packets to the intermediate server, which forwards them to the central server:

<syntaxhighlight lang="ini">
# Remote Sensors (A, B, C)
id_sensor = 2               # Unique values: 2, 3, 4...
server_destination = intermediate.server.ip
server_destination_port = 60024
server_password = sensor_password

packetbuffer_sender = yes   # Packet Mirroring: send raw packets
interface = eth0
sipport = 5060
</syntaxhighlight>

<syntaxhighlight lang="ini">
# Intermediate Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = sensor_password

server_destination = central.server.ip
server_destination_port = 60024
server_password = central_password

packetbuffer_sender = yes   # Forward raw packets onward
# No database configuration needed on intermediate server
</syntaxhighlight>

<syntaxhighlight lang="ini">
# Central Server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = central_password

mysqlhost = localhost
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password

# Processing and storage options (configured on central server)
sipport = 5060
savertp = yes
savesip = yes
</syntaxhighlight>
== Filtering Options in Packet Mirroring Mode ==

{{Note|1='''Important distinction:''' In Packet Mirroring mode (<code>packetbuffer_sender=yes</code>):
* '''Capture rules (GUI-based):''' applied ONLY on the central server
* '''BPF filters / IP filters:''' CAN be applied on the remote sensor to reduce bandwidth}}

Use the following options on the '''remote sensor''' to filter traffic BEFORE it is sent to the central server:

<syntaxhighlight lang="ini">
# On REMOTE SENSOR (client)

# Option 1: BPF filter (tcpdump syntax) - most flexible
filter = not net 192.168.0.0/16 and not net 10.0.0.0/8

# Option 2: IP allow-list filter - CPU-efficient, no negation support
interface_ip_filter = 192.168.1.0/24
interface_ip_filter = 10.0.0.0/8
</syntaxhighlight>

Benefits of filtering on the remote sensor:
* Reduces WAN bandwidth usage between sensor and central server
* Reduces processing load on the central server
* Use <code>filter</code> for complex conditions (tcpdump/BPF syntax)
* Use <code>interface_ip_filter</code> for simple IP allow-lists (more efficient)

Choosing an approach:
* For SIP header-based filtering, apply capture rules on the '''central server''' only
* For IP/subnet filtering, use <code>filter</code> or <code>interface_ip_filter</code> on the '''remote sensor'''

== Supported Options on Remote Sensor in Packet Mirroring Mode ==

In Packet Mirroring mode (<code>packetbuffer_sender = yes</code>), the remote sensor forwards raw packets without processing them, so many options that manipulate packet behavior are unsupported there. The following options work correctly on the remote sensor:

{| class="wikitable"
|-
! Parameter !! Description
|-
| <code>id_sensor</code> || Unique sensor identifier
|-
| <code>server_destination</code> || Central server address
|-
| <code>server_destination_port</code> || Central server port (default 60024)
|-
| <code>server_password</code> || Authentication password
|-
| <code>filter</code> || BPF filter to limit capture (use this to capture only SIP)
|-
| <code>interface_ip_filter</code> || IP-based packet filtering
|-
| <code>interface</code> || Capture interface
|-
| <code>sipport</code> || SIP ports to monitor
|-
| <code>promisc</code> || Promiscuous mode
|-
| <code>rrd</code> || RRD statistics
|-
| <code>spooldir</code> || Temporary packet buffer directory
|-
| <code>ringbuffer</code> || Ring buffer size for packet mirroring
|-
| <code>max_buffer_mem</code> || Maximum buffer memory
|-
| <code>packetbuffer_enable</code> || Enable packet buffering
|-
| <code>packetbuffer_compress</code> || Enable compression for forwarded packets
|-
| <code>packetbuffer_compress_ratio</code> || Compression ratio
|}

== Unsupported Options on Remote Sensor ==

The following options do NOT work on the remote sensor in packet mirroring mode because the sensor does not parse packets:

{| class="wikitable"
|-
! Parameter !! Reason
|-
| <code>natalias</code> || NAT alias handling (configure on central server instead)
|-
| <code>rtp_check_both_sides_by_sdp</code> || RTP correlation requires packet parsing
|-
| <code>disable_process_sdp</code> || SDP processing happens on central server
|-
| <code>save_sdp_ipport</code> || SDP extraction happens on central server
|-
| <code>rtpfromsdp_onlysip</code> || RTP mapping requires packet parsing
|-
| <code>rtpip_find_endpoints</code> || Endpoint discovery requires packet parsing
|}

{{Warning|1='''Critical: Storage options''' (<code>savesip</code>, <code>savertp</code>, <code>saveaudio</code>) '''must be configured on the CENTRAL SERVER''' in packet mirroring mode. The remote sensor only forwards packets and does not perform any storage operations.}}

== SIP-Only Capture Example ==

To capture and forward only SIP packets (excluding RTP/RTCP) for security or compliance:

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf - Remote Sensor
id_sensor = 2
server_destination = central.server.ip
server_destination_port = 60024
server_password = your_strong_password
packetbuffer_sender = yes
interface = eth0
sipport = 5060,5061

# Filter to capture ONLY SIP packets (exclude RTP/RTCP)
filter = port 5060 or port 5061
</syntaxhighlight>

{{Note|1=The <code>filter</code> parameter uses BPF (tcpdump-compatible) syntax and is the recommended way to filter packets at the source in packet mirroring mode: only SIP packets are forwarded to the central server, which greatly reduces bandwidth.}}

== Version Compatibility ==

=== General Compatibility Rules ===

In general, there is no strict version locking between the VoIPmonitor GUI (web interface) and the sniffer (sensor) components. The primary compatibility constraint is the '''database schema''', which is managed primarily by the GUI.

{| class="wikitable"
|-
! Scenario !! Compatibility !! Risk Level !! Details
|-
| '''GUI ≥ Sniffer''' || '''Compatible''' || ✅ Low || GUI is newer or the same version. It can visualize data from older sensors, and the database schema supports all data the sensors write.
|-
| '''GUI < Sniffer''' || '''Potentially Incompatible''' || ⚠️ High || Sensor is newer. It may try to write to database columns or tables that do not exist in the older GUI's schema, leading to SQL insert errors.
|}

'''Best practice:''' keep the GUI version equal to or higher than the sniffer version (GUI ≥ Sniffer). When upgrading components, always upgrade the GUI first so it applies new database schemas, then upgrade the sniffers.

=== Client-Server Mode Version Matching ===

For client-server mode deployments (remote sensors connecting to a central server), it is strongly recommended that clients and receivers use the same version.

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Version Matching Recommendation
|-
| style="vertical-align: top;" | '''For client-server mode:'''
| Keep all sensors and their central receivers on the same version whenever possible. While different versions can work together, matching versions ensure full compatibility and access to the latest features and fixes.
|-
| style="vertical-align: top;" | '''For gradual upgrades:'''
| If you cannot upgrade all sensors simultaneously, upgrade the central receiver/server first, then upgrade sensors one by one. The receiver should be at least as new as the newest connecting sensor.
|}
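The "GUI ≥ Sniffer" rule can be checked mechanically during upgrade automation. A sketch using GNU <code>sort -V</code> (version-aware ordering); the version strings are examples, not real releases:

```shell
#!/usr/bin/env bash
# Succeed when the GUI version is >= the sniffer version
# (the "GUI >= Sniffer" best practice described above).
gui_at_least_sniffer() {
  local gui=$1 sniffer=$2
  # sort -V orders version strings; the GUI must sort last (or be equal)
  [ "$(printf '%s\n%s\n' "$sniffer" "$gui" | sort -V | tail -n1)" = "$gui" ] \
    || [ "$gui" = "$sniffer" ]
}

gui_at_least_sniffer "2024.11.0" "2024.10.2" && echo "schema safe"
```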
=== Mixed Version Compatibility ===

If you need to run mixed versions in your deployment temporarily, a compatibility option is available in sensor version 2024.11.0 and newer.

'''server_cp_store_simple_connect_response'''

This configuration option enables the client-server communication protocol to work in mixed-version environments. When enabled, the server uses a simpler protocol variant that is compatible with older sensor versions.

<syntaxhighlight lang="ini">
# On the central receiver/server (sniffer 2024.11.0+)
server_cp_store_simple_connect_response = yes
</syntaxhighlight>

{| class="wikitable"
|-
! Condition !! Default Value !! Recommended Setting
|-
| Matching versions (normal) || no || no (default)
|-
| Mixed versions (temporary) || no || yes (enable only if needed)
|}

{| class="wikitable" style="background:#f8f9fa; border:1px solid #dee2e6;"
|-
! colspan="2" style="background:#dee2e6;" | Important Notes on Mixed Versions
|-
| This is a '''temporary compatibility option''' for migration periods. Using matching versions for all components is the preferred long-term configuration.
|-
| The option should be set on the '''central receiver/server''' instance that receives connections from sensors.
|-
| Once all sensors are upgraded to their target versions, disable this option (<code>server_cp_store_simple_connect_response = no</code>) for normal operation.
|}
=== Version Verification ===

To check the running version of any component:

<syntaxhighlight lang="bash">
# On the sensor or server host
/usr/local/sbin/voipmonitor --version

# Or via management API (default port 5029)
echo 'sniffer_version' | nc 127.0.0.1 5029
</syntaxhighlight>

In the GUI, navigate to '''Settings → Sensors''' to see the version of each connected sensor.
== Limitations ==

* All sensors must use the same <code>server_password</code> at each connection level (sensors→intermediate and intermediate→central)
* '''A single sniffer cannot do local packet capture AND act as both server and client simultaneously.''' The intermediate server configuration works because it does NOT capture from its own network interface - it only receives from sensors and forwards to the central server.
* Each sensor requires a unique <code>id_sensor</code> (< 65536)
* Time synchronization (NTP) is critical for correlating calls across sensors
* Maximum allowed time difference between client and server: 2 seconds by default (configurable via <code>client_server_connect_maximum_time_diff_s</code>)

For a complete reference of all client-server parameters, see [[Sniffer_configuration#Distributed_Operation:_Client/Server_&_Mirroring|Sniffer Configuration: Distributed Operation]].
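The 2-second handshake limit is a simple absolute difference between the two clocks. A sketch of the comparison; how you read the peer's epoch timestamp (SSH, monitoring agent, etc.) is up to you, so the peer value below is a placeholder:

```shell
#!/usr/bin/env bash
# Compare two epoch timestamps against the default 2-second handshake limit.
within_limit() {
  local a=$1 b=$2 max=${3:-2}
  local diff=$(( a - b ))
  [ "${diff#-}" -le "$max" ]   # strip a leading '-' to get the absolute value
}

# Example: local clock vs a peer clock read out-of-band (placeholder value)
local_ts=$(date +%s)
peer_ts=$local_ts
if within_limit "$local_ts" "$peer_ts"; then
  echo "clock offset within limit"
fi
```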
=== Troubleshooting: Time Synchronization Errors ===

If sensors repeatedly log errors such as <code>send packetbuffer block error: failed response from server - different time between server and client</code> or <code>client_server_connect_maximum_time_diff</code>, the clock offset between the client and server has exceeded the permitted limit.

{| class="wikitable" style="background:#fff3cd; border:1px solid #ffc107;"
|-
! colspan="2" style="background:#ffc107;" | Time Synchronization Error
|-
| style="vertical-align: top;" | '''Error Message:'''
| <code>send packetbuffer block error: failed response from server - different time between server and client</code>
|-
| style="vertical-align: top;" | '''Meaning:'''
| The actual time difference between client and server exceeds the configured threshold. Even if both systems are configured for UTC and use the same NTP servers, clock drift, NTP polling intervals, network latency to NTP servers, or firewall restrictions on UDP port 123 can cause the offset to exceed the limit.
|}
==== Solution 1: Increase Time Tolerance (Immediate Workaround) ====

If you cannot immediately resolve the NTP synchronization precision issue, increase the allowed time difference in the configuration.

Add or modify the following parameters in <code>/etc/voipmonitor.conf</code> on '''both the client and the server''':

<syntaxhighlight lang="ini">
# Increase time difference tolerance (in seconds)
# For persistent clock offset issues, consider values like 30 or higher
client_server_connect_maximum_time_diff_s = 30
receive_packetbuffer_maximum_time_diff_s = 30
</syntaxhighlight>

After changing the configuration, restart the VoIPmonitor service on both nodes:

<syntaxhighlight lang="bash">
systemctl restart voipmonitor
</syntaxhighlight>

{| class="wikitable" style="background:#e8f4f8; border:1px solid #4A90E2;"
|-
! colspan="2" style="background:#4A90E2; color: white;" | Parameter Function
|-
| style="vertical-align: top;" | <code>client_server_connect_maximum_time_diff_s</code>
| (Default: 2) Controls the maximum time difference allowed during the initial client-server connection handshake.
|-
| style="vertical-align: top;" | <code>receive_packetbuffer_maximum_time_diff_s</code>
| (Default: 30) Controls the maximum time difference allowed when clients send packet buffer data (CDRs or raw packets) to the server.
|}
==== Solution 2: Verify and Fix NTP Synchronization (Root Cause Fix) ====

The proper solution is to ensure NTP is synchronized with minimal clock drift.

'''Check NTP status on both client and server:'''

<syntaxhighlight lang="bash">
# Check system time synchronization status
timedatectl status

# Ensure "System clock synchronized: yes" is shown
</syntaxhighlight>

'''Check clock offset (Chrony):'''

<syntaxhighlight lang="bash">
chronyc tracking

# Look at the "Last offset" and "RMS offset" values
# If values fluctuate near or above 2000ms (2 seconds), connections will fail
</syntaxhighlight>

'''Check clock offset (NTP):'''

<syntaxhighlight lang="bash">
ntpq -p

# Look at the delay, offset, and jitter columns
# High jitter or offset will trigger sporadic errors
</syntaxhighlight>

Common NTP issues:
* Firewall silently dropping UDP port 123 (NTP)
* High network latency to NTP servers
* NTP service not running or misconfigured
* Different NTP server pools with divergent time
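The chrony check above can be automated by parsing the "Last offset" line of <code>chronyc tracking</code> output. A sketch; the sample output below is illustrative, and the 2-second threshold mirrors the default handshake limit:

```shell
#!/usr/bin/env bash
# Extract the "Last offset" value from `chronyc tracking` output and
# succeed only while its absolute value is below the 2-second limit.
offset_ok() {
  local off
  off=$(printf '%s\n' "$1" | awk -F': *' '/^Last offset/ {print $2}' | awk '{print $1}')
  [ -n "$off" ] || return 1   # no "Last offset" line found
  awk -v o="$off" 'BEGIN { exit (o < 0 ? -o : o) < 2 ? 0 : 1 }'
}

# Illustrative sample of chronyc tracking output:
sample='Reference ID    : A29FC87B (time.example.org)
Last offset     : +0.000268422 seconds
RMS offset      : 0.000414 seconds'
if offset_ok "$sample"; then
  echo "offset OK"
fi
```

In a cron job you would feed it the live output: <code>offset_ok "$(chronyc tracking)"</code>.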
'''Verify firewall allows NTP:'''

<syntaxhighlight lang="bash">
# Check firewalld (CentOS/RHEL)
firewall-cmd --list-ports

# Check iptables/ufw (Debian/Ubuntu)
iptables -L -n -v | grep 123
# or
ufw status verbose | grep 123
</syntaxhighlight>

If needed, allow NTP traffic:

<syntaxhighlight lang="bash">
# firewalld
firewall-cmd --permanent --add-service=ntp
firewall-cmd --reload

# ufw
ufw allow ntp
</syntaxhighlight>
== AI Summary for RAG ==

'''Summary:''' VoIPmonitor v20+ uses Client-Server architecture for distributed deployments with encrypted TCP connections (default port 60024 with zstd compression, configurable via server_bind_port and server_destination_port). Two modes: Local Processing (<code>packetbuffer_sender=no</code>) analyzes locally and sends CDRs, Packet Mirroring (<code>packetbuffer_sender=yes</code>) forwards raw packets. NETWORK BANDWIDTH REQUIREMENTS: For Local Processing (PCAPs stored on sensors), network traffic consists mainly of CDR SQL data and a 1Gb connection between sensors and central server is generally sufficient. For Packet Mirroring, bandwidth consumption is roughly equivalent to VoIP traffic volume (use <code>server_type_compress=zstd</code> to reduce). Dashboard widgets for SIP/RTP/REGISTER counts: with Packet Mirroring, statistics appear only on central server (sender has empty widgets); with Local Processing, statistics appear on both sensor and central server. To enable local statistics on a forwarding sensor, set <code>packetbuffer_sender=no</code> (increases CPU/RAM usage). Supports failover with multiple server IPs. CDRs stored centrally; PCAPs on sensors (Local Processing) or centrally (Packet Mirroring). In Packet Mirroring mode, the <code>save*</code> options (savertp, savesip, saveaudio) configured on the CENTRAL SERVER control storage for packets received from sensors. When multiple sensors forward packets with the same Call-ID, VoIPmonitor automatically merges them into a single CDR. To keep records separate per sensor with same Call-ID, run multiple receiver instances on different ports with separate database tables. CRITICAL: A single sniffer instance MUST process both SIP signaling and RTP media for the same call. Splitting SIP and RTP across different sniffers creates incomplete call records that cannot be reconstructed. INTERMEDIATE SERVER: An intermediate server can receive traffic from multiple remote sensors and forward it to a central server.
The intermediate server has both <code>server_bind</code> (to receive from sensors) and <code>server_destination</code> (to send to the central server). Its behavior is controlled by <code>packetbuffer_sender</code>: with <code>packetbuffer_sender=no</code> it processes traffic locally and sends CDRs to the central server; with <code>packetbuffer_sender=yes</code> it forwards raw packets to the central server. In both cases, the original remote sensors must be added manually to the GUI Settings for visibility. This topology is supported because the intermediate server does NOT perform local packet capture; it acts purely as a relay.

For custom port configuration, server_bind_port on the central server MUST match server_destination_port on the remote sensors. Common reasons for custom ports: firewall restrictions, multiple instances on the same server, compliance requirements, and avoiding port conflicts.

SENSOR HEALTH CHECK VIA MANAGEMENT API: Each sensor exposes a TCP management API (default port 5029) that can be queried via netcat: `echo 'sniffer_stat' | nc <sensor_ip> <sensor_port>`. This returns JSON with sensor status, including running state, version, uptime, active calls, total calls, packets per second, and packet drops. IMPORTANT: there is NO single command to check all sensors simultaneously; each must be queried individually, though a script looping over the sensors can provide a consolidated result with exit codes. In newer VoIPmonitor versions, management API communication may be encrypted, requiring either disabling encryption or using VoIPmonitor-specific CLI tools. The firewall must allow TCP port 5029 from the monitoring host to the sensors.

DEBUGGING SIP TRAFFIC IN DISTRIBUTED ARCHITECTURE: Standard packet-capture tools such as sngrep cannot see SIP packets on the central server in Client-Server mode because the traffic is encapsulated inside the encrypted TCP tunnel (port 60024 for Packet Mirroring, or SQL/3306 for Local Processing).
SOLUTION 1: Use the Live Sniffer feature in the VoIPmonitor GUI: navigate to Live Sniffer, select the remote sensor from the dropdown, and click Start to view live SIP packets from that sensor. Live Sniffer streams SIP packets from the sensor to the GUI in real time via the Manager API (TCP 5029), with call coloring by Call-ID, packet detail inspection, call flow visualization, and multi-user support. SOLUTION 2: Run sngrep directly on the remote sensor machine, where traffic first arrives from the network interface before encapsulation: `sngrep -i eth0` (replace eth0 with the actual interface connected to the SPAN/mirror port). VERIFICATION: Check connectivity on the central server with `tcpdump -i any port 60024` (Packet Mirroring mode) or `tcpdump -i any port 3306` (Local Processing mode), or check sensor statistics: `echo 'sniffer_stat' | nc <sensor_ip> 5029`. For detailed Live Sniffer setup and troubleshooting, see the [[Live_sniffer]] documentation.

LEGACY MIRROR MODE: Older mirror_destination/mirror_bind options exist but are less robust (no encryption, UDP); Client-Server mode is recommended. A typical symptom of mirror-mode issues is all CDRs being incorrectly associated with a single sensor after system updates. Migration involves: stop the probes, remove old sensor records from GUI Settings, comment out the mirror parameters (mirror_bind_ip, mirror_bind_port, mirror_destination_ip, mirror_destination_port), add server_bind/server_bind_port on the central server and server_destination/server_destination_port on the probes, set a unique id_sensor per probe, and choose the packetbuffer_sender mode. Common migration issues: probes cannot connect (verify server_password and that the firewall allows TCP on server_bind_port), all CDRs show the same sensor (old mirror config still active or id_sensor not set), PCAPs not accessible in Local Processing mode (the central server must reach the probes on TCP/5029).
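The per-sensor `sniffer_stat` health check described above can be wrapped in a small script for a consolidated result. A minimal sketch, assuming `nc` is installed and the sensor IPs (placeholders here) expose the management API on TCP 5029:

```shell
#!/bin/sh
# check_sensor: succeed only if the sensor's management API answers 'sniffer_stat'
check_sensor() {    # $1 = sensor IP/host
    echo 'sniffer_stat' | nc -w 3 "$1" 5029 2>/dev/null | grep -q '.'
}

# check_all: query each sensor; exit status 0 = all OK, 1 = at least one failed
check_all() {
    failed=0
    for ip in "$@"; do
        if check_sensor "$ip"; then
            echo "OK   $ip"
        else
            echo "FAIL $ip"
            failed=1
        fi
    done
    return $failed
}

# example with placeholder IPs:
# check_all 203.0.113.21 203.0.113.22 || echo "one or more sensors down"
```

The consolidated exit code makes the script usable from Nagios-style monitoring; note it only proves the API answers, not that capture itself is healthy.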
TROUBLESHOOTING: In distributed/probe setups with Packet Mirroring, if a probe is not detecting all calls on the expected ports, the <code>sipport</code> configuration MUST match on BOTH the probe AND the central analysis host. If the network uses multiple SIP ports (e.g., 5060, 5061, 5080), both systems must list all of them in their <code>sipport</code> directive. Common symptom: the probe sees traffic via <code>tcpdump</code> but the central server records incomplete CDRs.

RTP STREAMS END PREMATURELY: If RTP streams end prematurely in call recordings when using a remote sniffer with a central GUI, the cause is often incorrect placement of the <code>natalias</code> option. <code>natalias</code> must be configured ONLY on the central server that receives and processes packets, NOT on the remote sniffer that captures and forwards them. When packets are forwarded in Packet Mirroring mode, configuring <code>natalias</code> on the remote sniffer causes IP address substitution at capture time, which breaks the central server's RTP correlation logic. Solution: remove <code>natalias</code> from the remote sniffer's voipmonitor.conf, add it to the central server's voipmonitor.conf, then restart both services.

WEB GUI ACCESSIBLE BUT SENSORS CANNOT CONNECT: If the web portal is accessible but sensors cannot connect to the primary server, verify that the MySQL/MariaDB database service on the central server is running and responsive. The central VoIPmonitor service requires a functioning database connection to accept sensor data, even though the web interface (PHP) may remain accessible. Check the database service status (<code>systemctl status mariadb</code> or <code>systemctl status mysqld</code>) and inspect the MySQL error logs (<code>/var/log/mariadb/mariadb.log</code> or <code>/var/log/mysql/error.log</code>) for critical errors. Restart the database service if needed.
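The two placement rules above can be summarized as a configuration sketch (the port list is an example; natalias arguments are omitted because they are site-specific):

```ini
# --- remote probe: voipmonitor.conf ---
sipport = 5060
sipport = 5061
sipport = 5080
# no natalias here: it must not be set on a forwarding sniffer

# --- central server: voipmonitor.conf ---
sipport = 5060
sipport = 5061
sipport = 5080
# natalias (if needed for NAT handling) is configured here only
```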
VERSION COMPATIBILITY: There is no strict version locking between the GUI and sensor components; the primary compatibility constraint is the database schema managed by the GUI. Best practice: keep the GUI version equal to or higher than the sensor version (GUI >= Sniffer). When upgrading, upgrade the GUI first to apply new database schemas, then upgrade the sensors. For client-server mode, it is strongly recommended that clients and receivers run the same version for full compatibility and access to the latest features. If mixed versions are needed temporarily, sensor version 2024.11.0+ supports the <code>server_cp_store_simple_connect_response = yes</code> option on the central receiver/server to enable a simpler protocol compatible with older sensor versions. This is a temporary compatibility option for migration periods; disable it (<code>server_cp_store_simple_connect_response = no</code>) once all components run matching versions. Check versions with <code>/usr/local/sbin/voipmonitor --version</code> or via the management API: <code>echo 'sniffer_version' | nc 127.0.0.1 5029</code>. In the GUI, navigate to Settings -> Sensors to see sensor versions.

HIGH MEMORY UTILIZATION ON CENTRAL SERVER: If the VoIPmonitor service exhibits high and continuously increasing memory utilization on the central server even when call volume is normal, check whether <code>server_bind_port</code> (default 60024) is included in the <code>sipport</code> directive. The sensor communication port MUST be excluded from <code>sipport</code> on the central server to prevent sensor-to-server traffic from being captured as SIP packets. For example, with the default <code>server_bind_port = 60024</code>, configure <code>sipport = 1-60023</code> and <code>sipport = 60025-65535</code>. This fix applies to both Local Processing and Packet Mirroring modes.
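The port exclusion described above, as a config sketch for the central server (default tunnel port assumed):

```ini
# central server: keep the sensor tunnel port out of SIP detection
server_bind_port = 60024
sipport = 1-60023
sipport = 60025-65535
```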
TIME SYNCHRONIZATION ERRORS: In client-server mode, sensors may log errors like "send packetbuffer block error: failed response from server - different time between server and client" or "client_server_connect_maximum_time_diff" when the clock offset exceeds the permitted limit. Even if both systems use UTC and the same NTP servers, clock drift, NTP polling intervals, network latency to NTP servers, or firewall restrictions on UDP port 123 can push the offset over the threshold. IMMEDIATE WORKAROUND: Increase the time tolerance by adding `client_server_connect_maximum_time_diff_s = 30` and `receive_packetbuffer_maximum_time_diff_s = 30` to voipmonitor.conf on BOTH client and server, then restart the voipmonitor service. `client_server_connect_maximum_time_diff_s` (default: 2) controls the maximum time difference during the initial client-server connection handshake; `receive_packetbuffer_maximum_time_diff_s` (default: 30) controls the maximum time difference when clients send packet buffer data (CDRs or raw packets) to the server. ROOT CAUSE FIX: Ensure NTP is synchronized with minimal clock drift. Check system time status with `timedatectl status` (ensure "System clock synchronized: yes"), check the clock offset with `chronyc tracking` (Last offset and RMS offset values near or above 2000ms cause failures), or use `ntpq -p` (inspect the delay, offset, and jitter columns). Common NTP issues: a firewall silently dropping UDP port 123, high network latency to NTP servers, the NTP service not running or misconfigured, or different NTP server pools with divergent time. Allow NTP traffic through the firewall: `firewall-cmd --permanent --add-service=ntp` (firewalld) or `ufw allow ntp` (Ubuntu/Debian).

NETWORK PERFORMANCE TESTING WITH IPERF3: Before adding hardware or tuning configuration, test network throughput between the probe and the central server to identify bottlenecks.
When experiencing "packetbuffer: MEMORY IS FULL" errors, high system load (e.g., 70-80% on an 8-core system), slow CDR display, or packet loss during peak traffic, use iperf3 to test. INSTALLATION: Install iperf3 on both systems with `apt-get install iperf3` (Debian/Ubuntu) or `yum install iperf3` (RHEL/CentOS). TESTING: On the central server, run `iperf3 -s` (listening server). On the probe, run `iperf3 -c <central_server_ip>` to test throughput, `iperf3 -c <central_server_ip> -R` to test the reverse direction, or `iperf3 -c <central_server_ip> -t 60 -P 4` for a 60-second test with 4 parallel streams. INTERPRETATION: If iperf3 shows the expected bandwidth (e.g., >900 Mbps on a 1Gb link), the network is NOT the bottleneck; check local resources (CPU load, RAM, disk I/O). If iperf3 shows significantly lower bandwidth (e.g., 200-500 Mbps on a 1Gb link), the network IS the bottleneck. If the result is very low (<50 Mbps on a 1Gb link), there is a severe network issue (duplex mismatch, faulty cabling, switch configuration, ISP limitations).

DECISION MATRIX: When the network is NOT the bottleneck (high throughput), check CPU usage with htop/uptime. If system load is consistently high (e.g., 70-80% on an 8-core system) and packets cannot be processed fast enough, INCREASE CPU CORES on the probe machine by upgrading hardware or adding vCPUs in virtualized environments. Tuning options (rtpthreads, ringbuffer) are a temporary workaround; adding CPU cores addresses the root cause. See the Hardware page for CPU requirements by concurrent call volume. When the network IS the bottleneck (low throughput), inspect the network path (check switches, routers, and VLANs for congestion or misconfiguration), verify link speed and duplex with `ethtool eth0` (look for "Speed: 1000Mb/s, Duplex: Full"), check for packet loss with `ping -c 100 <central_server_ip>` (0% loss is ideal), check latency with `traceroute` (consistent sub-millisecond hops), and check interface errors with `ethtool -S eth0 | grep -i error` (should be zero or very low).
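The interpretation rules above can be captured in a tiny helper. A sketch assuming a 1Gb link (extracting the Mbps figure from `iperf3 -J` output is left to the reader):

```shell
# classify_throughput: map a measured Mbps value (integer) on a 1Gb link
# to the decision used above: local-resource problem vs network problem
classify_throughput() {    # $1 = measured throughput in Mbps
    if [ "$1" -ge 900 ]; then
        echo "network OK: check local resources (CPU load, RAM, disk I/O)"
    elif [ "$1" -lt 50 ]; then
        echo "severe network issue: check duplex, cabling, switch config"
    else
        echo "network bottleneck: inspect switches, routers, VLANs, link speed"
    fi
}

# examples:
# classify_throughput 940   -> network OK ...
# classify_throughput 300   -> network bottleneck ...
```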
Solutions: upgrade the network (1GbE to 10GbE, dedicated links, better routing), or switch to Local Processing mode (`packetbuffer_sender=no`) to reduce network traffic.

NETWORK BANDWIDTH AND MODES: Local Processing mode (`packetbuffer_sender=no`) sends only CDRs (SQL queries) to the central server, so network traffic is minimal (typically <1 Mbps). Packet Mirroring mode (`packetbuffer_sender=yes`) forwards raw packets, so bandwidth roughly equals the monitored VoIP traffic volume (zstd compression reduces this). With limited network bandwidth, Local Processing mode is generally preferred.

SENSOR REGISTRATION ERRORS: If a new sensor fails to connect to the main server with "failed response from server - bad password" even though server_password is correctly configured in both files, the issue may be a stale sensor record in the GUI database. The GUI database retains a record of a previously deployed sensor, and this stale record can prevent a new sensor with the same id_sensor from authenticating, even when the password matches exactly. The error message is misleading: the problem is not a password mismatch but a database conflict. Common scenarios: replacing a failed sensor with new hardware, reinstalling or reconfiguring a sensor, changing sensor IDs back and forth, or restoring from backups where sensor records are out of sync. RESOLUTION: Delete the problematic sensor record from GUI -> Settings -> Sensors, restart the voipmonitor service on the sensor/probe machine only (`systemctl restart voipmonitor`), and verify network connectivity from the sensor to the server with `telnet <server_ip> 60024` (or `nc -zv <server_ip> 60024`). If telnet reports "Connected to <server_ip>", the server is reachable; if "Connection refused" or a timeout, check firewall rules and ensure the server service is listening (`ss -tulpn | grep 60024`). After the service restart, the sensor automatically registers with the central server and appears as a new entry in GUI -> Settings -> Sensors.
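The connectivity test from the resolution steps above can be made reusable; a sketch assuming an `nc` build that supports `-z` (OpenBSD netcat and recent ncat do):

```shell
# check_server_port: verify the central server accepts TCP on the tunnel port
check_server_port() {    # $1 = server IP/host, $2 = port (default 60024)
    port="${2:-60024}"
    if nc -z -w 3 "$1" "$port" 2>/dev/null; then
        echo "reachable: $1:$port"
    else
        echo "NOT reachable: $1:$port (check firewall; on the server run: ss -tulpn | grep $port)"
    fi
}

# example with a placeholder server IP:
# check_server_port 203.0.113.10
```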
Sensor registration process: sensors automatically register with the central server at startup by sending a handshake packet with their id_sensor and authentication credentials. The central server checks whether a sensor with this id_sensor exists in the database and validates the credentials, or automatically creates a new database record if no such sensor exists. When a stale sensor record exists, the server validates the new sensor against the existing record; if the records are out of sync (different passwords, different manager IPs, corrupted state), authentication fails. Deleting the stale record allows registration to start fresh. PREVENTION: When decommissioning a sensor, delete its record from GUI -> Settings -> Sensors. When replacing sensor hardware, delete the old sensor record before bringing up the new one. Verify that sensor IDs are unique across the deployment (duplicate id_sensor values cause conflicts). Use the GUI backup feature to maintain a clean baseline and restore sensors selectively as needed.
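For reference, a minimal client-server configuration sketch tying together the directives mentioned in this summary (the IP address, password, and sensor ID are placeholders):

```ini
# --- central server: voipmonitor.conf ---
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = ChangeMeSecret

# --- remote sensor: voipmonitor.conf ---
# id_sensor must be unique per sensor; server IP and password are placeholders
id_sensor = 2
server_destination = 203.0.113.10
server_destination_port = 60024
server_password = ChangeMeSecret
# no = local processing (CDRs only), yes = packet mirroring (raw packets)
packetbuffer_sender = no
```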
'''Keywords:''' distributed architecture, client-server, network bandwidth, sensor registration error, failed response from server bad password, stale sensor record, sensor authentication failure, GUI Settings Sensors, delete sensor, telnet connectivity, automatic sensor registration, sensor database, id_sensor conflict, replacing sensor hardware, sensor replacement, reinstall sensor, sensor not connecting, sensor connection refused, server_password mismatch, database conflict, throughput, network requirements, 1Gb connection, bandwidth requirements, server_destination, server_bind, server_bind_port, server_destination_port, custom port, packetbuffer_sender, local processing, packet mirroring, remote sensors, failover, encrypted channel, zstd compression, dashboard widgets, statistics, empty dashboard, SIP RTP correlation, split sensors, single sniffer requirement, availability zone, savertp, savesip, saveaudio, centralized storage, packet storage control, call-id merging, multiple sensors same callid, separate records per sensor, receiver instances, mysqltableprefix, firewall, port configuration, connection troubleshooting, probe, central host, central server, sensor, sipport, missing calls, probe not detecting calls, tcpdump, configuration mismatch, mirror mode, migration, mirror_destination, mirror_bind, mirror_bind_ip, mirror_bind_port, mirror_destination_ip, mirror_destination_port, migrate from mirror mode, all CDRs same sensor, system update, upgrade, intermediate server, relay server, multi-sensor aggregation, hub and spoke, chained topology, sensor forwarding, mysql, mariadb, database service, web gui accessible, error logs, sensor health check, management API, sniffer_stat, TCP port 5029, manager_bind, nc netcat, sensor status, sensor monitoring, health status, exit code, consolidated result, check all sensors, encrypted API, encryption disabled, natalias, NAT alias configuration, RTP streams end prematurely, RTP correlation, IP address substitution,
NAT traversal, remote sniffer configuration, central server configuration, natalias placement, incomplete recordings, call recordings cut off, version compatibility, GUI version, sensor version, database schema, GUI >= Sniffer, upgrade GUI first, client-server version matching, mixed versions, server_cp_store_simple_connect_response, protocol compatibility, sniffer 2024.11.0, check version, sniffer_version, time synchronization, NTP, clock drift, time difference, different time, send packetbuffer block error, failed response from server, client_server_connect_maximum_time_diff_s, receive_packetbuffer_maximum_time_diff_s, timedatectl, chronyc tracking, ntpq -p, clock offset, UDP port 123, firewall NTP, DEBUGGING SIP, sngrep, live sniffer, cannot see SIP packets, encrypted TCP tunnel, packet encapsulation, sngrep central server, debug SIP distributed architecture, live sniffer remote sensor, packet capture tools, manager API live sniffer, call coloring, packet details, call flow visualization, encrypted tunnel traffic, zstd compression, packetbuffer_sender traffic, SQL traffic local processing, high memory utilization, high and increasing memory usage, memory leak, server_bind_port exclusion, iperf3, network performance testing, network throughput, iperf, tcp throughput, packet loss, slow CDR display, high system load, CPU cores upgrade, hardware upgrade, network bottleneck, local resource bottleneck, MEMORY IS FULL errors, network bandwidth test, 90% CPU load, probe hardware, bandwidth limitations, ethtool, link speed, duplex configuration, traceroute, network latency, interface errors, network congestion, switch configuration, 10GbE upgrade
'''Key Questions:'''
* How do I connect multiple VoIPmonitor sensors to a central server?
* What is the difference between Local Processing and Packet Mirroring mode?
* Why is VoIPmonitor using high memory on the central server?
* Why is a remote probe not detecting all calls on expected ports?
* How do I check VoIPmonitor sensor health status?
* Why does a new sensor fail with "bad password" error?
* How do I migrate from mirror mode to client-server mode?
* What causes time synchronization errors between client and server?
* Where should natalias be configured in distributed deployments?
* Can VoIPmonitor act as an intermediate server?
* What is an alternative to AWS VPC Traffic Mirroring?