Sniffer troubleshooting


This guide provides a systematic, step-by-step process to diagnose why the VoIPmonitor sensor might not be capturing any calls. Follow these steps in order to quickly identify and resolve the most common issues.

Troubleshooting Flowchart

<mermaid> flowchart TD

   A[No Calls Being Captured] --> B{Step 1: Service Running?}
   B -->|No| B1[systemctl restart voipmonitor]
   B -->|Yes| C{"Step 2: Traffic on Interface?<br/>tshark -i eth0 -Y 'sip'"}
   C -->|No packets| D[Step 3: Network Issue]
   D --> D1{Interface UP?}
   D1 -->|No| D2[ip link set dev eth0 up]
   D1 -->|Yes| D3{SPAN/RSPAN?}
   D3 -->|Yes| D4[Enable promisc mode]
   D3 -->|ERSPAN/GRE/TZSP| D5[Check tunnel config]
   C -->|Packets visible| E[Step 4: VoIPmonitor Config]
   E --> E1{interface correct?}
   E1 -->|No| E2[Fix interface in voipmonitor.conf]
   E1 -->|Yes| E3{sipport correct?}
   E3 -->|No| E4[Add port: sipport = 5060,5080]
   E3 -->|Yes| E5{BPF filter blocking?}
   E5 -->|Maybe| E6[Comment out filter directive]
   E5 -->|No| F[Step 5: GUI Capture Rules]
   F --> F1{Rules with Skip: ON?}
   F1 -->|Yes| F2[Remove/modify rules + reload sniffer]
   F1 -->|No| G[Step 6: Check Logs]
   G --> H{OOM Events?}
   H -->|Yes| H1[Step 7: Add RAM / tune MySQL]
   H -->|No| I{Large SIP packets?}
   I -->|Yes| I1{"External SIP source?<br/>Kamailio/HAProxy mirror"}
   I1 -->|No| I2[Increase snaplen in voipmonitor.conf]
   I1 -->|Yes| I3["Fix external source: Kamailio siptrace or HAProxy tee"]
   I2 --> I4[If snaplen change fails, recheck with tcpdump -s0]
   I4 --> I1
   I -->|No| J[Contact Support]

</mermaid>

Step 1: Is the VoIPmonitor Service Running Correctly?

First, confirm that the sensor process is active and loaded the correct configuration file.

1. Check the service status (for modern systemd systems)
systemctl status voipmonitor

Look for a line that says Active: active (running). If it is inactive or failed, try restarting it with systemctl restart voipmonitor and check the status again.

2. Verify the running process
ps aux | grep voipmonitor

This command will show the running process and the exact command line arguments it was started with. Critically, ensure it is using the correct configuration file, for example: --config-file /etc/voipmonitor.conf. If it is not, there may be an issue with your startup script.

Troubleshooting: Missing Package or Library Dependencies

If the sensor service fails to start or crashes immediately with an error about a "missing package" or "missing library," it indicates that a required system dependency is not installed on the server. This is most common on newly installed sensors or fresh operating system installations.

1. Check the system logs for the specific error message
# For Debian/Ubuntu
tail -f /var/log/syslog | grep voipmonitor

# For CentOS/RHEL/AlmaLinux or systemd systems
journalctl -u voipmonitor -f
2. Common missing packages for sensors

Most sensor missing package issues are resolved by installing the rrdtool package. This is required for RRD (Round-Robin Database) graphing and statistics functionality.

# For Debian/Ubuntu
apt-get update && apt-get install rrdtool

# For CentOS/RHEL/AlmaLinux
yum install rrdtool
# OR
dnf install rrdtool
3. Other frequently missing dependencies

If the error references a specific shared library or binary, install it using your package manager. Common examples:

  • libpcap or libpcap-dev: Packet capture library
  • libssl or libssl-dev: SSL/TLS support
  • zlib or zlib1g-dev: Compression library
4. Verify shared library dependencies

If the error mentions a specific shared library (e.g., error while loading shared libraries: libxxx.so), check which libraries the binary is trying to load:

ldd /usr/local/sbin/voipmonitor | grep pcap

If ldd reports "not found," install the missing library using your package manager.

5. After installing the missing package, restart the sensor service
systemctl restart voipmonitor
systemctl status voipmonitor

Verify the service starts successfully and is now Active: active (running).

Step 2: Is Network Traffic Reaching the Server?

If the service is running, the next step is to verify if the VoIP packets (SIP/RTP) are actually arriving at the server's network interface. The best tool for this is tshark (the command-line version of Wireshark).

1. Install tshark
# For Debian/Ubuntu
apt-get update && apt-get install tshark

# For CentOS/RHEL/AlmaLinux
yum install wireshark   # on RHEL 8+ / AlmaLinux, tshark is packaged as wireshark-cli
2. Listen for SIP traffic on the correct interface

Replace eth0 with the interface name you have configured in voipmonitor.conf.

tshark -i eth0 -Y "sip || rtp" -n
  • If you see a continuous stream of SIP and RTP packets, it means traffic is reaching the server, and the problem is likely in VoIPmonitor's configuration (see Step 4).
  • If you see NO packets, the problem lies with your network configuration. Proceed to Step 3.

Step 3: Troubleshoot Network and Interface Configuration

If tshark shows no traffic, it means the packets are not being delivered to the operating system correctly.

1. Check if the interface is UP

Ensure the network interface is active.

ip link show eth0

The output should contain the word UP. If it doesn't, bring it up with:

ip link set dev eth0 up
2. Check for Promiscuous Mode (for SPAN/RSPAN Mirrored Traffic)

Important: Promiscuous mode requirements depend on your traffic mirroring method:

  • SPAN/RSPAN (Layer 2 mirroring): The network interface must be in promiscuous mode. Mirrored packets retain their original MAC addresses, so the interface would normally ignore them. Promiscuous mode forces the interface to accept all packets regardless of destination MAC.
  • ERSPAN/GRE/TZSP/VXLAN (Layer 3 tunnels): Promiscuous mode is NOT required. These tunneling protocols encapsulate the mirrored traffic inside IP packets that are addressed directly to the sensor's IP address. The operating system receives these packets normally, and VoIPmonitor automatically decapsulates them to extract the inner SIP/RTP traffic.

For SPAN/RSPAN deployments, check the current promiscuous mode status:

ip link show eth0

Look for the PROMISC flag.

Enable promiscuous mode manually if needed:

ip link set eth0 promisc on

If this solves the problem, you should make the change permanent. The install-script.sh for the sensor usually attempts to do this, but it can fail.
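If the install script did not persist it, one approach on systemd distributions is a small oneshot unit (a minimal sketch, assuming the interface is eth0 and an iproute2 binary at /sbin/ip):

# /etc/systemd/system/promisc-eth0.service
[Unit]
Description=Enable promiscuous mode on eth0 for SPAN capture
After=network.target

[Service]
Type=oneshot
ExecStart=/sbin/ip link set eth0 promisc on

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now promisc-eth0.service and confirm the PROMISC flag with ip link show eth0.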

3A. Troubleshooting: Missing Packets for Specific IPs During High-Traffic Periods

If calls are missing only for certain IP addresses or specific call flows (particularly during high-traffic periods), the issue is typically at the network infrastructure level (SPAN configuration) rather than sensor resource limits. Use this systematic approach:

Step 1: Use tcpdump to Verify Packet Arrival

Before tuning any sensor configuration, first verify if the missing packets are actually reaching the sensor's network interface. Use tcpdump for this verification:

# Listen for SIP packets from a specific IP during the next high-traffic window
# Replace eth0 with your interface and 10.1.2.3 with the problematic IP
tcpdump -i eth0 -nn "host 10.1.2.3 and port 5060" -v

# Or capture to a file for later analysis
tcpdump -i eth0 -nn "host 10.1.2.3 and port 5060" -w /tmp/trace_10.1.2.3.pcap

Interpret the results:

  • If you see SIP packets arriving: The traffic reaches the sensor. The issue is likely a sensor resource bottleneck (CPU, memory, or configuration limits). Proceed to Step 4: Check Sensor Statistics.
  • If you see NO packets or only intermittent packets: The traffic is not reaching the sensor. This indicates a network infrastructure issue. Proceed to Step 2: Check SPAN Configuration.

Step 2: Check SPAN Configuration for Bidirectional Capture

If packets are missing at the interface level, verify your network switch's SPAN (port mirroring) configuration. During high-traffic periods, switches may have insufficient SPAN buffer capacity, causing packets to be dropped in the mirroring process itself.

Key verification points:

  • Verify Source Ports: Confirm that both source IP addresses (or the switch ports they connect to) are included in the SPAN source list. Missing one direction of the call flow will result in incomplete CDRs.
  • Check for Bidirectional Mirroring: Your SPAN configuration must capture BOTH inbound and outbound traffic. On most Cisco switches, this requires specifying:
  monitor session 1 source interface GigabitEthernet1/1 both
 The both keyword can be replaced with:
    • rx for incoming traffic only
    • tx for outgoing traffic only
    • both for bidirectional capture (recommended)
  • Verify Destination Port: Confirm the SPAN destination points to the switch port where the VoIPmonitor sensor is connected.
  • Check SPAN Buffer Saturation (High-Traffic Issues): Some switches have limited SPAN buffer capacity. When monitoring multiple high-traffic ports simultaneously, the SPAN buffer may overflow during peak usage, causing randomized packet drops. Symptoms:
    • Drops occur only during busy hours
    • Missing packets are inconsistent across different calls
    • Sensor CPU usage and t0CPU metrics appear normal (no bottleneck at sensor)
  Solutions:
    • Reduce the number of monitored source ports in the SPAN session
    • Use multiple SPAN sessions if your switch supports it
    • Consider upgrading to a switch with higher SPAN buffer capacity
  • Verify VLAN Trunking: If the monitored traffic spans different VLANs, ensure the SPAN destination port is configured as a trunk to carry all necessary VLAN tags. Without trunk mode, packets from non-native VLANs will be dropped or stripped of their tags.

For detailed instructions on configuring SPAN/ERSPAN/GRE for different network environments, see Sniffing_modes.
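For reference, a complete minimal SPAN session on a Cisco IOS switch might look like the following (a sketch only; the interface names are placeholders for your PBX-facing and sensor-facing ports):

monitor session 1 source interface GigabitEthernet1/1 both
monitor session 1 destination interface GigabitEthernet1/24
! Verify the session status and direction
show monitor session 1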

Step 3: Check for Sensor Resource Bottlenecks

If tcpdump confirms that packets are arriving at the interface consistently, but VoIPmonitor is still missing them, the issue may be sensor resource limitations.

  • Check Packet Drops: In the GUI, navigate to Settings → Sensors and look at the "# packet drops" counter. If this counter is non-zero or increasing during high traffic:
    • Increase the ringbuffer size in voipmonitor.conf (default 50 MB, max 2000 MB); a one-line example follows this list
    • Check the t0CPU metric in system logs - if consistently above 90%, you may need to upgrade CPU or optimize NIC drivers
  • Monitor Memory Usage: Check for OOM (Out of Memory) killer events:
  grep -i "out of memory\|killed process" /var/log/syslog | tail -20
  • SIP Packet Limits: If only long or chatty calls are affected, check the max_sip_packets_in_call and max_invite_packets_in_call limits in voipmonitor.conf.
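If the drop counter implicates the kernel packet buffer, the ringbuffer increase mentioned above is a one-line edit (a sketch; 500 MB is an arbitrary but common value within the documented 2000 MB maximum):

# /etc/voipmonitor.conf
ringbuffer = 500

Restart the sniffer afterwards with systemctl restart voipmonitor.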
3. Verify Your SPAN/Mirror/TAP Configuration

This is the most common cause of no traffic. Double-check your network switch or hardware tap configuration to ensure:

  • The correct source ports (where your PBX/SBC is connected) are being monitored.
  • The correct destination port (where your VoIPmonitor sensor is connected) is configured.
  • If you are monitoring traffic across different VLANs, ensure your mirror port is configured to carry all necessary VLAN tags (often called "trunk" mode).
4. Investigate Packet Encapsulation (If tcpdump shows traffic but VoIPmonitor does not)

If tcpdump or tshark shows packets reaching the interface but VoIPmonitor is not capturing them, the traffic may be encapsulated in a tunnel that VoIPmonitor cannot automatically process without additional configuration. Common encapsulations include VLAN tags, ERSPAN, GRE, VXLAN, and TZSP.

First, capture a sample of the traffic for analysis:

# Capture 100 packets of SIP traffic to a pcap file
tcpdump -i eth0 -c 100 -s0 port 5060 -w /tmp/encapsulation_check.pcap

Then analyze the capture to identify encapsulation:

# Check for VLAN-tagged packets (802.1Q)
tshark -r /tmp/encapsulation_check.pcap -Y "vlan"

# Check for GRE tunnels
tshark -r /tmp/encapsulation_check.pcap -Y "gre"

# Check for ERSPAN (mirrored frames carried inside GRE)
tshark -r /tmp/encapsulation_check.pcap -Y "erspan"

# Check for VXLAN (UDP port 4789)
tshark -r /tmp/encapsulation_check.pcap -Y "udp.port == 4789"

# Check for TZSP (UDP ports 37008 or 37009)
tshark -r /tmp/encapsulation_check.pcap -Y "udp.port == 37008 || udp.port == 37009"

# Show packet summary to identify any unusual protocol stacks
tshark -r /tmp/encapsulation_check.pcap -V | head -50

Identifying encapsulation issues:

  • VLAN tags present: Ensure VoIPmonitor's filter directive does not use a bare udp expression; a BPF filter of just udp does not match 802.1Q VLAN-tagged frames. Comment out the filter directive in voipmonitor.conf to test, or use a VLAN-aware expression (see the sketch at the end of this section).
  • ERSPAN/GRE tunnels: Promiscuous mode is NOT required for these Layer 3 tunnels. Verify that tunneling is configured correctly on your network device and that the packets are addressed to the sensor's IP. VoIPmonitor automatically decapsulates ERSPAN and GRE.
  • VXLAN/TZSP tunnels: These specialized tunneling protocols require proper configuration on the sending device. Consult your network device documentation for VoIPmonitor compatibility requirements.

If encapsulation is identified as the issue, review Sniffing_modes for detailed configuration guidance.
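If VLAN tags are the culprit and you still need a BPF filter, a VLAN-aware expression is the usual workaround (a sketch, assuming SIP on UDP port 5060; the BPF vlan keyword shifts packet offsets, so the condition must be repeated for tagged frames):

# /etc/voipmonitor.conf
filter = udp port 5060 or (vlan and udp port 5060)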

3B. Troubleshooting: RTP Streams Not Displayed for Specific Provider

If SIP signaling appears correctly in the GUI for calls from a specific provider, but RTP streams (audio quality graphs, waveform visualization) are missing for that provider while working correctly for other call paths, use this systematic approach to identify the cause.

Step 1: Make a Test Call to Reproduce the Issue

First, create a controlled test scenario to investigate the specific provider.

  • Determine if the issue affects ALL calls from this provider or only some (e.g., specific codecs, call duration, time of day)
  • Make a test call that reproduces the problem (e.g., from the problematic provider to a test number)
  • Allow the call to establish and run for at least 30-60 seconds to capture meaningful RTP data

Step 2: Capture Packets on the Sniffing Interface During the Test Call

During the test call, use tcpdump (or tshark) to directly capture packets on the network interface configured in voipmonitor.conf. This tells you whether RTP packets are being received by the sensor.

# Capture SIP and likely-RTP packets from the specific provider IP during your test call
# Replace eth0 with your interface and 1.2.3.4 with the provider's IP
sudo tcpdump -i eth0 -nn "host 1.2.3.4 and udp and (port 5060 or udp[8] & 0xc0 = 0x80)" -v

# Capture all UDP traffic from the provider to a file for detailed analysis (recommended)
sudo tcpdump -i eth0 -nn "host 1.2.3.4 and udp" -w /tmp/test_provider_rtp.pcap

Note: tcpdump has no built-in rtp keyword. The expression udp[8] & 0xc0 = 0x80 is a heuristic: udp[8] is the first byte of the UDP payload, and a value whose top two bits are "10" matches the version field of RTP version 2.

Step 3: Compare Raw Packet Capture with Sensor Output

After the test call:

  • Check what tcpdump captured:
# Count SIP packets
tshark -r /tmp/test_provider_rtp.pcap -Y "sip" | wc -l

# Count RTP packets
tshark -r /tmp/test_provider_rtp.pcap -Y "rtp" | wc -l

# View RTP stream details
tshark -r /tmp/test_provider_rtp.pcap -Y "rtp" -T fields -e rtp.ssrc -e rtp.seq -e rtp.ptype -e udp.srcport -e udp.dstport | head -20
  • Check what VoIPmonitor recorded:
    • Open the CDR for your test call in the GUI
    • Verify if the "Received Packets" column shows non-zero values for the provider leg
    • Check if the "Streams" section shows RTP quality graphs and waveform visualization
  • Compare the results:
    • If tcpdump shows NO RTP packets: The RTP traffic is not reaching the sensor interface. This indicates a network-level issue (asymmetric routing, SPAN configuration missing the RTP path, or firewall). You need to troubleshoot the network infrastructure, not VoIPmonitor.
    • If tcpdump shows RTP packets but the GUI shows no streams or zero received packets: The packets are reaching the sensor but VoIPmonitor is not processing them. Check:
      • Step 5: Check GUI Capture Rules - Look for capture rules targeting the provider's IP with RTP set to "DISCARD" or "Header Only"
      • TLS/SSL Decryption - Verify SRTP decryption is configured correctly if the provider uses encryption
      • Sniffer_configuration - Check for any problematic sipport or filter settings

For more information on capture rules that affect RTP storage, see Capture_rules.

5. Check for Non-Call SIP Traffic Only

If you see SIP traffic but it consists only of OPTIONS, NOTIFY, SUBSCRIBE, or MESSAGE methods (without any INVITE packets), there are no calls to generate CDRs. This can occur in environments that use SIP for non-call purposes like heartbeat checks or instant messaging.

You can configure VoIPmonitor to process and store these non-call SIP messages. See SIP_OPTIONS/SUBSCRIBE/NOTIFY and MESSAGES for configuration details.

Enable non-call SIP message processing in /etc/voipmonitor.conf:

# Process SIP OPTIONS (qualify pings). Default: no
sip-options = yes

# Process SIP MESSAGE (instant messaging). Default: yes
sip-message = yes

# Process SIP SUBSCRIBE requests. Default: no
sip-subscribe = yes

# Process SIP NOTIFY requests. Default: no
sip-notify = yes

Note that enabling these for processing and storage can significantly increase database load in high-traffic scenarios. Use with caution and monitor SQL queue growth. See Performance Tuning for optimization tips.
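One quick way to watch SQL queue growth after enabling these options is to follow the sensor's periodic status line in syslog; recent sensor builds report the queue length in a SQLq token (a sketch, assuming syslog lives at /var/log/syslog):

# Rising SQLq values across successive status lines mean the database is falling behind
tail -f /var/log/syslog | grep --line-buffered voipmonitor | grep -o 'SQLq\S*'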

Step 4: Check the VoIPmonitor Configuration

If tshark sees traffic but VoIPmonitor does not, the problem is almost certainly in voipmonitor.conf.

1. Check the interface directive
Make sure the interface parameter in /etc/voipmonitor.conf exactly matches the interface where you see traffic with tshark. For example: interface = eth0.
2. Check the sipport directive
By default, VoIPmonitor only listens on port 5060. If your PBX uses a different port for SIP, you must add it. For example:
sipport = 5060,5080
3. Distributed/Probe Setup Considerations:
If you are using a remote sensor (probe) with Packet Mirroring (packetbuffer_sender=yes), call detection depends on configuration on both the probe and the central analysis host.
Common symptom: The probe captures traffic (visible via tcpdump), but the central server records incomplete or missing CDRs for calls on non-default ports.
Critical: Both Systems Must Have Matching sipport Configuration
Probe side: The probe captures packets from the network interface. Its sipport setting determines which UDP ports it considers as SIP traffic to capture and forward.
Central server side: When receiving raw packets in Packet Mirroring mode, the central server analyzes the packets locally. Its sipport setting determines which ports it interprets as SIP during analysis. If a port is missing here, packets are captured but not recognized as SIP, resulting in missing CDRs.
Troubleshooting steps for distributed probe setups:
1. Verify traffic reachability on the probe:
Use tcpdump on the probe VM to confirm SIP packets for the missing calls are arriving on the expected ports.

# On the probe VM
tcpdump -i eth0 -n port 5061

2. Check the probe's voipmonitor.conf:
Ensure the sipport directive on the probe includes all necessary SIP ports used in your network.
# /etc/voipmonitor.conf on the PROBE
sipport = 5060,5061,5080,6060
3. Check the central analysis host's voipmonitor.conf:
This is the most common cause of missing calls in distributed setups. The central analysis host (the system receiving packets via server_bind or legacy mirror_bind) must also have the sipport directive configured with the same list of ports used by all probes.
# /etc/voipmonitor.conf on the CENTRAL HOST
sipport = 5060,5061,5080,6060
4. Restart both services:
Apply the configuration changes:
# On both probe and central host
systemctl restart voipmonitor
For more details on distributed architecture configuration and packet mirroring, see Distributed Architecture: Client-Server Mode.
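To confirm on the central analysis host that the probe's mirrored packet stream is actually arriving, you can watch the server's listening port (a sketch, assuming the default server_bind_port of 60024; adjust if you have customized it):

# On the CENTRAL HOST
tcpdump -i any -nn -c 10 tcp port 60024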
4. Check for a restrictive filter
If you have a BPF filter configured, ensure it is not accidentally excluding the traffic you want to see. For debugging, try commenting out the filter line entirely and restarting the sensor.

Step 5: Check GUI Capture Rules (Causing Call Stops)

If tshark sees SIP traffic and the sniffer configuration appears correct, but the probe stops processing calls or shows traffic only on the network interface, GUI capture rules may be the culprit.

Capture rules configured in the GUI can instruct the sniffer to ignore ("skip") all processing for matched calls. This includes calls matching specific IP addresses or telephone number prefixes.

1. Review existing capture rules
Navigate to GUI -> Capture rules and examine all rules for any that might be blocking your traffic.
Look specifically for rules with the Skip option set to ON (displayed as "Skip: ON"). The Skip option instructs the sniffer to completely ignore matching calls (no files, RTP analysis, or CDR creation).
2. Test by temporarily removing all capture rules
To isolate the issue, first create a backup of your GUI configuration:
  • Navigate to Tools -> Backup & Restore -> Backup GUI -> Configuration tables
  • This saves your current settings including capture rules
  • Delete all capture rules from the GUI
  • Click the Apply button to save changes
  • Reload the sniffer by clicking the green "reload sniffer" button in the control panel
  • Test if calls are now being processed correctly
  • If resolved, restore the configuration from the backup and systematically investigate the rules to identify the problematic one
3. Identify the problematic rule
  • After restoring your configuration, remove rules one at a time and reload the sniffer after each removal
  • When calls start being processed again, you have identified the problematic rule
  • Review the rule's match criteria (IP addresses, prefixes, direction) against your actual traffic pattern
  • Adjust the rule's conditions or Skip setting as needed
4. Verify rules are reloaded
After making changes to capture rules, remember that changes are not automatically applied to the running sniffer. You must click the "reload sniffer" button in the control panel, or the rules will continue using the previous configuration.

For more information on capture rules, see Capture_rules.

Step 6: Check VoIPmonitor Logs for Errors

Finally, VoIPmonitor's own logs are the best source for clues. Check the system log for any error messages generated by the sensor on startup or during operation.

# For Debian/Ubuntu
tail -f /var/log/syslog | grep voipmonitor

# For CentOS/RHEL/AlmaLinux
tail -f /var/log/messages | grep voipmonitor

Look for errors like:

  • "pcap_open_live(eth0) error: eth0: No such device" (Wrong interface name)
  • "Permission denied" (The sensor is not running with sufficient privileges)
  • Errors related to database connectivity.
  • Messages about dropping packets.

Step 7: Check for OOM (Out of Memory) Issues

If VoIPmonitor suddenly stops processing CDRs and a service restart temporarily restores functionality, the system may be experiencing OOM (Out of Memory) killer events. The Linux OOM killer terminates processes when available RAM is exhausted, and MySQL (mysqld) is a common target due to its memory-intensive nature.

1. Check for OOM killer events in kernel logs
# For Debian/Ubuntu
grep -i "out of memory\|killed process" /var/log/syslog | tail -20

# For CentOS/RHEL/AlmaLinux
grep -i "out of memory\|killed process" /var/log/messages | tail -20

# Also check dmesg:
dmesg | grep -i "killed process" | tail -10

Typical OOM killer messages look like:

Out of memory: Kill process 1234 (mysqld) score 123 or sacrifice child
Killed process 1234 (mysqld) total-vm: 12345678kB, anon-rss: 1234567kB
2. Monitor current memory usage
# Check available memory (look for low 'available' or 'free' values)
free -h

# Check per-process memory usage (sorted by RSS)
ps aux --sort=-%mem | head -15

# Check MySQL memory usage in bytes
cat /proc/$(pgrep mysqld)/status | grep -E "VmSize|VmRSS"

Warning signs:

  • Available memory consistently below 500MB during operation
  • MySQL consuming most of the available RAM
  • Swap usage near 100% (if swap is enabled)
  • Frequent process restarts without clear error messages
3. Solution
Increase physical memory:

The definitive solution for OOM-related CDR processing issues is to upgrade the server's physical RAM. After upgrading:

  • Verify memory improvements with free -h
  • Monitor for several days to ensure OOM events stop
  • Consider tuning innodb_buffer_pool_size in your MySQL configuration to use the additional memory effectively

Additional mitigation strategies (while planning for RAM upgrade):

  • Reduce MySQL's memory footprint by lowering innodb_buffer_pool_size (e.g., from 16GB to 8GB); a configuration sketch follows this list
  • Disable or limit non-essential VoIPmonitor features (e.g., packet capture storage, RTP analysis)
  • Ensure swap space is properly configured as a safety buffer (though swap is much slower than RAM)
  • Use sysctl vm.swappiness=10 to favor RAM over swap when some memory is still available
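Lowering innodb_buffer_pool_size, as suggested above, is a one-line change in the MySQL configuration (a sketch; the file location varies by distribution, e.g. /etc/mysql/my.cnf or a file under /etc/my.cnf.d/, and MySQL must be restarted afterwards):

[mysqld]
# Leave headroom for the voipmonitor sensor and the OS
innodb_buffer_pool_size = 8G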

Step 8: Missing CDRs for Calls with Large Packets

If VoIPmonitor is capturing some calls successfully but missing CDRs for specific calls (especially those that seem to have larger SIP packets like INVITEs with extensive SDP), there are two common causes to investigate.

Cause 1: snaplen Packet Truncation (VoIPmonitor Configuration)

The snaplen parameter in voipmonitor.conf limits how many bytes of each packet are captured. If a SIP packet exceeds snaplen, it is truncated and the sniffer may fail to parse the call correctly.

1. Check your current snaplen setting
grep snaplen /etc/voipmonitor.conf

Default is 3200 bytes (6000 if SSL/HTTP is enabled).

2. Test if packet truncation is the issue

Use tcpdump with -s0 (unlimited snapshot length) to capture full packets:

# Capture SIP traffic with full packet length
tcpdump -i eth0 -s0 -nn port 5060 -w /tmp/test_capture.pcap

# Analyze packet sizes with Wireshark or tshark
tshark -r /tmp/test_capture.pcap -T fields -e frame.len -Y "sip" | sort -n | tail -10

If you see SIP packets larger than your snaplen value (e.g., 4000+ bytes), increase snaplen in voipmonitor.conf:

snaplen = 65535

Then restart the sniffer: systemctl restart voipmonitor.

Cause 2: MTU Mismatch (Network Infrastructure)

If packets are being lost or fragmented due to MTU mismatches in the network path, VoIPmonitor may never receive the complete packets, regardless of snaplen settings.

1. Diagnose MTU-related packet loss

Capture traffic with tcpdump and analyze in Wireshark:

# Capture traffic on the VoIPmonitor host
tcpdump -i eth0 -s0 host <pbx_ip_address> -w /tmp/mtu_test.pcap

Open the pcap in Wireshark and look for:

  • Reassembled PDUs marked as incomplete
  • TCP retransmissions for the same packet
  • ICMP "Fragmentation needed" messages (Type 3, Code 4)
2. Verify packet completeness

In Wireshark, examine large SIP INVITE packets. If the SIP headers or SDP appear cut off or incomplete, packets are likely being lost in transit due to MTU issues.

3. Identify the MTU bottleneck

The issue is typically a network device with a lower MTU than the end devices. Common locations:

  • VPN concentrators
  • Firewalls
  • Routers with tunnel interfaces
  • Cloud provider gateways (often capped at the standard 1500 bytes or lower, even where the rest of the network uses 9000-byte jumbo frames)

To locate the problematic device, trace the MTU along the network path from the PBX to the VoIPmonitor sensor.
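A quick way to trace the path MTU is a do-not-fragment ping from one end of the path toward the sensor (a sketch using Linux ping; 1472 bytes of ICMP payload plus 28 bytes of headers equals a 1500-byte packet):

# Succeeds only if the whole path carries 1500-byte packets unfragmented;
# lower -s step by step until it succeeds to locate the bottleneck MTU
ping -M do -s 1472 <voipmonitor_sensor_ip>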

4. Resolution options
  • Increase MTU on the bottleneck device to match the rest of the network (e.g., from 1500 to 9000 for jumbo frame environments)
  • Enable Path MTU Discovery (PMTUD) on intermediate devices
  • Ensure your switching infrastructure supports jumbo frames end-to-end if you are using them

For more information on the snaplen parameter, see Sniffer Configuration.

Cause 3: External Source Packet Truncation (Traffic Mirroring/LBS Modules)

If packets are truncated or corrupted BEFORE they reach VoIPmonitor, changing snaplen will NOT fix the issue. This scenario occurs when using external SIP sources that have their own packet size limitations.

Symptoms to identify this scenario
  • Large SIP packets (e.g., WebRTC INVITE with big Authorization headers ~4k) appear truncated
  • Packets show as corrupted or malformatted in VoIPmonitor GUI
  • Changing snaplen in voipmonitor.conf has no effect
  • Using TCP instead of UDP in the external system does not resolve the issue
Common external sources that may truncate packets
  1. Kamailio siptrace module
  2. FreeSWITCH sip_trace module
  3. OpenSIPS tracing modules
  4. Custom HEP/HOMER agent implementations
  5. Load balancers or proxy servers with traffic mirroring
Diagnose external source truncation

Use tcpdump with -s0 (unlimited snapshot length) on the VoIPmonitor sensor to compare packet sizes:

# Capture traffic received by VoIPmonitor
sudo tcpdump -i eth0 -s0 -nn port 5060 -w /tmp/voipmonitor_input.pcap

# Analyze actual packet sizes received
tshark -r /tmp/voipmonitor_input.pcap -T fields -e frame.len -Y "sip.Method == INVITE" | sort -n | tail -10

If:

  • You see packets with truncated SIP headers or incomplete SDP
  • The packet length is much smaller than expected (e.g., 1500 bytes instead of 4000+ bytes)
  • Truncation is consistent across all calls

Then the external source is truncating packets before they reach VoIPmonitor.

Solutions for Kamailio siptrace truncation

If using Kamailio's siptrace module with traffic mirroring:

1. Configure Kamailio to use TCP transport for siptrace (may help in some cases):

# In kamailio.cfg
modparam("siptrace", "duplicate_uri", "sip:voipmonitor_ip:port;transport=tcp")

2. If Kamailio reports "Connection refused", VoIPmonitor does not open a TCP listener by default. Manually open one:

# Open a TCP listener with socat and relay the received data to the local
# SIP port (a sketch - adjust the destination address/port to your deployment)
socat TCP-LISTEN:5888,fork,reuseaddr UDP-SENDTO:127.0.0.1:5060 &

Then update kamailio.cfg to use the specified port instead of the standard SIP port.

3. Use HAProxy traffic 'tee' function (recommended): If your architecture includes HAProxy in front of Kamailio, use its traffic mirroring to send a copy of the WebSocket traffic directly to VoIPmonitor's standard SIP listening port. This bypasses the siptrace module entirely and preserves original packets:

# In haproxy.cfg, within your frontend/backend configuration
# Send a copy of traffic to VoIPmonitor
option splice-response
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use-server voipmonitor if { req_ssl_hello_type 1 }
listen voipmonitor_mirror
    bind :5888
    mode tcp
    server voipmonitor <voipmonitor_sensor_ip>:5060 send-proxy

Note: The exact HAProxy configuration depends on your architecture and whether you are mirroring TCP (WebSocket) or UDP traffic.

Solutions for other external sources
  1. Check the external system's documentation for packet size limits or truncation settings
  2. Consider using standard network mirroring (SPAN/ERSPAN/GRE) instead of SIP tracing modules
  3. Ensure the external system captures full packet lengths (disable any internal packet size caps)
  4. Verify that the external system does not reassemble or modify SIP packets before forwarding

Step 9: Probe Timeout Due to Virtualization Timing Issues

If remote probes are intermittently disconnecting from the central server with timeout errors, even on a high-performance network with low load, the issue may be related to virtualization host timing problems rather than network connectivity.

Diagnosis: Check System Log Timing Intervals

The VoIPmonitor sensor generates status log messages approximately every 10 seconds during normal operation. If the timing system on the probe is inconsistent, the interval between these status messages can exceed 30 seconds, triggering a connection timeout.

1. Monitor the system log on the affected probe
tail -f /var/log/syslog | grep voipmonitor
2. Examine the timestamps of voipmonitor status messages

Look for repeating log entries that should appear approximately every 10 seconds during normal operations.

3. Identify timing irregularities

Calculate the time interval between successive status log entries. If the interval exceeds 30 seconds, this indicates a timing system problem that will cause connection timeouts with the central server.
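To measure the intervals without eyeballing timestamps, you can compute the gap between successive voipmonitor log lines (a rough sketch, assuming GNU awk and the classic "MMM DD HH:MM:SS" syslog timestamp format):

grep voipmonitor /var/log/syslog | awk '{
  mon = (index("JanFebMarAprMayJunJulAugSepOctNovDec", $1) + 2) / 3;
  split($3, t, ":");
  now = mktime(strftime("%Y") " " mon " " $2 " " t[1] " " t[2] " " t[3]);
  if (prev) print now - prev, "seconds since previous voipmonitor line";
  prev = now;
}'

Intervals that repeatedly exceed 30 seconds confirm the timing problem described above.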

Root Cause: Virtualization Host RDTSC Issues

This problem is not network-related. It is a host-level timing issue that impacts the application's internal timers.

The issue typically occurs on virtualized probes where the host's CPU timekeeping is inconsistent. Specifically, problems with the RDTSC (Read Time-Stamp Counter) CPU instruction on the virtualization host can cause:

  • Irregular system clock behavior on the guest VM
  • Application timers that do not fire consistently
  • Sporadic timeouts in client-server connections

Resolution

1. Investigate the virtualization host configuration

Check the host's hypervisor or virtualization platform documentation for known timekeeping issues related to RDTSC.

Common virtualization platforms with known timing considerations:

  • KVM/QEMU: Check CPU passthrough and TSC mode settings
  • VMware: Verify time synchronization between guest and host
  • Hyper-V: Review Integration Services time sync configuration
  • Xen: Check TSC emulation settings
2. Apply host-level fixes

These are host-level fixes, not changes to the guest VM configuration. Consult your virtualization platform's documentation for specific steps to address RDTSC timing issues.

Typical solutions include:

  • Enabling appropriate TSC modes on the host
  • Configuring CPU features passthrough correctly
  • Adjusting hypervisor timekeeping parameters
3. Verify the fix

After applying the host-level configuration changes, monitor the probe's status logs again to confirm that the timing intervals are now consistently around 10 seconds (never exceeding 30 seconds).

# Monitor for regular status messages
tail -f /var/log/syslog | grep voipmonitor

Once the timing is corrected, probe connections to the central server should remain stable without intermittent timeouts.

Troubleshooting: Audio Missing on One Call Leg

If the sniffer captures full audio on one call leg (e.g., carrier/outside) but only partial or no audio on the other leg (e.g., PBX/inside), use this diagnostic workflow to identify the root cause BEFORE applying any configuration fixes.

The key question to answer is: Are the RTP packets for the silent leg present on the wire?

Step 1: Use tcpdump to Capture Traffic During a Test Call

Initiate a new test call that reproduces the issue. During the call, use tcpdump or tshark directly on the sensor's sniffing interface to capture all traffic:

# Capture traffic to a file during the test call
# Replace eth0 with your sniffing interface
tcpdump -i eth0 -s0 -w /tmp/direct_capture.pcap

# OR: Display live traffic for specific IPs (useful for real-time diagnostics)
tcpdump -i eth0 -s0 -nn "host <pbx_ip> or host <carrier_ip>"

Let the call run for 10-30 seconds, then stop tcpdump with Ctrl+C.

Step 2: Retrieve VoIPmonitor GUI's PCAP for the Same Call

After the call completes:

  1. Navigate to the CDR View in the VoIPmonitor GUI
  2. Find the test call you just made
  3. Download the PCAP file for that call (click the PCAP icon/button)
  4. Save it as: /tmp/gui_capture.pcap

Step 3: Compare the Two Captures

Analyze both captures to determine if RTP packets for the silent leg are present on the wire:

# Count RTP packets in the direct capture
tshark -r /tmp/direct_capture.pcap -Y "rtp" | wc -l

# Count RTP packets in the GUI capture
tshark -r /tmp/gui_capture.pcap -Y "rtp" | wc -l

# Check for RTP from specific source IPs in the direct capture
tshark -r /tmp/direct_capture.pcap -Y "rtp" -T fields -e rtp.ssrc -e ip.src -e ip.dst

# Check Call-ID in both captures to verify they're the same call
tshark -r /tmp/direct_capture.pcap -Y "sip" -T fields -e sip.Call-ID | head -1
tshark -r /tmp/gui_capture.pcap -Y "sip" -T fields -e sip.Call-ID | head -1

Step 4: Interpret the Results

Diagnostic Decision Matrix:

  • Observation: RTP packets for the silent leg are NOT present in the direct capture.
  Root cause & next steps: Network/PBX Issue. The PBX or network is not sending the packets. This is not a VoIPmonitor problem. Troubleshoot the PBX (check NAT, RTP port configuration) or the network (SPAN/mirror configuration, firewall rules).

  • Observation: RTP packets for the silent leg ARE present in the direct capture but missing in the GUI capture.
  Root cause & next steps: Sniffer Configuration Issue. Packets are on the wire but VoIPmonitor is failing to capture or correlate them. Likely causes: NAT IP mismatch (natalias configuration incorrect), SIP signaling advertising a different IP than the RTP source, or restrictive filter rules. Proceed with configuration fixes.

  • Observation: RTP packets are present in both captures but audio is still silent.
  Root cause & next steps: Codec/Transcoding Issue. Packets are captured correctly but may not be decoded properly. Check codec compatibility, unsupported codecs, or transcoding issues on the PBX.

Step 5: Apply the Correct Fix Based on Diagnosis

If RTP is NOT on the wire (Network/PBX issue)
  • Check PBX RTP port configuration and firewall rules
  • Verify network SPAN/mirror is capturing bidirectional traffic (see Section 3)
  • Check PBX NAT settings - RTP packets may be blocked or routed incorrectly
If RTP is on the wire but not captured (Sniffer configuration issue)
  • Configure natalias in /etc/voipmonitor.conf to map the IP advertised in SIP signaling to the actual RTP source IP:
; /etc/voipmonitor.conf
natalias = <Public_IP_Signaled> <Private_IP_Actual>
  • Check for restrictive filter directives in voipmonitor.conf
  • Verify sipport includes all necessary SIP ports
If packets are captured but audio silent (Codec issue)
  • Check CDR view for codec information on both legs (a stream-listing tshark sketch follows this list)
  • Verify VoIPmonitor GUI has the necessary codec decoders installed
  • Check for codec mismatches between call legs (transcoding may be missing)
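To compare codecs across the legs from the capture itself, listing the RTP payload type per source is usually enough; a minimal sketch:

<syntaxhighlight lang="bash">
# List payload type per RTP source (0 = G.711 PCMU, 8 = G.711 PCMA, 18 = G.729);
# a payload type the GUI cannot decode would explain captured-but-silent audio
tshark -r /tmp/direct_capture.pcap -Y "rtp" -T fields -e ip.src -e rtp.p_type | sort -u
</syntaxhighlight>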

=== Step 6: Verify the Fix After Configuration Changes ===

After making changes in <code>/etc/voipmonitor.conf</code>:

<syntaxhighlight lang="bash">
# Restart the sniffer
systemctl restart voipmonitor

# Make another test call and repeat the diagnostic workflow
# Compare direct vs GUI capture again
</syntaxhighlight>

Confirm that RTP packets for the problematic leg now appear in both the direct tcpdump capture AND the GUI's PCAP file.
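If the restart itself is in doubt, a quick service check before the next test call can save a diagnostic round-trip; a sketch assuming the systemd unit name used above:

<syntaxhighlight lang="bash">
# Confirm the sniffer came back up cleanly
systemctl status voipmonitor --no-pager

# Inspect the most recent startup messages for warnings (e.g., bad filter syntax)
journalctl -u voipmonitor -n 20 --no-pager
</syntaxhighlight>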

'''Note:''' This diagnostic methodology helps you identify whether the issue is in the network infrastructure (PBX, SPAN, firewall) or in VoIPmonitor configuration (natalias, filters). Applying VoIPmonitor configuration fixes when the root cause is a network issue will not resolve the problem.

== Appendix: tshark Display Filter Syntax for SIP ==

When using tshark to analyze SIP traffic, it is important to use the correct Wireshark display filter syntax. Below are common filter examples:

=== Basic SIP Filters ===

<syntaxhighlight lang="bash">
# Show all SIP INVITE messages
tshark -r capture.pcap -Y "sip.Method == INVITE"

# Show all SIP messages (any method)
tshark -r capture.pcap -Y "sip"

# Show SIP and RTP traffic
tshark -r capture.pcap -Y "sip || rtp"
</syntaxhighlight>

=== Search for Specific Phone Number or Text ===

<syntaxhighlight lang="bash">
# Find calls containing a specific phone number (e.g., 5551234567)
tshark -r capture.pcap -Y 'sip contains "5551234567"'

# Find INVITE messages for a specific number
tshark -r capture.pcap -Y 'sip.Method == INVITE && sip contains "5551234567"'
</syntaxhighlight>

=== Extract Call-ID from Matching Calls ===

<syntaxhighlight lang="bash">
# Get Call-ID for calls matching a phone number
tshark -r capture.pcap -Y 'sip.Method == INVITE && sip contains "5551234567"' -T fields -e sip.Call-ID

# Get Call-ID along with From and To headers
tshark -r capture.pcap -Y 'sip.Method == INVITE' -T fields -e sip.Call-ID -e sip.from.user -e sip.to.user
</syntaxhighlight>

=== Filter by IP Address ===

<syntaxhighlight lang="bash">
# SIP traffic from a specific source IP
tshark -r capture.pcap -Y "sip && ip.src == 192.168.1.100"

# SIP traffic between two hosts
tshark -r capture.pcap -Y "sip && ip.addr == 192.168.1.100 && ip.addr == 10.0.0.50"
</syntaxhighlight>

=== Filter by SIP Response Code ===

<syntaxhighlight lang="bash">
# Show all 200 OK responses
tshark -r capture.pcap -Y "sip.Status-Code == 200"

# Show all 4xx and 5xx error responses
tshark -r capture.pcap -Y "sip.Status-Code >= 400"

# Show 486 Busy Here responses
tshark -r capture.pcap -Y "sip.Status-Code == 486"
</syntaxhighlight>

=== Important Syntax Notes ===

* '''Field names are case-sensitive:''' use <code>sip.Method</code>, <code>sip.Call-ID</code>, <code>sip.Status-Code</code> (not <code>sip.method</code> or <code>sip.call-id</code>)
* '''String matching uses <code>contains</code>:''' use <code>sip contains "text"</code> (not <code>sip.contains()</code>)
* '''Use double quotes for strings:''' <code>sip contains "number"</code> (not single quotes)
* '''Boolean operators:''' use <code>&&</code> (and), <code>||</code> (or), <code>!</code> (not); see the combined example below
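Putting these rules together, a combined filter might look like the following (the IP address is a placeholder):

<syntaxhighlight lang="bash">
# INVITE or BYE messages that do NOT come from 192.168.1.100, demonstrating
# case-sensitive field names plus &&, || and ! in a single expression
tshark -r capture.pcap -Y '(sip.Method == INVITE || sip.Method == BYE) && !(ip.src == 192.168.1.100)'
</syntaxhighlight>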

For a complete reference, see the Wireshark SIP Display Filter Reference.

== See Also ==

== AI Summary for RAG ==

Summary: Step-by-step troubleshooting guide for VoIPmonitor sensor not capturing calls. Steps: (1) Verify service running with systemctl status. If service fails to start or crashes immediately with "missing package" error: check logs (syslog/journalctl), install missing dependencies - most commonly rrdtool for RRD graphing/statistics (apt-get install rrdtool or yum/dnf install rrdtool), other common missing packages: libpcap, libssl, zlib. Use ldd to check shared library dependencies. Restart service after installing packages. (2) CRITICAL STEP: Use tshark to verify live traffic is reaching the correct network interface: tshark -i eth0 -Y "sip || rtp" -n (replace eth0 with interface from voipmonitor.conf). If command shows NO packets: issue is network - check SPAN/mirror port configuration on switch, firewall rules. If command shows OPTIONS/NOTIFY/SUBSCRIBE/MESSAGE but NO INVITE packets: environment has no calls (VoIPmonitor requires INVITE for CDRs). Configure to process non-call SIP messages in voipmonitor.conf with sip-options, sip-message, sip-subscribe, sip-notify set to yes. (3) Check network config - promiscuous mode required for SPAN/RSPAN but NOT for Layer 3 tunnels (ERSPAN/GRE/TZSP/VXLAN). (3A) SPECIAL CASE: Missing packets for specific IPs during high-traffic periods. Use tcpdump FIRST: `tcpdump -i eth0 -nn "host 10.1.2.3 and port 5060"`. If NO packets arrive -> check SPAN config for bidirectional capture (source ports, BOTH inbound/outbound, SPAN buffer saturation during peak, VLAN trunking). If packets DO arrive -> check sensor bottlenecks (ringbuffer, t0CPU, OOM, max_sip_packets_in_call). (3a) If tcpdump shows traffic but VoIPmonitor does NOT capture it, investigate packet encapsulation - capture with tcpdump and analyze with tshark for VLAN tags, ERSPAN, GRE (tshark -Y "gre"), VXLAN (udp.port == 4789), TZSP (udp.port 37008/37009). VLAN tags: ensure filter directive does not use "udp" which drops VLAN-tagged packets. ERSPAN/GRE: verify tunnel configured correctly and packets addressed to sensor IP (promiscuous mode NOT required). VXLAN/TZSP: require proper sending device configuration. (3B) SPECIAL CASE: RTP streams not displayed for specific provider. If SIP signaling works in GUI but RTP streams/quality graphs missing for one provider while working for others: Step 1: Make a test call to reproduce issue. Step 2: During test call, capture RTP packets with tcpdump: `sudo tcpdump -i eth0 -nn "host 1.2.3.4 and rtp" -w /tmp/test_provider_rtp.pcap`. Step 3: Compare tcpdump output with sensor GUI. If tcpdump shows NO RTP packets: network-level issue (asymmetric routing, SPAN config missing RTP path). If tcpdump shows RTP packets but GUI shows no streams: check capture rules with RTP set to DISCARD/Header Only, SRTP decryption config, or sipport/filter settings. (4) Verify voipmonitor.conf settings: interface, sipport, filter directives. (5) Check GUI capture rules with "Skip" option blocking calls. (6) Review system logs for errors. (7) Diagnose OOM killer events causing CDR processing stops. (8) Investigate missing CDRs due to snaplen truncation, MTU mismatch, or EXTERNAL SOURCE packet truncation. Cause 3: If packets truncated before reaching VoIPmonitor (e.g., Kamailio siptrace, FreeSWITCH sip_trace, custom HEP/HOMER agents, load balancer mirrors), snaplen changes will NOT help. Diagnose with tcpdump -s0; check if received packets smaller than expected.
Solutions: For Kamailio siptrace, use TCP transport in duplicate_uri parameter; if connection refused, open TCP listener with socat; best solution: use HAProxy traffic 'tee' to bypass siptrace entirely and send original packets directly. (9) Diagnose probe timeout due to virtualization timing issues - check syslog for 10-second voipmonitor status intervals, RDTSC problems on hypervisor cause >30 second gaps triggering timeouts. Includes tshark display filter syntax appendix.

Keywords: troubleshooting, no calls, not sniffing, no CDRs, tshark, missing package, missing library, rrdtool, rrdtools, dependencies, service failed, service crashed, ldd, libpcap, libssl, zlib, systemctl restart, journalctl, syslog, promiscuous mode, SPAN, RSPAN, ERSPAN, GRE, TZSP, VXLAN, voipmonitor.conf, interface, sipport, filter, capture rules, Skip, OOM, out of memory, snaplen, MTU, packet truncation, external source truncation, Kamailio siptrace, FreeSWITCH sip_trace, OpenSIPS, HEP, HOMER, HAProxy tee, traffic mirroring, load balancer, socat, TCP listener, WebRTC INVITE, truncated packets, corrupted packets, Authorization header, 4k packets, display filter, sip.Method, sip.Call-ID, probe timeout, virtualization, RDTSC, timing issues, status logs, 10 second interval, KVM, VMware, Hyper-V, Xen, non-call SIP traffic, OPTIONS, NOTIFY, SUBSCRIBE, MESSAGE, sip-options, sip-message, sip-subscribe, sip-notify, qualify pings, heartbeat, instant messaging, encapsulation, packet encapsulation, VLAN tags, 802.1Q, tcpdump analysis, tshark encapsulation filters, high traffic, specific IP, missing packets, specific IP addresses, call legs missing, INVITE missing, high-traffic periods, peak hours, bidirectional capture, inbound outbound, both directions, SPAN buffer saturation, port mirroring, SPAN buffer capacity, rx tx both, monitor session, SPAN source, SPAN destination, ringbuffer, t0CPU, max_sip_packets_in_call, max_invite_packets_in_call, RTP missing, RTP not displayed, RTP missing specific provider, audio quality graphs missing, SRTP, asymmetric routing, RTP test call, tcpdump RTP capture, RTP stream visualization, audio missing, audio missing on one leg, partial audio, silenced audio, one call leg, carrier, PBX, inside, outside, tcpdump tshark comparison, direct capture vs GUI capture, diagnose audio issues, RTP packets on the wire, NAT IP mismatch, natalias configuration, codec issue, transcoding, RTP port configuration, network issue, PBX issue, sniffer configuration, packet correlation, RTP source IP mismatch, SIP signaling IP

Key Questions:

* What is the correct tshark command to verify SIP/RTP traffic is reaching the VoIPmonitor sensor? (Use: tshark -i eth0 -Y "sip || rtp" -n)
* How do I diagnose why sniffer captures full audio on one call leg but no audio on the other leg?
* How do I use tcpdump to diagnose missing audio on one call leg?
* How do I compare tcpdump capture with the GUI's PCAP file?
* How do I determine if RTP packets are on the wire when one leg has no audio?
* What is the diagnostic workflow for audio missing on one call leg?
* How do I determine if audio issue is network/PBX problem vs VoIPmonitor configuration?
* How do I check if RTP packets for the silent leg are present on the wire?
* How do I verify if natalias is needed for NAT IP mismatch?
* How do I diagnose whether one-way audio is a codec issue or network issue?
* How do I use tcpdump vs GUI PCAP comparison for troubleshooting?
* What should I do first when one call leg has missing or partial audio?
* How do I interpret tcpdump vs GUI capture comparison results?
* How do I check for codec/transcoding issues causing one-way audio?
* How do I configure VoIPmonitor to process non-call SIP messages like OPTIONS/NOTIFY/SUBSCRIBE?
* How do I check for VLAN tags in a pcap file?
* How do I detect ERSPAN or GRE tunnels with tshark?
* How do I check for VXLAN encapsulation in my capture?
* How do I identify TZSP packets in a pcap?
* Why does my BPF filter drop VLAN-tagged packets?
* Do I need promiscuous mode for ERSPAN or GRE tunnels?
* Why is VoIPmonitor not recording any calls?
* How can I check if VoIP traffic is reaching my sensor server?
* How do I enable promiscuous mode on my network card?
* What are the most common reasons for VoIPmonitor not capturing data?
* How do I filter tshark output for SIP INVITE messages?
* What is the correct tshark filter syntax to find a specific phone number?
* Why is my VoIPmonitor probe stopping processing calls?
* What does the "Skip" option in capture rules do?
* How do I check for OOM killer events in Linux?
* Why are CDRs missing for calls with large SIP packets?
* What does the snaplen parameter do in voipmonitor.conf?
* Traffic capture stopped with missing package error, what should I do?
* Which package is commonly missing on newly installed sensors?
* How do I fix a missing library dependency for VoIPmonitor sensor?
* How do I diagnose MTU-related packet loss?
* Why are my large SIP packets truncated even after increasing snaplen?
* How do I tell if packets are truncated by VoIPmonitor or by an external source?
* How do I fix Kamailio siptrace truncating large packets?
* What is HAProxy traffic tee and how can it help with packet truncation?
* Why does Kamailio report "Connection refused" when sending siptrace via TCP?
* How do I open a TCP listener on VoIPmonitor for Kamailio siptrace?
* How do I use socat to open a TCP listening port?
* How do I troubleshoot missing packets for specific IP addresses?
* Why are packets missing only during high-traffic periods?
* How do I use tcpdump to verify if packets reach the VoIPmonitor sensor?
* What should I check if tcpdump shows no traffic but the PBX is sending packets?
* How do I verify SPAN configuration is capturing bidirectional traffic?
* What is SPAN buffer saturation and how does it affect packet capture?
* How do I configure Cisco switch SPAN for bidirectional mirroring?
* Why are packets missing for specific IP addresses during peak hours?
* What is the difference between rx, tx, and both in SPAN configuration?
* How do I know if my SPAN buffer is overloading during high traffic?
* Why do some calls work but others miss packet legs for specific IPs?
* How do I verify SPAN source and destination ports are correct?
* How do I check if SPAN is configured for trunk mode on VLAN traffic?
* Do I need SPAN to capture both ingress and egress traffic?
* When should I check SPAN buffer capacity vs sensor t0CPU for packet drops?
* What should I do if FreeSWITCH sip_trace is truncating packets?
* Why are my probes disconnecting from the server with timeout errors?
* How do I diagnose probe timeout issues on high-performance networks?
* What causes intermittent probe timeout errors in client-server mode?
* How do I check for virtualization timing issues on VoIPmonitor probes?
* Why are there no CDRs even though tshark shows SIP OPTIONS/NOTIFY traffic?
* How do I enable sip-options, sip-message, sip-subscribe, sip-notify in voipmonitor.conf?
* What SIP methods are processed to generate CDRs vs non-call records?
* Why are RTP streams not displayed in the GUI for a specific provider?
* How do I use tcpdump to capture RTP packets during a test call?
* How do I diagnose missing RTP audio quality graphs for one provider?
* If SIP signaling works but RTP is missing for a specific provider, what should I check?