Sniffer troubleshooting

This guide provides a systematic process to diagnose why the VoIPmonitor sensor might not be capturing any calls. Use it to quickly identify and resolve the most common issues.


Is the VoIPmonitor Service Running Correctly?

First, confirm the sensor process is active and loaded the correct configuration file.

1. Check the service status (for modern systemd systems)
systemctl status voipmonitor

Look for a line that says Active: active (running). If it is inactive or failed, try restarting it with systemctl restart voipmonitor and check the status again.
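
If the status shows failed, the journal usually explains why. A quick check on systemd hosts (the same journalctl tool is used later in this guide):

# Show the last 50 log lines for the voipmonitor unit
journalctl -u voipmonitor -n 50 --no-pager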

2. Service Fails to Start with "Binary Not Found" After Crash

If the VoIPmonitor service fails to start after a crash or watchdog restart with an error message indicating the binary cannot be found (e.g., "No such file or directory" for /usr/local/sbin/voipmonitor), the binary may have been renamed with an underscore suffix during the crash recovery process.

Check for a renamed binary:

# Check if the standard binary path exists
ls -l /usr/local/sbin/voipmonitor

# If not found, look for a renamed version with underscore suffix
ls -l /usr/local/sbin/voipmonitor_*

If you find a renamed binary (e.g., voipmonitor_, voipmonitor_20250104, etc.), rename it back to the standard name:

mv /usr/local/sbin/voipmonitor_ /usr/local/sbin/voipmonitor

Then restart the service:

systemctl start voipmonitor

Verify the service starts correctly:

systemctl status voipmonitor
3. Sensor Becomes Unresponsive After GUI Update

If the sensor service fails to start or becomes unresponsive after updating a sensor through the Web GUI, the update process may have left the service in a stuck state. The solution is to forcefully stop the service and restart it using these commands:

# SSH into the sensor host and execute:
killall voipmonitor
systemctl stop voipmonitor
systemctl start voipmonitor

After running these commands, verify the sensor status in the GUI to confirm it is responding correctly. This sequence ensures that (1) any zombie or hung processes are terminated by killall, (2) the systemd unit is brought to a fully stopped state, and (3) the service then starts cleanly.

4. Verify the running process
ps aux | grep voipmonitor

This command will show the running process and the exact command line arguments it was started with. Critically, ensure it is using the correct configuration file, for example: --config-file /etc/voipmonitor.conf. If it is not, there may be an issue with your startup script.
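
If the ps output is long, a narrower sketch such as the following prints only the arguments of the running sensor (the grep assumes an explicit --config-file argument was passed):

# Show only the command line of the running sensor
ps -o args= -C voipmonitor

# Extract just the configuration file path, if one was passed explicitly
ps -o args= -C voipmonitor | grep -oE -- '--config-file [^ ]+'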

Is Network Traffic Reaching the Server?

If the service is running, the next step is to verify that VoIP packets (SIP/RTP) are actually arriving at the server's network interface. The best tool for this is tshark (the command-line version of Wireshark).

1. Install tshark
# For Debian/Ubuntu
apt-get update && apt-get install tshark

# For CentOS/RHEL/AlmaLinux (tshark ships in the wireshark package on EL7; on EL8 and later install wireshark-cli)
yum install wireshark
2. Listen for SIP traffic on the correct interface

Replace eth0 with the interface name you have configured in voipmonitor.conf.

tshark -i eth0 -Y "sip || rtp" -n
3. Advanced: Capture to a PCAP File for Definitive Testing

Live monitoring with tshark is useful for observation, but capturing traffic to a .pcap file during a test call provides definitive evidence for troubleshooting intermittent issues or specific call legs.

Method 1: Using tcpdump (Recommended)

# Start capture on the correct interface (replace eth0)
tcpdump -i eth0 -s 0 -w /tmp/test_capture.pcap port 5060

# Or capture both SIP and RTP traffic:
tcpdump -i eth0 -s 0 -w /tmp/test_capture.pcap "(port 5060 or udp)"

# Let it run while you make a test call with the missing call leg
# Press Ctrl+C to stop the capture

# Analyze the capture file:
tshark -r /tmp/test_capture.pcap -Y "sip"

Method 2: Using tshark to capture to file

# Start capture:
tshark -i eth0 -w /tmp/test_capture.pcap -f "tcp port 5060 or udp"

# Make your test call, then press Ctrl+C to stop

# Analyze the capture:
tshark -r /tmp/test_capture.pcap -Y "sip" -V

Decision Tree for PCAP Analysis:

After capturing a test call known to have a missing leg:

  • If SIP packets are missing from the .pcap file:
    • The problem is with your network mirroring configuration (SPAN/TAP port, AWS Traffic Mirroring, etc.)
    • The packets never reached the VoIPmonitor sensor's network interface
    • Fix the switch mirroring setup or infrastructure configuration first
  • If SIP packets ARE present in the .pcap file but missing in the VoIPmonitor GUI:
    • The packets did reach the sensor, so the problem lies within VoIPmonitor itself
    • Check the voipmonitor.conf settings (interface, sipport, filter) and the GUI capture rules covered in the sections below

Example Test Call Workflow:

# 1. Start capture
tcpdump -i eth0 -s 0 -w /tmp/test.pcap "port 5060 and host 10.0.1.100"

# 2. Make a test call from phone at 10.0.1.100 to 10.0.2.200
#    (a call that you know should have recordings but is missing)

# 3. Stop capture (Ctrl+C)

# 4. Check for the specific call's Call-ID
tshark -r /tmp/test.pcap -Y "sip" -T fields -e sip.Call-ID

# 5. Verify if packets for both A-leg and B-leg exist
tshark -r /tmp/test.pcap -Y "sip && ip.addr == 10.0.1.100"

# 6. Compare results with VoIPmonitor GUI
#    - If packets found in .pcap: VoIPmonitor software issue
#    - If packets missing from .pcap: Network mirroring issue

Check Sensor Statistics in GUI

If tshark confirms traffic is reaching the interface, use the VoIPmonitor GUI to verify the sensor is processing packets without drops.

1. Navigate to Settings → Sensors
Expand the sensor details to view real-time capture statistics.
2. Check the # packet drops counter
This counter should ideally be 0. If it shows a value other than zero, the sensor is dropping packets due to processing bottlenecks, insufficient buffer memory, or hardware limitations.
3. Common causes of packet drops
Likely Cause | Solution
Insufficient buffer memory | Increase ringbuffer or max_buffer_mem in voipmonitor.conf (see Scaling for tuning guidance)
CPU bottleneck | Check sensor CPU utilization; consider dedicating cores with cpu_affinity
Hardware/driver issue | Verify the interface driver; check for errors with ethtool -S eth0
New filter or feature overload | Remove or simplify BPF filters; disable unnecessary features
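
The GUI counter is the authoritative figure, but OS-level drop counters on the capture interface make a useful cross-check (replace eth0; ethtool field names vary by NIC driver):

# Kernel-level RX statistics, including dropped packets
ip -s link show eth0

# Driver/NIC counters related to drops and errors
ethtool -S eth0 | grep -iE "drop|miss|err"
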
4. Other useful sensor statistics
  • Current capture rate
  • Current bandwidth utilization
  • Call processing rate
  • Real-time graph of capture rates over time

For detailed performance metrics beyond basic statistics, see Understanding_the_Sniffer's_Performance_Log.

Troubleshoot Network and Interface Configuration

If tshark shows no traffic, it means the packets are not being delivered to the operating system correctly.

1. Check if the interface is UP

Ensure the network interface is active.

ip link show eth0

The output should contain the word UP. If it doesn't, bring it up with:

ip link set dev eth0 up
2. Check for Promiscuous Mode (for SPAN/RSPAN Mirrored Traffic)

Important: Promiscuous mode requirements depend on your traffic mirroring method:

Mirroring Method | Promiscuous Mode Required? | Reason
SPAN/RSPAN (Layer 2) | Yes | Mirrored packets retain their original MAC addresses; the interface must accept all packets
ERSPAN/GRE/TZSP/VXLAN (Layer 3) | No | Tunneled traffic is addressed directly to the sensor's IP; VoIPmonitor decapsulates it automatically

For SPAN/RSPAN deployments, check the current promiscuous mode status:

ip link show eth0

Look for the PROMISC flag.

Enable promiscuous mode manually if needed:

ip link set eth0 promisc on

If this solves the problem, you should make the change permanent. The install-script.sh for the sensor usually attempts to do this, but it can fail.
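
How to persist promiscuous mode depends on the distribution; one possible approach on systemd hosts is a small oneshot unit. This is only a sketch - the unit name promisc-eth0.service and the interface are illustrative:

# /etc/systemd/system/promisc-eth0.service (illustrative name)
[Unit]
Description=Enable promiscuous mode on eth0
After=network-online.target

[Service]
Type=oneshot
# /usr/bin/env is used so the unit works regardless of where the ip binary lives
ExecStart=/usr/bin/env ip link set eth0 promisc on

[Install]
WantedBy=multi-user.target

# Enable the unit:
systemctl daemon-reload
systemctl enable --now promisc-eth0.service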

3. Verify Your SPAN/Mirror/TAP Configuration

This is the most common cause of no traffic. Double-check your network switch or hardware tap configuration to ensure:

  • The correct source ports (where your PBX/SBC is connected) are being monitored.
  • The correct destination port (where your VoIPmonitor sensor is connected) is configured.
  • If you are monitoring traffic across different VLANs, ensure your mirror port is configured to carry all necessary VLAN tags (often called "trunk" mode).
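
If VLAN handling is in doubt, a quick check like this shows whether tagged frames actually arrive on the mirror port (replace eth0):

# Print Ethernet headers so VLAN tags are visible; stop after 20 tagged packets
tcpdump -i eth0 -nn -e -c 20 vlan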

Check the VoIPmonitor Configuration

If tshark sees traffic but VoIPmonitor does not, the problem is almost certainly in voipmonitor.conf.

1. Check the interface directive
Make sure the interface parameter in /etc/voipmonitor.conf exactly matches the interface where you see traffic with tshark. For example: interface = eth0.
2. Check the sipport directive
By default, VoIPmonitor only listens on port 5060. If your PBX uses a different port for SIP, you must add it. For example:
sipport = 5060,5080
3. Check for a restrictive filter
If you have a BPF filter configured, ensure it is not accidentally excluding the traffic you want to see. For debugging, try commenting out the filter line entirely and restarting the sensor.
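
A quick way to review all three directives at once (this simply prints the active, uncommented lines):

# Show the capture-related directives currently set in the configuration
grep -E "^(interface|sipport|filter)" /etc/voipmonitor.conf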

Check GUI Capture Rules (Causing Call Stops)

If tshark sees SIP traffic and the sniffer configuration appears correct, but the sensor stops processing calls (traffic is visible on the network interface yet no new CDRs appear), GUI capture rules may be the culprit.

Capture rules configured in the GUI can instruct the sniffer to ignore ("skip") all processing for matched calls. This includes calls matching specific IP addresses or telephone number prefixes.

1. Review existing capture rules
Navigate to GUI → Capture rules and examine all rules for any that might be blocking your traffic.
Look specifically for rules with the Skip option set to ON (displayed as "Skip: ON"). The Skip option instructs the sniffer to completely ignore matching calls (no files, RTP analysis, or CDR creation).
2. Test by temporarily removing all capture rules
To isolate the issue, first create a backup of your GUI configuration:
  • Navigate to Tools → Backup & Restore → Backup GUI → Configuration tables
  • This saves your current settings including capture rules
  • Delete all capture rules from the GUI
  • Click the Apply button to save changes
  • Reload the sniffer by clicking the green "reload sniffer" button in the control panel
  • Test if calls are now being processed correctly
  • If resolved, restore the configuration from the backup and systematically investigate the rules to identify the problematic one
3. Identify the problematic rule
  • After restoring your configuration, remove rules one at a time and reload the sniffer after each removal
  • When calls start being processed again, you have identified the problematic rule
  • Review the rule's match criteria (IP addresses, prefixes, direction) against your actual traffic pattern
  • Adjust the rule's conditions or Skip setting as needed
4. Verify rules are reloaded
After making changes to capture rules, remember that changes are not automatically applied to the running sniffer. You must click the "reload sniffer" button in the control panel, or the rules will continue using the previous configuration.

For more information on capture rules, see Capture_rules.

Troubleshoot MySQL/MariaDB Database Connection Errors

If you see "Connection refused (111)" errors or the sensor cannot connect to your database server, the issue is with the MySQL/MariaDB database connection configuration in /etc/voipmonitor.conf.

Error 111 (Connection refused) indicates that the database server is reachable on the network, but no MySQL/MariaDB service is listening on the specified port, or the connection is being blocked by a firewall. This commonly happens after migrations when the database server IP address has changed.

Symptoms and Common Errors

Error Message | Likely Cause
Can't connect to MySQL server on 'IP' (111) | Wrong host/port or service not running
Access denied for user 'user'@'host' | Wrong username or password
Unknown database 'voipmonitor' | Wrong database name

Diagnostic Steps

1. Check for database connection errors in sensor logs
# For Debian/Ubuntu (systemd journal)
journalctl -u voipmonitor --since "1 hour ago" | grep -iE "mysql|database|connection|can.t connect"

# For systems using traditional syslog
tail -f /var/log/syslog | grep voipmonitor | grep -iE "mysql|database|connection"

# For CentOS/RHEL/AlmaLinux
tail -f /var/log/messages | grep voipmonitor | grep -iE "mysql|database|connection"
2. Verify database connection parameters in voipmonitor.conf
# Database Connection Parameters
mysqlhost = 192.168.1.10       # IP address or hostname of MySQL/MariaDB server
mysqlport = 3306               # TCP port of the database server (default: 3306)
mysqlusername = root           # Database username
mysqlpassword = your_password  # Database password
mysqldatabase = voipmonitor    # Database name
3. Test MySQL connectivity from the sensor host
# Test basic TCP connectivity (replace IP and port as needed)
nc -zv 192.168.1.10 3306

# Or using telnet
telnet 192.168.1.10 3306

If you see "Connection refused", the database service is not running or not listening on that port.

4. Test MySQL authentication using credentials from voipmonitor.conf
mysql -h 192.168.1.10 -P 3306 -u root -p'your_password' voipmonitor

Commands to run inside mysql client to verify:

-- Check if connected correctly
SELECT USER(), CURRENT_USER();

-- Check database exists
SHOW DATABASES LIKE 'voipmonitor';

-- Test write access
USE voipmonitor;
SHOW TABLES;
EXIT;
5. Compare with a working sensor's configuration

If you have other sensors that successfully connect to the database, compare their configuration files:

diff <(grep -E "^mysql" /etc/voipmonitor.conf) <(grep -E "^mysql" /path/to/working/sensor/voipmonitor.conf)
6. Check firewall and network connectivity
# Test network reachability
ping -c 4 192.168.1.10

# Check if MySQL port is reachable
nc -zv 192.168.1.10 3306

# Check firewall rules (if using firewalld)
firewall-cmd --list-ports

# Check firewall rules (if using iptables)
iptables -L -n | grep 3306
7. Verify MySQL/MariaDB service is running

On the database server, check if the service is active:

# Check MySQL/MariaDB service status
systemctl status mariadb    # or systemctl status mysql

# Restart service if needed
systemctl restart mariadb

# Check which port MySQL is listening on
ss -tulpn | grep mysql
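
If the service is running but listening only on 127.0.0.1, remote sensors will still see "Connection refused". Checking the bind-address setting is a worthwhile extra step (configuration file locations vary by distribution):

# Look for a restrictive bind-address in the MySQL/MariaDB configuration
grep -r "bind-address" /etc/mysql/ /etc/my.cnf /etc/my.cnf.d/ 2>/dev/null

# A value of 127.0.0.1 means only local connections are accepted; set it to
# 0.0.0.0 (or the server's LAN IP) and restart the database to allow remote sensors.
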
8. Apply configuration changes and restart the sensor
# Restart the VoIPmonitor service to apply changes
systemctl restart voipmonitor

# Alternatively, reload without full restart (if supported in your version)
echo 'reload' | nc 127.0.0.1 5029

# Verify the service started successfully
systemctl status voipmonitor

# Check logs for database connection confirmation
journalctl -u voipmonitor -n 20

Common Troubleshooting Scenarios

Scenario | Symptom | Solution
Database server IP changed | "Can't connect to MySQL server on '10.1.1.10' (111)" | Update mysqlhost in voipmonitor.conf
Wrong credentials | "Access denied for user" | Verify and update mysqlusername and mysqlpassword
Database service not running | "Connection refused (111)" | Start the service: systemctl start mariadb
Firewall blocking port | nc shows "refused" but MySQL is running | Open port 3306 in the firewall
Localhost vs remote confusion | Works locally but fails from the sensor | Use the actual IP address instead of localhost
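
If the credentials are correct but access is still denied when connecting from the sensor, the database account may only be allowed to connect from localhost. The following is a minimal sketch of granting remote access, run on the database server; the user name, password, and subnet are placeholders to adapt:

-- Allow the voipmonitor user to connect from the sensor's subnet
CREATE USER 'voipmonitor'@'192.168.1.%' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON voipmonitor.* TO 'voipmonitor'@'192.168.1.%';
FLUSH PRIVILEGES;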

For more detailed information about all mysql* configuration parameters, see Sniffer_configuration#Database_Configuration.

Check for Storage Hardware Errors (HEAP FULL / packetbuffer Issues)

If the sensor is crashing with "HEAP FULL" errors or showing "packetbuffer: MEMORY IS FULL" messages, you must distinguish between an actual storage hardware failure (which requires disk replacement) and a performance bottleneck (which requires tuning).

1. Check kernel message buffer for storage errors
dmesg -T | grep -iE "ext4-fs error|i/o error|nvram warning|ata.*failed|sda.*error|disk failure|smart error" | tail -50

Look for these hardware error indicators:

  • ext4-fs error - Filesystem corruption or disk failure
  • I/O error or BUG: soft lockup - Disk read/write failures
  • NVRAM WARNING: nvram_check: failed - RAID controller battery/capacitor issues
  • ata.*: FAILED - Hard drive SMART failure
  • Buffer I/O error - Disk unable to complete operations

If you see ANY of these errors:

  • The storage subsystem is failing and likely needs hardware replacement
  • Do not attempt performance tuning - replace the failed disk/RAID first
  • Check SMART status: smartctl -a /dev/sda
  • Check RAID health: cat /proc/mdstat or RAID controller tools
2. If dmesg is clean of errors → Performance Bottleneck

If the kernel logs show no storage errors, the issue is a performance bottleneck (disk too slow, network latency, etc.).

Check disk I/O performance:

# Current I/O wait (should be < 10% normally)
iostat -x 5

# Detailed disk stats
dstat -d

# Real-time disk latency (run this from the spool directory, e.g. /var/spool/voipmonitor)
ioping -c 10 .

Check NFS latency (if using NFS storage):

# Test NFS read/write latency (add oflag=direct to the dd command for a cache-free write measurement)
time dd if=/dev/zero of=/var/spool/voipmonitor/testfile bs=1M count=100
time cat /var/spool/voipmonitor/testfile > /dev/null
rm /var/spool/voipmonitor/testfile

# Check NFS mount options
mount | grep nfs

Common performance solutions:

  • Use SSD/NVMe for VoIPmonitor spool directory
  • Ensure proper NIC queue settings for high-throughput NFS
  • Check network switch port configuration for NFS
  • Review Scaling guide for detailed optimization

See also IO_Measurement for comprehensive disk benchmarking tools.

Check for OOM (Out of Memory) Issues

If VoIPmonitor suddenly stops processing CDRs and a service restart temporarily restores functionality, the system may be experiencing OOM (Out of Memory) killer events. The Linux OOM killer terminates processes when available RAM is exhausted, and MySQL (mysqld) is a common target due to its memory-intensive nature.

1. Check for OOM killer events in kernel logs
# For Debian/Ubuntu
grep -i "out of memory\|killed process" /var/log/syslog | tail -20

# For CentOS/RHEL/AlmaLinux
grep -i "out of memory\|killed process" /var/log/messages | tail -20

# Also check dmesg:
dmesg | grep -i "killed process" | tail -10

Typical OOM killer messages look like:

Out of memory: Kill process 1234 (mysqld) score 123 or sacrifice child
Killed process 1234 (mysqld) total-vm: 12345678kB, anon-rss: 1234567kB
2. Monitor current memory usage
# Check available memory (look for low 'available' or 'free' values)
free -h

# Check per-process memory usage (sorted by RSS)
ps aux --sort=-%mem | head -15

# Check MySQL memory usage in bytes
cat /proc/$(pgrep mysqld)/status | grep -E "VmSize|VmRSS"

Warning signs:

  • Available memory consistently below 500MB during operation
  • MySQL consuming most of the available RAM
  • Swap usage near 100% (if swap is enabled)
  • Frequent process restarts without clear error messages
3. First Fix: Check and correct innodb_buffer_pool_size

Before upgrading hardware, verify that innodb_buffer_pool_size is not set too high. This is a common cause of OOM incidents.

Calculate the correct buffer pool size for a server running both VoIPmonitor and MySQL on the same host:

Formula: innodb_buffer_pool_size = (Total RAM - VoIPmonitor memory - OS overhead) / 2

Example for a 32GB server:
- Total RAM: 32GB
- VoIPmonitor process memory (check with ps aux): ~2GB
- OS + other services overhead: ~2GB
- Available for buffer pool: 28GB
- Recommended innodb_buffer_pool_size = 14G
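
The same arithmetic can be done directly on the host. The one-liner below is only a sketch: it assumes roughly 4 GB combined for the VoIPmonitor process and OS overhead, as in the example above, so adjust the constant to your own measurements:

# Print a rough innodb_buffer_pool_size suggestion based on total RAM (in GB)
free -g | awk '/^Mem:/ {printf "suggested innodb_buffer_pool_size = %dG\n", ($2 - 4) / 2}'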

Edit the MariaDB configuration file:

# Common locations: /etc/mysql/my.cnf, /etc/mysql/mariadb.conf.d/50-server.cnf

innodb_buffer_pool_size = 14G  # Adjust based on your calculation

Restart MariaDB to apply:

systemctl restart mariadb  # or systemctl restart mysql
4. Second Fix: Reduce VoIPmonitor buffer memory usage

VoIPmonitor allocates significant memory for packet buffers. The total buffer memory is calculated based on:

Parameter | Default | Description
ringbuffer | 50 MB | Ring buffer size per interface (recommended ≥500 MB for >100 Mbit traffic)
max_buffer_mem | 2000 MB | Maximum buffer memory limit

Total formula: Approximate total = (ringbuffer × number of interfaces) + max_buffer_mem
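
For example, two monitored interfaces with ringbuffer = 500 and max_buffer_mem = 2000 work out to roughly (2 × 500 MB) + 2000 MB ≈ 3 GB of buffer memory.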

To reduce VoIPmonitor memory usage:

# Edit /etc/voipmonitor.conf

# Reduce ringbuffer for each interface (e.g., from 50 to 20)
ringbuffer = 20

# Reduce maximum buffer memory (e.g., from 2000 to 1000)
max_buffer_mem = 1000

# Alternatively, reduce the number of sniffing interfaces if not all are needed
interface = eth0,eth1  # Instead of eth0,eth1,eth2,eth3

After making changes:

systemctl restart voipmonitor

Important notes:

  • Reducing ringbuffer may increase packet loss during traffic spikes
  • Reducing max_buffer_mem affects how many packets can be buffered before being written to disk
  • Monitor packet loss statistics in the GUI after reducing buffers to ensure acceptable performance
5. Solution: Increase physical memory (if buffer tuning is insufficient)

If correcting both MySQL and VoIPmonitor buffer settings does not resolve the OOM issues, upgrade the server's physical RAM. After upgrading:

  • Verify memory improvements with free -h
  • Recalculate and adjust innodb_buffer_pool_size
  • Re-tune ringbuffer and max_buffer_mem
  • Monitor for several days to ensure OOM events stop

Sensor Upgrade Fails with "Permission denied" from /tmp

If the sensor upgrade process fails with "Permission denied" errors when executing scripts from the /tmp directory, or the service fails to restart after upgrade, the /tmp partition may be mounted with the noexec flag.

The noexec mount option prevents execution of any script or binary from the /tmp directory for security reasons. However, the VoIPmonitor sensor upgrade process uses /tmp for temporary script execution.

1. Check the mount options for /tmp
mount | grep /tmp

Look for the noexec flag in the mount options:

/dev/sda2 on /tmp type ext4 rw,relatime,noexec,nosuid,nodev
2. Remount /tmp without noexec (temporary fix)
mount -o remount,exec /tmp

# Verify the change:
mount | grep /tmp

The output should no longer contain noexec.

3. Make the change permanent (edit /etc/fstab)
nano /etc/fstab

Remove the noexec option from the /tmp line:

# Before:
/dev/sda2  /tmp  ext4  rw,relatime,noexec,nosuid,nodev  0 0

# After (remove noexec):
/dev/sda2  /tmp  ext4  rw,relatime,nosuid,nodev  0 0

If /tmp is a separate partition, remount for changes to take effect:

mount -o remount /tmp
4. Re-run the sensor upgrade

After fixing the mount options, retry the sensor upgrade process.

"No space left on device" Despite Disks Having Free Space

If system services (like php-fpm, voipmonitor, or commands like screen) fail with a "No space left on device" error even though df -h shows sufficient disk space, the issue is likely with temporary filesystems (/tmp, /run) filling up, not with main disk storage.

1. Check usage of temporary filesystems
# Check /tmp usage
df -h /tmp

# Check /run usage
df -h /run

If /tmp or /run show 100% usage despite main filesystems having free space, these temporary filesystems need to be cleaned.

2. Check what is consuming space
# Find large files in /tmp
du -sh /tmp/* 2>/dev/null | sort -hr | head -20

# Check journal disk usage
journalctl --disk-usage
3. Immediate cleanup of journal logs

System journal logs stored in /run/log/journal/ can fill up the /run filesystem.

# Limit journal to 100MB total size
sudo journalctl --vacuum-size=100M

# Or limit by time (keep only last 2 days)
sudo journalctl --vacuum-time=2d
4. Permanent solution - Configure journal rotation

Edit /etc/systemd/journald.conf:

[Journal]
SystemMaxUse=100M
MaxRetentionSec=1month

Apply changes:

sudo systemctl restart systemd-journald
5. Quick fix - System reboot

The quickest way to free space in /tmp and /run is a system reboot, as these filesystems are cleared on each boot.
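
If a reboot is not possible right away, systemd's tmpfiles cleaner can prune aged entries from these locations according to the distribution's tmpfiles.d rules (results vary, since only files matching those rules are removed):

# Clean /tmp and /run according to the system's tmpfiles.d configuration
sudo systemd-tmpfiles --clean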

Check VoIPmonitor Logs for General Errors

After addressing the specific issues above, check the system logs for other error messages from the sensor process that may reveal additional problems.

# For Debian/Ubuntu
tail -f /var/log/syslog | grep voipmonitor

# For CentOS/RHEL/AlmaLinux
tail -f /var/log/messages | grep voipmonitor

Common errors to look for:

  • "pcap_open_live(eth0) error: eth0: No such device" - Wrong interface name
  • "Permission denied" - Sensor not running with sufficient privileges
  • Messages about connection issues - See database troubleshooting
  • Messages about dropping packets - See Scaling guide

Benign Database Errors When Features Are Disabled

Some VoIPmonitor features may generate harmless database errors when those features are not enabled in your configuration. These errors are benign and can be safely ignored.

Common Benign Error: Missing Tables

If you see MySQL errors stating that a table does not exist (e.g., "Table 'voipmonitor.ss7' doesn't exist") even though the corresponding feature is disabled, this is expected behavior.

Common examples:

  • Errors about the ss7 table when ss7 = no in voipmonitor.conf
  • Errors about the register_failed, register_state, or sip_msg tables when those features are disabled

Solution: Ignore or Suppress in Monitoring

Since these errors indicate that a feature is simply not active, they do not impact system functionality:

  1. Do not change the configuration to fix these errors
  2. Add monitoring exceptions to suppress warnings for table-not-found errors (MySQL error code 1146)
  3. Configure alerting systems to exclude these specific SQL errors from notifications

When to Take Action

You only need to take action if:

  • You actually want to use the feature (enable the corresponding configuration option)
  • Errors persist about tables for features that are explicitly enabled in voipmonitor.conf

Appendix: tshark Display Filter Syntax for SIP

When using tshark to analyze SIP traffic, it is important to use the correct Wireshark display filter syntax.

Basic SIP Filters

# Show all SIP INVITE messages
tshark -r capture.pcap -Y "sip.Method == INVITE"

# Show all SIP messages (any method)
tshark -r capture.pcap -Y "sip"

# Show SIP and RTP traffic
tshark -r capture.pcap -Y "sip || rtp"

Search for Specific Phone Number or Text

# Find calls containing a specific phone number (e.g., 5551234567)
tshark -r capture.pcap -Y 'sip contains "5551234567"'

# Find INVITE messages for a specific number
tshark -r capture.pcap -Y 'sip.Method == INVITE && sip contains "5551234567"'

Extract Call-ID from Matching Calls

# Get Call-ID for calls matching a phone number
tshark -r capture.pcap -Y 'sip.Method == INVITE && sip contains "5551234567"' -T fields -e sip.Call-ID

# Get Call-ID along with From and To headers
tshark -r capture.pcap -Y 'sip.Method == INVITE' -T fields -e sip.Call-ID -e sip.from.user -e sip.to.user

Filter by IP Address

# SIP traffic from a specific source IP
tshark -r capture.pcap -Y "sip && ip.src == 192.168.1.100"

# SIP traffic between two hosts
tshark -r capture.pcap -Y "sip && ip.addr == 192.168.1.100 && ip.addr == 10.0.0.50"

Filter by SIP Response Code

# Show all 200 OK responses
tshark -r capture.pcap -Y "sip.Status-Code == 200"

# Show all 4xx and 5xx error responses
tshark -r capture.pcap -Y "sip.Status-Code >= 400"

# Show 486 Busy Here responses
tshark -r capture.pcap -Y "sip.Status-Code == 486"

Important Syntax Notes

Syntax Element | Correct Usage | Notes
Field names | sip.Method, sip.Call-ID | Case-sensitive
String matching | sip contains "text" | Use the contains keyword
String quotes | Double quotes "..." | Not single quotes
Boolean operators | &&, ||, ! | AND, OR, NOT

For a complete reference, see the Wireshark SIP Display Filter Reference.

AI Summary for RAG

Summary: Systematic troubleshooting guide for VoIPmonitor sensor issues when calls are not being captured. Covers service startup problems (binary renamed after crash, unresponsive after GUI update), network traffic verification using tshark, GUI sensor statistics (packet drops counter), promiscuous mode requirements (needed for SPAN/RSPAN but not for ERSPAN/GRE/TZSP), voipmonitor.conf configuration checks (interface, sipport, filter), GUI capture rules with Skip option, database connection errors (Error 111 after migration), HEAP FULL errors (hardware vs performance issues), OOM killer problems (innodb_buffer_pool_size and ringbuffer/max_buffer_mem tuning), upgrade failures due to /tmp noexec flag, and "no space left" errors caused by full /tmp or /run filesystems.

Keywords: troubleshooting, no calls, tshark, promiscuous mode, SPAN, ERSPAN, GRE, TZSP, voipmonitor.conf, interface, sipport, capture rules, Skip, packet drops, sensor statistics, Settings → Sensors, ringbuffer, max_buffer_mem, OOM killer, innodb_buffer_pool_size, HEAP FULL, Connection refused 111, noexec, /tmp, journal logs, no space left on device

Key Questions:

  • Why is VoIPmonitor not recording any calls?
  • How do I check if VoIP traffic is reaching my sensor?
  • How do I check for packet drops in the GUI sensor statistics?
  • What is the acceptable value for # packet drops in Settings → Sensors?
  • How do I diagnose packet drops in VoIPmonitor?
  • Do I need promiscuous mode for ERSPAN or GRE tunnels?
  • How do I fix "Connection refused (111)" database errors?
  • VoIPmonitor crashes with HEAP FULL error, what should I check?
  • How do I fix OOM killer issues on VoIPmonitor server?
  • Why does sensor upgrade fail with permission denied from /tmp?
  • "No space left on device" but disk has free space, what to check?