Sniffer Troubleshooting
This page covers common VoIPmonitor sniffer/sensor problems organized by symptom. For configuration reference, see Sniffer_configuration. For performance tuning, see Scaling.
Critical First Step: Is Traffic Reaching the Interface?
⚠️ Warning: Before any sensor tuning, verify packets are reaching the network interface. If packets aren't there, no amount of sensor configuration will help.
# Check for SIP traffic on the capture interface
tcpdump -i eth0 -nn "host <PROBLEMATIC_IP> and port 5060" -c 10
# If no packets: Network/SPAN issue - contact network admin
# If packets visible: Proceed with sensor troubleshooting below
Quick Diagnostic Checklist
| Check | Command | Expected Result |
|---|---|---|
| Service running | systemctl status voipmonitor | Active (running) |
| Traffic on interface | tshark -i eth0 -c 5 -Y "sip" | SIP packets displayed |
| Interface errors | ip -s link show eth0 | No RX errors/drops |
| Promiscuous mode | ip link show eth0 | PROMISC flag present |
| Logs | grep voipmonitor /var/log/syslog | No critical errors |
| GUI rules | Settings → Capture Rules | No unexpected "Skip" rules |
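These checks can be scripted for a quick first pass. A minimal sketch, assuming eth0 as the capture interface and journald logging; adjust both to your environment:
#!/bin/bash
# quick-check.sh - first-pass sensor health check (hypothetical helper; assumes eth0)
IFACE=eth0
echo "== Service status =="
systemctl is-active voipmonitor
echo "== Interface flags (PROMISC expected for SPAN monitoring) =="
ip link show "$IFACE" | head -1
echo "== RX errors/drops =="
ip -s link show "$IFACE"
echo "== Last 20 sensor log lines =="
journalctl -u voipmonitor -n 20 --no-pager
echo "== Waiting up to 15s for 5 SIP packets on $IFACE =="
timeout 15 tshark -i "$IFACE" -c 5 -Y "sip"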
No Calls Being Recorded
Service Not Running
# Check status
systemctl status voipmonitor
# View recent logs
journalctl -u voipmonitor --since "10 minutes ago"
# Start/restart
systemctl restart voipmonitor
Common startup failures:
- Interface not found: Check that interface in voipmonitor.conf matches the ip a output (see the check below)
- Port already in use: Another process is using the management port
- License issue: See License for activation problems
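To confirm the interface mismatch case quickly, a small check like the following compares the configured interface with what the kernel actually exposes (a sketch, assuming a single configured interface and the default config path):
# Compare the configured interface with the interfaces present on the system
CONF_IF=$(grep -E "^interface[[:space:]]*=" /etc/voipmonitor.conf | head -1 | cut -d= -f2 | tr -d ' ')
echo "Configured interface: $CONF_IF"
if ip link show "$CONF_IF" >/dev/null 2>&1; then
    echo "OK: interface exists"
else
    echo "PROBLEM: '$CONF_IF' not found - available interfaces:"
    ip -o link show | awk -F': ' '{print $2}'
fi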
Wrong Interface or Port Configuration
# Check current config
grep -E "^interface|^sipport" /etc/voipmonitor.conf
# Example correct config:
# interface = eth0
# sipport = 5060
GUI Capture Rules Blocking
Navigate to Settings → Capture Rules and check for rules with action "Skip" that may be blocking calls. Rules are processed in order - a Skip rule early in the list will block matching calls.
See Capture_rules for detailed configuration.
SPAN/Mirror Not Configured
If tcpdump shows no traffic:
- Verify switch SPAN/mirror port configuration
- Check that both directions (ingress + egress) are mirrored
- Confirm VLAN tagging is preserved if needed
- Test physical connectivity (cable, port status)
See Sniffing_modes for SPAN, RSPAN, and ERSPAN configuration.
Filter Parameter Too Restrictive
If filter is set in voipmonitor.conf, it may exclude traffic:
# Check filter
grep "^filter" /etc/voipmonitor.conf
# Temporarily disable to test
# Comment out the filter line and restart
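A minimal sketch of the disable-and-test cycle, assuming the default config path (back up the file first):
# Back up the config, comment out the filter line, and restart
cp /etc/voipmonitor.conf /etc/voipmonitor.conf.bak
sed -i 's/^filter[[:space:]]*=/#&/' /etc/voipmonitor.conf
systemctl restart voipmonitor
# Verify calls now appear, then restore the original file:
# cp /etc/voipmonitor.conf.bak /etc/voipmonitor.conf && systemctl restart voipmonitor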
Missing id_sensor Parameter
Symptom: SIP packets visible in Capture/PCAP section but missing from CDR, SIP messages, and Call flow.
Cause: The id_sensor parameter is not set in voipmonitor.conf. It is required to associate captured packets with CDR records in the database.
Solution:
# Check if id_sensor is set
grep "^id_sensor" /etc/voipmonitor.conf
# Add or correct the parameter
echo "id_sensor = 1" >> /etc/voipmonitor.conf
# Restart the service
systemctl restart voipmonitor
💡 Tip: Use a unique numeric identifier (1-65535) for each sensor. Essential for multi-sensor deployments. See id_sensor documentation.
Missing Audio / RTP Issues
One-Way Audio (Asymmetric Mirroring)
Symptom: SIP recorded but only one RTP direction captured.
Cause: SPAN port configured for only one direction.
Diagnosis:
# Count RTP packets per direction
tshark -i eth0 -Y "rtp" -T fields -e ip.src -e ip.dst | sort | uniq -c
If one direction shows 0 or very few packets, configure the switch to mirror both ingress and egress traffic.
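If the overall count is too noisy, the same check can be narrowed to the endpoints of one problematic call (both IPs below are placeholders; substitute the real addresses):
# Count RTP packets per direction for a single host pair
tshark -i eth0 -f "host 10.0.0.10 and host 10.0.0.20" -Y "rtp" -T fields -e ip.src -e ip.dst | sort | uniq -c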
RTP Not Associated with Call
Symptom: Audio plays in sniffer but not in GUI, or RTP listed under wrong call.
Possible causes:
1. SIP and RTP on different interfaces/VLANs:
# voipmonitor.conf - enable automatic RTP association
auto_enable_use_blocks = yes
2. NAT not configured:
# voipmonitor.conf - for NAT scenarios
natalias = <public_ip> <private_ip>
# If not working, try reversed order:
natalias = <private_ip> <public_ip>
3. External device modifying media ports:
If SDP advertises one port but RTP arrives on different port (SBC/media server issue):
# Compare SDP ports vs actual RTP
tshark -r call.pcap -Y "sip.Method == INVITE" -V | grep "m=audio"
tshark -r call.pcap -Y "rtp" -T fields -e udp.dstport | sort -u
If ports don't match, the external device must be configured to preserve SDP ports - VoIPmonitor cannot compensate.
RTP Incorrectly Associated with Wrong Call (PBX Port Reuse)
Symptom: RTP streams from one call appear associated with a different CDR when your PBX aggressively reuses the same IP:port across multiple calls.
Cause: When PBX reuses media ports, VoIPmonitor may incorrectly correlate RTP packets to the wrong call based on weaker correlation methods.
Solution: Enable rtp_check_both_sides_by_sdp to require verification of both source and destination IP:port against SDP:
# voipmonitor.conf - require both source and destination to match SDP
rtp_check_both_sides_by_sdp = yes
# Alternative (strict) mode - allows initial unverified packets
rtp_check_both_sides_by_sdp = strict
⚠️ Warning: Enabling this may prevent RTP association for calls using NAT, as the source IP:port will not match the SDP. Use natalias mappings or the strict setting to mitigate this.
Snaplen Truncation
Symptom: Large SIP messages truncated, incomplete headers.
Solution:
# voipmonitor.conf - increase packet capture size
snaplen = 8192
For Kamailio siptrace, also check trace_msg_fragment_size in Kamailio config. See snaplen documentation.
PACKETBUFFER Saturation
Symptom: Log shows PACKETBUFFER: memory is FULL, truncated RTP recordings.
⚠️ Warning: This alert refers to VoIPmonitor's internal packet buffer (max_buffer_mem), NOT system RAM. High system memory availability does not prevent this error. The root cause is always a downstream bottleneck (disk I/O or CPU) preventing packets from being processed fast enough.
Before testing solutions, gather diagnostic data:
- Check sensor logs: /var/log/syslog (Debian/Ubuntu) or /var/log/messages (RHEL/CentOS)
- Generate a debug log via the GUI: Tools → Generate debug log
Diagnose: I/O vs CPU Bottleneck
⚠️ Warning: Do not guess the bottleneck source. Use proper diagnostics first to identify whether the issue is disk I/O, CPU, or database-related. Disabling storage as a test is valid but should be used to confirm findings, not as the primary diagnostic method.
Step 1: Read the VoIPmonitor Syslog Status Line
VoIPmonitor outputs a status line every 10 seconds. This is your first diagnostic tool:
# Monitor in real-time
journalctl -u voipmonitor -f
# or
tail -f /var/log/syslog | grep voipmonitor
Example status line:
calls[424] PS[C:4 S:41 R:13540] SQLq[C:0 M:0] heap[45|30|20] comp[48] [25.6Mb/s] t0CPU[85%] t1CPU[12%] t2CPU[8%] tacCPU[8|8|7|7%] RSS/VSZ[365|1640]MB
Key metrics for bottleneck identification:
| Metric | What It Indicates | I/O Bottleneck Sign | CPU Bottleneck Sign |
|---|---|---|---|
| heap[A\|B\|C] | Buffer fill % (primary / secondary / processing) | High A with low t0CPU | High A with high t0CPU |
| t0CPU[X%] | Packet capture thread (single-core, cannot parallelize) | Low (<50%) | High (>80%) |
| comp[X] | Active compression threads | Very high (maxed out) | Normal |
| SQLq[C:X M:Y] | Pending SQL queries | Growing = database bottleneck | Stable |
| tacCPU[...] | TAR compression threads | All near 100% = compression bottleneck | Normal |
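To watch the trend rather than a single sample, the status lines can be filtered down to the heap metric (a simple sketch assuming journald logging; use /var/log/syslog if your system logs there instead):
# Follow only the heap[...] metric from the live status lines
journalctl -u voipmonitor -f | grep --line-buffered -o 'heap\[[^]]*\]'
# Or summarize the last hour
journalctl -u voipmonitor --since "1 hour ago" | grep -o 'heap\[[^]]*\]' | tail -50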
For interpretation, map these observations against the decision matrix in Step 4 below.
Step 2: Linux I/O Diagnostics
Use these standard Linux tools to confirm I/O bottleneck:
Install required tools:
# Debian/Ubuntu
apt install sysstat iotop ioping
# CentOS/RHEL
yum install sysstat iotop ioping
2a) iostat - Disk utilization and wait times
# Run for 10 intervals of 2 seconds
iostat -xz 2 10
Key output columns:
Device r/s w/s rkB/s wkB/s await %util
sda 12.50 245.30 50.00 1962.40 45.23 98.50
| Column | Description | Problem Indicator |
|---|---|---|
| %util | Device utilization percentage | > 90% = disk saturated |
| await | Average I/O wait time (ms) | > 20 ms (SSD) or > 50 ms (HDD) = high latency |
| w/s | Writes per second | Compare against the disk's rated IOPS |
2b) iotop - Per-process I/O usage
# Show I/O by process (run as root)
iotop -o
Look for voipmonitor or mysqld dominating I/O. If voipmonitor shows high DISK WRITE while iostat reports %util near 100%, the disk cannot keep up.
2c) ioping - Quick latency check
# Test latency on VoIPmonitor spool directory
cd /var/spool/voipmonitor
ioping -c 20 .
Expected results:
| Storage Type | Healthy Latency | Problem Indicator |
|---|---|---|
| NVMe SSD | < 0.5 ms | > 2 ms |
| SATA SSD | < 1 ms | > 5 ms |
| HDD (7200 RPM) | < 10 ms | > 30 ms |
Step 3: Linux CPU Diagnostics
3a) top - Overall CPU usage
# Press '1' to show per-core CPU
top
Look for:
- Individual CPU core at 100% (t0 thread is single-threaded)
- High %wa (I/O wait) vs. high %us/%sy (CPU-bound)
3b) Verify voipmonitor threads
# Show voipmonitor threads with CPU usage
top -H -p $(pgrep voipmonitor)
If one thread shows ~100% CPU while others are low, you have a CPU bottleneck on the capture thread (t0).
Step 4: Decision Matrix
| Observation | Likely Cause | Go To |
|---|---|---|
| heap high, t0CPU > 80%, iostat %util low | CPU Bottleneck | CPU Solution |
| heap high, t0CPU < 50%, iostat %util > 90% | I/O Bottleneck | I/O Solution |
| heap high, t0CPU < 50%, iostat %util < 50%, SQLq growing | Database Bottleneck | Database Solution |
| heap normal, comp maxed, tacCPU all ~100% | Compression Bottleneck (a form of I/O) | I/O Solution |
Step 5: Confirmation Test (Optional)
After identifying the likely cause with the tools above, you can confirm with a storage disable test:
# /etc/voipmonitor.conf - temporarily disable all storage
savesip = no
savertp = no
savertcp = no
savegraph = no
systemctl restart voipmonitor
# Monitor for 5-10 minutes during peak traffic
journalctl -u voipmonitor -f | grep heap
- If heap values drop to near zero → confirms an I/O bottleneck
- If heap values remain high → confirms a CPU bottleneck
⚠️ Warning: Remember to re-enable storage after testing! This test causes call recordings to be lost.
Solution: I/O Bottleneck
Immediate mitigations:
# /etc/voipmonitor.conf - reduce I/O pressure
pcap_dump_asyncwrite = yes # Async writes (default)
pcap_dump_writethreads = 1       # Initial number of write threads
pcap_dump_writethreads_max = 32 # Scale up to 32 threads
tar_maxthreads = 16 # More compression threads
Storage upgrades (in order of effectiveness):
| Solution | IOPS Improvement | Notes |
|---|---|---|
| NVMe SSD | 50-100x vs HDD | Best option, handles 10,000+ concurrent calls |
| SATA SSD | 20-50x vs HDD | Good option, handles 5,000+ concurrent calls |
| RAID 10 with BBU | 5-10x vs single disk | Enable WriteBack cache (requires battery backup) |
| Separate storage server | Variable | Use client/server mode |
Filesystem tuning (ext4):
# Check current mount options
mount | grep voipmonitor
# Recommended mount options for /var/spool/voipmonitor
# Add to /etc/fstab: noatime,data=writeback,barrier=0
# WARNING: barrier=0 requires battery-backed RAID
Verify improvement:
# After changes, monitor iostat
iostat -xz 2 10
# %util should drop below 70%, await should decrease
Solution: CPU Bottleneck
Identify CPU Bottleneck Using Manager Commands
VoIPmonitor provides manager commands to monitor thread CPU usage in real-time. This is essential for identifying which thread is saturated.
Connect to manager interface:
# Via Unix socket (local, recommended)
echo 'sniffer_threads' | nc -U /tmp/vm_manager_socket
# Via TCP port 5029 (remote or local)
echo 'sniffer_threads' | nc 127.0.0.1 5029
# Monitor continuously (every 2 seconds)
watch -n 2 "echo 'sniffer_threads' | nc -U /tmp/vm_manager_socket"
ℹ️ Note: TCP port 5029 is encrypted by default. For unencrypted access, set manager_enable_unencrypted = yes in voipmonitor.conf (security risk on public networks).
Example output:
t0 - binlog1 fifo pcap read ( 12345) : 78.5 FIFO 99 1234
t2 - binlog1 pb write ( 12346) : 12.3 456
rtp thread binlog1 binlog1 0 ( 12347) : 8.1 234
rtp thread binlog1 binlog1 1 ( 12348) : 6.2 198
t1 - binlog1 call processing ( 12349) : 4.5 567
tar binlog1 compression 0 ( 12350) : 3.2 89
Column interpretation:
| Column | Description |
|---|---|
| Thread name | Descriptive name (t0=capture, t1=call processing, t2=packet buffer write) |
| (TID) | Linux thread ID (useful for top -H -p <TID>) |
| CPU % | Current CPU usage percentage - key metric |
| Sched | Scheduler type (FIFO = real-time, empty = normal) |
| Priority | Thread priority |
| CS/s | Context switches per second |
Critical threads to watch:
| Thread | Role | If at 90-100% |
|---|---|---|
| t0 (pcap read) | Packet capture from NIC | Single-core limit reached! Cannot parallelize. Need DPDK/Napatech. |
| t2 (pb write) | Packet buffer processing | Processing bottleneck. Check t2CPU breakdown. |
| rtp thread | RTP packet processing | Increase rtpthreads_max |
| tar compression | PCAP archiving | I/O bottleneck (compression waiting for disk) |
| mysql store | Database writes | Database bottleneck. Check SQLq metric. |
⚠️ Warning: If the t0 thread is at 90-100%, you have hit the fundamental single-core capture limit. The t0 thread reads packets from the kernel and cannot be parallelized. Disabling features like jitterbuffer will NOT help - those run on different threads. The only solutions are kernel-bypass capture (DPDK) or a hardware capture card such as Napatech; see Kernel Bypass Solutions below.
Interpreting t2CPU Detailed Breakdown
The syslog status line shows t2CPU with detailed sub-metrics:
t2CPU[pb:10/ d:39/ s:24/ e:17/ c:6/ g:6/ r:7/ rm:24/ rh:16/ rd:19/]
| Code | Function | High Value Indicates |
|---|---|---|
| pb | Packet buffer output | Buffer management overhead |
| d | Dispatch | Structure creation bottleneck |
| s | SIP parsing | Complex/large SIP messages |
| e | Entity lookup | Call table lookup overhead |
| c | Call processing | Call state machine processing |
| g | Register processing | High REGISTER volume |
| r, rm, rh, rd | RTP processing stages | High RTP volume - increase rtpthreads |
Thread auto-scaling: VoIPmonitor automatically spawns additional threads when load increases:
- If d > 50% → SIP parsing thread (s) starts
- If s > 50% → Entity lookup thread (e) starts
- If e > 50% → Call/register/RTP threads start
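To see how many RTP worker threads are currently spawned, the same manager interface used above can be queried (assuming the socket path shown earlier):
# Count currently running RTP worker threads
echo 'sniffer_threads' | nc -U /tmp/vm_manager_socket | grep -c 'rtp thread'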
Configuration for High Traffic (>10,000 calls/sec)
# /etc/voipmonitor.conf
# Increase buffer to handle processing spikes (value in MB)
# 10000 = 10 GB - can go higher (20000, 30000+) if RAM allows
# Larger buffer absorbs I/O and CPU spikes without packet loss
max_buffer_mem = 10000
# Use IP filter instead of BPF (more efficient)
interface_ip_filter = 10.0.0.0/8
interface_ip_filter = 192.168.0.0/16
# Comment out any 'filter' parameter
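Before raising max_buffer_mem, confirm the host actually has the headroom; the value is dedicated RAM on top of MySQL, the GUI, and the OS:
# max_buffer_mem is in MB - check available memory first
free -m
# e.g. max_buffer_mem = 10000 needs roughly 10 GB of free RAM to be safe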
CPU Optimizations
# /etc/voipmonitor.conf
# Reduce jitterbuffer calculations to save CPU (keeps MOS-F2 metric)
jitterbuffer_f1 = no
jitterbuffer_f2 = yes
jitterbuffer_adapt = no
# If MOS metrics are not needed at all, disable everything:
# jitterbuffer_f1 = no
# jitterbuffer_f2 = no
# jitterbuffer_adapt = no
Kernel Bypass Solutions (Extreme Loads)
When t0 thread hits 100% on standard NIC, kernel bypass is the only solution:
| Solution | Type | CPU Reduction | Use Case |
|---|---|---|---|
| DPDK | Open-source | ~70% | Multi-gigabit on commodity hardware |
| Napatech | Hardware SmartNIC | >97% (< 3% at 10Gbit) | Extreme performance requirements |
Verify Improvement
# Monitor thread CPU after changes
watch -n 2 "echo 'sniffer_threads' | nc -U /tmp/vm_manager_socket | head -10"
# Or monitor syslog
journalctl -u voipmonitor -f
# t0CPU should drop, heap values should stay < 20%
ℹ️ Note: After changes, monitor syslog heap[A|B|C] values - should stay below 20% during peak traffic. See Syslog_Status_Line for detailed metric explanations.
Storage Hardware Failure
Symptom: Sensor shows disconnected (red X) with "DROPPED PACKETS" at low traffic volumes.
Diagnosis:
# Check disk health
smartctl -a /dev/sda
# Check RAID status (if applicable)
cat /proc/mdstat
mdadm --detail /dev/md0
Look for reallocated sectors, pending sectors, or RAID degraded state. Replace failing disk.
OOM (Out of Memory)
Identify OOM Victim
# Check for OOM kills
dmesg | grep -i "out of memory\|oom\|killed process"
journalctl --since "1 hour ago" | grep -i oom
MySQL Killed by OOM
Reduce InnoDB buffer pool:
# /etc/mysql/my.cnf
innodb_buffer_pool_size = 2G # Reduce from default
VoIPmonitor Killed by OOM
Reduce buffer sizes in voipmonitor.conf:
max_buffer_mem = 2000 # Reduce from default
ringbuffer = 50 # Reduce from default
Runaway External Process
# Find memory-hungry processes
ps aux --sort=-%mem | head -20
# Kill orphaned/runaway process
kill -9 <PID>
For servers limited to 16GB RAM or when experiencing repeated MySQL OOM kills:
# /etc/my.cnf or /etc/mysql/mariadb.conf.d/50-server.cnf
[mysqld]
# On 16GB server: 6GB buffer pool + 6GB MySQL overhead = 12GB total
# Leaves 4GB for OS + GUI, preventing OOM
innodb_buffer_pool_size = 6G
# Enable write buffering (may lose up to 1s of data on crash but reduces memory pressure)
innodb_flush_log_at_trx_commit = 2
Restart MySQL after changes:
systemctl restart mysql
# or
systemctl restart mariadb
SQL Queue Growth from Non-Call Data
If sip-register, sip-options, or sip-subscribe are enabled, non-call SIP messages (OPTIONS, REGISTER, SUBSCRIBE, NOTIFY) can accumulate in the database and cause the SQL queue to grow without bound. This increases MySQL memory usage and leads to OOM kills of mysqld.
⚠️ Warning: Even with reduced innodb_buffer_pool_size, SQL queue will grow indefinitely without cleanup of non-call data.
Solution: Enable automatic cleanup of old non-call data
# /etc/voipmonitor.conf
# cleandatabase=2555 automatically deletes partitions older than 7 years
# Covers: CDR, register_state, register_failed, and sip_msg (OPTIONS/SUBSCRIBE/NOTIFY)
cleandatabase = 2555
Restart the sniffer after changes:
systemctl restart voipmonitor
ℹ️ Note: See Data_Cleaning for detailed configuration options and other cleandatabase_* parameters.
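To confirm that non-call tables are what is consuming space, their size can be checked directly in MySQL (a sketch; run with appropriate MySQL credentials and adjust the schema name if it is not the default voipmonitor):
# Show how large the non-call tables have grown
mysql -e "SELECT table_name, ROUND((data_length+index_length)/1024/1024) AS size_mb FROM information_schema.tables WHERE table_schema='voipmonitor' AND table_name IN ('sip_msg','register_state','register_failed');"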
Service Startup Failures
Interface No Longer Exists
After OS upgrade, interface names may change (eth0 → ensXXX):
# Find current interface names
ip a
# Update all config locations
grep -r "interface" /etc/voipmonitor.conf /etc/voipmonitor.conf.d/
# Also check GUI: Settings → Sensors → Configuration
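A minimal sketch of fixing a renamed interface in place (ens192 is only an example name; back up the file first):
cp /etc/voipmonitor.conf /etc/voipmonitor.conf.bak
sed -i 's/^interface[[:space:]]*=.*/interface = ens192/' /etc/voipmonitor.conf
systemctl restart voipmonitor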
Missing Dependencies
# Install common missing package
apt install libpcap0.8 # Debian/Ubuntu
yum install libpcap # RHEL/CentOS
Network Interface Issues
Promiscuous Mode
Required for SPAN port monitoring:
# Enable
ip link set eth0 promisc on
# Verify
ip link show eth0 | grep PROMISC
ℹ️ Note: Promiscuous mode is NOT required for ERSPAN/GRE tunnels where traffic is addressed to the sensor.
Interface Drops
# Check for drops
ip -s link show eth0 | grep -i drop
# If drops present, increase ring buffer
ethtool -G eth0 rx 4096
Bonded/EtherChannel Interfaces
Symptom: False packet loss when monitoring bond0 or br0.
Solution: Monitor physical interfaces, not logical:
# voipmonitor.conf - use physical interfaces
interface = eth0,eth1
Network Offloading Issues
Symptom: Kernel errors like bad gso: type: 1, size: 1448
# Disable offloading on capture interface
ethtool -K eth0 gso off tso off gro off lro off
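ethtool changes do not persist across reboots. One common way to make them permanent is a small systemd oneshot unit (a sketch; the unit name is hypothetical - adjust the interface and the ethtool path for your distribution):
# /etc/systemd/system/capture-nic-tuning.service
[Unit]
Description=Disable NIC offloading on the capture interface
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K eth0 gso off tso off gro off lro off

[Install]
WantedBy=multi-user.target
Enable it with: systemctl daemon-reload && systemctl enable --now capture-nic-tuning.service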
Packet Ordering Issues
If SIP messages appear out of sequence:
First: Rule out Wireshark display artifact - disable "Analyze TCP sequence numbers" in Wireshark. See FAQ.
If genuine reordering: Usually caused by packet bursts in network infrastructure. Use tcpdump to verify packets arrive out of order at the interface. Work with network admin to implement QoS or traffic shaping. For persistent issues, consider dedicated capture card with hardware timestamping (see Napatech).
ℹ️ Note: For out-of-order packets in client/server mode (multiple sniffers), see Sniffer_distributed_architecture for pcap_queue_dequeu_window_length configuration.
Solutions for SPAN/Mirroring Reordering
If packets arrive out of order at the SPAN/mirror port (e.g., 302 responses before INVITE causing "000 no response" errors):
1. Configure switch to preserve packet order: Many switches allow configuring SPAN/mirror ports to maintain packet ordering. Consult your switch documentation for packet ordering guarantees in mirroring configuration.
2. Replace SPAN with TAP or packet broker: Unlike software-based SPAN mirroring, hardware TAPs and packet brokers guarantee packet order. Consider upgrading to a dedicated TAP or packet broker device for mission-critical monitoring.
Database Issues
SQL Queue Overload
Symptom: Growing SQLq metric, potential coredumps.
# voipmonitor.conf - reduce per-CDR SQL overhead
mysqlstore_concat_limit_cdr = 1000
cdr_check_exists_callid = 0
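To verify the effect, watch the SQLq counter in the status line; it should return to near zero between traffic spikes (assuming journald logging):
# Show the recent SQL queue trend
journalctl -u voipmonitor --since "15 minutes ago" | grep -o 'SQLq\[[^]]*\]' | tail -30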
Error 1062 - Lookup Table Limit
Symptom: Duplicate entry '16777215' for key 'PRIMARY'
Quick fix:
# voipmonitor.conf
cdr_reason_string_enable = no
See Database Troubleshooting for complete solution.
Bad Packet Errors
Symptom: bad packet with ether_type 0xFFFF detected on interface
Diagnosis:
# Run diagnostic (let run 30-60 seconds, then kill)
voipmonitor --check_bad_ether_type=eth0
# Find and kill the diagnostic process
ps ax | grep voipmonitor
kill -9 <PID>
Causes: corrupted packets, driver issues, VLAN tagging problems. Check ethtool -S eth0 for interface errors.
Useful Diagnostic Commands
tshark Filters for SIP
# All SIP INVITEs
tshark -r capture.pcap -Y "sip.Method == INVITE"
# Find specific phone number
tshark -r capture.pcap -Y 'sip contains "5551234567"'
# Get Call-IDs
tshark -r capture.pcap -Y "sip.Method == INVITE" -T fields -e sip.Call-ID
# SIP errors (4xx, 5xx)
tshark -r capture.pcap -Y "sip.Status-Code >= 400"
Interface Statistics
# Detailed NIC stats
ethtool -S eth0
# Watch packet rates
watch -n 1 'cat /proc/net/dev | grep eth0'
See Also
- Sniffer_configuration - Configuration parameter reference
- Sniffer_distributed_architecture - Client/server deployment
- Capture_rules - GUI-based recording rules
- Sniffing_modes - SPAN, ERSPAN, GRE, TZSP setup
- Scaling - Performance optimization
- Database_troubleshooting - Database issues
- FAQ - Common questions and Wireshark display issues
AI Summary for RAG
Summary: Troubleshooting guide for VoIPmonitor sniffer/sensor issues organized by symptom. CRITICAL FIRST STEP: Run tcpdump -i eth0 -nn "host <IP> and port 5060" before any sensor tuning - if no packets visible, it's a network/SPAN issue, not sensor. Main problem categories: (1) No calls recorded - check service status, interface config, sipport, GUI capture rules, SPAN configuration, missing id_sensor; (2) Missing audio - check for asymmetric mirroring (one-way SPAN), NAT config with natalias, auto_enable_use_blocks for SIP/RTP on different NICs; (3) PACKETBUFFER saturation - diagnose I/O vs CPU via the syslog status line (heap, t0CPU, SQLq) plus iostat/iotop/ioping, confirm by temporarily disabling savesip/savertp, for CPU bottleneck raise max_buffer_mem (e.g. 10000), tune RTP threads, or use DPDK/Napatech; (4) Storage failure - smartctl diagnostics, RAID status; (5) OOM - identify victim in dmesg, reduce innodb_buffer_pool_size or max_buffer_mem; (6) Service startup - interface name changes after OS upgrade, missing libpcap; (7) Network issues - promiscuous mode, interface drops, bonded interfaces (use physical not logical), offloading (disable gso/tso/gro); (8) Database - SQL queue overload, Error 1062 lookup table limit. For packet ordering issues, first rule out Wireshark display artifact, then investigate network packet bursts. Bad ether_type errors: diagnose with voipmonitor --check_bad_ether_type=eth0.
Keywords: troubleshooting, no calls, PACKETBUFFER, OOM, tcpdump, tshark, SPAN, RSPAN, ERSPAN, interface, sipport, filter, capture rules, snaplen, asymmetric mirroring, one-way audio, natalias, NAT, auto_enable_use_blocks, rtpthreads_start, max_buffer_mem, storage failure, smartctl, promiscuous mode, bonded interface, EtherChannel, network offloading, gso, tso, packet ordering, SQL queue, Error 1062, bad ether_type, service startup, interface name change
Key Questions:
- Why is VoIPmonitor not recording any calls?
- How do I verify packets are reaching the capture interface?
- What causes PACKETBUFFER saturation?
- How do I diagnose if PACKETBUFFER issue is I/O or CPU bottleneck?
- Why is only one direction of audio being recorded?
- How do I configure natalias for NAT scenarios?
- What causes RTP to not be associated with the correct call?
- Why does the sensor show disconnected with dropped packets at low traffic?
- How do I check for OOM kills?
- Why does the service fail to start after OS upgrade?
- Do I need promiscuous mode for ERSPAN?
- Why does VoIPmonitor report false packet loss on bonded interfaces?
- How do I diagnose bad ether_type packet errors?
- What tshark filters are useful for SIP troubleshooting?