{{DISPLAYTITLE:VoIPmonitor Deployment & Topology Guide}}

'''This guide provides a comprehensive overview of VoIPmonitor's deployment models. It covers the fundamental choice between on-host and dedicated sensors, methods for capturing traffic, and detailed configurations for scalable, multi-site architectures.'''


<kroki lang="mermaid">
%%{init: {'flowchart': {'nodeSpacing': 15, 'rankSpacing': 30}}}%%
flowchart TB
    START[Where to deploy sensor?] --> Q1{PBX runs on Linux?}
    Q1 -->|Yes| Q2{Spare resources?}
    Q1 -->|No - Windows| DED[Dedicated Sensor]
    Q2 -->|Yes| ONHOST[On-Host Capture]
    Q2 -->|No| DED
    DED --> Q3{Traffic forwarding method?}
    Q3 --> SPAN[SPAN/RSPAN]
    Q3 --> TUNNEL[Software Tunnel]
    Q3 --> CLOUD[Cloud Mirroring]
    TUNNEL --> T1[GRE/ERSPAN]
    TUNNEL --> T2[TZSP/VXLAN]
    TUNNEL --> T3[HEP/AudioCodes]
</kroki>
== Core Concept: Where to Capture Traffic ==
The first decision in any deployment is where the VoIPmonitor sensor (sniffer) will run.


=== 1. On-Host Capture (on the PBX/SBC) ===
The sensor can be installed directly on the same Linux server that runs your PBX or SBC.
* '''Pros:''' Requires no extra hardware, network changes, or port mirroring. It is the simplest setup.
* '''Cons:''' Adds CPU, memory, and disk I/O load to your production voice server. If these resources are critical, a dedicated sensor is the recommended approach.

'''Platform Note:''' The VoIPmonitor sensor runs exclusively on Linux. While some PBXs are available on Windows (e.g., 3CX Windows edition, certain legacy systems), the sensor cannot be installed on Windows. For Windows-based PBXs, you must use a dedicated Linux sensor with traffic mirroring (see below).

=== 2. Dedicated Sensor ===
A dedicated Linux server runs only the VoIPmonitor sensor. This is the recommended approach for production environments as it isolates monitoring resources from your voice platform. To use a dedicated sensor, you must forward a copy of the network traffic to it using one of the methods below.


'''When a Dedicated Sensor is Required:'''
* Windows-based PBXs (e.g., 3CX Windows edition) - VoIPmonitor sensor is Linux-only
* When your PBX/SBC server has limited CPU, RAM, or disk I/O resources
* When you want zero monitoring impact on your production voice platform
* When capturing from multiple sites with a centralized collector


== Methods for Forwarding Traffic to a Dedicated Sensor ==
=== A. Hardware Port Mirroring (SPAN/RSPAN) ===
This is the most common and reliable method. You configure your physical network switch to copy all traffic from the switch ports connected to your PBX/SBC to the switch port connected to the VoIPmonitor sensor. This feature is commonly called '''Port Mirroring''', '''SPAN''', or '''RSPAN'''. Consult your switch's documentation for configuration details.


The VoIPmonitor sensor interface will be put into promiscuous mode automatically. To capture from multiple interfaces, set <code>interface = any</code> in <code>voipmonitor.conf</code> and enable promiscuous mode manually on each NIC (e.g., <code>ip link set dev eth1 promisc on</code>).
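For example, with two mirrored NICs and <code>interface = any</code>, promiscuous mode must be enabled on each capture interface. A minimal sketch (interface names are placeholders; the flag does not persist across reboots unless added to your network configuration):

<syntaxhighlight lang="bash">
# Enable promiscuous mode on every NIC that receives mirrored traffic
sudo ip link set dev eth1 promisc on
sudo ip link set dev eth2 promisc on

# Verify that the PROMISC flag is set
ip link show eth1 | grep -i promisc
</syntaxhighlight>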


==== Monitoring Multiple VoIP Platforms with a Single Sensor ====


A common use case is monitoring multiple separate VoIP platforms (e.g., Mitel PBX and NetSapiens hosted PBX) using a single VoIPmonitor sensor instance. This can be accomplished with a simple port mirroring configuration on your network switch.


'''Configuration Steps:'''


# '''Configure Switch Port Mirroring:'''
#* Identify the switch ports where each VoIP platform connects (e.g., port 1 for Mitel, port 2 for NetSapiens)
#* Connect the VoIPmonitor sensor to a dedicated switch port (e.g., port 24)
#* Configure the switch to mirror traffic from BOTH source ports (1 and 2) to the single destination port (24)
#* Most switches support monitoring multiple source ports simultaneously; refer to your switch documentation for the exact syntax (see the example after this list)
# '''VoIPmonitor Sensor Configuration:'''
#* No special configuration required for basic multiple-platform monitoring
#* The sensor receives the combined traffic stream from both platforms
#* VoIPmonitor automatically processes all SIP/RTP packets and generates CDRs for all calls
#* All platforms are monitored in a single unified interface
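For instance, on a Cisco IOS switch the mirroring described in step 1 might look roughly like the following sketch (interface names and the session number are placeholders; other vendors use different commands):

<syntaxhighlight lang="text">
! Mirror both PBX-facing ports (rx and tx) to the port where the VoIPmonitor sensor is connected
monitor session 1 source interface GigabitEthernet0/1 - 2 both
monitor session 1 destination interface GigabitEthernet0/24
</syntaxhighlight>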


{{Note|1=No configuration is required on the VoIP platforms themselves. The platforms continue operating normally; the switch merely copies traffic to the monitoring port.}}

'''Distinguishing Platforms in the GUI:'''


To differentiate between platforms in the VoIPmonitor Web GUI, use one of these methods:
* '''IP Address Filtering:''' Use IP-based filters in the CDR view to show calls from specific platform ranges
* '''Prefix Filtering:''' Filter by calling/called number prefixes if platforms use different numbering plans
* '''Custom Sensors (Optional):''' If you want to tag calls by platform, use a separate dedicated sensor for each platform with unique <code>id_sensor</code> values


'''Example Scenarios:'''


'''Scenario 1: On-Premise Platforms (e.g., Mitel + FreeSWITCH):'''
* Both platforms connect to the same LAN switch
* Mirror both switch ports to the VoIPmonitor sensor destination port
* Single sensor, unified view

'''Scenario 2: On-Premise + Cloud Platform (e.g., Mitel + NetSapiens):'''
* On-premise Mitel: Mirror switch port to sensor
* Cloud NetSapiens: Use [[Tls|External TLS Session Key Provider]] if encryption is present, or use [[Tls#Method_5:_TCP_or_UDP_Key_Distribution|Method 5]] for decryption key export
* All traffic processed by the same sensor in unified interface


{{Tip|When mirroring traffic from multiple ports, ensure the destination port (connected to VoIPmonitor) has sufficient bandwidth to handle the combined traffic from all sources without packet loss.}}


{{Warning|1='''Critical for Multiple Mirrored Interfaces:''' When sniffing from multiple mirrored interfaces, VLANs, or switch ports, packets may arrive as duplicates (same traffic from multiple SPAN sources). This can cause incomplete calls, missing audio, or incorrect SIP/RTP session reassembly. Add <code>auto_enable_use_blocks = yes</code> to <code>voipmonitor.conf</code>. This enables automatic packet deduplication and defragmentation. See [[Sniffer_configuration#auto_enable_use_blocks|Sniffer_configuration]] for details.}}


=== B. Software-based Tunnelling ===
When hardware mirroring is not an option, many network devices and PBXs can encapsulate VoIP packets and send them to the sensor's IP address using a tunnel. VoIPmonitor natively supports a wide range of protocols.
* '''Built-in Support:''' IP-in-IP, GRE, ERSPAN
* '''UDP-based Tunnels:''' Configure the corresponding port in <code>voipmonitor.conf</code>:
** <code>udp_port_tzsp = 37008</code> (for MikroTik's TZSP)
** <code>udp_port_l2tp = 1701</code>
** <code>udp_port_vxlan = 4789</code> (common in cloud environments)
* '''Proprietary & Other Protocols:'''
** [[Audiocodes_tunneling|AudioCodes Tunneling]] (uses <code>udp_port_audiocodes</code> or <code>tcp_port_audiocodes</code>)
** HEP (Homer Encapsulation Protocol)
** IPFIX (for Oracle SBCs; enable <code>ipfix*</code> options)
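As a minimal sketch, the UDP-based tunnel listeners above are enabled simply by setting the corresponding ports in <code>voipmonitor.conf</code> (enable only the tunnels you actually receive; values shown are the defaults listed above):

<syntaxhighlight lang="ini">
# Decapsulate TZSP streamed from MikroTik devices
udp_port_tzsp  = 37008
# Decapsulate L2TP
udp_port_l2tp  = 1701
# Decapsulate VXLAN (e.g. cloud traffic mirroring)
udp_port_vxlan = 4789
</syntaxhighlight>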


==== HEP (Homer Encapsulation Protocol) ====

HEP is a lightweight protocol for capturing and mirroring VoIP packets. Many SBCs and SIP proxies (such as Kamailio, OpenSIPS, FreeSWITCH) support HEP to send a copy of traffic to a monitoring server.

'''Configuration in voipmonitor.conf:'''


<syntaxhighlight lang="ini">
# Enable HEP support
hep = yes

# Port to listen for HEP packets (default: 9060)
hep_bind_port = 9060

# Optional: Bind to specific IP address
# hep_bind_ip = 0.0.0.0

# Optional: Enable UDP binding (default: yes)
hep_bind_udp = yes
</syntaxhighlight>


When <code>hep = yes</code>, VoIPmonitor listens for HEPv3 (and compatible HEPv2) packets and extracts the original VoIP traffic from the encapsulation.
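To confirm that HEP packets are actually reaching the sensor, a quick capture on the HEP port is often enough (assuming the default port 9060 and capture interface <code>eth0</code>):

<syntaxhighlight lang="bash">
# Watch for incoming HEP datagrams on the default listening port
sudo tcpdump -i eth0 -nn udp port 9060 -c 20
</syntaxhighlight>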
'''Use Cases:'''
* Remote SBCs or PBXs export traffic to a centralized VoIPmonitor server
* Kamailio/FreeSWITCH <code>siptrace</code> module integration
* Environments where standard tunnels (GRE/ERSPAN) are not available
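On the exporting side, a Kamailio <code>siptrace</code> configuration typically looks roughly like the sketch below. The module parameters shown are illustrative and should be verified against your Kamailio version; the IP address is a placeholder for the VoIPmonitor sensor:

<syntaxhighlight lang="text">
loadmodule "siptrace.so"

# Send HEPv3 copies of SIP messages to the VoIPmonitor HEP listener
modparam("siptrace", "duplicate_uri", "sip:10.224.0.250:9060")
modparam("siptrace", "hep_mode_on", 1)
modparam("siptrace", "hep_version", 3)
modparam("siptrace", "trace_to_database", 0)
modparam("siptrace", "trace_on", 1)

# In the routing script, call sip_trace() (or set the configured trace flag)
# for the requests and replies you want mirrored.
</syntaxhighlight>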


{{Note|1=There is also <code>hep_kamailio_protocol_id_fix = yes</code> for Kamailio-specific protocol ID issues.}}

'''Known Limitations:'''


===== HEP Timestamp Precision =====
HEP3 packets include a timestamp field that represents when the packet was captured at the source. VoIPmonitor uses this HEP timestamp for the call record. If the source HEP server has an unreliable or unsynchronized time source, this can cause incorrect timestamps in the captured calls.


Currently, there is no built-in configuration option to ignore the HEP timestamp and instead use the time when VoIPmonitor receives the packet. If you need this functionality, please:


* Request the feature on the product roadmap (no guaranteed ETA)
* Consider a custom development project for a fee


===== No HEP Correlation ID Support =====


'''VoIPmonitor does not use HEP correlation ID (captureNodeID) to correlate SIP and RTP packets.'''


When SIP signaling and RTP media are encapsulated in HEP and arrive from different HEP sources (different capture nodes or sensors), VoIPmonitor cannot correlate them into a single CDR using the HEP protocol metadata.


{{Warning|1='''HEP Correlation Limitation:'''
* HEP Source A sends SIP packets
* HEP Source B sends RTP packets for the same call
* VoIPmonitor tries to use HEP captureNodeID/correlation ID to merge them
* '''Result:''' SIP and RTP are NOT correlated; the call appears incomplete or missing


VoIPmonitor extracts the payload from HEP encapsulation and correlates using standard SIP Call-ID, To/From tags, and RTP SSRC fields. It does not utilize the HEP envelope metadata fields (correlation ID, capture node ID, etc.) for cross-sensor correlation.


'''Workaround (Feature Request VS-1703):''' Currently, there is no available workaround. The only options are to wait for a future release that adds HEP correlation ID support (feature request VS-1703 has been logged) or pursue a custom paid implementation.}}


=== Pre-Deployment Compatibility Verification ===


Before full production deployment, especially when integrating VoIPmonitor with network hardware (Cisco/Juniper routers, SBCs), or complex mirroring setups (RSPAN, ERSPAN, tunnels), it is highly recommended to verify that VoIPmonitor can correctly capture and process packets from your specific environment.


This approach allows you to identify compatibility issues early, without committing to a full deployment that may need adjustments.


'''Typical Use Cases:'''
* Deploying a dedicated sensor with SPAN/RSPAN from a Cisco router or switch
* Using ERSPAN to forward VoIP traffic across routers
* Capturing from proprietary SBCs or VoIP gateways (Cisco C2951, AudioCodes, etc.)
* Implementing newer or complex tunneling protocols (VXLAN, GRE with specific configurations)


'''Verification Workflow:'''


# '''Configure Mirroring in Test Mode:''' Set up the SPAN, RSPAN, ERSPAN, or tunnel configuration to forward a small subset of VoIP traffic to a test sensor or VM.
# '''Capture Test Calls:'''
#* Make a few test calls through your VoIP system.
#* Using <code>tcpdump</code> or <code>tshark</code>, capture the mirrored traffic into a pcap file:
#:<syntaxhighlight lang="bash">
# Example: Capture SIP and RTP from the mirrored interface
sudo tcpdump -i eth0 -s0 port 5060 -w /tmp/compatibility_test.pcap
</syntaxhighlight>
# '''Verify Packet Capture:'''
#* Confirm the pcap contains both SIP signaling and RTP audio:
#:<syntaxhighlight lang="bash">
tshark -r /tmp/compatibility_test.pcap -Y "sip || rtp"
</syntaxhighlight>
#* Check for expected packet sizes, codecs, and call flow.
# '''Submit for Analysis:''' Send the pcap file to VoIPmonitor support along with details about:
#* Your network hardware (Cisco router model, switch model, SBC model)
#* Mirroring method (SPAN, RSPAN, ERSPAN, GRE, VXLAN, etc.)
#* Any special configurations (VLAN tags, MPLS labels, encapsulation)
#* Your planned deployment (on-host vs. dedicated sensor, client/server vs. standalone)
# '''Feedback and Adjustment:''' Support will analyze the pcap and confirm if VoIPmonitor can process your specific traffic structure. They may recommend configuration changes (e.g., adjusting <code>sipport</code>, enabling tunnel decapsulation, modifying TCP/UDP port settings) or identify incompatible traffic patterns.


'''Benefits of Pre-Deployment Testing:'''
* Confirms VoIPmonitor compatibility with your specific hardware and network setup
* Identifies configuration needs before full production deployment
* Saves time by avoiding trial-and-error during go-live
* Provides documented proof of concept for stakeholders
* Allows tuning of sensor resources (CPU/RAM/disk) based on actual traffic characteristics


If verification fails or reveals incompatibilities, support can often suggest alternative approaches or configuration adjustments before you proceed.


==== Cloud Packet Mirroring (GCP, AWS, Azure) ====


Cloud providers offer native packet mirroring services that can forward traffic to a dedicated VoIPmonitor sensor. These services typically use '''VXLAN''' or '''GRE''' encapsulation.


'''Supported Cloud Services:'''


* Google Cloud Platform (GCP): Packet Mirroring
* Amazon Web Services (AWS): Traffic Mirroring
* Microsoft Azure: Virtual Network TAP


'''Configuration Steps:'''


# '''Create a Dedicated Sensor VM:''' Deploy a VoIPmonitor sensor instance in your cloud environment. This VM should be sized appropriately for your expected traffic volume.
# '''Configure Cloud Mirroring Policy:''' In your cloud provider's console, create a mirroring policy:
#* Select source VMs or subnets where your VoIP traffic (PBX/SBC) originates.
#* Set the destination to the internal IP of your VoIPmonitor sensor VM.
#* Ensure the encapsulation protocol is compatible with VoIPmonitor (VXLAN is recommended and most common).
# '''Critical: Bidirectional Capture:''' Configure the mirroring policy to capture traffic '''in BOTH directions''':
#* <code>INGRESS</code> (incoming traffic to sources)
#* <code>EGRESS</code> (outgoing traffic from sources)
#* <code>BOTH</code> or <code>EITHER</code> is recommended


{{Warning|1=Capturing only ingress or only egress will result in incomplete call data and broken CDRs.}}
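As one concrete illustration, an AWS Traffic Mirroring session covering both directions can be created with the AWS CLI roughly as follows (a sketch; the ENI and resource IDs are placeholders, and GCP/Azure offer equivalent console or CLI workflows):

<syntaxhighlight lang="bash">
# Target: the VoIPmonitor sensor VM's network interface
aws ec2 create-traffic-mirror-target --network-interface-id eni-0aaa111 --description "VoIPmonitor sensor"

# Filter with accept rules for BOTH directions (UDP shown; add TCP rules for SIP over TCP/TLS)
aws ec2 create-traffic-mirror-filter --description "VoIP traffic"
aws ec2 create-traffic-mirror-filter-rule --traffic-mirror-filter-id tmf-0bbb222 \
  --traffic-direction ingress --rule-number 10 --rule-action accept --protocol 17 \
  --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0
aws ec2 create-traffic-mirror-filter-rule --traffic-mirror-filter-id tmf-0bbb222 \
  --traffic-direction egress --rule-number 10 --rule-action accept --protocol 17 \
  --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0

# Session: mirror the PBX/SBC instance's interface to the target
aws ec2 create-traffic-mirror-session --network-interface-id eni-0ccc333 \
  --traffic-mirror-target-id tmt-0ddd444 --traffic-mirror-filter-id tmf-0bbb222 --session-number 1
</syntaxhighlight>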


; 4. '''Configure VoIPmonitor Sensor:'''


<syntaxhighlight lang="ini">
# Enable VXLAN support for cloud packet mirroring
udp_port_vxlan = 4789


# Interface configuration
interface = eth0


# SIP ports
sipport = 5060


# Optional: Filter at source to save bandwidth
# Configure cloud mirroring filters to forward only SIP/RTP traffic
</syntaxhighlight>


; 5. '''VM Sizing for Cloud Sensor:''' Properly size the sensor VM instance:
* '''vCPU:''' Allow 1-2 cores per 100 concurrent calls (adjusted for codec complexity and packet rate).
* '''RAM:''' 4GB minimum for production; more if using on-disk compression or high PCAP retention.
* '''Storage:''' Use SSD or high-throughput block storage for the <code>spooldir</code>. VoIPmonitor is I/O intensive — persistent disk performance is critical to avoid packet loss.
* '''Network:''' Ensure sufficient NIC bandwidth; mirroring multiple high-traffic sources can saturate the sensor's interface.


; 6. '''NTP Synchronization:''' Accurate timekeeping is critical. Ensure all VMs (sources, sensor, and related infrastructure) use the cloud provider's internal NTP servers or a reliable external NTP source.


'''Best Practices for Cloud Mirroring:'''


* '''Filter at the Source:''' Use cloud mirroring filters to forward only SIP signaling and RTP audio ports. Sending all network traffic (HTTP, SSH, etc.) wastes CPU and bandwidth.
* '''Monitor Network Limits:''' Cloud NICs have bandwidth limits (e.g., 10 Gbps). Mirroring multiple high-traffic sources may saturate the sensor VM's interface.
* '''MTU Considerations:''' VXLAN adds ~50 bytes of overhead. If original packets are near 1500 bytes MTU, encapsulated packets may exceed it, causing fragmentation or drops. Ensure network path supports jumbo frames or proper fragmentation handling.
* '''Test Load:''' Start with filtered ports and a subset of traffic, monitor performance, then expand to full production volume.
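A quick way to check whether VXLAN-encapsulated frames survive the path without fragmentation is an MTU probe from a mirrored source toward the sensor (a sketch; adjust the payload size and addresses to your environment):

<syntaxhighlight lang="bash">
# 1472-byte payload + 28 bytes of ICMP/IP headers = 1500; reduce further to account for VXLAN overhead
ping -M do -s 1472 10.224.0.250

# If supported end-to-end, raising the MTU on the sensor's capture NIC avoids fragmentation entirely
sudo ip link set dev eth0 mtu 9000
</syntaxhighlight>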


'''Alternative: Client/Server Architecture with On-Host Sensors'''


Instead of cloud packet mirroring, consider installing VoIPmonitor sensors directly on each PBX/SBC VM using the [[Sniffer_distributed_architecture|Client/Server architecture]]:
* Install sensor on each Asterisk/SBC VM (on-host capture)
* Sensors process calls locally or forward packets via <code>packetbuffer_sender</code> to a central collector
* Eliminates mirroring overhead and potential incomplete capture issues
* May have better performance for high-traffic environments


== Distributed Deployment Models ==
For monitoring multiple remote offices or a large infrastructure, a distributed model is essential. This involves a central GUI/Database server collecting data from multiple remote sensors.


=== Classic Mode: Standalone Remote Sensors ===
In this traditional model, each remote sensor is a fully independent entity.
* '''How it works:''' The remote sensor processes packets and stores PCAPs locally. It connects directly to the central MySQL/MariaDB database to write CDRs. For PCAP retrieval, the GUI typically needs network access to each sensor's management port (default <code>TCP/5029</code>).
* '''Pros:''' Simple conceptual model.
* '''Cons:''' Requires opening firewall ports to each sensor and managing database credentials on every remote machine.
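A minimal sketch of a classic standalone remote sensor's <code>voipmonitor.conf</code> (IP addresses, credentials and the sensor ID are placeholders):

<syntaxhighlight lang="ini">
# Classic standalone sensor: processes packets locally and writes CDRs straight to the central DB
id_sensor     = 5              # unique per sensor (< 65536)
interface     = eth0
sipport       = 5060

# Central MySQL/MariaDB reachable over WAN/VPN
mysqlhost     = 10.224.0.201
mysqldb       = voipmonitor
mysqluser     = voipmonitor
mysqlpassword = db_password

# GUI fetches PCAPs through the management port on this sensor
managerip     = 0.0.0.0
managerport   = 5029
</syntaxhighlight>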


==== Alternative PCAP Access: NFS/SSHFS Mounting ====


For environments where direct TCP/5029 access to remote sensors is impractical (e.g., firewalls, VPN limitations), you can mount remote spool directories on the central GUI server using NFS or SSHFS.


'''Use Cases:'''
* Firewall policies block TCP/5029 but allow SSH or NFS traffic
* Remote sensors have local databases that need to be queried separately
* You want the GUI to access PCAPs directly from mounted filesystems instead of proxying through TCP/5029


'''Configuration Steps:'''


# '''Mount remote spools on GUI server:'''


Using NFS:
<syntaxhighlight lang="bash">
# On GUI server, mount remote spool directory
sudo mount -t nfs 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
sudo mount -t nfs 10.224.0.102:/var/spool/voipmonitor /mnt/voipmonitor/sensor2


# Add to /etc/fstab for persistent mounts
10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  nfs  defaults  0  0
10.224.0.102:/var/spool/voipmonitor  /mnt/voipmonitor/sensor2  nfs  defaults  0  0
</syntaxhighlight>


Using SSHFS:
<syntaxhighlight lang="bash">
# On GUI server, mount remote spool via SSHFS
sshfs voipmonitor@10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
sshfs voipmonitor@10.224.0.102:/var/spool/voipmonitor /mnt/voipmonitor/sensor2


# Add to /etc/fstab for persistent mounts (with key-based auth)
voipmonitor@10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  fuse.sshfs  defaults,IdentityFile=/home/voipmonitor/.ssh/id_rsa  0  0
</syntaxhighlight>


; 2. '''Configure PCAP spooldir path in GUI:'''


In the GUI, go to '''Settings > System Configuration > Sniffer data path''' and set it to search multiple spool directories. Each directory is separated by a colon (<code>:</code>).


<syntaxhighlight lang="text">
Sniffer data path: /var/spool/voipmonitor:/mnt/voipmonitor/sensor1:/mnt/voipmonitor/sensor2
</syntaxhighlight>


The GUI will search these paths in order when looking for PCAP files.


; 3. '''Register remote sensors in GUI:'''


Go to '''Settings > Sensors''' and register each remote sensor:
* '''Sensor ID:''' Must match <code>id_sensor</code> in each remote's <code>voipmonitor.conf</code>
* '''Name:''' Descriptive name (e.g., "Site 1 - London")
* '''Manager IP, Port:''' Optional with NFS/SSHFS mount (leave empty if mounting spools directly)


'''Important Notes:'''
* Each remote sensor must have a unique <code>id_sensor</code> configured in <code>voipmonitor.conf</code>
* Remote sensors write CDRs directly to a database, either a local per-site instance or the central database, depending on your setup
* Filter calls by site using the <code>id_sensor</code> column in the CDR view
* Ensure mounted directories are writable by the GUI user for PCAP uploads
* For better performance, use NFS with async or SSHFS with caching options
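For example, the performance-oriented mount options mentioned above might look like this (a sketch; addresses and paths are placeholders):

<syntaxhighlight lang="bash">
# NFS: asynchronous writes, no atime updates, TCP transport
sudo mount -t nfs -o async,noatime,tcp 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1

# SSHFS: kernel page cache plus automatic reconnects
sshfs -o reconnect,ServerAliveInterval=15,kernel_cache \
  voipmonitor@10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
</syntaxhighlight>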


'''Filtering and Site Identification:'''
* In the CDR view, use the '''Sensor''' dropdown filter to select specific sites
* Alternatively, filter by IP address ranges using CDR columns
* The <code>id_sensor</code> column in the database uniquely identifies which sensor captured each call
* Sensor names can be customized in '''Settings > Sensors''' for easier identification


'''Comparison: TCP/5029 vs NFS/SSHFS'''
{| class="wikitable"
! Approach
! Network Traffic
! Firewall Requirements
! Performance
! Use Case
|-
| TCP/5029 Proxy (Standard)
| On-demand fetch per request
| TCP/5029 outbound from GUI to sensors
| Better (no continuous mount overhead)
| Most deployments
|-
| NFS Mount
| Continuous (filesystem access)
| NFS ports (usually 2049) bidirectional
| Excellent (local filesystem speed)
| Local networks, high-throughput
|-
| SSHFS Mount
| Continuous (encrypted filesystem)
| SSH (TCP/22) outbound from GUI
| Good (some encryption overhead)
| Remote sites, cloud/VPN
|}


=== Troubleshooting NFS/SSHFS Mounts ===


If you experience missing CDRs or PCAP files for a specific time period, or if the GUI reports files not found despite sensors receiving traffic, the issue is often NFS/SSHFS connectivity between the probe and storage server.


==== Check for NFS/SSHFS Connectivity Issues ====


Missing data (both CDRs and PCAPs) for a specific time period is typically caused by network unavailability between the VoIPmonitor probe and the NFS/SSHFS storage server.


'''1. Check system logs for NFS or SSHFS errors:'''


<syntaxhighlight lang="bash">
# Check for NFS-specific errors
journalctl -u voipmonitor --since "2024-01-01" --until "2024-01-02"


# Look for specific patterns in syslog
grep "nfs: server.*not responding" /var/log/syslog
grep "nfs.*timed out" /var/log/syslog
grep "I/O error" /var/log/syslog


# For SSHFS issues
grep "sshfs.*Connection reset" /var/log/syslog
grep "sshfs.*Transport endpoint is not connected" /var/log/syslog
</syntaxhighlight>


Key error messages to look for:
* <code>nfs: server 192.168.1.100 not responding, timed out</code> - NFS server unreachable
* <code>nfs: server 192.168.1.100 OK</code> - Connection restored after interruption
* <code>Stale file handle</code> - NFS mount needs remounting
* <code>Transport endpoint is not connected</code> - SSHFS mount disconnected


'''2. Verify network connectivity to the storage server:'''


<syntaxhighlight lang="bash">
# Ping test to the NFS/SSHFS server
ping 192.168.1.100


# Trace the network path to identify bottlenecks
traceroute 192.168.1.100


# Test DNS resolution if using hostnames
nslookup storage-server.domain.com
</syntaxhighlight>


'''3. Ensure the NFS/SSHFS server is running and accessible:'''


<syntaxhighlight lang="bash">
# On the probe/sensor side - check if mount is active
mount | grep nfs
mount | grep fuse.sshfs


# Check mount status for all mounted spool directories
stat /mnt/voipmonitor/sensor1


# On the NFS server side - verify services are running
systemctl status nfs-server
systemctl status sshd
</syntaxhighlight>


'''4. Check for mount-specific issues:'''


<syntaxhighlight lang="bash">
# Test NFS mount manually (unmount and remount)
sudo umount /mnt/voipmonitor/sensor1
sudo mount -t nfs 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1


# Check /etc/fstab for mount errors
sudo mount -a  # Test all mounts in /etc/fstab


# Verify mount permissions
ls -la /mnt/voipmonitor/sensor1
</syntaxhighlight>


==== Common Causes of Missing Data ====


{| class="wikitable"
! Symptom
! Most Likely Cause
! Troubleshooting Step
|-
| Gap in data during a specific time period
| '''NFS/SSHFS server unreachable'''
| Check logs for "not responding, timed out"
|-
| Stale file handle errors
| NFS server rebooted or export changed
| Remount NFS share
|-
| Connection resets
| Network interruption or unstable connection
| Check network stability and ping times
|-
| Very slow file access
| Network latency or bandwidth saturation
| Monitor network throughput
|-
| GUI shows "File not found"
| Mount point dismounted
| Check mount status and remount if needed
|}


==== Preventative Measures ====


To minimize data loss from NFS/SSHFS connectivity issues:


'''Use TCP for NFS''' (more reliable than UDP):
<syntaxhighlight lang="bash">
# Mount NFS with TCP explicitly
sudo mount -t nfs -o tcp 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
</syntaxhighlight>


'''Use the <code>hard,nofail</code> mount options:'''
<syntaxhighlight lang="bash">
# In /etc/fstab
10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  nfs  hard,nofail,tcp  0  0
</syntaxhighlight>
* <code>hard</code>: Make NFS operations wait indefinitely instead of timing out
* <code>nofail</code>: Do not fail if the mount is unavailable at boot time


'''Monitor mount status:''' Set up automated monitoring to alert when NFS/SSHFS mounts become unresponsive or disconnected.
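A simple sketch of such a check is a cron job that verifies the mount point and logs to syslog when it disappears (the path is a placeholder; replace <code>logger</code> with your alerting hook if you have one):

<syntaxhighlight lang="bash">
# /etc/cron.d/voipmonitor-mount-check (runs every 5 minutes)
*/5 * * * * root mountpoint -q /mnt/voipmonitor/sensor1 || logger -p user.err "voipmonitor: spool mount /mnt/voipmonitor/sensor1 is not mounted"
</syntaxhighlight>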


'''Consider Client/Server mode as alternative:''' If NFS/SSHFS connectivity is unreliable, consider using the modern [[Sniffer_distributed_architecture|Client/Server architecture]] instead, which uses encrypted TCP channels and is more resilient to network interruptions.


=== Modern Mode: Client/Server Architecture (v20+) — Recommended ===
This model uses a secure, encrypted TCP channel between remote sensors (clients) and a central sensor instance (server). The GUI communicates with the central server only, which significantly simplifies networking and security.


<kroki lang="mermaid">
%%{init: {'flowchart': {'nodeSpacing': 10, 'rankSpacing': 25}}}%%
flowchart LR
    subgraph "Local Processing"
        R1[Remote Sensor] -->|CDRs only| C1[Central Server]
        R1 -.->|PCAP on demand| C1
    end


    subgraph "Packet Mirroring"
        R2[Remote Sensor] -->|Raw packets| C2[Central Server]
    end
</kroki>


This architecture supports two primary modes:
# '''Local Processing:''' Remote sensors process packets locally and send only lightweight CDR data over the encrypted channel. PCAPs remain on the remote sensor. On-demand PCAP fetch is proxied via the central server (to the sensor's <code>TCP/5029</code>).
# '''Packet Mirroring:''' Remote sensors forward the entire raw packet stream to the central server, which performs all processing and storage. Ideal for low-resource remote sites.


==== Architecture Diagrams ====


<kroki lang="plantuml">
@startuml
skinparam shadowing false
skinparam defaultFontName Arial
skinparam rectangle {
  BorderColor #4A90E2
  BackgroundColor #FFFFFF
  stereotypeFontColor #333333
}
skinparam packageBorderColor #B0BEC5
skinparam packageBackgroundColor #F7F9FC


title Client/Server Architecture — Local Processing Mode


package "Remote Site" {
  [Remote Probe/Sensor] as Remote
  database "Local Storage (PCAP)" as RemotePCAP
}


package "Central Site" {
  [Central VoIPmonitor Server] as Central
  database "Central MySQL/MariaDB" as CentralDB
  [Web GUI] as GUI
}


Remote -[#2F6CB0]-> Central : Encrypted TCP/60024\nCDRs only
Remote --> RemotePCAP : Stores PCAP locally
Central --> CentralDB : Writes CDRs
GUI -[#2F6CB0]-> Central : Queries data & requests PCAPs
Central -[#2F6CB0]-> RemotePCAP : Fetches PCAPs on demand (TCP/5029)
@enduml
</kroki>


<kroki lang="plantuml">
@startuml
skinparam shadowing false
skinparam defaultFontName Arial
skinparam rectangle {
  BorderColor #4A90E2
  BackgroundColor #FFFFFF
  stereotypeFontColor #333333
}
skinparam packageBorderColor #B0BEC5
skinparam packageBackgroundColor #F7F9FC


title Client/Server Architecture — Packet Mirroring Mode


package "Remote Site" {
  [Remote Probe/Sensor\n(Low Resource)] as Remote
}


package "Central Site" {
  [Central VoIPmonitor Server] as Central
  database "Central MySQL/MariaDB" as CentralDB
  database "Central Storage (PCAP)" as CentralPCAP
  [Web GUI] as GUI
}


Remote -[#2F6CB0]-> Central : Encrypted TCP/60024\nRaw packet stream
Central --> CentralDB : Writes CDRs
Central --> CentralPCAP : Processes & stores PCAPs
GUI -[#2F6CB0]-> Central : Queries data & downloads PCAPs
@enduml
</kroki>


==== Step-by-Step Configuration Guide ====


'''Prerequisites'''
* VoIPmonitor v20+ on all sensors.
* Central database reachable from the central server instance.
* Unique <code>id_sensor</code> per sensor (< 65536).
* NTP running everywhere (see '''Time Synchronization''' below).


'''Scenario A — Local Processing (default, low WAN usage)'''


<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the REMOTE sensor (LOCAL PROCESSING)


id_sensor              = 2          # unique per sensor (< 65536)
server_destination      = 10.224.0.250
server_destination_port = 60024
server_password        = your_strong_password


packetbuffer_sender    = no        # local analysis; sends only CDRs
interface              = eth0      # or: interface = any
sipport                = 5060      # example; add your usual sniffer options


# No MySQL credentials here — remote sensor does NOT write to DB directly.
</syntaxhighlight>


<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the CENTRAL server (LOCAL PROCESSING network)


server_bind            = 0.0.0.0
server_bind_port        = 60024
server_password        = your_strong_password


mysqlhost              = 10.224.0.201
mysqldb                = voipmonitor
mysqluser              = voipmonitor
mysqlpassword          = db_password


cdr_partition          = yes        # partitions for CDR tables
mysqlloadconfig        = yes        # allows DB-driven config if used


interface              =            # leave empty to avoid local sniffing
# The central server will proxy on-demand PCAP fetches to sensors (TCP/5029).
</syntaxhighlight>


'''Scenario B — Packet Mirroring (centralized processing/storage)'''


<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the REMOTE sensor (PACKET MIRRORING)


id_sensor              = 3
server_destination      = 10.224.0.250
server_destination_port = 60024
server_password        = your_strong_password


packetbuffer_sender    = yes        # send RAW packet stream to central
interface              = eth0      # capture source; no DB settings needed
</syntaxhighlight>


<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the CENTRAL server (PACKET MIRRORING)


server_bind            = 0.0.0.0
server_bind_port        = 60024
server_password        = your_strong_password


mysqlhost              = 10.224.0.201
mysqldb                = voipmonitor
mysqluser              = voipmonitor
mysqlpassword          = db_password


cdr_partition          = yes
mysqlloadconfig        = yes


# As this server does all analysis, configure as if sniffing locally:
sipport                = 5060
# ... add your usual sniffer/storage options (pcap directories, limits, etc.)
</syntaxhighlight>


==== Firewall Checklist (Quick Reference) ====
* '''Modern Client/Server (v20+):'''
** '''Central Server:''' Allow inbound <code>TCP/60024</code> from remote sensors. Allow inbound <code>TCP/5029</code> from GUI (management/API to central sensor).
** '''Remote Sensors (Local Processing only):''' Allow inbound <code>TCP/5029</code> from the central server (for on-demand PCAP fetch via proxy). Outbound <code>TCP/60024</code> to the central server.
* '''Cloud Mode:'''
** '''Remote Sensors:''' Allow outbound <code>TCP/60023</code> to <code>cloud.voipmonitor.org</code>.
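On distributions using firewalld, the central-server rules above can be opened as follows (a sketch; use equivalent <code>iptables</code>/<code>nftables</code> or cloud security-group rules in your environment):

<syntaxhighlight lang="bash">
# On the CENTRAL server
sudo firewall-cmd --permanent --add-port=60024/tcp   # remote sensors -> central
sudo firewall-cmd --permanent --add-port=5029/tcp    # GUI -> manager port
sudo firewall-cmd --reload
</syntaxhighlight>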


== Configuration & Checklists ==


=== Parameter Notes (clarifications) ===
* '''<code>id_sensor</code>''' — Mandatory in any distributed deployment (Classic or Client/Server). Must be unique per sensor (< 65536). The value is written to the database and used by the GUI to identify where a call was captured.
* '''<code>cdr_partition</code>''' — In Client/Server, enable on the central server instance that writes to the database. It can be disabled on remote "client" sensors that only mirror packets.
* '''<code>mysqlloadconfig</code>''' — When enabled, the sensor can load additional parameters dynamically from the <code>sensor_config</code> table in the database. Typically enabled on the central server sensor that writes to DB; keep disabled on remote clients which do not access DB directly.
* '''<code>interface</code>''' — Use a specific NIC (e.g., <code>eth0</code>) or <code>any</code> to capture from multiple NICs. For <code>any</code> ensure promiscuous mode on each NIC.


=== Initial Service Start & Database Initialization ===
After installation, the '''first startup''' against a new/empty database is critical.
# Start the service: <code>systemctl start voipmonitor</code>
# Follow logs to ensure schema/partition creation completes:
#* <code>journalctl -u voipmonitor -f</code>
#* or <code>tail -f /var/log/syslog | grep voipmonitor</code>


You should see creation of functions and partitions shortly after start. If you see errors like <code>Table 'cdr_next_1' doesn't exist</code>, the sensor is failing to initialize the schema — usually due to insufficient DB privileges or connectivity. Fix DB access and restart the sensor so it can finish initialization.
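If initialization fails because of privileges, make sure the sensor's database account can create and alter objects in the <code>voipmonitor</code> schema. A hedged example for MySQL/MariaDB (host, user and password are placeholders):

<syntaxhighlight lang="bash">
# Run on the database server; adjust the sensor's host/IP and the password
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS voipmonitor;
CREATE USER IF NOT EXISTS 'voipmonitor'@'10.224.0.250' IDENTIFIED BY 'db_password';
GRANT ALL PRIVILEGES ON voipmonitor.* TO 'voipmonitor'@'10.224.0.250';
FLUSH PRIVILEGES;
SQL
</syntaxhighlight>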


=== Time Synchronization ===
Accurate and synchronized time is '''critical''' for correlating call legs from different sensors. All servers (GUI, DB, and all Sensors) must run an NTP client (e.g., <code>chrony</code> or <code>ntpdate</code>) to keep clocks in sync.
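A minimal <code>chrony</code> setup with relatively aggressive polling (minpoll 3 / maxpoll 4, as recommended for VoIPmonitor deployments) keeps sensor clocks within a few milliseconds of each other; a sketch with placeholder pool servers:

<syntaxhighlight lang="bash">
# Install chrony (Debian/Ubuntu shown; use yum/dnf and /etc/chrony.conf on RHEL-based systems)
sudo apt-get install -y chrony

# /etc/chrony/chrony.conf: poll frequently so sensors stay tightly aligned
#   server 0.pool.ntp.org iburst minpoll 3 maxpoll 4
#   server 1.pool.ntp.org iburst minpoll 3 maxpoll 4

# Verify synchronization status and current offset
chronyc tracking
</syntaxhighlight>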


== Comparison of Remote Deployment Modes ==
{| class="wikitable"
! Deployment Model
! Packet Processing Location
! PCAP Storage Location
! Network Traffic to Central Server
! GUI Connectivity
|-
| Classic Standalone
| Remote
| Remote
| Minimal (MySQL CDRs)
| GUI ↔ each Sensor (management port)
|-
| '''Modern Client/Server (Local Processing)'''
| Remote
| Remote
| Minimal (Encrypted CDRs)
| '''GUI ↔ Central Server only''' (central proxies PCAP fetch)
|-
| '''Modern Client/Server (Packet Mirroring)'''
| '''Central'''
| '''Central'''
| High (Encrypted full packets)
| '''GUI ↔ Central Server only'''
|}


== FAQ & Common Pitfalls ==
* '''Do remote sensors need DB credentials in Client/Server?''' No. Only the central server instance writes to DB.
* '''Why is <code>id_sensor</code> required everywhere?''' The GUI uses it to tag and filter calls by capture source.
* '''Local Processing still fetches PCAPs from remote — who connects to whom?''' The GUI requests via the central server; the central server then connects to the remote sensor's <code>TCP/5029</code> to retrieve the PCAP.


== Related Documentation ==


* [[Sniffer_distributed_architecture|Distributed Architecture: Client-Server Mode]] - Detailed client/server configuration
* [[Scaling|Scaling and Performance Tuning Guide]] - For performance optimization
* [[Sniffer_troubleshooting|Sniffer Troubleshooting]] - For systematic diagnostic procedures
* [[Cloud|Cloud Service Configuration]] - For cloud deployment specifics
* [[Systemd_for_voipmonitor_service_management|Systemd Service Management]] - For service management best practices


== AI Summary for RAG ==


'''Summary:''' This guide covers deployment topologies for VoIPmonitor. It contrasts running the sensor on the same host as a PBX versus on a dedicated server. For dedicated sensors, it details methods for forwarding traffic, including hardware-based port mirroring (SPAN) and various software-based tunneling protocols (IP-in-IP, GRE, TZSP, VXLAN, HEP, AudioCodes, IPFIX). HEP (Homer Encapsulation Protocol) is a lightweight protocol for capturing and mirroring VoIP packets. When <code>hep = yes</code>, VoIPmonitor listens for HEPv3 (and compatible HEPv2) packets and extracts the original VoIP traffic from the encapsulation. CRITICAL HEP LIMITATION: VoIPmonitor does NOT use HEP correlation ID (captureNodeID) to correlate SIP and RTP packets. When SIP signaling and RTP media are encapsulated in HEP and arrive from different HEP sources (different capture nodes or sensors), VoIPmonitor cannot correlate them into a single CDR using HEP protocol metadata. This is feature request VS-1703 and there is currently no available workaround. The article covers cloud service packet mirroring options (GCP Packet Mirroring, AWS Traffic Mirroring, Azure Virtual Network TAP) with critical requirements: bidirectional capture (ingress and egress) and proper VM sizing (vCPU, RAM, storage I/O). The core of the article explains distributed architectures for multi-site monitoring, comparing the "classic" standalone remote sensor model with the modern, recommended "client/server" model. It details the two operational modes of the client/server architecture: local processing (sending only CDRs, PCAPs remain remote with central-proxied fetch) and packet mirroring (sending full, raw packets for central processing), which is ideal for low-resource endpoints. The article also explains an alternative approach for classic remote sensors: mounting PCAP spools via NFS or SSHFS when TCP/5029 access to sensors is blocked by firewalls, including troubleshooting steps for missing data due to NFS/SSHFS connectivity issues (checking logs for "not responding, timed out" errors, verifying network connectivity with ping/traceroute, and ensuring NFS/SSHFS server is running and accessible). The guide concludes with step-by-step configuration, firewall rules, critical parameter notes, and the importance of NTP plus first-start DB initialization.


'''Keywords:''' deployment, architecture, topology, on-host, dedicated sensor, port mirroring, SPAN, RSPAN, traffic mirroring, tunneling, GRE, TZSP, VXLAN, HEP, HEP correlation ID, captureNodeID, HEP limitation, HEP SIP RTP correlation, AudioCodes, IPFIX, cloud mirroring, GCP, AWS, Azure, Packet Mirroring, Traffic Mirroring, Virtual Network TAP, ingress, egress, bidirectional, VM sizing, remote sensor, multi-site, client server mode, packet mirroring, local processing, firewall rules, NTP, time synchronization, cloud mode, NFS, SSHFS, spooldir mounting, NFS troubleshooting, SSHFS troubleshooting, missing data, network connectivity


 
'''Key Questions:'''
* Can I use cloud packet mirroring (GCP/AWS/Azure) with VoIPmonitor?
* How should I configure cloud packet mirroring for ingress and egress traffic?
* What is the difference between the classic remote sensor and the modern client/server mode?
* When should I use packet mirroring (<code>packetbuffer_sender</code>) instead of local processing?
* What are the firewall requirements for the client/server deployment model?
* How can I access PCAP files from remote sensors if TCP/5029 is blocked?
* How do I configure NFS or SSHFS to mount remote PCAP spools?
* How do I configure the GUI sniffer data path for multiple mounted spools?
* How do I troubleshoot missing CDRs or PCAPs when using NFS or SSHFS mounts?
* What should I look for in logs to diagnose NFS connectivity issues?
* Can I run the sensor on the same machine as my Asterisk/FreeSWITCH server?
* What is a SPAN port and how is it used with VoIPmonitor?
* Why is NTP important for a distributed VoIPmonitor setup?
* What is HEP and how do I configure VoIPmonitor to receive HEP packets?
* Does VoIPmonitor use HEP correlation ID (captureNodeID) to correlate SIP and RTP packets?
* Can VoIPmonitor correlate SIP and RTP packets that arrive from different HEP sources?
* Is there a workaround for HEP SIP/RTP correlation across multiple HEP capture nodes?
* How do I configure GRE, ERSPAN, and VXLAN tunneling for VoIPmonitor?

Latest revision as of 21:29, 6 January 2026


This guide provides a comprehensive overview of VoIPmonitor's deployment models. It covers the fundamental choice between on-host and dedicated sensors, methods for capturing traffic, and detailed configurations for scalable, multi-site architectures.

Core Concept: Where to Capture Traffic

The first decision in any deployment is where the VoIPmonitor sensor (sniffer) will run.

1. On-Host Capture (on the PBX/SBC)

The sensor can be installed directly on the same Linux server that runs your PBX or SBC.

  • Pros: Requires no extra hardware, network changes, or port mirroring. It is the simplest setup.
  • Cons: Adds CPU, memory, and disk I/O load to your production voice server. If these resources are critical, a dedicated sensor is the recommended approach.

Platform Note: The VoIPmonitor sensor runs exclusively on Linux. While some PBXs are available on Windows (e.g., 3CX Windows edition, certain legacy systems), the sensor cannot be installed on Windows. For Windows-based PBXs, you must use a dedicated Linux sensor with traffic mirroring (see below).

2. Dedicated Sensor

A dedicated Linux server runs only the VoIPmonitor sensor. This is the recommended approach for production environments as it isolates monitoring resources from your voice platform. To use a dedicated sensor, you must forward a copy of the network traffic to it using one of the methods below.

When a Dedicated Sensor is Required:

  • Windows-based PBXs (e.g., 3CX Windows edition) - VoIPmonitor sensor is Linux-only
  • When your PBX/SBC server has limited CPU, RAM, or disk I/O resources
  • When you want zero monitoring impact on your production voice platform
  • When capturing from multiple sites with a centralized collector

Methods for Forwarding Traffic to a Dedicated Sensor

A. Hardware Port Mirroring (SPAN/RSPAN)

This is the most common and reliable method. You configure your physical network switch to copy all traffic from the switch ports connected to your PBX/SBC to the switch port connected to the VoIPmonitor sensor. This feature is commonly called Port Mirroring, SPAN, or RSPAN. Consult your switch's documentation for configuration details.

The VoIPmonitor sensor interface will be put into promiscuous mode automatically. To capture from multiple interfaces, set interface = any in voipmonitor.conf and enable promiscuous mode manually on each NIC (e.g., ip link set dev eth1 promisc on).

Monitoring Multiple VoIP Platforms with a Single Sensor

A common use case is monitoring multiple separate VoIP platforms (e.g., Mitel PBX and NetSapiens hosted PBX) using a single VoIPmonitor sensor instance. This can be accomplished with a simple port mirroring configuration on your network switch.

Configuration Steps:

  1. Configure Switch Port Mirroring:
    • Identify the switch ports where each VoIP platform connects (e.g., port 1 for Mitel, port 2 for NetSapiens)
    • Connect the VoIPmonitor sensor to a dedicated switch port (e.g., port 24)
    • Configure the switch to mirror traffic from BOTH source ports (1 and 2) to the single destination port (24)
    • Most switches support monitoring multiple source ports simultaneously (refer to your switch documentation for syntax)
  1. VoIPmonitor Sensor Configuration:
    • No special configuration required for basic multiple-platform monitoring
    • The sensor receives the combined traffic stream from both platforms
    • VoIPmonitor automatically processes all SIP/RTP packets and generates CDRs for all calls
    • All platforms are monitored in a single unified interface

ℹ️ Note: No configuration is required on the VoIP platforms themselves. The platforms continue operating normally; the switch merely copies traffic to the monitoring port.

Distinguishing Platforms in the GUI:

To differentiate between platforms in the VoIPmonitor Web GUI, use one of these methods:

  • IP Address Filtering: Use IP-based filters in the CDR view to show calls from specific platform ranges
  • Prefix Filtering: Filter by calling/called number prefixes if platforms use different numbering plans
  • Custom Sensors (Optional): If you want to tag calls by platform, use a separate dedicated sensor for each platform with unique id_sensor values

Example Scenarios:

Scenario 1: On-Premise Platforms (e.g., Mitel + FreeSWITCH):

  • Both platforms connect to the same LAN switch
  • Mirror both switch ports to the VoIPmonitor sensor destination port
  • Single sensor, unified view

Scenario 2: On-Premise + Cloud Platform (e.g., Mitel + NetSapiens):

  • On-premise Mitel: Mirror switch port to sensor
  • Cloud NetSapiens: Use External TLS Session Key Provider if encryption is present, or use Method 5 for decryption key export
  • All traffic processed by the same sensor in unified interface

💡 Tip: When mirroring traffic from multiple ports, ensure the destination port (connected to VoIPmonitor) has sufficient bandwidth to handle the combined traffic from all sources without packet loss.

⚠️ Warning: Critical for Multiple Mirrored Interfaces: When sniffing from multiple mirrored interfaces, VLANs, or switch ports, packets may arrive as duplicates (same traffic from multiple SPAN sources). This can cause incomplete calls, missing audio, or incorrect SIP/RTP session reassembly. Add auto_enable_use_blocks = yes to voipmonitor.conf. This enables automatic packet deduplication and defragmentation. See Sniffer_configuration for details.

B. Software-based Tunnelling

When hardware mirroring is not an option, many network devices and PBXs can encapsulate VoIP packets and send them to the sensor's IP address using a tunnel. VoIPmonitor natively supports a wide range of protocols.

  • Built-in Support: IP-in-IP, GRE, ERSPAN
  • UDP-based Tunnels: Configure the corresponding port in voipmonitor.conf:
    • udp_port_tzsp = 37008 (for MikroTik's TZSP)
    • udp_port_l2tp = 1701
    • udp_port_vxlan = 4789 (common in cloud environments)
  • Proprietary & Other Protocols:
    • AudioCodes Tunneling (uses udp_port_audiocodes or tcp_port_audiocodes)
    • HEP (Homer Encapsulation Protocol)
    • IPFIX (for Oracle SBCs) (enable ipfix* options)

HEP (Homer Encapsulation Protocol)

HEP is a lightweight protocol for capturing and mirroring VoIP packets. Many SBCs and SIP proxies (such as Kamailio, OpenSIPS, FreeSWITCH) support HEP to send a copy of traffic to a monitoring server.

Configuration in voipmonitor.conf:

# Enable HEP support
hep = yes

# Port to listen for HEP packets (default: 9060)
hep_bind_port = 9060

# Optional: Bind to specific IP address
# hep_bind_ip = 0.0.0.0

# Optional: Enable UDP binding (default: yes)
hep_bind_udp = yes

When hep = yes, VoIPmonitor listens for HEPv3 (and compatible HEPv2) packets and extracts the original VoIP traffic from the encapsulation.

Use Cases:

  • Remote SBCs or PBXs export traffic to a centralized VoIPmonitor server
  • Kamailio/FreeSWITCH siptrace module integration
  • Environments where standard tunnels (GRE/ERSPAN) are not available

ℹ️ Note: There is also hep_kamailio_protocol_id_fix = yes for Kamailio-specific protocol ID issues.

Known Limitations:

HEP Timestamp Precision

HEP3 packets include a timestamp field that represents when the packet was captured at the source. VoIPmonitor uses this HEP timestamp for the call record. If the source HEP server has an unreliable or unsynchronized time source, this can cause incorrect timestamps in the captured calls.

Currently, there is no built-in configuration option to ignore the HEP timestamp and instead use the time when VoIPmonitor receives the packet. If you need this functionality, please:

  • Request the feature on the product roadmap (no guaranteed ETA)
  • Consider a custom development project for a fee

No HEP Correlation ID Support

VoIPmonitor does not use HEP correlation ID (captureNodeID) to correlate SIP and RTP packets.

When SIP signaling and RTP media are encapsulated in HEP and arrive from different HEP sources (different capture nodes or sensors), VoIPmonitor cannot correlate them into a single CDR using the HEP protocol metadata.

⚠️ Warning: HEP Correlation Limitation:

  • HEP Source A sends SIP packets
  • HEP Source B sends RTP packets for the same call
  • VoIPmonitor ignores the HEP captureNodeID/correlation ID, so it has no way to merge the two sources
  • Result: SIP and RTP are NOT correlated; the call appears incomplete or missing

VoIPmonitor extracts the payload from HEP encapsulation and correlates using standard SIP Call-ID, To/From tags, and RTP SSRC fields. It does not utilize the HEP envelope metadata fields (correlation ID, capture node ID, etc.) for cross-sensor correlation.

Workaround: None is currently available. The options are to wait for a future release that adds HEP correlation ID support (feature request VS-1703 has been logged) or to pursue a custom paid implementation.

Pre-Deployment Compatibility Verification

Before full production deployment, especially when integrating VoIPmonitor with network hardware (Cisco/Juniper routers, SBCs), or complex mirroring setups (RSPAN, ERSPAN, tunnels), it is highly recommended to verify that VoIPmonitor can correctly capture and process packets from your specific environment.

This approach allows you to identify compatibility issues early, without committing to a full deployment that may need adjustments.

Typical Use Cases:

  • Deploying a dedicated sensor with SPAN/RSPAN from a Cisco router or switch
  • Using ERSPAN to forward VoIP traffic across routers
  • Capturing from proprietary SBCs or VoIP gateways (Cisco C2951, AudioCodes, etc.)
  • Implementing newer or complex tunneling protocols (VXLAN, GRE with specific configurations)

Verification Workflow:

  1. Configure Mirroring in Test Mode: Set up the SPAN, RSPAN, ERSPAN, or tunnel configuration to forward a small subset of VoIP traffic to a test sensor or VM.
  2. Capture Test Calls:
    • Make a few test calls through your VoIP system.
    • Using tcpdump or tshark, capture the mirrored traffic into a pcap file:
    # Example: capture all traffic on the mirrored interface so the pcap contains
    # both SIP signaling and RTP, which uses dynamic ports
    sudo tcpdump -i eth0 -s0 -w /tmp/compatibility_test.pcap
    
  3. Verify Packet Capture:
    • Confirm the pcap contains both SIP signaling and RTP audio:
    tshark -r /tmp/compatibility_test.pcap -Y "sip || rtp"
    
    • Check for expected packet sizes, codecs, and call flow (an additional RTP stream check is shown after this list).
  4. Submit for Analysis: Send the pcap file to VoIPmonitor support along with details about:
    • Your network hardware (Cisco router model, switch model, SBC model)
    • Mirroring method (SPAN, RSPAN, ERSPAN, GRE, VXLAN, etc.)
    • Any special configurations (VLAN tags, MPLS labels, encapsulation)
    • Your planned deployment (on-host vs. dedicated sensor, client/server vs. standalone)
  5. Feedback and Adjustment: Support will analyze the pcap and confirm if VoIPmonitor can process your specific traffic structure. They may recommend configuration changes (e.g., adjusting sipport, enabling tunnel decapsulation, modifying TCP/UDP port settings) or identify incompatible traffic patterns.
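
For the RTP check in step 3, tshark's RTP statistics can also summarize the streams found in the capture (an optional extra check, assuming a tshark build with RTP analysis support; the file name matches the example above):

# List RTP streams (SSRC, payload type, packet counts, loss) in the test pcap
tshark -r /tmp/compatibility_test.pcap -q -z rtp,streams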

Benefits of Pre-Deployment Testing:

  • Confirms VoIPmonitor compatibility with your specific hardware and network setup
  • Identifies configuration needs before full production deployment
  • Saves time by avoiding trial-and-error during go-live
  • Provides documented proof of concept for stakeholders
  • Allows tuning of sensor resources (CPU/RAM/disk) based on actual traffic characteristics

If verification fails or reveals incompatibilities, support can often suggest alternative approaches or configuration adjustments before you proceed.

Cloud Packet Mirroring (GCP, AWS, Azure)

Cloud providers offer native packet mirroring services that can forward traffic to a dedicated VoIPmonitor sensor. These services typically use VXLAN or GRE encapsulation.

Supported Cloud Services:

  • Google Cloud Platform (GCP): Packet Mirroring
  • Amazon Web Services (AWS): Traffic Mirroring
  • Microsoft Azure: Virtual Network TAP

Configuration Steps:

  1. Create a Dedicated Sensor VM: Deploy a VoIPmonitor sensor instance in your cloud environment. This VM should be sized appropriately for your expected traffic volume.
  2. Configure Cloud Mirroring Policy: In your cloud provider's console, create a mirroring policy:
    • Select source VMs or subnets where your VoIP traffic (PBX/SBC) originates.
    • Set the destination to the internal IP of your VoIPmonitor sensor VM.
    • Ensure the encapsulation protocol is compatible with VoIPmonitor (VXLAN is recommended and most common; an AWS CLI example follows these steps).
  3. Critical: Bidirectional Capture: Configure the mirroring policy to capture traffic in BOTH directions:
    • INGRESS (traffic arriving at the mirrored sources)
    • EGRESS (traffic leaving the mirrored sources)
    • Select the BOTH option (or enable both directions) so that every call leg is captured completely

⚠️ Warning: Capturing only ingress or only egress will result in incomplete call data and broken CDRs.

4. Configure VoIPmonitor Sensor:
# Enable VXLAN support for cloud packet mirroring
udp_port_vxlan = 4789

# Interface configuration
interface = eth0

# SIP ports
sipport = 5060

# Optional: Filter at source to save bandwidth
# Configure cloud mirroring filters to forward only SIP/RTP traffic
5. VM Sizing for Cloud Sensor: Size the sensor VM instance for the expected traffic volume:
  • vCPU: Allow 1-2 cores per 100 concurrent calls (adjusted for codec complexity and packet rate).
  • RAM: 4GB minimum for production; more if using on-disk compression or high PCAP retention.
  • Storage: Use SSD or high-throughput block storage for the spooldir. VoIPmonitor is I/O intensive — persistent disk performance is critical to avoid packet loss.
  • Network: Ensure sufficient NIC bandwidth; mirroring multiple high-traffic sources can saturate the sensor's interface.
6. NTP Synchronization: Accurate timekeeping is critical. Ensure all VMs (sources, sensor, and related infrastructure) use the cloud provider's internal NTP servers or a reliable external NTP source.
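
As a concrete illustration of step 2, AWS Traffic Mirroring can also be set up from the CLI instead of the console (a hedged sketch; the ENI, filter, and target IDs are placeholders you must replace, and the rules below simply mirror all UDP in both directions as a starting point, to be narrowed per the "Filter at the Source" best practice below):

# 1) Target: the sensor's network interface (traffic arrives VXLAN-encapsulated on UDP/4789)
aws ec2 create-traffic-mirror-target \
    --network-interface-id eni-0sensor000000000 \
    --description "VoIPmonitor sensor"

# 2) Filter with ingress + egress rules so both directions of each call are mirrored
aws ec2 create-traffic-mirror-filter --description "VoIP traffic"
aws ec2 create-traffic-mirror-filter-rule \
    --traffic-mirror-filter-id tmf-0example0000000 \
    --traffic-direction ingress --rule-number 10 --rule-action accept \
    --protocol 17 --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0
aws ec2 create-traffic-mirror-filter-rule \
    --traffic-mirror-filter-id tmf-0example0000000 \
    --traffic-direction egress --rule-number 10 --rule-action accept \
    --protocol 17 --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0

# 3) Session: attach the mirroring to the PBX/SBC instance's network interface
aws ec2 create-traffic-mirror-session \
    --network-interface-id eni-0pbx0000000000000 \
    --traffic-mirror-target-id tmt-0example0000000 \
    --traffic-mirror-filter-id tmf-0example0000000 \
    --session-number 1

If SIP runs over TCP or TLS in your environment, add corresponding rules for protocol 6 (TCP) as well.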

Best Practices for Cloud Mirroring:

  • Filter at the Source: Use cloud mirroring filters to forward only SIP signaling and RTP audio ports. Sending all network traffic (HTTP, SSH, etc.) wastes CPU and bandwidth.
  • Monitor Network Limits: Cloud NICs have bandwidth limits (e.g., 10 Gbps). Mirroring multiple high-traffic sources may saturate the sensor VM's interface.
  • MTU Considerations: VXLAN adds ~50 bytes of overhead. If original packets are close to a 1500-byte MTU, the encapsulated packets may exceed it, causing fragmentation or drops. Ensure the network path supports jumbo frames or handles fragmentation correctly (see the MTU check after this list).
  • Test Load: Start with filtered ports and a subset of traffic, monitor performance, then expand to full production volume.
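
For the MTU point above, it can help to confirm the sensor NIC can carry the VXLAN-encapsulated frames (a small check; eth0 and the 9000-byte jumbo MTU are only examples, and raising the MTU helps only where the underlying cloud network supports it):

# Show the current MTU of the capture interface
ip link show eth0 | grep -o "mtu [0-9]*"

# Where the cloud network supports jumbo frames, raise the MTU so 1500-byte
# originals plus ~50 bytes of VXLAN overhead still fit in a single frame
sudo ip link set dev eth0 mtu 9000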

Alternative: Client/Server Architecture with On-Host Sensors

Instead of cloud packet mirroring, consider installing VoIPmonitor sensors directly on each PBX/SBC VM using the Client/Server architecture:

  • Install sensor on each Asterisk/SBC VM (on-host capture)
  • Sensors process calls locally or forward packets via packetbuffer_sender to a central collector
  • Eliminates mirroring overhead and potential incomplete capture issues
  • May have better performance for high-traffic environments

Distributed Deployment Models

For monitoring multiple remote offices or a large infrastructure, a distributed model is essential. This involves a central GUI/Database server collecting data from multiple remote sensors.

Classic Mode: Standalone Remote Sensors

In this traditional model, each remote sensor is a fully independent entity.

  • How it works: The remote sensor processes packets and stores PCAPs locally. It connects directly to the central MySQL/MariaDB database to write CDRs. For PCAP retrieval, the GUI typically needs network access to each sensor's management port (default TCP/5029).
  • Pros: Simple conceptual model.
  • Cons: Requires opening firewall ports to each sensor and managing database credentials on every remote machine.

Alternative PCAP Access: NFS/SSHFS Mounting

For environments where direct TCP/5029 access to remote sensors is impractical (e.g., firewalls, VPN limitations), you can mount remote spool directories on the central GUI server using NFS or SSHFS.

Use Cases:

  • Firewall policies block TCP/5029 but allow SSH or NFS traffic
  • Remote sensors have local databases that need to be queried separately
  • You want the GUI to access PCAPs directly from mounted filesystems instead of proxying through TCP/5029

Configuration Steps:

  1. Mount remote spools on GUI server:

Using NFS:

# On GUI server, mount remote spool directory
sudo mount -t nfs 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
sudo mount -t nfs 10.224.0.102:/var/spool/voipmonitor /mnt/voipmonitor/sensor2

# Add to /etc/fstab for persistent mounts
10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  nfs  defaults  0  0
10.224.0.102:/var/spool/voipmonitor  /mnt/voipmonitor/sensor2  nfs  defaults  0  0
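
For the NFS mounts above to work, the remote sensor must first export its spool directory to the GUI server (a minimal /etc/exports sketch on the sensor side; 10.224.0.50 stands in for the GUI server's IP, and the options should be tightened to match your security policy):

# /etc/exports on the remote sensor: allow the GUI server to mount the spool
/var/spool/voipmonitor  10.224.0.50(rw,sync,no_subtree_check)

# Apply the export without restarting the NFS server
sudo exportfs -ra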

Using SSHFS:

# On GUI server, mount remote spool via SSHFS
sshfs voipmonitor@10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
sshfs voipmonitor@10.224.0.102:/var/spool/voipmonitor /mnt/voipmonitor/sensor2

# Add to /etc/fstab for persistent mounts (with key-based auth)
voipmonitor@10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  fuse.sshfs  defaults,IdentityFile=/home/voipmonitor/.ssh/id_rsa  0  0
2. Configure PCAP spooldir path in GUI:

In the GUI, go to Settings > System Configuration > Sniffer data path and set it to search multiple spool directories. Each directory is separated by a colon (:).

Sniffer data path: /var/spool/voipmonitor:/mnt/voipmonitor/sensor1:/mnt/voipmonitor/sensor2

The GUI will search these paths in order when looking for PCAP files.
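
Before relying on the new path, it is worth confirming that the web server user can actually traverse the mounted directories (a quick check; www-data is the Debian/Ubuntu web server user and is only an assumption here, use your distribution's equivalent):

# Verify the GUI (web server) user can list a mounted remote spool
sudo -u www-data ls -l /mnt/voipmonitor/sensor1 | head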

3. Register remote sensors in GUI:

Go to Settings > Sensors and register each remote sensor:

  • Sensor ID: Must match id_sensor in each remote's voipmonitor.conf
  • Name: Descriptive name (e.g., "Site 1 - London")
  • Manager IP, Port: Optional with NFS/SSHFS mount (leave empty if mounting spools directly)

Important Notes:

  • Each remote sensor must have a unique id_sensor configured in voipmonitor.conf
  • Remote sensors write directly to a MySQL database (either a local instance or the central database, depending on how they are configured)
  • Filter calls by site using the id_sensor column in the CDR view
  • Ensure mounted directories are writable by the GUI user for PCAP uploads
  • For better performance, use NFS with async or SSHFS with caching options

Filtering and Site Identification:

  • In the CDR view, use the Sensor dropdown filter to select specific sites
  • Alternatively, filter by IP address ranges using CDR columns
  • The id_sensor column in the database uniquely identifies which sensor captured each call
  • Sensor names can be customized in Settings > Sensors for easier identification

Comparison: TCP/5029 vs NFS/SSHFS

Approach | Network Traffic | Firewall Requirements | Performance | Use Case
TCP/5029 Proxy (Standard) | On-demand fetch per request | TCP/5029 outbound from GUI to sensors | Better (no continuous mount overhead) | Most deployments
NFS Mount | Continuous (filesystem access) | NFS ports (usually 2049) bidirectional | Excellent (local filesystem speed) | Local networks, high throughput
SSHFS Mount | Continuous (encrypted filesystem) | SSH (TCP/22) outbound from GUI | Good (some encryption overhead) | Remote sites, cloud/VPN

Troubleshooting NFS/SSHFS Mounts

If you experience missing CDRs or PCAP files for a specific time period, or if the GUI reports files not found despite sensors receiving traffic, the issue is often NFS/SSHFS connectivity between the probe and storage server.

Check for NFS/SSHFS Connectivity Issues

Missing data (both CDRs and PCAPs) for a specific time period is typically caused by network unavailability between the VoIPmonitor probe and the NFS/SSHFS storage server.

1. Check system logs for NFS or SSHFS errors:

# Review the voipmonitor service logs around the affected time window
journalctl -u voipmonitor --since "2024-01-01" --until "2024-01-02"

# Look for specific patterns in syslog
grep "nfs: server.*not responding" /var/log/syslog
grep "nfs.*timed out" /var/log/syslog
grep "I/O error" /var/log/syslog

# For SSHFS issues
grep "sshfs.*Connection reset" /var/log/syslog
grep "sshfs.*Transport endpoint is not connected" /var/log/syslog

Key error messages to look for:

  • nfs: server 192.168.1.100 not responding, timed out - NFS server unreachable
  • nfs: server 192.168.1.100 OK - Connection restored after interruption
  • Stale file handle - NFS mount needs remounting
  • Transport endpoint is not connected - SSHFS mount disconnected

2. Verify network connectivity to the storage server:

# Ping test to the NFS/SSHFS server
ping 192.168.1.100

# Trace the network path to identify bottlenecks
traceroute 192.168.1.100

# Test DNS resolution if using hostnames
nslookup storage-server.domain.com

3. Ensure the NFS/SSHFS server is running and accessible:

# On the probe/sensor side - check if mount is active
mount | grep nfs
mount | grep fuse.sshfs

# Check mount status for all mounted spool directories
stat /mnt/voipmonitor/sensor1

# On the NFS server side - verify services are running
systemctl status nfs-server
systemctl status sshd

4. Check for mount-specific issues:

# Test NFS mount manually (unmount and remount)
sudo umount /mnt/voipmonitor/sensor1
sudo mount -t nfs 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1

# Check /etc/fstab for mount errors
sudo mount -a  # Test all mounts in /etc/fstab

# Verify mount permissions
ls -la /mnt/voipmonitor/sensor1

Common Causes of Missing Data

Symptom | Most Likely Cause | Troubleshooting Step
Gap in data during a specific time period | NFS/SSHFS server unreachable | Check logs for "not responding, timed out"
Stale file handle errors | NFS server rebooted or export changed | Remount the NFS share
Connection resets | Network interruption or unstable connection | Check network stability and ping times
Very slow file access | Network latency or bandwidth saturation | Monitor network throughput
GUI shows "File not found" | Mount point dismounted | Check mount status and remount if needed

Preventative Measures

To minimize data loss from NFS/SSHFS connectivity issues:

Use TCP for NFS (more reliable than UDP):

# Mount NFS with TCP explicitly
sudo mount -t nfs -o tcp 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1

Use the hard,nofail mount options:

# In /etc/fstab
10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  nfs  hard,nofail,tcp  0  0
  • hard: Make NFS operations wait indefinitely instead of timing out
  • nofail: Do not fail if the mount is unavailable at boot time

Monitor mount status: Set up automated monitoring to alert when NFS/SSHFS mounts become unresponsive or disconnected.
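
A minimal watchdog along these lines can be run from cron to flag dead mounts (a hypothetical sketch; the mount point list and syslog tag are assumptions, so adapt them and wire the alert into your monitoring system):

#!/bin/bash
# Hypothetical mount watchdog: logs to syslog when a spool mount stops responding.
MOUNTPOINTS="/mnt/voipmonitor/sensor1 /mnt/voipmonitor/sensor2"   # adjust to your mounts

for mp in $MOUNTPOINTS; do
    # stat on a dead NFS/SSHFS mount can hang, so bound it with a timeout
    if ! timeout 10 stat "$mp" > /dev/null 2>&1; then
        logger -t voipmonitor-mount-watchdog "Spool mount $mp is not responding"
    fi
done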

Consider Client/Server mode as alternative: If NFS/SSHFS connectivity is unreliable, consider using the modern Client/Server architecture instead, which uses encrypted TCP channels and is more resilient to network interruptions.

Modern Mode: Client/Server Architecture (v20+) — Recommended

This model uses a secure, encrypted TCP channel between remote sensors (clients) and a central sensor instance (server). The GUI communicates with the central server only, which significantly simplifies networking and security.

This architecture supports two primary modes:

  1. Local Processing: Remote sensors process packets locally and send only lightweight CDR data over the encrypted channel. PCAPs remain on the remote sensor. On-demand PCAP fetch is proxied via the central server (to the sensor's TCP/5029).
  2. Packet Mirroring: Remote sensors forward the entire raw packet stream to the central server, which performs all processing and storage. Ideal for low-resource remote sites.

Step-by-Step Configuration Guide

Prerequisites

  • VoIPmonitor v20+ on all sensors.
  • Central database reachable from the central server instance.
  • Unique id_sensor per sensor (< 65536).
  • NTP running everywhere (see Time Synchronization below).

Scenario A — Local Processing (default, low WAN usage)

# /etc/voipmonitor.conf on the REMOTE sensor (LOCAL PROCESSING)

id_sensor               = 2          # unique per sensor (< 65536)
server_destination      = 10.224.0.250
server_destination_port = 60024
server_password         = your_strong_password

packetbuffer_sender     = no         # local analysis; sends only CDRs
interface               = eth0       # or: interface = any
sipport                 = 5060       # example; add your usual sniffer options

# No MySQL credentials here — remote sensor does NOT write to DB directly.

# /etc/voipmonitor.conf on the CENTRAL server (LOCAL PROCESSING)

server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = your_strong_password

mysqlhost               = 10.224.0.201
mysqldb                 = voipmonitor
mysqluser               = voipmonitor
mysqlpassword           = db_password

cdr_partition           = yes        # partitions for CDR tables
mysqlloadconfig         = yes        # allows DB-driven config if used

interface               =            # leave empty to avoid local sniffing
# The central server will proxy on-demand PCAP fetches to sensors (TCP/5029).

Scenario B — Packet Mirroring (centralized processing/storage)

# /etc/voipmonitor.conf on the REMOTE sensor (PACKET MIRRORING)

id_sensor               = 3
server_destination      = 10.224.0.250
server_destination_port = 60024
server_password         = your_strong_password

packetbuffer_sender     = yes        # send RAW packet stream to central
interface               = eth0       # capture source; no DB settings needed

# /etc/voipmonitor.conf on the CENTRAL server (PACKET MIRRORING)

server_bind             = 0.0.0.0
server_bind_port        = 60024
server_password         = your_strong_password

mysqlhost               = 10.224.0.201
mysqldb                 = voipmonitor
mysqluser               = voipmonitor
mysqlpassword           = db_password

cdr_partition           = yes
mysqlloadconfig         = yes

# As this server does all analysis, configure as if sniffing locally:
sipport                 = 5060
# ... add your usual sniffer/storage options (pcap directories, limits, etc.)

Firewall Checklist (Quick Reference)

  • Modern Client/Server (v20+):
    • Central Server: Allow inbound TCP/60024 from remote sensors. Allow inbound TCP/5029 from the GUI (management/API to the central sensor). A firewalld sketch follows this checklist.
    • Remote Sensors (Local Processing only): Allow inbound TCP/5029 from the central server (for on-demand PCAP fetch via proxy). Outbound TCP/60024 to the central server.
  • Cloud Mode:
    • Remote Sensors: Allow outbound TCP/60023 to cloud.voipmonitor.org.
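
On distributions using firewalld, the central-server rules from the checklist translate roughly as follows (a sketch only; adapt it to iptables/nftables or your cloud security groups, and restrict the source addresses where possible):

# Central server: accept sensor connections and GUI management traffic
sudo firewall-cmd --permanent --add-port=60024/tcp   # remote sensors -> central server
sudo firewall-cmd --permanent --add-port=5029/tcp    # GUI -> central sensor manager/API
sudo firewall-cmd --reload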

Configuration & Checklists

Parameter Notes (clarifications)

  • id_sensor — Mandatory in any distributed deployment (Classic or Client/Server). Must be unique per sensor (< 65536). The value is written to the database and used by the GUI to identify where a call was captured.
  • cdr_partition — In Client/Server, enable on the central server instance that writes to the database. It can be disabled on remote "client" sensors that only mirror packets.
  • mysqlloadconfig — When enabled, the sensor can load additional parameters dynamically from the sensor_config table in the database. Typically enabled on the central server sensor that writes to DB; keep disabled on remote clients which do not access DB directly.
  • interface — Use a specific NIC (e.g., eth0) or any to capture from all NICs. When using any, promiscuous mode must be enabled on each NIC manually.

Initial Service Start & Database Initialization

After installation, the first startup against a new/empty database is critical.

  1. Start the service: systemctl start voipmonitor
  2. Follow logs to ensure schema/partition creation completes:
    • journalctl -u voipmonitor -f
    • or tail -f /var/log/syslog | grep voipmonitor

You should see creation of functions and partitions shortly after start. If you see errors like Table 'cdr_next_1' doesn't exist, the sensor is failing to initialize the schema — usually due to insufficient DB privileges or connectivity. Fix DB access and restart the sensor so it can finish initialization.
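
A quick way to rule out the credentials/connectivity part is to test the configured MySQL access from the central server itself (a sanity check, assuming the mysql client is installed; the host and user follow the example configuration above):

# Connect with the sensor's credentials and show what the user is allowed to do
mysql -h 10.224.0.201 -u voipmonitor -p -e "SHOW GRANTS FOR CURRENT_USER();"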

Time Synchronization

Accurate and synchronized time is critical for correlating call legs from different sensors. All servers (GUI, DB, and all Sensors) must run an NTP client (e.g., chrony or ntpdate) to keep clocks in sync.
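
On each machine, clock synchronization can be verified from the shell (examples assuming chrony and systemd; use the equivalent commands for your NTP client):

# With chrony: show the current offset and the selected time source
chronyc tracking

# With systemd: confirm the system clock is reported as synchronized
timedatectl status | grep -i synchronized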

Comparison of Remote Deployment Modes

Deployment Model | Packet Processing Location | PCAP Storage Location | Network Traffic to Central Server | GUI Connectivity
Classic Standalone | Remote | Remote | Minimal (MySQL CDRs) | GUI ↔ each Sensor (management port)
Modern Client/Server (Local Processing) | Remote | Remote | Minimal (Encrypted CDRs) | GUI ↔ Central Server only (central proxies PCAP fetch)
Modern Client/Server (Packet Mirroring) | Central | Central | High (Encrypted full packets) | GUI ↔ Central Server only

FAQ & Common Pitfalls

  • Do remote sensors need DB credentials in Client/Server? No. Only the central server instance writes to DB.
  • Why is id_sensor required everywhere? The GUI uses it to tag and filter calls by capture source.
  • Local Processing still fetches PCAPs from remote — who connects to whom? The GUI requests via the central server; the central server then connects to the remote sensor's TCP/5029 to retrieve the PCAP.

AI Summary for RAG

Summary: This guide covers deployment topologies for VoIPmonitor. It contrasts running the sensor on the same host as a PBX versus on a dedicated server. For dedicated sensors, it details methods for forwarding traffic, including hardware-based port mirroring (SPAN) and various software-based tunneling protocols (IP-in-IP, GRE, TZSP, VXLAN, HEP, AudioCodes, IPFIX). HEP (Homer Encapsulation Protocol) is a lightweight protocol for capturing and mirroring VoIP packets. When hep = yes, VoIPmonitor listens for HEPv3 (and compatible HEPv2) packets and extracts the original VoIP traffic from the encapsulation. CRITICAL HEP LIMITATION: VoIPmonitor does NOT use HEP correlation ID (captureNodeID) to correlate SIP and RTP packets. When SIP signaling and RTP media are encapsulated in HEP and arrive from different HEP sources (different capture nodes or sensors), VoIPmonitor cannot correlate them into a single CDR using HEP protocol metadata. This is feature request VS-1703 and there is currently no available workaround. The article covers cloud service packet mirroring options (GCP Packet Mirroring, AWS Traffic Mirroring, Azure Virtual Network TAP) with critical requirements: bidirectional capture (ingress and egress) and proper VM sizing (vCPU, RAM, storage I/O). The core of the article explains distributed architectures for multi-site monitoring, comparing the "classic" standalone remote sensor model with the modern, recommended "client/server" model. It details the two operational modes of the client/server architecture: local processing (sending only CDRs, PCAPs remain remote with central-proxied fetch) and packet mirroring (sending full, raw packets for central processing), which is ideal for low-resource endpoints. The article also explains an alternative approach for classic remote sensors: mounting PCAP spools via NFS or SSHFS when TCP/5029 access to sensors is blocked by firewalls, including troubleshooting steps for missing data due to NFS/SSHFS connectivity issues (checking logs for "not responding, timed out" errors, verifying network connectivity with ping/traceroute, and ensuring NFS/SSHFS server is running and accessible). The guide concludes with step-by-step configuration, firewall rules, critical parameter notes, and the importance of NTP plus first-start DB initialization.

Keywords: deployment, architecture, topology, on-host, dedicated sensor, port mirroring, SPAN, RSPAN, traffic mirroring, tunneling, GRE, TZSP, VXLAN, HEP, HEP correlation ID, captureNodeID, HEP limitation, HEP SIP RTP correlation, AudioCodes, IPFIX, cloud mirroring, GCP, AWS, Azure, Packet Mirroring, Traffic Mirroring, Virtual Network TAP, ingress, egress, bidirectional, VM sizing, remote sensor, multi-site, client server mode, packet mirroring, local processing, firewall rules, NTP, time synchronization, cloud mode, NFS, SSHFS, spooldir mounting, NFS troubleshooting, SSHFS troubleshooting, missing data, network connectivity

Key Questions:

  • Can I use cloud packet mirroring (GCP/AWS/Azure) with VoIPmonitor?
  • How should I configure cloud packet mirroring for ingress and egress traffic?
  • What is the difference between the classic remote sensor and the modern client/server mode?
  • When should I use packet mirroring (packetbuffer_sender) instead of local processing?
  • What are the firewall requirements for the client/server deployment model?
  • How can I access PCAP files from remote sensors if TCP/5029 is blocked?
  • How do I configure NFS or SSHFS to mount remote PCAP spools?
  • How do I configure the GUI sniffer data path for multiple mounted spools?
  • How do I troubleshoot missing CDRs or PCAPs when using NFS or SSHFS mounts?
  • What should I look for in logs to diagnose NFS connectivity issues?
  • Can I run the sensor on the same machine as my Asterisk/FreeSWITCH server?
  • What is a SPAN port and how is it used with VoIPmonitor?
  • Why is NTP important for a distributed VoIPmonitor setup?
  • What is HEP and how do I configure VoIPmonitor to receive HEP packets?
  • Does VoIPmonitor use HEP correlation ID (captureNodeID) to correlate SIP and RTP packets?
  • Can VoIPmonitor correlate SIP and RTP packets that arrive from different HEP sources?
  • Is there a workaround for HEP SIP/RTP correlation across multiple HEP capture nodes?
  • How do I configure GRE, ERSPAN, and VXLAN tunneling for VoIPmonitor?