{{DISPLAYTITLE:VoIPmonitor Deployment & Topology Guide}}

This guide covers VoIPmonitor deployment options: where to install the sensor, how to forward traffic to it, and distributed architectures for multi-site monitoring.

<kroki lang="mermaid">
</kroki>

= Sensor Deployment Options =

The first decision in any deployment is where the VoIPmonitor sensor (sniffer) will run.

== On-Host Capture ==

Install the sensor directly on the same Linux server that runs your PBX or SBC.

{| class="wikitable"
! Pros !! Cons
|-
| No extra hardware, network changes, or port mirroring required || Adds CPU, memory, and disk I/O load to the production voice server
|-
| Simplest possible setup || Not suitable when the server's resources are already constrained
|}

{{Note|1=The VoIPmonitor sensor runs '''exclusively on Linux'''. For Windows-based PBXs (e.g., the 3CX Windows edition), you must use a dedicated Linux sensor with traffic mirroring.}}

== Dedicated Sensor ==

A separate Linux server runs only the VoIPmonitor sensor. '''Recommended for production environments''' because it isolates the monitoring load from the voice platform.

'''When a dedicated sensor is required:'''
* Windows-based PBXs (the sensor is Linux-only)
* The PBX/SBC server has limited CPU, RAM, or disk I/O
* Zero monitoring impact on the production voice platform is required
* Centralized capture from multiple sites

= Traffic Forwarding Methods =

When using a dedicated sensor, you must forward a copy of the traffic to it using one of the following methods.

== Hardware Port Mirroring (SPAN/RSPAN) ==

A physical or virtual switch copies traffic from one or more source ports to a monitoring port. This feature is commonly called '''Port Mirroring''', '''SPAN''', or '''RSPAN'''.

=== Physical Switch ===

Configure the switch to mirror traffic from the PBX/SBC ports to the port connected to the sensor; consult your switch documentation for the exact commands. The sensor puts its capture interface into promiscuous mode automatically.

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf
interface = eth0
sipport = 5060
savertp = yes
</syntaxhighlight>

{{Tip|To capture from multiple interfaces, set <code>interface = any</code> and enable promiscuous mode on each NIC manually: <code>ip link set dev eth1 promisc on</code>}}

=== VMware/ESXi Virtual Switch ===

For virtualized environments, VMware provides port mirroring at the virtual switch level.

'''Standard vSwitch:'''
# In the vSphere Client, navigate to the ESXi host
# Select the virtual switch → Properties/Edit Settings → enable Port Mirroring
# Set the source (SBC VM) and destination (VoIPmonitor VM) ports

'''Distributed vSwitch:'''
# In the vSphere Web Client → Networking → select the distributed switch
# Configure tab → Port Mirroring → Create a mirroring session
# Specify the source/destination ports and enable the session

{{Note|1=Distributed switch mirroring can span multiple ESXi hosts within a cluster.}}

=== Multiple VoIP Platforms ===

A single sensor can monitor several separate VoIP platforms (e.g., an on-premise Mitel or FreeSWITCH plus a hosted platform) simply by mirroring all of their switch ports to the one port connected to the sensor. No configuration is needed on the platforms themselves: the sensor receives the combined stream and generates CDRs for all calls in a single unified interface.

'''Distinguishing platforms in the GUI:'''
* Filter by IP address ranges in the CDR view
* Filter by calling/called number prefixes if the platforms use different numbering plans
* Optionally run a separate dedicated sensor per platform with unique <code>id_sensor</code> values

{{Tip|Make sure the destination (mirror) port has enough bandwidth for the combined traffic of all sources, otherwise packets will be dropped.}}

{{Warning|1='''Critical:''' When sniffing from multiple mirrored interfaces, VLANs, or switch ports, the same packets may arrive more than once, causing incomplete calls, missing audio, or incorrect SIP/RTP session reassembly. Add <code>auto_enable_use_blocks = yes</code> to <code>voipmonitor.conf</code> to enable automatic packet deduplication and defragmentation. See [[Sniffer_configuration#auto_enable_use_blocks|Sniffer_configuration]] for details.}}
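
A minimal illustrative snippet combining these options for a sensor fed by several mirrored NICs (interface names are examples; adjust to your environment):

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf - sensor fed by multiple mirrored sources (illustrative)
interface = any                  # capture from all NICs; enable promiscuous mode on each one
auto_enable_use_blocks = yes     # deduplicate/defragment packets arriving from multiple SPAN sources
sipport = 5060
</syntaxhighlight>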

== Software-based Tunneling ==

When hardware mirroring is not an option, many network devices and PBXs can encapsulate VoIP packets and send them to the sensor's IP address over a tunnel. VoIPmonitor natively supports a wide range of protocols:

{| class="wikitable"
! Protocol !! Configuration parameter !! Notes
|-
| IP-in-IP, GRE, ERSPAN || Built-in (auto-detected) || No additional configuration needed
|-
| TZSP (MikroTik) || <code>udp_port_tzsp = 37008</code> ||
|-
| L2TP || <code>udp_port_l2tp = 1701</code> ||
|-
| VXLAN || <code>udp_port_vxlan = 4789</code> || Common in cloud environments
|-
| AudioCodes || <code>udp_port_audiocodes = 925</code> || <code>tcp_port_audiocodes</code> is also available; see [[Audiocodes_tunneling|AudioCodes Tunneling]]
|-
| IPFIX (Oracle SBCs) || <code>ipfix*</code> options || Enable the ipfix options in the configuration
|}

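As an illustrative example (values taken from the table above; enable only the tunnels you actually receive), the UDP-based tunnels are activated simply by setting the corresponding listening port:

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf - enable selected tunnel decapsulation (illustrative)
udp_port_tzsp = 37008     # MikroTik TZSP
udp_port_vxlan = 4789     # VXLAN (common in cloud environments)
udp_port_l2tp = 1701      # L2TP
</syntaxhighlight>
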
=== HEP (Homer Encapsulation Protocol) ===

HEP is a lightweight protocol for capturing and mirroring VoIP packets; Kamailio, OpenSIPS, FreeSWITCH, and many SBCs can use it to send a copy of traffic to a monitoring server. With <code>hep = yes</code>, VoIPmonitor listens for HEPv3 (and compatible HEPv2) packets and extracts the original VoIP traffic from the encapsulation.

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf
hep = yes
hep_bind_port = 9060                    # default HEP listening port
hep_bind_udp = yes
# hep_bind_ip = 0.0.0.0                 # optional: bind to a specific IP address
# hep_kamailio_protocol_id_fix = yes    # optional: workaround for Kamailio protocol ID issues
</syntaxhighlight>

'''Known Limitations:'''

{{Warning|1='''HEP correlation ID not supported:''' VoIPmonitor does NOT use the HEP correlation ID (captureNodeID) to correlate SIP and RTP packets. If SIP signaling and RTP media arrive from different HEP sources (different capture nodes), they will NOT be correlated into a single CDR. VoIPmonitor extracts the payload from the HEP encapsulation and correlates only on the standard SIP Call-ID, To/From tags, and RTP SSRC fields. Feature request VS-1703 has been logged, but there is currently no workaround.}}

'''HEP timestamp precision:''' VoIPmonitor uses the HEP timestamp (the capture time at the source) for the call record. If the exporting device has an unsynchronized clock, call timestamps will be incorrect; there is currently no option to ignore the HEP timestamp and use the packet arrival time instead.

'''HEP3 with port 0:''' HEP3 packets carrying port 0 are not captured by default. Add port 0 to the <code>sipport</code> directive:

<syntaxhighlight lang="ini">
sipport = 0,5060
</syntaxhighlight>

== Cloud Packet Mirroring ==

Cloud providers offer native packet mirroring services that can forward traffic to a dedicated VoIPmonitor sensor, typically using '''VXLAN''' or '''GRE''' encapsulation.

{| class="wikitable"
! Provider !! Service name
|-
| Google Cloud Platform (GCP) || Packet Mirroring
|-
| Amazon Web Services (AWS) || Traffic Mirroring
|-
| Microsoft Azure || Virtual Network TAP
|}

'''Configuration Steps:'''

# Create a VoIPmonitor sensor VM in your cloud environment, sized for the expected traffic volume
# Create a mirroring policy: select the source VMs/subnets where your VoIP traffic (PBX/SBC) originates and set the destination to the sensor VM
# '''Critical:''' capture traffic in '''BOTH directions''' (INGRESS and EGRESS)
# Configure the sensor:

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf
udp_port_vxlan = 4789
interface = eth0
sipport = 5060
</syntaxhighlight>

{{Warning|1=Capturing only ingress or only egress results in incomplete CDRs and broken call data.}}

'''Best Practices:'''
* Filter at the source so that only SIP signaling and RTP ports are mirrored; forwarding all traffic wastes CPU and bandwidth
* Monitor NIC bandwidth limits; mirroring several high-traffic sources can saturate the sensor's interface
* Account for the ~50 bytes of VXLAN overhead; packets close to a 1500-byte MTU may be fragmented or dropped, so consider jumbo frames (see the example below)
* Size the sensor VM appropriately; the spool directory is I/O intensive, so use SSD or high-throughput block storage
* Ensure NTP synchronization across all VMs
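
If jumbo frames are needed to absorb the VXLAN overhead, the MTU can be raised on the sensor's capture interface, provided the entire network path supports it. An illustrative command (interface name and value are examples):

<syntaxhighlight lang="bash">
# Raise the MTU on the capture interface (verify underlay/switch support first)
sudo ip link set dev eth0 mtu 9000
</syntaxhighlight>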

'''Alternative:''' Instead of cloud packet mirroring, consider installing on-host sensors on each PBX/SBC VM and using the [[Sniffer_distributed_architecture|Client/Server architecture]]; this avoids mirroring overhead and often performs better in high-traffic environments.

== Pre-Deployment Verification ==

For complex capture setups (SPAN/RSPAN from Cisco hardware, ERSPAN across routers, proprietary SBCs or gateways, VXLAN/GRE tunnels), verify that VoIPmonitor can process your traffic before committing to a full production deployment:

# Configure the mirroring or tunnel in test mode, forwarding only a small subset of VoIP traffic
# Make a few test calls and capture the mirrored traffic (omit port filters so that RTP is included): <code>sudo tcpdump -i eth0 -s0 -w /tmp/test.pcap</code>
# Verify the pcap contains both SIP and RTP: <code>tshark -r /tmp/test.pcap -Y "sip || rtp"</code>
# Submit the pcap to VoIPmonitor support together with details of your hardware, mirroring method, and planned deployment; support will confirm compatibility or recommend configuration changes before you go live

= Distributed Architectures =

For monitoring multiple remote offices or a large infrastructure, a central GUI/database server collects data from multiple remote sensors. Sensors can be deployed in two main models.

== Classic Mode: Standalone Sensors ==

Each sensor operates independently:
* Processes packets and stores PCAPs locally
* Connects directly to the central MySQL/MariaDB database to write CDRs (see the sketch below)
* The GUI needs network access to each sensor's management port (<code>TCP/5029</code>) to retrieve PCAPs
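
An illustrative minimal configuration for such a standalone remote sensor; the addresses, credentials, and sensor ID are placeholders for your own environment:

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on a classic standalone remote sensor (illustrative)
id_sensor = 5                    # unique per sensor
interface = eth0
sipport = 5060

# CDRs are written directly to the central database
mysqlhost = 10.224.0.201
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password

# PCAPs stay in the local spool; the GUI fetches them via TCP/5029
</syntaxhighlight>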

'''Alternative: NFS/SSHFS Mounting'''

If firewall or VPN restrictions block TCP/5029 access to the remote sensors, you can instead mount their spool directories on the GUI server via NFS or SSHFS:

<syntaxhighlight lang="bash">
# NFS mount on the GUI server
sudo mount -t nfs 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1

# SSHFS mount (key-based authentication recommended)
sshfs voipmonitor@10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
</syntaxhighlight>

In the GUI, set '''Settings > System Configuration > Sniffer data path''' to search all spool directories, separated by colons:
<code>/var/spool/voipmonitor:/mnt/voipmonitor/sensor1:/mnt/voipmonitor/sensor2</code>

{{Tip|For NFS, use the <code>hard,nofail,tcp</code> mount options for reliability.}}
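
For persistent mounts, add the share to <code>/etc/fstab</code> with those options (paths and IPs below are examples):

<syntaxhighlight lang="bash">
# /etc/fstab on the GUI server
10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  nfs  hard,nofail,tcp  0 0
</syntaxhighlight>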

== Modern Mode: Client/Server (v20+) — Recommended ==

Remote sensors (clients) connect to a central sensor instance (server) over a secure, encrypted TCP channel. The GUI communicates only with the central server, which greatly simplifies networking and security.

<kroki lang="mermaid">
</kroki>

This architecture supports two primary modes:

{| class="wikitable"
! Mode !! Processing !! PCAP storage !! WAN traffic !! Best for
|-
| '''Local Processing''' (<code>packetbuffer_sender = no</code>) || Remote || Remote || Low (CDRs only; PCAPs stay on the remote sensor and are fetched on demand via the central server) || Sites with limited WAN bandwidth
|-
| '''Packet Mirroring''' (<code>packetbuffer_sender = yes</code>) || Central || Central || High (full raw packet stream) || Low-resource remote sites
|}

For detailed configuration, see [[Sniffer_distributed_architecture|Distributed Architecture: Client-Server Mode]].

'''Prerequisites:''' VoIPmonitor v20+ on all sensors, a unique <code>id_sensor</code> per sensor (< 65536), the central database reachable from the central server, and NTP running everywhere.

'''Quick Start - Remote Sensor (Local Processing):'''

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the remote sensor
id_sensor = 2                          # unique per sensor (< 65536)
server_destination = 10.224.0.250
server_destination_port = 60024
server_password = your_strong_password
packetbuffer_sender = no               # local analysis; only CDRs are sent over the encrypted channel
interface = eth0
sipport = 5060
# No MySQL credentials needed - the central server writes to the database
</syntaxhighlight>

'''Quick Start - Central Server:'''

<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf on the central server
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = your_strong_password

mysqlhost = 10.224.0.201
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
cdr_partition = yes

interface =                            # leave empty - do not sniff locally
</syntaxhighlight>

For Packet Mirroring mode, the remote sensor instead sets <code>packetbuffer_sender = yes</code> and needs no database settings; the central server then performs all analysis and storage, so configure its usual sniffer/storage options (<code>sipport</code>, spool directories, limits) there as well.

== Firewall Requirements ==

{| class="wikitable"
! Deployment !! Port !! Direction !! Purpose
|-
| Client/Server || TCP/60024 || Remote → Central || Encrypted CDR/packet channel
|-
| Client/Server || TCP/5029 || Central → Remote || On-demand PCAP fetch (Local Processing mode)
|-
| GUI access || TCP/5029 || GUI → Central || Management/API
|-
| Cloud mode || TCP/60023 || Sensor → cloud.voipmonitor.org || Cloud service connection
|}
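
As an illustration only (commands assume firewalld; adapt them to your firewall tooling), opening the client/server ports on the central server could look like this:

<syntaxhighlight lang="bash">
# On the central server: allow sensor connections and GUI/API access (illustrative)
sudo firewall-cmd --permanent --add-port=60024/tcp   # encrypted channel from remote sensors
sudo firewall-cmd --permanent --add-port=5029/tcp    # management/API access from the GUI
sudo firewall-cmd --reload
</syntaxhighlight>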

= Configuration Notes =

== Critical Parameters ==

{| class="wikitable"
! Parameter !! Description !! Notes
|-
| <code>id_sensor</code> || Unique sensor identifier (1-65535) || '''Mandatory''' in any distributed deployment; written to the database and used by the GUI to identify where a call was captured
|-
| <code>cdr_partition</code> || Enable daily partitions for the CDR tables || Enable on the server instance that writes to the database
|-
| <code>mysqlloadconfig</code> || Load additional parameters from the <code>sensor_config</code> table in the database || Enable on the central server only; keep disabled on remote clients that do not access the database
|-
| <code>interface</code> || Capture interface || Use a specific NIC or <code>any</code> (ensure promiscuous mode on each NIC)
|}

== Time Synchronization ==

{{Warning|1=Accurate NTP synchronization is '''critical''' for correlating call legs across sensors. All servers (GUI, database, and all sensors) must run an NTP client such as <code>chrony</code> or <code>ntpd</code>.}}
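
To confirm that a host's clock is actually synchronized, illustrative checks on a systemd-based system (assuming chrony or systemd-timesyncd is in use) are:

<syntaxhighlight lang="bash">
# Overall clock and NTP status
timedatectl status

# If chrony is used: current synchronization source and offset
chronyc tracking
</syntaxhighlight>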

== First Startup ==

The first startup against a new, empty database is critical: the sensor creates the required schema, functions, and partitions.
# Start the service: <code>systemctl start voipmonitor</code>
# Monitor the logs: <code>journalctl -u voipmonitor -f</code>
# Wait for schema/partition creation to complete

If you see errors such as <code>Table 'cdr_next_1' doesn't exist</code>, the sensor failed to initialize the schema, usually because of insufficient database privileges or connectivity. Fix database access and restart the sensor so it can finish initialization.
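
If privileges are the problem, an illustrative way to create a dedicated database user with full rights on the VoIPmonitor schema (adjust host, user, and password to your environment):

<syntaxhighlight lang="bash">
# Run on the MySQL/MariaDB server (illustrative)
mysql -u root -p -e "CREATE USER 'voipmonitor'@'%' IDENTIFIED BY 'db_password';
GRANT ALL PRIVILEGES ON voipmonitor.* TO 'voipmonitor'@'%';
FLUSH PRIVILEGES;"
</syntaxhighlight>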

= Deployment Comparison =

{| class="wikitable"
! Model !! Processing !! PCAP storage !! WAN traffic !! GUI connectivity
|-
| Classic Standalone || Remote || Remote || Minimal (MySQL CDRs) || GUI ↔ each sensor
|-
| '''Client/Server (Local Processing)''' || Remote || Remote || Minimal (encrypted CDRs) || '''GUI ↔ central server only'''
|-
| '''Client/Server (Packet Mirroring)''' || Central || Central || High (encrypted full packets) || '''GUI ↔ central server only'''
|}

= Troubleshooting =

== NFS/SSHFS Connectivity ==

Missing CDRs or PCAPs for a specific time period usually indicates a connectivity problem between the probe and the NFS/SSHFS storage server.

{| class="wikitable"
! Symptom !! Likely cause !! Solution
|-
| Data gap for a specific time period || NFS/SSHFS server unreachable || Check the logs for "not responding, timed out"
|-
| Stale file handle errors || NFS server rebooted or export changed || Remount the NFS share
|-
| Connection resets || Network interruption or unstable link || Check network stability and latency
|-
| GUI reports "File not found" || Mount point dismounted || Verify the mount status and remount if needed
|}

<syntaxhighlight lang="bash">
# Check for NFS/SSHFS errors
grep "nfs: server.*not responding" /var/log/syslog
grep "nfs.*timed out" /var/log/syslog
grep "Transport endpoint is not connected" /var/log/syslog

# Verify mount status
mount | grep nfs
mount | grep fuse.sshfs
stat /mnt/voipmonitor/sensor1
</syntaxhighlight>

= See Also =

* [[Sniffer_distributed_architecture|Distributed Architecture: Client-Server Mode]] - Detailed client/server configuration
* [[Sniffer_troubleshooting|Sniffer Troubleshooting]] - Diagnostic procedures
* [[Audiocodes_tunneling|AudioCodes Tunneling]] - AudioCodes SBC integration
* [[Tls|TLS/SRTP Decryption]] - Encrypted traffic monitoring
* [[Cloud|Cloud Service Configuration]] - Cloud deployment specifics
* [[Scaling|Scaling and Performance Tuning]] - Performance optimization

= AI Summary for RAG =

'''Summary:''' VoIPmonitor deployment guide covering sensor placement (on-host vs dedicated), traffic forwarding methods (SPAN/RSPAN, software tunneling, cloud mirroring), and distributed architectures. Key traffic forwarding options: hardware port mirroring (physical/VMware switches), software tunnels (GRE, ERSPAN, TZSP, VXLAN, HEP, AudioCodes, IPFIX), and cloud provider services (GCP Packet Mirroring, AWS Traffic Mirroring, Azure Virtual Network TAP). CRITICAL HEP LIMITATION: VoIPmonitor does NOT use the HEP correlation ID (captureNodeID) - SIP and RTP from different HEP sources will NOT be correlated (feature request VS-1703, no workaround). HEP3 packets with port 0 require adding port 0 to the sipport directive. Cloud mirroring requires BIDIRECTIONAL capture (ingress and egress) or CDRs will be incomplete. Distributed architectures: Classic standalone (each sensor writes to the central DB, GUI connects to each sensor) vs Modern Client/Server (recommended, encrypted TCP/60024 channel, GUI connects only to the central server). Client/Server modes: Local Processing (packetbuffer_sender=no, CDRs only, PCAPs remain remote) vs Packet Mirroring (packetbuffer_sender=yes, full packets sent to the central server). Alternative for blocked TCP/5029: mount remote spools via NFS/SSHFS and configure multiple paths in the GUI "Sniffer data path" setting. NFS troubleshooting: check for "not responding, timed out" in logs, verify mount status, use hard,nofail,tcp mount options. Critical requirement: NTP synchronization across all servers.

'''Keywords:''' deployment, topology, on-host, dedicated sensor, SPAN, RSPAN, port mirroring, VMware, vSwitch, dvSwitch, tunneling, GRE, ERSPAN, TZSP, VXLAN, HEP, HEP correlation ID, captureNodeID, VS-1703, HEP port 0, sipport, AudioCodes, IPFIX, cloud mirroring, GCP, AWS, Azure, Packet Mirroring, Traffic Mirroring, ingress, egress, bidirectional, client server, packetbuffer_sender, local processing, packet mirroring, TCP 60024, TCP 5029, NFS, SSHFS, sniffer data path, NTP, time synchronization, id_sensor, cdr_partition

'''Key Questions:'''
* Should I install VoIPmonitor on my PBX or use a dedicated sensor?
* How do I configure port mirroring (SPAN) for VoIPmonitor?
* How do I configure VMware/ESXi virtual switch mirroring?
* What software tunneling protocols does VoIPmonitor support?
* How do I configure HEP (Homer Encapsulation Protocol)?
* Does VoIPmonitor use the HEP correlation ID to correlate SIP and RTP?
* Why are SIP and RTP from different HEP sources not correlated?
* How do I capture HEP3 packets with port 0?
* How do I configure cloud packet mirroring (GCP/AWS/Azure)?
* Why do I get incomplete CDRs with cloud mirroring?
* What is the difference between classic and client/server deployment?
* What is the difference between local processing and packet mirroring mode?
* How do I access PCAPs if TCP/5029 is blocked?
* How do I configure NFS/SSHFS for remote spool access?
* How do I troubleshoot missing data with NFS mounts?
* What firewall ports are required for client/server mode?
* Why is NTP important for distributed VoIPmonitor?
This guide covers VoIPmonitor deployment options: where to install the sensor, how to forward traffic, and distributed architectures for multi-site monitoring.

= Sensor Deployment Options =

== On-Host Capture ==

Install the sensor directly on the same Linux server as your PBX/SBC.

{| class="wikitable"
! Pros !! Cons
|-
| No extra hardware, network changes, or port mirroring required || Adds CPU, memory, and disk I/O load to the production voice server
|-
| Simplest setup || Not suitable if resources are critical
|}

{{Note|1=The VoIPmonitor sensor runs '''exclusively on Linux'''. For Windows-based PBXs (e.g., 3CX Windows edition), you must use a dedicated Linux sensor with traffic mirroring.}}

== Dedicated Sensor ==

A separate Linux server runs only VoIPmonitor. '''Recommended for production environments''' as it isolates monitoring from voice platform resources.

'''When Required:'''
* Windows-based PBXs
* Limited CPU/RAM/disk I/O on the PBX server
* Zero monitoring impact needed
* Centralized capture from multiple sites
= Traffic Forwarding Methods =

When using a dedicated sensor, you must forward traffic to it using one of these methods.

== Hardware Port Mirroring (SPAN/RSPAN) ==

Physical or virtual switches copy traffic from the source port(s) to a monitoring port.

=== Physical Switch ===

Configure your switch to mirror traffic from the PBX/SBC ports to the sensor's port. Consult your switch documentation for the specific commands.
<pre>
# /etc/voipmonitor.conf
interface = eth0
sipport = 5060
savertp = yes
</pre>
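For illustration only, a SPAN session on a Cisco IOS switch might look like the sketch below; the interface names are placeholders and other vendors use their own syntax for the same feature (often called port mirroring):

<pre>
! Hypothetical Cisco IOS example - adjust interface names to your environment
monitor session 1 source interface GigabitEthernet1/0/10 both
monitor session 1 destination interface GigabitEthernet1/0/24
</pre>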
=== VMware/ESXi Virtual Switch ===

For virtualized environments, VMware provides port mirroring at the virtual switch level.

'''Standard vSwitch:'''
# In the vSphere Client, navigate to the ESXi host
# Select the virtual switch → Properties/Edit Settings → enable Port Mirroring
# Set the source (SBC VM) and destination (VoIPmonitor VM) ports

'''Distributed vSwitch:'''
# In the vSphere Web Client → Networking → select the distributed switch
# Configure tab → Port Mirroring → Create mirroring session
# Specify the source/destination ports and enable the session

{{Note|1=Distributed switch mirroring can span multiple ESXi hosts within a cluster.}}
=== Multiple VoIP Platforms ===

Monitor multiple platforms (e.g., Mitel + FreeSWITCH) with a single sensor by mirroring multiple source ports to one destination port.

'''GUI differentiation:'''
* Filter by IP address ranges
* Filter by number prefixes
* Use separate sensors with unique <code>id_sensor</code> values

{{Warning|1='''Critical:''' When sniffing from multiple mirrored sources, packets may arrive as duplicates. Add <code>auto_enable_use_blocks = yes</code> to <code>voipmonitor.conf</code> to enable automatic deduplication. See [[Sniffer_configuration]] for details.}}
== Software-based Tunneling ==

When hardware mirroring is unavailable, use software tunneling to encapsulate and forward packets.

{| class="wikitable"
! Protocol !! Configuration Parameter !! Notes
|-
| IP-in-IP, GRE, ERSPAN || Built-in (auto-detected) || No additional config needed
|-
| TZSP (MikroTik) || <code>udp_port_tzsp = 37008</code> ||
|-
| L2TP || <code>udp_port_l2tp = 1701</code> ||
|-
| VXLAN || <code>udp_port_vxlan = 4789</code> || Common in cloud environments
|-
| AudioCodes || <code>udp_port_audiocodes = 925</code> || See [[AudioCodes Tunneling]]
|-
| IPFIX (Oracle SBCs) || <code>ipfix*</code> options || Enable the <code>ipfix</code> options in the config
|}
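As a worked example of the TZSP path, the sensor only needs the listening port enabled while the MikroTik router streams sniffed packets to it. The RouterOS commands below are a sketch; the parameter names (<code>streaming-enabled</code>, <code>streaming-server</code>, <code>filter-interface</code>) should be verified against your RouterOS version:

<pre>
# /etc/voipmonitor.conf (sensor side)
udp_port_tzsp = 37008

# MikroTik RouterOS (source side) - sketch, adjust to your RouterOS version
/tool sniffer set streaming-enabled=yes streaming-server=10.224.0.50 filter-interface=ether1
/tool sniffer start
</pre>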
=== HEP (Homer Encapsulation Protocol) ===

HEP is a lightweight protocol for mirroring VoIP packets, supported by Kamailio, OpenSIPS, FreeSWITCH, and many SBCs.

<pre>
# /etc/voipmonitor.conf
hep = yes
hep_bind_port = 9060
hep_bind_udp = yes
# Optional: hep_kamailio_protocol_id_fix = yes
</pre>

'''Known Limitations:'''

{{Warning|1='''HEP Correlation ID Not Supported:''' VoIPmonitor does NOT use the HEP correlation ID (captureNodeID) to correlate SIP and RTP packets. If SIP and RTP arrive from different HEP sources, they will NOT be correlated into a single CDR. VoIPmonitor correlates using the standard SIP Call-ID, To/From tags, and RTP SSRC fields only. Feature request VS-1703 has been logged, but there is currently no workaround.}}

* '''HEP Timestamp:''' VoIPmonitor uses the HEP timestamp field. If the source has an unsynchronized clock, call timestamps will be incorrect. There is no option to ignore HEP timestamps.
* '''HEP3 with Port 0:''' Not captured by default; add port 0 to the <code>sipport</code> directive, as shown below.
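A minimal sketch of that change, assuming <code>sipport</code> is listed once per port (keep your existing SIP ports in place):

<pre>
# /etc/voipmonitor.conf
sipport = 5060
# Also treat port 0 as SIP so HEP3 packets carrying port 0 are captured
sipport = 0
</pre>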
== Cloud Packet Mirroring ==

Cloud providers offer native mirroring services that use VXLAN or GRE encapsulation.

{| class="wikitable"
! Provider !! Service Name
|-
| Google Cloud || Packet Mirroring
|-
| AWS || Traffic Mirroring
|-
| Azure || Virtual Network TAP
|}

'''Configuration Steps:'''
# Create a VoIPmonitor sensor VM in your cloud environment
# Create a mirroring policy: select the source VMs/subnets and set the destination to the sensor VM
# '''Critical:''' capture traffic in BOTH directions (INGRESS and EGRESS)
# Configure the sensor:
<pre>
udp_port_vxlan = 4789
interface = eth0
sipport = 5060
</pre>

{{Warning|1=Capturing only ingress or only egress results in incomplete CDRs and broken call data.}}

'''Best Practices:'''
* Filter at the source to forward only SIP/RTP ports (see the AWS sketch after this list)
* Monitor NIC bandwidth limits
* Account for VXLAN overhead (~50 bytes); jumbo frames may be needed
* Ensure NTP sync across all VMs
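For example, on AWS the "filter at source" and "both directions" requirements map to traffic-mirror filter rules. The commands below are only a sketch with placeholder resource IDs; mirror targets and sessions must be created separately, and your RTP port range needs equivalent rules:

<pre>
# Sketch only - placeholder IDs; consult AWS docs for targets/sessions and add RTP rules
aws ec2 create-traffic-mirror-filter --description "voipmonitor-sip"

# Mirror SIP over UDP/5060 in BOTH directions (ingress and egress rules)
aws ec2 create-traffic-mirror-filter-rule --traffic-mirror-filter-id tmf-xxxxxxxx \
    --traffic-direction ingress --rule-number 100 --rule-action accept --protocol 17 \
    --destination-port-range FromPort=5060,ToPort=5060 \
    --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0

aws ec2 create-traffic-mirror-filter-rule --traffic-mirror-filter-id tmf-xxxxxxxx \
    --traffic-direction egress --rule-number 100 --rule-action accept --protocol 17 \
    --source-port-range FromPort=5060,ToPort=5060 \
    --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0
</pre>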
'''Alternative:''' Consider the Client/Server architecture with on-host sensors instead of cloud mirroring for better performance.
== Pre-Deployment Verification ==

For complex setups (RSPAN, ERSPAN, proprietary SBCs), verify compatibility before production deployment:

# Configure test mirroring with a subset of traffic
# Capture test calls with tcpdump (widen or drop the <code>port 5060</code> filter if you also want to verify RTP): <code>sudo tcpdump -i eth0 -s0 port 5060 -w /tmp/test.pcap</code>
# Verify that the pcap contains SIP and RTP: <code>tshark -r /tmp/test.pcap -Y "sip || rtp"</code>
# Submit the pcap to VoIPmonitor support together with hardware/configuration details
= Distributed Architectures =

For multi-site monitoring, sensors can be deployed in various configurations.

== Classic Mode: Standalone Sensors ==

Each sensor operates independently:
* Processes packets and stores PCAPs locally
* Connects directly to the central MySQL database to write CDRs
* The GUI needs network access to each sensor's TCP/5029 port for PCAP retrieval
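A minimal sketch of a classic standalone remote sensor configuration, assuming the central MySQL server is at 10.224.0.201 (all values are illustrative):

<pre>
# /etc/voipmonitor.conf on the remote sensor (classic standalone mode)
id_sensor = 3                      # unique per sensor
interface = eth0
sipport = 5060
# The sensor writes CDRs directly to the central database
mysqlhost = 10.224.0.201
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
# PCAPs stay on this sensor; the GUI fetches them via TCP/5029
spooldir = /var/spool/voipmonitor
</pre>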
=== Alternative: NFS/SSHFS Mounting ===

If TCP/5029 access is blocked, mount the remote spool directories on the GUI server:

<pre>
# NFS mount
sudo mount -t nfs 10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1

# SSHFS mount
sshfs voipmonitor@10.224.0.101:/var/spool/voipmonitor /mnt/voipmonitor/sensor1
</pre>

Configure the GUI under '''Settings > System Configuration > Sniffer data path''':

<pre>
/var/spool/voipmonitor:/mnt/voipmonitor/sensor1:/mnt/voipmonitor/sensor2
</pre>

{{Tip|1=For NFS, use the <code>hard,nofail,tcp</code> mount options for reliability.}}
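A persistent mount sketch via <code>/etc/fstab</code> with those options (the host and paths reuse the illustrative values above):

<pre>
# /etc/fstab on the GUI server
10.224.0.101:/var/spool/voipmonitor  /mnt/voipmonitor/sensor1  nfs  hard,nofail,tcp  0  0
</pre>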
== Modern Mode: Client/Server (v20+) - Recommended ==

A secure, encrypted TCP channel connects the remote sensors to the central server. The GUI communicates only with the central server.

{| class="wikitable"
! Mode !! Processing !! PCAP Storage !! WAN Traffic !! Best For
|-
| Local Processing (<code>packetbuffer_sender = no</code>) || Remote || Remote || Low (CDRs only) || Limited WAN bandwidth
|-
| Packet Mirroring (<code>packetbuffer_sender = yes</code>) || Central || Central || High (full packets) || Low-resource remote sites
|}

For detailed configuration, see [[Distributed Architecture: Client-Server Mode]].
'''Quick Start - Remote Sensor (Local Processing):'''
<pre>
id_sensor = 2
server_destination = 10.224.0.250
server_destination_port = 60024
server_password = your_strong_password
packetbuffer_sender = no
interface = eth0
sipport = 5060
# No MySQL credentials needed - the central server writes to the DB
</pre>

'''Quick Start - Central Server:'''
<pre>
server_bind = 0.0.0.0
server_bind_port = 60024
server_password = your_strong_password
mysqlhost = 10.224.0.201
mysqldb = voipmonitor
mysqluser = voipmonitor
mysqlpassword = db_password
cdr_partition = yes
interface =            # leave empty - do not sniff locally
</pre>
== Firewall Requirements ==

{| class="wikitable"
! Deployment !! Port !! Direction !! Purpose
|-
| Client/Server || TCP/60024 || Remote → Central || Encrypted CDR/packet channel
|-
| Client/Server || TCP/5029 || Central → Remote || On-demand PCAP fetch (Local Processing mode)
|-
| GUI Access || TCP/5029 || GUI → Central || Management/API
|-
| Cloud Mode || TCP/60023 || Sensor → cloud.voipmonitor.org || Cloud service connection
|}
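For instance, opening the client/server channel on a firewalld-based central server (a sketch; use the equivalent iptables/nftables or cloud security-group rules in your environment):

<pre>
# On the central server: allow remote sensors to reach TCP/60024
sudo firewall-cmd --permanent --add-port=60024/tcp
sudo firewall-cmd --reload
</pre>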
= Configuration Notes =

== Critical Parameters ==

{| class="wikitable"
! Parameter !! Description !! Notes
|-
| <code>id_sensor</code> || Unique sensor identifier (1-65535) || Mandatory in distributed deployments
|-
| <code>cdr_partition</code> || Enable daily CDR table partitions || Enable on the server that writes to the DB
|-
| <code>mysqlloadconfig</code> || Load configuration from the database || Enable on the central server only
|-
| <code>interface</code> || Capture interface || Use a specific NIC or <code>any</code>
|}
== Time Synchronization ==

{{Warning|1=Accurate NTP synchronization is critical for correlating call legs across sensors. All servers (GUI, DB, sensors) must run an NTP client (chrony or ntpd).}}
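A quick way to confirm the clock is actually synchronized on each host (assuming chrony; <code>timedatectl</code> works regardless of which NTP client is installed):

<pre>
# Check synchronization status
chronyc tracking
timedatectl status | grep -i synchronized
</pre>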
== First Startup ==

On the first start against an empty database:

# Start the service: <code>systemctl start voipmonitor</code>
# Monitor the logs: <code>journalctl -u voipmonitor -f</code>
# Wait for the schema/partition creation to complete

If you see <code>Table 'cdr_next_1' doesn't exist</code> errors, check DB connectivity and privileges.
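A simple connectivity and privilege check from the sensor host (a sketch; substitute your own DB host and credentials):

<pre>
# Verify the sensor's DB account can connect and has rights on the voipmonitor schema
mysql -h 10.224.0.201 -u voipmonitor -p -e "SHOW GRANTS; USE voipmonitor; SHOW TABLES LIKE 'cdr%';"
</pre>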
= Deployment Comparison =

{| class="wikitable"
! Model !! Processing !! PCAP Storage !! WAN Traffic !! GUI Connectivity
|-
| Classic Standalone || Remote || Remote || Minimal (MySQL CDRs) || GUI ↔ each sensor
|-
| Client/Server (Local Processing) || Remote || Remote || Minimal (encrypted CDRs) || GUI ↔ central only
|-
| Client/Server (Packet Mirroring) || Central || Central || High (encrypted packets) || GUI ↔ central only
|}
= Troubleshooting =

== NFS/SSHFS Connectivity ==

Missing data for specific time periods usually indicates connectivity issues with the storage server.

{| class="wikitable"
! Symptom !! Likely Cause !! Solution
|-
| Data gap for a time period || NFS/SSHFS server unreachable || Check logs for "not responding, timed out"
|-
| Stale file handle || Server rebooted or export changed || Remount the NFS share
|-
| Connection resets || Network interruption || Check network stability
|-
| GUI shows "File not found" || Mount point unmounted || Verify mount status (<code>mount | grep nfs</code>)
|}

<pre>
# Check for NFS errors
grep "nfs: server.*not responding" /var/log/syslog
grep "nfs.*timed out" /var/log/syslog

# Verify mount status
mount | grep nfs
stat /mnt/voipmonitor/sensor1
</pre>
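A remount sketch for the stale-file-handle case (paths follow the earlier examples; <code>mount -a</code> re-reads /etc/fstab):

<pre>
# Force-unmount the stale share, then remount everything defined in /etc/fstab
sudo umount -f -l /mnt/voipmonitor/sensor1
sudo mount -a
</pre>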
= See Also =

* [[Distributed Architecture: Client-Server Mode]]
* [[Sniffer_configuration]]
* [[AudioCodes Tunneling]]
= AI Summary for RAG =

'''Summary:''' VoIPmonitor deployment guide covering sensor placement (on-host vs dedicated), traffic forwarding methods (SPAN/RSPAN, software tunneling, cloud mirroring), and distributed architectures. Key traffic forwarding options: hardware port mirroring (physical/VMware switches), software tunnels (GRE, ERSPAN, TZSP, VXLAN, HEP, AudioCodes, IPFIX), and cloud provider services (GCP Packet Mirroring, AWS Traffic Mirroring, Azure Virtual Network TAP). CRITICAL HEP LIMITATION: VoIPmonitor does NOT use the HEP correlation ID (captureNodeID) - SIP and RTP from different HEP sources will NOT be correlated (feature request VS-1703, no workaround). HEP3 packets with port 0 require adding port 0 to the sipport directive. Cloud mirroring requires BIDIRECTIONAL capture (ingress+egress) or CDRs will be incomplete. Distributed architectures: Classic standalone (each sensor writes to the central DB, GUI connects to each sensor) vs Modern Client/Server (recommended, encrypted TCP/60024 channel, GUI connects only to the central server). Client/Server modes: Local Processing (packetbuffer_sender=no, CDRs only, PCAPs remain remote) vs Packet Mirroring (packetbuffer_sender=yes, full packets sent to the central server). Alternative for blocked TCP/5029: mount remote spools via NFS/SSHFS and configure multiple paths in the GUI "Sniffer data path" setting. NFS troubleshooting: check for "not responding, timed out" in logs, verify mount status, use hard,nofail,tcp mount options. Critical requirement: NTP sync across all servers.

'''Keywords:''' deployment, topology, on-host, dedicated sensor, SPAN, RSPAN, port mirroring, VMware, vSwitch, dvSwitch, tunneling, GRE, ERSPAN, TZSP, VXLAN, HEP, HEP correlation ID, captureNodeID, VS-1703, HEP port 0, sipport, AudioCodes, IPFIX, cloud mirroring, GCP, AWS, Azure, Packet Mirroring, Traffic Mirroring, ingress, egress, bidirectional, client server, packetbuffer_sender, local processing, packet mirroring, TCP 60024, TCP 5029, NFS, SSHFS, sniffer data path, NTP, time synchronization, id_sensor, cdr_partition

'''Key Questions:'''
* Should I install VoIPmonitor on my PBX or use a dedicated sensor?
* How do I configure port mirroring (SPAN) for VoIPmonitor?
* How do I configure VMware/ESXi virtual switch mirroring?
* What software tunneling protocols does VoIPmonitor support?
* How do I configure HEP (Homer Encapsulation Protocol)?
* Does VoIPmonitor use HEP correlation ID to correlate SIP and RTP?
* Why are SIP and RTP from different HEP sources not correlated?
* How do I capture HEP3 packets with port 0?
* How do I configure cloud packet mirroring (GCP/AWS/Azure)?
* Why do I get incomplete CDRs with cloud mirroring?
* What is the difference between classic and client/server deployment?
* What is the difference between local processing and packet mirroring mode?
* How do I access PCAPs if TCP/5029 is blocked?
* How do I configure NFS/SSHFS for remote spool access?
* How do I troubleshoot missing data with NFS mounts?
* What firewall ports are required for client/server mode?
* Why is NTP important for distributed VoIPmonitor?