= Alerts & Reports =


Alerts & Reports generate email notifications based on QoS thresholds, SIP error conditions, or sensor health. The system includes daily reports, ad hoc reports, and stores all generated alerts in history for later review.
 
== Overview ==
 
The alert system monitors call quality and SIP signaling in real-time, triggering notifications when configured thresholds are exceeded.


<kroki lang="plantuml">
skinparam shadowing false
skinparam defaultFontName Arial

rectangle "VoIPmonitor\nSensor" as sensor
database "MySQL\nDatabase" as db
rectangle "Cron Job\n(every minute)" as cron
rectangle "Alert\nProcessor" as processor
rectangle "MTA\n(Postfix/Exim)" as mta
actor "Admin" as admin

sensor --> db : CDRs with\nQoS metrics
cron --> processor : Trigger
processor --> db : Query alerts\n& CDRs
processor --> mta : Send email
mta --> admin : Alert notification
@enduml
</kroki>


== Email Configuration Prerequisites ==

Alerts are sent using PHP's <code>mail()</code> function, which relies on the server's Mail Transfer Agent (MTA) such as Postfix, Exim, or Sendmail. Configure your MTA according to your Linux distribution documentation.

=== Setting the Email "From" Address ===

To configure the "From" address that appears in outgoing alert emails, navigate to '''GUI > Settings > System Configuration > Email / HTTP Referer''' and set '''DEFAULT_EMAIL_FROM''' to the desired sender address (e.g., <code>alerts@yourcompany.com</code>).

This setting applies to all automated emails sent by VoIPmonitor, including:
* QoS alerts (RTP, SIP response, sensors)
* Daily reports
* License notifications

=== Setting Up the Cron Job ===

Alert processing requires a cron job that runs every minute:

<syntaxhighlight lang="bash">
# Add to /etc/crontab (adjust the path to match your GUI installation)
echo "* * * * * root php /var/www/html/php/run.php cron" >> /etc/crontab

# Reload the cron daemon
killall -HUP cron   # Debian/Ubuntu
killall -HUP crond  # RHEL/CentOS
</syntaxhighlight>


== Configure Alerts ==

Email alerts can trigger on SIP protocol events, RTP QoS metrics, or sensor health conditions. Access the alert configuration via '''GUI > Alerts'''.

[[File:alertgrid.png|frame|center|Alert configuration grid]]

=== Alert Types ===

==== RTP Alerts ====

RTP alerts trigger based on voice quality metrics:
* '''MOS''' (Mean Opinion Score) - below threshold
* '''Packet loss''' - percentage exceeded
* '''Jitter''' - variation exceeded
* '''Delay''' (PDV) - latency exceeded
* '''One-way calls''' - answered but one RTP stream missing
* '''Missing RTP''' - answered but both RTP streams missing
 
Configure alerts to trigger when:
* Number of incidents exceeds a set value, OR
* Percentage of CDRs exceeds a threshold
 
[[File:alertrtpform.png|frame|center|RTP alert configuration form]]
 
==== RTP&CDR Alerts ====
 
RTP&CDR alerts combine RTP quality metrics with CDR-based conditions, including Post Dial Delay (PDD). These alerts are useful for monitoring call setup performance and detecting network latency issues.
 
'''Available Conditions:'''
 
In the '''filter-common tab''', you can configure conditions including:
* '''PDD (Post Dial Delay)''' - Time between sending INVITE and receiving final response. Configure with comparison operators like <code>PDD > 5</code> (in seconds) to alert on long call setup delays. This can also detect [[Sniffer_troubleshooting#Routing_Loops|routing loops]] where looping calls continuously retransmit INVITE without receiving responses (PDD will be very large).
 
In the '''base config tab''':
* Set the '''recipient email address''' for alert notifications
* Consider limiting the '''max-lines in body''' to prevent oversized emails when many CDRs match the alert condition

'''Using Filter Templates:'''

For complex CDR conditions, create a filter in '''GUI > CDR''', save it as a template, and then select it from the '''Filter template''' dropdown in the alert configuration.

{{Tip|1=Use filter templates for complex conditions like <code>duration > 14400</code> (calls over 4 hours) or <code>absolute_timeout</code> (truncated recordings).}}
 
==== SIP Response Alerts ====
 
SIP response alerts trigger based on SIP response codes:
* '''Empty response field''': Matches all call attempts per configured filters
* '''Response code 0''': Matches unreplied INVITE requests (no response received). This is useful for detecting [[Sniffer_troubleshooting#Routing_Loops|routing loops]] where calls continuously loop and never receive any SIP response.
* '''Specific codes''': Match exact codes like 404, 503, etc.
 
[[File:alertsipform.png|frame|center|SIP response alert configuration form]]

==== SIP Response vs Last SIP Response ====

There are two different fields for matching SIP responses:

{| class="wikitable"
|-
! Field !! Location !! Supports % Threshold !! Use Case
|-
| '''SIP response''' || GUI > Alerts > SIP Response Alerts || {{Yes}} || Match by numeric code (e.g., 487, 503)
|-
| '''Last sip response''' || GUI > Alerts > Filter common || {{No}} || Match by full text (e.g., "487 Request Terminated")
|}

{{Warning|1=The GUI '''cannot trigger alerts based on percentage of full textual response strings'''. If you need percentage-based triggering for SIP response codes, use the '''SIP response''' numeric field instead.}}

The '''Last sip response''' field supports wildcard patterns (%, %Request Terminated%, %487%) but only triggers based on count thresholds, not percentages.
 
==== Percentage Alerts and the "from all" Checkbox ====
 
SIP response alerts can trigger based either on the '''number of incidents''' or the '''percentage of CDRs''' exceeding a threshold.
 
When setting a percentage threshold (e.g., <code>>10%</code>):
 
* '''"from all" checkbox CHECKED''': The percentage is calculated from '''ALL CDRs in the database''' (not just those matching filters).
* '''"from all" checkbox UNCHECKED''': The percentage is calculated only from CDRs that match your '''common filters''' (IP groups, numbers, etc.). This is the correct setting when monitoring a specific IP group.
 
'''Example: Monitor 503 responses for a specific IP group'''
 
To alert when the percentage of SIP 503 responses from the "Datora" IP group exceeds 10%:
 
1. Navigate to '''GUI > Alerts > filter common''' subtab
2. In '''IP/Number Group''', select your group (e.g., "Datora")
3. '''UNCHECK "from all"''' - this ensures the percentage is calculated only from CDRs involving IPs in the Datora group
4. In the '''Base config''' subtab:
  * Set '''Type''' to '''SIP Response'''
  * Set '''Response code''' to <code>503</code>
  * Set '''Incidents threshold''' to <code>>10%</code>
5. Configure recipient emails and save
 
If you leave "from all" CHECKED, the alert would calculate the 503 percentage across ALL CDRs in your database, which defeats the purpose of monitoring a specific IP group.
 
==== Detecting 408 Request Timeout Failures ====
 
A '''408 Request Timeout''' response occurs when the caller sends multiple INVITE retransmissions and receives no final response. This is useful for alerting on calls that timeout after the UAS (User Agent Server) sends a provisional response like '''100 Trying''' but then fails to send any further responses.
 
'''Use Cases:'''
* Detect failing PBX or SBC (Session Border Controller) instances that accept calls but stop processing
* Monitor network failures where SIP messages stop flowing after initial dialog establishment
* Identify servers that become unresponsive mid-call setup
 
'''Configuration:'''
1. Navigate to '''GUI > Alerts'''
2. Create new alert with type '''SIP Response'''
3. Set '''Response code''' to <code>408</code>
4. Optionally add Common Filters (IP addresses, numbers) to narrow scope
5. Save the alert
 
'''Understanding the Difference Between Response Code 0 and 408:'''
* '''Response code 0''': Matches calls that received absolutely no response (not even a 100 Trying). These are network or reachability issues.
* '''Response code 408''': Matches calls that received at least one provisional response (like 100 Trying) but eventually timed out. These indicate a server or application layer problem where the UAS stopped responding after initial acknowledgment.
 
Note: When a call times out with a 408 response, the CDR stores <code>408</code> as the Last SIP Response. Alerting on 408 will catch all call setup timeouts, including those where a 100 Trying was initially received.
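
If you want to gauge how common the two cases are before creating the alerts, a quick database spot-check such as the following can help. This is a sketch only; it assumes the <code>lastSIPresponseNum</code> column and should be adapted to your retention and filters.

<syntaxhighlight lang="bash">
# Count unanswered INVITEs (response 0) vs. 408 timeouts per destination IP over the last hour
mysql voipmonitor -e "
  SELECT INET_NTOA(sipcalledip) AS destination,
         SUM(lastSIPresponseNum = 0)   AS no_response,
         SUM(lastSIPresponseNum = 408) AS timeouts_408
  FROM cdr
  WHERE calldate >= NOW() - INTERVAL 1 HOUR
  GROUP BY sipcalledip
  ORDER BY timeouts_408 DESC
  LIMIT 20;"
</syntaxhighlight>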
 
==== Sensors Alerts ====
 
Sensors alerts monitor the health of VoIPmonitor probes and sniffer instances. This is the most reliable method to check if remote sensors are online and actively monitoring traffic.
 
Unlike simple network port monitoring (which may show a port as open even if the process is frozen or unresponsive), sensors alerts verify that the sensor instance is actively communicating with the VoIPmonitor GUI server.
 
;Setup:
:# Configure sensors in '''Settings > Sensors'''
:# Create a sensors alert to be notified when a probe goes offline or becomes unresponsive
 
==== Sensor Health Monitoring Conditions ====


In addition to detecting offline sensors, you can configure sensors alert conditions to monitor sensor performance issues:

<br>
'''Conditions you can configure:'''

* '''Old CDR''' - Alerts when the sensor has not written CDRs to the database recently. This indicates the sensor is either not capturing calls, or there is a database insertion bottleneck preventing CDRs from being committed.

* '''Big SQL queue stat''' - Alerts when the SQL cache queue is growing. The SQL queue represents CDRs waiting in memory to be written to the database. A growing queue indicates the database cannot keep up with write operations.

<br>
'''SQL Queue Threshold Guidance:'''

The SQL queue is measured in the number of cache files waiting. A healthy sensor should maintain a low SQL count.
 
* '''Normal operation''': SQL queue should remain near 0 during all traffic conditions.
* '''Warning level''': SQL queue above 20 files indicates a significant delay between packet capture and database insertion.
* '''Critical level''': SQL queue above 100 files indicates severe database performance issues requiring immediate attention.
 
When configuring a "Big SQL queue stat" alert, setting the threshold to 20 files provides early warning before the problem escalates to critical levels.
 
<br>
'''Configuring Alert Actions:'''
 
When a sensor health alert triggers, you can configure the following actions:
 
* '''Email notification''' - Send alerts to administrators via email.
* '''External script execution''' - Execute a custom script with arguments about the triggering sensor and condition. This enables integration with monitoring systems like Nagios or Zabbix, or automated remediation workflows.
 
The external script receives information about which specific sensor triggered the alert and which health condition was violated, allowing you to build automated responses tailored to the type of failure.
 
==== SIP REGISTER RRD Beta Alerts ====
 
The '''SIP REGISTER RRD beta''' alert type monitors SIP REGISTER response times and alerts when REGISTER packets do not receive a response within a specified threshold (in milliseconds). This is useful for detecting network latency issues, packet loss, or failing switches that cause SIP retransmissions.
 
This alert serves as an effective proxy to monitor for registration issues, as REGISTER retransmissions often indicate problems with network connectivity or unresponsive SIP servers.
 
;Configuration:
:# Navigate to '''GUI > Alerts'''
:# Create a new alert with type '''SIP REGISTER RRD beta'''
:# Set the response time threshold in milliseconds (e.g., alert if REGISTER does not receive a response within 2000ms)
:# Configure recipient email addresses
:# Save the alert configuration
 
The system monitors REGISTER packets and triggers an alert when responses exceed the configured threshold, indicating potential SIP registration failures or network issues.
 
==== SIP Failed Register Beta Alerts ====
 
The '''SIP failed Register (beta)''' alert type detects SIP registration floods from a single IP address using multiple different usernames. This is a common attack pattern used in brute-force or credential-stuffing attacks where the attacker tries many different usernames from one source IP to find valid credentials.
 
Unlike the basic "SIP REGISTER flood" alert (which counts total registration attempts regardless of success/failure or username), this alert specifically monitors '''failed''' registrations and aggregates them by source IP address to detect patterns that indicate credential-guessing attacks.
 
;Use Cases:
:* Detect credential-stuffing attacks (one IP trying many different usernames in brute-force attempts)
:* Identify botnets attempting account takeovers by cycling through username lists
:* Monitor for registration abuse patterns that may indicate dictionary attacks
:* Alert administrators when an IP shows signs of attempting unauthorized access to SIP accounts
 
;How It Works:
 
This alert triggers when the total number of '''failed''' SIP registrations from any single IP address exceeds a specified threshold within a configured time interval. By focusing on failed registrations and grouping by source IP, it catches floods that use a variety of usernames from the same source.
 
;Configuration:
:# Navigate to '''GUI > Alerts'''
:# Create a new alert with type '''SIP failed Register (beta)'''
:# Set the '''threshold''' - maximum number of failed registrations allowed from a single IP (e.g., 20 failed registrations)
:# Set the '''interval''' - time window in seconds to evaluate (e.g., 60 seconds to check for 20 failed registrations in one minute)
:# Configure recipient email addresses
:# Optionally add Common Filters (IP addresses, numbers) to narrow scope
:# Save the alert configuration
 
;Example Scenario:
 
An attacker attempts to brute-force SIP credentials by sending REGISTER requests with 50 different usernames from IP 203.0.113.50 within 60 seconds. All 50 attempts fail because the credentials are invalid.
 
If you configure the alert with threshold=20 and interval=60 seconds, the system will:
# Detect 50 failed registrations from 203.0.113.50 within 60 seconds
# Compare (50 failed) > (threshold 20) = TRUE
# Trigger an alert notifying administrators about the potential registration flood attack from IP 203.0.113.50
 
;Comparison with Other Registration Alerts:


{| class="wikitable"
{| class="wikitable"
|-
|-
! Alert Type !! What It Monitors !! Attack Detection
! Response Code !! Meaning
|-
|-
| '''SIP failed Register (beta)''' || Failed registrations grouped by IP || Brute-force, credential stuffing
| Empty || All call attempts per filters
|-
|-
| SIP REGISTER RRD beta || REGISTER response times || Network latency, packet loss
| '''0''' || No response received (routing loops)
|-
|-
| multiple register (beta) || Same account from multiple IPs || Compromised credentials, misuse
| '''408''' || Timeout after provisional response (server unresponsive)
|-
|-
| Realtime REGISTER flood || Total REGISTER attempts (any status) from IP || Flood/spam of any registration
| Specific || Exact codes (404, 503, etc.)
|}
|}


The '''SIP failed Register (beta)''' alert is specifically optimized to detect attacks that use many different usernames from a single IP, which is the hallmark of credential-guessing or dictionary attacks. Use this alert in combination with other anti-fraud rules like [[Anti-fraud|Anti-Fraud Rules]] for comprehensive registration attack detection.
==== "from all" Checkbox (Percentage Thresholds) ====
 
{{Warning|1=This setting is critical for IP group monitoring.}}
 
* '''CHECKED''': % calculated from ALL CDRs in database
* '''UNCHECKED''': % calculated only from filtered CDRs (correct for specific IP groups)


==== CDR Trends Alerts ====


The '''CDR trends''' alert type enables trend-based monitoring and alerting on aggregated CDR statistics, including ASR (Answer Seizure Ratio) and other metrics. This alert type compares current performance against historical baselines and triggers notifications when metrics deviate beyond configurable thresholds.


'''Use Cases:'''
* Monitor ASR drops or increases over time windows
* Detect sudden changes in call volume patterns
* Compare current hour/day/week against historical data
* Identify quality degradation trends before they become critical


'''Configuration Parameters:'''


{| class="wikitable"
{| class="wikitable"
|-
|-
! Parameter !! Description !! Example Values
! Field !! Location !! Supports % Threshold !! Use Case
|-
| '''Type''' || The metric to monitor for trend changes || ASR (Answer Seizure Ratio), Call count, ACD, etc.
|-
| '''Offset''' || Historical baseline period to compare against || 1 week, 1 day, 1 month
|-
| '''Range''' || Current time window to evaluate || 1 hour, 1 day, 1 week
|-
| '''Method''' || Calculation method for trend comparison || Deviation (detects % change), Threshold (absolute value)
|-
|-
| '''Limit Inc./Limit Dec.''' || Percentage threshold for triggering alerts || 10%, 15%, 20%
| '''SIP response''' || GUI > Alerts > SIP Response Alerts || {{Yes}} || Match by numeric code (e.g., 487, 503)
|-
|-
| '''IP whitelist''' || Optional filter to limit scope to specific IPs/agents || Source IP addresses or user agents
| '''Last sip response''' || GUI > Alerts > Filter common || {{No}} || Match by full text (e.g., "487 Request Terminated")
|}
|}


'''Example Configuration - ASR Trend Alert:'''


To receive an alert when ASR drops by 10% compared to the previous week:
1. Navigate to '''GUI > Alerts'''
2. Create new alert with type '''CDR trends'''
3. Configure parameters:
* '''Type:''' ASR
* '''Offset:''' 1 week (compare current period to previous week)
* '''Range:''' 1 hour (evaluate hourly)
* '''Method:''' Deviation (percentage-based comparison)
* '''Limit Dec.:''' 10% (trigger when drop exceeds 10%)
* '''IP whitelist:''' (optional) specify specific test user agents or IP addresses
4. Set recipient email addresses
5. Save the alert configuration

'''How Deviation Method Works:'''

When using the '''Deviation''' method, the system calculates:
<syntaxhighlight lang="text">
Deviation % = ((Current Value - Historical Baseline) / Historical Baseline) * 100
</syntaxhighlight>
 
* '''Limit Inc.''' triggers when Deviation % > threshold (e.g., ASR increased by 15%)
* '''Limit Dec.''' triggers when Deviation % < -threshold (e.g., ASR decreased by 10%)
 
A 10% ASR drop means current ASR is 90% of the historical baseline.
 
'''Understanding Offset vs Range:'''
 
* '''Offset''' defines the historical reference period (e.g., "1 week" means "same hour last week")
* '''Range''' defines the current evaluation window (e.g., "1 hour" means "current hour's ASR")
 
For example, with Offset=1 week and Range=1 hour, the system compares ASR for "today 09:00-10:00" against "last week 09:00-10:00".
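
As a worked example of the calculation above (the numbers are illustrative only, not output from the GUI):

<syntaxhighlight lang="bash">
# Compare the current hour's ASR against the same hour one week ago (Offset = 1 week, Range = 1 hour)
BASELINE_ASR=62.0   # ASR last week, 09:00-10:00
CURRENT_ASR=54.5    # ASR today, 09:00-10:00
LIMIT_DEC=10        # Limit Dec. = 10 %

DEVIATION=$(awk -v c="$CURRENT_ASR" -v b="$BASELINE_ASR" 'BEGIN { printf "%.1f", (c - b) / b * 100 }')
echo "Deviation: ${DEVIATION}%"   # -12.1% -> drop larger than 10%, so the alert would fire
awk -v d="$DEVIATION" -v l="$LIMIT_DEC" 'BEGIN { exit !(d < -l) }' && echo "ASR drop exceeds ${LIMIT_DEC}% - alert triggers"
</syntaxhighlight>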
 
==== Multiple Register Beta Alerts ====
 
The '''multiple register (beta)''' alert type detects SIP accounts that are registered from multiple different IP addresses. This is useful for identifying potential security issues, configuration errors, or unauthorized use of SIP credentials.
 
This alert specifically finds phone numbers or SIP accounts that have registered from more than one distinct IP address within the monitored timeframe.
 
;Use Cases:
:* Detect SIP account compromise (credential theft leading to registrations from unauthorized IPs)
:* Identify configuration issues where phones are registering from multiple networks unexpectedly
:* Monitor for roaming behavior when multi-IP registration is not expected
:* Audit SIP account usage across distributed environments
 
;Configuration:
:# Navigate to '''GUI > Alerts'''
:# Create a new alert with type '''multiple register (beta)'''
:# Configure recipient email addresses
:# Optionally add Common Filters (IP addresses, numbers) to narrow scope - '''leave filters empty to check ALL SIP numbers/accounts across all customers'''
:# Save the alert configuration
 
;Alert Scope:
:* '''With filters''': Monitors only the specific IP addresses, numbers, or groups defined in the Common Filters section
:* '''Without filters''': Monitors all SIP numbers/accounts across all customers in your system
 
The alert will trigger whenever it detects a SIP account that has registered from multiple distinct IP addresses, providing details about the affected account(s) and the IP addresses observed.

{{Warning|1='''multiple register (beta)''' detects SIMULTANEOUS registrations from multiple IPs (security). For detecting IP changes when a device moves between networks, use a CDR&RTP alert with an external script.}}

==== Alert Output Fields ====

The '''multiple register (beta)''' and other SIP REGISTER alerts output the following fields in email notifications and the GUI:

{| class="wikitable"
|-
! Field !! Source !! Description
|-
| '''username''' || SIP Contact header || The registered user identity
|-
| '''from''' fields || SIP From header || From-number, From-domain extracted from the From header
|-
| '''to''' fields || SIP To header || To-number, To-domain extracted from the To header
|-
| '''lookup name''' || Tools > Prefix Lookup || Custom label if the phone number matches a configured prefix entry
|}

{{Note|1=The '''lookup name''' column displays custom labels from [[Tools#Prefix_Lookup|Prefix Lookup]] when a phone number matches a configured prefix. If no match exists, the field remains empty or shows the raw number.}}
 
==== International Call Alerts (Called Number Prefixes) ====

Monitor calls to international destinations using '''prefix-based matching''' (dialing patterns like 00, +).

{{Note|1=This uses phone number prefix detection, NOT IP geolocation. For GeoIP-based detection, see [[Anti-fraud|Anti-Fraud Rules]].}}

'''Configuration:'''
# '''GUI > Settings > Country prefixes''' - Define international prefixes (00, +), the local country, and minimum digits
# '''GUI > Alerts > Filter common''' - Configure the prefix options:

{| class="wikitable"
|-
! Setting !! Description
|-
| Called number prefixes || Which prefixes trigger the alert (uncheck ALL to match all international destinations)
|-
| Exclude called number || Country codes to exclude (e.g., +44, 0044 for UK)
|-
| Strict for prefixes || Require an international prefix (00/+)
|-
| NANPA || North American Numbering Plan
|}
 
=== Alerts with Multiple Conditions: OR Logic Only ===
 
{{Warning|1=Alerts use '''OR logic''' between multiple conditions. AND logic between alert conditions is '''NOT currently supported'''.}}
 
When you configure an alert with multiple filters or conditions, the alert triggers when ANY ONE of the conditions is met (OR logic). There is no option to require that ALL conditions must be true (AND logic) before triggering.
 
'''Example Scenario:'''
 
You create an alert with two separate alert rules:
* Alert A: Triggers when "CDR age > 600s"
* Alert B: Triggers when "calls > 10"
 
If you expect to receive an alert only when BOTH conditions are true (CDR age is > 600s AND call count > 10), this will not work. The current alert system will trigger if EITHER condition A is met OR condition B is met.
 
'''Current Limitations:'''
 
* No "AND" operator or "operand" parameter exists for combining alert conditions
* Multiple conditions in a single alert are evaluated with OR logic only
* The '''IP/Number Group filter''' and '''Numbers filter''' match against both caller and called fields by default (see [[#Caller_vs_Called_Number_Filtering|Caller vs Called Number Filtering]])
 
'''Workaround for AND Logic:'''
 
Since AND logic between conditions is not supported, use the following approach:
 
If you need to monitor multiple conditions simultaneously:
# Create separate alerts for each condition you want to monitor
# Correlate the alert results manually
# Consider using [[Reports|custom reports]] or external monitoring systems that can aggregate multiple conditions
 
'''Example Workaround:'''
 
To monitor for "CDR age > 600s AND calls > 10":
# Create Alert 1: sensors alert for "Old CDR" threshold > 600s
# Create Alert 2: separate monitoring for call count threshold > 10
# Review both alert histories together to identify when both conditions are met
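
One practical way to approximate AND logic is to attach the same external script to both alerts and let the script decide when both have fired recently. The sketch below is a hypothetical helper: the paths, alert names, and 10-minute window are assumptions to adapt.

<syntaxhighlight lang="bash">
#!/bin/bash
# /usr/local/bin/correlate-alerts.sh - attach as "External script" to BOTH alerts.
# Records which alert fired and only notifies when the other alert also fired
# within the last 10 minutes.
STATE_DIR=/var/tmp/vm-alert-correlate
mkdir -p "$STATE_DIR"

ALERT_NAME="$2"                      # alert name passed by the alert processor
touch "$STATE_DIR/$ALERT_NAME"       # remember that this alert just fired

ALERT_A="Old CDR over 600s"          # names exactly as configured in GUI > Alerts
ALERT_B="Calls over 10"

fired_recently() { [ -n "$(find "$STATE_DIR" -name "$1" -mmin -10 2>/dev/null)" ]; }

if fired_recently "$ALERT_A" && fired_recently "$ALERT_B"; then
    echo "Both conditions met at $(date)" | mail -s "Combined alert: $ALERT_A AND $ALERT_B" admin@example.com
fi
</syntaxhighlight>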
 
'''Feature Request:'''
 
AND logic between alert conditions is a requested enhancement. If this functionality is critical for your use case, submit a feature request describing your specific scenario.
 
=== Common Filters ===
 
All alert types support the following filters:


{| class="wikitable"
{| class="wikitable"
|-
|-
! Filter !! Description
! Setting !! Description
|-
| IP/Number Group || Apply alert to predefined groups (from '''Groups''' menu)
|-
| IP Addresses || Individual IPs or ranges (one per line)
|-
|-
| Numbers || Individual phone numbers or prefixes (one per line)
| Called number prefixes || Which prefixes trigger alert (uncheck ALL for all international)
|-
|-
| Email Group || Send alerts to group-defined email addresses
| Exclude called number || Country codes to exclude (e.g., +44, 0044 for UK)
|-
|-
| Emails || Individual recipient emails (one per line)
| Strict for prefixes || Require international prefix (00/+)
|-
|-
| External script || Path to custom script to execute when alert triggers (see below)
| NANPA || North American Numbering Plan
|}
|}


[[File:alertgroup.png|frame|center|Alert filter configuration]]
 
=== Caller vs Called Number Filtering ===
 
{{Warning|1=The alert system does not currently support creating an alert that only triggers when a number appears as the '''caller''' OR only when it appears as the '''called number'''.}}
 
The IP/Number Group filter and the Numbers filter in alert configurations behave differently from the CDR filter interface:
 
* '''CDR Filter''': Allows you to select "caller&called" mode to search for a number appearing in either field, or use the "Combination" subtab for OR logic between multiple conditions.
 
* '''Alerts''': When you configure an alert with a telephone number group or individual numbers, these numbers will match against '''both caller and called fields by default'''. You cannot create an alert that triggers:
 
** Only when a specific number is the '''caller''' (but not when it is the called number)
** Only when a specific number is the '''called''' (but not when it is the caller number)
 
This is a deliberate design choice for performance reasons. The alert processing system evaluates alerts every minute against potentially millions of CDRs, and implementing complex caller/called separation logic would significantly impact performance.


{{Note|1=For filtering by caller or called direction, consider using '''IP Groups''' with the '''Trunk''' and '''Server''' checkboxes to classify call direction based on IP addresses instead of telephone numbers. See the [[Groups|Groups: IP, Telephone Numbers & Emails]] page for information on call direction classification.}}


== Using External Scripts for Alert Actions ==
 
Beyond email notifications, alerts can execute custom scripts when triggered. This enables integration with third-party systems (webhooks, Datadog, Slack, custom monitoring tools) without sending emails.
 
=== Configuration ===
 
1. Navigate to '''GUI > Alerts'''
2. Create or edit an alert (RTP, SIP Response, Sensors, etc.)
3. In the configuration form, locate the '''External script''' field
4. Enter the full path to your custom script (e.g., <code>/usr/local/bin/alert-webhook.sh</code>)
5. Save the alert configuration
 
The script will execute immediately when the alert triggers.
 
=== Script Arguments ===
 
The custom script receives alert data as command-line arguments. The format is identical to anti-fraud scripts (see [[Anti-fraud|Anti-Fraud Rules]]):


{| class="wikitable"
{| class="wikitable"
|-
|-
! Argument !! Description
! Alert Type !! Purpose !! Use Case
|-
|-
| <code>$1</code> || Alert ID (numeric identifier)
| '''SIP REGISTER RRD beta''' || Response time monitoring || Network latency, packet loss
|-
|-
| <code>$2</code> || Alert name/type
| '''SIP failed Register (beta)''' || Failed registrations by IP || Brute-force, credential stuffing
|-
|-
| <code>$3</code> || Unix timestamp of alert trigger
| '''multiple register (beta)''' || Same account from multiple IPs || Credential compromise detection
|-
| <code>$4</code> || JSON-encoded alert data
|}
|}


=== Alert Data Structure ===


The JSON in the fourth argument contains CDR IDs affected by the alert:


<syntaxhighlight lang="json">
{
  "cdr": [12345, 12346, 12347],
  "alert_type": "MOS below threshold",
  "threshold": 3.5,
  "actual_value": 2.8
}
</syntaxhighlight>


Use the <code>cdr</code> array to query additional information from the database if needed.

=== Example: Send Webhook to Datadog ===
 
This bash script sends an alert notification to a Datadog webhook API:
 
<syntaxhighlight lang="bash">
#!/bin/bash
# /usr/local/bin/datadog-alert.sh
 
# Configuration
WEBHOOK_URL="https://webhook.site/your-custom-url"
DATADOG_API_KEY="your-datadog-api-key"
 
# Parse arguments
ALERT_ID="$1"
ALERT_NAME="$2"
TIMESTAMP="$3"
ALERT_DATA="$4"
 
# Convert Unix timestamp to readable date
DATE=$(date -d "@$TIMESTAMP" '+%Y-%m-%d %H:%M:%S')
 
# Extract relevant data from JSON
cdrCount=$(echo "$ALERT_DATA" | jq -r '.cdr | length')
threshold=$(echo "$ALERT_DATA" | jq -r '.threshold // empty')
actualValue=$(echo "$ALERT_DATA" | jq -r '.actual_value // empty')
 
# Build webhook payload
PAYLOAD=$(cat <<EOF
{
  "alert_id": "$ALERT_ID",
  "alert_name": "$ALERT_NAME",
  "triggered_at": "$DATE",
  "cdr_count": $cdrCount,
  "threshold": $threshold,
  "actual_value": $actualValue,
  "source": "voipmonitor"
}
EOF
)
 
# Send webhook
curl -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DATADOG_API_KEY" \
  -d "$PAYLOAD"
</syntaxhighlight>
 
Make the script executable:
 
<syntaxhighlight lang="bash">
chmod +x /usr/local/bin/datadog-alert.sh
</syntaxhighlight>
 
=== Example: Send Slack Notification ===
 
<syntaxhighlight lang="bash">
#!/bin/bash
# /usr/local/bin/slack-alert.sh
 
SLACK_WEBHOOK="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
 
ALERT_NAME="$2"
ALERT_DATA="$4"
cdrCount=$(echo "$ALERT_DATA" | jq -r '.cdr | length')
 
curl -X POST "$SLACK_WEBHOOK" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "VoIPmonitor Alert: '"$ALERT_NAME"'",
    "attachments": [{
      "color": "danger",
      "fields": [
        {"title": "CDRs affected", "value": "'"$cdrCount"'"}
      ]
    }]
  }'
</syntaxhighlight>
 
=== Example: Store Alert Details in File ===
 
<syntaxhighlight lang="bash">
#!/bin/bash
# /usr/local/bin/log-alert.sh
 
LOG_DIR="/var/log/voipmonitor-alerts"
mkdir -p "$LOG_DIR"
 
# Log all arguments for debugging
echo "=== Alert triggered at $(date) ===" >> "$LOG_DIR/alerts.log"
echo "Alert ID: $1" >> "$LOG_DIR/alerts.log"
echo "Alert name: $2" >> "$LOG_DIR/alerts.log"
echo "Timestamp: $3" >> "$LOG_DIR/alerts.log"
echo "Data: $4" >> "$LOG_DIR/alerts.log"
echo "" >> "$LOG_DIR/alerts.log"
</syntaxhighlight>
 
=== Example: Access Source IP Addresses ===
 
When querying the CDR database from an alert script, IP addresses are stored as decimal integers in the <code>cdr</code> table. To convert them to human-readable dotted-decimal format (e.g., <code>185.107.80.4</code>), use either PHP's <code>long2ip()</code> function or MySQL's <code>INET_NTOA()</code> function.
 
==== Using PHP's long2ip() (Recommended for Post-Processing) ====
 
If you fetch the raw integer value from the database and convert it in your script:
 
<syntaxhighlight lang="php">
#!/usr/bin/php
<?php
// Parse alert data
$alert = json_decode($argv[4]);
$cdrIds = implode(',', $alert->cdr);
 
// Query the CDR table - note: sipcallerip is a decimal integer
$query = "SELECT id, sipcallerip, sipcalledip
          FROM voipmonitor.cdr
          WHERE id IN ($cdrIds)";
$command = "mysql -h MYSQLHOST -u MYSQLUSER -pMYSQLPASS -N -e \"$query\"";
exec($command, $results);
 
// Process results and convert IP addresses
foreach ($results as $line) {
    list($id, $callerIP, $calledIP) = preg_split('/\t/', trim($line));
 
    // Convert decimal integer to dotted-decimal format
    $callerIPFormatted = long2ip($callerIP);
    $calledIPFormatted = long2ip($calledIP);
 
    echo "CDR ID $id: Caller IP $callerIPFormatted, Called IP $calledIPFormatted\n";
 
    // Example: long2ip(3110817796) returns "185.107.80.4"
}
?>
</syntaxhighlight>
 
==== Using MySQL's INET_NTOA() (Recommended for Database Queries) ====
 
If you prefer to handle conversion in the SQL query itself:
 
<syntaxhighlight lang="php">
#!/usr/bin/php
<?php
// Parse alert data
$alert = json_decode($argv[4]);
$cdrIds = implode(',', $alert->cdr);
 
// Query with IP conversion done in MySQL
$query = "SELECT id, INET_NTOA(sipcallerip) as caller_ip, INET_NTOA(sipcalledip) as called_ip
          FROM voipmonitor.cdr
          WHERE id IN ($cdrIds)";
$command = "mysql -h MYSQLHOST -u MYSQLUSER -pMYSQLPASS -N -e \"$query\"";
exec($command, $results);
 
// Process results - IPs are already formatted
foreach ($results as $line) {
    list($id, $callerIP, $calledIP) = preg_split('/\t/', trim($line));
    echo "CDR ID $id: Caller IP $callerIP, Called IP $calledIP\n";
}
?>
</syntaxhighlight>
 
==== Common IP Columns in CDR Table ====
 
The following columns contain IP addresses (all stored as decimal integers):


{| class="wikitable"
{| class="wikitable"
|-
|-
! Column !! Description
! Field !! Source !! Description
|-
|-
| <code>sipcallerip</code> || SIP signaling source IP
| '''username''' || SIP Contact header || The registered user identity
|-
|-
| <code>sipcalledip</code> || SIP signaling destination IP
| '''from''' fields || SIP From header || From-number, From-domain extracted from From header
|-
|-
| <code>rtpsrcipX</code> || RTP source IP for stream X (where X = 0-9)
| '''to''' fields || SIP To header || To-number, To-domain extracted from To header
|-
|-
| <code>rtpdstipX</code> || RTP destination IP for stream X (where X = 0-9)
| '''lookup name''' || Tools > Prefix Lookup || Custom label if phone number matches a configured prefix entry
|}
|}


==== Troubleshooting IP Format Issues ====
 
If your alert script receives IP addresses as large numbers (e.g., <code>3110817796</code> instead of <code>185.107.80.4</code>):
 
1. Verify you are querying the <code>cdr</code> table directly (not using formatted variables)
2. Use <code>long2ip()</code> in PHP or <code>INET_NTOA()</code> in MySQL to convert the value
3. Check that the column is not already being converted by another layer of the application
 
For reference:
* <code>long2ip(3110817796)</code> returns <code>185.107.80.4</code>
* <code>long2ip(3232255785)</code> returns <code>192.168.1.101</code>
* <code>long2ip(2130706433)</code> returns <code>127.0.0.1</code>
 
=== Important Notes ===
 
* '''IP Address Format''': IP addresses in the <code>cdr</code> table are stored as decimal integers. Use <code>long2ip()</code> (PHP) or <code>INET_NTOA()</code> (MySQL) to convert to dotted-decimal format.
* '''Script execution time''': The alert processor waits for the script to complete. Keep scripts fast (under 5 seconds) or run them in the background if processing takes longer (see the wrapper sketch after this list).
* '''Script permissions''': Ensure the script is executable by the web server user (typically <code>www-data</code> or <code>apache</code>).
* '''Error handling''': Script failures are logged but do not prevent email alerts from being sent.
* '''Querying CDRs''': The script receives CDR IDs in the JSON data. Query the <code>cdr</code> table to retrieve detailed information like caller numbers, call duration, etc.
* '''Security''': Validate input before using it in commands or database queries to prevent injection attacks.
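
For the script execution time point above, a thin wrapper that immediately backgrounds the real work keeps the alert processor from blocking. The worker script path below is hypothetical.

<syntaxhighlight lang="bash">
#!/bin/bash
# /usr/local/bin/alert-wrapper.sh - hand the alert arguments to a slow worker
# without blocking the alert processor (worker path is an example)
nohup /usr/local/bin/slow-alert-worker.sh "$@" >/dev/null 2>&1 &
exit 0
</syntaxhighlight>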
 
=== Troubleshooting External Script Not Triggering ===
 
If an external script configured for an alert is not being triggered when the alert conditions are met, use the following diagnostic steps:
 
{{Note|1=This troubleshooting section focuses on GUI-level external script configuration in the "External script" field. For sniffer-level <code>alert_command</code> configuration in <code>[[Sniffer_configuration|voipmonitor.conf]]</code>, see the sniffer configuration documentation.}}
 
==== Step 1: Verify Alert Configuration and Use Preview Button ====
 
Before troubleshooting the script itself, verify that the alert is correctly configured and test using the preview button:
 
1. Navigate to '''GUI > Alerts'''
2. Edit your alert configuration
3. Click the '''preview button''' (if available) to test the alert with actual data
4. Configure the alert with only an email address first to verify the alert triggers correctly
5. Once confirmed working, add the external script path to the '''External script''' field
 
{{Warning|1=If the alert does not trigger with email notification, the issue is with the alert conditions or evaluation logic (see [[#Troubleshooting_Concurrent_Calls_Alerts_Not_Triggering|Troubleshooting Alerts Not Triggering]] above), not with the script configuration.}}
 
==== Step 2: Verify Script Path and Format ====
 
Ensure the '''External script''' field contains the correct path:
 
* Use the '''full local absolute path''' to the script file (e.g., <code>/root/custom-script.sh</code> or <code>/usr/local/bin/alert-handler.sh</code>)
* Do not use relative paths (e.g., <code>./script.sh</code>) or paths without absolute location
* Verify the script file exists at the specified location
 
<syntaxhighlight lang="bash">
# Verify the file exists
ls -l /root/custom-script.sh
 
# If not found, check your script's actual location
find / -name "custom-script.sh" 2>/dev/null
</syntaxhighlight>
 
==== Step 3: Check Script Permissions ====
 
The script must have execute permissions. Check and fix permissions:
 
<syntaxhighlight lang="bash">
# Check current permissions
ls -l /root/custom-script.sh
 
# Grant execute permissions
chmod 755 /root/custom-script.sh
 
# Verify permissions changed (should show -rwxr-xr-x)
ls -l /root/custom-script.sh
</syntaxhighlight>
 
==== Step 4: Test Script Execution Manually ====
 
Execute the script manually with sample arguments to verify it works:
 
<syntaxhighlight lang="bash">
# Run the script with test arguments
# $1 = Alert ID, $2 = Alert name, $3 = Timestamp, $4 = JSON data
/root/custom-script.sh 1 "Test Alert" 1234567890 '{"cdr":[123],"alert_type":"test"}'
</syntaxhighlight>
 
If the script fails when run manually, fix any issues before testing in the alert system.
 
==== Step 5: Verify Script Context and Dependencies ====
 
Scripts run in the context of the alert processor, not in an interactive shell:
 
* Ensure the script has a proper shebang line (e.g., <code>#!/bin/bash</code> or <code>#!/usr/bin/php</code>)
* Use full paths to commands (e.g., <code>/usr/bin/curl</code> instead of <code>curl</code>)
* Check that the script does not depend on environment variables that may not be set
 
<syntaxhighlight lang="bash">
# Check shebang line
head -1 /root/custom-script.sh
# Should output: #!/bin/bash or similar
 
# Test with full paths to commands
type curl  # Shows /usr/bin/curl or similar
type wget  # Shows /usr/bin/wget or similar
</syntaxhighlight>
 
==== Step 6: Triggering URLs from External Scripts ====
 
If your goal is to trigger a webhook or URL when an alert fires, you cannot put the URL directly in the '''External script''' field. Instead:
 
1. Create a local script file that performs the URL request
2. Reference that script in the '''External script''' field
3. The script should contain the command to trigger the URL
 
'''Example: Script to trigger Slack webhook:'''
 
<syntaxhighlight lang="bash">
#!/bin/bash
# /root/custom-script.sh - Trigger Slack webhook
 
WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
 
curl -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{"text": "'"Alert triggered: $2"'"}'
</syntaxhighlight>
 
'''Example: Script to trigger custom endpoint:'''
 
<syntaxhighlight lang="bash">
#!/bin/bash
# /root/webhook-alert.sh - Trigger custom alert endpoint
 
ENDPOINT_URL="https://mydomain.tld/alert"
 
wget -q -O- --post-data="alert=$2" "$ENDPOINT_URL"
</syntaxhighlight>
 
Remember to make the script executable after creation:
 
<syntaxhighlight lang="bash">
chmod +x /root/custom-script.sh
chmod +x /root/webhook-alert.sh
</syntaxhighlight>
 
==== Step 7: Check Script Logs ====
 
Enable alert processing logs to monitor script execution:
 
See [[#Enable_Detailed_Alert_Processing_Logs|Troubleshooting Alerts Not Triggering]] above to configure <code>CRON_LOG_FILE</code>.
 
<syntaxhighlight lang="bash">
# Watch alert processing logs
tail -f /tmp/alert.log
</syntaxhighlight>
 
If the script fails, error output may be captured in system logs:
 
<syntaxhighlight lang="bash">
# Check web server error logs (location varies by distribution)
tail -f /var/log/apache2/error.log      # Ubuntu/Debian with Apache
tail -f /var/log/httpd/error_log        # RHEL/CentOS with httpd
tail -f /var/log/php-fpm/error.log      # PHP-FPM
</syntaxhighlight>


==== Step 8: Common Issues and Solutions ====

{| class="wikitable"
|-
! Symptom !! Possible Cause !! Solution
|-
| Alert appears in Sent Alerts but script does not run || Script path incorrect in configuration || Verify full absolute path in '''External script''' field
|-
| Script executes but fails with "command not found" || Missing shebang or incomplete PATH || Add shebang (<code>#!/bin/bash</code>) and use full command paths
|-
| Script runs manually but not via alert || Permissions issue (web server cannot execute) || Ensure script is executable (<code>chmod +x</code>) and owned by appropriate user
|-
| Alert never triggers (no Sent Alerts entry) || Alert conditions not met or alert disabled || Test with preview button, verify thresholds, check [[#Troubleshooting_Alerts_Not_Triggering_General|Troubleshooting Alerts Not Triggering]]
|-
| Script timeout blocks subsequent alerts || Long-running script without backgrounding || Add <code>&</code> to run in background or use <code>nohup</code>
|}


== Sent Alerts ==
 
All triggered alerts are saved in history and can be viewed via '''GUI > Alerts > Sent Alerts'''. The content matches what was sent via email.
 
[[File:alert-sentalerts.png|frame|center|Sent alerts history]]
 
=== Parameters Table ===
 
The parameters table shows QoS metrics with problematic values highlighted for quick identification.
 
[[File:alert-perameters.png|frame|center|Alert parameters with highlighted bad values]]
 
=== CDR Records Table ===
 
The CDR records table lists all calls that triggered the alert. Each row includes alert flags indicating which thresholds were exceeded:
* '''(M)''' - MOS below threshold
* '''(J)''' - Jitter exceeded
* '''(P)''' - Packet loss exceeded
* '''(D)''' - Delay exceeded
 
== Anti-Fraud Alerts ==
 
VoIPmonitor includes specialized anti-fraud alert rules for detecting attacks and fraudulent activity. These include:
* Realtime concurrent calls monitoring
* SIP REGISTER flood/attack detection
* SIP PACKETS flood detection
* Country/Continent destination alerts
* CDR/REGISTER country change detection
 
For detailed configuration of anti-fraud rules and custom action scripts, see [[Anti-fraud|Anti-Fraud Rules]].
 
== Alerts Based on Custom Reports ==
 
In addition to native alert types (RTP, SIP response, Sensors), VoIPmonitor supports generating alerts from custom reports. This workflow enables alerts based on criteria not available in native alert types, such as SIP header values captured via CDR custom headers.
 
=== Limitations of Custom Report Alerts ===
 
Custom report alerts are designed for filtering by SIP header values captured as CDR custom headers. They have the following limitations:
 
* '''No "Group By" functionality for threshold-based alerts:''' You cannot create an alert that triggers only when multiple events from the same caller ID or called number occur. For example, you cannot configure an alert that triggers when the same caller ID generates multiple SIP 486 responses, while ignoring single isolated failures from different callers.
* '''Scheduled reports vs. threshold alerts:''' CDR Summary reports can group data by caller number or called number, but these are scheduled reports that send data on a time-based schedule (e.g., daily), not threshold-based alerts that trigger when specific conditions are met.
* '''No alert-level aggregation:''' SIP Response alerts aggregate by total counts or percentages across all calls matching your filters, but cannot aggregate or group by specific caller/called numbers within those filtered results.
 
'''Workaround:'''
 
The closest available workflow is to create a '''CDR Summary daily report''' that:
1. Filters by the SIP response code (e.g., 486)
2. Groups by source number (caller) or destination number (called)
3. Sends a scheduled email (e.g., every 15 minutes or hourly)
 
This report will show which caller numbers have generated failures, but you must manually review the data to identify patterns. The report will be sent on schedule regardless of whether any failures occurred, and there is no way to configure it to only trigger when a specific threshold per unique caller is exceeded.
 
'''Feature Request:'''
 
Alerting based on grouped thresholds (e.g., "alert if the same caller ID generates >X SIP 486 responses") is a requested feature not currently available in VoIPmonitor. If you require this functionality, submit a feature request describing your specific use case.
 
=== Workflow Overview ===
 
1. [[Settings#CDR_Custom_Headers|Capture custom SIP headers]] in the database
2. Create a custom report filtered by the custom header values
3. Generate an alert from that report
4. Configure alert email options (e.g., limit email size)
 
=== Example: Alert on SIP Max-Forwards Header Value ===
 
This example shows how to receive an alert when the SIP <code>Max-Forwards</code> header value drops below 15.
 
'''Step 1: Configure Sniffer Capture'''
 
Add the header to your <code>/etc/voipmonitor.conf</code> configuration file:
 
<syntaxhighlight lang="ini">
# Capture Max-Forwards header
custom_headers = Max-Forwards
</syntaxhighlight>
 
Restart the sniffer to apply changes:
 
<syntaxhighlight lang="bash">
service voipmonitor restart
</syntaxhighlight>
 
'''Step 2: Configure Custom Header in GUI'''
 
1. Navigate to '''GUI > Settings > CDR Custom Headers'''
2. Select <code>Max-Forwards</code> from the available headers
3. Enable '''Show as Column''' to display it in CDR views
4. Save configuration
 
'''Step 3: Create Custom Report'''
 
1. Navigate to '''GUI > CDR Custom Headers''' or use the Report Generator
2. Create a filter for calls where <code>Max-Forwards</code> is less than 15
3. Since custom headers store string values, use a filter expression that matches the desired values:
<syntaxhighlight lang="text">
15 14 13 12 11 10 0_ _
</syntaxhighlight>
  Include additional space-separated values or use NULL to match other ranges as needed.
 
4. Run the report to verify it captures the expected calls
 
'''Step 4: Generate Alert from Report'''
 
You can create an alert based on this custom report using the Daily Reports feature:
 
1. Navigate to '''GUI > Reports > Configure Daily Reports'''
2. Click '''Add Daily Report'''
3. Configure the filter to target the custom header criteria (e.g., Max-Forwards < 15)
4. Set the schedule (e.g., run every hour)
5. Save the daily report configuration
 
'''Step 5: Limit Alert Email Size (Optional)'''
 
If the custom report generates many matching calls, the alert email can become large. To limit the email size:


1. Edit the daily report
2. Go to the '''Basic Data''' tab
3. Set the '''max-lines in body''' option to the desired limit (e.g., 100 lines)


=== Additional Use Cases ===


This workflow can be used for various custom monitoring scenarios:


* '''SIP headers beyond standard SIP response codes''' - Monitor any custom SIP header
* '''Complex filtering logic''' - Create reports based on multiple custom header filters
* '''Threshold monitoring for string fields''' - When numeric comparison is not available, use string matching


For more information on configuring custom headers, see [[Settings#CDR_Custom_Headers|CDR Custom Headers]].


== Troubleshooting Email Alerts ==


If email alerts are not being sent, the issue is typically with the Mail Transfer Agent (MTA) rather than VoIPmonitor.


=== Step 1: Test Email Delivery from Command Line ===
 
Before investigating complex issues, verify your server can send emails:
 
<syntaxhighlight lang="bash">
# Test using the 'mail' command
echo "Test email body" | mail -s "Test Subject" your.email@example.com
</syntaxhighlight>
 
If this fails, the issue is with your MTA configuration, not VoIPmonitor.
 
=== Step 2: Check MTA Service Status ===
 
Ensure the MTA service is running:
 
<syntaxhighlight lang="bash">
# For Postfix (most common)
sudo systemctl status postfix
 
# For Exim (Debian default)
sudo systemctl status exim4
 
# For Sendmail
sudo systemctl status sendmail
</syntaxhighlight>
 
If the service is not running or not installed, install and configure it according to your Linux distribution's documentation.
 
=== Step 3: Check Mail Logs ===
 
Examine the MTA logs for specific error messages:
 
<syntaxhighlight lang="bash">
# Debian/Ubuntu
tail -f /var/log/mail.log
 
# RHEL/CentOS/AlmaLinux/Rocky
tail -f /var/log/maillog
</syntaxhighlight>
 
Common errors and their meanings:
{| class="wikitable"
{| class="wikitable"
|-
|-
! Error Message !! Cause !! Solution
! Arg !! Description
|-
| Connection refused || MTA not running or firewall blocking || Start MTA service, check firewall rules
|-
|-
| Relay access denied || SMTP relay misconfiguration || See "Configuring SMTP Relay" below
| <code>$1</code> || Alert ID
|-
|-
| Authentication failed || Incorrect credentials || Verify credentials in sasl_passwd
| <code>$2</code> || Alert name
|-
|-
| Host or domain name lookup failed || DNS issues || Check /etc/resolv.conf
| <code>$3</code> || Unix timestamp
|-
|-
| Greylisted || Temporary rejection || Wait and retry, or whitelist sender
| <code>$4</code> || JSON data with CDR IDs
|}
|}


=== Step 4: Check Mail Queue ===

Emails may be stuck in the queue if delivery is failing:

<syntaxhighlight lang="bash">
# View the mail queue
mailq

# Force immediate delivery attempt
postqueue -f
</syntaxhighlight>

Deferred or failed messages in the queue contain error details explaining why delivery failed.

A status of '''250''' or '''"Queued mail for delivery"''' in the logs means your server delivered the message successfully; if the recipient did not receive it, the problem is on their side (spam folder, quarantine, blacklisting).

'''Mail Queue Not Delivering:'''

If emails accumulate in the queue but are not being sent:

<syntaxhighlight lang="bash">
# Verify the queue manager is running
ps aux | grep qmgr

# Restart Postfix
systemctl restart postfix

# Force immediate delivery of queued emails
postfix flush
</syntaxhighlight>
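
To see why a specific message is deferred, you can dump it straight from the queue with standard Postfix tools (replace the example queue ID with one taken from the <code>mailq</code>/<code>postqueue -p</code> listing):

<syntaxhighlight lang="bash">
# List queued messages with their queue IDs and deferral reasons
postqueue -p

# Dump a specific deferred message, including the recorded delivery error
postcat -vq 4C2E9A1B23   # example queue ID - use one from the listing above
</syntaxhighlight>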


=== Configuring SMTP Relay ===
== Sent Alerts ==


If you encounter "Relay access denied" errors, your Postfix server cannot send emails through your external SMTP server. There are two solutions:
View triggered alerts via '''GUI > Alerts > Sent Alerts'''. Shows:
* '''Parameters table''' - QoS metrics with highlighted bad values
* '''CDR records''' - Calls that triggered alert with flags: (M)OS, (J)itter, (P)acket loss, (D)elay


'''Solution 1: Configure External SMTP to Permit Relaying (Recommended for Trusted Networks)'''
== Custom Report Alerts ==


If the VoIPmonitor server is in a trusted network, configure your external SMTP server to permit relaying from the VoIPmonitor server's IP address:
Alert on criteria not in native types (e.g., custom SIP headers).


1. Access your external SMTP server configuration
'''Workflow:'''
2. Add the VoIPmonitor server's IP address to the allowed relay hosts (mynetworks)
# Capture header in <code>/etc/voipmonitor.conf</code>: <code>custom_headers = Max-Forwards</code>
3. Save configuration and reload: <code>postfix reload</code>
# Enable in '''GUI > Settings > CDR Custom Headers'''
# Create filter in CDR view, save as template
# Create Daily Report with filter in '''GUI > Reports > Configure Daily Reports'''


'''Solution 2: Configure Postfix SMTP Authentication (Recommended for Remote SMTP)'''
{{Note|1=Custom report alerts cannot group by caller/called for threshold detection (e.g., "alert if same caller has >X failures"). Use CDR Summary reports for aggregated data.}}


== Troubleshooting ==

=== Email Not Sent ===

'''Diagnosis:'''
* Entries in "Sent Alerts" but no email received → MTA issue
* No entries in "Sent Alerts" → alert conditions or cron issue

<syntaxhighlight lang="bash">
# Test the MTA directly
echo "Test" | mail -s "Test" your@email.com

# Check MTA status
systemctl status postfix  # or exim4/sendmail

# Check logs
tail -f /var/log/mail.log  # Debian/Ubuntu
tail -f /var/log/maillog   # RHEL/CentOS

# Check the mail queue
mailq
</syntaxhighlight>

'''Status 250 or "Queued mail for delivery"''' means your server delivered the message successfully. If the recipient did not receive it, the problem is on their side (spam folder, quarantine, blacklisting).

'''Mail queue not delivering:''' If emails accumulate in the queue but are not being sent, note that deferred or failed messages in the queue contain error details explaining why delivery failed:

<syntaxhighlight lang="bash">
# Verify the queue manager is running
ps aux | grep qmgr

# Restart Postfix
systemctl restart postfix

# Force immediate delivery of queued emails
postfix flush   # equivalent to: postqueue -f
</syntaxhighlight>
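To see what happened to one specific alert email, a log search of this sort can help; the recipient address and log path below are placeholders:

<syntaxhighlight lang="bash">
# Look for delivery attempts to the alert recipient (log path per your distribution)
grep "your@email.com" /var/log/mail.log | tail -n 20
# Postfix marks each attempt with status=sent, status=deferred or status=bounced
</syntaxhighlight>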


You can also isolate the problem by testing PHP's <code>mail()</code> function directly:

<syntaxhighlight lang="bash">
php -r "mail('your.email@example.com', 'Test from PHP', 'This is a test email');"
</syntaxhighlight>

* If this works but VoIPmonitor alerts don't arrive → check the GUI cron job and alert configuration
* If this fails → MTA or PHP configuration issue
=== Configuring an SMTP Relay ===

If you encounter "Relay access denied" errors, your Postfix server cannot send emails through your external SMTP server. There are two solutions:

'''Solution 1: Configure the External SMTP Server to Permit Relaying (Recommended for Trusted Networks)'''

If the VoIPmonitor server is in a trusted network, configure your external SMTP server to permit relaying from the VoIPmonitor server's IP address:

# Access your external SMTP server configuration
# Add the VoIPmonitor server's IP address to the allowed relay hosts (<code>mynetworks</code>)
# Save the configuration and reload: <code>postfix reload</code>

'''Solution 2: Configure Postfix SMTP Authentication (Recommended for Remote SMTP)'''

If using an external SMTP server that requires authentication, configure Postfix to authenticate using SASL:

1. Install the SASL authentication packages:
<syntaxhighlight lang="bash">
# Debian/Ubuntu
sudo apt-get install libsasl2-modules

# RHEL/CentOS/AlmaLinux/Rocky
sudo yum install cyrus-sasl-plain
</syntaxhighlight>

2. Configure Postfix to use the external SMTP relay by editing <code>/etc/postfix/main.cf</code>:
<syntaxhighlight lang="ini">
# Use external SMTP as relay host
relayhost = smtp.yourprovider.com:587

# Enable SASL authentication
smtp_sasl_auth_enable = yes

# Use SASL password file
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

# Disable anonymous authentication (use only SASL)
smtp_sasl_security_options = noanonymous

# Enable TLS (recommended)
smtp_tls_security_level = encrypt
</syntaxhighlight>

3. Create the SASL password file with your SMTP credentials:
<syntaxhighlight lang="bash">
# Create the file (your SMTP username and password)
echo "[smtp.yourprovider.com]:587 username:password" | sudo tee /etc/postfix/sasl_passwd

# Secure the file (read/write for root only)
sudo chmod 600 /etc/postfix/sasl_passwd

# Create the Postfix hash database
sudo postmap /etc/postfix/sasl_passwd

# Reload Postfix
sudo systemctl reload postfix
</syntaxhighlight>

4. Test email delivery:
<syntaxhighlight lang="bash">
echo "Test email" | mail -s "SMTP Relay Test" your.email@example.com
</syntaxhighlight>

If successful, emails are now delivered through the authenticated SMTP relay.
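If mail still does not leave the server, it can help to confirm which relay settings Postfix is actually running with and what happened to the test message; the address and log path below are placeholders:

<syntaxhighlight lang="bash">
# Print the relay-related settings currently in effect
postconf relayhost smtp_sasl_auth_enable smtp_tls_security_level

# Check the fate of the test message (look for status=sent/deferred/bounced)
grep "your.email@example.com" /var/log/mail.log | tail -n 20
</syntaxhighlight>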
=== Alerts Not Triggering ===

'''Diagnosis:''' if alerts appear in '''Sent Alerts''' but no email arrives, the problem is the MTA (see above). If nothing appears in '''Sent Alerts''' at all, the alert processor is not evaluating or matching your alerts.

'''Common causes:'''
* Cron job not running - verify it exists and runs every minute (see the check below)
* PHP CLI version mismatch - use <code>update-alternatives --set php /usr/bin/php8.x</code> (see ''PHP CLI Version Mismatch Fix'' below)
* SQL queue growing - the database cannot keep up (see [[Scaling]])
* Alert disabled, filter mismatch, or invalid recipient email address

Step-by-step diagnostics are described under ''Detailed Alert Processing Diagnostics'' below.
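Note that the standard installation adds the cron entry to <code>/etc/crontab</code>, which <code>crontab -l</code> does not list, so it is worth checking both locations; a quick way:

<syntaxhighlight lang="bash">
# System-wide crontab (used by the standard install; note the "root" user field)
grep run.php /etc/crontab

# Per-user crontab of the current user
crontab -l | grep run.php
</syntaxhighlight>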
 
=== Concurrent Calls Alerts Not Triggering ===

CDR-based concurrent calls alerts may not trigger as expected due to database queue delays or alert timing configuration. Unlike realtime concurrent calls alerts (see [[Anti-fraud|Anti-Fraud Rules]]), CDR-based alerts require CDRs to be written to the database before evaluation.

==== Check the SQL Cache Files Queue ====

A growing SQL cache queue can prevent CDR-based alerts from triggering, because the alert processor evaluates CDRs that have already been stored in the database, not calls still waiting in the queue.

* Navigate to '''GUI > Settings > Sensors'''
* Check the RRD chart for '''SQL cache files''' (SQLq/SQLf metric)
* '''If the queue is growing during peak times:'''
** The database cannot keep up with CDR insertion rates
** Alerts evaluate outdated data because recent CDRs have not been written yet
** See [[SQL_queue_is_growing_in_a_peaktime|Delay between active call and cdr view]] for solutions
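Besides the RRD chart, the sensor also reports its queue counters in its periodic syslog status line; the exact format varies by version, but a watch along these lines works from the shell:

<syntaxhighlight lang="bash">
# Watch the sensor's periodic status lines (they include queue counters such as SQLq/SQLf)
tail -f /var/log/syslog | grep -i voipmonitor     # Debian/Ubuntu
# tail -f /var/log/messages | grep -i voipmonitor # RHEL/CentOS
</syntaxhighlight>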
 
==== CDR Timing vs the "CDR not older than" Setting ====

CDR-based alerts include a ''CDR not older than'' parameter that filters which CDRs are considered for alert evaluation.

* '''Parameter location:''' in the concurrent calls alert configuration form
* '''Function:''' only CDRs newer than this time window are evaluated
* '''Diagnosis:'''
** Verify that the time difference between ''Last CDR in database'' and ''Last CDR in processing queue'' (in the Sensors status) is smaller than your ''CDR not older than'' value
** If the delay is larger, CDRs are being excluded from alert evaluation
** Common causes: database overload, slow storage, insufficient MySQL configuration
* '''Solution:'''
** Increase the ''CDR not older than'' value to match your database performance
** See [[SQL_queue_is_growing_in_a_peaktime]] and [[Scaling]] for database tuning
** The SQLq value should remain low (under 1000) during peak load
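A rough way to estimate the insertion delay from the shell is to compare the newest stored CDR with the current time; this sketch assumes the standard <code>cdr.calldate</code> column and illustrative credentials:

<syntaxhighlight lang="bash">
# Estimate how far CDR inserts lag behind real time. A large value during busy hours
# indicates the database is behind; during idle periods it may simply mean no traffic.
mysql -u voipmonitor -p voipmonitor -e \
  "SELECT NOW() AS now, MAX(calldate) AS last_cdr,
          TIMESTAMPDIFF(SECOND, MAX(calldate), NOW()) AS lag_seconds
   FROM cdr;"
</syntaxhighlight>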
 
=== "Check Interval" Parameter and Low Thresholds ===
 
When testing concurrent calls alerts with very low thresholds (e.g., greater than 0 calls or 1 call), consider the ''Check interval'' parameter.
 
* '''Parameter location:''' In the concurrent calls alert configuration form
* '''Function:''' How often the alert condition is evaluated (time window for concurrent call calculation)
* '''Issue with low thresholds:'''
** A call lasting 300 seconds (5 minutes) will show as concurrent for the entire duration
** If ''Check interval'' is shorter than typical call durations, you may see temporary concurrent counts that disappear between evaluations
* '''Recommendation for testing:'''
** Increase the ''Check interval'' to a longer duration (e.g., 60 minutes) when testing with very low thresholds
** This ensures concurrent calls are counted over a longer time window, avoiding false negatives from short interval checks


==== Fraud Concurrent Calls vs Regular Concurrent Calls Alerts ====

VoIPmonitor provides two different alert types for concurrent calls monitoring. They operate on different data sources and have different capabilities:

{| class="wikitable"
|-
! Feature !! Fraud Concurrent Calls !! Regular Concurrent Calls
|-
| '''Data source''' || SIP INVITEs (realtime) || CDRs (after the call ends)
|-
| '''Processing type''' || Realtime (packet inspection) || CDR-based (database query)
|-
| '''Aggregation ("BY" dropdown)''' || '''Source IP only''' (hard-coded) || Source IP, '''Destination IP''', Domain, Custom Headers
|-
| '''Domain filtering''' || '''Not available''' (major limitation) || Available via SQL filter
|-
| '''Timing''' || Immediate (no database delay) || Delayed (requires CDR insertion)
|-
| '''Where configured''' || '''GUI > Alerts > Anti Fraud''' || '''GUI > Alerts'''
|-
| '''Table name''' || <code>list_concurrent_calls</code> in the anti-fraud section || Standard alerts table
|}

'''Key Differences:'''

* '''Fraud concurrent calls''' detect concurrent INVITEs in realtime, but the "BY" dropdown only supports '''Source IP aggregation'''. This is a hard-coded limitation of the realtime detection logic, which is designed for detecting attacks from specific IPs. Use this type for attack detection where you need immediate alerts.
* '''Regular concurrent calls''' alerts use stored CDRs and support multiple aggregation options including '''Destination IP (Called)''', Source IP, Domain, and custom headers via the Common Filters tab. Use this type for capacity planning, trunk monitoring, or when you need to filter by destination.
* '''Destination IP monitoring:''' if you need to alert when concurrent calls to a specific destination IP (e.g., carrier, trunk) exceed a threshold, use the regular '''Concurrent calls''' alert. The fraud concurrent calls alert cannot filter or aggregate by destination IP.

Example use cases:

{| class="wikitable"
|-
! Scenario !! Recommended Alert Type !! Configuration
|-
| Detect flooding/attack from a specific source IP (immediate) || '''Fraud: realtime concurrent calls''' || GUI > Alerts > Anti Fraud, BY: Source IP
|-
| Monitor a trunk capacity limit (destination IP threshold) || '''Concurrent calls''' || GUI > Alerts, BY: Destination IP (Called)
|-
| Detect domain-specific concurrent call patterns || '''Concurrent calls''' || GUI > Alerts, Common Filters: Domain
|-
| Detect attacks with immediate triggering || '''Fraud: realtime concurrent calls''' || GUI > Alerts > Anti Fraud
|}

If you have configured a concurrent calls alert and need filtering by destination IP, domain, or custom headers, verify that you are using the regular '''Concurrent calls''' alert in '''GUI > Alerts''' and not the fraud variant.

'''Investigating Fraud: Realtime Concurrent Calls Alerts'''

Since the realtime (fraud) alert type triggers before CDRs are written, use the following procedure to investigate the calls that triggered the alert:
# Navigate to '''GUI > CDR'''
# Use the filter form to add the '''is international''' filter
# Set the '''from''' and '''to''' date range to match the time the alert was sent
# Go to the bottom of the CDR view and enable grouping by '''country'''
# Analyze the traffic by country to identify the source of the fraudulent activity

=== External Script Not Running ===

If an alert triggers but its external script does not run (see the manual test sketch after this list):
# Use the '''preview button''' to test that the alert actually triggers
# Verify the script path is absolute (not relative)
# Check permissions: <code>chmod 755 /path/to/script.sh</code>
# Include a shebang: <code>#!/bin/bash</code>
# Use full command paths (e.g., <code>/usr/bin/curl</code>)
# To trigger a URL, create a small script that calls curl/wget - a URL cannot be placed directly in the field
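One way to rule out script-side problems is to run the script by hand with dummy arguments that mimic what the alert processor passes ($1 alert ID, $2 name, $3 timestamp, $4 JSON with CDR IDs). The values, the web-server user, and the exact JSON layout below are assumptions; substitute a payload captured from a real run if needed:

<syntaxhighlight lang="bash">
# Hypothetical manual test of an external alert script (argument values are made up)
sudo -u www-data /usr/local/bin/slack-alert.sh \
  123 "Test alert" "$(date +%s)" '{"cdr_id":[1,2,3]}'
echo "exit code: $?"
</syntaxhighlight>

Running it as the web-server user tends to surface permission problems that do not appear when testing as root.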


=== Detailed Alert Processing Diagnostics ===

If alerts are not appearing in the '''Sent Alerts''' history at all, the problem is typically that the alert processor is not evaluating them. This is different from MTA issues, where alerts appear in history but emails are not sent.

==== Enable Detailed Alert Processing Logs ====

To debug why alerts are not being evaluated, enable detailed logging for the alert processor by adding the following line to your GUI configuration file:

<syntaxhighlight lang="bash">
# Edit the config file (adjust the path based on your GUI installation)
nano ./config/system_configuration.php
</syntaxhighlight>

Add this line at the end of the file:
<syntaxhighlight lang="php">
<?php
define('CRON_LOG_FILE', '/tmp/alert.log');
?>
</syntaxhighlight>

This enables logging that shows which alerts are processed during each cron run.

==== Increase Parallel Processing Threads ====

If you have many alerts or reports, the default number of parallel threads may cause timeout issues. Increase the parallel task limit:

<syntaxhighlight lang="bash">
# Edit the configuration file
nano ./config/configuration.php
</syntaxhighlight>

Add this line at the end of the file:
<syntaxhighlight lang="php">
<?php
define('CRON_PARALLEL_TASKS', 8);
?>
</syntaxhighlight>

A value of 8 is recommended for high-load environments; adjust it based on your alert/report volume and server capacity.
 
==== Monitor Alert Processing Logs ====

After enabling logging, monitor the alert log file to see which alerts are being processed:

<syntaxhighlight lang="bash">
# Watch the log in real-time
tail -f /tmp/alert.log
</syntaxhighlight>

The log shows entries like:
<syntaxhighlight lang="text">
begin alert [alert_name]
end alert [alert_name]
</syntaxhighlight>

'''Interpreting the logs:'''
* If you '''do not see''' your alert name in the logs → the alert processor is not evaluating it. Check your alert configuration, filters, and data availability.
* If you '''see''' the alert in the logs but it does not trigger → the alert conditions are not being met. Check your thresholds and filter logic, and verify the CDR data matches your expectations.
* If the logs are completely empty → the cron job may not be running or the GUI configuration files are not being loaded. Verify the cron job and file paths.
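For a quick check that a particular alert is being picked up at all, you can grep the log for its name; the alert name below is a placeholder:

<syntaxhighlight lang="bash">
# Count processed alerts in the current log, then look for one specific alert by name
grep -c "begin alert" /tmp/alert.log
grep "My QoS alert" /tmp/alert.log | tail -n 5
</syntaxhighlight>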
 
==== Alert Not Appearing in Logs ====

If your alert does not appear in <code>/tmp/alert.log</code>:

1. '''Verify the cron job is running:'''
<syntaxhighlight lang="bash">
# Check that the cron job exists
crontab -l

# Manually run the cron script to see errors
php ./php/run.php cron
</syntaxhighlight>

2. '''Verify data exists in the CDR:''' navigate to '''GUI > CDR > Browse''', filter for the timeframe, and confirm that the calls which should trigger the alert actually exist.

3. '''Check the alert configuration:'''
* Verify the alert is enabled
* Verify the filter logic matches your data (IP addresses, numbers, groups)
* Verify the thresholds are reasonable for the actual QoS metrics
* Verify the GUI license is not locked (check '''GUI > Settings > License''')
 
== "Crontab Log is Too Old" Warning - Database Performance Issues ==
 
The VoIPmonitor GUI displays a warning message "Crontab log is too old" when the last successful cron run timestamp exceeds the expected interval. While this often indicates a missing or misconfigured cron job, it can also occur when the database is overloaded and the cron script runs slowly.
 
=== Common Causes ===
 
# '''Missing or broken cron entry''' - The cron job does not exist in /etc/crontab or the command fails when executed
# '''Database overload''' - The cron job runs but completes slowly due to database performance bottlenecks, causing the "last run" timestamp to drift outside the expected window
 
=== Distinguishing the Causes ===
 
Use the following diagnostic workflow to determine if the issue is cron configuration vs. database performance:
 
'''Step 1: Verify the cron job is actually running'''
 
Check if the cron execution timestamp is updating (even if slowly):
 
<syntaxhighlight lang="bash"># Check the current cron timestamp from the database
mysql -u voipmonitor -p voipmonitor -e "SELECT name, last_run FROM scheduler LIMIT 1"
 
# The last_run timestamp should update at least every few minutes
# If it never updates, the cron is not running (see Step 2)
# If it updates but lags by more than 5-10 minutes, it's a performance issue (see Step 3)
</syntaxhighlight>
 
'''Step 2: If cron is not running at all'''
 
Follow the standard cron setup instructions in the "Setting Up the Cron Job" section above. Common issues:
 
* Cron entry missing from /etc/crontab
* Incorrect PHP path (use full path like /usr/bin/php instead of php)
* PHP CLI missing IonCube loader (check with `php -r 'echo extension_loaded("ionCube Loader")?"yes":"no";'`)
* Wrong file permissions or incorrect web directory path
* '''PHP CLI version mismatch''' - System CLI PHP differs from web server PHP
 
==== PHP CLI Version Mismatch Fix ====
 
Even if the cron job exists and IonCube Loader is installed for both web and CLI, the CLI may be using a different PHP version than the web server, causing the cron script to fail. This commonly occurs when multiple PHP versions are installed on the system.
 
'''Symptoms:'''
* Cron job exists in /etc/crontab
* PHP CLI has IonCube loader installed
* The GUI shows "Crontab log is too old" warning
* Manual command execution succeeds when using the correct PHP version
 
'''Diagnosis:'''
 
1. Check the target PHP version required by the GUI:
<syntaxhighlight lang="bash">
cat /var/www/html/ioncube_phpver
# Output may be: 81 (for PHP 8.1), 82 (for PHP 8.2), etc.
</syntaxhighlight>
 
2. Check the current CLI PHP version:
<syntaxhighlight lang="bash">
php -v
# Example output: PHP 8.2.26 (the default CLI version may not match the GUI requirement)
</syntaxhighlight>
 
3. List available PHP CLI versions:
<syntaxhighlight lang="bash">
ls /usr/bin/php*
# You may see: php8.1, php8.2, php8.3
</syntaxhighlight>
 
'''Solution: Set CLI PHP Version to Match Web Server'''
 
Use the <code>update-alternatives</code> command to set the default CLI PHP version to match the web server:

<syntaxhighlight lang="bash">
# If ioncube_phpver shows "81", set the CLI to PHP 8.1
sudo update-alternatives --set php /usr/bin/php8.1

# If ioncube_phpver shows "82", set the CLI to PHP 8.2
sudo update-alternatives --set php /usr/bin/php8.2

# Verify the change
php -v
which php
# Should now point to the correct version
</syntaxhighlight>

'''Verify the Fix:'''

Test the cron command manually and check that IonCube is loaded with the new version:

<syntaxhighlight lang="bash">
cd /var/www/html
php php/run.php cron

# Verify IonCube is loaded
php -r 'echo extension_loaded("ionCube Loader")?"yes":"no";'
# Should output: yes
</syntaxhighlight>

After a few minutes, the "Crontab log is too old" warning in the GUI should disappear, confirming that the cron job is running successfully.
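If changing the system-wide default CLI is not desirable (other applications may rely on it), an alternative sketch is to pin the cron entry itself to the required PHP binary; the version number and GUI path below are illustrative:

<syntaxhighlight lang="bash">
# /etc/crontab entry calling a specific PHP binary instead of the default "php"
* * * * * root /usr/bin/php8.1 /var/www/html/php/run.php cron
</syntaxhighlight>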
'''Step 3: If cron runs but slowly (database performance issue)'''

When the cron job runs but takes a long time to complete, the issue is database overload. Diagnose it using the Sensors statistics:

# Navigate to '''GUI > Settings > Sensors'''
# Click on the sensor status to view detailed statistics
# Compare the following timestamps:
#* '''Last CDR in database''' - the timestamp of the most recently completed call stored in MySQL
#* '''Last CDR in processing queue''' - the timestamp of the most recent call reached by the sniffer

If there is a significant delay (minutes or more) between these two timestamps during peak traffic, the database cannot keep up with CDR insertion. This also causes alert/report processing (<code>run.php cron</code>) to run slowly.
==== Solutions for Database Performance Issues ====

'''1. Check the MySQL configuration'''

Ensure your MySQL/MariaDB configuration follows the recommended settings for your call volume. Key parameters:
* <code>innodb_flush_log_at_trx_commit</code> - set to 2 for better performance (or 0 in extreme high-CPS environments)
* <code>innodb_buffer_pool_size</code> - allocate 70-80% of available RAM for high-volume deployments
* <code>innodb_io_capacity</code> - match your storage system capabilities (e.g., 1000000 for NVMe SSDs)

See [[Scaling]] and [[High-Performance_VoIPmonitor_and_MySQL_Setup_Manual]] for detailed tuning guides.
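Before editing my.cnf, it helps to see which values are currently in effect; for example:

<syntaxhighlight lang="bash">
# Show the current values of the key InnoDB settings
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size','innodb_flush_log_at_trx_commit','innodb_io_capacity');"
</syntaxhighlight>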
'''2. Increase database write threads'''

In <code>/etc/voipmonitor.conf</code>, increase the number of threads used for writing CDRs:

<syntaxhighlight lang="ini">
mysqlstore_max_threads_cdr = 8  # Default is 4, increase based on workload
</syntaxhighlight>

'''3. Monitor SQL queue statistics'''

In the expanded status view ('''GUI > Settings > Sensors''' > status), check the SQLq value:
* '''SQLq (SQL queue) growing steadily''' - the database is a bottleneck; calls are waiting in memory
* '''SQLq remains low (under 1000)''' - the database is keeping up; other tuning may be needed

See [[SQL_queue_is_growing_in_a_peaktime]] for more information.

'''4. Reduce alert/report processing load'''

Too many alert rules or complex reports can exacerbate the problem:
* Review and disable unnecessary alerts in '''GUI > Alerts'''
* Reduce the frequency of daily reports (edit in '''GUI > Reports''')
* Increase parallel processing tasks: in <code>/var/www/html/configuration.php</code>, set <code>define('CRON_PARALLEL_TASKS', 8);</code> (requires increasing PHP memory limits)

'''5. Check database query performance'''

Identify slow queries by enabling the slow query log in my.cnf:

<syntaxhighlight lang="ini">
slow_query_log = 1
long_query_time = 2
</syntaxhighlight>

<syntaxhighlight lang="bash">
# After waiting for a cron cycle, check the slow query log
tail -f /var/log/mysql/slow.log
</syntaxhighlight>

Look for queries taking more than a few seconds. Common culprits:
* Missing indexes on frequently filtered columns (caller, called, sipcallerip, etc.)
* Complex alert conditions joining large tables
* Daily reports scanning millions of rows without date range limitations
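If <code>mysqldumpslow</code> (shipped with the MySQL/MariaDB client tools) is available, it can summarize the slow log instead of reading it raw:

<syntaxhighlight lang="bash">
# Aggregate the slow log, sorted by total query time, and show the top entries
mysqldumpslow -s t /var/log/mysql/slow.log | head -n 20
</syntaxhighlight>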
'''6. Scale the database architecture'''

For very high call volumes (4000+ concurrent calls), consider:
* A separate database server from the sensor hosts
* MariaDB with LZ4 page compression
* Database replication for read queries
* Hourly table partitioning for improved write performance

See [[High-Performance_VoIPmonitor_and_MySQL_Setup_Manual]] for architecture recommendations.
==== Verification ====

After applying fixes:

1. '''Monitor the "Crontab log is too old" timestamp in the GUI'''
* The timestamp should update every 1-3 minutes during normal operation
* If it still lags by 10+ minutes, further tuning is required

2. '''Check the sensor statistics (GUI > Settings > Sensors)'''
* The delay between "Last CDR in database" and "Last CDR in processing queue" should stay under 1-2 minutes during peak load
* SQLq should remain below 1000 and not grow continuously

3. '''Test alert processing manually'''
<syntaxhighlight lang="bash">
# Run the cron script manually and measure execution time
time php /var/www/html/php/run.php cron
# Should complete within 10-30 seconds in most environments
# If it takes longer than 60-120 seconds, database tuning is needed
</syntaxhighlight>
== See Also ==


* [[Anti-fraud|Anti-Fraud Rules]] - Realtime fraud detection configuration
* [[Reports|Reports]] - Daily reports and report generator
* [[Groups]] - IP and number groups for filtering
* [[Sniffer_troubleshooting|Sniffer Troubleshooting]] - General troubleshooting


== AI Summary for RAG ==

'''Summary:''' VoIPmonitor Alerts system provides email notifications for QoS thresholds (RTP: MOS, jitter, packet loss, delay), SIP response codes (0 = no response, 408 = timeout), sensor health, and registration monitoring. Alert types include RTP, RTP&CDR (with filter templates for duration/absolute_timeout and PDD monitoring), SIP Response (uncheck "from all" when monitoring specific IP groups), International Calls (prefix-based, NOT GeoIP), Sensors, SIP REGISTER alerts (RRD beta for latency, failed Register beta for brute-force, multiple register beta for credential compromise), CDR Trends (ASR trend monitoring with Offset/Range/Deviation parameters), and custom report-based alerts. External scripts enable webhook integrations (Slack, Datadog). CRITICAL: alerts use OR logic between conditions - AND logic is NOT supported and no "operand" parameter exists; the workaround is separate alerts correlated manually. IP addresses in the CDR table are stored as decimal integers - use long2ip() (PHP) or INET_NTOA() (MySQL) for conversion. Troubleshooting covers MTA and SMTP relay configuration, crontab setup, CRON_LOG_FILE debugging, CRON_PARALLEL_TASKS, concurrent calls alert timing (SQL queue delays, "CDR not older than"), fraud vs regular concurrent calls alerts, PHP CLI version mismatch (update-alternatives, ioncube_phpver), database performance tuning, and external scripts not triggering (absolute path, chmod 755, preview button, curl/wget wrapper for URLs).

'''Keywords:''' alerts, email notifications, QoS, MOS, jitter, packet loss, SIP response, 408 Request Timeout, response code 0, sensors monitoring, SIP REGISTER RRD beta, SIP failed Register beta, credential stuffing, brute force, multiple register beta, RTP&CDR alerts, PDD, Post Dial Delay, CDR trends, ASR, Answer Seizure Ratio, Offset, Range, Deviation, from all checkbox, percentage threshold, OR logic, AND logic, multiple conditions, no operand parameter, international calls, called number prefixes, DEFAULT_EMAIL_FROM, crontab, MTA, Postfix, CRON_LOG_FILE, CRON_PARALLEL_TASKS, external scripts, webhooks, Datadog, Slack, long2ip, INET_NTOA, decimal IP, concurrent calls alerts, SQL queue, SQLq, fraud concurrent calls, PHP CLI version mismatch, update-alternatives, ioncube_phpver, external script not triggering, preview button, absolute path, chmod 755, curl, wget

'''Key Questions:'''
* How do I configure email alerts in VoIPmonitor?
* What alert types are available (RTP, SIP, Sensors)?
* How do I configure international call alerts with prefix filtering?
* What does the "from all" checkbox do in percentage alerts?
* How do I configure alerts for a specific IP group?
* How do I integrate alerts with webhooks (Slack, Datadog)?
* How do I detect SIP registration floods (credential stuffing, brute force)?
* What is the difference between SIP failed Register beta and multiple register beta?
* How do I configure CDR trends alerts for ASR monitoring?
* What are the Offset and Range parameters in CDR trends?
* How do I configure the "From" address for alert emails (DEFAULT_EMAIL_FROM)?
* How do I convert decimal IP addresses to dotted-decimal format?
* Do alerts use AND or OR logic between multiple conditions?
* Can I configure an alert that requires ALL conditions to be met?
* Is there an "operand" parameter to combine alert conditions with AND logic?
* How do I work around the lack of AND logic in alerts?
* Why are my alerts not triggering?
* How do I troubleshoot email delivery issues?
* Why are concurrent calls alerts not triggering?
* What is the "CDR not older than" parameter?
* What is the difference between fraud and regular concurrent calls alerts?
* What does the "Crontab log is too old" warning mean?
* How do I fix a PHP CLI version mismatch for cron jobs?
* How do I enable detailed alert processing logs (CRON_LOG_FILE)?
* Why is my external script not triggering when alert conditions are met?
* Do I use an absolute or relative path in the external script field?
* How do I trigger a URL from an external script?
* How do I use the preview button to test alerts?