Alerts
Alerts & Reports
Alerts & Reports generate email notifications based on QoS parameters or SIP error conditions. The system includes daily reports, ad hoc reports, and stores all generated items in history.
Overview
The alert system monitors call quality and SIP signaling in real-time, triggering notifications when configured thresholds are exceeded.
Email Configuration Prerequisites
Emails are sent using PHP's mail() function, which relies on the server's Mail Transfer Agent (MTA) such as Exim, Postfix, or Sendmail. Configure your MTA according to your Linux distribution documentation.
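Before relying on alert emails, it can help to confirm which sendmail binary PHP is configured to hand messages to; a quick check (note that the CLI and web-server PHP configurations may differ):
# Show the sendmail binary used by PHP's mail() (CLI configuration)
php -i | grep -i sendmail_path
# The path should point to an existing binary, typically /usr/sbin/sendmail
ls -l /usr/sbin/sendmail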
Setting Up the Cron Job
Alert processing requires a cron job that runs every minute:
# Add to /etc/crontab (adjust path based on your GUI installation)
echo "* * * * * root php /var/www/html/php/run.php cron" >> /etc/crontab
# Reload crontab
killall -HUP cron # Debian/Ubuntu
# or
killall -HUP crond # CentOS/RHEL
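After adding the entry, you can run the alert processor once by hand to confirm it starts without PHP errors (adjust the path to match your GUI installation):
# Run the alert/report processor manually
php /var/www/html/php/run.php cron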
Configure Alerts
Email alerts can trigger on SIP protocol events or RTP QoS metrics. Access alerts configuration via GUI > Alerts.

Alert Types
RTP Alerts
RTP alerts trigger based on voice quality metrics:
- MOS (Mean Opinion Score) - below threshold
- Packet loss - percentage exceeded
- Jitter - variation exceeded
- Delay (PDV) - latency exceeded
- One-way calls - answered but one RTP stream missing
- Missing RTP - answered but both RTP streams missing
Configure alerts to trigger when:
- Number of incidents exceeds a set value, OR
- Percentage of CDRs exceeds a threshold

SIP Response Alerts
SIP response alerts trigger based on SIP response codes:
- Empty response field: Matches all call attempts per configured filters
- Response code 0: Matches unreplied INVITE requests (no response received)
- Specific codes: Match exact codes like 404, 503, etc.

Detecting 408 Request Timeout Failures
A 408 Request Timeout response occurs when the caller sends multiple INVITE retransmissions and receives no final response. This is useful for alerting on calls that time out after the UAS (User Agent Server) sends a provisional response like 100 Trying but then fails to send any further responses.
Use Cases:
- Detect failing PBX or SBC (Session Border Controller) instances that accept calls but stop processing
- Monitor network failures where SIP messages stop flowing after initial dialog establishment
- Identify servers that become unresponsive mid-call setup
Configuration:
1. Navigate to GUI > Alerts
2. Create new alert with type SIP Response
3. Set Response code to 408
4. Optionally add Common Filters (IP addresses, numbers) to narrow scope
5. Save the alert
Understanding the Difference Between Response Code 0 and 408:
- Response code 0: Matches calls that received absolutely no response (not even a 100 Trying). These are network or reachability issues.
- Response code 408: Matches calls that received at least one provisional response (like 100 Trying) but eventually timed out. These indicate a server or application layer problem where the UAS stopped responding after initial acknowledgment.
Note: When a call times out with a 408 response, the CDR stores 408 as the Last SIP Response. Alerting on 408 will catch all call setup timeouts, including those where a 100 Trying was initially received.
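Before creating the alert, you may want to confirm that such calls actually appear in your CDR data. A minimal sketch, assuming the default database name voipmonitor, the standard cdr columns lastSIPresponseNum and calldate, and MySQL credentials available to the mysql client (e.g., via ~/.my.cnf):
# Count calls from the last 24 hours whose last SIP response was 408
mysql voipmonitor -e "SELECT COUNT(*) FROM cdr WHERE lastSIPresponseNum = 408 AND calldate > NOW() - INTERVAL 1 DAY;"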
Sensors Alerts
Sensors alerts monitor the health of VoIPmonitor probes and sniffer instances. This is the most reliable method to check if remote sensors are online and actively monitoring traffic.
Unlike simple network port monitoring (which may show a port as open even if the process is frozen or unresponsive), sensors alerts verify that the sensor instance is actively communicating with the VoIPmonitor GUI server.
Setup:
1. Configure sensors in Settings > Sensors
2. Create a sensors alert to be notified when a probe goes offline or becomes unresponsive
SIP REGISTER RRD Beta Alerts
The SIP REGISTER RRD beta alert type monitors SIP REGISTER response times and alerts when REGISTER packets do not receive a response within a specified threshold (in milliseconds). This is useful for detecting network latency issues, packet loss, or failing switches that cause SIP retransmissions.
This alert serves as an effective proxy to monitor for registration issues, as REGISTER retransmissions often indicate problems with network connectivity or unresponsive SIP servers.
Configuration:
1. Navigate to GUI > Alerts
2. Create a new alert with type SIP REGISTER RRD beta
3. Set the response time threshold in milliseconds (e.g., alert if REGISTER does not receive a response within 2000 ms)
4. Configure recipient email addresses
5. Save the alert configuration
The system monitors REGISTER packets and triggers an alert when responses exceed the configured threshold, indicating potential SIP registration failures or network issues.

Common Filters
All alert types support the following filters:
| Filter | Description |
|---|---|
| IP/Number Group | Apply alert to predefined groups (from Groups menu) |
| IP Addresses | Individual IPs or ranges (one per line) |
| Numbers | Individual phone numbers or prefixes (one per line) |
| Email Group | Send alerts to group-defined email addresses |
| Emails | Individual recipient emails (one per line) |
| External script | Path to custom script to execute when alert triggers (see below) |

Using External Scripts for Alert Actions
Beyond email notifications, alerts can execute custom scripts when triggered. This enables integration with third-party systems (webhooks, Datadog, Slack, custom monitoring tools) without sending emails.
Configuration
1. Navigate to GUI > Alerts
2. Create or edit an alert (RTP, SIP Response, Sensors, etc.)
3. In the configuration form, locate the External script field
4. Enter the full path to your custom script (e.g., /usr/local/bin/alert-webhook.sh)
5. Save the alert configuration
The script will execute immediately when the alert triggers.
Script Arguments
The custom script receives alert data as command-line arguments. The format is identical to anti-fraud scripts (see Anti-Fraud Rules):
| Argument | Description |
|---|---|
| $1 | Alert ID (numeric identifier) |
| $2 | Alert name/type |
| $3 | Unix timestamp of alert trigger |
| $4 | JSON-encoded alert data |
Alert Data Structure
The JSON in the fourth argument contains CDR IDs affected by the alert:
{
  "cdr": [12345, 12346, 12347],
  "alert_type": "MOS below threshold",
  "threshold": 3.5,
  "actual_value": 2.8
}
Use the cdr array to query additional information from the database if needed.
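As an illustration, the sketch below extracts the CDR IDs from the JSON and looks up basic call details. It assumes jq and the mysql client are installed, the default database name voipmonitor, and the standard cdr columns ID, calldate, caller, and called; treat it as a starting point, not a definitive implementation:
#!/bin/bash
# /usr/local/bin/cdr-lookup.sh - illustrative sketch
ALERT_DATA="$4"
# Turn the JSON "cdr" array into a comma-separated ID list, e.g. 12345,12346,12347
CDR_IDS=$(echo "$ALERT_DATA" | jq -r '.cdr | map(tostring) | join(",")')
# Look up basic details for the affected calls
if [ -n "$CDR_IDS" ]; then
    mysql voipmonitor -e "SELECT ID, calldate, caller, called FROM cdr WHERE ID IN ($CDR_IDS);"
fi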
Example: Send Webhook to Datadog
This bash script sends an alert notification to a Datadog webhook API:
#!/bin/bash
# /usr/local/bin/datadog-alert.sh
# Configuration
WEBHOOK_URL="https://webhook.site/your-custom-url"
DATADOG_API_KEY="your-datadog-api-key"
# Parse arguments
ALERT_ID="$1"
ALERT_NAME="$2"
TIMESTAMP="$3"
ALERT_DATA="$4"
# Convert Unix timestamp to readable date
DATE=$(date -d "@$TIMESTAMP" '+%Y-%m-%d %H:%M:%S')
# Extract relevant data from JSON
cdrCount=$(echo "$ALERT_DATA" | jq -r '.cdr | length')
threshold=$(echo "$ALERT_DATA" | jq -r '.threshold // "null"')      # fall back to null so the payload stays valid JSON
actualValue=$(echo "$ALERT_DATA" | jq -r '.actual_value // "null"')
# Build webhook payload
PAYLOAD=$(cat <<EOF
{
"alert_id": "$ALERT_ID",
"alert_name": "$ALERT_NAME",
"triggered_at": "$DATE",
"cdr_count": $cdrCount,
"threshold": $threshold,
"actual_value": $actualValue,
"source": "voipmonitor"
}
EOF
)
# Send webhook
curl -X POST "$WEBHOOK_URL" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $DATADOG_API_KEY" \
-d "$PAYLOAD"
Make the script executable:
chmod +x /usr/local/bin/datadog-alert.sh
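You can then test the script in isolation by calling it with sample arguments in the same format the alert processor uses (the values below are dummies; jq must be installed for the parsing to work):
# Simulate an alert trigger with dummy data
/usr/local/bin/datadog-alert.sh 42 "MOS below threshold" "$(date +%s)" '{"cdr":[12345],"threshold":3.5,"actual_value":2.8}'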
Example: Send Slack Notification
#!/bin/bash
# /usr/local/bin/slack-alert.sh
SLACK_WEBHOOK="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
ALERT_NAME="$2"
ALERT_DATA="$4"
cdrCount=$(echo "$ALERT_DATA" | jq -r '.cdr | length')
curl -X POST "$SLACK_WEBHOOK" \
-H "Content-Type: application/json" \
-d '{
"text": "VoIPmonitor Alert: '"$ALERT_NAME"'",
"attachments": [{
"color": "danger",
"fields": [
{"title": "CDRs affected", "value": "'"$cdrCount"'"}
]
}]
}'
Example: Store Alert Details in File
#!/bin/bash
# /usr/local/bin/log-alert.sh
LOG_DIR="/var/log/voipmonitor-alerts"
mkdir -p "$LOG_DIR"
# Log all arguments for debugging
echo "=== Alert triggered at $(date) ===" >> "$LOG_DIR/alerts.log"
echo "Alert ID: $1" >> "$LOG_DIR/alerts.log"
echo "Alert name: $2" >> "$LOG_DIR/alerts.log"
echo "Timestamp: $3" >> "$LOG_DIR/alerts.log"
echo "Data: $4" >> "$LOG_DIR/alerts.log"
echo "" >> "$LOG_DIR/alerts.log"
Important Notes
- Script execution time: The alert processor waits for the script to complete. Keep scripts fast (under 5 seconds) or run the heavy work in the background if processing takes longer (see the wrapper sketch below).
- Script permissions: Ensure the script is executable by the web server user (typically www-data or apache).
- Error handling: Script failures are logged but do not prevent email alerts from being sent.
- Querying CDRs: The script receives CDR IDs in the JSON data. Query the cdr table to retrieve detailed information such as caller numbers, call duration, etc.
- Security: Validate input before using it in commands or database queries to prevent injection attacks.
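If your processing takes longer than a few seconds, one pattern is to make the configured script a thin dispatcher that hands its arguments to a background worker and returns immediately. A minimal sketch (the worker path /usr/local/bin/alert-worker.sh is a placeholder for your own long-running script):
#!/bin/bash
# /usr/local/bin/alert-dispatch.sh - returns immediately so the alert processor is not blocked
# Pass all alert arguments to the worker, detach it, and discard its output
nohup /usr/local/bin/alert-worker.sh "$@" >/dev/null 2>&1 &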
Sent Alerts
All triggered alerts are saved in history and can be viewed via GUI > Alerts > Sent Alerts. The content matches what was sent via email.

Parameters Table
The parameters table shows QoS metrics with problematic values highlighted for quick identification.

CDR Records Table
The CDR records table lists all calls that triggered the alert. Each row includes alert flags indicating which thresholds were exceeded:
- (M) - MOS below threshold
- (J) - Jitter exceeded
- (P) - Packet loss exceeded
- (D) - Delay exceeded
Anti-Fraud Alerts
VoIPmonitor includes specialized anti-fraud alert rules for detecting attacks and fraudulent activity. These include:
- Realtime concurrent calls monitoring
- SIP REGISTER flood/attack detection
- SIP PACKETS flood detection
- Country/Continent destination alerts
- CDR/REGISTER country change detection
For detailed configuration of anti-fraud rules and custom action scripts, see Anti-Fraud Rules.
Alerts Based on Custom Reports
In addition to native alert types (RTP, SIP response, Sensors), VoIPmonitor supports generating alerts from custom reports. This workflow enables alerts based on criteria not available in native alert types, such as SIP header values captured via CDR custom headers.
Workflow Overview
1. Capture custom SIP headers in the database
2. Create a custom report filtered by the custom header values
3. Generate an alert from that report
4. Configure alert email options (e.g., limit email size)
Example: Alert on SIP Max-Forwards Header Value
This example shows how to receive an alert when the SIP Max-Forwards header value drops below 15.
Step 1: Configure Sniffer Capture
Add the header to your /etc/voipmonitor.conf configuration file:
# Capture Max-Forwards header
custom_headers = Max-Forwards
Restart the sniffer to apply changes:
service voipmonitor restart
Step 2: Configure Custom Header in GUI
1. Navigate to GUI > Settings > CDR Custom Headers
2. Select Max-Forwards from the available headers
3. Enable Show as Column to display it in CDR views
4. Save configuration
Step 3: Create Custom Report
1. Navigate to GUI > CDR Custom Headers or use the Report Generator
2. Create a filter for calls where Max-Forwards is less than 15
3. Since custom headers store string values, use a filter expression that matches the desired values:
15 14 13 12 11 10 0_ _
Include additional space-separated values or use NULL to match other ranges as needed.
4. Run the report to verify it captures the expected calls
Step 4: Generate Alert from Report
You can create an alert based on this custom report using the Daily Reports feature:
1. Navigate to GUI > Reports > Configure Daily Reports
2. Click Add Daily Report
3. Configure the filter to target the custom header criteria (e.g., Max-Forwards < 15)
4. Set the schedule (e.g., run every hour)
5. Save the daily report configuration
Step 5: Limit Alert Email Size (Optional)
If the custom report generates many matching calls, the alert email can become large. To limit the email size:
1. Edit the daily report
2. Go to the Basic Data tab
3. Set the max-lines in body option to the desired limit (e.g., 100 lines)
Additional Use Cases
This workflow can be used for various custom monitoring scenarios:
- SIP headers beyond standard SIP response codes - Monitor any custom SIP header
- Complex filtering logic - Create reports based on multiple custom header filters
- Threshold monitoring for string fields - When numeric comparison is not available, use string matching
For more information on configuring custom headers, see CDR Custom Headers.
Troubleshooting Email Alerts
If email alerts are not being sent, the issue is typically with the Mail Transfer Agent (MTA) rather than VoIPmonitor.
Step 1: Test Email Delivery from Command Line
Before investigating complex issues, verify your server can send emails:
# Test using the 'mail' command
echo "Test email body" | mail -s "Test Subject" your.email@example.com
If this fails, the issue is with your MTA configuration, not VoIPmonitor.
Step 2: Check MTA Service Status
Ensure the MTA service is running:
# For Postfix (most common)
sudo systemctl status postfix
# For Exim (Debian default)
sudo systemctl status exim4
# For Sendmail
sudo systemctl status sendmail
If the service is not running or not installed, install and configure it according to your Linux distribution's documentation.
Step 3: Check Mail Logs
Examine the MTA logs for specific error messages:
# Debian/Ubuntu
tail -f /var/log/mail.log
# RHEL/CentOS/AlmaLinux/Rocky
tail -f /var/log/maillog
Common errors and their meanings:
| Error Message | Cause | Solution |
|---|---|---|
| Connection refused | MTA not running or firewall blocking | Start MTA service, check firewall rules |
| Relay access denied | SMTP relay misconfiguration | See "Configuring SMTP Relay" below |
| Authentication failed | Incorrect credentials | Verify credentials in sasl_passwd |
| Host or domain name lookup failed | DNS issues | Check /etc/resolv.conf |
| Greylisted | Temporary rejection | Wait and retry, or whitelist sender |
Step 4: Check Mail Queue
Emails may be stuck in the queue if delivery is failing:
# View the mail queue
mailq
# Force immediate delivery attempt
postqueue -f
Deferred or failed messages in the queue contain error details explaining why delivery failed.
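On Postfix you can inspect a single deferred message, including its delivery error, by passing its queue ID from the mailq output to postcat:
# Show queue metadata, headers and body for one message (replace QUEUE_ID with an ID from mailq)
postcat -vq QUEUE_ID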
Configuring SMTP Relay
If you encounter "Relay access denied" errors, your Postfix server cannot send emails through your external SMTP server. There are two solutions:
Solution 1: Configure External SMTP to Permit Relaying (Recommended for Trusted Networks)
If the VoIPmonitor server is in a trusted network, configure your external SMTP server to permit relaying from the VoIPmonitor server's IP address:
1. Access your external SMTP server configuration
2. Add the VoIPmonitor server's IP address to the allowed relay hosts (mynetworks)
3. Save configuration and reload: postfix reload
Solution 2: Configure Postfix SMTP Authentication (Recommended for Remote SMTP)
If using an external SMTP server that requires authentication, configure Postfix to authenticate using SASL:
1. Install SASL authentication packages:
# Debian/Ubuntu
sudo apt-get install libsasl2-modules
# RHEL/CentOS/AlmaLinux/Rocky
sudo yum install cyrus-sasl-plain
2. Configure Postfix to use the external SMTP relay:
Edit /etc/postfix/main.cf:
# Use external SMTP as relay host
relayhost = smtp.yourprovider.com:587
# Enable SASL authentication
smtp_sasl_auth_enable = yes
# Use SASL password file
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
# Disable anonymous authentication (use only SASL)
smtp_sasl_security_options = noanonymous
# Enable TLS (recommended)
smtp_tls_security_level = encrypt
3. Create the SASL password file with your SMTP credentials:
# Create the file (your SMTP username and password)
echo "[smtp.yourprovider.com]:587 username:password" | sudo tee /etc/postfix/sasl_passwd
# Secure the file (rw root only)
sudo chmod 600 /etc/postfix/sasl_passwd
# Create the Postfix hash database
sudo postmap /etc/postfix/sasl_passwd
# Reload Postfix
sudo systemctl reload postfix
4. Test email delivery:
echo "Test email" | mail -s "SMTP Relay Test" your.email@example.com
If successful, emails should be delivered through the authenticated SMTP relay.
Step 5: Verify Cronjob
Ensure the alert processing script runs every minute:
# Check the system crontab for the alert processing entry
grep run.php /etc/crontab
You should see:
* * * * * root php /var/www/html/php/run.php cron
If the line is missing, add it to /etc/crontab as described in "Setting Up the Cron Job" above, then reload cron:
killall -HUP cron    # Debian/Ubuntu (use crond on CentOS/RHEL)
Step 6: Verify Alert Configuration in GUI
After confirming the MTA works:
- Navigate to GUI > Alerts
- Verify alert conditions are enabled
- Check that recipient email addresses are valid
- Go to GUI > Alerts > Sent Alerts to see if alerts were triggered
Diagnosis:
- Entries in "Sent Alerts" but no emails received → MTA issue
- No entries in "Sent Alerts" → Check alert conditions or cronjob
Step 7: Test PHP mail() Function
Isolate the issue by testing PHP directly:
php -r "mail('your.email@example.com', 'Test from PHP', 'This is a test email');"
- If this works but VoIPmonitor alerts don't → Check GUI cronjob and alert configuration
- If this fails → MTA or PHP configuration issue
Troubleshooting Alerts Not Triggering
If alerts are not appearing in the Sent Alerts history at all, the problem is typically with the alert processor not evaluating alerts. This is different from MTA issues where alerts appear in history but emails are not sent.
Enable Detailed Alert Processing Logs
To debug why alerts are not being evaluated, enable detailed logging for the alert processor by adding the following line to your GUI configuration file:
# Edit the config file (adjust path based on your GUI installation)
nano ./config/system_configuration.php
Add this line at the end of the file:
<?php
define('CRON_LOG_FILE', '/tmp/alert.log');
?>
This enables logging that shows which alerts are being processed during each cron job run.
Increase Parallel Processing Threads
If you have many alerts or reports, the default number of parallel threads may cause timeout issues. Increase the parallel task limit:
# Edit the configuration file
nano ./config/configuration.php
Add this line at the end of the file:
<?php
define('CRON_PARALLEL_TASKS', 8);
?>
The value of 8 is recommended for high-load environments. Adjust based on your alert/report volume and server capacity.
Monitor Alert Processing Logs
After enabling logging, monitor the alert log file to see which alerts are being processed:
# Watch the log in real-time
tail -f /tmp/alert.log
The log shows entries like:
begin alert [alert_name]
end alert [alert_name]
Interpreting the logs:
- If you do not see your alert name in the logs → The alert processor is not evaluating it. Check your alert configuration, filters, and data availability.
- If you see the alert in logs but it does not trigger → The alert conditions are not being met. Check your thresholds, filter logic, and verify the CDR data matches your expectations.
- If logs are completely empty → The cron job may not be running or the GUI configuration files are not being loaded. Verify the cron job and file paths.
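A quick way to check whether a specific alert is being picked up at all is to grep the log for its name (replace "My alert name" with the exact name configured in the GUI):
# Count processing runs since logging was enabled
grep -c "begin alert" /tmp/alert.log
# Check whether your specific alert is evaluated
grep "My alert name" /tmp/alert.log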
Alert Not Appearing in Logs
If your alert does not appear in `/tmp/alert.log`:
1. Verify the cron job is running:
# Check the cron job exists
crontab -l
# Manually test the cron script to see errors
php ./php/run.php cron
2. Verify data exists in CDR:
# Check if the calls that should trigger the alert exist
# Navigate to GUI > CDR > Browse and filter for the timeframe
3. Check alert configuration:
- Verify alert is enabled
- Verify filter logic matches your data (IP addresses, numbers, groups)
- Verify thresholds are reasonable for the actual QoS metrics
- Verify GUI license is not locked (Check GUI > Settings > License)
See Also
- Anti-Fraud Rules - Detailed fraud detection configuration
- Reports - Daily reports and report generator
- Sniffer Troubleshooting - General troubleshooting
AI Summary for RAG
Summary: VoIPmonitor Alerts & Reports system for email notifications on QoS and SIP issues. Covers RTP alerts (MOS, jitter, packet loss), SIP response alerts (including detecting 408 Request Timeout from 100 Trying scenarios), sensors health monitoring, SIP REGISTER RRD beta alerts for monitoring registration response times, and creating alerts from custom reports based on CDR custom headers. Includes email troubleshooting for MTA configuration, detailed debugging for alerts not triggering using CRON_LOG_FILE and CRON_PARALLEL_TASKS, and external scripts for webhook integration (Datadog, Slack, third-party monitoring).
Keywords: alerts, email notifications, QoS, MOS, jitter, packet loss, SIP response, 408 Request Timeout, 100 Trying, response code 0, INVITE retransmissions, sensors monitoring, SIP REGISTER RRD beta, REGISTER retransmissions, registration monitoring, crontab, MTA, Postfix, Exim, troubleshooting, custom headers, custom reports, Max-Forwards, daily reports, CRON_LOG_FILE, CRON_PARALLEL_TASKS, alert not triggering, /tmp/alert.log, begin alert, end alert, configuration.php, system_configuration.php, external scripts, webhooks, Datadog, Slack, command-line arguments, JSON data, CDR array
Key Questions:
- How do I set up email alerts in VoIPmonitor?
- What types of alerts are available (RTP, SIP, Sensors, REGISTER RRD beta)?
- How do I detect calls with 408 Request Timeout after 100 Trying?
- How do I create an alert for calls where 100 Trying is sent but no further response?
- What is the difference between response code 0 and 408?
- How do I monitor and alert on SIP REGISTER retransmissions?
- How do I detect registration response time issues?
- How can I use SIP REGISTER RRD beta alert for detecting switch problems?
- How do I configure crontab for alert processing?
- How do I monitor remote sensor health?
- Why are email alerts not being sent?
- How do I troubleshoot MTA email issues?
- How can I create alerts based on SIP headers like Max-Forwards?
- How do I use CDR custom headers for custom reports?
- How do I limit the size of alert emails from custom reports?
- Why are alerts not triggering or appearing in Sent Alerts?
- How do I enable detailed alert processing logs using CRON_LOG_FILE?
- How do I increase parallel alert processing threads with CRON_PARALLEL_TASKS?
- How do I monitor /tmp/alert.log for alert processing debug information?
- How do I configure external scripts for alerts?
- How do I send webhooks from VoIPmonitor alerts to Datadog?
- How do I send alerts to Slack from VoIPmonitor?
- What command-line arguments are passed to alert scripts?
- What JSON data structure is provided to external scripts?