{{DISPLAYTITLE:SQL Queue Growing During Peak Time}}
[[Category:Troubleshooting]]
[[Category:Database]]
[[Category:Performance]]

= SQL Queue Growing During Peak Time =

When the SQL queue (SQLq/SQLf) grows during peak traffic, the database cannot keep up with the CDR insertion rate. This causes delayed CDR visibility in the GUI and can lead to memory exhaustion.

== Quick Diagnosis ==

<kroki lang="mermaid">
%%{init: {'flowchart': {'nodeSpacing': 15, 'rankSpacing': 30}}}%%
flowchart TD
    A[SQLq/SQLf growing] --> B{mysqld CPU 100%?}
    B -->|Yes| C[CPU bottleneck]
    B -->|No| D{High iowait?}
    D -->|Yes| E[I/O bottleneck]
    D -->|No| F[RAM/Buffer bottleneck]
    C --> G[Upgrade CPU or optimize queries]
    E --> H[Upgrade to SSD/NVMe]
    F --> I[Increase innodb_buffer_pool_size]
</kroki>
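
To answer the flowchart's two questions from the shell, the following quick checks can help (a sketch assuming the standard <code>sysstat</code> package, which provides <code>pidstat</code> and <code>iostat</code>, is installed):

<syntaxhighlight lang="bash">
# Is mysqld saturating the CPU? (sustained %CPU near 100 per core = CPU bottleneck)
pidstat -u -p "$(pidof mysqld)" 5 3

# Is the system waiting on disk? (sustained, high %iowait = I/O bottleneck)
iostat -c 5 3

# If neither is saturated, suspect an undersized InnoDB buffer pool (RAM bottleneck)
</syntaxhighlight>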

'''Monitor the queue:'''
<syntaxhighlight lang="bash">
journalctl -u voipmonitor -f | grep SQLf
</syntaxhighlight>

'''Check RRD graphs:''' Settings > Sensors > click graph icon > select peak time period. Look for growing "SQL Cache" or "SQL Cache Files".

{{Note|SQLf value should normally stay below 10. If it consistently exceeds 10 or grows during peak time, apply the mitigations below.}}

== Storage Speed is Critical ==

'''The most common cause of SQL queue growth is slow storage.''' MySQL write performance depends heavily on disk I/O speed.

{| class="wikitable"
! Storage Type !! Typical IOPS !! CDR Write Capacity !! Recommendation
|-
| HDD (7200 RPM) || ~150 || 10-50 CDR/s || '''Not suitable''' for production
|-
| SATA SSD || ~50,000 || 200-500 CDR/s || Suitable for small deployments
|-
| NVMe SSD || ~500,000+ || 1000+ CDR/s || '''Recommended''' for production
|}

'''Check your current I/O performance:'''
<syntaxhighlight lang="bash">
# Check disk utilization during peak time
iostat -x 1 5 | grep -E "Device|sda|nvme"

# If %util stays above 80%, storage is the bottleneck
</syntaxhighlight>
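
To measure raw random-write IOPS and compare against the table above, a benchmark with the standard <code>fio</code> tool is useful (a sketch; it assumes <code>fio</code> is installed, writes a temporary 1 GB test file, and must not be pointed at the live MySQL datadir):

<syntaxhighlight lang="bash">
# 4k random-write benchmark against the disk under test
# (beware: /tmp is often tmpfs/RAM - use a directory on the actual disk)
mkdir -p /var/tmp/fio-test
fio --name=randwrite --directory=/var/tmp/fio-test --rw=randwrite --bs=4k \
    --size=1G --iodepth=32 --ioengine=libaio --runtime=60 --time_based \
    --group_reporting

# Remove the temporary test files afterwards
rm -rf /var/tmp/fio-test
</syntaxhighlight>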

== VoIPmonitor Tuning ==

These parameters in <code>/etc/voipmonitor.conf</code> control how the sniffer writes to the database:

{| class="wikitable"
! Parameter !! Default !! Recommended !! Description
|-
| <code>mysqlstore_max_threads_cdr</code> || 1 || 4-8 || Number of parallel threads writing CDRs to database
|-
| <code>mysqlstore_concat_limit</code> || 400 || 1000-2000 || CDRs batched into single INSERT statement
|-
| <code>query_cache</code> || no || yes || Write queries to disk when queue grows (prevents OOM)
|}

'''Example configuration:'''
<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf
mysqlstore_max_threads_cdr = 8
mysqlstore_concat_limit = 2000
query_cache = yes
</syntaxhighlight>

Restart after changes: <code>systemctl restart voipmonitor</code>
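
With <code>query_cache = yes</code>, queries that cannot be written immediately are buffered in <code>qoq*</code> files in the spool directory (default <code>/var/spool/voipmonitor</code>). Watching their count shows whether the disk queue is draining or still growing:

<syntaxhighlight lang="bash">
# Count queued query files; the number should shrink as the database catches up
watch -n 10 'ls /var/spool/voipmonitor/qoq* 2>/dev/null | wc -l'
</syntaxhighlight>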

== MySQL Tuning ==

=== Essential Settings ===

These settings are safe for most deployments and significantly improve write performance:

<syntaxhighlight lang="ini">
# /etc/mysql/mysql.conf.d/mysqld.cnf

# Buffer pool - use 50-70% of available RAM
innodb_buffer_pool_size = 8G

# Faster writes - flush log once per second instead of every transaction
# Small risk: up to 1 second of data loss on crash
innodb_flush_log_at_trx_commit = 2

# I/O threads - adjust to your CPU core count
innodb_read_io_threads = 4
innodb_write_io_threads = 4
innodb_purge_threads = 2

# Skip DNS lookups for faster connections
skip-name-resolve

# Larger log files = fewer checkpoints = better performance
innodb_log_file_size = 1G
innodb_log_buffer_size = 256M
</syntaxhighlight>
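
To turn the "50-70% of RAM" guideline into a concrete value for your server, a one-liner like this (reading total memory from <code>/proc/meminfo</code>) gives a reasonable starting point:

<syntaxhighlight lang="bash">
# Print 60% of total RAM in gigabytes as a starting innodb_buffer_pool_size
awk '/MemTotal/ {printf "innodb_buffer_pool_size = %.0fG\n", $2 * 0.6 / 1024 / 1024}' /proc/meminfo
</syntaxhighlight>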

=== SSD/NVMe Specific Settings ===

If using SSD or NVMe storage, add these optimizations:

<syntaxhighlight lang="ini">
# Disable HDD optimization (not needed for SSD)
innodb_flush_neighbors = 0

# Tell InnoDB about fast storage (adjust to your disk specs)
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000

# Bypass OS cache - better for dedicated DB servers
innodb_flush_method = O_DIRECT
</syntaxhighlight>

Restart MySQL after changes: <code>systemctl restart mysql</code>
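
On SSD/NVMe it is also worth checking the kernel I/O scheduler: a simple multi-queue scheduler (<code>none</code> or <code>mq-deadline</code>) generally suits database workloads better than fairness-oriented schedulers. A quick check (device names are examples, adjust to your system):

<syntaxhighlight lang="bash">
# The scheduler shown in [brackets] is the active one
cat /sys/block/nvme0n1/queue/scheduler
cat /sys/block/sda/queue/scheduler

# Switch at runtime if needed (not persistent across reboots)
echo none > /sys/block/nvme0n1/queue/scheduler
</syntaxhighlight>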

=== Verify Settings ===

<syntaxhighlight lang="bash">
# Check current buffer pool size
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# Check if buffer pool is fully utilized
mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';"
</syntaxhighlight>
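
Reading the output: on a busy server <code>Innodb_buffer_pool_pages_free</code> normally sits near zero; if it stays high even during peak traffic, RAM is not the bottleneck, while a saturated pool combined with a growing queue suggests the pool (or the server's RAM) is undersized. A compact check:

<syntaxhighlight lang="bash">
# Free vs. total buffer pool pages in one call
mysql -e "SHOW STATUS WHERE Variable_name IN
          ('Innodb_buffer_pool_pages_free','Innodb_buffer_pool_pages_total');"
</syntaxhighlight>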

== Prevention ==

* '''Schedule heavy operations off-peak:''' Configure alerts and reports to run at night
* '''Separate components:''' Use dedicated database server for high-traffic deployments
* '''Monitor trends:''' Set up alerts for SQLq growth before it becomes critical (see the cron sketch below)
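
A minimal cron sketch for the last point, assuming the sensor's status line (which includes the SQLf counter) is visible via <code>journalctl -u voipmonitor</code>; the threshold, the alert address, and the exact log-parsing pattern are placeholders to adapt to your log format:

<syntaxhighlight lang="bash">
#!/bin/bash
# Run every 5 minutes from cron; alerts when the latest SQLf reading exceeds the threshold
THRESHOLD=10
SQLF=$(journalctl -u voipmonitor --since "5 minutes ago" | grep -oP 'SQLf\[\K[0-9]+' | tail -1)
if [ -n "$SQLF" ] && [ "$SQLF" -gt "$THRESHOLD" ]; then
    echo "SQLf is $SQLF (threshold $THRESHOLD)" | mail -s "VoIPmonitor SQL queue alert" admin@example.com
fi
</syntaxhighlight>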

== Extreme High-Volume Deployments ==

For deployments processing '''hundreds or thousands of CDRs per second''' (10,000+ concurrent calls), standard tuning is not sufficient. These scenarios require:

* Disabling AUTO_INCREMENT and letting the sniffer generate IDs
* Hourly table partitioning instead of daily
* Batch sizes of 10,000+ CDRs
* Aggressive MySQL settings that trade durability for performance

See [[High-Performance_VoIPmonitor_and_MySQL_Setup_Manual]] for detailed configuration.

== See Also ==

* [[Database_troubleshooting]] - Comprehensive database troubleshooting guide
* [[Emergency_procedures]] - RRD graph diagnostics for bottleneck identification
* [[Scaling]] - Performance tuning and scaling guidelines
* [[High-Performance_VoIPmonitor_and_MySQL_Setup_Manual]] - Configuration for 1000+ CDR/s deployments

== AI Summary for RAG ==

'''Summary:''' Troubleshooting guide for SQL queue (SQLq/SQLf) growth during peak traffic periods. The queue grows when the database cannot keep up with the CDR insertion rate. Diagnosis involves checking mysqld CPU usage, iowait, and RRD graphs in the GUI (Settings > Sensors > graph icon). Immediate mitigations: enable <code>query_cache=yes</code> to prevent OOM, <code>quick_save_cdr=yes</code> to reduce GUI delay, <code>mysqlstore_max_threads_cdr=8</code> for parallel writes. Root causes are typically an I/O bottleneck (upgrade HDD to SSD/NVMe), a RAM bottleneck (increase <code>innodb_buffer_pool_size</code> to 50-70% of RAM), or a CPU bottleneck (hardware upgrade needed). Prevention includes scheduling heavy operations off-peak and using dedicated database servers.

'''Keywords:''' SQL queue, SQLq, SQLf, peak time, database bottleneck, query_cache, quick_save_cdr, mysqlstore_max_threads_cdr, innodb_buffer_pool_size, I/O bottleneck, SSD upgrade, OOM, CDR delay, database performance, peak traffic

'''Key Questions:'''
* Why is the SQL queue growing during peak hours?
* How do I monitor SQL queue size in VoIPmonitor?
* What causes CDR delays in the GUI?
* How do I prevent out-of-memory errors when the SQL queue grows?
* Should I upgrade to SSD for the VoIPmonitor database?
* What is the recommended innodb_buffer_pool_size setting?
* How do I speed up CDR insertion during high traffic?