{{DISPLAYTITLE:SQL Queue Growing During Peak Time}}
[[Category:Troubleshooting]]
[[Category:Database]]
[[Category:Performance]]

= SQL Queue Growing During Peak Time =

When the SQL queue (SQLq/SQLf) grows during peak traffic, the database cannot keep up with the CDR insertion rate. This causes delayed CDR visibility in the GUI and can lead to memory exhaustion.

== Symptoms of Database Delays ==

The Active Calls view in the GUI shows the start times of calls (INVITEs) obtained in real time from the VoIPmonitor sniffer service, whereas the CDR view shows the CDRs stored in the database after each call ends. When the database cannot keep up with the CDR insertion rate, you may experience:

* '''Slow CDR appearance in the GUI''' - new calls take minutes to appear after they end
* '''"Crontab log is too old" warning''' - the cron job runs every 5-10 minutes or more instead of every minute because of database overload
* '''Lag between call end and reporting''' - daily reports and alerts process outdated data

To diagnose cron-related delays specifically, see [[Alerts#Crontab_Log_is_Too_Old_Warning_-_Database_Performance_Issues|"Crontab log is too old" troubleshooting]].

== Quick Diagnosis ==
<kroki lang="mermaid">
%%{init: {'flowchart': {'nodeSpacing': 15, 'rankSpacing': 30}}}%%
flowchart TD
    A[SQLq/SQLf growing] --> B{mysqld CPU 100%?}
    B -->|Yes| C[CPU bottleneck]
    B -->|No| D{High iowait?}
    D -->|Yes| E[I/O bottleneck]
    D -->|No| F[RAM/Buffer bottleneck]


== SQLq/SQLf ==
    C --> G[Upgrade CPU or optimize queries]
In the service status (expanded status, status line) under GUI -> Settings -> Sensors : status, the SQLq/SQLf values represent the size of the queue before the CDRs are pushed to the database. When SQLq/SQLf is high, it usually indicates that the database is unable to process requests in a timely manner, causing them to queue.
    E --> H[Upgrade to SSD/NVMe]
[[Logging#SQLq/SQLf]]
    F --> I[Increase innodb_buffer_pool_size]
</kroki>
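
To check the two branch conditions from the flowchart, a couple of standard commands are enough. This is an illustrative sketch; <code>mpstat</code> comes from the sysstat package and may need to be installed:

<syntaxhighlight lang="bash">
# CPU used by mysqld (first branch: is the database CPU-bound?)
top -b -n 1 | grep -i mysqld

# System-wide iowait - watch the %iowait column (second branch: is storage the bottleneck?)
mpstat 1 5
</syntaxhighlight>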


'''Monitor the queue:'''
<syntaxhighlight lang="bash">
journalctl -u voipmonitor -f | grep SQLf
</syntaxhighlight>

'''Check RRD graphs:''' Settings > Sensors > click the graph icon > select the peak time period. Look for a growing "SQL Cache" or "SQL Cache Files" value.

{{Note|The SQLf value should normally stay below 10. If it consistently exceeds 10 or keeps growing during peak time, apply the mitigations below.}}

Also make sure the database configuration itself is not causing I/O overhead: the MySQL settings should follow the recommendations from the [https://www.voipmonitor.org/doc/Scaling#SSDs Scaling] documentation (especially <code>innodb_flush_log_at_trx_commit</code> and <code>innodb_buffer_pool_size</code>); see the MySQL Tuning section below.

== Quick CDR Visibility (Reduce Delay) ==
If CDRs take too long to appear in the GUI after calls end (typically a 30-60 second delay), you can reduce the delay by enabling quick CDR save mode in <code>/etc/voipmonitor.conf</code>:

 quick_save_cdr = yes

This speeds up the visibility of calls in the GUI by reducing the buffering delay before CDRs are written to the database.

Available options:
* <code>no</code> (default) - 10 second delay, recommended for most deployments
* <code>yes</code> - 3 second delay, moderate performance impact
* <code>quick</code> - 1 second delay, high performance impact

{{Warning|
'''Performance Impact:''' Enabling <code>quick_save_cdr</code> increases CPU and I/O load on the database server by forcing more frequent CDR writes. Only use it if near-real-time CDR visibility is absolutely required.
}}

See [[Sniffer_configuration#Performance_.26_Schema|Sniffer Configuration]] for more details on this parameter.

== Storage Speed is Critical ==

'''The most common cause of SQL queue growth is slow storage.''' MySQL write performance depends heavily on disk I/O speed.

{| class="wikitable"
! Storage Type !! Typical IOPS !! CDR Write Capacity !! Recommendation
|-
| HDD (7200 RPM) || ~150 || 10-50 CDR/s || '''Not suitable''' for production
|-
| SATA SSD || ~50,000 || 200-500 CDR/s || Suitable for small deployments
|-
| NVMe SSD || ~500,000+ || 1000+ CDR/s || '''Recommended''' for production
|}

'''Check your current I/O performance:'''
<syntaxhighlight lang="bash">
# Check disk utilization during peak time
iostat -x 1 5 | grep -E "Device|sda|nvme"

# If %util stays above 80%, storage is the bottleneck
</syntaxhighlight>
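
If you want to measure raw random-write performance directly (for example, before and after a storage upgrade), a short synthetic test with <code>fio</code> can help. This is an illustrative benchmark, not an official VoIPmonitor procedure; the scratch directory below is only an example path on the database volume - never run it against the live MySQL datadir, and avoid running it during peak traffic:

<syntaxhighlight lang="bash">
# Create a scratch directory on the same filesystem as the MySQL datadir (example path)
mkdir -p /var/lib/mysql-benchmark

# 60-second random-write test on a 1 GB scratch file (requires the fio package)
fio --name=db-write-test --filename=/var/lib/mysql-benchmark/testfile \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=16k \
    --size=1G --runtime=60 --time_based --group_reporting

# Clean up the scratch data afterwards
rm -rf /var/lib/mysql-benchmark
</syntaxhighlight>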


== VoIPmonitor Tuning ==

These parameters in <code>/etc/voipmonitor.conf</code> control how the sniffer writes to the database:

{| class="wikitable"
! Parameter !! Default !! Recommended !! Description
|-
| <code>mysqlstore_max_threads_cdr</code> || 1 || 4-8 || Number of parallel threads writing CDRs to the database
|-
| <code>mysqlstore_concat_limit</code> || 400 || 1000-2000 || Number of CDRs batched into a single INSERT statement
|-
| <code>query_cache</code> || no || yes || Write queued queries to disk when the queue grows (prevents OOM)
|}

'''Example configuration:'''
<syntaxhighlight lang="ini">
# /etc/voipmonitor.conf
mysqlstore_max_threads_cdr = 8
mysqlstore_concat_limit = 2000
query_cache = yes
</syntaxhighlight>

Restart after changes: <code>systemctl restart voipmonitor</code>

=== Disk-Based Query Queue (Prevent OOM) ===

If the system is hitting Out Of Memory (OOM) conditions or the SQL queue keeps growing during peak traffic, make sure disk-based query queuing is enabled (<code>query_cache = yes</code>). This is a '''critical''' safety feature: SQL queries are first saved to disk-based queue files (qoq* files in the spool directory) before being sent to the database, which prevents OOM and data loss if the database is temporarily slow or unavailable.

{{Note|
<strong>Do not set <code>query_cache = no</code> to improve performance.</strong> Disabling the query cache reduces CDR delay by holding queries in RAM instead of writing them to qoq files, but it creates a severe risk: if the database becomes unreachable and memory fills up, the OOM killer will terminate the VoIPmonitor process and all queued CDRs will be lost. Keep <code>query_cache = yes</code> for data safety and use <code>quick_save_cdr</code> instead for faster CDR visibility.
}}

=== More Threads/Connections to the Database ===

You can increase the number of threads used for a particular type of database write. For CDRs, the sniffer uses:

 mysqlstore_max_threads_cdr = 8

If you process a high volume of SIP OPTIONS, SUBSCRIBE, or NOTIFY messages and see the <code>sip_msg</code> queue growing, increase the threads for those messages as well:

 mysqlstore_max_threads_sip_msg = 8

However, if the database is waiting for storage I/O, increasing the number of threads will not help.

=== Important: mysql_enable_set_id Limitation ===

If you use <code>mysql_enable_new_store = per_query</code> or have enabled <code>mysql_enable_set_id = yes</code>, note that this configuration '''limits your setup to a single sensor writing to the database'''. The option lets the central server generate CDR IDs instead of waiting for MySQL's AUTO_INCREMENT, which can improve batch insert performance, but it only works when one sensor is responsible for all database writes. If multiple sensors write to the same database, do not enable this option - it will cause ID conflicts and data corruption.

See [[Sniffer_configuration#Performance_.26_Schema|Sniffer Configuration]] for more details on these parameters.
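
To double-check which of these write-related options are currently set in the sniffer configuration, a simple grep is enough (an illustrative sketch, assuming the default config path):

<syntaxhighlight lang="bash">
# Show the write-tuning options currently active in the sniffer config
grep -E '^(mysqlstore_max_threads_cdr|mysqlstore_max_threads_sip_msg|mysqlstore_concat_limit|query_cache|quick_save_cdr)' /etc/voipmonitor.conf
</syntaxhighlight>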
== MySQL Tuning ==


=== Essential Settings ===

These settings are safe for most deployments and significantly improve write performance:

<syntaxhighlight lang="ini">
# /etc/mysql/mysql.conf.d/mysqld.cnf

# Buffer pool - use 50-70% of available RAM
innodb_buffer_pool_size = 8G

# Faster writes - flush the log once per second instead of at every transaction
# Small risk: up to 1 second of data loss on crash
innodb_flush_log_at_trx_commit = 2

# I/O threads - adjust to your CPU core count
innodb_read_io_threads = 4
innodb_write_io_threads = 4
innodb_purge_threads = 2

# Skip DNS lookups for faster connections
skip-name-resolve

# Larger log files = fewer checkpoints = better performance
innodb_log_file_size = 1G
innodb_log_buffer_size = 256M
</syntaxhighlight>

=== SSD/NVMe Specific Settings ===

If using SSD or NVMe storage, add these optimizations:

<syntaxhighlight lang="ini">
# Disable HDD optimization (not needed for SSD)
innodb_flush_neighbors = 0

# Tell InnoDB about fast storage (adjust to your disk specs)
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000

# Bypass OS cache - better for dedicated DB servers
innodb_flush_method = O_DIRECT
</syntaxhighlight>

Restart MySQL after changes: <code>systemctl restart mysql</code>

=== Verify Settings ===

<syntaxhighlight lang="bash">
# Check current buffer pool size
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# Check if the buffer pool is fully utilized
mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';"
</syntaxhighlight>

== Clearing File Queue Backlog (qoq* Files) ==

When the file queue grows and recent calls do not appear in the CDR view even though active calls are visible, you may have a backlog of qoq* files in the spool directory waiting to be processed.

=== What are qoq* Files? ===

The VoIPmonitor sniffer creates <code>qoq*</code> (queue) files in the spool directory (default: <code>/var/spool/voipmonitor</code>) to buffer database operations. These files contain queued SQL commands waiting to be inserted into the database.

When the database cannot keep up with the insertion rate:
* Active calls are visible in the GUI (retrieved from sensor memory in real time)
* Recent CDRs do not appear (they are waiting in qoq queue files)
* The SQLq metric stays high or continues growing

=== Solution 1: Wait for Queue Processing ===

The system automatically processes the qoq* files as the database catches up. Monitor the progress:

<syntaxhighlight lang="bash">
# Watch the SQLf parameter in the logs to monitor queue size
# SQLf should decrease over time as the files are processed

# Count the qoq files currently in the spool directory
ls -lh /var/spool/voipmonitor/qoq* 2>/dev/null | wc -l
</syntaxhighlight>

This is the preferred approach if losing older CDRs is not acceptable.

=== Solution 2: Delete qoq* Files (Emergency) ===

If immediate access to recent CDRs is critical and you can afford to lose the older CDRs, you can delete the qoq* files to clear the backlog:

<syntaxhighlight lang="bash">
# WARNING: This discards any CDRs still waiting in the queue files
# Stop the VoIPmonitor service first
systemctl stop voipmonitor

# Delete all qoq* files from the spool directory
rm -f /var/spool/voipmonitor/qoq*

# Start the VoIPmonitor service
systemctl start voipmonitor
</syntaxhighlight>

{{Warning|
'''Data Loss Warning:''' Deleting qoq* files permanently discards any CDRs that were still waiting in the queue files. Only use this method if:
* Immediate access to recent CDRs is critical
* Losing the older CDRs is acceptable
* You have exhausted all other options (configuration tuning, hardware upgrade)
}}

This clears the backlog and allows new CDRs to be written immediately, without waiting for the old queue to process.

== When Configuration Tuning Is Not Enough: Hardware Upgrade Required ==

If you have applied all the configuration optimizations above and the SQL queue continues to grow, or the database remains significantly behind the processing queue, the underlying issue may be insufficient hardware.

=== Signs That Hardware Upgrade Is Necessary ===

* '''CPU load is consistently at or near 100%''' on the database and GUI servers, even during normal traffic patterns
* '''Old or aging hardware''' - servers with CPUs that are more than a few years old may lack the performance of modern processors
* '''Configuration tuning provides only marginal improvement''' - after applying the MySQL and VoIPmonitor optimizations, the delay between the "Last CDR in processing queue" and "Last CDR in database" remains significant

=== Diagnosing Hardware Limitations ===

Monitor CPU usage on both the database and GUI servers:

<syntaxhighlight lang="bash">
# Check CPU load during peak traffic
top

# Or use sar for historical data
sar -u 1 10

# Check per-core CPU usage
mpstat -P ALL 1 5
</syntaxhighlight>

If the CPU load is consistently at or near 100% across multiple cores, the hardware cannot keep up with the traffic load. No amount of configuration tuning will solve this - the servers themselves need to be upgraded.
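
Before committing to a hardware upgrade, it can also help to confirm which process is actually consuming the CPU. A small sketch using <code>pidstat</code> from the sysstat package (assumed installed):

<syntaxhighlight lang="bash">
# Per-process CPU usage, sampled every 5 seconds, 3 samples
pidstat -u 5 3 | grep -E "mysqld|voipmonitor"
</syntaxhighlight>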
=== Hardware Upgrade Recommendations ===

Upgrade to a more modern CPU architecture with significantly better performance:

* '''CPU''' - modern AMD EPYC or Intel Xeon Gold/Silver processors with more cores
* '''RAM''' - sufficient memory for the database buffer pool (see [[Scaling#Memory_Configuration|Memory Configuration]])
* '''Storage''' - NVMe SSDs for the database to eliminate I/O bottlenecks

For hardware sizing examples, see the [[Hardware]] page, which includes real-world deployments for various call volumes.

=== Architecture Consideration: Merge GUI and Database ===

In some cases, merging the GUI and database roles onto a single, powerful new server can be more efficient than maintaining separate, underpowered servers. A single modern server with sufficient CPU cores and RAM can often handle both workloads better than multiple older machines.
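
Before migrating the data directory (next section), it is worth confirming that the new volume has enough free space for the current MySQL data. Illustrative commands; the mount point matches the example used below:

<syntaxhighlight lang="bash">
# Size of the current MySQL data directory
du -sh /var/lib/mysql

# Free space on the new storage
df -h /mnt/fast_storage
</syntaxhighlight>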
=== Migrating MySQL Data to Faster Storage ===

When upgrading from HDD or slow SATA SSDs to NVMe storage, you can migrate the MySQL data directory (<code>datadir</code>) while the system is running. No CDR loss is expected with this method, because the sniffer queues the CDRs internally and processes them once MySQL is back up.

'''Prerequisites:'''
* New SSD/NVMe storage installed and mounted
* Sufficient disk space on the new storage for the existing MySQL data
* The MySQL/MariaDB service can be stopped briefly for the migration

'''Migration Procedure:'''

<syntaxhighlight lang="bash">
# 1. Prepare the new storage (can be done while the system is running if the SSD is hot-pluggable)
# Mount the new filesystem. Example:
mount /dev/nvme0n1p1 /mnt/fast_storage
mkdir /mnt/fast_storage/mysql

# 2. Stop the MySQL service
systemctl stop mysql
# Or for MariaDB:
# systemctl stop mariadb

# 3. Copy the MySQL data directory to the new location
# The -a flag preserves permissions and ownership, -x skips other filesystems
rsync -avx /var/lib/mysql/ /mnt/fast_storage/mysql/

# 4. Verify the copy
ls -la /mnt/fast_storage/mysql/
# Check that all databases are present

# 5. Update the MySQL configuration to point to the new datadir
# Edit /etc/mysql/my.cnf or /etc/my.cnf and change:
# datadir = /var/lib/mysql
# to:
# datadir = /mnt/fast_storage/mysql

# Also update the socket location if needed:
# socket = /tmp/mysql.sock  (or your preferred location)

# 6. Update AppArmor/SELinux (Ubuntu/Debian only)
# Edit /etc/apparmor.d/usr.sbin.mysqld and update the paths:
# /var/lib/mysql/ r,
# /var/lib/mysql/** rwk,
# to:
# /mnt/fast_storage/mysql/ r,
# /mnt/fast_storage/mysql/** rwk,

# Reload AppArmor:
systemctl reload apparmor

# 7. Start MySQL with the new datadir
systemctl start mysql

# 8. Verify MySQL is running and the databases are accessible
mysql -e "SHOW DATABASES;"
mysql -e "SELECT COUNT(*) FROM voipmonitor.cdr;"   # adjust the schema name if your mysqldb setting differs

# 9. Monitor the SQL queue in the VoIPmonitor GUI
# Navigate to GUI → Settings → Sensors → Status
# The SQLq value should decrease as the queued CDRs are processed
</syntaxhighlight>
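
If the copy did not preserve ownership (for example when a different copy tool was used), fix it before starting MySQL. A small sketch using the example mount point from the procedure above:

<syntaxhighlight lang="bash">
# The datadir and everything in it must be owned by the mysql user
chown -R mysql:mysql /mnt/fast_storage/mysql
ls -ld /mnt/fast_storage/mysql
</syntaxhighlight>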
 
'''Important Notes:'''

* '''No CDR Loss:''' The VoIPmonitor sniffer queues CDRs internally (in RAM, or in qoq* files on disk when <code>query_cache = yes</code>) during the MySQL downtime. They are processed after MySQL restarts.
* '''Backup First:''' Always take a backup of <code>/var/lib/mysql</code> before the migration.
* '''Service Downtime:''' Plan for MySQL to be stopped for the duration of the copy (depends on database size and storage speed).
* '''Storage Mount Options:''' Mount the new database partition with ext4 optimizations:
<pre>
/dev/nvme0n1p1  /mnt/fast_storage  ext4  defaults,noatime,data=writeback,barrier=0  0  1
</pre>
* '''Permissions:''' Ensure the new datadir and all files are owned by the MySQL user (<code>mysql:mysql</code>).
* '''Symbolic Link Alternative:''' Instead of changing the datadir in my.cnf, you can create a symbolic link:
<syntaxhighlight lang="bash">
# After stopping MySQL and copying the data:
mv /var/lib/mysql /var/lib/mysql.old
ln -s /mnt/fast_storage/mysql /var/lib/mysql
systemctl start mysql
</syntaxhighlight>

== Prevention ==

* '''Schedule heavy operations off-peak:''' Configure alerts and reports to run at night
* '''Separate components:''' Use a dedicated database server for high-traffic deployments
* '''Monitor trends:''' Set up alerts for SQLq growth before it becomes critical

== Extreme High-Volume Deployments ==

For deployments processing '''hundreds or thousands of CDRs per second''' (10,000+ concurrent calls), standard tuning is not sufficient. These scenarios require:

* Disabling AUTO_INCREMENT and letting the sniffer generate IDs
* Hourly table partitioning instead of daily
* Batch sizes of 10,000+ CDRs
* Aggressive MySQL settings that trade durability for performance

See [[High-Performance_VoIPmonitor_and_MySQL_Setup_Manual]] for detailed configuration.

== See Also ==

* [[Database_troubleshooting]] - Comprehensive database troubleshooting guide
* [[Emergency_procedures]] - RRD graph diagnostics for bottleneck identification
* [[Scaling]] - Performance tuning and scaling guidelines
* [[High-Performance_VoIPmonitor_and_MySQL_Setup_Manual]] - Configuration for 1000+ CDR/s deployments

== AI Summary for RAG ==

'''Summary:''' Troubleshooting guide for SQL queue (SQLq/SQLf) growth during peak traffic. The queue grows when the database cannot keep up with the CDR insertion rate, which delays CDR visibility in the GUI and can lead to memory exhaustion. Diagnosis: check mysqld CPU usage, iowait, and RRD graphs in the GUI (Settings > Sensors > graph icon); SQLf should normally stay below 10. Immediate mitigations: keep <code>query_cache = yes</code> so queued queries are written to disk-based qoq* files instead of RAM (prevents OOM - never disable it for performance), use <code>quick_save_cdr = yes</code> for faster CDR visibility, and increase <code>mysqlstore_max_threads_cdr</code> / <code>mysqlstore_concat_limit</code> for parallel, batched writes (note: <code>mysql_enable_set_id</code> / <code>mysql_enable_new_store = per_query</code> limits the setup to a single sensor writing to the database). Root causes are typically an I/O bottleneck (upgrade HDD to SSD/NVMe), a RAM bottleneck (increase <code>innodb_buffer_pool_size</code> to 50-70% of RAM, set <code>innodb_flush_log_at_trx_commit = 2</code>), or a CPU bottleneck (upgrade to modern CPUs with more cores and NVMe storage; in some cases merge GUI and database onto one powerful server). A qoq* file backlog in /var/spool/voipmonitor can either be left to drain as the database catches up or, as an emergency measure, deleted after stopping the sniffer (this discards the queued CDRs). The MySQL datadir can be migrated to SSD/NVMe by stopping MySQL, rsyncing /var/lib/mysql to the new volume, updating datadir (and AppArmor paths on Ubuntu/Debian), and starting MySQL; the sniffer queues CDRs during the downtime, so no CDR loss is expected. Prevention: schedule heavy reports and alerts off-peak, use a dedicated database server, and monitor SQLq trends.

'''Keywords:''' SQL queue, SQLq, SQLf, peak time, database bottleneck, query_cache, quick_save_cdr, mysqlstore_max_threads_cdr, mysqlstore_max_threads_sip_msg, mysqlstore_concat_limit, innodb_buffer_pool_size, innodb_flush_log_at_trx_commit, I/O bottleneck, SSD upgrade, NVMe, OOM, CDR delay, qoq files, spool directory, file queue backlog, mysql_enable_set_id, hardware upgrade, CPU 100%, datadir migration, rsync MySQL, AppArmor MySQL paths, database performance, peak traffic

'''Key Questions:'''
* Why is the SQL queue growing during peak hours?
* How do I monitor SQL queue size in VoIPmonitor?
* What causes CDR delays in the GUI?
* How do I prevent out-of-memory errors when the SQL queue grows?
* Should I upgrade to SSD for the VoIPmonitor database?
* What is the recommended innodb_buffer_pool_size setting?
* How do I speed up CDR insertion during high traffic?
* How do I clear a backlog of qoq* files in the spool directory?
* How do I migrate the MySQL datadir to SSD/NVMe storage?
