SQL queue is growing in a peaktime
== Delay Between the Active Calls View and the CDR View ==
The Active Calls view in the GUI displays the start time of calls in progress (INVITEs) reported live by the VoIPmonitor sniffer service, whereas the CDR view shows CDRs stored in the database after each call ends.
=== Symptoms of Database Delays ===
When the database cannot keep up with the CDR insertion rate, you may experience:
* '''Slow CDR appearance in the GUI''' - new calls take minutes to appear after they end
* '''"Crontab log is too old" warning''' - the GUI cron job runs slowly (every 5-10 minutes or more instead of every minute) because the database is overloaded
* '''Lag between call end and reporting''' - daily reports and alerts process outdated data
To diagnose cron-related delays specifically, see the "Crontab log is too old" troubleshooting page.
== SQLq/SQLf ==
In the expanded service status line under GUI → Settings → Sensors → Status, the SQLq/SQLf values represent the size of the queue of CDRs waiting to be pushed to the database. When SQLq/SQLf is high, it usually means the database cannot process inserts in a timely manner, so requests queue up. See [[Logging#SQLq/SQLf]] for details on reading these values.
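As a quick check outside the GUI, the sniffer also prints its periodic status line, which includes the SQLq counter, to syslog. A minimal sketch, assuming the service is named <code>voipmonitor</code> and logs to the system journal; on other installs the same lines appear in /var/log/syslog or /var/log/messages:
<syntaxhighlight lang="bash">
# Follow the sniffer's periodic status lines and watch the SQL queue counter grow or shrink.
# Service name and log location are assumptions; adjust for your distribution.
journalctl -u voipmonitor -f | grep --line-buffered "SQLq"

# Or, on a syslog-based install:
grep "SQLq" /var/log/syslog | tail -n 20
</syntaxhighlight>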
== Make Sure the Database Configuration Is Not Causing I/O Overhead ==
Make sure the MySQL configuration follows the recommendations from the scaling section of our documentation (especially innodb_flush_log_at_trx_commit and innodb_buffer_pool_size):
https://www.voipmonitor.org/doc/Scaling#SSDs
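To confirm what the running server is actually using, you can query the two key variables directly. This is only a read-out; the recommended target values come from the Scaling page:
<syntaxhighlight lang="bash">
# Show the current values of the two settings that most affect CDR insert throughput.
mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# The buffer pool size is reported in bytes; compare it against the installed RAM.
free -g
</syntaxhighlight>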
=== More Threads/Connections to the Database ===
You can also increase the number of threads the sniffer uses for writing a particular data type, such as CDRs, to the database. In /etc/voipmonitor.conf, set:
<pre>
mysqlstore_max_threads_cdr = 4
</pre>
If you are processing a high volume of SIP OPTIONS, SUBSCRIBE, or NOTIFY messages and see the sip_msg queue growing, you can increase threads specifically for those messages:
<pre>
mysqlstore_max_threads_sip_msg = 4
</pre>
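These options are read at startup, so the sniffer must be restarted after editing the file. A short sketch, assuming the service is installed under the name <code>voipmonitor</code>; a restart briefly interrupts capture, so do it in a quiet period:
<syntaxhighlight lang="bash">
# Confirm the values currently set in the config file.
grep -E "mysqlstore_max_threads_(cdr|sip_msg)" /etc/voipmonitor.conf

# Apply the new thread settings (the sniffer only reads voipmonitor.conf at startup).
systemctl restart voipmonitor
</syntaxhighlight>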
However, if the database is waiting for storage I/O, increasing the number of threads will not help.
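To tell whether the bottleneck is storage rather than thread count, watch the utilization and latency of the device that holds the MySQL datadir. A rough check using iostat from the sysstat package; device names and paths are examples:
<syntaxhighlight lang="bash">
# Extended device statistics, refreshed every second for 5 samples.
# Sustained %util near 100 and high await values mean the storage itself is the
# bottleneck, and adding more database threads will not help.
iostat -x 1 5

# Check which block device actually holds the MySQL data directory.
df -h /var/lib/mysql
</syntaxhighlight>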
== When Configuration Tuning Is Not Enough: Hardware Upgrade Required ==
If you have applied all the configuration optimizations above and the SQL queue continues to grow or the database remains significantly behind the processing queue, the underlying issue may be insufficient hardware.
=== Signs That a Hardware Upgrade Is Necessary ===
* '''CPU load consistently at or near 100%''' on both the database and GUI servers, even during normal traffic patterns
* '''Old or aging hardware''' - servers with CPUs that are several years old may lack the single-core and aggregate performance of modern processors
* '''Configuration tuning provides only marginal improvement''' - after applying the MySQL and VoIPmonitor optimizations, the delay between the "Last CDR in processing queue" and "Last CDR in database" timestamps remains significant
=== Diagnosing Hardware Limitations ===
Monitor CPU usage on both the database and GUI servers:
<syntaxhighlight lang="bash">
# Check CPU load during peak traffic
top

# Or use sar for historical data
sar -u 1 10

# Check CPU usage per core
mpstat -P ALL 1 5
</syntaxhighlight>
If the CPU load is consistently at or near 100% across multiple cores, the hardware cannot keep up with the traffic load. No amount of configuration tuning will solve this issue - the servers themselves need to be upgraded.
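To confirm that the load really comes from the database rather than another process, it helps to break CPU usage down per process and per thread. A sketch using pidstat from the sysstat package; the process name <code>mysqld</code> is an assumption, MariaDB builds may run as <code>mariadbd</code>:
<syntaxhighlight lang="bash">
# Per-process CPU usage for the database server, sampled every second for 5 samples.
pidstat -u -p "$(pidof -s mysqld)" 1 5

# Add -t to see individual threads; many threads pegged near a full core
# indicates the CPUs, not the configuration, are the limit.
pidstat -u -t -p "$(pidof -s mysqld)" 1 5
</syntaxhighlight>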
=== Hardware Upgrade Recommendations ===
Upgrade to a more modern CPU architecture with significantly better performance:
* '''CPU''' - modern AMD EPYC or Intel Xeon Gold/Silver processors with more cores
* '''RAM''' - ensure sufficient memory for the database buffer pool (see Memory Configuration)
* '''Storage''' - use NVMe SSDs for the database to eliminate I/O bottlenecks
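A quick way to inventory what the current server actually has before deciding on an upgrade; these are read-only commands and safe to run on a production box:
<syntaxhighlight lang="bash">
# CPU model and core count
lscpu | grep -E "Model name|^CPU\(s\)"

# Installed memory
free -g

# Block devices: ROTA=1 means a rotating disk (HDD), ROTA=0 means SSD/NVMe
lsblk -d -o NAME,SIZE,ROTA,MODEL
</syntaxhighlight>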
=== Architecture Consideration: Merge GUI and Database ===
In some cases, merging the GUI and database roles onto a single, powerful new server can be more efficient than maintaining separate, underpowered servers. A single modern server with sufficient CPU cores and RAM can often handle both workloads more effectively than multiple older servers.
For hardware sizing examples, see the [[Hardware]] page, which includes real-world deployments for various call volumes.
=== Migrating MySQL Data to Faster Storage ===
When upgrading from HDDs or slow SATA SSDs to NVMe storage, you can migrate the MySQL data directory (`datadir`) while the sniffer keeps running; only MySQL itself is stopped briefly. No CDR loss is expected with this method because the sniffer queues CDRs internally and processes them once MySQL is back up.
'''Prerequisites:'''
* New SSD/NVMe storage installed and mounted
* Sufficient disk space on the new storage for the existing MySQL data (see the check below)
* MySQL/MariaDB service can be stopped briefly for the migration
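To verify the disk-space prerequisite, compare the size of the current data directory with the free space on the new mount; the paths are the same examples used in the procedure below:
<syntaxhighlight lang="bash">
# Size of the existing MySQL data
du -sh /var/lib/mysql

# Free space on the new storage
df -h /mnt/fast_storage
</syntaxhighlight>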
'''Migration Procedure:'''
<syntaxhighlight lang="bash">
# 1. Prepare the new storage (if your SSD is hot-pluggable)
# Mount the new filesystem. Example:
mount /dev/nvme0n1p1 /mnt/fast_storage
mkdir /mnt/fast_storage/mysql

# 2. Stop the MySQL service
systemctl stop mysql
# Or for MariaDB:
# systemctl stop mariadb

# 3. Copy the MySQL data directory to the new location
# The -a flag preserves permissions and ownership, -x stays on one filesystem
rsync -avx /var/lib/mysql/ /mnt/fast_storage/mysql/

# 4. Verify the copy
ls -la /mnt/fast_storage/mysql/
# Check that all databases are present

# 5. Update MySQL configuration to point to the new datadir location
# Edit /etc/mysql/my.cnf or /etc/my.cnf and change:
#   datadir = /var/lib/mysql
# to:
#   datadir = /mnt/fast_storage/mysql
# Also update the socket location if needed:
#   socket = /tmp/mysql.sock (or your preferred location)

# 6. Update AppArmor (Ubuntu/Debian only; on RHEL/CentOS adjust the SELinux context instead)
# Edit /etc/apparmor.d/usr.sbin.mysqld and update the paths:
#   /var/lib/mysql/ r,
#   /var/lib/mysql/** rwk,
# Change to:
#   /mnt/fast_storage/mysql/ r,
#   /mnt/fast_storage/mysql/** rwk,
# Reload AppArmor:
systemctl reload apparmor

# 7. Start MySQL with the new datadir
systemctl start mysql

# 8. Verify MySQL is running and the databases are accessible
mysql -e "SHOW DATABASES;"
mysql voipmonitor -e "SELECT COUNT(*) FROM cdr;"   # adjust the database name if yours differs

# 9. Monitor the SQL queue in the VoIPmonitor GUI
# Navigate to GUI → Settings → Sensors → Status
# The SQLq value should decrease as queued CDRs are processed
</syntaxhighlight>
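After step 4, and before switching the datadir in step 5, it can be worth double-checking that the copy is complete and still owned by the MySQL user. A small verification sketch using the same example paths:
<syntaxhighlight lang="bash">
# Dry run: re-run the copy with -n so nothing is written; it should list no changed files.
rsync -avxn --delete /var/lib/mysql/ /mnt/fast_storage/mysql/

# Ownership must remain mysql:mysql on the new datadir.
ls -ld /mnt/fast_storage/mysql
# chown -R mysql:mysql /mnt/fast_storage/mysql   # only if ownership was not preserved
</syntaxhighlight>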
'''Important Notes:'''
* '''No CDR Loss:''' the VoIPmonitor sniffer queues CDRs in memory while MySQL is down and processes them after MySQL restarts.
* '''Backup First:''' always take a backup of `/var/lib/mysql` before the migration.
* '''Service Downtime:''' plan for MySQL to be stopped for the duration of the copy operation (depends on database size and storage speed).
* '''Storage Mount Options:''' mount the new database partition with ext4 optimizations; note that data=writeback and barrier=0 trade crash safety for write speed, so use them only if you accept that risk:
<pre>
/dev/nvme0n1p1 /mnt/fast_storage ext4 defaults,noatime,data=writeback,barrier=0 0 1
</pre>
* '''Permissions:''' ensure the new datadir and all files in it are owned by the MySQL user (`mysql:mysql`).
* '''Symbolic Links Alternative:''' instead of changing the datadir in my.cnf, you can create a symbolic link. On Ubuntu/Debian the AppArmor profile from step 6 must still allow the new real path, because AppArmor matches against resolved paths:
<syntaxhighlight lang="bash">
# After stopping MySQL and copying data:
mv /var/lib/mysql /var/lib/mysql.old
ln -s /mnt/fast_storage/mysql /var/lib/mysql
systemctl start mysql
</syntaxhighlight>
== AI Summary for RAG ==
'''Summary:''' This page explains the delay between Active Calls view and CDR view, which occurs when the database cannot keep up with CDR insertion rates. Symptoms include slow CDR appearance, "Crontab log is too old" warning, lag between call end and reporting. The SQLq/SQLf metric indicates queue size before CDRs are pushed to the database. Configuration fixes include: ensuring MySQL config follows recommendations from Scaling page (especially innodb_flush_log_at_trx_commit and innodb_buffer_pool_size), increasing database connection threads via mysqlstore_max_threads_cdr and mysqlstore_max_threads_sip_msg in voipmonitor.conf. HARDWARE UPGRADE: If CPU load is consistently at or near 100% on database/GUI servers even after configuration tuning, this indicates insufficient hardware and requires a hardware upgrade to modern AMD EPYC or Intel Xeon Gold/Silver with more cores, sufficient RAM for buffer pool, and NVMe SSD storage. MYSQL MIGRATION TO SSD: When upgrading storage to SSD/NVMe to fix IOPS bottlenecks, you can migrate MySQL datadir while the system is running: stop MySQL, rsync /var/lib/mysql to new SSD, update datadir in my.cnf, update AppArmor paths if using Ubuntu/Debian, start MySQL. No CDR loss expected because sniffer queues CDRs internally during downtime. Monitor SQLq in GUI to verify queued CDRs are processed after restart. A symbolic link alternative also works: mv /var/lib/mysql to mysql.old and symlink to the new location.
'''Keywords:''' SQL queue, SQLq, SQLf, delay, active calls, CDR, database bottleneck, mysqlstore_max_threads_cdr, mysqlstore_max_threads_sip_msg, innodb_flush_log_at_trx_commit, innodb_buffer_pool_size, Crontab log, hardware upgrade, CPU 100%, aging hardware, inadequate hardware, server upgrade, modern CPU, AMD EPYC, Intel Xeon, NVMe SSD, SSD upgrade, datadir migration, move MySQL to SSD, migrate MySQL datadir, rsync MySQL, no CDR loss, queued CDRs processed, AppArmor MySQL paths, symlink database