MySQL slow log configuration
Tags: configuration, mysql, slow-log
Problem
Is there a way to make the MySQL slow log start a new log file every day?
At the moment it's just a single large file, and I have to grep through it for each day's entries. It would be much more convenient to have a separate file for each day's slow logs.
Do I have to configure something in my.cnf, or use some Linux feature?
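For reference, the slow log itself is turned on in my.cnf; a minimal fragment might look like the following (option names are the MySQL 5.1+ forms, and the file path is an assumption -- adjust to your datadir):

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/log/slow.log
long_query_time     = 1
```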
Solution
Update
As Aaron points out, there is a chance that copy-and-truncate can miss some entries, so the safer method is to move and FLUSH.

Original
This article describes the basic principle I use for rotating the slow query log. Basically you need to copy the slow log to a new file, then truncate the contents of slow.log:
cp log/slow.log log/slow.log.`date +%F`; > log/slow.log

If you just move the slow log to a new file and create a new slow.log, it won't work, because the moved file still has the same inode and mysql still has it open. I suppose moving the file and then issuing a
FLUSH SLOW LOGS command would also work, as that closes and reopens the file, but I find copy-and-truncate just as effective, and it doesn't require logging into mysql.

His article mentions using logrotate on Linux, but I just made a cron job that runs once a day at midnight to do this for me.
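A minimal sketch of such a cron job as a script, assuming the log lives at log/slow.log as in the snippet above (the mkdir/touch lines only set up the demo; in practice mysqld has already created the file):

```shell
#!/bin/sh
# Copy-and-truncate rotation for the MySQL slow log (paths are assumptions).
LOG=log/slow.log
mkdir -p log && touch "$LOG"   # demo setup only

cp "$LOG" "$LOG.$(date +%F)"   # keep a dated copy; mysqld's open fd is untouched
: > "$LOG"                     # truncate in place so the inode stays the same
```

Installed from cron it would look something like `0 0 * * * /usr/local/bin/rotate-slow-log.sh`. Note that a literal % is special inside a crontab entry and must be escaped as \%, which is one more reason to keep the date command in a script rather than inline in the crontab.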
Also, to address the issue of replication on FLUSH LOGS:

FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE, and FLUSH TABLES WITH READ LOCK (with or without a table list) are not written to the binary log in any case because they would cause problems if replicated to a slave. [src]
So no: since those statements are not written to the binary log, it will not interfere with replication. For your purposes I would specify FLUSH SLOW LOGS to only close/open the slow query log.

Code Snippets
cp log/slow.log log/slow.log.`date +%F`; > log/slow.log

Context
StackExchange Database Administrators Q#15863, answer score: 5