When to disable MySQL Query Cache
Problem
I have a high-volume MariaDB 10 server with thousands of databases and over a million tables. Currently we are only hitting around a 20% query cache hit rate. We are already employing Memcache and other layers of caching. Is there a rule of thumb for when to disable the query cache for better performance?
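For reference, a hit rate like this is usually computed from the server's global status counters; a minimal sketch, assuming the common Qcache_hits / (Qcache_hits + Com_select) definition of the ratio:

    -- Qcache_hits counts SELECTs answered from the query cache;
    -- Com_select counts SELECTs that actually executed (cache misses).
    SHOW GLOBAL STATUS LIKE 'Qcache_hits';
    SHOW GLOBAL STATUS LIKE 'Com_select';
    -- hit ratio = Qcache_hits / (Qcache_hits + Com_select)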
Solution
Do you have a machine with 4 or more cores/processors? The current MySQL query cache implementation does not scale well, and there is a mutex contention problem on systems with more threads: a query might spend more time waiting to check the query cache than it would take to execute fresh, if the relevant data are already in other buffers that allow parallel access.
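One way to spot this contention is to look for threads blocked on the query cache mutex; MySQL and MariaDB report these with the thread state "Waiting for query cache lock":

    -- Threads stuck on the query cache mutex show up in the process list:
    SHOW FULL PROCESSLIST;

    -- Or filter them out via information_schema:
    SELECT id, state, time, info
    FROM information_schema.PROCESSLIST
    WHERE state = 'Waiting for query cache lock';

If many threads sit in this state under load, the cache is costing you more than it saves.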
Do you have cacheable queries? Many queries do not repeat because they contain random or time-dependent constants. Imagine a query fetching "active articles" with 1-second precision: it would be cached and could be served from the cache for one second if it were fired multiple times, but the next second a new version arrives with a different timestamp, so it does not match and is executed anew. You end up with a QC full of such queries, each effective for only 1 second. Then there are queries containing rand(), now() and similar functions, which cannot be cached at all. A sketch of both cases follows below.
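As an illustration (the articles table and the cutoff are hypothetical):

    -- Not cacheable at all: NOW() is non-deterministic, so the result
    -- is never stored in the query cache.
    SELECT * FROM articles WHERE published_at > NOW() - INTERVAL 1 HOUR;

    -- Cacheable, but useful for only one second: the literal timestamp
    -- changes on every issue, so each variant fills the cache and then
    -- never matches again.
    SELECT * FROM articles
    WHERE published_at > '2016-05-04 12:00:00' - INTERVAL 1 HOUR;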
Transactions may collide with the QC: when one query inside a transaction invalidates some table in the query cache, later SELECTs on that table inside the same transaction cannot use the query cache at all (because of repeatable-read consistency; other transactions might fill the QC with data that would be stale from the viewpoint of this transaction).
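A minimal sketch of that scenario, reusing the hypothetical articles table:

    START TRANSACTION;
    -- Invalidates all QC entries for 'articles':
    UPDATE articles SET views = views + 1 WHERE id = 1;
    -- This SELECT bypasses the query cache for the rest of the
    -- transaction: its repeatable-read snapshot may differ from
    -- whatever other transactions would see (or put) in the cache.
    SELECT * FROM articles WHERE id = 1;
    COMMIT;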
Thousands of databases and millions of tables sounds like some kind of hosting setup, so you probably have no way to fix the queries or guarantee they will be cacheable at all. It seems only a small percentage is, and giving the cache more memory will only slow down invalidations, not improve the hit ratio.
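If disabling turns out to be the right call, these are the standard knobs; a sketch only, since the exact runtime behavior of query_cache_type varies between MySQL and MariaDB versions:

    -- At runtime:
    SET GLOBAL query_cache_type = OFF;  -- stop using the cache
    SET GLOBAL query_cache_size = 0;    -- free the memory it held

    -- And persist it in my.cnf under [mysqld] so it survives restarts:
    --   query_cache_type = 0
    --   query_cache_size = 0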
Context
StackExchange Database Administrators Q#136790, answer score: 8