HiveBrain v1.2.0

mysql large number of opened tables -- what does this mean and how can I reduce?

Submitted by: @import:stackexchange-dba

Problem

I have 300 databases with a maximum of 59 tables each. My hosting provider says I have a lot of opened tables. I am not having any performance issues, but it is causing problems with backups (I use Rackspace's Cloud Databases, which uses Percona XtraBackup).

What can cause a lot of open tables, and how can I decrease this? I don't get much traffic and performance is good. I am also not sure I understand exactly what "opened tables" means.

I do have table_definition_cache set to 4096 as I wasn't able to run queries on views until I increased it.

mysql> SHOW GLOBAL STATUS LIKE 'Open_%';
+--------------------------+----------+
| Variable_name            | Value    |
+--------------------------+----------+
| Open_files               | 7        |
| Open_streams             | 0        |
| Open_table_definitions   | 4102     |
| Open_tables              | 4096     |
| Opened_files             | 44025318 |
| Opened_table_definitions | 1660409  |
| Opened_tables            | 1857375  |
+--------------------------+----------+


Here are the commands requested:

https://gist.github.com/blasto333/aa4241a4e37447961188356719ea6984

Solution

table_open_cache is the setting that is most likely to get the host off your back.

Please run SHOW VARIABLES LIKE '%open%';. And how long had the system been up when the STATUS was captured? (We need "per second" rates, not absolute counts.)

Provide me with your RAM size, plus the output of SHOW VARIABLES and SHOW GLOBAL STATUS; I will critique dozens of things.
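A minimal sketch of the commands to gather (all standard MySQL statements; nothing here is host-specific):

```sql
-- Server-wide settings and counters accumulated since startup
SHOW VARIABLES;
SHOW GLOBAL STATUS;

-- Uptime in seconds, needed to turn the counters into per-second rates
SHOW GLOBAL STATUS LIKE 'Uptime';
```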

Are any of the tables PARTITIONed? (Each 'partition' is effectively a separate 'table'.)
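One way to check for partitioned tables is the standard information_schema.PARTITIONS view, where PARTITION_NAME is NULL for unpartitioned tables (a diagnostic sketch, not part of the original answer):

```sql
-- Partitioned tables and their partition counts; each partition
-- effectively costs a separate table descriptor in the cache
SELECT TABLE_SCHEMA, TABLE_NAME, COUNT(*) AS partitions
FROM information_schema.PARTITIONS
WHERE PARTITION_NAME IS NOT NULL
GROUP BY TABLE_SCHEMA, TABLE_NAME;
```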

Also, rethink your 300x59; those are moderately large numbers.

VARIABLES & STATUS Analysis

Observations:
Version: 5.6.17-log;
4 GB of RAM;
Uptime = 145d 22:45:14;
You are not running on Windows;
Running 64-bit version;
You appear to be running entirely (or mostly) InnoDB.

The More Important Issues

key_buffer_size -- lower to 20M; your 400M is wasting RAM.

table_open_cache seems to be doing fine, so I don't see the need to increase the open file limit. (Nor is there any harm in doing so.)

A lot of table scans, etc. Suggest you lower long_query_time to 1 and turn on the slow_query_log. Then we can look in the slow log for which queries need improvement -- either via composite indexes or other techniques.
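Both variables are dynamic in 5.6, so the slow log can be turned on at runtime; mirror the same settings in my.cnf so they survive a restart. (Note that SET GLOBAL long_query_time only affects connections opened afterwards.)

```sql
-- Log any statement taking longer than 1 second
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log = ON;

-- Confirm where the log is being written
SHOW VARIABLES LIKE 'slow_query_log_file';
```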

expire_logs_days is currently 1. That gives you only one day to discover that something is broken in replication and/or binlogs before you lose info.

Remove most or all OPTIMIZE TABLE calls.

Details and other observations

( Opened_tables ) = 0.15/second -- This is a reasonably low and quite respectable rate. After all, you have been up for 20 weeks, and this is a 'counter'. Point out the 20 weeks to the hosting provider.

table_open_cache is the number of entries in a cache for "Opened" tables. Perhaps there is a flurry of opening during backup, but no opening the rest of the time?

As for "I do have table_definition_cache set to 4096 as I wasn't able to run queries on views until I increased it." -- Do your VIEWs touch hundreds or thousands of tables? (Yikes!)
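On 5.6 the same counters are visible through information_schema, so the per-second rate can be computed server-side rather than by hand (a sketch; on 5.7+ these tables moved to performance_schema.global_status):

```sql
-- Opened_tables divided by seconds of uptime (~0.15/sec here)
SELECT s.VARIABLE_VALUE / u.VARIABLE_VALUE AS opened_tables_per_sec
FROM information_schema.GLOBAL_STATUS s
JOIN information_schema.GLOBAL_STATUS u
  ON u.VARIABLE_NAME = 'UPTIME'
WHERE s.VARIABLE_NAME = 'OPENED_TABLES';
```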

( (key_buffer_size - 1.2 * Key_blocks_used * 1024) / _ram ) = (400M - 1.2 * 4180 * 1024) / 4096M = 9.6% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size.

( Key_blocks_used * 1024 / key_buffer_size ) = 4,180 * 1024 / 400M = 1.0% -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size to avoid unnecessary memory usage.
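key_buffer_size is dynamic, so the suggested reduction needs no restart (mirror it in my.cnf afterwards so it persists):

```sql
-- Shrink the MyISAM key cache from 400M to 20M, freeing
-- roughly 380M of RAM for the InnoDB buffer pool / OS cache
SET GLOBAL key_buffer_size = 20 * 1024 * 1024;
```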

( innodb_buffer_pool_size / _ram ) = 2,306,867,200 / 4096M = 53.7% -- % of RAM used for InnoDB buffer_pool -- For only 4GB of RAM, this is probably OK, unless you have other applications in the same server.

( table_open_cache ) = 4,096 -- Number of table descriptors to cache
-- Several hundred is usually good. However, you have a lot of tables, so, this may be OK, or even too small.

( table_open_cache_instances ) = 1 -- Suggest changing to 8.
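table_open_cache_instances is read-only at runtime, so this change goes in my.cnf and takes effect on the next restart (a config sketch):

```ini
[mysqld]
# Split the table open cache into 8 instances to reduce mutex contention
table_open_cache_instances = 8
```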

Table_open_cache_overflows and _misses are each less than 0.1/sec. -- the table_open_cache is big enough.

( Open_tables / table_open_cache ) = 100% -- But, since you have been Up for several months, this is not necessarily bad.

( (Com_show_create_table + Com_show_fields) / Questions ) = (153314 + 9291657) / 543294681 = 1.7% -- Naughty framework -- spending a lot of effort rediscovering the schema.
-- Complain to the 3rd party vendor.

Query cache -- various STATUS values indicate that it is pretty good as it stands.

( Created_tmp_disk_tables / (Created_tmp_disk_tables + Created_tmp_tables) ) = 10,616,310 / (10616310 + 13112655) = 44.7% -- Percent of temp tables that spilled to disk
-- maybe increase tmp_table_size and max_heap_table_size; avoid blobs, etc.
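Both variables are dynamic, and the smaller of the two is the effective in-memory limit, so raise them together (64M is an illustrative value, not from the answer; temp tables containing BLOB/TEXT columns go to disk regardless):

```sql
-- In-memory temp tables may grow to min(tmp_table_size, max_heap_table_size)
-- before spilling to disk; raise both in lockstep
SET GLOBAL tmp_table_size      = 64 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;
```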

( Com_rollback / Com_commit ) = 646,634 / 1329015 = 48.7% -- Rollback : Commit ratio
-- Rollbacks are costly; change app logic

( Select_scan ) = 16,557,535 / 12609914 = 1.3 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)

( Select_scan / Com_select ) = 16,557,535 / 57256550 = 28.9% -- % of selects doing full table scan. (May be fooled by Stored Routines.)
-- Add indexes / optimize queries

( ( Com_stmt_prepare - Com_stmt_close ) / ( Com_stmt_prepare + Com_stmt_close ) ) = ( 1587 - 1526 ) / ( 1587 + 1526 ) = 2.0% -- Are you closing your prepared statements?
-- Add Closes.

( binlog_format ) = MIXED -- STATEMENT/ROW/MIXED. ROW is preferred; it may become the default.

( expire_logs_days ) = 1 -- How soon to automatically purge binlog (after this many days)
-- Too large (or zero) consumes disk space; too small leaves little time to respond to a network/machine crash.
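expire_logs_days is dynamic; something around a week is a common middle ground between disk usage and reaction time (7 is an illustrative choice, not from the answer; persist it in my.cnf as well):

```sql
-- Keep binlogs for 7 days instead of 1, leaving more time to
-- notice and repair a replication problem before the logs are purged
SET GLOBAL expire_logs_days = 7;
```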

( slow_query_log ) = OFF -- Whether to log slow queries. (5.1.12)

( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2

( max_connect_errors ) = 10,000 -- A small protection against hackers.
-- Perhaps no more than 200.

( Connections ) = 28,367,185 / 12609914 = 2.2 /sec -- Connections
-- Increase wait_timeout; use pooling?

( Max_used_connections / max_connections ) = 167 / 410 = 41% -- Suggest lowering max_connections to, say, 200.

( innodb_io_capacity_max ) = 800 -- This is rather low, but perhaps OK.

( report_port ) = 1 -- This is pro

Context

StackExchange Database Administrators Q#133694, answer score: 9
