
MySQL not releasing memory

Submitted by: @import:stackexchange-dba

Problem

MySQL seems to want to keep an entire table in cache (table size = ~20GB) after any large inserts or select statements are performed on it. Right now my innodb buffer pool is 20GB. Total RAM is 32GB. I will provide some memory usage and output from innodb status as well as output from mysqltuner. It's been driving me nuts for the past few days. Please help! I appreciate any feedback and please let me know if you need more information.

Also, performing a 'FLUSH TABLES' just closes and re-opens them in memory. At least I think that's what is happening. Here's the innodb current memory status before I performed a bunch of inserts:

```
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 21978152960; in additional pool allocated 0
Dictionary memory allocated 6006471
Buffer pool size   1310719
Free buffers       347984
Database pages     936740
Old database pages 345808
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 78031, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 551887, created 384853, written 4733512
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 936740, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
```


mysqld percent memory usage: 60.9%

mysqld percent memory usage after inserts (1 mil records): 63.3%

and then after more inserts (3 mil records): 70.2%

Shouldn't it cap out at about 62.5% (20GB of the 32GB total RAM)?

Output from top, sorted by %MEM usage:

```
top - 14:30:56 up 23:25, 3 users, load average: 3.63, 2.31, 1.91
Tasks: 208 total, 4 running, 204 sleeping, 0 stopped, 0 zombie
Cpu(s): 96.0%us, 3.0%sy, 0.0%ni, 0.0%id, 1.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 28821396k total, 28609868k used, 211528k free, 138696k buffers
Swap: 33554428k total, 30256k used, 33524172k free, 1208184k cached

```

Solution

First of all, take a look at the InnoDB Architecture (courtesy of Percona CTO Vadim Tkachenko):

[InnoDB architecture diagram]

Your status for the Buffer Pool says

Buffer pool size 1310719

That's your buffer pool size in pages. Each page is 16K, so 1310719 pages comes out to 20G minus 16K (1310720 pages would be exactly 20G).
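A quick back-of-the-envelope check of that page math, and of the 62.5% figure from the question (plain Python; the numbers come straight from the status output above):

```python
# InnoDB default page size is 16K
PAGE_SIZE = 16 * 1024

# "Buffer pool size" from SHOW ENGINE INNODB STATUS, in pages
buffer_pool_pages = 1310719

pool_bytes = buffer_pool_pages * PAGE_SIZE
print(pool_bytes)                   # 21474820096
print(20 * 2**30 - pool_bytes)      # 16384 -> exactly one page short of 20G

# The asker's 62.5% figure: a 20G buffer pool out of 32G total RAM
print(20 / 32 * 100)                # 62.5
```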

Please note the following: you pushed data into the InnoDB Buffer Pool. What changed?

```
Buffer pool size   1310719
Free buffers       271419 (It was 347984)
Database pages     1011886 (It was 936740)
Old database pages 373510 (It was 345808)
Modified db pages  4262 (It was 0)
```


Also, note the difference between the Buffer Pool Size in Pages.

1310719 (Buffer pool size) - 1011886 (Database pages) = 298833

That's 298833 InnoDB pages. How much space is that?

```
mysql> select FORMAT(((1310719 - 1011886) * 16384) / power(1024,3),3) SpaceUsed;
+-----------+
| SpaceUsed |
+-----------+
| 4.560     |
+-----------+
```


That's 4.56GB. That space is used for the Insert Buffer section of the InnoDB Buffer Pool (a.k.a. the Change Buffer), which buffers changes to nonunique secondary indexes before merging them into the System Tablespace file (which we have all come to know as ibdata1).

The InnoDB Storage Engine manages the Buffer Pool's internals, so InnoDB itself will never surpass 62.5% of RAM. What's more, RAM allocated to the Buffer Pool is never given back to the OS.

WHERE IS THE 70.2% OF RAM COMING FROM?

Look back at these lines from the output of mysqltuner.pl:

```
[OK] Maximum possible memory usage: 22.6G (82% of installed RAM)
Key buffer size / total MyISAM indexes: 2.0G/58.7M
[--] Total buffers: 22.2G global + 2.7M per thread (151 max threads)
```


mysqld has three major ways of allocating RAM:

  • You set 20G for the InnoDB Buffer Pool
  • You have 2G for the MyISAM Key Cache
  • The remaining 0.6G comes from 151 (max_connections) times 2.7M per DB connection or thread. The 2.7M comes from (join_buffer_size + sort_buffer_size + read_buffer_size)



Any small spike in DB Connections will raise RAM past the 62.5% threshold you see for InnoDB.
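The mysqltuner figures add up. A quick sketch in plain Python (the 22.2G, 2.7M, and 151 values come straight from the mysqltuner lines above):

```python
MIB = 2**20
GIB = 2**30

global_buffers = 22.2 * GIB   # "Total buffers: 22.2G global" (mysqltuner)
per_thread     = 2.7 * MIB    # join_buffer + sort_buffer + read_buffer
max_threads    = 151          # max_connections

# Worst case: every allowed connection allocates its per-thread buffers
max_possible = global_buffers + max_threads * per_thread
print(round(max_possible / GIB, 1))   # 22.6, matching "Maximum possible memory usage: 22.6G"
```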
MyISAM (Side Note)

What catches my eye is

Key buffer size / total MyISAM indexes: 2.0G/58.7M


Since you have so few MyISAM indexes, you could set the key_buffer_size to 64M.

You do not need to restart mysql for that. Just run

SET GLOBAL key_buffer_size = 1024 * 1024 * 64;


Then, modify this in my.cnf

```
[mysqld]
key_buffer_size = 64M
```


This will give the OS back almost 2GB of RAM. Your VM will simply love you for it. Give it a try!

CAVEAT
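For reference, the arithmetic behind that setting (plain Python; the 2G figure is the current key_buffer_size from mysqltuner):

```python
MIB = 2**20

# Value assigned by SET GLOBAL key_buffer_size = 1024 * 1024 * 64
new_key_buffer = 64 * MIB
print(new_key_buffer)            # 67108864

# RAM handed back relative to the old 2G key cache
freed = 2 * 2**30 - new_key_buffer
print(round(freed / MIB))        # 1984 MiB, i.e. just under 2G returned to the OS
```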

Running FLUSH TABLES on InnoDB tables simply closes the file handles against the .ibd files. It does not really push changes to disk directly; the changes have to migrate their way through InnoDB's internal pipeline. This is why you see the spike in Modified db pages. The 4262 modified pages (66.59 MB) get flushed when InnoDB schedules its next flush.


Context

StackExchange Database Administrators Q#62021, answer score: 12
