InnoDB: 10 MB/sec write speed after 48 hours of optimizing
Problem
I run this on an 8-core server with NVMe storage (2 TB) and 64 GB of RAM.
The disks are very fast: 1.1 GB/sec sequential and 70-100k IOPS full duplex.
Because performance was so horrible with MySQL 5.7, I installed MariaDB 10.3.8 in a slim Docker container.
In total I have 2 TB and a billion rows of tables to write. But let me make this clear: the slow performance happens on an empty disk within the first few thousand rows; it is not related to a large table.
I invested roughly 50 hours of work into this in the past week, day and night. I read every documentation page I could find and hundreds of guides and questions on various platforms.
I tested almost every combination you could think of:
pure memory buffered, pure disk buffered, with and without large logs and log buffers, various flushing methods, no flushing - all the settings you can think of.
I tested importing using:
mydumper, the mysql console, mysqlimport, LOAD DATA INFILE, PHP inserts, and multithreaded PDO scripts I wrote.
I tested tables with and without indexes, and with only a primary key.
I tried importing with and without transactions, and tried single-row and multi-row INSERTs.
I tried different table layouts, usually 20-30 columns containing mostly varchars and a few datetimes.
Performance in a single thread is 3-5k rows/second, and multithreaded (ridiculously) 10-25k rows/second.
The CPU and disk are mostly idle the whole time; iostat shows 3-20 MB/sec write throughput, usually around 7-12 MB/sec, depending on which settings I try.
So it's about 100 times slower than it should be, and nothing obvious is holding it back.
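For reference, the multi-row INSERT batching tried above can be sketched in Python. The statement builder below only assembles the SQL text and parameter list (the table and column names are hypothetical), so it runs without a server:

```python
def build_multi_row_insert(table, columns, rows):
    """Assemble one multi-row INSERT with %s placeholders plus the
    flattened parameter list, ready for a driver's execute()."""
    row_placeholder = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = "INSERT INTO {} ({}) VALUES {}".format(
        table,
        ", ".join(columns),
        ", ".join([row_placeholder] * len(rows)),
    )
    params = [value for row in rows for value in row]
    return sql, params

# Hypothetical table and columns: batch three rows into a single statement.
sql, params = build_multi_row_insert(
    "events",
    ["id", "name", "created_at"],
    [(1, "a", "2018-07-01"), (2, "b", "2018-07-02"), (3, "c", "2018-07-03")],
)
```

Wrapping many such batches in one transaction (fewer commits, so fewer log flushes) is what the single-row vs multi-row and with/without-transaction comparisons above were probing.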
This is the current configuration:
```
innodb_buffer_pool_size = 14G
innodb_buffer_pool_chunk_size=1G
innodb_log_buffer_size = 32M
innodb_file_per_table = 1
innodb_open_files = 600
#innodb_flush_method = O_DIRECT
innodb_flush_method = O_DSYNC
innodb_log_file_size = 512M
innodb_io_capacity=800
innodb_io_capacity_max=3000
innodb_flush_neighbors=
```
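As a sanity check on the "about 100 times slower" figure, the numbers quoted above can be run through rough arithmetic (all values are taken from the question; the implied bytes-per-row is only a ballpark):

```python
# Rough arithmetic on the throughput figures quoted above.
rows_per_sec = 25_000        # best multithreaded insert rate observed
observed_mb_per_sec = 12     # typical iostat write throughput
disk_mb_per_sec = 1100       # sequential capability of the NVMe disks (1.1 GB/sec)

# Bytes actually hitting disk per inserted row, and the gap to disk capability.
implied_bytes_per_row = observed_mb_per_sec * 1024 * 1024 / rows_per_sec
slowdown_factor = disk_mb_per_sec / observed_mb_per_sec

print(f"~{implied_bytes_per_row:.0f} bytes written per row")
print(f"~{slowdown_factor:.0f}x below sequential disk capability")
```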
Solution
Suggestions to consider for your my.cnf [mysqld] section, based on the visible information provided. The entire block goes at the END of [mysqld]; REMOVE any same-named variable appearing higher in the section to avoid conflicts.

```
innodb_io_capacity=40000               # from 5000 to open the door for NVMe speed
read_rnd_buffer_size=256K              # from 1M to reduce the handler_read_rnd_next rate
innodb_lru_scan_depth=128              # from 1024 to conserve CPU every second
innodb_adaptive_max_sleep_delay=10000  # from 150000 (microseconds) to shorten the max sleep delay
innodb_flushing_avg_loops=4            # from 30 to reduce the loop delay
innodb_thread_concurrency=0            # from 144; see dba.stackexchange Question 5666
max_seeks_for_key=32                   # limit the optimizer to nn seeks vs ~4 billion possible
max_write_locks_count=16               # allow reads after nn write locks vs up to 4 billion
thread_concurrency=30                  # from 10 for additional concurrency - may be deprecated
innodb_buffer_pool_instances=8         # from 64; see the reference manual on innodb_lru_scan_depth
innodb_log_file_size=6G                # from ~512M to reduce log rotation
innodb_log_buffer_size=3G              # from 16M for ~30 minutes of buffering
query_cache_type=OFF                   # from ON; no need to waste CPU on cache management
query_cache_size=0                     # from ~256M to conserve RAM and CPU cycles
slow_query_log=ON                      # from OFF; always good to have on
```

RAM use strategy while you are loading 10,000+ rows per second: total RAM is 64 GB, so allow mysqld up to 48 GB (~75%). While you are loading this high volume:

```
innodb_buffer_pool_size=30G       # 62.5% of your 48G
innodb_change_buffer_max_size=50  # 50% for the best insert rate per second
```

When the loading has completed:

```
SET GLOBAL innodb_change_buffer_max_size=15;  # 15% set aside for routine maintenance
```

and you will settle into having the typically needed data available in the InnoDB buffer pool within a reasonable length of uptime.
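The RAM strategy in the answer is simple percentage arithmetic; a quick check of the figures (taken directly from the answer):

```python
# Percentage-based sizing from the RAM strategy above.
total_ram_gb = 64
mysqld_cap_gb = total_ram_gb * 0.75     # allow mysqld ~75% of RAM
buffer_pool_gb = mysqld_cap_gb * 0.625  # innodb_buffer_pool_size at 62.5% of that

print(f"mysqld cap: {mysqld_cap_gb:.0f}G, buffer pool: {buffer_pool_gb:.0f}G")
# -> mysqld cap: 48G, buffer pool: 30G
```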
Context
StackExchange Database Administrators Q#212884, answer score: 3