HiveBrain v1.2.0

How do I insert tens of thousands of rows without locking?

Submitted by: @import:stackexchange-dba

Problem

I have a server that receives data from thousands of locations all over the world. Periodically, that server connects to my DB server and inserts records one row at a time in rapid succession. There can be up to 6 of these (Perl) processes running simultaneously, and each may issue over 50,000 INSERT statements; as a result, the tables are getting locked.

What I am trying to figure out is what causes the locking. Am I better off building multi-row INSERTs of, say, 100 rows at a time and running them back to back? What guidelines should I use?

The DB server has 100 GB of RAM and 12 processors. It is very lightly used, but when these inserts come in, everything freezes up for a couple of minutes, which disrupts people running reports, etc.

Thanks for any advice. I know I need to stagger the inserts; I am just asking for a recommended way to do this.

Solution

You cannot get rid of all locking. However, there are ways to speed up what you are doing:

  • Use multi-row INSERTs. Inserting 100 rows per statement can run about 10 times as fast as 100 single-row statements.

  • Switch to InnoDB (pt-online-schema-change can convert the tables without downtime). Then put multiple INSERTs into a single transaction (BEGIN ... COMMIT).

  • The 6 threads could write to 6 different tables (such as CREATE TEMPORARY TABLE), then INSERT INTO real_table SELECT ... FROM tmp_table. Even on MyISAM this will run significantly faster because it takes 6 locks instead of 300K (6 threads × 50K rows). Read about concurrent_insert to see how to avoid 50K table locks on the tmp tables.
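The first two points can be combined: batch incoming rows into multi-row INSERTs and wrap the batches in one transaction. A minimal sketch using Python's stdlib sqlite3 as a stand-in for MySQL (the `readings` table and the data are hypothetical, invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (location_id INTEGER, value REAL)")

rows = [(i % 7, float(i)) for i in range(1000)]  # fake incoming data

BATCH = 100  # ~100 rows per INSERT statement, as suggested above
with conn:  # the multi-row INSERTs commit as a single transaction
    for start in range(0, len(rows), BATCH):
        batch = rows[start:start + BATCH]
        # Build one statement with a (?, ?) group per row in the batch
        placeholders = ",".join(["(?, ?)"] * len(batch))
        flat = [v for row in batch for v in row]
        conn.execute(f"INSERT INTO readings VALUES {placeholders}", flat)

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 1000
```

With MySQL the same pattern applies through the client library (e.g. building a multi-row VALUES list in DBI), with the batch size tuned so each statement stays under max_allowed_packet.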



Please show us the SHOW CREATE TABLE output. There could be other significant issues, especially with the indexes.
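The staging-table idea in the last bullet can be sketched the same way: each writer fills its own temporary table row by row, then moves the whole batch into the shared table with a single INSERT ... SELECT, so the shared table is locked once per batch instead of once per row. Again sqlite3 stands in for MySQL, and `real_table`/`tmp_feed` are hypothetical names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE real_table (location_id INTEGER, value REAL)")
# Each writer gets a private staging table (TEMPORARY = per-connection)
conn.execute("CREATE TEMPORARY TABLE tmp_feed (location_id INTEGER, value REAL)")

# The writer loads its rows into the private table first
conn.executemany("INSERT INTO tmp_feed VALUES (?, ?)",
                 [(i % 7, float(i)) for i in range(500)])

# One bulk move into the shared table, then clear the staging table
with conn:
    conn.execute("INSERT INTO real_table SELECT * FROM tmp_feed")
    conn.execute("DELETE FROM tmp_feed")

count = conn.execute("SELECT COUNT(*) FROM real_table").fetchone()[0]
print(count)  # 500
```

Each of the 6 processes would run its own copy of this load-then-move loop against its own staging table.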

Context

StackExchange Database Administrators Q#117351, answer score: 9
