I have a server that receives data from thousands of locations all over the world. Periodically, that server connects to my DB server and inserts records, one row at a time in rapid succession. There can be up to 6 of these processes (Perl) running simultaneously, and each may have over 50,000 insert statements; as a result, the tables are getting locked.
What I am trying to figure out is what causes locking? Am I better off creating a multi-insert with, say, 100 rows at a time and doing them end to end? What do I use for guidelines?
The DB server has 100GB RAM and 12 processors. It is very lightly used, but when these inserts come in, everything freezes up for a couple of minutes, which disrupts people running reports, etc.
Thanks for any advice. I know I need to stagger the inserts, I am just asking what is a recommended way to do this.
Best Answer
You cannot get rid of all locking. However, there are ways to speed up what you are doing:
- Batch the INSERTs. 100 rows at a time will run about 10 times as fast.
- Then put multiple INSERTs into a transaction (BEGIN ... COMMIT).
- Or collect the rows somewhere first (e.g., CREATE TEMPORARY TABLE), then INSERT INTO real_table SELECT * FROM tmp_table. Even on MyISAM this will run significantly faster because of 6 locks instead of 6*50K. Read about concurrent_insert to see how to avoid 50K table locks on the tmp tables.

Please show us the SHOW CREATE TABLE. There could be significant other issues, especially with indexes.
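To make the first two points concrete, here is a minimal sketch of batching 100 rows per INSERT inside a single transaction. It uses Python with an in-memory SQLite database purely as a stand-in for the real MySQL server, and the readings table, its columns, and the sample data are all hypothetical; the multi-row VALUES pattern is the same on MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real DB server
conn.execute("CREATE TABLE readings (location_id INTEGER, value REAL)")

# Hypothetical sample data: 50,000 rows, like one of the Perl feeds.
rows = [(i % 1000, float(i)) for i in range(50_000)]

BATCH = 100  # rows per multi-row INSERT
with conn:  # one transaction: BEGIN ... COMMIT wraps all the batches
    for start in range(0, len(rows), BATCH):
        batch = rows[start:start + BATCH]
        # Build "INSERT ... VALUES (?,?),(?,?),..." for this batch.
        placeholders = ",".join(["(?,?)"] * len(batch))
        flat = [v for row in batch for v in row]
        conn.execute(
            f"INSERT INTO readings (location_id, value) VALUES {placeholders}",
            flat,
        )

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 50000
```

Each statement now carries 100 rows, and the whole load commits once instead of 50,000 times.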
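The temp-table approach can be sketched the same way: stage all the rows in a temporary table, then move them into the real table with one statement, so the real table sees one lock per feed instead of one per row. Again this uses SQLite in Python only for illustration, and real_table/tmp_table and their columns are assumed names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real DB server
conn.execute("CREATE TABLE real_table (location_id INTEGER, value REAL)")

rows = [(i, i * 0.5) for i in range(1000)]  # hypothetical feed data

with conn:
    # Stage the incoming rows in a session-private temporary table...
    conn.execute(
        "CREATE TEMPORARY TABLE tmp_table (location_id INTEGER, value REAL)"
    )
    conn.executemany("INSERT INTO tmp_table VALUES (?, ?)", rows)
    # ...then copy them into the real table in a single statement,
    # so the real table is touched once rather than once per row.
    conn.execute("INSERT INTO real_table SELECT * FROM tmp_table")
    conn.execute("DROP TABLE tmp_table")

n = conn.execute("SELECT COUNT(*) FROM real_table").fetchone()[0]
print(n)  # 1000
```

With 6 feeds this is 6 bulk copies into real_table, which is where the "6 locks instead of 6*50K" figure above comes from.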