MySQL – How to optimize a table/DB with 200 inserts/sec

myisam, MySQL, performance, query-performance

I'm facing a very hard problem.

I have a MySQL DB (MyISAM) with a main table (user) where I write a row for every new user (around 10 fields). Every INSERT triggers around 10 further INSERTs into 10 different tables (counters, settings, etc.).

On normal days (10 users/min) it works fine. On high-traffic days (100–200 users/min) my app becomes unusable.

I have worked a lot on system variables, table indexes, etc., without any improvement. Today I figured out that the problem is probably in my main table (user): the slow query log shows that a single INSERT can take up to 180 s to execute!

This is the critical query because I need the user ID before the user can go on and use the app. Moreover, this table receives a lot of SELECTs, some direct and some in JOINs, which means the locks taken by INSERTs also block all the SELECTs.

On high-traffic days, 90% of new users can't get past signup: they get an error and leave the app.

I'm reading a lot and trying many new settings, but the problem is probably not in the MySQL settings; it's in the app structure or in that first INSERT.

Do you have any suggestions for me? I'm completely open to anything and would like to discuss possible solutions with you.

Thanks a lot in advance for any help.

P.S. I didn't include any info about the DB, queries, etc. If you need more information, just let me know.

EDIT:

I don't have the privileges to comment, so I'll add this information here:

1) Do you mean converting only the user table to InnoDB, or the whole DB?

2) I just checked the values of Max_used_connections (271) and Threads_running (29). Here is my my.cnf file.

Best Answer

There are multiple design changes to make.

Use InnoDB, not MyISAM: you need row-level locking, not table-level locking. (Also set innodb_flush_log_at_trx_commit = 2.) A blog with conversion tips: http://mysql.rjweb.org/doc.php/myisam2innodb . A MyISAM INSERT always blocks SELECTs; an InnoDB INSERT rarely does.
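A minimal sketch of the two changes, assuming the table is named `user` as in the question (every other table would get the same ALTER):

```sql
-- Convert the hot table (and eventually every table) to InnoDB.
ALTER TABLE user ENGINE=InnoDB;

-- Flush the redo log to disk roughly once per second instead of at every
-- commit. Trade-off: on a crash you can lose up to ~1 s of transactions.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```

The SET GLOBAL takes effect immediately but should also go into my.cnf so it survives a restart.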

Rethink your counters. You have reached the limit of bumping counters on the fly. Instead of bumping them one at a time, batch them: one approach is to write the actions to a log table, then run batch operations over it to update the various counters.
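One way to sketch that batching, using hypothetical `action_log` and `daily_counters` tables (names and columns are assumptions, not from the question):

```sql
-- At signup time: one cheap append instead of ten counter UPDATEs.
INSERT INTO action_log (user_id, action, created_at)
VALUES (?, 'signup', NOW());

-- Periodically (e.g. from a cron job), fold the unprocessed log rows into
-- the counters in a single statement, then mark them done. Run both
-- statements inside one transaction so no row is counted twice or skipped.
INSERT INTO daily_counters (day, signups)
SELECT DATE(created_at), COUNT(*)
FROM action_log
WHERE processed = 0 AND action = 'signup'
GROUP BY DATE(created_at)
ON DUPLICATE KEY UPDATE signups = signups + VALUES(signups);

UPDATE action_log SET processed = 1 WHERE processed = 0;
```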

At 200/sec or more, I recommend ping-ponging between two staging tables, as described here: http://mysql.rjweb.org/doc.php/staging_table . Note that this technique also lets you batch the INSERTs, speeding them up considerably.
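The swap itself can be sketched like this; `staging`, `staging_spare`, and the `(name, email)` columns are illustrative names, not from the question. Writers always INSERT into `staging`, and a single consumer periodically does:

```sql
-- Atomically swap in the empty spare table; writers never notice.
RENAME TABLE staging TO staging_busy, staging_spare TO staging;

-- Process the now-idle copy in bulk, e.g. one multi-row INSERT into the
-- real table instead of hundreds of single-row INSERTs.
INSERT INTO user (name, email)
SELECT name, email FROM staging_busy;

-- Empty it and put it back in reserve for the next swap.
TRUNCATE TABLE staging_busy;
RENAME TABLE staging_busy TO staging_spare;
```

RENAME TABLE is atomic, which is what makes the ping-pong safe under concurrent writers.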

Get rid of all the triggers, unless you can explain something that won't be efficiently covered by the above suggestions.
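To find and remove them (the trigger name below is hypothetical; use whatever SHOW TRIGGERS reports for your schema):

```sql
SHOW TRIGGERS;                              -- list the triggers on this DB
DROP TRIGGER IF EXISTS user_after_insert;   -- then drop each one by name
```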

180 s for an INSERT is what can happen when connections and queries pile up. Your Max_used_connections and Threads_running were probably very high; more than a few dozen for either can lead to numbers like "180 s".

It would help to know the "flavor" of the application: data warehousing? A multiplayer game? Search?

EDIT:

Yes, convert all of your tables to InnoDB. Then lower key_buffer_size to 40M and raise innodb_buffer_pool_size to 70% of RAM. (Reference: http://mysql.rjweb.org/doc.php/memory )

max_heap_table_size and tmp_table_size, currently at 512M, are dangerously high, especially when hundreds of queries are running at the same time; no more than about 1% of RAM is wise. This change may have no impact, or it may slow some large, complex SELECTs, but it should help prevent swapping, which could be the real problem you are experiencing. (Swapping is terrible for MySQL.)
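Putting the settings from this answer together, a my.cnf fragment might look like the following. The 16 GB RAM figure is an assumption for illustration; scale innodb_buffer_pool_size and the tmp-table sizes to your actual machine.

```ini
[mysqld]
# Small: only needed while any MyISAM tables remain.
key_buffer_size                = 40M

# ~70% of RAM once everything is InnoDB (here: a 16 GB box, assumed).
innodb_buffer_pool_size        = 11G

# Flush the log ~once per second instead of at every commit.
innodb_flush_log_at_trx_commit = 2

# Down from 512M; roughly 1% of RAM each.
max_heap_table_size            = 160M
tmp_table_size                 = 160M
```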