I'm the tech lead of a social media SaaS, and we're currently experiencing a big increase in the amount of data we're storing in our MySQL database.
We have a single MySQL database, and besides many really small tables, we have two tables with approximately 500k–1.5M rows.
Currently, when performing DDL statements (adding new columns to these tables), we have to deal with quite a long execution time (around 4–5 minutes).
I started to search for online schema migration tips and found the following:
- Alter table on live production databases
- mysql change schema on the fly
- PT Online Schema Change
- OpenARK: Online Alter Table
- Facebook's Online Schema change
Although these tactics are worth investigating, I started to feel uncomfortable: I don't think that tables with such a low number of rows should require such advanced methods. (What about really huge tables then?)
So I was wondering if any of you have tips regarding my.cnf itself, or other performance tips for DDL queries?
Best Answer
I think that Percona's pt-online-schema-change might be a better solution for you, as it can do an ALTER with very little downtime. It does, however, require adding a TRIGGER to the table.
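As a rough sketch of what that looks like in practice (the database, table, and column names here are hypothetical, and you'd adjust the connection options to your setup), a pt-online-schema-change run for adding a column might be:

```shell
# Hypothetical example: add a column to a `posts` table in database `app`
# with pt-online-schema-change. --dry-run validates the operation without
# creating the shadow table or the triggers.
pt-online-schema-change \
  --alter "ADD COLUMN share_count INT NOT NULL DEFAULT 0" \
  --host=localhost --user=appuser --ask-pass \
  D=app,t=posts \
  --dry-run

# Once the dry run looks good, swap --dry-run for --execute to perform
# the actual online change. The tool copies rows to a shadow table in
# chunks while triggers keep it in sync, then atomically renames it in.
```

Always do the `--dry-run` pass first; the row-copy and trigger overhead is why the tool works with so little blocking, but it does add write load while it runs.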
Running DDL on your table will lead to rebuilding the table. MySQL is a row-based engine, so you will also suffer the index rebuild; this can be quite expensive depending on your storage engine, and table complexity is a factor as well.
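It's also worth checking whether the server can do the rebuild online at all before reaching for external tools. On MySQL 5.6+ with InnoDB, you can ask for an in-place, non-locking ALTER explicitly; the statement then fails fast instead of silently falling back to a blocking table copy. A sketch (table and column names are hypothetical):

```shell
# Hypothetical sketch: on MySQL 5.6+ / InnoDB, request an in-place ALTER
# that allows concurrent reads and writes. If the server cannot satisfy
# ALGORITHM=INPLACE or LOCK=NONE for this operation, it returns an error
# rather than quietly doing a locking table copy.
mysql -u appuser -p app -e \
  "ALTER TABLE posts
     ADD COLUMN share_count INT NOT NULL DEFAULT 0,
     ALGORITHM=INPLACE, LOCK=NONE;"
```

Note that INPLACE still rebuilds the table data for ADD COLUMN on 5.6; it just does so without blocking concurrent DML, which may already be enough at the 500k–1.5M row scale described above.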