MySQL/Percona Server shutdown is taking long and I’d prefer to skip all this waiting even if it means data loss

innodb, mysql, percona-server, shutdown, transaction

Originally posted to https://www.facebook.com/groups/GNUAndLinux/permalink/10155020539805019/ but no replies received.

I run Percona Server 5.6.21-70.0-688 x64 on Ubuntu Server 12.04 LTS x64, with 64GB RAM, 2x2TB SATA* in hardware RAID0, and an Intel Xeon E5-1620 v2 3.70GHz. InnoDB/XtraDB is the primary engine (with the Barracuda format), with a 20GB (21 after a recent change) buffer pool divided into 20 (21 after the recent change) buffer pool instances. zlib compression is set to its maximum level of 9.

The database server and its threads are run at nice -20 and ionice -c 1 -n 1.

Almost every server on rautamiekka.org depends on MySQL databases, especially the Minecraft servers.

I found out how to generate a complete list of IPv4 addresses into a text file. I created a table with an auto-increment column and one column per address section, five columns in total. I wrote a Python script that started a transaction, explicitly disabled autocommit, and uploaded those addresses. The script had uploaded so many addresses from the 115GB file that the table held about 51GB of data, likely already compressed. Then Percona Server 5.6.21-70.1 came out. I thought I'd run the routine update process because I expected no problems, so I shut down the Minecraft servers and every other server depending on MySQL, and forgot about the Python script. However, after fighting with aptitude and apt-get for a few minutes, I realized the database server was still running but not accepting any connections since it was already in the shutdown process. I noticed the Python script had quit one way or another.

That was Monday 2014-11-24, and the processes/threads have been running ever since, at 20-50KB/s read (sometimes up to 200) and 120-500KB/s write. It's mainly one thread doing reads and two doing writes, with one process running at least twice as fast as the second; only two processes use significant CPU, though they're not the only ones. I presume it's committing the data into the table and reading the compressed data back to verify it.

My estimate says the shutdown will take 9 days if we assume it writes 80GB at a minimum of 120KB/s, and I warned people it might not be ready until Monday 2014-12-08, although it's already been running for days.

I made changes to the config just recently:

[mysqld]
innodb_buffer_pool_dump_at_shutdown=1
innodb_buffer_pool_load_at_startup=1

I've had

[mysqld]
innodb_fast_shutdown = 0
innodb_flush_method=O_DIRECT

for ages, to ensure InnoDB writes everything before going down so we have a smaller chance of data loss. But because of that we've had to wait for ages, and will probably have to wait at least another week, which I hate.

For reference, I posted a change request to MySQL bugs asking to change the way MySQL handles the shutdown order, to prevent this kind of downtime: http://bugs.mysql.com/bug.php?id=75110

What can I do here? I can consider any method to bring the process down, even data loss if the loss isn't big, particularly if I only lose a portion of the IPv4 addresses, because I can use a MySQL script to continue filling the table. However, I'm afraid I could lose something else, which is almost not an option, but I'll see.

Best Answer

Unfortunately, you have to let InnoDB purge all its moving parts (Redo Logs, Undo Logs, Dirty Pages, Index Changes, Log Buffer Contents, etc).

Going forward, I recommend the following : Set innodb_max_dirty_pages_pct = 0

By default, innodb_max_dirty_pages_pct is 75. If you set it to 0, it forces the InnoDB Storage Engine to flush every dirty page and its grandmother out of the InnoDB Buffer Pool without pausing. It will either flush every page, or flush enough that new dirty pages enter the Buffer Pool only as fast as old dirty pages are flushed out.
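Since innodb_max_dirty_pages_pct is dynamic in MySQL/Percona Server 5.6, it can be applied at runtime, without a restart, before attempting the shutdown:

```sql
-- Force the page cleaner to flush dirty pages aggressively.
-- Dynamic variable in 5.6, so no restart is needed.
SET GLOBAL innodb_max_dirty_pages_pct = 0;

-- Verify the new value took effect:
SHOW GLOBAL VARIABLES LIKE 'innodb_max_dirty_pages_pct';
```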

Flushing dirty pages and cleaning up transactions is what consumes the most time in a shutdown. Minimizing the number of dirty pages shortens shutdown time.

You can monitor the following status variables
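For example, the dirty-page counters in MySQL/Percona Server 5.6 can be checked like this:

```sql
-- Dirty-page gauges in the InnoDB Buffer Pool; both should trend
-- toward zero as flushing catches up.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_bytes_dirty';
```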

Watch them decrease until they hit zero or level off. Then, you can issue the shutdown.

Here are additional settings you could use
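As a sketch (the exact values are workload-dependent assumptions, not prescriptions; these are flushing/purge-related knobs available in MySQL/Percona Server 5.6):

```ini
[mysqld]
# Raise the background-flush I/O budget (default 200 in 5.6)
innodb_io_capacity = 2000
# Pages per buffer pool instance the page cleaner scans (default 1024)
innodb_lru_scan_depth = 2048
# Dedicated purge threads (default 1 in 5.6, max 32)
innodb_purge_threads = 4
```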

Give it a Try !!!

Afterthoughts

Don't worry about dumping the buffer pool map and reloading. The output file is very small.

SATA/RAID0 sounds scary to me. You should be using SAS/RAID10 for InnoDB.

At the very least, you should think about splitting InnoDB data, logs, and system tablespaces onto separate devices. See my earlier post MySQL on SSD - what are the disadvantages?

You should also think about getting away from Barracuda. See my old post innodb_file_format Barracuda

UPDATE 2014-12-07 18:14 EST

If you want to interrupt an InnoDB purge of everything and you don't want to preserve the data being fixed up, your only option is to set the InnoDB recovery options. See my post The InnoDB log sequence number is in the future. If you use the innodb_force_recovery options, it may not let you do any more InnoDB writes. If you have a moderate amount of data, you could mysqldump whatever data can come out, shut down mysqld, delete all data folders (except the mysql schema) plus ib_logfile0, ib_logfile1, and ibdata1, start up mysql, and reload the data.
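A hedged sketch of what that looks like in my.cnf; the level shown is an assumption, and note that from 5.6.15 onward a level of 4 or higher puts InnoDB into read-only mode:

```ini
[mysqld]
# Skip parts of InnoDB crash recovery/purge work at startup.
# 1 is the safest non-zero level; levels >= 4 force read-only in 5.6.15+.
innodb_force_recovery = 3
```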

I cannot really help you if you have 1TB in a force-recovered state.