Believe it or not, I once wrote a post about why you should not do that (How can I disable utf8mb4 entirely on MySQL 5.5?). However, in the spirit of my old post and the commentary in it from @ChristopherSchultz, I will go out on a limb and tell you how you can do it, then tell you why you should not.
I once wrote a post about the home position of any empty binary log:
Over the years in this forum, I learned from someone (I think it was either Aaron Brown or Morgan Tocker) that there is a universal position for all binary logs regardless of the MySQL Version: position 4.
I once put that in an answer (Mar 05, 2013: MySQL Replication without stopping master). In Step 06 of my answer I wrote this:
CHANGE MASTER TO
MASTER_HOST='10.1.20.30',
MASTER_PORT=3306,
MASTER_USER='repluser',
MASTER_PASSWORD='replpass',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=4;
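The reason position 4 is universal is simple: every MySQL binary log file starts with a 4-byte magic number (0xfe followed by the ASCII letters "bin"), so the first real event always begins at byte offset 4. A quick sketch to see it for yourself (the binlog path is an assumption; point it at any binlog on your server):

```shell
# Every MySQL binary log begins with the 4-byte magic header 0xfe 'b' 'i' 'n',
# so the first event always starts at offset 4 -- hence MASTER_LOG_POS=4.
# The path below is an assumption; substitute any real binlog.
BINLOG="${BINLOG:-/var/lib/mysql/mysql-bin.000001}"
if [ -f "$BINLOG" ]; then
    # -A d: decimal offsets, -t x1: one hex byte per column, -N 4: first 4 bytes
    od -A d -t x1 -N 4 "$BINLOG"    # shows: fe 62 69 6e
fi
```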
I also used position 4 in these other posts
Rarely do I repeat this info in other posts, for a reason. Personally, I fear that binlog events might be represented differently from version to version in terms of the size (in bytes) of each event. Believe it or not, over the past two weeks I have been upgrading DB Servers from MySQL 5.5 to MySQL 5.6. Due to mixed-mode binary logging, there have been rare events where replication breaks and you cannot reset it from binlog files and positions by standard replication techniques. I have had to hose the binary logs on the Master, copy data, and set up replication from scratch a few times (5 out of 400 VMs, but it still happened 5 times). I am very sure that replicating from a new Master to an old Slave would cause many more problems along these lines.
Therefore, I can only say that you can do it theoretically and MySQL may not object, that is, until MySQL Replication encounters a binlog event that is in a format it does not recognize and cannot interpret.
UPDATE 2014-11-18 22:32 EST
Just for official reference, this example CHANGE MASTER TO command
CHANGE MASTER TO
MASTER_HOST='master2.mycompany.com',
MASTER_USER='replication',
MASTER_PASSWORD='bigs3cret',
MASTER_PORT=3306,
MASTER_LOG_FILE='master2-bin.001',
MASTER_LOG_POS=4,
MASTER_CONNECT_RETRY=10;
appears in the MySQL 5.6 Documentation. It's also in the MySQL 4.1 Documentation.
Thus, position 4 has always been known (I have only known it for a couple of years). Notwithstanding, I trust MySQL Replication from an old Master to a new Slave (but not on a permanent basis). I do not trust MySQL Replication from a new Master to an old Slave.
UPDATE 2014-11-19 17:47 EST
Please don't go down the Circular Replication path, as it just adds to the risk of lost binlog events due to different versions. You should always replicate in one direction, toward the newer version. Then, just fail over to the newer version.
It is a brave soul indeed that still uses an all-MyISAM database.
Why are writes being quietly discarded instead of being queued, as they would be for a regular table lock?
Look at the error message again
[20-Nov-2014 01:41:56 UTC] Wordpress database error The MySQL server is running
with the --read-only option so it cannot execute this statement for query
INSERT INTO wp_options
...
WordPress is reporting this as the error, but it is not a failure on the part of MySQL. You turned on read_only. That's what WordPress sees.
And more importantly, how can I fix that so they DO get queued and are eventually run in the order they were requested? :)
Just remove read_only and let the read lock do the queueing:
FLUSH TABLES WITH READ LOCK;
system time cp -pdRu /mysql_data_dir/thedb /backup_dir/thedb
UNLOCK TABLES;
The DB Connections should start to queue up all INSERTs, UPDATEs, and DELETEs.
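Since the advice above is to remove read_only, here is a sketch of actually clearing it from the shell (SET GLOBAL read_only is the standard variable and requires the SUPER privilege; the credentials are assumptions):

```shell
# Clear the global read_only flag so writes get queued by the table lock
# instead of being rejected outright. Credentials below are assumptions.
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQL="SET GLOBAL read_only = OFF; SHOW GLOBAL VARIABLES LIKE 'read_only';"
# Guarded: only run the statement when a server actually answers a ping.
if mysqladmin ${MYSQL_CONN} ping >/dev/null 2>&1; then
    mysql ${MYSQL_CONN} -e "${SQL}"
fi
```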
You said something far more profound in the question
Both reads AND writes need to be possible during the backup. But the backup time is very short, so it's fine for the writes to be queued until the "unlock tables" command.
SUGGESTION #1
You need to convert all your tables to InnoDB. Reads and writes can continue and queue up just fine without FLUSH TABLES WITH READ LOCK. You would dump the database like this:
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
MYSQLDUMP_OPTIONS="--single-transaction --routines --triggers"
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} thedb > thedb.sql
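Restoring that dump is symmetric; a sketch using the same assumed credentials and file name (recreate the database first in case it does not exist):

```shell
# Hypothetical restore of the dump produced above; all names are assumptions.
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
DUMPFILE=thedb.sql
# Guarded: only attempt the restore when a server answers and the dump exists.
if mysqladmin ${MYSQL_CONN} ping >/dev/null 2>&1 && [ -f "${DUMPFILE}" ]; then
    mysql ${MYSQL_CONN} -e "CREATE DATABASE IF NOT EXISTS thedb"
    mysql ${MYSQL_CONN} thedb < "${DUMPFILE}"
fi
```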
SUGGESTION #2
If you want everything to remain MyISAM, you should get another DB Server and set up MySQL Replication. All your writes would happen on the Master. On the Slave, you would do the backups like this:
STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
system time cp -pdRu /mysql_data_dir/thedb /backup_dir/thedb
UNLOCK TABLES;
START SLAVE;
You can also split your reads between Master and Slave, or do all reads on the Slave.
EPILOGUE
If you go with SUGGESTION #1, convert everything to InnoDB with this script
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
MYISAM_TO_INNODB_CONVERSION_SCRIPT=/tmp/ConvertMyISAMToInnoDB.sql
SQL="SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=InnoDB;')"
SQL="${SQL} FROM information_schema.tables WHERE engine='MyISAM' AND"
SQL="${SQL} table_schema NOT IN ('information_schema','mysql','performance_schema')"
SQL="${SQL} ORDER BY (data_length+index_length)"
mysql ${MYSQL_CONN} -ANe"${SQL}" > ${MYISAM_TO_INNODB_CONVERSION_SCRIPT}
mysql ${MYSQL_CONN} < ${MYISAM_TO_INNODB_CONVERSION_SCRIPT}
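After the conversion script runs, it is easy to verify that nothing was missed; a sketch that counts the MyISAM tables left outside the system schemas (same assumed credentials; a count of 0 means every user table is now InnoDB):

```shell
# Verify the MyISAM -> InnoDB conversion left no stragglers behind.
# Credentials are assumptions, matching the examples above.
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQL="SELECT COUNT(*) FROM information_schema.tables WHERE engine='MyISAM'"
SQL="${SQL} AND table_schema NOT IN ('information_schema','mysql','performance_schema')"
# Guarded: only query when a server actually answers a ping.
if mysqladmin ${MYSQL_CONN} ping >/dev/null 2>&1; then
    mysql ${MYSQL_CONN} -ANe"${SQL}"
fi
```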
GIVE IT A TRY !!!
UPDATE 2014-11-21 10:52 EST
In your comment, you just asked
Is there any major reason to NOT install a 2nd MySQL instance (the slave) on the same server?
There is a huge reason why you do not want to set up a 2nd MySQL instance on the same box to do replication.
- Everything is MyISAM
- MyISAM only caches indexes, never data
- MyISAM defers caching data to the OS
- Master and Slave caching the same data in two different DB instances means twice as much data for the OS to cache
- See my other reasons in my old post Is it unwise to run replication on same physical server?
If you convert everything to InnoDB (which caches data and indexes, and has failsafes for crash recovery and transactions), then, yes, you could set up a 2nd MySQL instance for replication on the same server (provided you have sufficient RAM and disk space for the 2nd instance).
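If you do go that route after converting to InnoDB, the second instance needs its own port, datadir, socket, and server-id so it does not collide with the first. A minimal sketch of the second instance's option file (every path, port, and size here is an assumption to adapt):

```ini
# my2.cnf -- hypothetical option file for a 2nd instance on the same box
[mysqld]
port                    = 3307
datadir                 = /var/lib/mysql2
socket                  = /var/lib/mysql2/mysql.sock
server-id               = 2
# Size the buffer pool so both instances fit in RAM together
innodb_buffer_pool_size = 1G
```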
Best Answer
In order to perform an upgrade with (almost) no downtime, you need to add some extra infrastructure for high availability. The most common way to perform that is with the help of MySQL built-in replication:
There are many things to take into account here: an upgrade from 5.1 to 5.6 (even if you follow upstream MySQL, and with more reason if you change provider) is not guaranteed to go smoothly. You will encounter configuration changes, query optimizer changes, and SQL syntax changes. You may want to do both load and validation tests. There are many tools for load testing from your own logs (mysqlslap, Percona Playback), but I would recommend at least pt-upgrade for compatibility testing. You may also want to watch closely the "What's New" MySQL documents to take advantage of the new features, and their equivalents for MariaDB/Percona.
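A sketch of the pt-upgrade compatibility test mentioned above: it replays queries from a slow log against both versions and reports differences in results, errors, and query plans. The hostnames and log path are assumptions, and the exact invocation varies a bit between Percona Toolkit releases, so check your version's documentation:

```shell
# Replay a slow query log against the old and new servers and diff the
# outcomes. Hostnames and log path below are assumptions.
SLOWLOG=/var/log/mysql/slow.log
# Guarded: only run when pt-upgrade is installed and the log exists.
if command -v pt-upgrade >/dev/null 2>&1 && [ -f "${SLOWLOG}" ]; then
    pt-upgrade "${SLOWLOG}" h=old-5.1-host h=new-5.6-host
fi
```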
Even if you decide to perform the upgrade not using replication (doing a cold restart), it is still a very useful tool to test the new server version beforehand.
Regarding the upgrade itself, 5.1 -> 5.6 can be done in binary format (I would recommend transitioning through 5.5 for easier fixing of errors), but if you do not have a lot of data, I would alternatively recommend considering whether a logical export/import is feasible, to avoid later rebuilds. This is very dependent on your actual setup.
Final words: remember to perform a backup beforehand and execute mysql_upgrade afterwards. Check out a recent article on how GitHub did a datacenter migration using a similar process.