What could go wrong? If MyISAM changed the format of how something is stored in its files between versions, those files will break under 5.1. That would render your stats database on the master corrupt and unusable.
Even if the initial load goes fine, that doesn't mean something won't come along later with the result described above: a corrupt database. It's a risk, and if it can be avoided, I personally would avoid it.
And in your case, it can be avoided by dumping the data with mysqldump and then loading the dump into the 5.1 instance.
I see this query in your SHOW INNODB STATUS\G output:
CREATE TABLE 1_temp_foo AS
SELECT SQL_NO_CACHE
a.*
FROM
crm_companies AS a
LEFT JOIN users b ON a.zipcode = b.uid
LEFT JOIN calc_base_materials c ON a.zipcode = c.material_id
LEFT JOIN calc_base_material_langtext d ON a.zipcode = d.material_id
LEFT JOIN crm_people e ON a.zipcode = e.telephone1_number
ORDER BY a.country, a.name1
This query gives me the creeps because it combines three things you may not have thought of:
- InnoDB is involved, based on your initial premise: "Using: MySQL 5.1.41 and InnoDB Tables".
- MyISAM is also involved. Why? ALL INTERNAL TEMP TABLES ARE MyISAM !!! The resulting join is materialized as a MyISAM table that must be converted to InnoDB once the temp table has been populated. And what is the default lock level for MyISAM tables? Table-level locking.
- DDL is involved, since a newly created table must be brought into existence. That new table is not manifested until the temp table is populated, converted to InnoDB, and finally renamed to 1_temp_foo.
There is another side effect worth noting. When you do
CREATE TABLE tblname AS SELECT ...
the resulting table has no indexes.
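This is easy to verify for yourself. Here is a minimal sketch using SQLite from Python, which shows the analogous behavior (CREATE TABLE ... AS SELECT copies the data but none of the indexes); the table and index names are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, name TEXT)")
conn.execute("CREATE INDEX idx_src_name ON src (name)")

# copy the table via CREATE TABLE ... AS SELECT
conn.execute("CREATE TABLE dst AS SELECT * FROM src")

def indexes(table):
    sql = "SELECT name FROM sqlite_master WHERE type='index' AND tbl_name=?"
    return [r[0] for r in conn.execute(sql, (table,))]

print(indexes("src"))  # ['idx_src_name']
print(indexes("dst"))  # [] -- no indexes were copied
```

The same check in MySQL would be SHOW INDEX FROM dst; after the CTAS, it comes back empty.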
I have something you might find helpful to bypass the locking issue. It involves creating the table first as a separate statement, then populating it. There are two options for creating your temp table:
OPTION #1 : Try creating the table with the same layout
CREATE TABLE 1_temp_foo LIKE crm_companies;
This will create the table 1_temp_foo with the exact same indexes and storage engine as the original table crm_companies.
OPTION #2 : Try creating the table with the same columns but no indexes, then set the storage engine explicitly.
CREATE TABLE 1_temp_foo SELECT * FROM crm_companies WHERE 1=2;
ALTER TABLE 1_temp_foo ENGINE=InnoDB;
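The WHERE 1=2 trick works because the SELECT matches no rows yet still defines the column layout. A small sketch of the idea using SQLite from Python (MySQL behaves the same way for the column layout; the table name is simplified since identifiers starting with a digit need quoting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crm_companies (id INTEGER, name TEXT, zipcode TEXT)")
conn.execute("INSERT INTO crm_companies VALUES (1, 'Acme', '12345')")

# WHERE 1=2 matches no rows, so only the column layout is copied
conn.execute("CREATE TABLE temp_foo AS SELECT * FROM crm_companies WHERE 1=2")

cols = [row[1] for row in conn.execute("PRAGMA table_info(temp_foo)")]
rows = conn.execute("SELECT COUNT(*) FROM temp_foo").fetchone()[0]
print(cols, rows)  # ['id', 'name', 'zipcode'] 0
```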
After creating the table (whichever way you choose), you can now populate the table like this:
INSERT INTO 1_temp_foo
SELECT SQL_NO_CACHE a.*
FROM
crm_companies AS a
LEFT JOIN users b ON a.zipcode = b.uid
LEFT JOIN calc_base_materials c ON a.zipcode = c.material_id
LEFT JOIN calc_base_material_langtext d ON a.zipcode = d.material_id
LEFT JOIN crm_people e ON a.zipcode = e.telephone1_number
ORDER BY a.country, a.name1
;
Now, this query should produce row-level locks, with the data made available for repeatable reads. In other words, this is a transactional query.
CAVEAT
OPTION #2 has advantages over OPTION #1:
- Advantage #1 : If crm_companies has any foreign key constraints, OPTION #1 is not really possible. You would have to choose OPTION #2 for the sake of simplicity.
- Advantage #2 : Since OPTION #2 creates a table with no user-defined indexes, the table should load faster than if the table were made via OPTION #1.
Best Answer
+1 to @RolandoMySQLDBA of course for another valiant answer. But more to the point of your question:
YES.
You can't get a consistent backup even for a single MyISAM table unless you do some type of locking to prevent writes. Rolando gave a pretty thorough answer with options available to you for locking.
One other option that people use for backing up MyISAM data: LVM snapshots. See http://www.lenzg.net/mylvmbackup/ for a great tool to assist with this.
The final recommendation is to stop using MyISAM, and use InnoDB instead. Then you can do fast, non-locking physical backups with Percona XtraBackup.
Re your comment:
Because reading through a large file isn't instantaneous or atomic. While your backup is progressing through the table, other concurrent updates could change both rows that your backup has already read, and rows that your backup hasn't reached yet.
Take a textbook example for transaction behavior: I do a bank transfer by debiting my bank account and crediting your bank account. My bank account is stored on a row that is physically early in the file, and your bank account is stored on a row later in the file.
While this is going on, the backup is reading through the file, and it has read up to a mid-point at the time our transaction happens. When we restore, we get my original account balance, without the debit applied, because the backup had already passed that point when the debit occurred (and it isn't able to go backwards). But the restore includes your updated account balance, because the backup got to that point in the file after we increased your balance.
Ergo, free money! ;-)
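The free-money scenario can be simulated directly. Here is a toy sketch in Python (the account names are made up; a dict stands in for the data file, read in storage order):

```python
# two accounts stored in file order: mine first, yours second
accounts = {"mine": 500, "yours": 500}

backup = {}
backup["mine"] = accounts["mine"]      # backup reads the early row: 500

# mid-backup, a transfer debits my account and credits yours
accounts["mine"]  -= 100
accounts["yours"] += 100

backup["yours"] = accounts["yours"]    # backup reads the late row: 600

# the live database is consistent, but the backup is not
print(sum(accounts.values()))  # 1000
print(sum(backup.values()))    # 1100 -- free money!
```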
MyISAM solves this by requiring the table to be locked against updates while the backup reads it.
InnoDB solves this by keeping multiple versions of rows, for as long as a reading transaction needs to see them for the sake of a consistent view of the database. So anyone can update the data without waiting, even though the backup is in progress.
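You can sketch that consistent-read behavior using SQLite's WAL mode from Python as a stand-in for InnoDB's MVCC (the schema is hypothetical): a reader holding an open transaction keeps seeing its snapshot even while a writer commits changes underneath it.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")  # WAL gives readers snapshot isolation
writer.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO accounts VALUES (1, 500)")

# the "backup" opens a transaction; its first read establishes the snapshot
reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")
before = reader.execute("SELECT balance FROM accounts WHERE id=1").fetchone()[0]

# a concurrent update commits while the read transaction is still open
writer.execute("UPDATE accounts SET balance = 400 WHERE id=1")

during = reader.execute("SELECT balance FROM accounts WHERE id=1").fetchone()[0]
reader.execute("COMMIT")
after = reader.execute("SELECT balance FROM accounts WHERE id=1").fetchone()[0]

print(before, during, after)  # 500 500 400
```

The writer never waits on the reader, and the reader never sees the half-finished state: exactly the property a non-locking backup needs.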
Percona XtraBackup solves this in a slightly different way: it can go backwards, in a way. While it's reading the data file, it keeps checking the transaction log continually, to see if there are any late changes it needs to include. These changes may apply to parts of the datafile that Percona XtraBackup has already read. But as long as it gets the data file plus any changes that were logged since the backup started, it can reconstruct the full database.
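That copy-then-replay idea can be sketched as a toy in Python (a dict stands in for the data file, a list for the transaction log; all names are illustrative, not XtraBackup's actual mechanics): copy the file non-atomically, record every change committed since the copy began, then replay that log onto the copy.

```python
# live "data file" and a transaction log of committed changes
datafile = {"mine": 500, "yours": 500}
txlog = []

def update(key, value):
    datafile[key] = value
    txlog.append((key, value))  # every committed change is logged

# --- backup begins: copy rows one at a time, racing with updates ---
log_start = len(txlog)
copy = {}
copy["mine"] = datafile["mine"]        # reads the early row
update("mine", 400)                    # transfer debits "mine"...
update("yours", 600)                   # ...and credits "yours"
copy["yours"] = datafile["yours"]      # reads the late row (already credited)

# the raw copy is inconsistent (it sums to 1100), but replaying the log
# entries recorded since the backup started repairs it
for key, value in txlog[log_start:]:
    copy[key] = value

print(copy == datafile)  # True
```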
But that only works for storage engines like InnoDB, that create a reliable transaction log. Percona XtraBackup can also back up MyISAM, but only by using locking, like any other backup tool.