I can see mysqldump being a bit of a pain in the neck because of transporting data out and then back in.
Let's say you are copying data from proddb to stagedb. Here is a query to generate a script that copies every table laterally (the examples below use stack_dev as the source schema and stack_test as the target):
select concat
(
    'drop table if exists ',db2,'.',tbl,'; ',
    'create table ',db2,'.',tbl,' like ',db1,'.',tbl,'; ',
    'insert into ',db2,'.',tbl,' select * from ',db1,'.',tbl,';'
)
from
    (select table_schema db1, table_name tbl
     from information_schema.tables
     where table_schema='stack_dev') tbls,
    (select 'stack_test' db2) dbname;
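To make the output concrete, here is a small shell sketch that builds the same three statements per table. The table names users and orders are hypothetical, purely for illustration; the real query reads the names from information_schema:

```shell
# Build the per-table copy statements the query above generates.
# "users" and "orders" are made-up table names for illustration only;
# the actual query pulls the list from information_schema.tables.
db1=stack_dev
db2=stack_test
script=""
for tbl in users orders; do
  script="${script}drop table if exists ${db2}.${tbl}; "
  script="${script}create table ${db2}.${tbl} like ${db1}.${tbl}; "
  script="${script}insert into ${db2}.${tbl} select * from ${db1}.${tbl};
"
done
printf '%s' "$script"
```

Each table gets one line with its drop/create/insert trio, which is exactly the shape of the script the query emits.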
Just generate the script as a SQL text file and run that file against the target server:
PROD_IP=10.1.2.20
STAG_IP=10.1.2.40
mysql -h${PROD_IP} -uusername -p --skip-column-names -A -e"select concat('drop table if exists ',db2,'.',tbl,'; ','create table ',db2,'.',tbl,' like ',db1,'.',tbl,'; ','insert into ',db2,'.',tbl,' select * from ',db1,'.',tbl,';') from (select table_schema db1,table_name tbl from information_schema.tables where table_schema='stack_dev') tbls,(select 'stack_test' db2) dbname;" > /root/CopyFromProdToStage.sql
mysql -h${STAG_IP} -uusername -p -A < /root/CopyFromProdToStage.sql
This should work just fine for MyISAM, and for InnoDB as long as there are no foreign-key constraints. If there are constraints, you may have to disable foreign key checks for the import session (the setting is session-scoped, so it reverts automatically when the import connection closes):
PROD_IP=10.1.2.20
STAG_IP=10.1.2.40
echo "SET FOREIGN_KEY_CHECKS=0;" > /root/CopyFromProdToStage.sql
mysql -h${PROD_IP} -uusername -p --skip-column-names -A -e"select concat('drop table if exists ',db2,'.',tbl,'; ','create table ',db2,'.',tbl,' like ',db1,'.',tbl,'; ','insert into ',db2,'.',tbl,' select * from ',db1,'.',tbl,';') from (select table_schema db1,table_name tbl from information_schema.tables where table_schema='stack_dev') tbls,(select 'stack_test' db2) dbname;" >> /root/CopyFromProdToStage.sql
mysql -h${STAG_IP} -uusername -p -A < /root/CopyFromProdToStage.sql
UPDATE 2012-03-09 17:14 EST
I have a stored procedure that will accomplish this. Run the following only once:
DROP DATABASE IF EXISTS utility;
CREATE DATABASE utility;
DELIMITER $$
DROP PROCEDURE IF EXISTS `utility`.`CopyDB` $$
CREATE PROCEDURE `utility`.`CopyDB` (sourceDB VARCHAR(64), targetDB VARCHAR(64))
TheStoredProcedure:BEGIN
    DECLARE found_count,ndx,ndx_last INT;

    -- Bail out if the source database has no tables
    SELECT COUNT(1) INTO found_count
    FROM information_schema.tables
    WHERE table_schema = sourceDB;
    IF found_count = 0 THEN
        LEAVE TheStoredProcedure;
    END IF;

    -- Work tables are qualified with utility. so the procedure
    -- also works when no default database is selected
    DROP TABLE IF EXISTS utility.DBTablesToCopy;
    CREATE TABLE utility.DBTablesToCopy
    (
        id INT NOT NULL AUTO_INCREMENT,
        src VARCHAR(64),
        tgt VARCHAR(64),
        tbl VARCHAR(64),
        PRIMARY KEY (id)
    ) ENGINE=MyISAM;
    DROP TABLE IF EXISTS utility.SQLScriptToCopyTables;
    CREATE TABLE utility.SQLScriptToCopyTables
    (
        id INT NOT NULL AUTO_INCREMENT,
        sqltext VARCHAR(1024),
        PRIMARY KEY (id)
    ) ENGINE=MyISAM;

    INSERT INTO utility.DBTablesToCopy (src,tgt,tbl)
    SELECT sourceDB,targetDB,table_name
    FROM information_schema.tables
    WHERE table_schema = sourceDB;

    INSERT INTO utility.SQLScriptToCopyTables (sqltext) VALUES
    (CONCAT('DROP DATABASE IF EXISTS ',targetDB));
    INSERT INTO utility.SQLScriptToCopyTables (sqltext) VALUES
    (CONCAT('CREATE DATABASE ',targetDB));

    -- Generate a CREATE TABLE ... LIKE for every table
    SELECT MAX(id) INTO ndx_last FROM utility.DBTablesToCopy;
    SET ndx = 0;
    WHILE ndx < ndx_last DO
        SET ndx = ndx + 1;
        INSERT INTO utility.SQLScriptToCopyTables (sqltext)
        SELECT CONCAT('CREATE TABLE ',tgt,'.',tbl,' LIKE ',src,'.',tbl)
        FROM utility.DBTablesToCopy WHERE id = ndx;
    END WHILE;

    -- Generate an INSERT ... SELECT for every table
    SET ndx = 0;
    WHILE ndx < ndx_last DO
        SET ndx = ndx + 1;
        INSERT INTO utility.SQLScriptToCopyTables (sqltext)
        SELECT CONCAT('INSERT INTO ',tgt,'.',tbl,' SELECT * FROM ',src,'.',tbl)
        FROM utility.DBTablesToCopy WHERE id = ndx;
    END WHILE;

    -- Run every generated statement via a prepared statement
    SELECT MAX(id) INTO ndx_last FROM utility.SQLScriptToCopyTables;
    SET ndx = 0;
    WHILE ndx < ndx_last DO
        SET ndx = ndx + 1;
        SELECT sqltext INTO @stmt FROM utility.SQLScriptToCopyTables WHERE id = ndx;
        PREPARE s1 FROM @stmt;
        EXECUTE s1;
        DEALLOCATE PREPARE s1;
    END WHILE;
END $$
DELIMITER ;
Then, any time you feel like copying stack_dev to stack_test, just do this:
call utility.CopyDB('stack_dev','stack_test');
and that's all. Give it a try!
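After the copy, a quick sanity check is to compare the two schemas. This is just a sketch, assuming both schemas live on the same server; note that table_rows in information_schema is only an estimate for InnoDB, so for exact row counts run SELECT COUNT(*) against the tables you care about:

```sql
-- Verify that every table made it across (exact table counts)
SELECT table_schema, COUNT(*) AS table_count
FROM information_schema.tables
WHERE table_schema IN ('stack_dev','stack_test')
GROUP BY table_schema;

-- Approximate row counts per table (estimate only for InnoDB)
SELECT table_schema, table_name, table_rows
FROM information_schema.tables
WHERE table_schema IN ('stack_dev','stack_test')
ORDER BY table_name, table_schema;
```

If the table counts differ, some tables failed to copy and the generated script is worth inspecting.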
Amazon RDS for MySQL gives you two options:
OPTION #1
You set up a Read Slave, then run the reports from it. There is a tool in the RDS CLI called rds-create-db-instance-read-replica
that creates a Read Slave from a running RDS instance. The following are done under the hood:
- Snapshot Made
- Snapshot Brought Up
- CHANGE MASTER TO executed with proper binary log and position
OPTION #2
Take a live snapshot of Production. Then, launch a new instance from that snapshot.
The tools for this are
rds-create-db-snapshot
rds-restore-db-instance-from-db-snapshot
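As an aside, the standalone rds-* command-line tools have since been folded into the unified AWS CLI; the equivalent calls look roughly like this (the instance and snapshot identifiers are hypothetical, and real runs require AWS credentials):

```shell
# Hypothetical identifiers, shown for illustration only.
aws rds create-db-snapshot \
    --db-instance-identifier prod-mysql \
    --db-snapshot-identifier prod-mysql-report-snap

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier report-mysql \
    --db-snapshot-identifier prod-mysql-report-snap
```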
CONCLUSION
The only difference between these options:
- the Read Slave is continuously updated
- the snapshot is brought up as a point-in-time instance
In both cases, once you are done, simply drop the new instance (it saves money) at your discretion.
If you combine the two ideas, you could load your PHP script's output into the point-in-time instance. Any data you still need from production could then be read from the Read Slave, preventing additional load on production.
WARNING
Bringing up instances can be time-consuming, so plan to have the instance up early enough to leave time for both the import and the report.
This is more ranting and brainstorming than a concrete answer.
Best Answer
It looks like it must have been the availability zone (sub-region) that was the issue. The slow loading happened when the EC2 instance holding the *.sql file was in us-east-1a while the RDS instance was in us-east-1b.