How can I dump a specific table or set of tables without including the rest of the db tables?
MySQL – How to mysqldump Specific Tables
Tags: backup, MySQL, mysqldump
Related Solutions
You will have to custom-script each table
The --where option is designed to help mysqldump dump a subset of rows from a single table.
Here is a post I made back on August 15, 2011:
Is it possible to mysqldump a subset of a database required to reproduce a query?
Here is a crazy example.
Suppose the table in the myworld database is this:
CREATE TABLE data_for_world
(
id int not null auto_increment,
continent varchar(16),
primary key (id)
);
insert into data_for_world (continent)
values ('Asia'),('Africa'),('Europe'),
('NorthAmerica'),('SouthAmerica'),
('Australia'),('Antarctica');
and you want to break it up into two dumps
- one with continents that start with A
- one with continents that do not start with A
You will need three files (one for the database schema, and two for the data)
mysqldump --no-data --databases myworld > myworld_schema.sql
mysqldump --no-create-info myworld data_for_world --where="substr(continent,1,1)='A'" > data1.sql
mysqldump --no-create-info myworld data_for_world --where="substr(continent,1,1)<>'A'" > data2.sql
Conjecture #1 : Single Transaction
The dump of the sys_data database has to create far more MVCC information to load and dump from ibdata1. Looking at the InnoDB architecture diagram, notice the section of ibdata1 that holds 1023 undo logs.
When you dump an entire database, mysqldump exports the tables in alphabetical order. There can be times when undo logs are populated with row data, to be replayed in the event of a crash and recovery as needed. Perhaps those logs are being loaded and unloaded by the dumps of earlier tables. The undo space populated by the dump of previous tables (tables before tb_trade_376) may need some housecleaning while the dump of tb_trade_376 is in progress. This would be particularly true on a very busy server where INSERTs, UPDATEs, and DELETEs are being applied to those previous tables.
Conjecture #2 : InnoDB Buffer Pool
If you are not writing anything to the database at all, another place to look would be the InnoDB Buffer Pool. Think about it: the data pages of every table being dumped have to be loaded into the Buffer Pool simply because you are doing a SELECT. The SQL_NO_CACHE option prevents the query results from entering the query cache, but does not prevent data movement in and out of the Buffer Pool. The data pages of the previous tables have to be invalidated and overwritten in the Buffer Pool for each and every table dumped before tb_trade_376.
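One way to probe this conjecture is to sample the buffer pool's read counters before and after the dump. This is only a sketch: the credentials are placeholders, and the `|| true` merely keeps it safe to run without a live server.

```shell
# Sketch: sample buffer-pool read activity before and after the dump.
# Rising Innodb_buffer_pool_reads means pages are being read from disk,
# i.e., older pages were evicted to make room for the tables being dumped.
AUTH="-u root -pMyPassword"   # placeholder credentials; adjust for your server
Q="SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"
mysql ${AUTH} -ANe "${Q}" 2>/dev/null || true
```

Comparing the counters before the dump and after tb_trade_376 starts would show whether earlier tables flushed it out of the pool.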
Conjecture #3 : innodb_file_per_table
If innodb_file_per_table is disabled, then everything and its grandmother is in ibdata1. What would it contain? ibdata1 would have the following:
- Data Pages for Every InnoDB Table
- Index Pages for Every InnoDB Table
- Data Dictionary
- Double Write Buffer (supports data consistency; used for crash recovery)
- Insert Buffer (Buffers Changes to Secondary Non-Unique Indexes)
- Rollback Segments
- Undo Space (where the most uncontrolled growth can happen)
This means that the Undo Logs are competing for space with the data and index pages of every table. This would cause ibdata1 to grow rapidly. If you have innodb_file_per_table disabled, you need to clean up the InnoDB infrastructure to keep InnoDB tables out of ibdata1.
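You can check whether this conjecture applies to your server by inspecting the setting itself. A sketch, with placeholder credentials:

```shell
# Sketch: check whether InnoDB tables live in their own .ibd files
# or are all packed into ibdata1. OFF means everything is in ibdata1.
AUTH="-u root -pMyPassword"   # placeholder credentials; adjust for your server
Q="SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table'"
mysql ${AUTH} -ANe "${Q}" 2>/dev/null || true
```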
Conjecture #4 : MyISAM Tables
How can MyISAM tables cause a problem? If any of the tables in the mysqldump are MyISAM and still receiving INSERTs, UPDATEs, and DELETEs during the database dump, it could possibly disable the checkpoint mechanism of the --single-transaction option midstream. This is plausible because MyISAM is a non-transactional storage engine. If any of the previous tables are MyISAM and are still receiving writes, all bets are off on all the tables in the dump sharing the same point in time. Each table dumped after a MyISAM table is encountered is basically on its own in a transactional sense.
SUMMARY
You may have one or more of these issues going on. Please compensate with one of the following:
- Arrange to dump each table into a separate file
- Set up MySQL Replication
- You have freedom to tune MySQL on the Slave
- You can stop the Slave and dump from the Slave. This makes point-in-time simple, since a stopped slave receives no more writes. You can start the slave again once all dumps are done.
- Deal with MyISAM
- Convert all MyISAM tables to InnoDB. Prior to MySQL 5.6, a MyISAM table cannot be converted to InnoDB if there are FULLTEXT indexes.
- If you cannot convert a MyISAM table, please stop writing to it during the dump
- Do the ibdata1 cleanup
- Use a larger buffer pool
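The first suggestion, dumping each table into its own file, can be scripted by pulling the table list from information_schema. This is only a sketch: the credentials and database name (mydb) are placeholders.

```shell
# Sketch: dump every table of a database into its own file.
DB=mydb                        # placeholder database name
AUTH="-u root -pMyPassword"    # placeholder credentials; adjust for your server
SQL="SELECT table_name FROM information_schema.tables WHERE table_schema='${DB}'"
# -A skips reading table metadata; -N suppresses column headers
for TB in $(mysql ${AUTH} -ANe "${SQL}" 2>/dev/null)
do
    mysqldump ${AUTH} ${DB} ${TB} > "${DB}.${TB}.sql"
done
```

Dumping table by table also sidesteps the single giant transaction, at the cost of point-in-time consistency across tables.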
Best Answer
If you are dumping only tables t1, t2, and t3 from mydb:
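The command itself appears to have been lost in this copy of the answer; a sketch of the usual form, with placeholder credentials:

```shell
# Sketch: dump only tables t1, t2, and t3 from mydb.
# -u/-p values are placeholders; 2>/dev/null || true just keeps the
# sketch safe to run without a live server.
mysqldump -u root -pMyPassword mydb t1 t2 t3 > mydb_t1_t2_t3.sql 2>/dev/null || true
```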
If you have a ton of tables in mydb and you want to dump everything except t1, t2, and t3, do this:
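The exclusion command is also missing from this copy. A sketch of the technique the update below refers to, building the table list with GROUP_CONCAT (credentials and database name are placeholders):

```shell
# Sketch: dump everything in mydb EXCEPT t1, t2, and t3.
# group_concat_max_len is raised to 10K so a long table list is not truncated.
DBTODUMP=mydb                  # placeholder database name
AUTH="-u root -pMyPassword"    # placeholder credentials; adjust for your server
SQL="SET SESSION group_concat_max_len = 10240;"
SQL="${SQL} SELECT GROUP_CONCAT(table_name SEPARATOR ' ')"
SQL="${SQL} FROM information_schema.tables"
SQL="${SQL} WHERE table_schema='${DBTODUMP}'"
SQL="${SQL} AND table_name NOT IN ('t1','t2','t3')"
TBLIST=$(mysql ${AUTH} -ANe "${SQL}" 2>/dev/null)
# TBLIST is intentionally unquoted so each table name becomes its own argument
mysqldump ${AUTH} ${DBTODUMP} ${TBLIST} > mydb_minus_t1t2t3.sql 2>/dev/null || true
```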
Give it a Try !!!
UPDATE 2014-03-06 10:15 EST
@RoryDonohue pointed out to me that the GROUP_CONCAT function needs to have its maximum length extended. I added the session variable group_concat_max_len to my answer with a maximum length of 10K. Thanks, @RoryDonohue.