MySQL – How to Perform Incremental mysqldump Without Dropping Table

Tags: backup, mysql, mysqldump

I'd like to fetch the last 100 rows of a table via mysqldump, but I already have the rest of the table. How can I dump only those last 100 rows and have the import replace (overwrite) any duplicate rows I already have?

db1 Incremental Export

mysqldump -umy_user --single-transaction --replace --where="id > 900" my_db1 my_table | pigz > my_db1-my_table.sql.gz

db2 Incremental Import

pv my_db1-my_table.sql.gz | gunzip | mysql -umy_user my_db2

Unfortunately the mysqldump above emits a DROP TABLE my_table statement. How can I skip this?

What I want to happen…

Say db2 has 950 records and db1 has 1000. I want records 901-1000 from db1 imported into db2, while leaving records 1-900 already on db2 untouched. Replacing/overwriting records 901-950 on db2 is preferred.

What Currently Happens…

After running the above, db2 contains only records 901-1000 and is missing records 1-900, because the dump's DROP TABLE statement wiped the table before the import.
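You can confirm the drop comes from the dump itself by inspecting the compressed file before importing it. A minimal sketch, simulated here with a tiny hand-written dump so it runs without a live server (the real check is the final `zcat | grep` against the file produced by the export above):

```shell
# Simulate the problem: by default mysqldump emits a
# DROP TABLE IF EXISTS before each CREATE TABLE.
printf 'DROP TABLE IF EXISTS `my_table`;\nREPLACE INTO `my_table` VALUES (901);\n' \
  | gzip > my_db1-my_table.sql.gz

# Sanity check before importing: list any DROP TABLE statements in the dump.
zcat my_db1-my_table.sql.gz | grep -i 'drop table'
```

If the grep prints anything, importing that file will destroy the existing rows on db2.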

Best Answer

You can use the replace flag (--replace) and a where condition (--where) while also omitting the CREATE TABLE statement, and the DROP TABLE that precedes it, with -t (--no-create-info):

mysqldump -t --replace --where="id between 901 and 1000" my_db1 my_table
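Putting it together with the export/import pipeline from the question, a sketch under its assumptions (the `my_user` account, the `pigz` and `pv` tools, and a live server on each side):

```shell
# Export only the new rows as REPLACE INTO statements, with no
# CREATE TABLE / DROP TABLE in the dump (-t = --no-create-info).
mysqldump -umy_user -t --single-transaction --replace \
  --where="id > 900" my_db1 my_table | pigz > my_db1-my_table.sql.gz

# Import into db2: rows 1-900 are left alone; any overlapping rows
# (901-950) are overwritten because the dump uses REPLACE INTO.
pv my_db1-my_table.sql.gz | gunzip | mysql -umy_user my_db2
```

REPLACE INTO works here because it keys on the primary key: a duplicate id is deleted and re-inserted, while non-conflicting rows are simply inserted.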