Suppose you have mydb.mytb and you want to create mydb.mytbcopy.
Here are five (5) approaches to doing this copy.
APPROACH #1
In the mysql client, run the following:
USE mydb;
CREATE TABLE mytbcopy LIKE mytb;
INSERT INTO mytbcopy SELECT * FROM mytb;
APPROACH #2
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysql ${MYSQL_CONN} -ANe"CREATE DATABASE IF NOT EXISTS test"
mysqldump ${MYSQL_CONN} mydb mytb | mysql ${MYSQL_CONN} -Dtest
mysql ${MYSQL_CONN} -ANe"ALTER TABLE test.mytb RENAME mydb.mytbcopy"
APPROACH #3
DUMPFILE=/some/path/tabledata.sql
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysql ${MYSQL_CONN} -ANe"CREATE DATABASE IF NOT EXISTS test"
mysqldump ${MYSQL_CONN} mydb mytb > ${DUMPFILE}
mysql ${MYSQL_CONN} -Dtest < ${DUMPFILE}
rm -f ${DUMPFILE}
mysql ${MYSQL_CONN} -ANe"ALTER TABLE test.mytb RENAME mydb.mytbcopy"
APPROACH #4
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysqldump ${MYSQL_CONN} mydb mytb | sed 's/mytb/mytbcopy/' | mysql ${MYSQL_CONN} -Dmydb
APPROACH #5
DUMPFILE=/some/path/tabledata.sql
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysqldump ${MYSQL_CONN} mydb mytb | sed 's/mytb/mytbcopy/' > ${DUMPFILE}
mysql ${MYSQL_CONN} -Dmydb < ${DUMPFILE}
rm -f ${DUMPFILE}
ANALYSIS
- APPROACH #1 is the easiest in terms of steps, but requires pushing 40 million rows through a single transaction. This will be the most taxing on the InnoDB storage engine.
- For the other approaches, mysqldump will send the 40 million rows in chunks of thousands of rows
- APPROACH #2 and APPROACH #3 mysqldump the table into the test database. After the table is created in the test database, it is renamed and moved back into the original database
- APPROACH #4 and APPROACH #5 rename the table using sed against the stream coming from the mysqldump as it echoes the INSERT commands
- APPROACH #2 and APPROACH #4 use pipes instead of an output file
- APPROACH #3 and APPROACH #5 use an output file for subsequent reload
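If the single big transaction in APPROACH #1 is a concern, the copy can also be driven in batches from the shell. This is only a sketch, assuming mytb has an integer auto-increment primary key named id (a hypothetical name; adjust the column, batch size, and max id for your table):

```shell
#!/bin/sh
# Sketch only: break the copy into many small transactions by key range.
# Assumes an integer primary key named `id` (hypothetical).
MAX_ID=40000000   # e.g. from: SELECT MAX(id) FROM mytb;
BATCH=10000
START=1
while [ ${START} -le ${MAX_ID} ]
do
    END=$((START + BATCH - 1))
    echo "INSERT INTO mytbcopy SELECT * FROM mytb WHERE id BETWEEN ${START} AND ${END};"
    START=$((END + 1))
done
```

Pipe the generated statements into the client (e.g. sh copy_batches.sh | mysql ${MYSQL_CONN} -Dmydb); each statement commits on its own, so InnoDB never has to hold all 40 million rows in one transaction.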
If you want to copy mydb.mytb to an already existing table mydb.mytbcopy, and the two tables have identical structures:
APPROACH #6
INSERT INTO mytbcopy SELECT * FROM mytb;
Like APPROACH #1, APPROACH #6 would push all 40 million rows through a single transaction
APPROACH #7
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysqldump ${MYSQL_CONN} -t mydb mytb | sed 's/mytb/mytbcopy/' | mysql ${MYSQL_CONN} -Dmydb
This approach does not drop the table; it simply generates the INSERTs (the -t option tells mysqldump to skip the CREATE TABLE statement)
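The sed rename can be sanity-checked offline, without touching a server, by feeding it a sample of what mysqldump emits (these sample lines are illustrative, not a real dump):

```shell
#!/bin/sh
# Simulate two lines of mysqldump output and apply the rename.
printf '%s\n' \
    'DROP TABLE IF EXISTS `mytb`;' \
    'INSERT INTO `mytb` VALUES (1),(2);' |
sed 's/mytb/mytbcopy/'
# Output:
# DROP TABLE IF EXISTS `mytbcopy`;
# INSERT INTO `mytbcopy` VALUES (1),(2);
```

One caveat: sed rewrites the first occurrence of mytb on every line, so string data or column names containing mytb could also be altered. A stricter pattern such as s/`mytb`/`mytbcopy`/ (matching the backquoted identifier) is safer.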
EPILOGUE
I cannot give you a time estimate, since I do not know the makeup of the DB server, the table structure, the index layout, and so forth.
GIVE IT A TRY !!!
The 12c Multitenant option is a nice tool for this.
Create a "master" pluggable database that is your starting point, and you can create other pluggable databases by cloning this PDB.
When you clone a PDB, you have the option to clone it as a "snapshot copy". This does not actually copy the files; it uses storage snapshots, and that is what makes this feature great. For example, cloning a 111 GB database took 20 seconds: Snapshot clone of a PDB using ACFS
When I experimented with it, smaller databases took even less time to be cloned. When you do not need the database anymore, you can just simply drop it.
Also you are not limited to 1 schema, because you can create many clones and use them at the same time concurrently, up to 252 PDBs.
Note that the Multitenant option is an extra-cost option for Enterprise Edition databases, so you need the proper license to use it.
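For reference, a snapshot clone is created with a single statement. This is only a sketch with illustrative PDB names, assuming file-name conversion (e.g. PDB_FILE_NAME_CONVERT) is already configured and the storage supports snapshots (such as ACFS):

```sql
-- Clone master_pdb as a snapshot copy (names are illustrative).
-- In 12.1 the source PDB generally needs to be open read-only.
CREATE PLUGGABLE DATABASE clone_pdb FROM master_pdb SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE clone_pdb OPEN;

-- When the clone is no longer needed:
ALTER PLUGGABLE DATABASE clone_pdb CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE clone_pdb INCLUDING DATAFILES;
```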
You have two questions here.
Yes, you can load up multiple files. Functionally, it doesn't matter how many times you run the COPY command; it'll work more than once.
Normally, there isn't much of an advantage to splitting up the COPY command if the target is the same table, or even the same tablespace. One background-writer process can easily saturate a disk; the only time splitting would matter is if you had something computationally intensive. Moreover, there are some reasons not to break up COPYs. For instance, if you create and load a table in the same transaction, you generate less WAL.
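The WAL point can be illustrated with a minimal sketch, assuming a table and CSV files with hypothetical names: when the CREATE TABLE and the COPYs run in one transaction (and wal_level is minimal), PostgreSQL can skip most WAL for the load, because a crash would simply make the uncommitted table disappear.

```sql
BEGIN;
-- Table created and loaded in the same transaction:
-- PostgreSQL can skip most WAL for these COPYs (wal_level = minimal).
CREATE TABLE mytable (id integer, payload text);
COPY mytable FROM '/some/path/data1.csv' (FORMAT csv);
COPY mytable FROM '/some/path/data2.csv' (FORMAT csv);
COMMIT;
```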
If you need faster writes, check out the Non-Durable Settings.