Do all of the rows affected by the INSERT statement from db1 get
transferred into db2 immediately, waiting for the COMMIT line to
execute in order to make the insert official?
The transfer is made immediately, but other sessions will not see the inserted data until you commit (depending on the transaction isolation level).
Or, are the rows affected by the INSERT statement still inside db1,
just waiting to be transferred into db2 once the COMMIT
command executes?
The data is not waiting for the commit; the storage engine merely waits for the commit to make the changes persistent.
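You can observe this directly with two sessions. A minimal sketch, assuming a hypothetical table db2.t with an id column and the default REPEATABLE READ isolation level:

```sql
-- Session 1: the insert executes on the server immediately,
-- but is not visible to other sessions until COMMIT.
START TRANSACTION;
INSERT INTO db2.t (id) VALUES (1);

-- Session 2 (run while session 1 is still open):
SELECT COUNT(*) FROM db2.t WHERE id = 1;  -- the row is not visible yet

-- Session 1:
COMMIT;

-- Session 2: the same SELECT now sees the row.
SELECT COUNT(*) FROM db2.t WHERE id = 1;
```

So the work is done server-side right away; COMMIT only controls visibility and durability, not the transfer itself.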
I wanted to know these facts because I will be copying data from a
database over the internet and I'm worried that I might lose some
data in the process... Any help would be greatly appreciated.
Every TCP-based connection guarantees data integrity in transit.
You should use replication instead of this approach
Your issue may involve the configuration settings of either the OS or MySQL.
HISTORICAL EXAMPLES
Years ago when I worked at a website firm, I used --skip-extended-insert to load a DB Server that had 2GB RAM. That made a whole world of sense to me because the server I dumped it from had 64GB RAM and the client wanted the data on a smaller machine.
At my current place of employment, an outside vendor dumped data from SQL Server and produced a mysqldump-compatible dump file. The problem with that dump file was that each table had a single insert with all of its rows; one table had 3.5 million. It could not be loaded into an in-house VM with less than 8GB RAM configured. Shockingly, the solution was to scp the file to my laptop, load it into MySQL 5.5.37 on my laptop, mysqldump from it, send the new mysqldump back to the Linux CentOS VM, and load that mysqldump.
BTW, the 3.5-million-row table loaded as a single insert on my Windows 7 laptop in 4.5 hours without changing any MySQL settings. To be honest with you, I did not believe that shot in the dark would work, because I had not reconfigured MySQL on the laptop to handle the dump. My guess is that it loaded anyway because the undo log in ibdata1 just grew on disk and held everything.
You should not have to resort to --skip-extended-insert
if you dump and reload on the same machine.
SUGGESTIONS
While I was typing my answer, I saw you add the following to the question:
According to a query I ran, the table clocks in at around 1.7GB, but it's MyISAM and apparently that number is unreliable (but it's a big table, anyway).
SUGGESTION #1
If that one MyISAM table is 1.7GB and has 11 million rows, my suggestion would be for you to increase the size of the bulk insert buffer. The bulk insert buffer is used by MyISAM only and supports extended inserts. Set bulk_insert_buffer_size to 256M.
Go to my.cnf or my.ini and add this
bulk_insert_buffer_size = 256M
Then, login to MySQL and run
mysql> SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 256;
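Afterwards, it is worth confirming the new value took effect. A quick check (note that SET GLOBAL only applies to sessions opened after the change, not to your current one):

```sql
-- Confirm the running global value (268435456 bytes = 256M)
SHOW GLOBAL VARIABLES LIKE 'bulk_insert_buffer_size';

-- To use the larger buffer in the current session without reconnecting:
SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 256;
```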
SUGGESTION #2
As for your InnoDB tables, you need a bigger log buffer. Set it to 64M by adding this to my.cnf:
innodb_log_buffer_size = 64M
You must restart MySQL to use that new value.
SUGGESTION #3
You may also need bigger InnoDB logs, by increasing innodb_log_file_size. This requires manual intervention as follows:
STEP 01: Run this command as root
mysql> SET GLOBAL innodb_fast_shutdown = 0;
STEP 02: Shutdown mysql
- Linux :
service mysql stop
- Windows (as Administrator) :
net stop mysql
STEP 03: Add this option to my.cnf
or my.ini
innodb_log_file_size = 1G
STEP 04: In the OS, go to the folder where ib_logfile0 and ib_logfile1 are located and run
Linux
mv ib_logfile0 ib_logfile0.bak
mv ib_logfile1 ib_logfile1.bak
Windows
rename ib_logfile0 ib_logfile0.bak
rename ib_logfile1 ib_logfile1.bak
STEP 05: Start mysql (the log files get recreated so start will take an extra 1-2 min)
- Linux :
service mysql start
- Windows (as Administrator) :
net start mysql
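Once mysql is back up, it is worth confirming that the new log file size took effect before starting the reload. A quick check:

```sql
-- 1073741824 bytes = 1G
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
```

The MySQL error log should also show the ib_logfile files being recreated during that first startup.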
After making one or more of these suggested changes, try dumping and reloading. It should go much more smoothly. Start with just SUGGESTION #1
and try it; then try the other two to see if it improves further.
SUMMARY
You need these options for your target MySQL DB Server in your my.cnf or my.ini
bulk_insert_buffer_size = 256M
innodb_log_buffer_size = 64M
innodb_log_file_size = 1G
max_allowed_packet = 1G
GIVE IT A TRY !!!
Best Answer
The only risk is time, and missing out on triggers and stored procedures.
Normally, mysqldumps are done in alphabetical order.
When loading a dump, expect databases like
aardvark
alligator
and antelope
to load early. Any database like zoo
or zebra
will take time to reach, because --one-database
does not make mysql jump straight to the specified database. The entire dump must be read until the specified database is in use. For example,
mysql --one-database zebra
will read the entire mysqldump but only start loading when it gets to the line in the dump that says
USE `zebra`;
Then, all
INSERT INTO
commands are processed until EOF or until it sees a USE line for another database. Some developers I know would wait until the database they chose is loaded, then hit Ctrl-C. I would not risk that. You should make sure that all triggers and stored procedures are loaded as well.
If you cannot wait and want to Ctrl-C, login to mysql while the dump is still loading and make sure all tables, views, triggers, and stored procedures have been loaded.
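To do that check from another session, you can count objects in the target schema as the dump loads. A sketch, assuming the target database is named zebra:

```sql
-- Count what has been loaded so far in the target schema
SELECT COUNT(*) AS tables_and_views
FROM information_schema.TABLES   WHERE TABLE_SCHEMA   = 'zebra';

SELECT COUNT(*) AS triggers
FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA = 'zebra';

SELECT COUNT(*) AS stored_routines
FROM information_schema.ROUTINES WHERE ROUTINE_SCHEMA = 'zebra';
```

Only interrupt the load once those counts match what the source database contains.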