I have a table that contains almost 4 million rows. I exported it with mysqldump, transferred it via scp to another server, and imported it with the mysql command, but now it's missing several hundred thousand rows. I've repeated this process several times, and each time it's missing a different number of rows.
I have tried mysqldump --compatible=ansi because on one export there was a syntax error for some reason. That dump wound up having the most rows on import, but it was still missing hundreds of thousands.
Edit: I tried the -f option but it still prints out a mysql error before I'm returned to the command prompt:
ERROR 1064 (42000) at line 41458: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '286,'54.227.215.70','http://example.com/'),(81547367,'CbLc4lXH',1501560286,'54.7' at line 1
I'm not sure if it exited because of the error or not but I do know I'm missing about a million records after import. The file is 40GB in size and I don't have access to the previous mysql server anymore to do another export. I have all the data I need. I just can't get it imported.
Best Answer
Sometimes this can occur because of a combination of a certain sequence of characters and the character set of the MySQL client at the time of the import.
You could have trapped such errors without additional tools by redirecting the MySQL client's error output to a file during the import. That would have recorded your trouble spots in errors_encountered.txt.
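A minimal sketch of such an import, with placeholder user, database, and file names:

```shell
# Placeholders: adjust the user, database, and dump file name to your setup.
# --force: keep going past failed statements instead of aborting the import.
# 2> errors_encountered.txt: record each error message (with its line number)
#    so the bad spots in the dump can be inspected afterward.
mysql -u myuser -p --force mydatabase < mydump.sql 2> errors_encountered.txt
```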
What could have been done with the mysqldump to begin with ???
SIMPLIFY EXPORT CONTENTS
You could have used the --hex-blob option to dump binary columns as hexadecimal literals. This could have eased any character set misinterpretations or misunderstandings on the part of the MySQL client program.
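A sketch of such an export (the user, database, and table names are placeholders):

```shell
# Placeholders: adjust user, database, and table names to your setup.
# --hex-blob: dump BINARY, VARBINARY, and BLOB columns as 0x... hex literals,
#             so no character-set translation can mangle them on import.
mysqldump -u myuser -p --hex-blob mydatabase mytable > mydump.sql
```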
DUMP ONE ROW AT A TIME
Even when using --force for the MySQL client, this can still result in hundreds or thousands of rows not being inserted. Why ?
By default, mysqldump has --opt enabled, which turns on a whole group of options.
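Per the MySQL Reference Manual, --opt is shorthand for this group of options (all on by default):

```
--add-drop-table --add-locks --create-options --disable-keys
--extended-insert --lock-tables --quick --set-charset
```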
Notice that one of the options is --extended-insert. This causes mysqldump to set up inserts in chunks of hundreds or thousands of rows at a time.
If just one row in a chunk of rows had an issue with being imported and you are using --force, the entire chunk of rows is not inserted.
How do you get hold of all the good rows in a chunk? For a mysqldump created with --extended-insert, there is no simple way. What can you do when this happens ? I have good news and bad news.
GOOD NEWS: You have to launch a new mysqldump with --skip-extended-insert. That forces mysqldump to create an INSERT for every row. That way, when using --force during import, an invalid INSERT, due to whatever circumstance, will not affect surrounding rows.

BAD NEWS #1 : This makes the resulting mysqldump file much larger.

BAD NEWS #2 : Importing the resulting mysqldump file takes much, much longer.
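A sketch of that row-at-a-time export, again with placeholder names, combined with --hex-blob in case binary data is involved:

```shell
# Placeholders: adjust user, database, and table names to your setup.
# --skip-extended-insert: one INSERT statement per row, so under
#   mysql --force a bad row sacrifices only itself, not its neighbors.
# --hex-blob: keep binary columns safe from character-set issues.
mysqldump -u myuser -p --skip-extended-insert --hex-blob mydatabase mytable > mydump_one_row_per_insert.sql
```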
This is something I recommended Aug 09, 2013 (Backup / Export data from MySQL 5.5 attachments table keeps failing!)

SUGGESTION
To nullify the size problem this creates, you could do the following
gzip the mysqldump while doing mysqldump
gunzip the mysqldump and pipe it to mysql for import
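The two steps above can be sketched as follows (placeholder names again); compressing on the fly means the larger one-row-per-INSERT dump never sits uncompressed on disk:

```shell
# Export: compress while dumping.
mysqldump -u myuser -p --skip-extended-insert mydatabase | gzip > mydump.sql.gz
# Import: decompress and stream straight into mysql, keeping --force
# so a bad row is skipped rather than aborting the whole import.
gunzip < mydump.sql.gz | mysql -u myuser -p --force mydatabase
```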
GIVE IT A TRY !!!