The following is a repost of an answer I posted with similar circumstances back on Sep 1, 2011
One of the silent killers of MySQL Connections is the MySQL Packet. In fact, even the I/O Thread of MySQL Replication can be victimized by this.
According to the MySQL Documentation
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet too large”.
Any INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.
At the very least, you must make sure the packet sizes for both the machine you mysqldump'd from and the machine you are loading are identical.
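A quick way to verify this on both machines (a sketch; run it on the source server and on the destination server) is:

```sql
-- Compare this value on both servers; they should match
SHOW VARIABLES LIKE 'max_allowed_packet';
```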
There are two (2) approaches you can take:
APPROACH #1 : Perform the mysqldump using --skip-extended-insert
This will make sure the MySQL Packet is not inundated with multiple BLOB and TEXT fields, because the SQL INSERTs are performed one row at a time. The major drawbacks are
- the mysqldump is much larger
- reloading such a dump takes much longer.
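As a sketch of APPROACH #1 (mydb and the credentials are placeholders):

```shell
# One row per INSERT statement, so each statement
# stays far below max_allowed_packet
mysqldump --skip-extended-insert -u root -p mydb > mydb_dump.sql
```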
APPROACH #2 : Increase max_allowed_packet
This may be the preferred approach because implementing this is just a mysql restart away. Understanding what the MySQL Packet is may clarify this.
According to page 99 of "Understanding MySQL Internals" (ISBN 0-596-00957-7), here are paragraphs 1-3 explaining it:

MySQL network communication code was written under the assumption that queries are always reasonably short, and therefore can be sent to and processed by the server in one chunk, which is called a packet in MySQL terminology. The server allocates the memory for a temporary buffer to store the packet, and it requests enough to fit it entirely. This architecture requires a precaution to avoid having the server run out of memory---a cap on the size of the packet, which this option accomplishes.

The code of interest in relation to this option is found in sql/net_serv.cc. Take a look at my_net_read(), then follow the call to my_real_read() and pay particular attention to net_realloc().

This variable also limits the length of a result of many string functions. See sql/field.cc and sql/item_strfunc.cc for details.
Given this explanation, bulk INSERTs can fill up a MySQL Packet rather quickly. This is especially true when max_allowed_packet is too small for the volume of data coming at it.
CONCLUSION
In most installs of MySQL, I usually set this to 256M or 512M. You should experiment with larger values when data loads produce "MySQL server has gone away" errors.
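For example, raising the limit to 256M can be done dynamically (it applies to connections opened after the statement runs):

```sql
-- Takes effect for new connections only
SET GLOBAL max_allowed_packet = 268435456; -- 256M
```

To make the change survive a restart, also add `max_allowed_packet = 256M` under the `[mysqld]` section of my.cnf.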
This looks oddly familiar.
I have seen this occur with one of my web hosting clients' DB servers. There was a particular table that crashed mysqld every single time you accessed it, even with SHOW CREATE TABLE.
The problem stems from a corrupt data dictionary. There is really no way to correct it. You could attempt to alter the tablespace_id within the .ibd file but the headache stems from locating the tablespace_id list internal to ibdata1.
Even if you create a MyISAM table with the same name in the same database as the original InnoDB table, you cannot convert it to InnoDB because the tablespace_id is already associated with the table name. This, of course, is a corrupted state. It's like having a pigeonhole in ibdata1 that you cannot patch up without some exploratory surgery.
You may have to mysqldump everything except the database that houses the corrupt table. You would then have to mysqldump every table in that database except the corrupt table. Remember, it is the data dictionary's view of the table that is screwed up, not necessarily the table's data.
The only sure way to clean everything up is to perform the mysqldumps as I just specified, shutdown mysql, rm -rf all DB folders except /var/lib/mysql/mysql, delete ibdata1, ib_logfile0, and ib_logfile1, start up mysql, and reload all mysqldumps. See my StackOverflow post about cleaning up your InnoDB infrastructure.
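That procedure, sketched as shell commands (the paths, credentials, and the baddb.corrupt_table name are placeholders; assumes the service is named mysql):

```shell
# 1) Dump everything except the corrupt table
mysqldump -u root -p --all-databases \
    --ignore-table=baddb.corrupt_table > full_dump.sql

# 2) Shut down mysqld and wipe the InnoDB infrastructure
service mysql stop
rm -rf /var/lib/mysql/baddb   # repeat for other DB folders, but keep /var/lib/mysql/mysql
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# 3) Restart (ibdata1 and the log files are recreated) and reload
service mysql start
mysql -u root -p < full_dump.sql
```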
Since you are not using innodb_file_per_table, any tables in this corrupt state within ibdata1 are lost as casualties of war. My condolences.
For future reference, see the artistic conception of InnoDB and its internals.
Best Answer
The error simply means you ran out of space. The question is: where is the space issue coming from?
Using --order-by-primary requires creating a temp table in the folder configured in tmpdir
Bigger mysqldumps need a much larger tmpdir when using --order-by-primary
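If tmpdir sits on a small volume, one option is to point it at a volume with more free space (a my.cnf sketch; the path is a placeholder):

```ini
[mysqld]
tmpdir = /bigdisk/mysqltmp
```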
If you get rid of --order-by-primary, you can handle bigger dumps.
As for what you just asked: there are improvements to the dump's creation
IMPROVEMENT #1 : Faster Dump
Every time you dump a table using --order-by-primary, a temp table must be created, which can pause the mysqldump on the source side. Without --order-by-primary, that pausing no longer occurs.
IMPROVEMENT #2 : No bloated indexes
When you load a table in PRIMARY KEY order, it can generate up to 45% additional wasted space in the indexes due to lopsided page splits in your BTREE indexes.
- Jun 28, 2012: Benefits of BTREE in MySQL
- Oct 26, 2012: How badly does innodb fragment in the face of somewhat out-of-order insertions?
- Nov 26, 2012: Does the Int type primary key insert order affect query performance in MySQL? (discusses this from MyISAM point-of-view)

IMPROVEMENT #3 : AUTO_INCREMENT PRIMARY KEYs
If the majority of your tables have a single auto-incremented integer as the PRIMARY KEY, the table is already ordered. There is no need for the overkill of --order-by-primary.
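You can check which tables already fit this pattern with a query like the following (a sketch against information_schema; mydb is a placeholder):

```sql
-- Tables whose PRIMARY KEY is a single auto_increment column
SELECT table_name
FROM information_schema.columns
WHERE table_schema = 'mydb'
  AND column_key = 'PRI'
GROUP BY table_name
HAVING COUNT(*) = 1
   AND SUM(extra = 'auto_increment') = 1;
```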
Concerning your next comment
Please make sure you have extra diskspace on the RDS Instance. Why ?
It is possible for a dump to occupy more space when reloaded than the dump file itself, and loading keys in order exacerbates this.
- Sep 09, 2016: mysqldump: ERROR 2006: MySQL server has gone away when attempting to push data to RDS instance
- May 01, 2014: Why would the size of MySQL MyISAM table indexes (aka MYI file) not match after mysqldump import?