MySQL server has gone away obstructing import of large dumps

Tags: dump, innodb, mac-os-x, mysql

I am trying to import a large SQL dump (2 GB) into my local MySQL server on my Mac. I have been able to do this in the past (I was using MAMP), but now I get

ERROR 2006 (HY000) at line 7758: MySQL server has gone away

every time I try to import the dump. The database contains InnoDB tables.

I tried copying the my-innodb-heavy-4G.cnf file to my my.cnf to see if those settings would help, but no luck.

Any ideas on what to tweak?

I am using the "Mac OS X ver. 10.6 (x86, 64-bit), DMG Archive" from here: http://dev.mysql.com/downloads/mysql/

Best Answer

One of the silent killers of MySQL Connections is the MySQL Packet. Even the I/O Thread of MySQL Replication can be victimized by this.

According to the MySQL Documentation

  • You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet too large”.

  • An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.

At the very least, make sure the packet size is identical on both the machine you ran mysqldump on and the machine you are loading into.
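To compare the two, you can check the current value on each server (the historical default is 1MB):

```sql
-- Run on both the source and destination servers; the values should match.
SHOW VARIABLES LIKE 'max_allowed_packet';
```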

There are two (2) approaches you can take:

APPROACH #1 : Perform the mysqldump using --skip-extended-insert

This will make sure the MySQL Packet is not inundated with multiple BLOB and TEXT fields. That way, SQL INSERTs are performed one row at a time. The major drawbacks are:

  1. the mysqldump is much larger
  2. reloading such a dump takes much longer.
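As a sketch, assuming a database named `mydb` (a placeholder), the dump command would look like this:

```shell
# One row per INSERT statement: a larger dump file and a slower reload,
# but no single statement can overflow the packet buffer.
mysqldump --skip-extended-insert mydb > mydb_dump.sql
```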

APPROACH #2 : Increase max_allowed_packet

This may be the preferred approach because implementing this is just a mysql restart away. Understanding what the MySQL Packet is may clarify this.
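For example, to raise the limit to 512M, you could add this to my.cnf and restart mysqld (512M is an illustration, not a universal recommendation):

```ini
[mysqld]
max_allowed_packet=512M
```

You can also set it on a running server with `SET GLOBAL max_allowed_packet=536870912;`, but that only lasts until the next restart and only affects new connections, so updating my.cnf is the durable fix.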

According to page 99 of "Understanding MySQL Internals" (ISBN 0-596-00957-7), here are paragraphs 1-3 explaining it:

MySQL network communication code was written under the assumption that queries are always reasonably short, and therefore can be sent to and processed by the server in one chunk, which is called a packet in MySQL terminology. The server allocates the memory for a temporary buffer to store the packet, and it requests enough to fit it entirely. This architecture requires a precaution to avoid having the server run out of memory---a cap on the size of the packet, which this option accomplishes.

The code of interest in relation to this option is found in sql/net_serv.cc. Take a look at my_net_read(), then follow the call to my_real_read() and pay particular attention to net_realloc().

This variable also limits the length of a result of many string functions. See sql/field.cc and sql/item_strfunc.cc for details.

Given this explanation, bulk INSERTs can fill and overflow a MySQL Packet rather quickly. This is especially true when max_allowed_packet is too small for the volume of data coming at it.

CONCLUSION

In most installs of MySQL, I usually set this to 256M or 512M. You should experiment with larger values when data loads produce "MySQL server has gone away" errors.
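Note that the client keeps its own copy of this limit, so when reloading the dump it can help to raise it on that side too. A sketch, assuming your dump file is named dump.sql and the target database `mydb` (placeholders):

```shell
# Raise the client-side packet limit for the reload session.
mysql --max_allowed_packet=512M mydb < dump.sql
```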