One of the silent killers of MySQL Connections is the MySQL Packet. Even the I/O Thread of MySQL Replication can be victimized by this.
According to the MySQL Documentation:
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet too large”.
An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.
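To illustrate the point about reducing rows per statement, here is a minimal sketch (the helper function and its size estimate are mine, not part of any MySQL tooling) that splits pre-serialized rows into batches small enough for each INSERT to fit in one packet:

```python
def batch_rows(rows, max_packet_bytes):
    """Split pre-serialized VALUES tuples into batches whose combined
    SQL length stays under max_packet_bytes, so each multi-row INSERT
    fits inside a single MySQL packet."""
    batch, size = [], 0
    for row in rows:
        if batch and size + len(row) > max_packet_bytes:
            yield batch
            batch, size = [], 0
        batch.append(row)
        size += len(row)
    if batch:
        yield batch

# Example: three serialized rows, packet budget of 16 bytes.
rows = ["(1,'a')", "(2,'bb')", "(3,'ccc')"]
batches = list(batch_rows(rows, max_packet_bytes=16))
# -> [["(1,'a')", "(2,'bb')"], ["(3,'ccc')"]]
```

Each yielded batch would then become the VALUES list of one INSERT, keeping every statement under the server's packet cap.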
At the very least, you must make sure the packet sizes for both the machine you ran mysqldump on and the machine you are loading into are identical.
There are two (2) approaches you can take:
APPROACH #1 : Perform the mysqldump using --skip-extended-insert
This makes sure the MySQL packet is not inundated with multiple BLOB and TEXT values, because each row becomes its own SQL INSERT. The major drawbacks are:
- the mysqldump is much larger
- reloading such a dump takes much longer.
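A minimal example of this approach (the database name, user, and output file are placeholders):

```bash
# one single-row INSERT per row instead of multi-row extended INSERTs
mysqldump --skip-extended-insert -u root -p mydb > mydb_dump.sql
```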
APPROACH #2 : Increase max_allowed_packet
This may be the preferred approach, because implementing it is just a mysqld restart away. Understanding what the MySQL packet is may clarify why.
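For instance, to cap packets at 256M on the server side (the value is illustrative; pick one that fits your workload and memory):

```ini
[mysqld]
max_allowed_packet = 256M
```

Note that max_allowed_packet is also a dynamic variable, so `SET GLOBAL max_allowed_packet = 268435456;` raises it for new connections without a restart; sessions that are already connected keep the old value until they reconnect.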
According to page 99 of "Understanding MySQL Internals" (ISBN 0-596-00957-7), here are paragraphs 1-3 explaining it:
MySQL network communication code was written under the assumption that queries are always reasonably short, and therefore can be sent to and processed by the server in one chunk, which is called a packet in MySQL terminology. The server allocates the memory for a temporary buffer to store the packet, and it requests enough to fit it entirely. This architecture requires a precaution to avoid having the server run out of memory: a cap on the size of the packet, which this option accomplishes.

The code of interest in relation to this option is found in sql/net_serv.cc. Take a look at my_net_read(), then follow the call to my_real_read() and pay particular attention to net_realloc().

This variable also limits the length of a result of many string functions. See sql/field.cc and sql/item_strfunc.cc for details.
Given this explanation, bulk INSERTs can fill up a MySQL packet rather quickly, especially when max_allowed_packet is too small for the volume of data being loaded.
CONCLUSION
In most installs of MySQL, I usually set this to 256M or 512M. You should experiment with larger values when data loads produce "MySQL server has gone away" errors.
NOTE : The answer above is a repost of one I gave under similar circumstances back on Sep 1, 2011.
Best Answer
The most likely explanation for this is that an intermediate piece of network hardware is tearing down or "forgetting" your idle TCP connection, assuming your wait_timeout system variable is not set too low.
Stateful firewalls maintain internal data structures of all current "flows" -- source/destination address/port quads -- and if a flow has no traffic for too long, the record of the flow is simply dropped from the firewall's memory. The next TCP packet to arrive doesn't match any established flow, and doesn't have the SYN flag set, so it's not a new connection either; the firewall considers it invalid and either drops it or spoofs an RST packet from the destination.

Most likely, you would get this same error with any query. SELECT NOW();, for example, would probably return the same result once the connection has been totally idle for too long. Note that running a query doesn't prevent the connection from being "idle" at the TCP layer in between queries.

The connection might not disappear from the processlist in the meantime: since it's completely idle, the server would not have sent anything, and would therefore not have realized that, somewhere on the network, the TCP connection has become invalid.
This occurs, for just one example in my experience, within Amazon EC2.
The workaround is to enable TCP keepalives in the operating system itself on at least one end of the connection. This causes TCP traffic to continue to be exchanged between the hosts during the idle time, even though there's no payload on the connection. Enabling these should mitigate the issue.
Linux: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
Windows: https://technet.microsoft.com/en-us/library/cc957549.aspx
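As one illustration of the OS-level settings those pages describe, here is a sketch that enables keepalives on a single socket from application code (the three TCP_KEEP* option names are Linux-specific, hence the guards; other platforms expose different knobs):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=5):
    """Turn on TCP keepalives so intermediate firewalls see periodic
    traffic on otherwise-idle connections."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning: start probing after `idle` seconds,
    # probe every `interval` seconds, give up after `count` failures.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```

With `idle=60`, the kernel would start sending keepalive probes after one minute of silence -- well inside a hypothetical 15-minute firewall flow timer.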
If you're curious, you should also, by trial and error, be able to discover the period of time after which idle connections are no longer usable... for example, connections left idle for 14 minutes may be fine, but connections left idle for 16 minutes are always dead, suggesting the firewall has a 15-minute flow expiration timer.
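That trial and error can be semi-automated with a binary search; in this sketch the `still_alive` probe is a placeholder you would implement yourself (idle a real connection for that many minutes, then try a query):

```python
def find_idle_timeout(still_alive, lo=0, hi=120, tol=1):
    """Binary-search (in minutes) for the boundary between idle times
    that survive and idle times that get dropped by the firewall.
    `still_alive(minutes)` is a caller-supplied probe."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if still_alive(mid):
            lo = mid   # survived: the timeout is longer than mid
        else:
            hi = mid   # dropped: the timeout is at most mid
    return lo, hi
```

With a firewall that expires flows at 15 minutes, the returned bracket converges around 15 in a handful of probes instead of dozens of fixed-interval tests.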