MySQL query "going away" when executing an INSERT ... ON DUPLICATE KEY UPDATE statement with a 12,524-character blob

Tags: MySQL, optimization, performance, storage-engine

I have a MySQL INSERT ... ON DUPLICATE KEY UPDATE query like so:

insert into table (col1, col2, col3) values (1,2,'huge blob with 12524 chars') on duplicate key update col3 = 'huge blob with 12524 chars';
  • col1 and col2 form a composite key, and there is an index on it.
  • There are a total of 91 records in the table.
  • All other blobs in the table have fewer than 1,000 characters.
  • If I give col3 a small value, this query runs instantly.
  • If I use a small value in the insert part but a large value in the update part, it takes 5 seconds to execute.
  • If I use a large value in the insert part but a small value in the update part, it dies with a 1-minute timeout.
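
For concreteness, a minimal sketch of a schema and query matching this description might look like the following (the table name, column types, and key definition are hypothetical, since the real ones aren't shown; using values(col3) in the update clause avoids repeating the huge literal twice):

create table t (
  col1 int not null,           -- first half of the composite key (hypothetical type)
  col2 int not null,           -- second half of the composite key (hypothetical type)
  col3 longblob,               -- the column receiving the ~12,524-character value
  primary key (col1, col2)     -- the composite key; this also provides its index
);

insert into t (col1, col2, col3)
values (1, 2, 'huge blob with 12524 chars')
on duplicate key update col3 = values(col3);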

What could be causing this query to get stuck for a minute and then die when the blob value is huge? What should I look at?

Best Answer

This sounds like you need to increase the size of your MySQL packets.

According to page 99 of "Understanding MySQL Internals" (ISBN 0-596-00957-7), here are paragraphs 1-3 explaining it:

MySQL network communication code was written under the assumption that queries are always reasonably short, and therefore can be sent to and processed by the server in one chunk, which is called a packet in MySQL terminology. The server allocates the memory for a temporary buffer to store the packet, and it requests enough to fit it entirely. This architecture requires a precaution to avoid having the server run out of memory---a cap on the size of the packet, which this option accomplishes.

The code of interest in relation to this option is found in sql/net_serv.cc. Take a look at my_net_read(), then follow the call to my_real_read() and pay particular attention to net_realloc().

This variable also limits the length of a result of many string functions. See sql/field.cc and sql/item_strfunc.cc for details.

Since MySQL packets can hold rows of data, a single large item can force many packets to be shuffled in and out to keep related chunks of data from being split during processing. This can be a silent killer of DB connections for no apparent reason. In fact, I wrote a post about how this can affect certain mysqldumps.
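
To see what the server currently allows, check the variable first:

SHOW VARIABLES LIKE 'max_allowed_packet';

If the reported value is smaller than the full statement text (the INSERT plus both copies of the blob literal), the server rejects the packet or drops the connection, which the client typically reports as the "MySQL server has gone away" error, matching the "going away" in the title.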

Try increasing max_allowed_packet to 256M using the following command:

SET GLOBAL max_allowed_packet = 1024 * 1024 * 256;
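
Note that SET GLOBAL only affects connections opened after the change (a plain session-level SET fails because max_allowed_packet is read-only per session), and the new value is lost on a server restart. To make it permanent, put the setting in the [mysqld] section of my.cnf and restart the server:

[mysqld]
max_allowed_packet = 256M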