When you said InnoDB Index Corruption, I immediately thought of the InnoDB Buffer Pool
Let's start with what the InnoDB Buffer Pool actually holds. Please take a look at the upper left-hand corner of this Pictorial Representation of InnoDB (courtesy of Percona CTO Vadim Tkachenko).
The InnoDB Buffer Pool has a section called the Change Buffer (a.k.a. the Insert Buffer), which is dedicated to buffering changes to non-unique secondary indexes. Note how those changes are moved from the Buffer Pool into the System Tablespace (ibdata1). A lot of work adjusting non-unique indexes is involved. Note the MySQL Documentation on Clustered and Secondary Indexes, under the subheading How Secondary Indexes Relate to the Clustered Index:
All indexes other than the clustered index are known as secondary indexes. In InnoDB, each record in a secondary index contains the primary key columns for the row, as well as the columns specified for the secondary index. InnoDB uses this primary key value to search for the row in the clustered index.
If the primary key is long, the secondary indexes use more space, so it is advantageous to have a short primary key.
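As a quick illustration of that quote (the table and column names here are hypothetical), every secondary index entry silently carries the full PRIMARY KEY value, so a wide key inflates all secondary indexes:

```sql
-- Hypothetical example: the PRIMARY KEY value is stored in every
-- secondary index entry, so a wide key inflates every secondary index.
CREATE TABLE orders_wide_pk (
    order_ref VARCHAR(100) NOT NULL,  -- long PRIMARY KEY
    customer  INT NOT NULL,
    PRIMARY KEY (order_ref),
    KEY idx_customer (customer)       -- each entry = customer + order_ref
) ENGINE=InnoDB;

-- A short surrogate key keeps secondary indexes compact:
CREATE TABLE orders_short_pk (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
    order_ref VARCHAR(100) NOT NULL,
    customer  INT NOT NULL,
    PRIMARY KEY (id),
    KEY idx_customer (customer)       -- each entry = customer + id
) ENGINE=InnoDB;
```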
CONJECTURE #1
If you have large PRIMARY KEYs, I suspect the Change Buffer is becoming a bit of a hog within the Buffer Pool: buffered changes can occupy up to 50% of it. You can tune that down with innodb_ibuf_max_size (a Percona Server variable; stock MySQL 5.6+ exposes innodb_change_buffer_max_size instead, as a percentage). The default is half the buffer pool; in your case, that would be 4096M (4G). Perhaps lowering it could throttle the amount of index maintenance needed.
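A minimal my.cnf sketch of that tuning; the values are illustrative only, and which variable applies depends on your server build (innodb_ibuf_max_size exists in Percona Server 5.5, innodb_change_buffer_max_size in stock MySQL 5.6+):

```ini
# my.cnf sketch -- illustrative values, not a recommendation
[mysqld]
# Percona Server 5.5: cap the insert/change buffer in bytes
# innodb_ibuf_max_size = 2G

# Stock MySQL 5.6+: cap it as a percentage of the Buffer Pool
# (the default is 25)
innodb_change_buffer_max_size = 25
```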
CONJECTURE #2
I don't see innodb_buffer_pool_instances configured. For MySQL 5.5, the default is 1. You have innodb_buffer_pool_size set at 8192M (8G). If the Buffer Pool is more than half the installed RAM, YIKES !!! You will experience lots of swapping. I recommend setting innodb_buffer_pool_instances to 2, 4, or the number of cores assigned to the VM. I mentioned this back on Feb 12, 2011 (How do you tune MySQL for a heavy InnoDB workload?)
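Putting both settings together, a my.cnf sketch (sizes are examples only; keep the pool comfortably under total RAM so the OS never swaps):

```ini
# my.cnf sketch -- example sizes, adjust to your RAM
[mysqld]
# keep this well under total installed RAM
innodb_buffer_pool_size      = 8G
# split the pool to reduce mutex contention (MySQL 5.5 default is 1)
innodb_buffer_pool_instances = 4
```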
SUGGESTIONS
Please do one or more of the following
I would suggest rewriting the query in a way that minimises the number of columns necessary to put into GROUP BY. In your case you can do that by applying the grouping to the Product_Category table only.
According to your example, that table has the following entries:
+------------+-------------+
| product_id | category_id |
+------------+-------------+
|          1 |           1 |
|          2 |           1 |
|          3 |           1 |
|          1 |           2 |
|          4 |           2 |
|          5 |           2 |
+------------+-------------+
Since you want product names to be unique in the output, group this table by product_id, and for category_id select e.g. the minimum value per product:
SELECT
product_id,
MIN(category_id) AS category_id
FROM
Product_Category
GROUP BY
product_id
That will give you an output like this:
+------------+-------------+
| product_id | category_id |
+------------+-------------+
|          1 |           1 |
|          2 |           1 |
|          3 |           1 |
|          4 |           2 |
|          5 |           2 |
+------------+-------------+
You can see that each product is listed only once. Joining that table to the other two will not produce duplicates. Therefore, just substitute the above query, as a derived table, for the Product_Category table in your query (also removing your GROUP BY from it, of course):
SELECT
*
FROM
Product
JOIN (
SELECT
product_id,
MIN(category_id) AS category_id
FROM
Product_Category
GROUP BY
product_id
) AS pc USING(product_id)
JOIN Category USING(category_id)
;
Best Answer
Actually, you can now. Since MySQL 8.0.1, while `expire_logs_days` still works, the canonical way to set up expiration is by changing `binlog_expire_logs_seconds`, which, as you may guess, has one-second resolution. Since 8.0.11 its default value is 2592000 (= 30 days), but it can be configured as usual in the configuration file to any other period in seconds.

If you cannot use MySQL 8.0 yet, the way to fix this is to use the `PURGE BINARY LOGS` syntax with `BEFORE` (`PURGE BINARY LOGS BEFORE NOW() - INTERVAL 1 HOUR;`) and to set `max_binlog_size` to an appropriate value (you cannot delete the currently-being-written binlog, although you can always force a rotation with `FLUSH BINARY LOGS`). You can set up an event to do that regularly, or run it externally from a cron/scheduled task.

Alternatively, forks of MySQL, like Percona Server, used to (and still do) allow setting a maximum binlog size in bytes, by combining `max_binlog_size` and `max_binlog_files`, in addition to the days limit.
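A minimal sketch of the scheduled purge mentioned above; the event name and retention window are arbitrary, and the event scheduler must be enabled for it to fire:

```sql
-- Requires the event scheduler: SET GLOBAL event_scheduler = ON;
CREATE EVENT purge_old_binlogs          -- hypothetical name
ON SCHEDULE EVERY 1 HOUR
DO
  PURGE BINARY LOGS BEFORE NOW() - INTERVAL 1 HOUR;
```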