First of all: this is not because of Percona; the differences you are seeing come from the upgrade to 5.6. Percona Server rarely modifies the SQL optimiser from upstream. The changes come from the new optimiser in MySQL 5.6, which Percona Server 5.6 inherits and which was heavily reworked between the two versions (usually for the better).
The reason the execution is slower is that, with an index on (item_format, item_private), the optimiser thinks the query will be faster if it fetches the results through that index, but in reality that causes the join to be done in the "wrong" order. The best index here is (item_private) (assuming it is selective enough; if not, it would be the primary key), as it can use item_private for filtering and the hidden PRIMARY KEY appended to the secondary index for ordering, while it uses the PRIMARY KEY of format for the join. Please note that (item_filtering) or (item_filtering, item_private) are not good indexes in this case.
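As a sketch, assuming a hypothetical item table (the table name is an assumption; the column comes from the answer above), the suggested index could be created like this:

```sql
-- Hypothetical table name; item_private is the filtering column discussed above.
-- InnoDB appends the PRIMARY KEY to every secondary index, which is what
-- allows this index to serve the ordering as well.
ALTER TABLE item ADD INDEX idx_item_private (item_private);
```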
Looking at the optimiser trace and the handler status, the problem seems to come from the predicted number of rows: in 5.6, the optimiser seems to predict a full table scan, while the actual number of rows read is, more or less, the number of rows in the LIMIT clause. This looks like a regression in the query optimiser, and it should be reported once you confirm it is not caused by something specific to your configuration. It is especially bad because it prefers creating a temporary table for the join (potentially on disk, which can be very slow in some cases) over a very light scan.
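A sketch of how that evidence can be gathered in 5.6 (the SELECT 1 is a placeholder for the actual slow query):

```sql
SET SESSION optimizer_trace = 'enabled=on';
FLUSH STATUS;                                          -- reset the handler counters
SELECT 1;                                              -- run the slow query here
SHOW SESSION STATUS LIKE 'Handler%';                   -- rows actually read
SELECT TRACE FROM information_schema.OPTIMIZER_TRACE;  -- predicted plan and costs
SET SESSION optimizer_trace = 'enabled=off';
```

Comparing the row estimates in the trace against the Handler_read_% counters is what reveals the mismatch described above.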
You have already been told several ways of avoiding this problem for the time being: not creating an index containing item_filtering, using STRAIGHT_JOIN, or forcing the usage of item_filtering (or PRIMARY) with an index hint.
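As a sketch of those workarounds, assuming hypothetical item and format tables and join columns (all names here are illustrative; adapt them to the real schema):

```sql
-- 1. Force the join order so the optimiser cannot pick the "wrong" one:
SELECT STRAIGHT_JOIN i.*, f.*
FROM item i
JOIN format f ON f.id = i.item_format
WHERE i.item_private = 0
ORDER BY i.id
LIMIT 10;

-- 2. Force a specific index instead (here the clustered PRIMARY key):
SELECT i.*, f.*
FROM item i FORCE INDEX (PRIMARY)
JOIN format f ON f.id = i.item_format
WHERE i.item_private = 0
ORDER BY i.id
LIMIT 10;
```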
The problem here is just a misunderstanding.
Using --flush-logs with mysqldump simply closes and reopens all log file handles. Note the following:
In the case of binary logs, the current binary log is closed and a new binary log is opened. In essence, it is the only log file that is numerically rotated. For example, suppose you run SHOW MASTER STATUS; and you see the binary log file name mysql-bin.000027. Running FLUSH LOGS; or FLUSH BINARY LOGS; would close mysql-bin.000027 (which stops recording transactions to that file) and open mysql-bin.000028 (which now records incoming transactions).
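The rotation described above can be sketched as a session (the file names are illustrative):

```sql
SHOW MASTER STATUS;   -- reports e.g. File: mysql-bin.000027
FLUSH BINARY LOGS;    -- closes mysql-bin.000027, opens mysql-bin.000028
SHOW MASTER STATUS;   -- now reports File: mysql-bin.000028
SHOW BINARY LOGS;     -- both files are still on disk: rotation is not deletion
```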
However, rotation does not imply automatic deletion.
There are two ways to clean up the binary logs:
DELETE ALL BINARY LOGS
If the DB Server is not used as a MySQL Replication Master, then do the following:
BINARY LOGGING IS ENABLED
mysql> RESET MASTER;
That will erase all binary logs and start with the first one (like mysql-bin.000001).
BINARY LOGGING IS NOT ENABLED
Simply go to the OS and run the delete command (rm for Linux, del for Windows).
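A sketch of the OS-level cleanup on Linux. The demonstration below runs in a scratch directory with stand-in files; on a real server the files live in the datadir (commonly /var/lib/mysql, but that path is an assumption -- check your my.cnf), and you should confirm the server is not writing to them first:

```shell
# Scratch directory standing in for the datadir.
mkdir -p /tmp/binlog-demo && cd /tmp/binlog-demo
touch mysql-bin.000001 mysql-bin.000002 mysql-bin.index  # stand-in files
ls mysql-bin.*                                           # review before deleting
rm -f mysql-bin.0* mysql-bin.index                       # the actual cleanup
```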
PURGE BINARY LOGS
If you want to keep binary logs from the last 48 hours, you run
mysql> PURGE BINARY LOGS BEFORE NOW() - INTERVAL 48 HOUR;
mysqld will erase all binary logs older than 48 hours.
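Alternatively, you can purge up to (but not including) a specific file:

```sql
-- Deletes every binary log before mysql-bin.000028; keeps that file and later ones.
PURGE BINARY LOGS TO 'mysql-bin.000028';
```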
AUTOMATIC PURGING
Simply set the following in my.cnf (or my.ini):
[mysqld]
expire_logs_days=2
Then, either restart mysqld or run the following as root:
mysql> SET GLOBAL expire_logs_days = 2;
That way, every manual or automatic flush of the binary logs will delete logs older than 2 days.
Once expire_logs_days is set, mysqldump will clean up old logs every time it runs with --flush-logs.
Best Answer
It isn't clear what you're asking in scenario 1.
As for scenario 2, you can have a slave that is a higher version than the master, but not vice versa. From the documentation: