From Windows Platform Limitations in the MySQL 5.5 Reference Manual:
On Windows 32-bit platforms, it is not possible by default to use more than 2GB of RAM within a single process, including MySQL.
What's to investigate? We already know what happens when you max out the memory: Really Bad Things™.
If there's a remote chance that your setup will periodically require this much memory, 32-bit Windows is the wrong platform.
Almost universally, the biggest memory consumer is of course the InnoDB buffer pool, but you don't need any queries to test this... you don't need any activity at all, or even any tables, because the entire amount of memory declared for innodb_buffer_pool_size
is allocated immediately when the server starts. The buffer pool never grows, never shrinks, never changes, ever. The number of free pages changes, but those pages are not "free" from the operating system's perspective -- they are still just as allocated, merely marked within InnoDB as containing nothing of interest.
If the operating system refuses to allocate the amount of memory provisioned for the buffer pool, MySQL will simply refuse to start.
This is illustrated here, where the OP mistakenly believed that the server was crashing and restarting "because" memory couldn't be allocated for the buffer pool. In fact, it was crashing for a Linux-specific reason and then refusing to restart because the system would not allocate the total amount of memory required for the pool, due to overuse of available memory by something else. But this allocate-all-at-startup behavior for the InnoDB buffer pool is not platform-specific.
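You can watch this allocation behavior directly with standard MySQL status and system variables (a sketch; run it on an idle, freshly started server):

```sql
-- The configured buffer pool size, in bytes:
SELECT @@innodb_buffer_pool_size;

-- Immediately after startup, with no activity at all, the pages are already
-- allocated from the OS; "free" pages are allocated but unused by InnoDB:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
```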
So, you should be able to set this value near the max and then find that taking the server process over the edge should not require very much additional effort at all. But I'm still not sure what the point is.
As you realize, MySQL uses memory for a variety of different purposes, several of which are dynamically-sized, definable on a per-connection basis, and allocated on demand, which makes it virtually impossible to provision a server based on limiting memory usage to some worst-case scenario absolute value, yet expecting that server to be able to handle its typical load efficiently.
The simplest illustration of this is the fact that you can obviously reduce the theoretical maximum memory utilization of a given instance by restricting the maximum number of simultaneous client connections... but any given application needs a certain number of available connections to perform efficiently, and if that number is below your target value, then you're not really solving anything -- it just feels like you have.
I say, either your server has enough memory for the workload, or it doesn't. If it doesn't, then attempting to "tune" your way out of potential trouble is unlikely to offer much in the way of solutions.
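As a rough illustration of why worst-case provisioning is so impractical, a commonly quoted (and deliberately pessimistic) estimate multiplies the per-connection buffers by max_connections. This is a sketch, not an exact accounting: these buffers are allocated on demand, some can be allocated more than once per connection, and other allocations are not counted here at all.

```sql
-- A deliberately pessimistic "theoretical maximum" estimate, in MB:
SELECT ( @@innodb_buffer_pool_size
       + @@key_buffer_size
       + @@max_connections
         * ( @@sort_buffer_size
           + @@read_buffer_size
           + @@read_rnd_buffer_size
           + @@join_buffer_size
           + @@thread_stack ) ) / 1024 / 1024 AS worst_case_mb;
```

A server that looks fine by this number may still be oversized for its real workload, and one that looks oversubscribed may run for years without trouble -- which is exactly the point.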
Some ideas on how to easily generate demand for more memory...
SELECT * FROM large_table ORDER BY non_indexed_column;
SELECT * FROM large_table WHERE non_indexed_column = some_value;
SELECT * FROM large_table WHERE some_column LIKE '%a_frequent_match%';
Queries like these could trigger the allocation of a sort buffer, a read buffer and/or a random read buffer, which should make a new request to the OS for the memory that buffer requires.
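Because those buffers are sized per session, you can make each such allocation larger before running the queries above. These are standard MySQL session variables; the sizes below are arbitrary examples:

```sql
-- Enlarge the per-session buffers so each qualifying query
-- demands a bigger allocation from the OS:
SET SESSION sort_buffer_size     = 256 * 1024 * 1024;
SET SESSION read_buffer_size     =  64 * 1024 * 1024;
SET SESSION read_rnd_buffer_size =  64 * 1024 * 1024;
```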
Simple solution:
DELIMITER $$

DROP PROCEDURE IF EXISTS `test`.`eat_memory_until_server_crashes` $$
CREATE PROCEDURE `test`.`eat_memory_until_server_crashes`()
BEGIN
  -- this procedure is intended to eat as much memory as it can:
  -- it creates a series of consecutively-numbered session variables,
  -- each as large as your configuration will allow it to be.
  -- do not run this unless you intend to crash your server.
  -- also, do not run it from a gui tool -- use the mysql command line client:
  -- mysql> CALL test.eat_memory_until_server_crashes;
  -- if you kill the query or thread before the server crashes,
  -- the memory consumed will be returned to the OS.
  DECLARE counter INT DEFAULT 0;
  LOOP
    SET counter = counter + 1;
    SET @qry = CONCAT('SET @crash_me_',counter,' := REPEAT(\'a\', @@max_allowed_packet)');
    SELECT counter, @qry;
    PREPARE hack FROM @qry;
    EXECUTE hack;
    DEALLOCATE PREPARE hack;
    -- adjust the timing, or remove this entirely, depending on how quickly
    -- you want this to happen
    DO SLEEP(0.1);
  END LOOP;
END $$

DELIMITER ;
Inspiration for this: Schwartz, Baron; Zaitsev, Peter; Tkachenko, Vadim (2012-03-05). High Performance MySQL: Optimization, Backups, and Replication (Kindle Location 12194). O'Reilly Media. Kindle Edition.
First of all: this is not because of Percona; the differences you are getting are because of the upgrade to 5.6. Percona Server rarely modifies the SQL optimiser it inherits from upstream. The changes come from the new optimiser in MySQL 5.6, which Percona Server 5.6 uses and which was heavily reworked between the two versions (usually for the better).
The reason the execution is slower is that, with an index on (item_format, item_private), the optimiser thinks the query will be faster if it fetches the results using that index, but in reality that causes the join to be done in the "wrong" order. The best index here is (item_private) (assuming it is selective enough; if not, it would be the primary key), as it can use item_private for filtering and the hidden PRIMARY KEY appended to the secondary key for ordering, while it uses the PRIMARY KEY of format for the join. Please note that (item_filtering) or (item_filtering, item_private) are not good indexes in this case.
Looking at the optimiser trace and the handler status, the problem seems to come from the predicted number of rows: the old method, in 5.6, seems to predict a full table scan, while the actual number of rows read is, more or less, the number of rows in the LIMIT clause. This seems to be a regression in the query optimiser, and it should be reported if you confirm that it is not due to anything unusual in your own configuration. It is especially bad because it prefers creating a temporary table for the join (potentially on disk, so it may be very slow in some cases) over a very light scan.
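The optimiser trace mentioned above can be captured like this in MySQL 5.6 (a sketch; substitute the actual problem query for the placeholder):

```sql
SET optimizer_trace = 'enabled=on';

-- ... run the slow query here ...

-- The trace shows the plans considered and the row estimates behind each choice:
SELECT trace FROM information_schema.OPTIMIZER_TRACE;

SET optimizer_trace = 'enabled=off';
```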
You have already been told several ways of avoiding this problem for the time being: not creating an index containing item_filtering, using STRAIGHT_JOIN, or forcing the usage of item_filtering (or PRIMARY).
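Those workarounds can be sketched as follows. The table and alias names here are hypothetical placeholders (the question's actual schema is not shown); only the column and index names come from the discussion above:

```sql
-- 1) Force the join order so the driving table is read first:
SELECT STRAIGHT_JOIN i.*, f.*
FROM item AS i
JOIN format AS f ON f.id = i.item_format
WHERE i.item_private = 0
LIMIT 10;

-- 2) Or force the better index (or the primary key) explicitly,
--    assuming the index is named after its column:
SELECT i.*, f.*
FROM item AS i FORCE INDEX (item_private)
JOIN format AS f ON f.id = i.item_format
WHERE i.item_private = 0
LIMIT 10;
```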
Best Answer
I have an article on this here: http://www.tocker.ca/configuring-mysql-to-use-minimal-memory.html