From Windows Platform Limitations in the MySQL 5.5 Reference Manual:
On Windows 32-bit platforms, it is not possible by default to use more than 2GB of RAM within a single process, including MySQL.
What's to investigate? We already know what happens when you max out the memory: Really Bad Things™.
If there's a remote chance that your setup will periodically require this much memory, 32-bit Windows is the wrong platform.
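For reference, the setting under discussion lives in the [mysqld] section of my.ini; the figure below is purely illustrative, chosen to sit close to the 2GB per-process ceiling:

```ini
[mysqld]
# a value this close to the 32-bit per-process limit leaves almost no
# headroom for the many other buffers the server allocates
innodb_buffer_pool_size = 1600M
```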
Almost universally, the biggest memory consumer is of course the InnoDB Buffer Pool, but there's no need for any queries to test this... you don't need any activity at all, or any tables, because the entire amount of memory declared for innodb_buffer_pool_size
is allocated immediately when the server starts. The buffer pool never grows, never shrinks, never changes, ever. The number of free pages changes, but those are not "free" from the operating system's perspective -- they're still just as allocated, merely marked as containing nothing of interest within InnoDB.
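If you want to watch this for yourself, the server's status counters show the pool fully populated with pages from the moment it starts (these are standard SHOW STATUS counters; in 5.5, InnoDB pages are a fixed 16KB):

```sql
-- on a freshly started server with no activity, the entire pool is
-- already allocated; "free" pages are still owned by the process,
-- just not yet holding anything of interest to InnoDB
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';

-- total pages * 16KB should account for innodb_buffer_pool_size
SELECT @@innodb_buffer_pool_size / 16384 AS expected_total_pages;
```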
If the operating system refuses to allocate the amount of memory provisioned for the buffer pool, MySQL will simply refuse to start.
This is illustrated here, where the OP mistakenly believed that the server was crashing and restarting "because" memory couldn't be allocated for the buffer pool. In fact, it was crashing for a Linux-specific reason, and then refusing to restart because the system would not allocate the total amount of memory required for the pool -- something else was overusing the available memory. This allocate-all-at-startup behavior for the InnoDB buffer pool, however, is not platform-specific.
So, you should be able to set this value near the max and then find that taking the server process over the edge should not require very much additional effort at all. But I'm still not sure what the point is.
As you realize, MySQL uses memory for a variety of different purposes, several of which are dynamically-sized, definable on a per-connection basis, and allocated on demand, which makes it virtually impossible to provision a server based on limiting memory usage to some worst-case scenario absolute value, yet expecting that server to be able to handle its typical load efficiently.
The simplest illustration of this is the fact that you can obviously reduce the theoretical maximum memory utilization of a given instance by restricting the maximum number of simultaneous client connections... but any given application needs a certain number of available connections to perform efficiently, and if that number is below your target value, then you're not really solving anything -- it just feels like you have.
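As a sketch of that worst-case arithmetic, the commonly quoted approximation multiplies the per-session buffers by max_connections (an upper-bound estimate only; some of these buffers can be allocated more than once per connection, and others not at all):

```sql
-- approximate theoretical maximum, in bytes; real usage is normally
-- far below this, which is exactly why provisioning a server against
-- this number is impractical
SELECT @@innodb_buffer_pool_size
     + @@key_buffer_size
     + @@max_connections * ( @@sort_buffer_size
                           + @@read_buffer_size
                           + @@read_rnd_buffer_size
                           + @@join_buffer_size
                           + @@thread_stack ) AS worst_case_bytes;
```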
I say, either your server has enough memory for the workload, or it doesn't. If it doesn't, then attempting to "tune" your way out of potential trouble is unlikely to offer much in the way of solutions.
Some ideas on how to easily generate demand for more memory...
SELECT * FROM large_table ORDER BY non_indexed_column;
SELECT * FROM large_table WHERE non_indexed_column = some_value;
SELECT * FROM large_table WHERE some_column LIKE '%a_frequent_match%';
Queries like these could trigger the allocation of a sort buffer, a read buffer and/or a random read buffer, which should make a new request to the OS for the memory that buffer requires.
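Since those buffers are sized per session, you can inflate the demand that any single connection creates before running such a query (the sizes here are arbitrary, and large_table / non_indexed_column are the same placeholders as above):

```sql
-- each SET applies only to the current connection, and the buffer is
-- not actually requested from the OS until a query needs it
SET SESSION sort_buffer_size     = 256 * 1024 * 1024;  -- ORDER BY without a usable index
SET SESSION read_buffer_size     = 64 * 1024 * 1024;   -- sequential table scans
SET SESSION read_rnd_buffer_size = 64 * 1024 * 1024;   -- reading rows after a sort

SELECT * FROM large_table ORDER BY non_indexed_column;
```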
Simple solution:
DELIMITER $$

DROP PROCEDURE IF EXISTS `test`.`eat_memory_until_server_crashes` $$
CREATE PROCEDURE `test`.`eat_memory_until_server_crashes`()
BEGIN
  -- this procedure is intended to eat as much memory as it can;
  -- it creates a series of consecutively-numbered session variables
  -- as large as your configuration will allow them to be.
  -- do not run this unless you intend to crash your server.
  -- also, do not run from a gui tool -- use the mysql command line client:
  --   mysql> CALL test.eat_memory_until_server_crashes;
  -- if you kill the query or thread before the server crashes,
  -- the memory consumed will be returned to the OS
  DECLARE counter INT DEFAULT 0;
  LOOP
    SET counter = counter + 1;
    SET @qry = CONCAT('SET @crash_me_', counter, ' := REPEAT(\'a\', @@max_allowed_packet)');
    SELECT counter, @qry;
    PREPARE hack FROM @qry;
    EXECUTE hack;
    DEALLOCATE PREPARE hack;
    -- adjust timing or remove this entirely depending on how quickly
    -- you want this to happen
    DO SLEEP(0.1);
  END LOOP;
END $$

DELIMITER ;
Inspiration for this: Schwartz, Baron; Zaitsev, Peter; Tkachenko, Vadim (2012-03-05). High Performance MySQL: Optimization, Backups, and Replication (Kindle Location 12194). O'Reilly Media. Kindle Edition.
Best Answer
The buffer pool is not used to materialize query results. It stores table and index pages to be shared between all connected sessions.
Each session allocates its own result buffer, where the selected data is copied from the buffer pool. This applies whether a join is involved or not.
If no additional operation is required, partial results will be sent to the client as soon as the result buffer is full. If the result set must be sorted, that will be done in the sort buffer (and temporary tables if necessary), then sorted data will be sent to the client, via the result buffer, as before.
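You can check whether a given query will take that sort-buffer path by looking at its plan: "Using filesort" in the Extra column of EXPLAIN means the rows cannot be returned in index order and must pass through the sort buffer (table and column names are the placeholders used earlier):

```sql
EXPLAIN SELECT * FROM large_table ORDER BY non_indexed_column;
-- "Using filesort" in the Extra column indicates the sort buffer will
-- be used; "Using temporary" indicates an intermediate temporary table
```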
You can read more about how MySQL uses memory in the documentation.