I have a MySQL server with 2 GB of memory. What's the best configuration to get the best results? How can I "distribute" this memory properly across the MySQL server's buffers and caches?
MySQL – the best use of memory in MySQL
Related Solutions
Memory can help you by caching and thus reducing I/O.
However, that won't reduce CPU usage which is your problem. This is an unusual bottleneck, as CPUs are insanely fast for most database work and I/O tends to be the bottleneck.
In your case, it is even more unusual, because you have 16 cores and VPSs tend not to have great I/O performance. First of all, make sure that you really are CPU bound. vmstat should help here. If you have high numbers in the wa column, you are probably I/O bound; high numbers in the us column indicate that you really are CPU bound.
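As a rough sketch of reading those columns (the captured sample line and the thresholds are illustrative rules of thumb, not fixed values):

```shell
# A captured line of `vmstat 1` output (procps column layout:
#  r b swpd free buff cache si so bi bo in cs us sy id wa st)
sample='1 0 0 219328 257020 3119240 0 0 5 10 100 200 85 5 8 2 0'

# In practice you would pipe live output: vmstat 1 3 | tail -1 | awk '...'
echo "$sample" | awk '{us = $13; wa = $16
  if (wa > 20)      print "likely I/O bound (wa=" wa ")"
  else if (us > 70) print "likely CPU bound (us=" us ")"
  else              print "no obvious saturation"}'
```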
If you are CPU bound, analyze which queries you are executing and why they take long to complete. Either you are executing a lot of queries per second, or they involve complex calculations (e.g. aggregates, functions, etc.). The solution for the former is usually caching on the frontend, which means executing fewer queries. The latter is solved by simplifying your queries (if possible; you might have queries that are needlessly complex) and by calculating things once and reusing the result: say you have lots of aggregate queries; create a table with the aggregation results and query that instead of running the aggregates continuously. The most efficient way to research this is to log which queries you are running and analyze the log; tools exist that do this neatly.
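A minimal sketch of both ideas: the logging settings below use real MySQL variables, while the orders table and its columns are hypothetical.

```sql
-- Log queries slower than one second, then analyze the log
-- (e.g. with mysqldumpslow or pt-query-digest):
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;

-- Instead of running an aggregate on every request...
SELECT product_id, COUNT(*) FROM orders GROUP BY product_id;

-- ...materialize it once and refresh periodically (e.g. from cron):
CREATE TABLE order_counts (
  product_id INT PRIMARY KEY,
  num_orders INT NOT NULL
);
REPLACE INTO order_counts
  SELECT product_id, COUNT(*) FROM orders GROUP BY product_id;

-- Reads become cheap primary-key lookups:
SELECT num_orders FROM order_counts WHERE product_id = 42;
```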
If you are I/O bound, then you can tune memory usage, although the OS cache is often already doing its job. Take a look at free:
$ free
total used free shared buffers cached
Mem: 6122892 5903564 219328 0 257020 3119240
-/+ buffers/cache: 2527304 3595588
Swap: 11956220 65980 11890240
The first line of numbers accounts for memory usage including OS caches; the second doesn't. By comparing both you can see how well OS caching is working. Also, vmstat will already tell you how much I/O you are performing (the bi and bo columns). Often, the key to solving I/O problems is query tuning and indexing; indexing prevents full table scans (i.e. reading the entire table to get a limited set of data, which causes excessive and unnecessary I/O). Again, logging queries is most effective here: running EXPLAIN on the queries will tell you which operations the database performs to execute the query, which often leads you to understanding inefficiencies in the query (and altering it to solve them) or finding out about needed indexes.
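For example (the table and index names here are hypothetical), EXPLAIN exposes a full table scan, and an index removes it:

```sql
-- type: ALL and a large "rows" estimate mean a full table scan:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- After adding an index, the same query shows type: ref
-- and touches only the matching rows:
ALTER TABLE orders ADD INDEX idx_customer (customer_id);
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```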
From Windows Platform Limitations in the MySQL 5.5 Reference Manual:
On Windows 32-bit platforms, it is not possible by default to use more than 2GB of RAM within a single process, including MySQL.
What's to investigate? We already know what happens when you max out the memory: Really Bad Things™.
If there's a remote chance that your setup will periodically require this much memory, 32-bit Windows is the wrong platform.
Almost universally, the biggest memory consumer is of course the InnoDB buffer pool, but there's no need for any queries to test this... you don't need any activity at all, or any tables, because the entire amount of memory declared for innodb_buffer_pool_size is allocated immediately when the server starts. The buffer pool never grows, never shrinks, never changes, ever. The number of free pages changes, but those are not "free" from the operating system's perspective -- they're still just as allocated, merely marked as containing nothing of interest within InnoDB.
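You can watch this on a completely idle server; the counters below are standard InnoDB status variables, and the pool is fully sized from the moment the server comes up:

```sql
-- pages_total reflects the configured pool size (in 16 KB pages)
-- immediately after startup, before any query has run;
-- pages_free shrinks with use, but the OS-level allocation never changes:
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';
```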
If the operating system refuses to allocate the amount of memory provisioned for the buffer pool, MySQL will simply refuse to start.
This is illustrated here, where the OP mistakenly believed that the server was crashing and restarting "because" memory couldn't be allocated for the buffer pool, but was in fact crashing for a Linux-specific reason but then refusing to restart because the system would not allocate the total amount of memory required for the pool, due to overuse of available memory by something else... but this allocate-all-at-startup behavior for the InnoDB buffer pool is not platform-specific.
So, you should be able to set this value near the max and then find that taking the server process over the edge should not require very much additional effort at all. But I'm still not sure what the point is.
As you realize, MySQL uses memory for a variety of different purposes, several of which are dynamically-sized, definable on a per-connection basis, and allocated on demand, which makes it virtually impossible to provision a server based on limiting memory usage to some worst-case scenario absolute value, yet expecting that server to be able to handle its typical load efficiently.
The simplest illustration of this is the fact that you can obviously reduce the theoretical maximum memory utilization of a given instance by restricting the maximum number of simultaneous client connections... but any given application needs a certain number of available connections to perform efficiently, and if that number is below your target value, then you're not really solving anything -- it just feels like you have.
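A back-of-the-envelope sketch of that worst case (all figures are hypothetical; real per-thread usage depends on sort_buffer_size, join_buffer_size, the read buffers, and more) is global buffers plus max_connections times the per-thread buffers:

```shell
# Illustrative settings, all in MB
innodb_buffer_pool=1024   # global, allocated once at startup
key_buffer=32             # global (MyISAM index cache)
per_thread=$(( 2 + 2 + 1 + 1 ))  # sort + join + read + read_rnd, per connection
max_connections=150

# Theoretical peak if every connection maxed out every buffer at once:
echo "worst case: $(( innodb_buffer_pool + key_buffer + max_connections * per_thread )) MB"
```

The point of the paragraph above is precisely that this bound is nearly useless for provisioning: a typical workload never approaches it, but capping it by lowering max_connections starves the application instead.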
I say, either your server has enough memory for the workload, or it doesn't. If it doesn't, then attempting to "tune" your way out of potential trouble is unlikely to offer much in the way of solutions.
Some ideas on how to easily generate demand for more memory...
SELECT * FROM large_table ORDER BY non_indexed_column;
SELECT * FROM large_table WHERE non_indexed_column = some_value;
SELECT * FROM large_table WHERE some_column LIKE '%a_frequent_match%';
Queries like these could trigger the allocation of a sort buffer, a read buffer and/or a random read buffer, which should make a new request to the OS for the memory that buffer requires.
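Those buffers are sized per session and allocated on demand. A sketch of inspecting and deliberately inflating one of them, reusing the hypothetical large_table from the examples above:

```sql
-- The per-connection buffers behind sorts, scans and joins:
SHOW VARIABLES WHERE Variable_name IN
  ('sort_buffer_size', 'read_buffer_size',
   'read_rnd_buffer_size', 'join_buffer_size');

-- Raising one for this session lets a single query demand far more memory:
SET SESSION sort_buffer_size = 256 * 1024 * 1024;  -- 256 MB, deliberately oversized
SELECT * FROM large_table ORDER BY non_indexed_column;
```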
Simple solution:
DELIMITER $$

DROP PROCEDURE IF EXISTS `test`.`eat_memory_until_server_crashes` $$
CREATE PROCEDURE `test`.`eat_memory_until_server_crashes`()
BEGIN
  -- this procedure is intended to eat as much memory as it can;
  -- it creates a series of consecutively-numbered session variables
  -- as large as your configuration will allow them to be.
  -- do not run this unless you intend to crash your server.
  -- also, do not run it from a GUI tool -- use the mysql command line client:
  -- mysql> CALL test.eat_memory_until_server_crashes;
  -- if you kill the query or thread before the server crashes,
  -- the memory consumed will be returned to the OS
  DECLARE counter INT DEFAULT 0;
  LOOP
    SET counter = counter + 1;
    SET @qry = CONCAT('SET @crash_me_',counter,' := REPEAT(\'a\', @@max_allowed_packet)');
    SELECT counter, @qry;
    PREPARE hack FROM @qry;
    EXECUTE hack;
    DEALLOCATE PREPARE hack;
    -- adjust the timing, or remove this entirely, depending on how
    -- quickly you want this to happen
    DO SLEEP(0.1);
  END LOOP;
END $$

DELIMITER ;
Inspiration for this: Schwartz, Baron; Zaitsev, Peter; Tkachenko, Vadim (2012-03-05). High Performance MySQL: Optimization, Backups, and Replication (Kindle Location 12194). O'Reilly Media. Kindle Edition.
Best Answer
This strongly depends on the application using the database. A good starting point for optimization is the MySQL Performance Tuning Primer Script.