MySQL – Why would an increase in innodb_buffer_pool_size slow down MySQL?

Tags: buffer-pool, innodb, memory, mysql

5.1.68-cll – MySQL Community Server on CentOS

The system has 32GB of RAM.

I increased innodb_buffer_pool_size from 10240M to 15360M (10GB -> 15GB).

Time taken for a series of identical operations increased from 720 to 822 seconds (14% increase).

This was the result of only a single test at each setting, but 4 previous tests performed a few months ago resulted in times between 726 and 740 seconds.

I just tried running it again with 8GB, and the time taken was 719s.

Why would more memory result in a slower process?

EDIT: More details on process

The process that I'm testing involves emptying some tables and rebuilding them from data in existing tables. I'm not sure if it's using INSERT ... SELECT or if it's SELECTing the data and then using PHP to build long INSERT statements. If that matters, I can find out.

There are no schema definition changes being made.
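
For illustration only (the table and column names below are made up; this is just the general shape of the two approaches, not the actual code), the rebuild either happens entirely inside MySQL or is driven from PHP:

-- Style A: rebuild entirely inside MySQL
TRUNCATE TABLE summary_table;
INSERT INTO summary_table (col_a, col_b)
SELECT col_a, col_b
FROM source_table;

-- Style B: SELECT the rows out, then have PHP assemble one long
-- multi-row INSERT and send it back, e.g.
-- INSERT INTO summary_table (col_a, col_b) VALUES (1,'x'), (2,'y'), ...;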

Here is the output of numactl --hardware while the server is relatively idle:

root@server [~]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32740 MB
node 0 free: 6216 MB
node distances:
node   0
  0:  10

And free -m

root@server [~]# free -m
             total       used       free     shared    buffers     cached
Mem:         32081      25864       6216          0       2591      12791
-/+ buffers/cache:      10482      21599
Swap:        15994         16      15977

Edit by RolandoMySQLDBA

Please run this query

SELECT
    InnoDBSpace / POWER(1024,1) InnoDB_KB,
    InnoDBSpace / POWER(1024,2) InnoDB_MB,
    InnoDBSpace / POWER(1024,3) InnoDB_GB
FROM
(
    SELECT SUM(data_length+index_length) InnoDBSpace
    FROM information_schema.tables
    WHERE ENGINE='InnoDB'
) A;

RESULT:

InnoDB_KB InnoDB_MB InnoDB_GB
8413536 8216.34375 8.02377319335938

and this one

SHOW GLOBAL STATUS LIKE 'innodb_buffer_pool_pages%';

RESULT:

Innodb_buffer_pool_pages_data     410035
Innodb_buffer_pool_pages_dirty    204
Innodb_buffer_pool_pages_flushed  826954
Innodb_buffer_pool_pages_free     99231
Innodb_buffer_pool_pages_misc     15022
Innodb_buffer_pool_pages_total    524288

While the process is running:

root@server [~]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32740 MB
node 0 free: 5461 MB
node distances:
node   0
  0:  10

and:

             total       used       free     shared    buffers     cached
Mem:         32081      26658       5423          0       2603      12948
-/+ buffers/cache:      11106      20975
Swap:        15994         16      15977

Best Answer

From what you gave me thus far

InnoDB Architecture

[InnoDB architecture diagram]

CONJECTURE #1

Since InnoDB allocates its buffer pool contiguously, one can only speculate whether there is some form of fragmentation of data and index pages, among other things, in the Buffer Pool. What other things? Changes to secondary non-unique indexes are also posted first inside the Buffer Pool before they are merged into the System Tablespace file (better known as ibdata1). Please notice the Insert Buffer within the InnoDB Architecture diagram.
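
If you want to gauge how much Insert Buffer and Adaptive Hash Index activity is going on, one place to look (assuming the account has the PROCESS privilege) is the engine status output:

-- Look for the "INSERT BUFFER AND ADAPTIVE HASH INDEX" section, which
-- reports the ibuf size in pages, its free list length, and merge activity.
SHOW ENGINE INNODB STATUS\G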

CONJECTURE #2

While a larger buffer pool can reduce disk I/O, a buffer pool that's too large may have issues, such as the fragmentation issue I just mentioned.

Perhaps sparse memory, though contiguous, is not a good thing either. I discussed something of that nature back on Oct 22, 2012: How large should be mysql innodb_buffer_pool_size?

I noticed the miscellaneous pages (Innodb_buffer_pool_pages_misc) within the buffer pool. Those take up about 235 MB (15022 pages * 16384 bytes), which is just overhead for indexing and row locks. Notwithstanding, this overhead can also be victimized by a large buffer pool with lots of sizable empty gaps.
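
As a quick sanity check of that figure (assuming the default 16 KB page size):

-- 15022 misc pages * 16384 bytes per page, expressed in MB
SELECT 15022 * 16384 / POWER(1024,2) AS misc_MB;  -- roughly 234.7 MB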

CONJECTURE #3

Data and index pages are not the only things that occupy the Buffer Pool. An annoying neighbor they must share space with is the Insert Buffer. That section of the Buffer Pool contains the changes for all non-unique indexes. Depending on the number of indexes each InnoDB table has, up to half of the Buffer Pool could be used. This should be a good reminder to look over all InnoDB tables and remove redundant indexes.
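
As a rough starting point for that review (this is an illustrative query of my own, not a definitive redundancy check), the following lists pairs of indexes on the same table that share the same leading column, which is a common sign of redundancy worth examining:

SELECT s1.table_schema, s1.table_name,
       s1.index_name, s2.index_name AS overlaps_with,
       s1.column_name AS shared_leading_column
FROM information_schema.statistics s1
JOIN information_schema.statistics s2
  ON  s1.table_schema = s2.table_schema
  AND s1.table_name   = s2.table_name
  AND s1.column_name  = s2.column_name
  AND s1.seq_in_index = 1
  AND s2.seq_in_index = 1
  AND s1.index_name   < s2.index_name
WHERE s1.table_schema NOT IN ('mysql','information_schema');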

Other posts describe how to hunt down redundant indexes and why to get rid of them.

Think about it: if a table has redundant indexes, INSERTs and UPDATEs to that table will result in rebalancing tree pages 45% of the time (in the worst-case scenario). Such tree rebalancing will happen multiple times for a table with a lot of indexes. Removing redundant indexes cuts down the time spent writing those changes to the Insert Buffer and, in turn, to the Index Pages of each InnoDB table.

Increasing the Buffer Pool Size leaves room for the Insert Buffer to run amok if you have InnoDB tables with too many indexes, especially redundant ones. Another way to spot such tables is to run this query:

SELECT table_schema,table_name,data_length,index_length
FROM information_schema.tables WHERE engine='InnoDB'
AND data_length/(data_length+index_length) < 0.8;

This will give you a general idea of which tables need to have their indexes looked over for redundancy of leading columns.

EPILOGUE

Keep in mind that the Buffer Pool holds:

  • Data Pages
  • Index Pages
  • Insert Buffer for Index Changes
  • Adaptive Hash Index
  • Opened tables info
  • Miscellaneous Overhead

Refer back to the InnoDB Architecture diagram above.

Wasted space in a Buffer Pool could take more time to sift through, especially with a single core that is solely responsible for all of RAM.

My advice would be to keep the innodb_buffer_pool_size at 10G.