MySQL Performance vs. Memory

Tags: innodb, memory, MySQL, mysql-5.7, performance

We are running MySQL Ver 5.7.18 on an Amazon Lightsail Linux instance (2GHz Single Core CPU, 2GB RAM, 40GB SSD).

Besides MySQL, we also run some Java services on this machine, which access the database.

MySQL runs on the default configurations.

After rebooting the machine or restarting the MySQL service, we observe a huge performance degradation on some SELECT statements, primarily against a table with about 15M rows (accumulated over one year). The SELECT statements are user-driven (web frontend) and mostly fetch the newest entries.
After a few hours the performance of these statements improves and they become sufficiently fast (mostly <1000ms). However, as soon as we run queries that touch older rows, performance degrades dramatically again.
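
For illustration, a typical "newest entries" query looks roughly like this (the table and column names are simplified placeholders, not our real schema); prefixing it with EXPLAIN shows whether it can use an index on the timestamp column:

    -- hypothetical "newest entries" query; prefix with EXPLAIN to check index use
    SELECT id, created_at, payload
    FROM events
    WHERE created_at >= NOW() - INTERVAL 1 DAY
    ORDER BY created_at DESC
    LIMIT 50;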

If we run mysqltuner.pl, we get the following output:

-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MEMORY +MRG_MYISAM +MyISAM +PERFORMANCE_SCHEMA
[--] Data in InnoDB tables: 9.2G (Tables: 33)
[OK] Total fragmented tables: 0

-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 20h 4m 38s (4M q [59.289 qps], 20K conn, TX: 8G, RX: 1G)
[--] Reads / Writes: 82% / 18%
[--] Binary logging is disabled
[--] Physical Memory     : 2.0G
[--] Max MySQL memory    : 465.9M
[--] Other process memory: 1.1G
[--] Total buffers: 296.0M global + 1.1M per thread (151 max threads)
[--] P_S Max memory usage: 72B
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 333.1M (16.64% of installed RAM)
[OK] Maximum possible memory usage: 465.9M (23.28% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (16/4M)
[OK] Highest usage of available connections: 21% (33/151)
[OK] Aborted connections: 0.11%  (23/20438)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 1% (67 temp sorts / 3K sorts)
[OK] No joins without indexes
[OK] Temporary tables created on disk: 0% (623 on disk / 213K total)
[OK] Thread cache hit rate: 99% (45 created / 20K connections)
[OK] Table cache hit rate: 75% (1K open / 1K opened)
[OK] Open file limit used: 1% (66/5K)
[OK] Table locks acquired immediately: 100% (24K immediate / 24K locks)

-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[!!] InnoDB buffer pool / data size: 256.0M/9.2G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (37.5 %): 48.0M * 2/256.0M should be equal 25%
[OK] InnoDB buffer pool instances: 1
[--] Number of InnoDB Buffer Pool Chunk : 2 for 1 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 98.28% (338433973 hits/ 344356426 total)
[OK] InnoDB Write log efficiency: 91.11% (3684095 hits/ 4043723 total)
[OK] InnoDB log waits: 0.00% (0 waits / 359628 writes)

-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
    Set up a Password for user with the following SQL statement ( SET PASSWORD FOR 'user'@'SpecificDNSorIp' = PASSWORD('secure_password'); )
    Restrict Host for user@% to user@SpecificDNSorIp
    MySQL was started within the last 24 hours - recommendations may be inaccurate
    Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
    Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: [link]
Variables to adjust:
    innodb_buffer_pool_size (>= 9.2G) if possible.
    innodb_log_file_size should be (=32M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
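
We can address the name-resolution warning ourselves; a minimal my.cnf sketch (only safe once all accounts are defined by IP or subnet, since hostname-based accounts would stop matching):

    # /etc/mysql/my.cnf (path may differ per distribution)
    [mysqld]
    # skip the reverse DNS lookup performed for every new connection
    skip-name-resolve = 1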

The output also suggests raising innodb_buffer_pool_size to at least the size of the data (9.2G), which is unfortunately far more RAM than we have.
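
To see how often the 256M buffer pool has to go to disk, we can compare the relevant counters (a quick diagnostic only, not a tuning change):

    -- logical read requests vs. reads that actually had to hit disk
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

    -- current buffer pool size in bytes
    SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';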

Is the above behavior caused by the buffering/caching strategy of MySQL or could it be caused by a bad configuration?

Does a MySQL server need at least as much RAM as the size of the database in order to access the data efficiently?

We expect the database to grow at a rate of 20M rows per month in the future. Doing the math (about 10GB per 20M rows), that would mean roughly 120GB of additional RAM after a year if the whole data set had to fit in memory.
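
To track actual growth rather than estimate it, the on-disk footprint per table can be read from information_schema (query sketch; our real table names omitted):

    -- data + index size in GB of the five largest tables in the current schema
    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
    FROM information_schema.tables
    WHERE table_schema = DATABASE()
    ORDER BY (data_length + index_length) DESC
    LIMIT 5;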

Is there a rule of thumb for when to distribute the database rather than install more RAM?

Best Answer

  • Leave the Query Cache off -- you don't have enough RAM to waste any of it on the QC.
  • With only 2GB of RAM, 256M may be as big as is safe for innodb_buffer_pool_size. Raising it would lead to swapping, which is worse for performance.
  • The best solution is speeding up the queries. We need to find the worst ones; they probably involve table scans, and some index (probably a composite index), or a reformulation of the query, would speed them up greatly. (A sketch of how to start is below this list.)
  • If that fails, then the 'right' solution is to get a bigger VM. (But I think we can improve the indexes/queries.)
  • "Distribute the database" sounds like 'sharding' or some home-grown solution. This is a big task; I would put this 3rd on the list of options.
  • Be aware that Java is taking valuable RAM space.
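
As a starting point for finding and fixing the worst queries (the table, column, and index names below are placeholders; the right composite index depends on what the slow queries actually filter and sort on):

    -- capture everything slower than 1 second in the slow query log
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;

    -- for each captured query, check the plan with EXPLAIN, then add a
    -- composite index that matches the WHERE filter and the ORDER BY, e.g.:
    ALTER TABLE events ADD INDEX idx_user_created (user_id, created_at);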