Amazon-rds – IOPS on Amazon RDS

amazon-rds

I'm very new to AWS. Is one insert operation on an Amazon RDS table counted as 1 IOP?

I have a use case of inserting about 1 million records every day, and I would like the job to finish in about 10-20 minutes. Will this count as 1 million IOPs? Would that mean I need 1,000,000 / (10 × 60) ≈ 1,667 IOPS?
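For reference, that arithmetic can be checked in any SQL client. The figures below assume the full million rows spread evenly over a 10- or 20-minute window and exactly one IO per row, which, as the answer explains, is not how it actually works:

```sql
-- Naive required write rate if every inserted row were exactly one IO
SELECT 1000000 / (10 * 60) AS iops_10_minutes,  -- about 1667
       1000000 / (20 * 60) AS iops_20_minutes;  -- about 833
```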

Also, will a SELECT query count as 1 IOP, or would it count as the number of records being returned?

Best Answer

Will a SELECT query count as 1 IOP or would it equal the number of records being returned?

No. Assuming that Amazon counts IOPS accurately (after all, these are virtual machines), there are several layers of abstraction between an SQL statement and the resulting disk IO operations:

  • SQL operations: visible in the Com_select, Com_insert, etc. counters in SHOW GLOBAL STATUS. Note that with a cache such as the query cache enabled, close to no disk operations may be performed, because results are returned directly from memory.
  • Handler ("row") operations: visible in the Handler_% counters in SHOW GLOBAL STATUS.
  • Engine operations: reading or inserting a single row (Handler_write) is done differently by each engine, and may or may not involve MySQL buffering. For example, for InnoDB you can compare the GLOBAL STATUS counters for buffer pool read requests (Innodb_buffer_pool_read_requests) against actual disk reads (Innodb_buffer_pool_reads). A single row read may require several reads of index pages and of older row versions in the UNDO history before the row is returned.
  • Filesystem cache: data that is frequently read or written is usually cached in memory by the operating system. This speeds up subsequent reads and writes by avoiding actual disk IO (especially on platforms like Amazon, where the IO may be on a local disk or may go over the network with non-deterministic latency).
  • Disk IO: even when actual IO happens, writes are usually performed in blocks, not per row.
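As a sketch of how these layers can be inspected on MySQL (the status variable names below are standard, but the useful signal is the delta between two samples, not the absolute values since server start):

```sql
-- Statement-level counters (SQL operations)
SHOW GLOBAL STATUS LIKE 'Com_select';
SHOW GLOBAL STATUS LIKE 'Com_insert';

-- Row-level (handler) operations
SHOW GLOBAL STATUS LIKE 'Handler_read%';
SHOW GLOBAL STATUS LIKE 'Handler_write';

-- InnoDB: logical page reads vs. reads that actually hit the disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
```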

I have not gone into detail about MySQL and OS internals, but even without doing so you can see that things are not obvious. In fact, reducing IO, and understanding why there is so much of it in some cases, is one of the most important areas of database optimisation.

An engine like InnoDB will write each page 3 or 4 times on insertion (once to the transaction log, once to the actual tablespace, once to the doublewrite buffer and, optionally, to the binary log), and even this is a simplification: it does not take into account index updates, metadata, statistics, etc. The best way to know how many IOPS you will need is to test on your particular setup. Even a smaller-scale test with fake data will be more reliable than trying to theorise. Caching at both the OS and the database level will make the numbers dramatically different.
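One way to run the kind of test suggested above is to sample the InnoDB IO counters before and after a representative batch insert. The table and values below are hypothetical placeholders, and the counters are server-wide, so this is only meaningful on an otherwise quiet instance:

```sql
-- Counters before the batch (Innodb_data_writes / Innodb_data_written
-- are standard InnoDB status variables)
SHOW GLOBAL STATUS LIKE 'Innodb_data_writ%';

-- Hypothetical representative batch (table and values are placeholders)
INSERT INTO events (id, payload)
VALUES (1, 'a'), (2, 'b'), (3, 'c');

-- Counters after the batch: the delta, divided by rows inserted and by
-- elapsed time, gives a per-row and per-second write figure for this setup
SHOW GLOBAL STATUS LIKE 'Innodb_data_writ%';
```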

For example, the other day I inserted tens of millions of records at 200,000 rows/s with LOAD DATA (no SQL overhead) because I had enough memory to write almost exclusively to the InnoDB buffer pool. It took several minutes for the disk to synchronise with the memory contents, though.
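A minimal sketch of that kind of bulk load is below; the file path, table name and column list are hypothetical, and LOAD DATA LOCAL INFILE must be enabled on both client and server (on RDS, the LOCAL variant is the one available, since you cannot place files on the server's filesystem):

```sql
-- Bulk load avoids per-statement SQL overhead; rows land in the
-- buffer pool first and are flushed to disk in the background
LOAD DATA LOCAL INFILE '/tmp/rows.csv'
INTO TABLE events
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(id, payload);
```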