5000 inserts per minute is about 83 inserts per second. With 5 indexes, that's roughly 400 physical rows inserted per second. If the workload were in-memory, this would not pose a problem even for the smallest of servers, even with row-by-row inserts done in the most inefficient way I can think of. 83 trivial queries per second are simply not interesting from a CPU standpoint.
You are probably disk-bound. You can verify this by looking at wait stats or STATISTICS IO.
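For example, a minimal sketch using standard SQL Server DMVs (the specific waits you see will vary):

```sql
-- Top waits since the last restart; high PAGEIOLATCH_* waits point at disk reads.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- For a single insert statement, show logical/physical reads in the Messages output.
SET STATISTICS IO ON;
```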
Your queries probably touch a lot of different pages so that the buffer pool does not have space for all of them. This causes frequent page reads and probably random disk writes as well.
Imagine a table where you only ever physically insert at the end because of an ever-increasing key. The working set would be one page: the last one. This would also generate sequential IO when the lazy writer or checkpoint process writes the "end" of the table to disk.
Imagine a table with randomly-placed inserts (classic example: a GUID key). Here, all pages are the working set because a random page is touched by each insert. IOs are random. This is the worst case when it comes to working set.
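To make the two extremes concrete, here's a hedged sketch (table and column names are made up):

```sql
-- Append-only: an ever-increasing key means every insert lands on the last
-- page of the clustered index, so the working set is essentially one page.
CREATE TABLE dbo.AppendOnly
(
    Id      bigint IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    Payload varchar(100) NOT NULL
);

-- Randomly-placed inserts: NEWID() scatters new rows across the whole index,
-- so every page is part of the working set and IO becomes random.
CREATE TABLE dbo.RandomInserts
(
    Id      uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    Payload varchar(100) NOT NULL
);
```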
You're in the middle. Your indexes have the structure (SomeValue, SequentialDateTime). The first component partially randomizes the sequentiality provided by the second. I guess there are quite a few possible values for SomeValue, so you have many randomly-placed insert points in your indexes.
You say that data is split into 10GB tables per week. That's a good starting point because the working set is now bounded by 10GB (disregarding any reads you might do). With 12GB of server memory it is unlikely, though, that all relevant pages can stay in memory.
If you could reduce the size of the weekly "partitions" or increase server memory a bit, you would probably be fine.
I'd expect inserts at the beginning of the week to be faster than at the end. You can test this theory on a dev server by running a benchmark with a fixed data size and gradually reducing server memory until you see performance tank.
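On a dev server you can shrink the buffer pool directly instead of pulling RAM (values are illustrative; don't do this in production):

```sql
-- Lower max server memory step by step and re-run the insert benchmark each
-- time until throughput drops off a cliff.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 8192;  -- then 6144, 4096, ...
RECONFIGURE;
```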
Even if all reads and writes fit into memory, you might still have random dirty-page flush IO. The only way to get rid of that is to write to co-located positions in your indexes. If you can convert your indexes to use (more) sequential keys at all, that would help a lot.
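A hedged sketch of what a "more sequential" key could look like, assuming your read queries can live with the flipped column order (names are made up):

```sql
-- Leading with the ever-increasing datetime keeps new rows co-located at the
-- end of the index, at the cost of seeks on SomeValue alone.
CREATE INDEX IX_DateTime_SomeValue
    ON dbo.Readings (ReadingDateTime, SomeValue);
```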
As a quick solution I'd add a buffering layer between the clients and the main table. Maybe accumulate 15min of writes into a staging table and periodically flush it. That takes away the load spikes and uses a more efficient plan to write to the big table.
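A minimal sketch of that buffering idea, with a staging table and a periodic set-based flush (all names are illustrative):

```sql
-- Clients write here instead of into the big table.
CREATE TABLE dbo.Readings_Staging
(
    SomeValue       int          NOT NULL,
    ReadingDateTime datetime2(0) NOT NULL,
    Payload         varchar(100) NOT NULL
);
GO

-- Run every ~15 minutes from a job: one set-based insert into the big table,
-- then empty the staging table. TABLOCKX blocks new client inserts for the
-- duration of the flush so no rows slip in between the SELECT and TRUNCATE.
BEGIN TRANSACTION;

INSERT INTO dbo.Readings (SomeValue, ReadingDateTime, Payload)
SELECT SomeValue, ReadingDateTime, Payload
FROM dbo.Readings_Staging WITH (TABLOCKX);

TRUNCATE TABLE dbo.Readings_Staging;

COMMIT TRANSACTION;
```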
Let's break this question up into a few parts.
Q: I need to insert 1mm rows a day. Is that a lot?
Not really. 1mm divided by 24 hours, divided by 60 minutes, divided by 60 seconds gives you about 12 inserts per second. For a rough frame of reference, it's not unusual to see 1,000 inserts per second on typical commodity servers with no tuning.
Granted, your load won't be perfectly averaged out like that - you'll have bursts of load - but I wouldn't make database platform decisions based on less than 10k-20k inserts per second. Any platform out there will work fairly well.
Q: How should I structure the data?
Zoom out - don't think tables, think databases. If you're going to be keeping this data permanently, and it's truly insert-only with no updates, then you probably want to start a new database for each time period. Your inserts may only go into one table in one database, but every year, create a new database (MyApp_2015) and seal the old 2014 data as read-only. You can stop backing it up (as long as you've still got one good final backup), stop doing index maintenance, statistics updates, etc.
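Sealing last year's database might look like this on SQL Server (database name and backup path are just examples):

```sql
-- Make the old year's data read-only, take one final backup, and drop it
-- from the regular backup and maintenance schedule.
ALTER DATABASE MyApp_2014 SET READ_ONLY WITH NO_WAIT;

BACKUP DATABASE MyApp_2014
    TO DISK = N'X:\Backups\MyApp_2014_final.bak'
    WITH COMPRESSION, CHECKSUM;
```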
The PHP will only ever have to know about the current database for inserts, making your design a lot easier. The archival process becomes a DBA task much later down the road as long as you go in knowing that there will be more than one database involved.
If you were doing more than 1,000 inserts per second sustained, and you wanted easier performance management, then I'd also suggest building sharding into the initial design regardless of the database platform. Don't get me wrong, any modern database can handle over 1,000 inserts per second, but designing sharding in now just gives you more flexibility later on. At 12 inserts per second, it's just not worth the design/testing hassle.
Q: How should I do reporting?
In an ideal world, reports would not be done against the live server. Run the reports against a restored or replicated copy of the database. This does two things: it reduces load on the live server, and it validates your backups, guaranteeing that you've got your valuable data elsewhere.
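For example, a nightly restore on the reporting server doubles as a backup test (file names and paths are hypothetical):

```sql
-- Restore last night's full backup under a reporting name, replacing the
-- previous day's copy, then point the reports at MyApp_Reporting.
RESTORE DATABASE MyApp_Reporting
    FROM DISK = N'X:\Backups\MyApp_full.bak'
    WITH MOVE N'MyApp'     TO N'D:\Data\MyApp_Reporting.mdf',
         MOVE N'MyApp_log' TO N'L:\Log\MyApp_Reporting.ldf',
         REPLACE;
```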
Best Answer
Solution to the query:
http://sqlfiddle.com/#!2/65fbf0/9