I may be overthinking this but:
I have an API, and I want to log calls to it for stats/debugging purposes. Ideally reads should be quick, but that's not essential (the data will primarily be read by me). Write speed, however, matters: I don't want to slow the API down.
We'll be starting off from scratch, but the API (and logging) will last years. The older the data gets, the less useful it is. At some point we might want to drop the old data entirely (say, dropping data older than three years).
We're running MySQL 5.5 at the moment.
What's the best way to structure the table? Partition it by year and just make up a load of future partitions?
```sql
PARTITION BY RANGE ( YEAR(date) ) (
    PARTITION p0 VALUES LESS THAN (2016),
    PARTITION p1 VALUES LESS THAN (2017),
    PARTITION p2 VALUES LESS THAN (2018),
    PARTITION p3 VALUES LESS THAN (2019),
    PARTITION p4 VALUES LESS THAN MAXVALUE
);
```
Or is there a way to create automatic rolling partitions?
Or do I just index on year?
Best Answer
Yes, there is. I would change the partition names to something more meaningful.
Let's say the table looks like this in the `mydb` database. To add a partition for the year 2019, you reorganize the catch-all MAXVALUE partition rather than appending after it.
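A sketch of what that table and the yearly rotation could look like; the database name (`mydb`), table name (`api_log`), columns, and partition-naming scheme are all illustrative assumptions, so adapt them to your schema. Note that with partitioning, every unique key must include the partitioning column, hence the composite primary key:

```sql
-- hypothetical log table; names and columns are illustrative
CREATE TABLE mydb.api_log (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
    `date`   DATETIME     NOT NULL,
    endpoint VARCHAR(255) NOT NULL,
    PRIMARY KEY (id, `date`)   -- unique keys must include the partitioning column
) ENGINE=InnoDB
PARTITION BY RANGE ( YEAR(`date`) ) (
    PARTITION p2015 VALUES LESS THAN (2016),
    PARTITION p2016 VALUES LESS THAN (2017),
    PARTITION p2017 VALUES LESS THAN (2018),
    PARTITION p2018 VALUES LESS THAN (2019),
    PARTITION pMAX  VALUES LESS THAN MAXVALUE
);

-- add a partition for 2019 by splitting the catch-all partition in two
ALTER TABLE mydb.api_log
    REORGANIZE PARTITION pMAX INTO (
        PARTITION p2019 VALUES LESS THAN (2020),
        PARTITION pMAX  VALUES LESS THAN MAXVALUE
    );
```

Keeping an empty `pMAX` at the end means rows from a year you forgot to provision still land somewhere instead of raising an error.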
To drop the oldest partition (this discards its rows instantly, with no row-by-row DELETE):
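Assuming the same hypothetical table and partition names (`mydb.api_log`, yearly partitions named `p<year>`), dropping the oldest year is a single statement:

```sql
-- removes the partition and all rows in it; effectively instantaneous
ALTER TABLE mydb.api_log DROP PARTITION p2015;
```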
Want a stored procedure to do the rotation for you? Here is the code; from there, just call it once a year.
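A minimal sketch of such a procedure, under the same assumptions as before (`mydb.api_log`, partitions `p<year>` plus a catch-all `pMAX`; the procedure name is made up). It builds the `ALTER TABLE` dynamically and splits `pMAX` into a named yearly partition plus a fresh `pMAX`:

```sql
DELIMITER $$

-- hypothetical names: mydb.api_log, yearly partitions p<year>, catch-all pMAX
CREATE PROCEDURE mydb.RotateYearlyPartition(IN new_year INT)
BEGIN
    -- split pMAX into a named partition for new_year plus a fresh pMAX
    SET @sql = CONCAT(
        'ALTER TABLE mydb.api_log REORGANIZE PARTITION pMAX INTO (',
        'PARTITION p', new_year, ' VALUES LESS THAN (', new_year + 1, '), ',
        'PARTITION pMAX VALUES LESS THAN MAXVALUE)');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END $$

DELIMITER ;
```

Then each January: `CALL mydb.RotateYearlyPartition(2019);` (you could also have the procedure take no argument and derive the year from `NOW()`).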
On January 1, you could back up the outgoing partition's data using mysqldump before rotating, like this:
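Something along these lines, again with hypothetical names (`mydb.api_log`, a `date` column, dumping everything before 2016); `mysqldump --where` restricts the dump to matching rows:

```bash
# dump only the rows belonging to the partition about to be dropped
mysqldump -u root -p --where='`date` < "2016-01-01"' mydb api_log > api_log_2015.sql
```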
Based on Vérace's suggestion, if you want to back the data up into a table instead, an ARCHIVE table compresses the data and carries no additional overhead for indexes. You can then move the `.frm` and `.ARZ` files of the ARCHIVE table to a designated backup volume.

GIVE IT A TRY !!!
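For reference, the ARCHIVE copy of the oldest year might be created like this (table and column names are hypothetical; ARCHIVE supports no indexes, which is fine for cold data):

```sql
-- compressed, index-free copy of the rows about to be dropped
CREATE TABLE mydb.api_log_2015 ENGINE=ARCHIVE
    SELECT * FROM mydb.api_log WHERE `date` < '2016-01-01';
```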