You have two "ranges" filtering the data. MySQL has no good way to do that.
First, let's check some numbers.
- How many rows in the result set?
- How many rows without the IN clause?
- How many rows without the time range?
- Is this IN list and this time range typical?
With those numbers, we may be able to decide to focus on filtering on the time range in preference to the IN list.
Meanwhile, let's get rid of the "midnight" bug... Your time range seems to run from the morning of May 1 through midnight at the start of Oct 31. Perhaps you meant to include all of Oct 31, not just the first millisecond of that day? Or did you mean to ask for a full 6 months? I suggest this coding style; it's less error-prone:
AND data_received_time >= 1000 * UNIX_TIMESTAMP('2015-05-01')
AND data_received_time <  1000 * UNIX_TIMESTAMP('2015-05-01' + INTERVAL 6 MONTH)
Back to the performance question...
If you are looking at a smallish fraction of the total time, then PARTITIONing may be useful for simulating a 2D index. (I mention this as Case #2 in http://mysql.rjweb.org/doc.php/partitionmaint )
If you break the table into, say, 20-40 partitions with `PARTITION BY RANGE(data_received_time)`, then "partition pruning" will first filter on the time range. Then your PRIMARY KEY, with `imei_no` first, filters on the other dimension.
I cannot say how much (if any) this partitioning will speed things up. I would expect small time ranges to benefit (percentage-wise) the most. A time range that hits all partitions will actually be a little slower than without partitioning. (Hence, my "check some numbers" questions.)
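The partition layout described above might be sketched like this. This is an assumption-laden sketch: the table name `tracking_data` is invented, the boundaries assume `data_received_time` holds milliseconds since the epoch, and MySQL additionally requires that every unique key (including the PRIMARY KEY) contain the partitioning column:

```sql
-- Sketch only: table name and partition boundaries are assumptions.
-- One partition per month; aim for roughly 20-40 partitions in total.
ALTER TABLE tracking_data
    PARTITION BY RANGE (data_received_time) (
        PARTITION p201505 VALUES LESS THAN (1000 * UNIX_TIMESTAMP('2015-06-01')),
        PARTITION p201506 VALUES LESS THAN (1000 * UNIX_TIMESTAMP('2015-07-01')),
        PARTITION p201507 VALUES LESS THAN (1000 * UNIX_TIMESTAMP('2015-08-01')),
        -- ... more monthly partitions here ...
        PARTITION pfuture VALUES LESS THAN MAXVALUE  -- catch-all for new data
    );
```

With this in place, a query constrained on `data_received_time` touches only the partitions overlapping the range, and within each partition the PRIMARY KEY (led by `imei_no`) handles the other dimension.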
Schema notes... (Smaller is faster)
`BIGINT` is 8 bytes. Can `odom` go beyond 16M miles, the limit for `MEDIUMINT UNSIGNED` (3 bytes)?
`INT(4)` -- the `(n)` means nothing; suggest `TINYINT` (1 byte) or `SMALLINT` (2 bytes).
Some of the `VARCHARs` could be 'normalized' to, say, a 2-byte `SMALLINT UNSIGNED`.
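As a sketch of the shrinkage, something like the following. All column names except `odom` are hypothetical, since the actual schema isn't shown:

```sql
-- Sketch only: table name and columns other than odom are assumptions.
ALTER TABLE tracking_data
    MODIFY odom      MEDIUMINT UNSIGNED,  -- 3 bytes; max 16,777,215 (vs 8 for BIGINT)
    MODIFY speed     TINYINT UNSIGNED,    -- 1 byte; the (4) in INT(4) saved nothing
    MODIFY status_id SMALLINT UNSIGNED;   -- 2-byte id pointing into a lookup table
```

The last line assumes the usual normalization pattern: move the repeated `VARCHAR` values into a small lookup table and store only its numeric id in the big table.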
Best Answer
The complete `mysqldump` command for a hot backup of a MySQL database using only InnoDB tables can be found here.
In addition to the Enterprise solution mentioned by @TrentLloyd, there is a F/LOSS solution available from Percona (who are big hitters in the MySQL world). I would recommend without hesitation any tool by Percona, and I would put this first on my list for testing on any MySQL system I work with.
What they offer is Percona XtraBackup, which has incremental backup functionality. The documentation is here.
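To give a flavour of the incremental workflow, a sketch along the lines of the XtraBackup documentation (the backup paths are assumptions, and preparing/restoring the backups is a further step covered in the docs):

```shell
# Sketch: a full base backup, then an incremental backup on top of it.
# Directory paths are assumptions.
xtrabackup --backup --target-dir=/data/backups/base
xtrabackup --backup --target-dir=/data/backups/inc1 \
           --incremental-basedir=/data/backups/base
```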
You might also be interested in LVM snapshots - it's not a technology with which I am currently familiar but it's on my to-do list (like a lot of stuff :-) ). If you're familiar with all that stuff about mounting logical volumes &c., this may be of interest to you.
You could also look at zmanda - (mentioned in "High Performance MySQL").
I would concur with @TrentLloyd that regularly taking a full backup (at such small time intervals) doesn't make sense - your system will not be able to do any real work with a constant resource drain like that.
For completeness, you might want to look at the options here: (Credit where credit is due, I "stole" this image from here :-) )
On a final note, there is an endless number of shell (mostly bash) scripts out there. A Google of "mysql incremental backup bash script" (or similar) will get you a shedload of these. One of the best (IMHO) and most popular of these is automysqlbackup, but you may like to try out any/all/some of the others.
With bash now being available natively on Windows (traditionally poor in the MySQL backup area), these might be worth investigating for users of that OS.
An interesting variant on automysqlbackup (disclaimer - I haven't used it) is a script that performs incremental backups, which is to be found here.
Another way to back up - a "hot" one - is to use replication; this process is explained in detail here. The MySQL documentation is available here.