MySQL – how to estimate an index rebuild on 140 million records

MySQL

My existing project has about 140 million records in the contentq table, and I plan to rotate the table just in case, so it never comes under load.
The plan: move data older than 2 months to another table, contentq_old, rename the table holding the latest 2 months of data to contentq1, and create a structure-only table contentq. Since only the most recently inserted records ever receive updates/deletes, I can take that risk. The application then points at the empty contentq table for its operations. Because almost 80 million records were deleted, I should rebuild the indexes on contentq1, which holds the latest 2 months of records. Once the rebuild is done, lock table contentq and run:

"insert into contentq1 select * from contentq';" . 

Then drop contentq and rename contentq1 to contentq.
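A rough end-to-end sketch of that rotation, assuming the table names above. MySQL's multi-table RENAME TABLE is atomic, so the application never sees a missing table; the final swap is the brief "parking" window mentioned below.

 -- Sketch only. Step 1: swap an empty stand-in into place atomically.
 CREATE TABLE contentq_new LIKE contentq;
 RENAME TABLE contentq TO contentq1, contentq_new TO contentq;

 -- ... archive rows older than 2 months from contentq1 into contentq_old,
 -- then rebuild the bloated indexes on contentq1 ...

 -- Step 2: copy over rows that arrived during the rebuild, then swap back.
 LOCK TABLES contentq WRITE, contentq1 WRITE;
 INSERT INTO contentq1 SELECT * FROM contentq;
 UNLOCK TABLES;
 RENAME TABLE contentq TO contentq_drop, contentq1 TO contentq;
 DROP TABLE contentq_drop;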

My problem: there can be no downtime for application hits, but I can park the application for a short while. I need to know how long the rebuild would take on the latest 2 months of data, which hold 50 million records at most. Below is the structure of the contentq table. But how can I estimate how long an index rebuild would take?

MySQL 5.0.77 on 16 GB RAM

 CREATE TABLE `contentq` (
 `idcontent` bigint(120) unsigned NOT NULL auto_increment,
 `keyvendor` varchar(32) default '0',
 `msisdn` varchar(16) default NULL,
 `cli` varchar(16) default NULL,
 `message` text,
 `udh` text,
 `priority` int(10) unsigned default '1',
 `recorddate` timestamp NOT NULL default CURRENT_TIMESTAMP,
 `callback_url` text,
 `transactionid` varchar(256) default NULL,
 `status` tinyint(4) default '0',
 PRIMARY KEY  (`idcontent`),
 KEY `PriorityIndex` (`status`,`priority`,`keyvendor`),
 KEY `NewIndex1` (`keyvendor`,`status`),
 KEY `idx_contentq_msisdn` (`msisdn`)
 ) ENGINE=InnoDB AUTO_INCREMENT=136583422 DEFAULT CHARSET=latin1 

Please advise. Thanks!!

Best Answer

Partition on day or week. That will give you the best possible way to delete old data (via DROP PARTITION). There is not (unless you have 5.6) a way to move a partition from one table to another; that would be even nicer for "archiving".
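A minimal sketch of what that could look like, with illustrative weekly boundaries. Two assumptions: partitioning needs MySQL 5.1+ (the poster's 5.0.77 lacks it), and every unique key must include the partitioning column, so the primary key would have to gain recorddate.

 -- Every unique key must contain the partitioning column:
 ALTER TABLE contentq
   DROP PRIMARY KEY, ADD PRIMARY KEY (idcontent, recorddate);

 -- TIMESTAMP columns are partitioned via UNIX_TIMESTAMP():
 ALTER TABLE contentq
   PARTITION BY RANGE (UNIX_TIMESTAMP(recorddate)) (
     PARTITION p20130107 VALUES LESS THAN (UNIX_TIMESTAMP('2013-01-07')),
     PARTITION p20130114 VALUES LESS THAN (UNIX_TIMESTAMP('2013-01-14')),
     PARTITION pmax      VALUES LESS THAN MAXVALUE
   );

 -- Purging an old week is then a near-instant metadata operation:
 ALTER TABLE contentq DROP PARTITION p20130107;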

The size of a table does not necessarily impact its performance. It depends on the queries. Queries that look only at "recent" data (at one end of the table) will probably perform nicely. Queries that need to scan the entire table, or need to randomly fetch rows (either from the data or from an index), will get slower and slower as the table grows. This is because the table becomes far bigger than what can be cached in RAM. "Count the disk hits."
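A quick way to do that counting at the coarsest level, using commands available even in 5.0: compare the InnoDB buffer pool size with the table's data + index footprint.

 SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
 SHOW TABLE STATUS LIKE 'contentq';  -- compare Data_length + Index_length to the pool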

Partition discussion: http://mysql.rjweb.org/doc.php/partitionmaint

Deleting lots of rows without bogging down: http://mysql.rjweb.org/doc.php/deletebig

Make sure your cache is the right size: http://mysql.rjweb.org/doc.php/memory

ALTERing a huge table: http://mysql.rjweb.org/doc.php/alterhuge
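As for the original "how long will it take" question, one rough approach (my sketch, not from the canned answers) is to time a rebuild on a known-size sample and extrapolate; a null ALTER forces InnoDB to rebuild the table and all of its secondary indexes.

 -- Illustrative names; assumes ~50M rows in contentq1, so a 1M-row sample
 -- scales by roughly 50x. Extrapolation turns optimistic once the sort no
 -- longer fits in the buffer pool, so treat the result as a lower bound.
 CREATE TABLE contentq_sample LIKE contentq1;
 INSERT INTO contentq_sample SELECT * FROM contentq1 LIMIT 1000000;
 ALTER TABLE contentq_sample ENGINE=InnoDB;  -- time this statement
 DROP TABLE contentq_sample;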

(Hehe... I guess your question hits more of my canned answers than any other question.)

InnoDB has a hard limit at 64TB (data+index) for table size.