You should consider partitioning the table for one big reason.
Every index on a giant table, even a single index, can generate a lot of CPU load and disk I/O just to perform index maintenance when executing INSERTs, UPDATEs, and DELETEs.
I wrote an earlier post back on October 7, 2011 on why Table Partitioning would be a big help. Here is one excerpt from my past post:
Partitioning of data should serve to group data that belong logically and
cohesively in the same class. Performance of searching each partition
need not be the main consideration as long as the data is correctly
grouped. Once you have achieved the logical partitioning, then
concentrate on search time. If you are just separating data by id
only, it is possible that many rows of data may never be accessed for
reads or writes. Now, that should be a major consideration: Locate all
ids most frequently accessed and partition by that. All less
frequently accessed ids should reside in one big archive table that is
still accessible by index lookup for that 'once in a blue moon' query.
You can read my entire post on this later.
To cut right to the chase, you need to research and find out what data is rarely used in your 10GB table. That data should be placed in an archive table that is readily accessible should you need ad hoc queries of a historical nature. Migrating that archival data out of the 10GB table, followed by OPTIMIZE TABLE
on the 10GB table, can result in a Working Set that runs SELECTs, INSERTs, UPDATEs, and DELETEs faster. Even DDL would go faster on a 2GB Working Set than on a 10GB table.
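The migration itself can be sketched in SQL. Assuming a hypothetical table `big_table` with an `accessed_at` column that identifies cold rows (your table and column names will differ):

```sql
-- Hypothetical names: big_table stands in for the 10GB table,
-- big_table_archive for the archive, accessed_at for the activity column
CREATE TABLE big_table_archive LIKE big_table;

-- Copy the rarely used rows into the archive
INSERT INTO big_table_archive
SELECT * FROM big_table
WHERE accessed_at < NOW() - INTERVAL 90 DAY;

-- Remove them from the working set
DELETE FROM big_table
WHERE accessed_at < NOW() - INTERVAL 90 DAY;

-- Reclaim the freed space and defragment the working set
OPTIMIZE TABLE big_table;
```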
UPDATE 2012-02-24 16:19 EDT
Two points to consider
- From your comment, it sounds like normalization is what you may need.
- You may need to migrate everything over 90 days old into an archive table, while still accessing the archive and working set at the same time. If your data is all MyISAM, I recommend using the MERGE storage engine. First, you create the MERGE table map once, uniting a working-set MyISAM table and an archive MyISAM table. You would keep data 90 days old or less in one MyISAM table and roll over any data over 90 days old into the archive. You would query the MERGE table map only.
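Here is a sketch of that setup; the table names and columns are hypothetical, and both underlying tables must be MyISAM with identical column and index definitions:

```sql
-- Working-set table: rows 90 days old or less
CREATE TABLE mydata_working (
    id INT NOT NULL,
    created DATETIME NOT NULL,
    payload VARCHAR(255),
    KEY idx_created (created)
) ENGINE=MyISAM;

-- Archive table: rows over 90 days old, structurally identical
CREATE TABLE mydata_archive LIKE mydata_working;

-- The MERGE table map that unites them; all queries go against mydata
-- INSERT_METHOD=FIRST routes new rows into mydata_working
CREATE TABLE mydata (
    id INT NOT NULL,
    created DATETIME NOT NULL,
    payload VARCHAR(255),
    KEY idx_created (created)
) ENGINE=MERGE UNION=(mydata_working, mydata_archive) INSERT_METHOD=FIRST;
```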
Here are two posts I made on how to use it:
Here is an additional post I made on tables with a lot of columns
Too many columns in MySQL
This looks oddly familiar.
I have seen this occur with one of my web hosting clients' DB servers. There was a particular table that crashed mysqld every single time you accessed it, even with SHOW CREATE TABLE.
The problem stems from a corrupt data dictionary. There is really no way to correct it. You could attempt to alter the tablespace_id within the .ibd file but the headache stems from locating the tablespace_id list internal to ibdata1.
Even if you create a MyISAM table with the same name in the same database as the original InnoDB table, you cannot convert it to InnoDB because the tablespace_id is already associated with the table name. This, of course, is a corrupted state. It's like having a pigeonhole in ibdata1 that you cannot patch up without some exploratory surgery.
You may have to mysqldump everything except the database that houses the corrupt table. You would then have to mysqldump every table in that database except the corrupt table. Remember, it is the data dictionary's view of the table that is screwed up, not necessarily the table's data.
The only sure way to clean everything up is to perform the mysqldumps as I just specified, shut down mysql, rm -rf all DB folders except /var/lib/mysql/mysql, delete ibdata1, delete ib_logfile0, delete ib_logfile1, start up mysql, and reload all mysqldumps. See my StackOverflow post about cleaning up your InnoDB infrastructure.
Since you are not using innodb_file_per_table, any tables in this corrupt state within ibdata1 are lost as casualties of war. My condolences.
For future reference, click here to see an artistic conception of InnoDB and its Internals.
Best Answer
In my answer to your previous question, I mentioned how the maximum size of an individual table is 2TB when you use innodb_file_per_table. There may be a way to surpass the table limit by doing just the opposite: leaving innodb_file_per_table disabled.
With innodb_file_per_table disabled, there are three ways you can go about this:
TECHNIQUE #1 : Use multiple system tablespaces in ext3
You could chain InnoDB tablespaces 2TB at a time. Start off with this
When ibdata1 is about to hit 2TB, add another ibdata file
When ibdata2 is about to hit 2TB, add another ibdata file
and so on...
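The chaining above can be sketched in my.cnf. The filenames and the 2000G figure are illustrative (each file must stay under ext3's 2TB limit); only the last file in the chain may carry autoextend:

```ini
# Stage 1: a single autoextending system tablespace
innodb_data_file_path = ibdata1:10M:autoextend

# Stage 2: when ibdata1 nears 2TB, fix its size and chain ibdata2
# (a fixed file's declared size must match its size on disk)
innodb_data_file_path = ibdata1:2000G;ibdata2:10M:autoextend

# Stage 3: same pattern again as ibdata2 fills
innodb_data_file_path = ibdata1:2000G;ibdata2:2000G;ibdata3:10M:autoextend
```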
UPDATE 2013-06-01 20:45 EDT
I just discovered something with reference to setting the max filesize for a system tablespace. Here is the situation:
Client has the following InnoDB files
and has this setting
MySQL was reporting table full errors.
I tried creating a new file, ibdata3, with a new max value. That value did not work.
I tried using new sizes for the max value:

- 2046G
- 2045G
- 2046000M

After trying different combinations for 15 minutes, none of these values worked. It looked like mysqld was never going to start up again. I decided to try the exact number of bytes for the filesize of ibdata2.
When I started mysql, LO AND BEHOLD, MySQL was up. ibdata3 was created at 10M and was soon growing in 8M chunks.

Moral of the story: at times, you may need to use the exact filesize for a system tablespace in innodb_data_file_path.
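A sketch of what that setting ends up looking like; the byte count shown is illustrative, and the real value must be the exact on-disk size of the capped file (e.g. from `ls -l ibdata2`):

```ini
# ibdata2's size is declared in exact bytes, matching the file on disk;
# ibdata3 is the new autoextending file being chained on
innodb_data_file_path = ibdata1:2000G;ibdata2:2196875759616;ibdata3:10M:autoextend
```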
TECHNIQUE #2 : RAW DISK PARTITION
According to MySQL 5.0 Certification Study Guide, Page 428
Just create a RAID10 disk of whatever size you need, and don't put any filesystem on it.
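A sketch of the raw-partition setup (device name and size are illustrative): on the first startup InnoDB initializes the partition, after which the keyword changes from `newraw` to `raw`:

```ini
# First start only: let InnoDB initialize the raw partition
innodb_data_file_path = /dev/sdb1:2000Gnewraw

# After the first clean startup, switch to raw for normal use
innodb_data_file_path = /dev/sdb1:2000Graw
```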
TECHNIQUE #3 : Use ext4 with TECHNIQUE #1
You could chain InnoDB tablespaces 16TB at a time. Start off with this
When ibdata1 is about to hit 16TB, add another ibdata file
When ibdata2 is about to hit 16TB, add another ibdata file
And so on...
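The config is the same chaining pattern as TECHNIQUE #1, only with ext4's larger per-file ceiling; the 16000G figure is illustrative:

```ini
# ext4 allows files up to 16TB, so each link in the chain can be ~8x larger
innodb_data_file_path = ibdata1:16000G;ibdata2:10M:autoextend
```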