The biggest thing most people forget about TRUNCATE TABLE is that it is DDL, not DML. In InnoDB, the metadata within ibdata1 contains a numbered list of InnoDB tables, and running TRUNCATE TABLE shifts the internal metadata id of the table. This happens because TRUNCATE TABLE effectively does the following.
Example: to truncate an InnoDB table called mydb.mytb:
USE mydb;
CREATE TABLE newtb LIKE mytb;
ALTER TABLE mytb RENAME oldtb;
ALTER TABLE newtb RENAME mytb;
DROP TABLE oldtb;
The new mytb thus ends up with a different internal metadata id.
When you copied the .ibd file somewhere else, that file carried the original internal metadata id inside it. Simply putting the .ibd file back does not reconcile its internal metadata id with the one recorded in ibdata1.
What you should have done is this:
Copy the .ibd file of the InnoDB table, then run:
ALTER TABLE tablename DISCARD TABLESPACE;
To bring it back later on, copy the .ibd file back into the datadir and then run:
ALTER TABLE tablename IMPORT TABLESPACE;
This would have preserved the internal metadata id.
Make sure the table's .frm file is always present.
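Putting those steps together, here is a minimal sketch of the whole round trip (the table name and file paths are hypothetical; your datadir may differ):

```sql
-- Backup: copy the .ibd out of the datadir FIRST, because
-- DISCARD TABLESPACE deletes the file:
--   shell> cp /var/lib/mysql/mydb/mytb.ibd /backup/
ALTER TABLE mydb.mytb DISCARD TABLESPACE;

-- Restore: copy the saved .ibd back into the datadir
-- (the table's .frm must still be in place), then:
--   shell> cp /backup/mytb.ibd /var/lib/mysql/mydb/
ALTER TABLE mydb.mytb IMPORT TABLESPACE;
```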
I once helped a client restore 30 InnoDB tables he hosed in the same manner. I had to use another DB server and play some games with adding and dropping InnoDB tables to hunt down the correct internal metadata id.
The client found this article: http://www.chriscalender.com/?tag=innodb-error-tablespace-id-in-file . We used it and it helped a great deal. I hope it helps you.
I'm not certain. I started out by trying to reproduce your problem, but the ALTER worked fine for me.
test > CREATE TABLE `bar` ( `content` text ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.02 sec)
test > INSERT INTO bar VALUES (0x8081828384858687898A8B8C8D8E8F909192939495969798999A9B9C9D9E9F);
Query OK, 1 row affected (0.00 sec)
test > ALTER TABLE bar CHANGE content content TEXT CHARACTER SET UTF8;
Query OK, 1 row affected (0.04 sec)
Records: 1 Duplicates: 0 Warnings: 0
test > select * from bar;
+---------------------------------+
| content |
+---------------------------------+
| ����������������������������� |
+---------------------------------+
1 row in set (0.00 sec)
test > set names utf8;
Query OK, 0 rows affected (0.00 sec)
test > select * from bar;
+---------------------------------------------------------------------------------+
| content |
+---------------------------------------------------------------------------------+
| €‚ƒ„…†‡‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ |
+---------------------------------------------------------------------------------+
1 row in set (0.00 sec)
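A quick way to confirm that the ALTER actually converted the stored bytes (rather than just relabeling the column) is to compare HEX(content) before and after the change. This is my own check, not part of the original session:

```sql
-- Before the ALTER, HEX(content) starts with 8081... (the raw
-- single bytes stored as latin1). After the ALTER, each byte has
-- been re-encoded as utf8, e.g. 0x80 (the Euro sign in MySQL's
-- cp1252-based latin1) becomes E282AC:
SELECT HEX(content) FROM bar;
```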
Here are my related character set settings after running set names utf8:
test > show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8 |
| character_set_connection | utf8 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | utf8 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
Edit
My character set settings before running set names utf8:
test > show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | latin1 |
| character_set_connection | latin1 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | latin1 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)
Version
test > select version();
+-------------------------+
| version() |
+-------------------------+
| 5.1.41-3ubuntu12.10-log |
+-------------------------+
1 row in set (0.00 sec)
Best Answer
Use MySQL Workbench. You can right-click a table and select "Send to SQL Editor" --> "Create Statement". This way no table "properties" will be forgotten (including CHARSET or COLLATE).
With this huge amount of data I'd recommend cleaning up either the table or the data structure you use (a good DBA comes in handy). If that is not possible: rename the current table (ALTER) and create a new one with the CREATE script you get from Workbench; you can also extend that query with the new field you need. This way you avoid the indexing etc. running record by record. The "update" to the table will still be slow (as the amount of data is huge), but this is the fastest way I can think of.
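In SQL, the rename-and-recreate approach might look like this (table and column names are hypothetical, and the CREATE statement would come from Workbench's "Create Statement" output):

```sql
-- Move the existing table out of the way
ALTER TABLE mytable RENAME mytable_old;

-- Recreate it from the Workbench script, extended with the
-- new field and the desired charset/collation
CREATE TABLE mytable (
  id INT NOT NULL PRIMARY KEY,
  content TEXT,
  new_field VARCHAR(64)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;

-- Copy the data across in one pass (this is the slow part)
INSERT INTO mytable (id, content)
  SELECT id, content FROM mytable_old;

-- Once the new table is verified:
-- DROP TABLE mytable_old;
```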
EDIT: Read this article to get details about the commands used in the above sample query ;)