You could use the INFORMATION_SCHEMA to sculpt the SQL commands for you.
SELECT CONCAT('ALTER TABLE ',db,'.',tb,' COMMENT ''',com,''';')
FROM (SELECT table_schema db,table_name tb,table_comment com
FROM information_schema.tables WHERE table_schema NOT IN
('information_schema','performance_schema','mysql')
AND table_comment <> '') A;
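Each row of the result set is a ready-to-run statement. For illustration only (the table name and comment below are hypothetical), a generated line would look like this:
ALTER TABLE mydb.mytb COMMENT 'Customer master data';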
Output this query to a text file and view the text file:
SQL="SELECT CONCAT('ALTER TABLE ',db,'.',tb,' COMMENT ''',com,''';')"
SQL="${SQL} FROM (SELECT table_schema db,table_name tb,table_comment com"
SQL="${SQL} FROM information_schema.tables WHERE table_schema NOT IN"
SQL="${SQL} ('inforamtion_schema','performance_schema','mysql')"
SQL="${SQL} AND table_comment <> '') A"
mysql -uroot -ppass -ANe"${SQL}" > AddCommentsToTables.sql
less AddCommentsToTables.sql
If the output looks like what you need, copy the script to the production server, log in to mysql, and run this to execute the script:
mysql> source AddCommentsToTables.sql
Give it a Try !!!
UPDATE 2014-04-04 16:34 EDT
It is a little harder to change each column definition's COMMENT. Why ???
According to the MySQL Documentation
When you use CHANGE or MODIFY, column_definition must include the data type and all attributes that should apply to the new column, other than index attributes such as PRIMARY KEY or UNIQUE. Attributes present in the original definition but not specified for the new definition are not carried forward. Suppose that a column col1 is defined as INT UNSIGNED DEFAULT 1 COMMENT 'my column' and you modify the column as follows:
ALTER TABLE t1 MODIFY col1 BIGINT;
The resulting column will be defined as BIGINT, but will not include the attributes UNSIGNED DEFAULT 1 COMMENT 'my column'. To retain them, the statement should be:
ALTER TABLE t1 MODIFY col1 BIGINT UNSIGNED DEFAULT 1 COMMENT 'my column';
Warning This conversion may result in alteration of data. For example,
if you shorten a string column, values may be truncated. To prevent
the operation from succeeding if conversions to the new data type
would result in loss of data, enable strict SQL mode before using
ALTER TABLE (see Section 5.1.7, “Server SQL Modes”).
You would have to spell out the entire column definition from scratch each time. Doing that is not only tedious but risky, because ALTER TABLE copies the data into a fresh temp table and renames the temp table into place afterwards.
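If you still want to generate those statements in bulk, the same INFORMATION_SCHEMA trick works for columns. This is a rough sketch only, assuming you substitute your own database, table, and comment text for mydb, mytb, and the placeholder; it does not cover every attribute (character sets, ON UPDATE clauses, and non-string defaults such as CURRENT_TIMESTAMP need special handling):
SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,
' MODIFY ',column_name,' ',column_type,
IF(is_nullable='NO',' NOT NULL',''),
IF(column_default IS NOT NULL,CONCAT(' DEFAULT ''',column_default,''''),''),
IF(extra<>'',CONCAT(' ',extra),''),
' COMMENT ''put the new column comment here'';')
FROM information_schema.columns
WHERE table_schema='mydb' AND table_name='mytb';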
ADDITIONAL WARNING : StackOverflow has this : https://stackoverflow.com/questions/2162420/alter-mysql-table-to-add-comments-on-columns
One of the comments on the accepted answer says : This solution can break the auto increments. – workdreamer Jul 31 '13 at 12:10
That makes sense because of what I just said: the table gets copied, so there will be some auto_increment movement.
Suppose you have mydb.mytb and you want to create mydb.mytbcopy. I have five (5) approaches to doing this copy:
APPROACH #1
In the mysql client, run the following:
USE mydb
CREATE TABLE mytbcopy LIKE mytb;
INSERT INTO mytbcopy SELECT * FROM mytb;
APPROACH #2
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysql ${MYSQL_CONN} -ANe"CREATE DATABASE IF NOT EXISTS test"
mysqldump ${MYSQL_CONN} mydb mytb | mysql ${MYSQL_CONN} -Dtest
mysql ${MYSQL_CONN} -ANe"ALTER TABLE test.mytb RENAME mydb.mytbcopy"
APPROACH #3
DUMPFILE=/some/path/tabledata.sql
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysql ${MYSQL_CONN} -ANe"CREATE DATABASE IF NOT EXISTS test"
mysqldump ${MYSQL_CONN} mydb mytb > ${DUMPFILE}
mysql ${MYSQL_CONN} -Dtest < ${DUMPFILE}
rm -f ${DUMPFILE}
mysql ${MYSQL_CONN} -ANe"ALTER TABLE test.mytb RENAME mydb.mytbcopy"
APPROACH #4
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysqldump ${MYSQL_CONN} mydb mytb | sed 's/mytb/mytbcopy/' | mysql ${MYSQL_CONN} -Dmydb
APPROACH #5
DUMPFILE=/some/path/tabledata.sql
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysqldump ${MYSQL_CONN} mydb mytb | sed 's/mytb/mytbcopy/' > ${DUMPFILE}
mysql ${MYSQL_CONN} -Dmydb < ${DUMPFILE}
rm -f ${DUMPFILE}
ANALYSIS
- APPROACH #1 is the easiest in terms of steps, but it requires pushing 40 million rows through a single transaction. This will be the most taxing on the InnoDB storage engine (see the chunked-copy sketch after this list for one way to break the load up).
- For the other approaches, mysqldump will send the 40 million rows in chunks of thousands of rows
- APPROACH #2 and APPROACH #3 mysqldump the table into the test database. After the table is created in the test database, it is renamed and moved into the original database
- APPROACH #4 and APPROACH #5 rename the table using sed against the stream coming from mysqldump as it echoes the INSERT commands
- APPROACH #2 and APPROACH #4 use pipes instead of an output file
- APPROACH #3 and APPROACH #5 use an output file for subsequent reload
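If a single 40-million-row transaction is a concern, here is a minimal sketch of a chunked copy for APPROACH #1. It assumes (hypothetically) that mytb has an AUTO_INCREMENT primary key named id without huge gaps; adjust the column name, chunk size, and credentials for your environment:
MYSQL_CONN="-uroot -prootpassword -Dmydb"
CHUNK=100000
MAXID=$(mysql ${MYSQL_CONN} -ANe"SELECT IFNULL(MAX(id),0) FROM mytb")
START=0
while [ ${START} -lt ${MAXID} ]
do
    # each pass commits its own transaction of at most CHUNK rows
    mysql ${MYSQL_CONN} -ANe"INSERT INTO mytbcopy SELECT * FROM mytb WHERE id > ${START} AND id <= ${START}+${CHUNK}"
    START=$((START + CHUNK))
done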
If you want to copy mydb.mytb to an already existing table mydb.mytbcopy, and the two tables have identical structures:
APPROACH #6
INSERT INTO mytbcopy SELECT * FROM mytb;
Like APPROACH #1, APPROACH #6 would have a single transaction of 40 million rows (the chunked sketch above applies here as well)
APPROACH #7
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
mysqldump ${MYSQL_CONN} -t mydb mytb | sed 's/mytb/mytbcopy/' | mysql ${MYSQL_CONN} -Dmydb
This approach does not drop the table; the -t option suppresses the CREATE TABLE statement, so mysqldump simply generates the INSERTs (mytbcopy must already exist)
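Whichever approach you pick, you can sanity-check the result afterwards; if the copy is complete, the two checksums should match:
CHECKSUM TABLE mydb.mytb, mydb.mytbcopy;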
EPILOGUE
I cannot give you a time estimate since I do not know the makeup of the DB server, the table structure, the index layout, and so forth.
GIVE IT A TRY !!!