You can use the sys.indexes view to list indexes. Join it to sys.tables, sys.columns and sys.index_columns to get the information needed to build a dynamic DROP INDEX statement. The simple SELECT below generates the drop statement; use the WHERE clause to filter by table and column. For example, to change Table1.Column1, filter on those names to get the right DROP statement:
use [testdb]
go
declare @sqlDropIndex NVARCHAR(1000)
select @sqlDropIndex = 'DROP INDEX ' + idx.name + ' ON ' + tbl.name
from sys.indexes idx
inner join sys.tables tbl
    on idx.object_id = tbl.object_id
inner join sys.index_columns idxCol
    on idx.object_id = idxCol.object_id and idx.index_id = idxCol.index_id
inner join sys.columns col
    on idxCol.object_id = col.object_id and idxCol.column_id = col.column_id
where idx.type <> 0 and
tbl.name = 'MyTableName' and
col.name = 'MyColumnName'
group by idx.name, tbl.name
order by idx.name desc
print @sqlDropIndex
--exec sp_executesql @sqlDropIndex
go
Hope this helps.
If you are a little adventurous, you can take matters into your own hands by performing the ALTER TABLE in stages yourself. Suppose the table you want to change is called WorkingTable. You could perform the change in stages like this:
#
# Script 1
# Alter table structure of a single column of a large table
#
CREATE TABLE WorkingTableNew LIKE WorkingTable;
ALTER TABLE WorkingTableNew MODIFY BigColumn VARCHAR(50);
INSERT INTO WorkingTableNew SELECT * FROM WorkingTable;
ALTER TABLE WorkingTable RENAME WorkingTableOld;
ALTER TABLE WorkingTableNew RENAME WorkingTable;
DROP TABLE WorkingTableOld;
You can perform this on all slaves. But what about the master? How do you keep the change from replicating to the slaves? Simple: don't write the SQL to the master's binary logs. Just disable binary logging in the session before running the ALTER TABLE steps:
#
# Script 2
# Alter table structure of a single column of a large table
# while preventing it from replicating to slaves
#
SET SQL_LOG_BIN = 0;
CREATE TABLE WorkingTableNew LIKE WorkingTable;
ALTER TABLE WorkingTableNew MODIFY BigColumn VARCHAR(50);
INSERT INTO WorkingTableNew SELECT SQL_NO_CACHE * FROM WorkingTable;
ALTER TABLE WorkingTable RENAME WorkingTableOld;
ALTER TABLE WorkingTableNew RENAME WorkingTable;
DROP TABLE WorkingTableOld;
But wait! What about any new data that comes in while these commands are running? Renaming the table at the very start does the trick, since nothing can insert into the old table once it no longer answers to its original name. Let's alter the script a little to rename first:
#
# Script 3
# Alter table structure of a single column of a large table
# while preventing it from replicating to slaves
# and preventing new data from entering into the old table
#
SET SQL_LOG_BIN = 0;
ALTER TABLE WorkingTable RENAME WorkingTableOld;
CREATE TABLE WorkingTableNew LIKE WorkingTableOld;
ALTER TABLE WorkingTableNew MODIFY BigColumn VARCHAR(50);
INSERT INTO WorkingTableNew SELECT SQL_NO_CACHE * FROM WorkingTableOld;
ALTER TABLE WorkingTableNew RENAME WorkingTable;
DROP TABLE WorkingTableOld;
- Script 1 can be executed on any slave that does not have binary logs enabled
- Script 2 can be executed on any slave that does have binary logs enabled
- Script 3 can be executed on a master or anywhere else
Give it a Try !!!
Best Answer
You should only need to drop and recreate the affected nonclustered (NC) index.
That said, on a test server you can measure the difference between doing this and your strategy of dropping all the indexes.
I reckon dropping and recreating the single index will be quicker overall, because otherwise the data gets shifted around twice: once for the char-to-varchar conversion, and again to rebuild the clustered index. On top of that you have the NC creation overhead.
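As a sketch, the single-index approach looks like this (table, column and index names here are hypothetical, and the target type and NOT NULL constraint are assumptions; substitute your own):

```sql
-- Drop only the nonclustered index that covers the column being changed.
DROP INDEX IX_MyTable_MyColumn ON dbo.MyTable;

-- Change the column type; the clustered index (the table data itself)
-- is rewritten once here.
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn VARCHAR(50) NOT NULL;

-- Recreate just the affected nonclustered index.
CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn ON dbo.MyTable (MyColumn);
```

The clustered index and all unaffected NC indexes stay in place, so the data is only moved once.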