Yes, having a column in multiple unique keys is sometimes perfectly reasonable. In the case you gave above I'm not sure I would bother, since the ProductId key is unique regardless. But let's say that you have a product table like this:
```
ProductVendor       PK
ProductCode         PK
ProductDescription
...
```
In this particular case, ProductVendor and ProductCode are together unique and form your primary key and clustered index. However, there is an additional business rule that ProductDescription must also be unique per ProductVendor. In that case you could create a unique non-clustered index on (ProductVendor, ProductDescription).
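A minimal sketch of that schema and the extra uniqueness rule might look like this (table and column names are the illustrative ones from above; adjust the data types to your actual data):

```sql
CREATE TABLE dbo.Product
(
    ProductVendor      int           NOT NULL,
    ProductCode        varchar(20)   NOT NULL,
    ProductDescription nvarchar(200) NOT NULL,
    -- The composite primary key becomes the clustered index by default
    CONSTRAINT PK_Product PRIMARY KEY CLUSTERED (ProductVendor, ProductCode)
);

-- Enforce the business rule: a description may appear only once per vendor
CREATE UNIQUE NONCLUSTERED INDEX UQ_Product_Vendor_Description
    ON dbo.Product (ProductVendor, ProductDescription);
```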
A nonclustered index that has the same key(s)* as the clustered index may still be useful, because the nonclustered index will usually be smaller and denser. Remember, a clustered index includes all in-row data, so it is normally the widest (least dense) index possible.
* The same key columns, in the same sequence, sorted the same way (asc/desc).
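As an example of such a duplicate, the pair of definitions below uses a hypothetical Orders table; the nonclustered copy carries only the key column plus the row locator, so it packs far more rows per page:

```sql
-- Clustered index: every leaf page holds the entire row
CREATE UNIQUE CLUSTERED INDEX CX_Orders
    ON dbo.Orders (OrderId);

-- Same key, same order, same direction, but the leaf pages
-- hold only OrderId, so this index is much smaller and denser
CREATE UNIQUE NONCLUSTERED INDEX IX_Orders_OrderId
    ON dbo.Orders (OrderId);
```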
For a singleton seek (a seek using an equality predicate into a unique index), the main difference is that the index page(s) needed to find that single record are presumably more likely to already be in memory. All things being equal, a nonclustered index spread over 100 pages has a better chance than a clustered index stored on 10,000 pages.
For any other kind of seek except a singleton, there will be a scanning component as well. The scanning portion will also benefit from the greater density of the nonclustered index (more rows per page). Even if the pages must come in from persistent storage, reading fewer pages is faster.
You should check the system documentation, ask the person that added the index, and/or look for code comments to understand why this index exists. You also need to check the complete index definitions carefully, including compression, and look at index maintenance scripts. There could be other special reasons for having the nonclustered index (e.g. ability to rebuild online).
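To review the complete definition of every index on a table, you can query the catalog views; this is a sketch, so swap in your own table name:

```sql
-- List each index on dbo.Product with its key and included columns
SELECT i.name        AS index_name,
       i.type_desc,
       i.is_unique,
       c.name        AS column_name,
       ic.key_ordinal,
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID(N'dbo.Product')
ORDER BY i.index_id, ic.key_ordinal;
```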
Best Answer
You cannot add or remove a key column of an existing non-clustered index by altering it in place; the index has to be dropped and recreated (for example with CREATE INDEX ... WITH (DROP_EXISTING = ON)). Because the new index is built from scratch, no separate rebuild is required afterwards.
The larger your tables, the greater the impact of executing DDL statements against indexes, both in the consumption of server resources and in the slowing of other queries that are executing at the same time.
On Enterprise Edition, you can choose to use CREATE INDEX ... WITH (DROP_EXISTING = ON, ONLINE = ON) (for certain index changes, at least) to minimize interference with other queries: they can continue using the original index while the new one is building, at the cost of the additional disk space and activity required to maintain both. And on SQL Server 2019 you can use RESUMABLE, so you can pause the creation of the new index during high activity, or spread the operation across multiple maintenance windows.
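Putting those options together might look like the following sketch (illustrative index and table names; ONLINE requires Enterprise Edition, RESUMABLE requires SQL Server 2019+ and ONLINE = ON):

```sql
-- Replace an index definition while readers keep using the old copy
CREATE UNIQUE NONCLUSTERED INDEX UQ_Product_Vendor_Description
    ON dbo.Product (ProductVendor, ProductDescription)
    WITH (DROP_EXISTING = ON, ONLINE = ON);

-- SQL Server 2019+: a resumable online index build
CREATE NONCLUSTERED INDEX IX_Product_Description
    ON dbo.Product (ProductDescription)
    WITH (ONLINE = ON, RESUMABLE = ON);

-- From another session, pause the running build during high activity,
-- then pick it up again in the next maintenance window
ALTER INDEX IX_Product_Description ON dbo.Product PAUSE;
ALTER INDEX IX_Product_Description ON dbo.Product RESUME;
```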