You are right that your example query would not use that index.
The query planner will consider using an index if:
- all the fields contained in it are referenced in the query, or
- a leading subset of its fields (starting from the first) is referenced.
It will not be able to use an index whose first field is not referenced by the query.
So for your example:
SELECT [id], [name], [customerId], [dateCreated]
FROM Representatives WHERE customerId=1
ORDER BY dateCreated
it would consider indexes such as:
[customerId]
[customerId], [dateCreated]
[customerId], [dateCreated], [name]
but not:
[name], [customerId], [dateCreated]
If it found both [customerId] and [customerId], [dateCreated], [name], its decision to prefer one over the other would depend on the index statistics, which are based on estimates of the distribution of data in the fields. If [customerId], [dateCreated] were defined, it should prefer that over the other two unless you give a specific index hint to the contrary.
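To make that concrete, the compound index discussed above could be defined as follows (the index name here is just an illustrative suggestion, not something from your schema), along with the table-hint syntax for forcing it if you ever need to override the planner:

```sql
-- Compound index matching the WHERE and ORDER BY of the example query.
-- The index name is illustrative; pick whatever fits your conventions.
CREATE NONCLUSTERED INDEX IX_Representatives_CustomerId_DateCreated
    ON dbo.Representatives (customerId, dateCreated);

-- Normally the planner picks the index itself; a table hint overrides it:
SELECT id, name, customerId, dateCreated
FROM dbo.Representatives WITH (INDEX(IX_Representatives_CustomerId_DateCreated))
WHERE customerId = 1
ORDER BY dateCreated;
```

Hints like this are best kept for diagnosis; in normal operation let the planner choose.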
In my experience it is not uncommon to see one index defined for every field, though this is rarely optimal: the extra work needed to update the indexes on insert/update, and the extra space needed to store them, is wasted when half of them may never get used. But unless your DB sees write-heavy loads, performance is not going to suffer badly even with the excess indexes.
Specific indexes for frequent queries that would otherwise be slow due to table or index scans are generally a good idea, though don't overdo it, as you could be exchanging one performance issue for another. If you do define [customerId], [dateCreated] as an index, remember that the query planner can use it for queries that would otherwise use an index on just [customerId]. While just [customerId] would be slightly more efficient than the compound index, that saving may be outweighed by having two indexes competing for space in RAM instead of one (though if your entire normal working set fits easily into RAM, this extra memory competition may not be an issue).
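To illustrate that reuse, both of the following query shapes can seek on a single compound (customerId, dateCreated) index, which is why a separate single-column index on customerId is often redundant (column names taken from the example above):

```sql
-- Served by an index on (customerId, dateCreated):
-- seek on customerId, and rows come back already ordered by dateCreated.
SELECT id, name, customerId, dateCreated
FROM Representatives
WHERE customerId = 1
ORDER BY dateCreated;

-- Also served by the same index, via its leading column alone:
SELECT id, name, customerId, dateCreated
FROM Representatives
WHERE customerId = 1;
```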
You cannot loop through objects the way your code attempts to. Here's the working code:
Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
$ServerName = 'xyzabc123'
$DatabaseName = 'test1'
$TableName = 'main'
$TableSchemaName = 'dbo'
$Server = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
$Server.ConnectionContext.SqlExecutionModes = [Microsoft.SqlServer.Management.Common.SqlExecutionModes]::CaptureSql
$Database = $Server.Databases[$DatabaseName]
$Table = $Database.Tables[$TableName, $TableSchemaName]
$TablesToArchive = @($Table)
$Database.Tables | Select -ExpandProperty ForeignKeys | % {
    if ($_.ReferencedTable -eq $TableName -and $_.ReferencedTableSchema -eq $TableSchemaName) {
        $TablesToArchive += $_.Parent
    }
}
# $TablesToArchive | % { $_.Name }
$ObjectsToArchive = ($TablesToArchive | Select -ExpandProperty Indexes) + ($TablesToArchive | Select -ExpandProperty ForeignKeys)
# $ObjectsToArchive | % { $_.Name }
$ObjectsToArchive | % {
    $_.Rename($_.Name + '_archive')
}
$RenameCommands = $Server.ConnectionContext.CapturedSql.Text
# $RenameCommands
For example, you were expecting $Database.Tables.ForeignKeys to give you a combined ForeignKeys collection made up of the ForeignKeys of each Tables object. If you had added debug commands such as
# $ObjectsToArchive | % { $_.Name }
you would have been able to track it down very quickly.
Best Answer
Instead of dropping the indexes, I'd recommend disabling them and then rebuilding them (a disabled index is brought back online with ALTER INDEX ... REBUILD). Preferably drive this from a cursor over the table's nonclustered indexes (skipping the clustered index, which cannot be disabled without making the table inaccessible), so that if an index is added to or removed from the archive table the stored procedure doesn't need to be modified.
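The cursor-driven disable/rebuild could be sketched like this (the dbo.Archive table name is a placeholder for your archive table; adapt as needed):

```sql
-- Disable every nonclustered index on the archive table. The clustered
-- index is deliberately excluded: disabling it takes the table offline.
DECLARE @sql nvarchar(max);
DECLARE index_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON dbo.Archive DISABLE;'
    FROM sys.indexes AS i
    WHERE i.object_id = OBJECT_ID(N'dbo.Archive')
      AND i.type_desc = N'NONCLUSTERED';

OPEN index_cur;
FETCH NEXT FROM index_cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM index_cur INTO @sql;
END
CLOSE index_cur;
DEALLOCATE index_cur;

-- ... perform the bulk insert into dbo.Archive here ...

-- Rebuilding re-enables the disabled indexes in one statement.
ALTER INDEX ALL ON dbo.Archive REBUILD;
```

Because the cursor reads sys.indexes at run time, new indexes on the archive table are picked up automatically without editing the procedure.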
This is pretty standard when doing large data warehouse loads, which is basically what you are doing.
UPDATE: Scrub that. With the very small workload you are describing, just insert the records without dropping and re-adding the indexes. If you were moving hundreds of millions of rows, then removing the indexes would be worth it.