There are queries on the internet that find "bad" indexes.
Their logic is quite simple, though:
if write count > read count, the index is bad.
Here is one example query:
SELECT OBJECT_NAME(s.object_id) AS 'Table Name',
i.name AS 'Index Name',
i.index_id,
user_updates AS 'Total Writes',
user_seeks + user_scans + user_lookups AS 'Total Reads',
user_updates - ( user_seeks + user_scans + user_lookups ) AS 'Difference'
FROM sys.dm_db_index_usage_stats AS s WITH ( NOLOCK )
INNER JOIN sys.indexes AS i WITH ( NOLOCK ) ON s.object_id = i.object_id
AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
AND s.database_id = DB_ID()
AND user_updates > ( user_seeks + user_scans + user_lookups )
AND i.index_id > 1
ORDER BY [Difference] DESC,
[Total Writes] DESC,
[Total Reads] ASC ;
-- Index Read/Write stats for a single table
SELECT OBJECT_NAME(s.object_id) AS 'TableName',
i.name AS 'IndexName',
i.index_id,
SUM(user_seeks) AS 'User Seeks',
SUM(user_scans) AS 'User Scans',
SUM(user_lookups) AS 'User Lookups',
SUM(user_seeks + user_scans + user_lookups) AS 'Total Reads',
SUM(user_updates) AS 'Total Writes'
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i ON s.object_id = i.object_id
AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
AND s.database_id = DB_ID()
AND OBJECT_NAME(s.object_id) = 'AccountTransaction'
GROUP BY OBJECT_NAME(s.object_id),
i.name,
i.index_id
ORDER BY [Total Writes] DESC,
[Total Reads] DESC ;
However, is the issue really this simple? Can we call every index with writes > reads a bad index?
Is there better logic, expressed as a SQL query, that can identify bad indexes?
Let's ignore indexes that ensure the integrity of data, such as unique constraints.
Let's also assume that the indexes are both updated and used every minute.
I am using SQL Server 2014, and here are the results this query generates.
Thanks.
Best Answer
Absolutely not. Unfortunately this can vary a lot between applications, so while there are many "rules of thumb" like the one you state, none of them is universally applicable, apart from "an index that is never used is a waste".
If that index is for enforcing integrity (i.e. it is a unique index or is supporting a foreign key) then that fact would generally trump a performance requirement. Also, the writes could be mostly at times where it doesn't matter (a bulk update in the middle of the night perhaps) or it could be really important that those reads are as fast as possible so slow writes are a price worth paying.
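As a rough illustration (a sketch, not a definitive rule), the usage-stats query can at least be made to skip the integrity-enforcing indexes you mention, by filtering on the metadata columns in sys.indexes. Note that an index which merely supports a foreign key is not flagged in sys.indexes, so that case still needs checking by hand:

```sql
-- Sketch: same write-vs-read comparison, but excluding unique indexes,
-- primary keys, and unique constraints, which exist for data integrity
-- rather than for read performance.
SELECT OBJECT_NAME(s.object_id) AS [Table Name],
       i.name AS [Index Name],
       s.user_updates AS [Total Writes],
       s.user_seeks + s.user_scans + s.user_lookups AS [Total Reads]
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
        ON s.object_id = i.object_id
       AND s.index_id = i.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND s.database_id = DB_ID()
  AND i.index_id > 1              -- skip the clustered index / heap
  AND i.is_unique = 0             -- skip unique indexes
  AND i.is_primary_key = 0       -- skip primary keys
  AND i.is_unique_constraint = 0 -- skip unique constraints
  AND s.user_updates > s.user_seeks + s.user_scans + s.user_lookups
ORDER BY s.user_updates - ( s.user_seeks + s.user_scans + s.user_lookups ) DESC;
```

Even with these filters, the output is a list of candidates to investigate, not a list of indexes to drop.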
When thinking about these matters always consider the time period that any figures you are looking at cover. It could be that an index looks under-used this week, but next week is the start of a new reporting period so the users are performing a different set of actions and the index suddenly looks far more useful.
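One concrete caveat along these lines: the counters in sys.dm_db_index_usage_stats are cleared whenever the SQL Server instance restarts (and, in some versions, when an index is rebuilt), so the figures only describe the period since startup. Before drawing conclusions, it is worth checking how long that window actually is:

```sql
-- How long has the instance been accumulating these usage counters?
SELECT sqlserver_start_time,
       DATEDIFF(DAY, sqlserver_start_time, SYSDATETIME()) AS [Days Of Stats]
FROM sys.dm_os_sys_info;
```

If the instance was restarted yesterday, a week-old reporting index will naturally show zero reads.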
Never use rules-of-thumb as hard-and-fast rules to live by. Use them as suggestions and consider the results carefully with your knowledge of your specific application(s)'s needs.
Not without writing a whole book, unless you have more specific questions; this is a very wide area. Many whole books have in fact been written on the subject: http://use-the-index-luke.com/ is a good place to start.