SQL Server – Why might a table scan have a huge I/O cost?

sql-server, sql-server-2008-r2

I know that table scans are bad, and that every table should have a primary key and clustered index, but I'm wondering what might cause the spectacularly bad performance that I'm seeing below in SQL Server 2008 R2.

For testing's sake, I am running SELECT * FROM myTable on a table/heap with 1,960 rows. The table has the following columns: (int, datetime, int, int, float, smallint, uniqueidentifier). There are no indexes, and the table has had a long history of inserts and deletes.
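Here is a minimal sketch of how such a heap might be set up (hypothetical names and fill data; the real table's insert/delete history is unknown, and whether the leftover space shows up as ghost records or empty heap pages depends on versioning settings and how the deletes were done):

-- A heap: no primary key, no clustered index, no indexes at all.
CREATE TABLE dbo.myTable (
    id      int,
    created datetime,
    a       int,
    b       int,
    c       float,
    d       smallint,
    g       uniqueidentifier
);

-- Simulate a long history: insert a few hundred thousand rows...
INSERT INTO dbo.myTable
SELECT TOP (500000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       GETDATE(), 1, 2, 3.0, 4, NEWID()
FROM sys.all_objects AS o1 CROSS JOIN sys.all_objects AS o2;

-- ...then delete almost all of them, leaving ~1,960 behind.
DELETE FROM dbo.myTable WHERE id > 1960;

SELECT * FROM dbo.myTable;   -- the test query from the question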

I get the following stats from the table scan in the query plan:

<RelOp 
  AvgRowSize="57" 
  EstimateCPU="0.002313" 
  EstimateIO="74.4157" 
  EstimateRebinds="0" 
  EstimateRewinds="0" 
  EstimateRows="1960" 
  LogicalOp="Table Scan" 
  NodeId="0" 
  Parallel="false" 
  PhysicalOp="Table Scan" 
  EstimatedTotalSubtreeCost="74.418"
  TableCardinality="1960">
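As an aside, the EstimateIO value by itself already implies a huge page count. Assuming the commonly cited (but undocumented) costing constants of roughly 0.003125 for the first page and 0.00074074 per additional page, the estimate can be inverted:

-- Back-of-envelope inversion of EstimateIO, assuming the usual costing
-- constants (0.003125 first page + 0.00074074 per additional page).
SELECT 1 + (74.4157 - 0.003125) / 0.00074074 AS implied_page_count;
-- Returns roughly 100,458 pages, matching the logical reads shown below.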

And from SET STATISTICS IO ON:

Table 'myTable'. Scan count 1, logical reads 100458, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
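For reference, a sketch of how these numbers can be gathered and sanity-checked (dbo.myTable is the question's example name; sys.dm_db_partition_stats reports the pages actually allocated to the heap):

SET STATISTICS IO ON;
SELECT * FROM dbo.myTable;
SET STATISTICS IO OFF;

-- Compare the logical reads above with the heap's allocated pages:
SELECT row_count,
       in_row_data_page_count,
       used_page_count
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.myTable')
  AND index_id = 0;   -- index_id 0 = heap

If roughly 100,000 pages are allocated for 1,960 rows, the scan cost is explained by page count, not row count: a table scan must read every allocated page.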

What factors could cause such a small number of small rows to require such a massive number of reads, and why does the query plan predict exactly that?

Best Answer

As per the comments:

See SQL Server “empty table” is slow after deleting all (12 million) records – Remus Rusanu

The answer to this question is: Ghosted Records, subsequently confirmed by the original poster; posting here for completeness. When rows are deleted, SQL Server initially just marks them as ghosts and leaves them physically in place; a background ghost cleanup task removes them later. Until that happens (and until emptied heap pages are deallocated), a scan must still read every allocated page, which is how 1,960 live rows end up costing over 100,000 logical reads.
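To confirm the diagnosis, sys.dm_db_index_physical_stats can report ghost record counts (a sketch using the question's dbo.myTable; note that ghost_record_count is only populated in SAMPLED or DETAILED mode, not the default LIMITED mode):

-- Check ghost records and page count for the heap.
SELECT index_type_desc,
       page_count,
       record_count,
       ghost_record_count,
       version_ghost_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.myTable'), NULL, NULL, 'DETAILED');

-- Rebuilding the heap removes ghosts and deallocates stranded empty
-- pages (ALTER TABLE ... REBUILD is available from SQL Server 2008 on):
ALTER TABLE dbo.myTable REBUILD;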