SQL Server – I’ve created a view for a report and it now takes 30 minutes to run – any ideas?

Tags: join; performance; query-performance; sql-server

View the actual execution plan here

The plan is showing what happens when a stored procedure is called to get the payments that are associated with a client.

Unfortunately there is no other way to get this data than by inner joining a whole bunch of tables together.

Object5.Index11 is on the Payments table (nothing sensitive, just a record from the gateway to confirm the transaction). This is 457 MB of data according to SQL Sentry Plan Explorer. The clustered index seeks that follow on from the inner joins each touch roughly the same amount of data, and each accounts for somewhere between 10% and 13% of the execution cost of this query.

My questions are:

  • What is the best approach to getting data from multiple tables? I've joined them together via inner joins and dropped them into a view.
  • Should the view have a clustered index? Can this only be achieved with schemabinding? If so, will this prevent field level changes to all tables within the view?
  • Would having better estimated row values help SQL Server run this query more efficiently? Or is the approach so fubar that this just is not efficient by design?
  • Is there anything I can do in the short term to boost the performance of this query?
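On the indexed-view question above: yes, a clustered index on a view does require `WITH SCHEMABINDING`, and schemabinding only blocks changes to the columns the view actually references (other columns in those tables can still be altered). A minimal sketch, with hypothetical table and column names since the real schema is anonymised in the plan:

```sql
-- Hypothetical sketch of an indexed view. Requirements: SCHEMABINDING,
-- two-part (schema.table) names, and a UNIQUE CLUSTERED index first.
CREATE VIEW dbo.vClientPayments
WITH SCHEMABINDING
AS
SELECT c.ClientId,
       p.PaymentId,
       p.Amount
FROM dbo.Clients  AS c
INNER JOIN dbo.Payments AS p
    ON p.ClientId = c.ClientId;
GO

-- Materialising the view: the unique clustered index persists the result set.
CREATE UNIQUE CLUSTERED INDEX IX_vClientPayments
    ON dbo.vClientPayments (PaymentId);
```

Note that indexed views carry further restrictions (no outer joins, no subqueries, various SET options must be on), so whether this fits depends on the actual view definition.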

Any help would be appreciated.

I've provided responses to each of the questions below:

  • SET STATISTICS IO ON output here
  • All of the tables have a clustered index on them on the primary key
  • The query consistently performs poorly. However, modifying the query to join on the tables themselves rather than on views significantly reduces the stored proc's run time (from 20 minutes to 6 minutes), but it is still not fast enough for production
  • The query used to run in 1–2 minutes. Now the fastest run is 6 minutes
  • Index11 is on the Payments table; the execution plan's actual rows show 1,570,473 rows. This is the same table that produced over 4 million logical reads and 1.4 million scans – that does not seem normal to me or conducive to performance, but I could be wrong …
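For anyone reproducing the measurements above: the per-table read counts come from `SET STATISTICS IO ON`, and a large gap between estimated and actual rows is often a stale-statistics symptom. A hedged sketch (the procedure name and parameter are placeholders, not the real ones):

```sql
-- Capture logical reads and scan counts per table for one execution.
SET STATISTICS IO ON;
EXEC dbo.usp_GetClientPayments @ClientId = 123;  -- hypothetical proc name
SET STATISTICS IO OFF;

-- If estimated rows diverge badly from actual rows in the plan,
-- refreshing statistics on the hot table is a cheap first step.
UPDATE STATISTICS dbo.Payments WITH FULLSCAN;
```

If refreshed statistics change the estimates but not the plan, the 1.4 million scans are more likely a join-order or nested-loops issue than a statistics issue.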

Best Answer

Unable to get to the root cause of the performance issue, I've backed out the stored procedure changes. Those changes introduced mostly left joins on views. I'm not sure whether the 1.4M-row table scan was having an impact on performance.

With the code changes reverted, reports are now running in 3–4 minutes, including the time to write the file to disk. This code will likely not see the light of day again.