I'm having performance issues when I run a script. I've been reading about PRINT and its performance problems, but I can't figure out how to reduce the impact.
The algorithm is O(n^2). The main loop goes through around 50k elements. For each of them it performs some SELECTs to retrieve data from different tables, and finally it runs two sub-loops:

    Main loop (n)
        -- some selects...
        nested loop (n^2)
        nested loop (n^2)
There are several PRINTs inside, not only for debugging purposes but also because of requirements.
Also, when I wrap the whole script in a transaction, performance drops dramatically.
Three questions:
- Is there a way to reduce the impact of PRINT?
- Which performs better: a cursor, or a loop with a counter?
- Is such low performance normal inside a transaction?
Code:
Server: Windows Server 2008 R2 (2 GB RAM)
DBMS: SQL Server 2008 R2
Client: Windows 7 running SQL Server Management Studio 2012
The requirement is to show the incorrect rows in the output. In production it will be a small list, at most 80 lines; the script doesn't produce a PRINT for each of the 50k elements. That's why I can't explain the performance problem, given that PRINTs are relatively uncommon in the output. My current test case produces between 500 and 800 lines. The script will run only once, in 3 environments.
Best Answer
You could declare a table variable, insert into it rather than printing, and dump it in toto at the end of your process.
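A minimal sketch of that idea (the table, column names, and message text here are just illustrative):

```sql
-- Collect messages in a table variable instead of PRINTing them.
DECLARE @Output TABLE (
    Id  int IDENTITY(1,1) PRIMARY KEY,  -- preserves chronological order
    Msg nvarchar(2048) NOT NULL
);

-- Inside the loops, replace each PRINT with an insert:
INSERT INTO @Output (Msg)
VALUES (N'row 123 is invalid');         -- illustrative message

-- Once, at the very end of the process:
SELECT Msg
FROM @Output
ORDER BY Id;
```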
The identity column will keep things in chronological order. A table variable's contents survive a rollback, so @tables are preferable to #temp tables or permanent tables for this purpose.
I haven't profiled this suggestion; I throw it out there for what it's worth.