People at my organization use SQL queries within SSRS reports, for instance:
SELECT name, age
FROM egTable
This query is run against the database, and then a FILTER within SSRS is used to get rid of unwanted rows. For instance, there is a filter in SSRS on AGE, set to something like Age = 11.
My proposition was that this is bad: it means we are querying the ENTIRE table and then just discarding unwanted rows from the GIANT result set. Instead, we should introduce a parameter, @age, and write the query as follows:
SELECT name, age
FROM egTable
WHERE age = @age
Am I correct in saying that the first method pulls the entire table, whereas mine is far more efficient because it only returns a small result set?
How can I verify/prove this?
Best Answer
TL;DR
Yes, you are correct. You can prove it by showing the result of the ExecutionLog view in the SSRS database.
Longer Answer
I created two identical reports based on the AdventureWorks database, one with a filter on the dataset for City and one with a parameter in the query for City.
Report 1
The query for this report is:
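The screenshot is not shown here, but a minimal sketch of such an unfiltered dataset query, assuming AdventureWorks' Person.Address table and an example column list, would be:

-- Sketch only: no WHERE clause, so every row in the table is returned to SSRS
SELECT AddressLine1, City, PostalCode   -- hypothetical column list
FROM Person.Address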
And the filter is added like this:
The result of this report is:
Report 2
The query for this report is:
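As a sketch, assuming the same hypothetical Person.Address columns as above, the parameterised version pushes the City restriction into the WHERE clause so the database only returns the matching rows:

SELECT AddressLine1, City, PostalCode   -- same hypothetical column list
FROM Person.Address
WHERE City = @City                      -- @City is the SSRS report parameter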
There are no filters on this dataset.
The result of this report is:
You can see that the same data is shown, but the behaviour is different since the user has to put in a value for the parameter (marked in yellow).
If this is undesired, it can be overcome by adding a default value to the parameter and setting its visibility to Hidden, like this:
and this:
Proof of efficiency
The efficiency of both reports can be proven by querying the ExecutionLog table and views like this:
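A minimal sketch of such a query, run against the SSRS catalog (ReportServer) database, could look like the following; the ItemPath filter is only an example, and the columns are those exposed by the ExecutionLog3 view:

-- Sketch: compare row counts and timings for the two test reports
SELECT ItemPath,
       TimeDataRetrieval,
       TimeProcessing,
       TimeRendering,
       [RowCount]
FROM dbo.ExecutionLog3
WHERE ItemPath LIKE '%City%'            -- example pattern matching the two reports
ORDER BY TimeStart DESC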
For these two reports, this returns:
So the second method not only fetched far fewer rows (26 versus 19614) but also consumed less time processing and rendering the report.