The latest version of the RML Utilities (09.04.0051 at the time of writing) supports Extended Events tracing, so maybe you could work with your DBAs to do some controlled tracing. A screenshot from the help file:
They do have a point, however, because the default trace template provided with RML includes statement-level events (SQL:StmtStarting, SQL:StmtCompleted, SP:StmtStarting, SP:StmtCompleted), i.e. every statement in every proc. This could easily bring a server to its knees under the wrong conditions, e.g. a scalar function in a SELECT statement running across a big table.
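To make that concrete, here is the kind of query that can make statement-level tracing hurt (the function and table names are made up for illustration):

```sql
-- If dbo.SomeScalarFn is a T-SQL scalar UDF, it executes once per row; with
-- SP:StmtStarting / SP:StmtCompleted being traced, every statement inside the
-- function fires trace events for every row, so a big table can generate
-- millions of events from a single query
SELECT o.OrderID,
       dbo.SomeScalarFn(o.Amount) AS Converted
FROM dbo.BigOrders AS o;
```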
Extended Events (XE) was somewhat limited in versions prior to SQL Server 2012, but it is more lightweight than SQL Trace. You still need to be careful, as SQL statement-level events (sqlserver.sp_statement_starting, sqlserver.sp_statement_completed, sqlserver.sql_statement_starting, sqlserver.sql_statement_completed) will still incur overhead for very intensive workloads. Execution plan capture (sqlserver.query_post_execution_showplan) can also be intensive. Consider using the XETraceCaptureDefLightWeight.sql template, which is provided with RML and includes fewer events.
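As a rough sketch of what a lightweight session looks like (the session name and file path here are invented, and the actual RML template will differ), capturing only batch- and RPC-level events:

```sql
-- Hypothetical lightweight XE session: batch- and RPC-level events only,
-- deliberately avoiding the per-statement and showplan events
CREATE EVENT SESSION [RML_Lightweight] ON SERVER
ADD EVENT sqlserver.rpc_completed,
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file (SET filename = N'C:\Traces\RML_Lightweight.xel')
WITH (MAX_MEMORY = 16 MB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);

ALTER EVENT SESSION [RML_Lightweight] ON SERVER STATE = START;
```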
In summary I suggest moving to XE and using the lightweight XE definition provided with the latest versions of RML.
I'm curious about what the plans look like even when performance is bad. Hopefully you can tune those plans, and then make sure the dynamic SQL produces the right queries for those conditions.
Some things you may want to look for though:
Make sure your view is set up to simplify out joins that are not needed. Watch http://bit.ly/Simplification to get a picture of how.
Consider using additional predicates that could be used by the QO if particular plan shapes are used. Watch http://bit.ly/Sargability at the section about inverted predicates. But also, consider things such as looking out for year(thedate) = @y
so you can change it to thedate >= datefromparts(@y,1,1) and thedate < datefromparts(@y+1,1,1)
instead (or as well, in case you have an indexed computed column).
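For example (the table and column names are assumed):

```sql
-- Non-sargable: wrapping the column in a function hides it from the optimizer,
-- so an index on thedate can't be seeked
SELECT OrderID FROM dbo.Orders WHERE year(thedate) = @y;

-- Sargable: a half-open range on the bare column can use an index seek
SELECT OrderID
FROM dbo.Orders
WHERE thedate >= datefromparts(@y, 1, 1)
  AND thedate <  datefromparts(@y + 1, 1, 1);
```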
Additional IS NOT NULL predicates can be useful for helping some joins to be simplified out.
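A sketch of the idea, assuming a hypothetical view joining Orders to Customers on a nullable foreign key column:

```sql
-- If Orders.CustomerID is nullable, the optimizer can't always remove the
-- Customers join even when no Customers columns are selected. Stating
-- explicitly that CustomerID has a value can help the join be simplified out
-- (assuming a trusted foreign key on CustomerID).
SELECT v.OrderID, v.thedate
FROM dbo.vOrdersWithCustomers AS v
WHERE v.CustomerID IS NOT NULL;
```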
Check that you're avoiding OR and IN where possible, or at least considering what the impact is. You may want to change someval in (1,2,4)
(which is three seeks within one operator) to someval >= 1 and someval <= 4 and someval != 3
(one seek over a range) in some situations, but be wary of the impact of that on composite indexes. Read http://sqlperformance.com/2016/06/sql-plan/whats-actually-going-on-with-that-seek for more on that.
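Side by side, with assumed names:

```sql
-- Three seeks within one index seek operator, one per listed value
SELECT someval, othercol
FROM dbo.SomeTable
WHERE someval IN (1, 2, 4);

-- One seek over the range 1..4, with the gap excluded by a residual predicate
SELECT someval, othercol
FROM dbo.SomeTable
WHERE someval >= 1 AND someval <= 4 AND someval != 3;
```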
When a filter isn't used, make sure you're leaving it out of the query entirely rather than using a catch-all predicate. Otherwise you may find you're asking it to check values you really don't care about. Hopefully you're doing this already.
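A sketch of building the string that way (parameter and table names invented), appending each predicate only when its filter was actually supplied:

```sql
-- Instead of a catch-all like (@name IS NULL OR name = @name),
-- include each predicate only when the filter is actually in use
DECLARE @sql nvarchar(max) =
    N'SELECT id, name, thedate FROM dbo.SomeTable WHERE 1 = 1';

IF @name IS NOT NULL
    SET @sql += N' AND name = @name';
IF @fromDate IS NOT NULL
    SET @sql += N' AND thedate >= @fromDate';

-- Unused parameters are simply never referenced by the final string
EXEC sys.sp_executesql @sql,
    N'@name nvarchar(100), @fromDate date',
    @name = @name, @fromDate = @fromDate;
```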
Best Answer
There is nothing wrong with using dynamic SQL if you must. In fact, in some circumstances it is the only option you have. The recommendation against it exists because, yes, it can lead to SQL injection if your input is not sanitized, and yes, using dynamic SQL in modules that get called often can be detrimental to performance.
I don't think there is a concrete example per se, but I would say this: try to achieve what you are after using regular queries and statements first; only once you have exhausted all other avenues should you do it dynamically. Just remember that a dynamic SQL string executes in a separate scope from the module that calls it, and ownership chaining does not extend into it, so you may encounter permissions issues where you are not expecting any.
If you're worried about performance, test it. If you're worried about security, validate your input. There is no right or wrong; use your best judgement based on the information and tools you have available at the time.
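On the security point, passing values as typed parameters to sp_executesql, rather than concatenating them into the string, sidesteps most injection risk. A minimal sketch (table and column names assumed):

```sql
DECLARE @city nvarchar(60) = N'Redmond';  -- imagine this came from user input
DECLARE @sql nvarchar(max) =
    N'SELECT CustomerID, CompanyName FROM dbo.Customers WHERE City = @city';

-- @city is passed as a parameter, never spliced into the SQL text, so a
-- malicious value like ''; DROP TABLE ... arrives as data, not as code
EXEC sys.sp_executesql @sql, N'@city nvarchar(60)', @city = @city;
```

It also gives the dynamic statement a stable text, which helps plan reuse compared with embedding literal values.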