"How bad is it?" depends on the degree to which you are suffering now or could suffer with increased workload in the future.
One major point of suffering with plan cache pollution is too many single-use plans bloating your plan cache, leading to inefficient cache usage.
Another point of suffering could be high compilations/second: in an environment with a heavy workload and a lot of activity, there is a real cost to compiling the same statements over and over.
You can see the impact of compilations in perfmon (the SQLServer:SQL Statistics object, SQL Compilations/sec counter). This can look like CPU pressure. To your applications, it can look like increased query duration, with each execution waiting on a needless compile.
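If you'd rather query it than watch perfmon, the same counters are exposed through sys.dm_os_performance_counters. This is a sketch; note that these counters are cumulative since instance startup, so sample twice and take the difference to get a per-second rate.

```sql
-- Compilations relative to batch requests: a high ratio suggests poor plan reuse.
-- Values are cumulative since startup; diff two samples for a rate.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'SQL Compilations/sec', N'Batch Requests/sec');
```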
You can see the memory bloat's impact on the plan cache with this query, borrowed from Glenn Berry's diagnostic scripts. How big is your CACHESTORE_SQLCP (ad hoc SQL plans) memory clerk?
SELECT TOP(10) [type] AS [Memory Clerk Type],
SUM(pages_kb)/1024 AS [Memory Usage (MB)]
FROM sys.dm_os_memory_clerks WITH (NOLOCK)
GROUP BY [type]
ORDER BY SUM(pages_kb) DESC OPTION (RECOMPILE);
The query used in the question to identify the number of plans helps here as well.
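The question's exact query isn't reproduced here, but a common variant for counting single-use ad hoc plans and the memory they hold looks something like this (a sketch of the kind of query referenced, not necessarily the exact one):

```sql
-- Count single-use ad hoc plans and the cache memory they occupy.
SELECT objtype, cacheobjtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype = N'Adhoc'
GROUP BY objtype, cacheobjtype;
```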
Is This Ever a Good Thing?
There are cases where this could be a good thing, but they are rare: basically, if you were suffering from parameter sniffing gone bad. In a nutshell, when the data touched varies widely from execution to execution based on parameters, a plan compiled for one set of parameter values may be excellent for that execution but poor for others. My guess is that any parameter sniffing pain you'd face is smaller than the implications of poor plan reuse.
What Can You Do About It?
Optimize For Ad Hoc Workloads can certainly help with the memory implications, since only a stub of the plan is stored in cache on first execution; the full plan isn't stored until the same batch is executed a second time.
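Enabling it is a one-time instance-level change (it's an advanced option, so 'show advanced options' must be on first):

```sql
-- Enable the instance-wide 'optimize for ad hoc workloads' setting.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

The change takes effect immediately and does not require a restart, but as noted below, test it against your workload first.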
Forced Parameterization can also help here. It makes SQL Server parameterize the literal values in ad hoc statements, which can address both the cache bloat and the cost of recompiling each variation.
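Unlike the ad hoc setting, forced parameterization is set per database. A minimal sketch, where YourDatabase is a placeholder name:

```sql
-- Switch the database from simple (the default) to forced parameterization.
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;
```

It can be reverted with SET PARAMETERIZATION SIMPLE; be aware that forcing parameterization everywhere can itself introduce parameter sniffing issues, so it is a trade-off, not a free win.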
Fix The Queries
Ideally, you shouldn't have to resort to these options. Instead, be more strict in your database development, encourage plan reuse, and consider stored procedures for all of their benefits, heading the problem off that way. Forced parameterization and optimize for ad hoc workloads are good mitigations, but the best solution is always aimed at the root cause.
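To illustrate what "fix the queries" means in practice, here is a hypothetical example (dbo.Orders and @CustomerID are made-up names): building the statement with an embedded literal compiles a new plan per value, while the parameterized sp_executesql form reuses a single plan.

```sql
-- Ad hoc pattern: each distinct literal caches its own single-use plan.
-- EXEC (N'SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42;');

-- Parameterized pattern: one plan, reused for every parameter value.
EXEC sp_executesql
     N'SELECT OrderID FROM dbo.Orders WHERE CustomerID = @CustomerID;',
     N'@CustomerID int',
     @CustomerID = 42;
```

Stored procedures give you the same reuse plus the other benefits mentioned above.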
There is an excellent resource here that talks about the dangers of plan cache pollution and some things you can do about it; I'd recommend a read. It is written for SQL Server 2012, but the concepts and solutions still apply.
According to this, the second run of an ad hoc batch removes the stub (which was used only once) and creates and caches the full plan (using it for the first time).
I also haven't seen many references to refcounts other than it being a count of references by cache objects. Adhoc Compiled Plan objects can still have a refcount of 1, so it's not exclusively caused by the persistence of the plan.
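You can observe the stub-then-plan behaviour directly in the cache; in sys.dm_exec_cached_plans, first-execution stubs appear with cacheobjtype = 'Compiled Plan Stub' and full plans with 'Compiled Plan'. A sketch:

```sql
-- Stubs ('Compiled Plan Stub') vs. full plans ('Compiled Plan') for ad hoc batches,
-- with their use counts, reference counts, and sizes.
SELECT cacheobjtype, usecounts, refcounts, size_in_bytes
FROM sys.dm_exec_cached_plans
WHERE objtype = N'Adhoc'
ORDER BY size_in_bytes DESC;
```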
Best Answer
The SQL Server development team works on the principle of least surprise, so SQL Server generally ships new features disabled in the interest of preserving the behaviour of previous versions.
Yes, optimize for ad hoc workloads is great at reducing plan cache bloat, but always test it first!
Kalen Delaney tells an interesting anecdote: she asked one of her Microsoft engineer friends whether there would be circumstances where it would not be appropriate to enable this. He came back several days later to say: imagine an application that has a LOT of different queries, and each query runs exactly twice in total. Then it might be inappropriate. Suffice it to say, there are not many apps like that!
If the majority of your queries are executed more than once (not exactly twice), enabling it buys little: most plans would be reused anyway, and every query pays one extra compile on its second run. The general rule is to turn it on when there are many one-time-use ad hoc queries hitting the database; even so, there are still not many apps like that.