SQLServer:Databases\Log Flush Wait Time counter, it is staying fairly constant between 600 ms and 900 ms with an average of 750 ms
A 750 ms wait time per flush is huge. It means every transaction must wait on average 750 ms to commit. If you don't use explicit transactions for writes, it means each write statement (INSERT/UPDATE/DELETE) must wait 750 ms to complete. However, this does not corroborate the other statement: 'PhysicalDisk\Avg. Disk sec/Write ... For the Log Files volume, it stays almost steady at 8 ms.' One explanation is that you have log growths (you do, the counters say so), which skew the log flush wait times, but that would only explain averages over long periods.
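If you want to see which database is racking up the flush waits, here is a quick sketch against sys.dm_os_performance_counters (the object_name prefix varies on named instances, so the LIKE below is deliberately loose):

SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%:Databases%'
       AND counter_name IN ('Log Flush Wait Time', 'Log Flush Waits/sec')
ORDER BY counter_name, cntr_value DESC;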
SQLServer:Databases\Log Growths, I'm at 325
Figure out which database is growing and resize its log appropriately. You should not incur growth events in production; they are extremely expensive. The counter has instances per database, so you can see exactly which database is the culprit.
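A rough sketch of the workflow (the database name, logical log file name, and target size below are placeholders to replace with your own):

-- See current log size and percent used for every database:
DBCC SQLPERF(LOGSPACE);

-- Pre-size the offender's log once, instead of letting it grow 325 times:
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, SIZE = 8192MB);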
PhysicalDisk\Avg. Disk sec/Write ... For the Log Files volume, it stays almost steady at 8 ms. Is it normal for the Log Files to have consistent writes with no break?
Yes. A transaction cannot commit until its log records have been written to disk. As long as you have transactions, you will have log disk writes.
PhysicalDisk\Avg. Disk sec/Write counters: For the TempDB volume, I get dramatic spikes every few minutes that are upwards of 100 ms.
It would be good to know why. Can you correlate it with a specific activity that causes the spike? Can you correlate it with database checkpoints occurring?
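If you want hard evidence, here is a minimal Extended Events sketch that timestamps every checkpoint so you can line them up against the PerfMon spikes (the session name and file target name are placeholders):

CREATE EVENT SESSION checkpoint_watch ON SERVER
ADD EVENT sqlserver.checkpoint_begin,
ADD EVENT sqlserver.checkpoint_end
ADD TARGET package0.event_file (SET filename = N'checkpoint_watch');

ALTER EVENT SESSION checkpoint_watch ON SERVER STATE = START;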
Memory\Pages Input/sec counter, the values fluctuate dramatically and have an average of 6 but have many spikes well upwards of 100 (some as high as 400).
So you're paging. Why? Insufficient memory? Another offending process on the server host? A virtualized, overcommitted environment? Only you can tell. Does the swapping I/O overlap with any of the I/O above (same physical path, i.e. same disks)? If so, some of the SQL timers may be influenced by these paging spikes.
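A quick way to ask SQL Server what the OS thinks about memory (requires VIEW SERVER STATE):

SELECT total_physical_memory_kb / 1024 AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM   sys.dm_os_sys_memory;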
First, patch: make sure you're on 2012 Service Pack 1 Cumulative Update 10 or newer. In SQL 2014, Microsoft changed TempDB to be less eager to write to disk, and they awesomely backported it to 2012 SP1 CU10, so that can alleviate a lot of TempDB write pressure.
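You can check which build you're currently on straight from the instance:

SELECT SERVERPROPERTY('ProductVersion') AS build,
       SERVERPROPERTY('ProductLevel') AS service_pack;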
Second, get exact numbers on your latency. Check sys.dm_io_virtual_file_stats to see the average write stall for your TempDB files. My favorite way to do this is either:
sp_BlitzFirst @ExpertMode = 1, @Seconds = 30 /* Checks for 30 seconds */
sp_BlitzFirst @SinceStartup = 1 /* Shows data since startup, but includes overnights */
Look at the file stats section, and focus on the physical writes. The SinceStartup data can be a little misleading since it also includes times when CHECKDB is running, and that can really hammer your TempDB.
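If you'd rather query the DMV directly, here's a rough sketch for TempDB write stalls (the same since-startup caveat applies, since the DMV accumulates from the last restart):

SELECT mf.name AS file_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       vfs.io_stall_write_ms * 1.0 / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM   sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN   tempdb.sys.database_files AS mf
       ON vfs.file_id = mf.file_id;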
If your average write latency is over 3 ms, then yes, you might have solid state storage in your SAN, but it's still not fast.
Consider local SSDs for TempDB first. Good local SSDs (like Intel's PCIe NVMe cards, which are under $2k USD especially at the sizes you're describing) have extremely low latency, lower than you can achieve with shared storage. However, under virtualization, this comes with a drawback: you can't vMotion the guest from one host to another to react to load or to hardware issues.
Consider a RAM drive last. There are two big gotchas with this approach:
First, if you really do have heavy TempDB write activity, the change rate on memory may be so high that you won't be able to vMotion the guest from one host to another without everyone noticing. During vMotion, you have to copy the contents of RAM from one host to another. If it's really changing that fast, faster than you can copy it over your vMotion network, you can run into issues (especially if this box is involved with mirroring, AGs, or a failover cluster).
Second, RAM drives are software. In the load testing that I've done, I haven't been all that impressed with their speed under really heavy TempDB activity. If it's so heavy that an enterprise-grade SSD can't keep up, then you're going to be taxing the RAM drive software, too. You'll really want to load test this heavily before going live - try things like lots of simultaneous index rebuilds on different indexes, all using sort-in-tempdb.
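For example, one way to generate that kind of pressure is to run rebuilds like the sketch below from several sessions at once (the index and table names are placeholders for your own large indexes):

ALTER INDEX IX_YourBigIndex ON dbo.YourBigTable
REBUILD WITH (SORT_IN_TEMPDB = ON);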
Best Answer
To directly answer your question, yes. Auditing always has overhead. If you have to write down what you've done and call extra code that wouldn't normally be called, then by definition it'll have overhead.
Whether that overhead is appreciable enough to cause you an issue, we can't say; it depends entirely on your environment and workload.
There will be a difference in performance between a third-party solution using detours/IAT/EAT hijacking/whatever, a trace, and an audit (Extended Events). Your mileage will vary with all of these (not all supported) choices.
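For reference, the built-in route (SQL Server Audit, which rides on Extended Events) looks something like this minimal sketch; the audit name and file path are placeholders, and you'd still need a database audit specification to capture the writes:

USE master;

CREATE SERVER AUDIT Audit_Writes
TO FILE (FILEPATH = N'D:\AuditLogs\');

ALTER SERVER AUDIT Audit_Writes WITH (STATE = ON);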
If, for some reason, you keep the audit records in a table in that database, then yes... otherwise, no.