Shrinking a data file (DBCC SHRINKFILE) is a single-threaded operation that reuses a small memory buffer,
so the Ninja hardware gains no edge from its extra memory and 80 cores.
Your local PC, however, has the benefit of lower I/O latency (local disk, i.e. not having to make multiple trips to the SAN).
After doing a lot more investigation, this is what I found.
The issue was the number of virtual log files (VLFs) in my physical log file.
Fixing it produced a significant performance gain: the WRITELOG average wait time dropped from 14.681 to 0.0126.
There is a job scheduled to rebuild indexes every night, and it generates 36 GB of log. Until a few weeks ago, a weekly job someone had added was shrinking the log file down to 500 MB.
Since this is a very busy server, the log file would then grow back, and it was set to auto-grow by 3 percent. Each growth added more VLFs. As a result, my 35.5 GB log file had 1,600 VLFs.
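To spot this kind of percent-based auto-growth setting before it causes trouble, you can query sys.database_files in the database in question (a sketch; `size` and `growth` are reported in 8 KB pages unless `is_percent_growth` is set):

```sql
-- Show each file's size and auto-growth setting for the current database.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' %'              -- percent growth
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'  -- fixed growth
       END AS growth_setting
FROM sys.database_files;
```

A percent value in `growth_setting` on a busy log file is the warning sign described above.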
To resolve the issue I did the following:
-- To get the number of VLFs in a physical Log file.
DECLARE @count INT, @DBname SYSNAME;
------------------------------------------------------------
SET @DBname = N'TEST_DB';
------------------------------------------------------------
DBCC LOGINFO(@DBname);
SET @count = @@ROWCOUNT;
SELECT @DBname AS DatabaseName, @count AS Total_VLFs;
GO
The above query returned the database name and the number of VLFs (1,600).
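As an aside, on SQL Server 2016 SP2 and later you can get the same count from the documented DMV sys.dm_db_log_info instead of the undocumented DBCC LOGINFO (a sketch):

```sql
-- Count the VLFs in TEST_DB's log using the documented DMV
-- (available from SQL Server 2016 SP2 onward).
SELECT COUNT(*) AS Total_VLFs
FROM sys.dm_db_log_info(DB_ID(N'TEST_DB'));
```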
To reduce the number of VLFs this is what I did:
USE TEST_DB;
GO
DBCC SHRINKFILE (TEST_DB_log, 0, TRUNCATEONLY)
GO
DBCC SHRINKFILE (TEST_DB_log , 0)
GO
ALTER DATABASE TEST_DB
MODIFY FILE ( NAME = TEST_DB_log
, SIZE = 36864MB
, FILEGROWTH = 4096MB)
GO
This reduced the number of VLFs from 1,600 to 20. The end users saw a sudden improvement in application performance.
One thing I am not sure about is whether 20 VLFs is the right number for a 36 GB log file, but it seems to have had a good impact on the performance of the database.
Best Answer
WRITELOG is waiting on commit for your transaction's log records to be hardened to disk, and HADR_SYNC_COMMIT is waiting on commit for your transaction's log records to be sent over the network to a secondary replica and hardened to disk. So they are very, very similar waits.
They both indicate that your application is committing a lot of transactions, perhaps too many.
And on Premium your log file is on a local flash drive with very low latency, so suffering lots of WRITELOG waits suggests there is something in your application that needs to be fixed.
If you have any processes that run INSERT, UPDATE, or DELETE of single rows in a tight loop, consider wrapping them in an explicit transaction, so you only have to wait for the transaction log to be flushed at the end.
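As a hypothetical sketch of that pattern (the table and column names are made up): without the explicit transaction, each single-row UPDATE commits, and thus waits for a log flush, on its own; with it, the log is hardened to disk only once, at the COMMIT.

```sql
DECLARE @i INT = 1;

BEGIN TRANSACTION;  -- one transaction around the whole loop

WHILE @i <= 1000
BEGIN
    UPDATE dbo.Orders            -- hypothetical table
    SET    Status = 'Processed'
    WHERE  OrderId = @i;

    SET @i = @i + 1;
END;

COMMIT TRANSACTION;  -- a single WRITELOG wait here instead of 1000
```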
As always, Query Store is your friend and can show you the waits by query, and you can also analyze the waits by session in sys.dm_exec_session_wait_stats to see which parts of your workload are suffering these waits.
You can get a better sense of how much your clients are waiting by comparing each session's elapsed time and CPU time to its wait times, e.g.:
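A sketch of such a comparison, joining session-level timings to the per-session wait stats (all times in milliseconds):

```sql
-- Compare each user session's CPU and elapsed time against its
-- accumulated commit-related waits.
SELECT s.session_id,
       s.cpu_time,            -- CPU time consumed by the session, ms
       s.total_elapsed_time,  -- elapsed time for the session, ms
       w.wait_type,
       w.wait_time_ms
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_session_wait_stats AS w
  ON w.session_id = s.session_id
WHERE s.is_user_process = 1
  AND w.wait_type IN (N'WRITELOG', N'HADR_SYNC_COMMIT')
ORDER BY w.wait_time_ms DESC;
```

Sessions where `wait_time_ms` is a large fraction of `total_elapsed_time` are the ones spending most of their lives waiting on commit.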