Consider using a query that shows the free space in the log file and the free space in the data file. If you add the two together, you should get roughly the same value SSMS shows.
This query does just that:
DECLARE @log_used DECIMAL(19,7);

-- DBCC SQLPERF(LogSpace) returns: database name, log size (MB),
-- log space used (%), and a status flag.
CREATE TABLE #x(n SYSNAME, s DECIMAL(19,7), u DECIMAL(19,7), b BIT);
INSERT #x EXEC('DBCC SQLPERF(LogSpace);');

-- u is a percentage, so convert it to MB:
SELECT @log_used = s * u / 100.0 FROM #x WHERE n = DB_NAME();
DROP TABLE #x;

-- Pages allocated to objects in the current database, in MB:
DECLARE @data_used DECIMAL(19,7);
SELECT @data_used = SUM(a.total_pages)*8/1024.0
FROM sys.partitions AS p
INNER JOIN sys.allocation_units AS a
ON p.[partition_id] = a.container_id;

-- Total file size per file type, in MB (size is in 8 KB pages):
;WITH x(t,s) AS
(
SELECT [type] = CASE
WHEN [type] IN (0,2,4) THEN 'data' ELSE 'log' END,
SUM(size*8/1024.0)
FROM sys.database_files AS f
GROUP BY CASE WHEN [type] IN (0,2,4) THEN 'data' ELSE 'log' END
)
SELECT
file_type = t,
size = s,
available = s-CASE t WHEN 'data' THEN @data_used ELSE @log_used END
FROM x;
(Query source: an answer to a similar question.)
By the way, it is usually a good idea to split the data file in your tempdb into a number of equally sized files (the number depends on the number of cores). This tends to help performance significantly. Just don't split the log file; that will do more harm than good.
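As a sketch of what that looks like (file names, paths, and sizes below are hypothetical; adjust them to your environment and core count), you would resize the existing tempdb data file and add equally sized siblings:

```sql
-- Hypothetical example: four equally sized tempdb data files for a 4-core box.
-- Logical names, physical paths, and sizes are illustrative only.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 1024MB, FILEGROWTH = 256MB);
-- ...repeat for tempdev3 and tempdev4. Keep every data file the same size
-- so SQL Server's proportional-fill algorithm spreads I/O evenly; do NOT
-- add extra log files.
```

A restart is not required to add the files, but the equal sizing only pays off if the files stay equal, so set identical FILEGROWTH values too.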
Before you commit to a particular course of action, you first need to understand what is causing the poor performance.
Look at wait stats while the calculation process is underway. Check this article by Paul Randal for an excellent starting point on how to gather and analyze them.
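As a minimal starting point (Paul Randal's article linked above has a much more thorough, carefully filtered version), a simple snapshot of accumulated waits looks like this; the exclusion list here is a small, illustrative subset of the benign wait types:

```sql
-- Top waits accumulated since the last restart (or since
-- DBCC SQLPERF(N'sys.dm_os_wait_stats', CLEAR) was last run).
SELECT TOP (10)
      wait_type
    , wait_time_ms
    , signal_wait_time_ms   -- time spent waiting for CPU after being signaled
    , waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TO_FLUSH', N'XE_TIMER_EVENT',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;
```

Capture this before and after the calculation runs so you can attribute the waits to the process itself rather than to background activity.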
Use the system DMV sys.dm_io_virtual_file_stats
to understand the I/O requirements of your process. I use this code, which shows the activity over a 10-minute period:
IF (COALESCE(OBJECT_ID('tempdb..#vfs_stats'), 0) = 0)
CREATE TABLE #vfs_stats
(
run_num INT NOT NULL
, database_id INT NOT NULL
, file_id INT NOT NULL
, sample_ms BIGINT NOT NULL
, num_of_reads BIGINT NOT NULL
, num_of_bytes_read BIGINT NOT NULL
, io_stall_read_ms BIGINT NOT NULL
, num_of_writes BIGINT NOT NULL
, num_of_bytes_written BIGINT NOT NULL
, io_stall_write_ms BIGINT NOT NULL
, io_stall BIGINT NOT NULL
, size_on_disk_bytes BIGINT NOT NULL
);
TRUNCATE TABLE #vfs_stats;
INSERT INTO #vfs_stats (run_num
, database_id
, file_id
, sample_ms
, num_of_reads
, num_of_bytes_read
, io_stall_read_ms
, num_of_writes
, num_of_bytes_written
, io_stall_write_ms
, io_stall
, size_on_disk_bytes
)
SELECT 1
, vfs.database_id
, vfs.file_id
, vfs.sample_ms
, vfs.num_of_reads
, vfs.num_of_bytes_read
, vfs.io_stall_read_ms
, vfs.num_of_writes
, vfs.num_of_bytes_written
, vfs.io_stall_write_ms
, vfs.io_stall
, vfs.size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs;
WAITFOR DELAY '00:10:00';
INSERT INTO #vfs_stats (run_num
, database_id
, file_id
, sample_ms
, num_of_reads
, num_of_bytes_read
, io_stall_read_ms
, num_of_writes
, num_of_bytes_written
, io_stall_write_ms
, io_stall
, size_on_disk_bytes
)
SELECT 2
, vfs.database_id
, vfs.file_id
, vfs.sample_ms
, vfs.num_of_reads
, vfs.num_of_bytes_read
, vfs.io_stall_read_ms
, vfs.num_of_writes
, vfs.num_of_bytes_written
, vfs.io_stall_write_ms
, vfs.io_stall
, vfs.size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs;
SELECT
DatabaseName = d.name
, FileName = mf.name
, SampleTime_ms = s2.sample_ms - s1.sample_ms
, Reads = s2.num_of_reads - s1.num_of_reads
, BytesRead = s2.num_of_bytes_read - s1.num_of_bytes_read
, IOStallRead = s2.io_stall_read_ms - s1.io_stall_read_ms
, Writes = s2.num_of_writes - s1.num_of_writes
, BytesWritten = s2.num_of_bytes_written - s1.num_of_bytes_written
, IOStallWrite = s2.io_stall_write_ms - s1.io_stall_write_ms
, IOStall = s2.io_stall - s1.io_stall
, GrowthOnDisk = s2.size_on_disk_bytes - s1.size_on_disk_bytes
FROM #vfs_stats s1
INNER JOIN #vfs_stats s2 ON s1.run_num = (s2.run_num - 1)
AND s1.database_id = s2.database_id
AND s1.file_id = s2.file_id
INNER JOIN sys.databases d ON s1.database_id = d.database_id
INNER JOIN sys.master_files mf ON s1.database_id = mf.database_id
AND s1.file_id = mf.file_id
ORDER BY d.name
, mf.name;
You'll want to build a thorough performance profile of the process to use as a baseline. Then, once you make a change, you can compare the new performance profile against that baseline to understand what impact the change had.
Without understanding the performance profile, you will be shooting in the dark by throwing hardware at the problem. That said, it can be a reasonable strategy if you know the hardware is underperforming, such as data files on old 7,200 rpm spinning rust or old Xeon processors. As a purely off-the-cuff recommendation, you could try putting all the databases (tempdb and the user database) onto a fast SSD, preferably a PCIe SSD, and making sure you have enough processor speed and memory bandwidth.
Best Answer
We cracked this a while back by replacing the RAID controller on the server. The disks and server configuration were fine, but it turned out the RAID controller couldn't keep up with the I/O.
We are now in a good place, with reads at ~2 ms and writes at <= 5 ms.