Consider the following simple MCVE:
SET STATISTICS IO, TIME OFF;
USE tempdb;
IF OBJECT_ID(N'tempdb..#t1', N'U') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1
(
r int NOT NULL
);
IF OBJECT_ID(N'tempdb..##t1', N'U') IS NOT NULL DROP TABLE ##t1;
CREATE TABLE ##t1
(
r int NOT NULL
);
IF OBJECT_ID(N'dbo.s1', N'U') IS NOT NULL DROP TABLE dbo.s1;
CREATE TABLE dbo.s1
(
r int NOT NULL
PRIMARY KEY CLUSTERED
);
INSERT INTO dbo.s1 (r)
SELECT TOP(10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.syscolumns sc1
CROSS JOIN sys.syscolumns sc2;
GO
When I run the following inserts, inserting into #t1
shows no stats I/O for the temp table. However, inserting into ##t1
does show stats I/O for the temp table.
SET STATISTICS IO, TIME ON;
GO
INSERT INTO #t1 (r)
SELECT r
FROM dbo.s1;
The stats output:
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.

Table 's1'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 16 ms, elapsed time = 9 ms.

(10000 rows affected)
INSERT INTO ##t1 (r)
SELECT r
FROM dbo.s1;
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.

Table '##t1'. Scan count 0, logical reads 10016, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 's1'. Scan count 1, logical reads 19, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 47 ms, elapsed time = 45 ms.

(10000 rows affected)
Why are there so many reads on the ##temp table when I'm only inserting into it?
Best Answer
Minimal logging is not being used when INSERT INTO is combined with a global temp table.

Inserting one million rows into a global temp table by using INSERT INTO:
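The answer's original test code was not preserved; a minimal sketch of that kind of test might look like this (the table name ##big and the row source are assumptions, not from the original answer):

```sql
-- Assumed repro: heap global temp table, loaded with 1M rows via INSERT INTO ... SELECT
IF OBJECT_ID(N'tempdb..##big', N'U') IS NOT NULL DROP TABLE ##big;
CREATE TABLE ##big
(
r int NOT NULL
);

-- Fully logged: expect roughly one LOP_INSERT_ROWS log record per row
INSERT INTO ##big (r)
SELECT TOP(1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2;
```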
When running SELECT * FROM fn_dblog(NULL, NULL) while the above query is executing, ~1M rows are returned: one LOP_INSERT_ROWS operation for each row, plus other log records.

The same insert on a local temp table returns only around 700 rows from SELECT * FROM fn_dblog(NULL, NULL): minimal logging.
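To see the difference yourself, you can aggregate the log records by operation (fn_dblog is an undocumented function; run this in tempdb right after the insert, before the log is cleared):

```sql
-- Count tempdb log records per operation; a fully logged insert shows
-- ~1M LOP_INSERT_ROWS rows, a minimally logged one shows very few
USE tempdb;
SELECT Operation, COUNT(*) AS log_records
FROM fn_dblog(NULL, NULL)
GROUP BY Operation
ORDER BY log_records DESC;
```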
Inserting one million rows into a global temp table by using SELECT INTO behaves differently. SELECT INTO a global temp table with 10k records shows low time and I/O statistics.
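A sketch of that SELECT INTO variant, reusing the question's dbo.s1 source table (the target name ##t2 is an assumption):

```sql
-- SELECT INTO creates and loads the global temp table in one statement
-- and is eligible for minimal logging
IF OBJECT_ID(N'tempdb..##t2', N'U') IS NOT NULL DROP TABLE ##t2;

SELECT r
INTO ##t2
FROM dbo.s1;
```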
Based on this blog post, we can add TABLOCK to initiate minimal logging on a heap table, which results in low logical reads.
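Applied to the question's repro, the TABLOCK variant would be (a sketch, assuming the ##t1 heap from the question already exists):

```sql
-- The TABLOCK hint on the heap target allows INSERT ... SELECT
-- to use minimal logging, like SELECT INTO
INSERT INTO ##t1 WITH (TABLOCK) (r)
SELECT r
FROM dbo.s1;
```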
Part of an answer by @PaulWhite explains how to achieve minimal logging on temporary tables.
Creating a regular table to test this and filling it up with 1M records shows more than 1M logical reads on that table as well.
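The regular-table test could be sketched like this (the name dbo.big is an assumption; the original answer's code was not preserved):

```sql
-- Same behavior on a permanent heap table: without TABLOCK the insert
-- is fully logged and STATISTICS IO reports >1M logical reads
IF OBJECT_ID(N'dbo.big', N'U') IS NOT NULL DROP TABLE dbo.big;
CREATE TABLE dbo.big
(
r int NOT NULL
);

SET STATISTICS IO ON;
INSERT INTO dbo.big (r)
SELECT TOP(1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2;
```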
Paul White also has an answer explaining the logical reads reported on the global temp table.
Conclusion
The conclusion is that INSERT INTO (without TABLOCK) is not able to use minimal logging when the target is a global temp table or a normal table, so every inserted row is logged individually in the tempdb log file. A local temp table, SELECT INTO, or INSERT INTO ... WITH (TABLOCK) is able to use minimal logging.