SQL Server – 2019 Enterprise – NOLOCK error without NOLOCK, page latch time-out on tempdb and finally a dump

Tags: dump, sql-server, sql-server-2019, tempdb

We have a very strange problem with a new SQL Server 2019 on 2 new physical machines:

Infrastructure:
Starting a new installation of SQL Server 2019 Enterprise (15.0.2000.5 / X64 on Windows Server 2019 Standard 10.0 / Build 17763) on 2 new physical machines (for AlwaysOn replicas).
The new machines are LENOVO:

  • ThinkSystem SR630 – [7X02CTO1WW]
  • 1 CPU : 1 Xeon Gold 6208U – 2.90 GHz (16 cores × 2 with hyperthreading)
  • 256 GB RAM (32 GB × 8)

The problem appears systematically on both new machines…

Test:
The test that generates the malfunction is as follows:

  1. creation of a database
  2. enlarging the database files and switching to SIMPLE recovery mode
  3. creation of a table
  4. filling the table with 1 million rows
  5. execution of the test query (calculation of aggregated time intervals – interval collapsing)

It is this last query (executed about 10 times in a row) that very often fails with the following message:

Msg 601, Level 12, State 1, Line … Could not continue scan with NOLOCK due to data movement.

Of course, at no point did we use the NOLOCK hint or the READ UNCOMMITTED isolation level. However, the following information is recorded in the SQL Server error log:

  1. time-outs for buffer latches on tempdb pages
  2. sometimes a dump is generated

Example of « buffer latch » messages:

A time-out occurred while waiting for buffer latch -- type 4, bp 00000292CE3D60C0, page 9:18634, stat 0x10b, database id: 2, allocation unit Id: 422212527063040/140737550352384, task 0x00000292AB073468 : 9, waittime 300 seconds, flags 0x100000001a, owning task 0x00000292AB06B848. Not continuing to wait.

A time-out occurred while waiting for buffer latch -- type 4, bp 00000292CE398340, page 6:10372, stat 0x10b, database id: 2, allocation unit Id: 422212527063040/140737550352384, task 0x00000292AB07B468 : 8, waittime 300 seconds, flags 0x1a, owning task 0x00000292AB073468. Not continuing to wait.

A time-out occurred while waiting for buffer latch -- type 4, bp 00000292CE3DC480, page 9:18655, stat 0x10b, database id: 2, allocation unit Id: 422212527063040/140737550352384, task 0x00000292AB703C28 : 12, waittime 300 seconds, flags 0x1a, owning task 0x00000292AB07B468. Not continuing to wait.
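To confirm the pattern, the error log can be searched directly from T-SQL. A minimal sketch using xp_readerrorlog (undocumented but widely used; the first argument is the log file number, the second selects the SQL Server error log, the last two are search strings):

-- Scan the current SQL Server error log for the buffer latch time-out messages
EXEC sys.xp_readerrorlog 0, 1, N'buffer latch', N'time-out';

Repeating the search with N'Msg 601' or N'dump' as the filter helps correlate the latch time-outs with the NOLOCK errors and the generated dumps.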

Our investigations:

  • Installing CU6, then CU7, does not solve the problem
  • Disk failure has been ruled out: we created the database on 3 different hard disk drives, and the problem remains even when we move tempdb
  • The « buffer latch » messages could indicate a memory fault, but nothing points to the RAM in the servers (we ran a complete memory test)
  • Soft NUMA is enabled; we ran complementary tests with soft NUMA disabled, but that does not solve the problem either

Some elements that seem to decrease the occurrence of the phenomenon:

  1. disabling soft NUMA (with hyperthreading enabled): the first 3 tests succeed and the last 7 fail
  2. disabling hyperthreading (with soft NUMA disabled): out of 10 tests, only 1 fails
  3. disabling hyperthreading (with soft NUMA enabled): out of 90 tests, only 1 fails

Some elements that seem to solve the problem:

  1. setting MAXDOP 1
  2. recomputing statistics (UPDATE STATISTICS T_TIME_INTERVAL_TIV WITH FULLSCAN;)
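Both mitigations can be expressed directly in T-SQL against the test objects from the script below:

-- Mitigation 1: force a serial plan by appending a hint
-- to the final SELECT of the test query:
ORDER BY id, intime, outtime
OPTION (MAXDOP 1);

-- Mitigation 2: rebuild the statistics with a full scan
UPDATE STATISTICS T_TIME_INTERVAL_TIV WITH FULLSCAN;

Either change on its own is enough to make the Msg 601 error disappear in our tests.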

We removed the RAM and replaced it with modules of a different part number. The problem persists…

Perhaps this has something to do with the new « concurrent PFS updates » feature of SQL Server 2019…
Is there any trace flag that disables this new behaviour?
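For experimentation, candidate trace flags can be toggled server-wide without a restart. A sketch using TF 3925 and TF 3972 (the undocumented flags tested in the update further down; whether either actually governs concurrent PFS updates is exactly the open question here):

-- Enable the trace flags globally (-1) for the lifetime of the instance
DBCC TRACEON (3925, 3972, -1);
DBCC TRACESTATUS (-1);   -- verify which flags are currently active
-- To persist across restarts, add -T3925 / -T3972 as SQL Server startup parameters

Flags enabled with DBCC TRACEON are lost on instance restart, which makes this convenient for A/B testing the repro.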

========== SCRIPT ==========
 
-- 1) creating database
CREATE DATABASE DB_BENCH
GO
 
DECLARE @SQL NVARCHAR(max) = N'';
SELECT @SQL = @SQL + N'ALTER DATABASE DB_BENCH MODIFY FILE (NAME = ''' + name + N''', SIZE = 10 GB, FILEGROWTH = 64 MB);'
FROM DB_BENCH.sys.database_files;
SET @SQL = @SQL + N'ALTER DATABASE DB_BENCH SET RECOVERY SIMPLE;'
EXEC (@SQL);
GO
 
USE DB_BENCH
GO
 
-- 2) creating table and view
CREATE TABLE T_TIME_INTERVAL_TIV
(TIV_ID       INT NOT NULL IDENTITY PRIMARY KEY,
 TIV_GROUP    INT,
 TIV_DEBUT    DATETIME2(0),
 TIV_FIN      DATETIME2(0))
GO 
 
CREATE VIEW V
AS
SELECT TIV_GROUP AS id, TIV_DEBUT AS intime, TIV_FIN AS outtime
FROM   T_TIME_INTERVAL_TIV
GO
 
-- 3) inserting data
TRUNCATE TABLE T_TIME_INTERVAL_TIV;
GO
 
BULK INSERT T_TIME_INTERVAL_TIV
FROM 'C:\DATA_SQL\intervals.txt'
WITH (KEEPIDENTITY , FIELDTERMINATOR = '\t',
      ROWTERMINATOR = '\n');
GO
 
-- 4) testing
SET NOCOUNT ON;
GO
 
SET STATISTICS TIME ON;
GO
 
WITH T1 As
(SELECT id, intime 
 FROM   V
 UNION  ALL
 SELECT id, outtime FROM V),
T2 As
(SELECT ROW_NUMBER() OVER(PARTITION BY id ORDER BY intime) NN, id, intime
 FROM   T1 T1_1),
T3 As
(SELECT T2_1.NN - ROW_NUMBER() OVER(PARTITION BY T2_1.id ORDER BY T2_1.intime,T2_2.intime) NN1,
        T2_1.id, T2_1.intime intime, T2_2.intime outtime
 FROM   T2 T2_1
        INNER JOIN T2 T2_2
              ON T2_1.id=T2_2.id
                 And T2_1.NN=T2_2.NN-1
 WHERE  EXISTS (SELECT *
                FROM   V S
                WHERE  S.id=T2_1.id
                  AND  (S.intime < T2_2.intime AND S.outtime>T2_1.intime))
    OR  T2_1.intime = T2_2.intime)
SELECT id, MIN(intime) intime, MAX(outtime) outtime
FROM   T3
GROUP  BY id, NN1
ORDER BY id, intime, outtime;
 
========== END OF SCRIPT ==========

Files provided with this post:

  1. data
  2. dump 0005 SQL Server
    2.1 log
    2.2 mdmp
    2.3 txt
    2.4 SQLDUMPER_ERRORLOG.log
  3. Query plans
    3.1 raw data
    3.2 after an UPDATE STATISTICS in FULLSCAN mode
    3.3 Comparison of the two plans

Update (2020-09-14 16:00 UTC) – complementary tests:

Using the database at the SQL Server 2017 compatibility level on SQL Server 2019, like this:

USE master
GO
ALTER DATABASE DB_BENCH SET COMPATIBILITY_LEVEL = 140;
GO

Result: no errors over 180 executions of the same query.

Using the 2019 compatibility level and enabling TF 3925 or TF 3972, or both combined, systematically results in the error on every execution (with MAXDOP 0, hyperthreading on, and soft NUMA on).

Best Answer

SEE THE SAMPLE REPORT AT THE VERY BOTTOM

I offer here a detailed and complete proposed answer. The poster will need to accept it or provide feedback to improve the solution. In my opinion, the issue reported was not a fault in Microsoft SQL Server 2019, was not about trace flags, and was not about tempdb. The issue was the code and the conceptualization of the requirements. The requirements are fairly straightforward, as follows.

The general concept here is similar to a FILO Stack (First In, Last Out). It simulates EXACTLY one thousand (1000) number-classified "Actors" going onto a crowded stage with a writable journal or logging card, where the Actors enter the stage and leave the stage many times over a long series of multiple Broadway Shows, managed by Liza Minelli in New York City, USA, long before COVID.

The exit and enter cycles of each of the 1000 Actors can go on for days and years because it is a very successful show, and their exits and entries are tracked to the second. The data has a granularity of one second (86,400 seconds per day), with no fractions. I have not analyzed whether more than one Actor can exit and enter simultaneously, but there will be evidence in the data.

Each Actor enters at Stage Right, and then subsequently exits at Stage Left. Each Actor is "labelled" with a numbered card. This is called a "Classifier" by some business analyst working for Liza Minelli. The analyst assigns this badge number to each Actor, and sticks the stinking badge on their shoulder.

The Actor's badge number does NOT uniquely identify each row in the Poster source data (SQL Pro), because the Actor with the Badge Number may enter the Stage and exit the Stage multiple times over the years, just like a Symbolic Prophet.

Thus, their Classifier appears many times on each row with different enter and exit datetimes. Each datetime they enter at stage right, the Actor gets a new datetime stamped on a logging card that they carry in the "Datetime IN" column. Next, when they subsequently exit at stage left, the Actor gets another stamp in another column named "Datetime OUT".

Over the years, the Actor can enter and exit the stage multiple times, but the annual report for the Security Department under Liza Minelli only needs to see the first "entering Stage Right", known as "Datetime IN" and the last "exiting Stage Left", known as "Datetime OUT". They need this for each individual Actor who is classified with the numbered stinking badge on their shoulder. Note, the badges do stink. In fact, even if the Actors say they don't need no stinking badges, they still stink. Giggle now to stay happy.

Also, the first will come last and the last will come first, since Liza Minelli has Faith (see Matthew 19:30). ANYWAY, here are the steps to complete this requirement / exigence:

All the code is listed here, including all DDL (data definition) and DML (data manipulation / T-SQL), and even dozens of the result rows. Square brackets are shown below purely as a matter of convention; they are not part of the object names created. The code blocks below are referenced by the following numbered list.

  • 1.0: First, create a database named [ActorStage];
  • 2.0: Create a new namespace schema therein named [Meta], and one named [Hub];
  • 2.1: Create a new table inside the [Hub], naming it [Hub].[Intervals];
  • 2.2: Include a covering Clustered Index on that table definition (see code)
  • 3.0: Load the source data to the table with OLEDB BULK OPENROWSET provider
  • 3.1: Use an XML Format file (see GitHub) for the provider, just like Azure
  • 4.0: Create a parameterized Stored Procedure to load data
  • 5.0: Implement an input parameter for code page 1252 raw
  • 5.1: Implement an input parameter for starting row and truncation needs
  • 6.0: Implement error handling in the SPROC
  • 7.0: Implement more input parameters for FQFP text and format file UNC Paths
  • 7.1: Create the XML Format File according to SQL 2019 Bulk OPENROWSET provider
  • 8.0: Create Windows Shares with UNC paths and assign permissions as needed
  • 9.0: Download the tabbed text file from this SQL Pro Post
  • 9.1: Put the tabbed file into the UNC File Folder from 8.0
  • 10.0: Extrapolate a SPROC signature for testing, and include arguments
  • 10.1: Load the source data at will, using the raised truncation flag
  • 10.2: You may adjust the parameters for multiple test scenarios
  • 11.0: Create the code that will extract the report (see Git)
  • 11.1: See T-SQL Querying by Itzik Ben-Gan et al. (ISBN: 978-0-7356-8504-8)
  • 11.2: Study the chapters on Windowing functions
  • See FIRST_VALUE() & LAST_VALUE() especially
  • 12.0 Create a simple set of Common Table Expressions
  • 13.0 Using the CTE, query that source data and output the report

The report should have the following features:

  • a. One Row per classified participant (numbered badges per person)
  • b. Two DATETIME columns on each row (entry and exit)
  • c. The MAX Exit Time and the MIN Enter Time for each classified action per row

This is actually a classic STACK solution. First In, Last Out (push / pop). The poster appears to require 1 row for each classifier, where the row shows the classifier number (a non-key integer), with their entries and exits. See Shakespeare for more on Actors entering and exiting the Stage of Life.

/*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Purpose: Creates the ActorStage Database for SQL Pro
--------------------------------------------------------
-- SQL Server 2019 Version
-- The folders must exist as shown 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/
USE [master]
go
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
GO
-- Next, directories, will only work once. Look at your system.
GO
xp_cmdshell 'MKDIR C:\OLTP_DATA\ActorStage\System\';
GO
xp_cmdshell 'MKDIR C:\OLTP_DATA\ActorStage\Hub\';
GO
xp_cmdshell 'MKDIR C:\OLTP_DATA\ActorStage\Metadata\';
GO
xp_cmdshell 'MKDIR C:\OLTP_DATA\ActorStage\Indices\';
GO
xp_cmdshell 'MKDIR D:\OLTP_TLOG\ActorStage\';
GO

USE [master]
GO

CREATE DATABASE [ActorStage]
CONTAINMENT = NONE
ON PRIMARY -- The Primary Filegroup should never have user data on it.
( NAME = N'ActorStage_SYSDATA_0', FILENAME = N'C:\OLTP_DATA\ActorStage\System\ActorStage_SYSDATA_0.MDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB),
( NAME = N'ActorStage_SYSDATA_1', FILENAME = N'C:\OLTP_DATA\ActorStage\System\ActorStage_SYSDATA_1.MDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB),

FILEGROUP [HUB] 
( NAME = N'ActorStage_HUB_0', FILENAME = N'C:\OLTP_DATA\ActorStage\Hub\ActorStage_HUB_0.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 12MB),
( NAME = N'ActorStage_HUB_1', FILENAME = N'C:\OLTP_DATA\ActorStage\Hub\ActorStage_HUB_1.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 12MB),
( NAME = N'ActorStage_HUB_2', FILENAME = N'C:\OLTP_DATA\ActorStage\Hub\ActorStage_HUB_2.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 12MB),
( NAME = N'ActorStage_HUB_3', FILENAME = N'C:\OLTP_DATA\ActorStage\Hub\ActorStage_HUB_3.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 12MB),

FILEGROUP [METADATA]
( NAME = N'ActorStage_METADATA_0', FILENAME = N'C:\OLTP_DATA\ActorStage\Metadata\ActorStage_METADATA_0.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB),
( NAME = N'ActorStage_METADATA_1', FILENAME = N'C:\OLTP_DATA\ActorStage\Metadata\ActorStage_METADATA_1.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB), 

FILEGROUP [INDICES] 
( NAME = N'ActorStage_INDICES_0', FILENAME = N'C:\OLTP_DATA\ActorStage\Indices\ActorStage_INDICES_0.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB),
( NAME = N'ActorStage_INDICES_1', FILENAME = N'C:\OLTP_DATA\ActorStage\Indices\ActorStage_INDICES_1.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB),
( NAME = N'ActorStage_INDICES_2', FILENAME = N'C:\OLTP_DATA\ActorStage\Indices\ActorStage_INDICES_2.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB),
( NAME = N'ActorStage_INDICES_3', FILENAME = N'C:\OLTP_DATA\ActorStage\Indices\ActorStage_INDICES_3.NDF' , SIZE = 25MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB)

LOG ON -- Log files only fill 1 at a time. The second log file is just exposed here as a precautionary measure.
( NAME = N'ActorStage_TLOG_0', FILENAME = N'D:\OLTP_TLOG\ActorStage\ActorStage_TLOG_0.LDF' , SIZE = 102400KB , MAXSIZE = 204800KB , FILEGROWTH = 20480KB ),
( NAME = N'ActorStage_TLOG_1', FILENAME = N'D:\OLTP_TLOG\ActorStage\ActorStage_TLOG_1.LDF' , SIZE = 102400KB , MAXSIZE = 204800KB , FILEGROWTH = 20480KB )
GO

USE [ActorStage]
GO
CREATE SCHEMA [Meta] AUTHORIZATION [dbo]; -- the procedures below live in [Meta]
GO
CREATE SCHEMA [Hub] AUTHORIZATION [dbo];
GO
CREATE SCHEMA [Spoke] AUTHORIZATION [dbo];
GO

-- Force CS / AS because Actors are Sensitive about their Accents in each Case
ALTER DATABASE [ActorStage] COLLATE SQL_Latin1_General_CP1_CS_AS;
ALTER DATABASE [ActorStage] SET RECOVERY SIMPLE
--ALTER DATABASE [ActorStage] SET COMPATIBILITY_LEVEL = x
GO

ALTER DATABASE [ActorStage] SET ANSI_NULL_DEFAULT ON
ALTER DATABASE [ActorStage] SET ANSI_NULLS ON 
ALTER DATABASE [ActorStage] SET ANSI_PADDING ON
ALTER DATABASE [ActorStage] SET ANSI_WARNINGS ON
ALTER DATABASE [ActorStage] SET QUOTED_IDENTIFIER OFF
GO

ALTER DATABASE [ActorStage] SET ARITHABORT ON
ALTER DATABASE [ActorStage] SET NUMERIC_ROUNDABORT ON 
GO

ALTER DATABASE [ActorStage] SET AUTO_CLOSE OFF 
ALTER DATABASE [ActorStage] SET AUTO_SHRINK OFF 
ALTER DATABASE [ActorStage] SET AUTO_UPDATE_STATISTICS ON 

GO
ALTER DATABASE [ActorStage] SET CURSOR_CLOSE_ON_COMMIT OFF 
ALTER DATABASE [ActorStage] SET CURSOR_DEFAULT GLOBAL 
ALTER DATABASE [ActorStage] SET CONCAT_NULL_YIELDS_NULL ON 
GO

ALTER DATABASE [ActorStage] SET RECURSIVE_TRIGGERS OFF 
ALTER DATABASE [ActorStage] SET DISABLE_BROKER 
ALTER DATABASE [ActorStage] SET AUTO_UPDATE_STATISTICS_ASYNC OFF 
ALTER DATABASE [ActorStage] SET DATE_CORRELATION_OPTIMIZATION OFF 
ALTER DATABASE [ActorStage] SET TRUSTWORTHY OFF 
ALTER DATABASE [ActorStage] SET ALLOW_SNAPSHOT_ISOLATION OFF 
ALTER DATABASE [ActorStage] SET PARAMETERIZATION SIMPLE 
ALTER DATABASE [ActorStage] SET READ_COMMITTED_SNAPSHOT OFF 
ALTER DATABASE [ActorStage] SET HONOR_BROKER_PRIORITY OFF 
GO

ALTER DATABASE [ActorStage] SET PAGE_VERIFY CHECKSUM  
ALTER DATABASE [ActorStage] SET DB_CHAINING OFF 
ALTER DATABASE [ActorStage] SET FILESTREAM( NON_TRANSACTED_ACCESS = OFF ) 
ALTER DATABASE [ActorStage] SET TARGET_RECOVERY_TIME = 0 SECONDS 
GO

ALTER DATABASE [ActorStage] SET READ_WRITE 
ALTER DATABASE [ActorStage] SET MULTI_USER 
GO

IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
BEGIN
    EXECUTE [ActorStage].[dbo].[sp_fulltext_database] @action = 'enable'
END
GO

Next:

/*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Actors Stage
-------------------------------------------------
-- This Script Name: CreateTableIntervals.sql
-- Creator: @CubeSpark
-- Created: 2020 COVID Days
-- Purpose: Creates the Intervals table in the Hub schema for SQL Pro
-- Compats: Tested on SQL Server 2019
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/

DROP TABLE IF EXISTS Hub.Intervals;
PRINT 'Table Hub.Intervals will be created.';
GO

CREATE TABLE [Hub].[Intervals](
    [Interval_PSK] INTEGER NOT NULL,
    [Classifier] INTEGER NOT NULL,
    [DatetimeIN] DATETIME NOT NULL,
    [DatetimeOUT] DATETIME NOT NULL,
 CONSTRAINT [Intervals_PKC] PRIMARY KEY CLUSTERED  -- index
(
    [Interval_PSK], [Classifier], [DatetimeIN], [DatetimeOUT] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [HUB]
) ON [HUB];
GO

IF OBJECT_ID (N'Hub.Intervals', N'U') IS NOT NULL 
PRINT 'Table Hub.Intervals was successfully Created.';
GO

Next, the OLE DB OPENROWSET BULK provider format file:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 <RECORD>
  <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\t"/>
  <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="\t"/>
  <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="\t"/>
  <FIELD ID="4" xsi:type="CharTerm" TERMINATOR="\t"/>
  <FIELD ID="5" xsi:type="CharTerm" TERMINATOR="\r\n"/>
 </RECORD>
 <ROW>
  <COLUMN SOURCE="1" NAME="Interval_PSK" xsi:type="SQLINT"/>
  <COLUMN SOURCE="2" NAME="Classifier" xsi:type="SQLINT"/>
  <COLUMN SOURCE="3" NAME="DateTimeIN" xsi:type="SQLDATETIME"/>
  <COLUMN SOURCE="4" NAME="DateTimeOUT" xsi:type="SQLDATETIME"/>
 </ROW>
</BCPFORMAT>
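For reference, the dynamic statement that the seeding procedure below builds from this format file boils down to something like the following (the UNC paths are placeholders, matching the procedure's default parameter values):

INSERT INTO [Hub].[Intervals] WITH (KEEPDEFAULTS)
SELECT SourceLoad.*
FROM OPENROWSET(BULK '\\YOURTHING\InputFiles\Intervals.txt',
     FORMATFILE = '\\YOURTHING\FormatFiles\Intervals.xml',
     ERRORFILE  = '\\YOURTHING\OutputFiles\Errors.txt',
     FIRSTROW   = 1,
     CODEPAGE   = '1252') AS SourceLoad;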

Next, Seeding SPROC and Error SPROC

    /*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    -- Systems Deployment Management.
    -------------------------------------------------
    -- This Script Name: CreateGetErrorData.sql
    -- Creator: @CubeSpark
    -- Created: 2020 COVID Days
    This procedure will obtain error data from Catch blocks.

    -- Compats: Tested on SQL Server 2019      
    --ERROR_NUMBER() returns the number of the error.
    --ERROR_SEVERITY() returns the severity.
    --ERROR_STATE() returns the error state number.
    --ERROR_PROCEDURE() returns the name of the stored procedure or trigger 
    --ERROR_LINE() returns the line number inside the routine that caused 
    --ERROR_MESSAGE() returns the complete text of the error message. 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/
    
    DROP PROCEDURE IF EXISTS Meta.GetErrorData;  
    GO 
    
    IF OBJECT_ID(N'Meta.GetErrorData') IS NULL
    BEGIN
      PRINT 'Stored Procedure Meta.GetErrorData will be created.'
    END;
    
    GO
    -- Create procedure to retrieve error information.  
    CREATE PROCEDURE Meta.GetErrorData
    @InsertErrorRowOnly BIT = 0,
    @SetMessageText NVARCHAR(200) = 'Calling SQL Script is not known'
    AS
    SET NOCOUNT ON;
    IF @InsertErrorRowOnly = 0  -- use this if you want to just print the statements in the results window
    BEGIN
        SELECT
        'UserName' = 'CubeSpark',
        'ErrorNumber' = ERROR_NUMBER(),
        'ErrorSeverity' = ERROR_SEVERITY(),
        'ErrorState' = ERROR_STATE(),
        'ErrorProcedure' = ISNULL(ERROR_PROCEDURE(), @SetMessageText),
        'ErrorLine' = ERROR_LINE(),
        'ErrorMessage' = ERROR_MESSAGE();
    END;
    
    IF @InsertErrorRowOnly = 1 -- use this if you want to register the error in the meta data table.
    BEGIN
        INSERT INTO [Meta].[ErrorData] (
        [UserName], 
        [ErrorNumber], 
        [ErrorSeverity], 
        [ErrorState],
        [ErrorProcedure], 
        [ErrorLine], 
        [ErrorMessage], 
        [ErrorDateTime])
        SELECT 
        'UserName' = 'CubeSpark',
        'ErrorNumber' = ERROR_NUMBER(),
        'ErrorSeverity' = ERROR_SEVERITY(),
        'ErrorState' = ERROR_STATE(),
        'ErrorProcedure' = ISNULL(ERROR_PROCEDURE(), @SetMessageText),
        'ErrorLine' = ERROR_LINE(),
        'ErrorMessage' = ERROR_MESSAGE(),
        'ErrorDateTime' = GETDATE();
    END;
    GO  
    
    IF OBJECT_ID(N'Meta.GetErrorData') IS NOT NULL
    BEGIN
      PRINT 'Stored Procedure Meta.GetErrorData has been created.'
    END
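Note that the @InsertErrorRowOnly = 1 branch of the procedure writes to [Meta].[ErrorData], whose DDL is not shown above. A plausible definition inferred from the INSERT column list (the column types, the defensive schema creation, and the target filegroup are my assumptions):

    -- Assumed DDL for the error log table used by Meta.GetErrorData
    IF SCHEMA_ID(N'Meta') IS NULL EXEC (N'CREATE SCHEMA [Meta] AUTHORIZATION [dbo];');
    GO
    CREATE TABLE [Meta].[ErrorData](
        [ErrorData_PSK]  INTEGER NOT NULL IDENTITY PRIMARY KEY,
        [UserName]       NVARCHAR(128) NULL,
        [ErrorNumber]    INTEGER NULL,
        [ErrorSeverity]  INTEGER NULL,
        [ErrorState]     INTEGER NULL,
        [ErrorProcedure] NVARCHAR(200) NULL,
        [ErrorLine]      INTEGER NULL,
        [ErrorMessage]   NVARCHAR(MAX) NULL,
        [ErrorDateTime]  DATETIME NULL
    ) ON [METADATA];
    GO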

Next, the seeding and loader. You'll need the Data from SQL Pro's post

    /*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    -- Data Seed Spreading and Loading Script
    -------------------------------------------------
    -- SQL Server 2019 Version
    -- This Script Name: CreateProcedureSeeding.sql
    -- Creator: Mr. Grant Peace
    -- Alterer: Mr. Getauf Thescaffolt 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/
    -- Making it USEFUL is your job

    DROP PROCEDURE IF EXISTS [Meta].[Seeding];
    PRINT 'Stored Procedure Meta.Seeding will be recreated'
    GO
    
    CREATE PROCEDURE [Meta].[Seeding]
    @SchemaName NVARCHAR(20) = 'Hub',
    @TableName NVARCHAR(100) = 'Intervals',
    @StartingRow INTEGER = 1,
    @CodePage INTEGER = 1252,
    @UNC_QualifiedErrorFilePath NVARCHAR(200) = '\\YOURTHING\OutputFiles\Errors.txt',
    @UNC_QualifiedSeedFilePath NVARCHAR(200) = '\\YOURTHING\InputFiles\Intervals.txt',
    @UNC_QualifiedFormatFilePath NVARCHAR(200) = '\\YOURTHING\OutputFiles\Intervals.xml',
    @PrintOnly BIT = 1,
    @WithTruncate BIT = 0,
    @InsertErrorRowOnly BIT = 0,
    @Message_OUT NVARCHAR(MAX) = 'Test' OUTPUT,
    @ErrorMessage_OUT NVARCHAR(MAX) = 'Test'  OUTPUT,
    @RowCount_OUT INTEGER = 0 OUTPUT
    AS
    SET NOCOUNT ON
    DECLARE 
    @Executable NVARCHAR(MAX),
    @RowCount INTEGER, 
    @crlf NCHAR(2) = CHAR(13) + CHAR(10),
    @Parameters NVARCHAR(100),
    @SetMessageText NVARCHAR(1024) = 'No Errors Handled';
    
    SET @Parameters = N'@RowCount_EX INTEGER OUTPUT'
    SET @RowCount_OUT = 0;
    
    SET @Message_OUT = 'Success.';
    SET @ErrorMessage_OUT = 'No errors.';
    
        -- No table exists, so then you can do a SELECT INTO.
        BEGIN TRY
            IF @WithTruncate = 1 
            BEGIN
                SET @Executable = 
                'IF OBJECT_ID (N' 
                + '''' + '[' + @SchemaName + '].[' 
                + @TableName + ']' + '''' + ', N' + '''' + 'U' + '''' + ') IS NOT NULL ' 
                + @crlf
                + 'BEGIN '
                + @crlf
                + '  TRUNCATE TABLE [' + @SchemaName + '].[' + @TableName + '];'
                + @crlf
                + '  INSERT INTO [' + @SchemaName + '].[' + @TableName + '] WITH (KEEPDEFAULTS)'
                + @crlf
                + '  SELECT SourceLoad.* FROM OPENROWSET(BULK ' + '''' + @UNC_QualifiedSeedFilePath + '''' + ', '
                + @crlf
                + '  FORMATFILE = ' + '''' + @UNC_QualifiedFormatFilePath + '''' + ', '
                + @crlf
                + '  ERRORFILE = ' + '''' + @UNC_QualifiedErrorFilePath + '''' + ', '
                + @crlf
                + '  FIRSTROW = ' + CAST(@StartingRow AS NVARCHAR(10)) + ', ' --2,'
                + @crlf
                + '  CODEPAGE = ' + '''' + CAST(@CodePage AS NVARCHAR(6)) + '''' + ') AS SourceLoad'
                + @crlf
                + 'END;' 
                + @crlf
                + 'SET @RowCount_EX = @@ROWCOUNT;'
            END
            ELSE
            BEGIN
                SET @Executable = 
                'IF OBJECT_ID (N' 
                + '''' + '[' + @SchemaName + '].[' 
                + @TableName + ']' + '''' + ', N' + '''' + 'U' + '''' + ') IS NOT NULL ' 
                + @crlf
                + 'BEGIN '
                + @crlf
                + '  INSERT INTO [' + @SchemaName + '].[' + @TableName + '] WITH (KEEPDEFAULTS)'
                + @crlf
                + '  SELECT SourceLoad.* FROM OPENROWSET(BULK ' + '''' + @UNC_QualifiedSeedFilePath + '''' + ', '
                + @crlf
                + '  FORMATFILE = ' + '''' + @UNC_QualifiedFormatFilePath + '''' + ', '
                + @crlf
                + '  ERRORFILE = ' + '''' + @UNC_QualifiedErrorFilePath + '''' + ', '
                + @crlf
                + '  FIRSTROW = ' + CAST(@StartingRow AS NVARCHAR(10)) + ', ' --2,'
                + @crlf
                + '  CODEPAGE = ' + '''' + CAST(@CodePage AS NVARCHAR(6)) + '''' + ') AS SourceLoad'
                + @crlf
                --+ @crlf
                --+ '  FIRSTROW = 2,'
                --+ '  CODEPAGE = ' + '''' + 'RAW' + ''''   + ') AS SourceLoad'
                + @crlf
                + 'END;' 
                + @crlf
                + 'SET @RowCount_EX = @@ROWCOUNT;'
            END;
    
            IF @PrintOnly = 1 PRINT @Executable  -- SSIS suppresses PRINT because it's not STDIO.
            IF @PrintOnly = 0 EXECUTE sp_executesql @Executable, @Parameters, @RowCount_EX = @RowCount_OUT OUTPUT;
            
            IF @PrintOnly = 0 AND @RowCount_OUT > 0
            BEGIN
                PRINT 'Seeding of [' + @SchemaName + '].[' + @TableName + '] is complete, with an affected row count of: ' + CAST(@RowCount_OUT AS VARCHAR(10));
            END
            
            IF @PrintOnly = 0 AND @RowCount_OUT = 0 
            BEGIN
                PRINT 'Table [' + @SchemaName + '].[' + @TableName + '] does not exist. Create the table first before loading.'
            END
        
        END TRY
        
        BEGIN CATCH
        -- This will only register faults in the dynamic SQL.
        PRINT 'Inside Catch'
            SET @SetMessageText =   N'Caller: ' + OBJECT_SCHEMA_NAME(@@PROCID) + '.' + OBJECT_NAME(@@PROCID);
            
            EXECUTE Meta.GetErrorData 
            @SetMessageText = @SetMessageText, 
            @InsertErrorRowOnly = @InsertErrorRowOnly;
            
            SET @Message_OUT = 'Failure';
    
            SET @ErrorMessage_OUT = 'Error Number #: '
            +   CONVERT(NVARCHAR(15), ERROR_NUMBER())
            + '.' + @crlf + 'Error Message: '
            +   ERROR_MESSAGE()
            + @crlf +   'Module line #: '
            +   CONVERT(NVARCHAR(15), ERROR_LINE())
            + @crlf +   'Error Severity: '
            +   CONVERT(NVARCHAR(10), ERROR_SEVERITY())
            + @crlf +   'Error State: '
            +   CONVERT(NVARCHAR(10), ERROR_STATE())
    
            PRINT @ErrorMessage_OUT;
            
        END CATCH;
    
    GO
    IF OBJECT_ID(N'Meta.Seeding') IS NOT NULL
    BEGIN
         PRINT 'Stored Procedure Meta.Seeding was recreated.'
    END
    GO

Next: Data Loader. Pay careful attention to the folder and the parameters.


/*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Stack Exchange
-- Data Loading SPROC
-------------------------------------------------
https://dba.stackexchange.com/questions/275487/2019-enterprise-nolock-without-nolock-page-latch-time-out-on-tempdb-and-final/280977#280977

-- SQL Server 2019 Version
-- This Script Name: LoadIntervals.sql
-- Creator: Mr. Getauf Thescaffolt
-- Created: 2018-03-31 09:00:00.000
-- Altered: 2019-07-28 09:00:00.000
-- Alterer: Mr. Getauf Thescaffolt 
-- Outputs: Data to table
-- Purpose: 
-- Package: 
-- Folders: 

Requires Format File, UNC Paths, and TAB Text as specified by SQL Pro
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/

USE [ActorStage]
GO

DECLARE @RC INTEGER,
@SchemaName NVARCHAR(20), 
@TableName NVARCHAR(100),
@UNC_QualifiedErrorFilePath NVARCHAR(200),
@UNC_QualifiedSeedFilePath NVARCHAR(200),
@UNC_QualifiedFormatFilePath NVARCHAR(200),
@PrintOnly BIT,
@WithTruncate BIT,
@InsertErrorRowOnly BIT,
@Message_OUT NVARCHAR(MAX),
@ErrorMessage_OUT NVARCHAR(MAX),
@RowCount_OUT INTEGER;

EXECUTE [Meta].[Seeding] 
@SchemaName = N'Hub',
@TableName = N'Intervals',
@CodePage = 1252,
@StartingRow = 1,
@UNC_QualifiedErrorFilePath = '\\YOURTHING\OutputFiles\Errors.txt',
@UNC_QualifiedSeedFilePath = '\\YOURTHING\InputFiles\Intervals.txt',
@UNC_QualifiedFormatFilePath = '\\YOURTHING\FormatFiles\Intervals.xml',
@PrintOnly = 0,
@WithTruncate = 1,
@InsertErrorRowOnly = 0,
@Message_OUT = @Message_OUT OUTPUT,
@ErrorMessage_OUT = @ErrorMessage_OUT OUTPUT, 
@RowCount_OUT = @RowCount_OUT OUTPUT; 
GO

Next, the Report Code

    /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    -- Vector:  Inbound or Outbound
    -- Classifier: Source Data column classifies individual actors, and
    -- does not classify rows, only Actors
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/
    ;
    WITH AllVectors AS 
    (
    -- We are seeking the maximums and minimums datetimes 
    -- for each classified Actor's First Entrance and Final Bow Exit per row

    SELECT Classifier, 'Vector' = 'Enter', 'Intervals' = DatetimeIN FROM [Hub].[Intervals]
    UNION ALL 
    SELECT Classifier, 'Vector' = 'Exit', 'Intervals' = DatetimeOUT FROM [Hub].[Intervals]
    ),
    BoundaryVectors AS
    (
    SELECT
    Classifier,
    Vector,
    'DatetimeIn' = FIRST_VALUE(Intervals) OVER(PARTITION BY Classifier ORDER BY Vector, Intervals ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
    'DatetimeOut' = LAST_VALUE(Intervals) OVER(PARTITION BY Classifier ORDER BY Vector, Intervals ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
    FROM AllVectors
    )
    SELECT
    Classifier,
    DatetimeIn = MIN(DatetimeIn), 
    DatetimeOut = MAX(DatetimeOut) 
    FROM BoundaryVectors
    GROUP BY Classifier
    ORDER BY Classifier
    ;

Report: first few dozen rows:

Classifier  DatetimeIn               DatetimeOut
----------  -----------------------  -----------------------
1           1950-01-15 19:50:04.000  2018-03-18 13:38:30.000
2           1950-02-20 16:08:47.000  2018-04-28 18:20:42.000
3           1950-01-21 08:01:45.000  2018-01-08 15:46:58.000
4           1950-03-14 07:43:54.000  2018-06-10 04:31:08.000
...         .......................  .......................
997         1950-01-29 23:39:06.000  2018-01-14 00:58:25.000
998         1950-02-21 23:03:46.000  2017-07-29 09:09:21.000
999         1950-01-24 11:24:24.000  2018-05-31 05:32:57.000
1000        1950-03-25 13:39:24.000  2018-06-04 15:12:41.000