SQL Server 2019 running 100% on only 1-2 cores with MAXDOP 8 to 14

Tags: concurrency, cpu, sql-server, sql-server-2019

Here is the background on this issue.
I am inserting slightly over 1 million records into a table from a C++ program using ADO. One row at a time is read in from a flat file and converted into a format that ADO will accept to add/update into SQL Server.
Before you ask: this is a legacy application, and it will not be converted to .NET, per the client's request.

I have an i7-5960X CPU overclocked to 4.4 GHz, 32 GB of memory, and no spinning disks, only Samsung SSDs.

I have SQL Server 2019 (Developer Edition) with MAXDOP set to 8 on a system with 8 cores (8 logical processors).

When running the add/update process, total CPU use is 10-20%, with 1-2 cores running at 100% (memory use is 10 GB out of 32 GB), which slows the add/update process drastically. SQL Server shows 132 threads and accounts for 75-80% of the total CPU use. Even increasing MAXDOP to 14 doesn't stop SQL Server from pegging individual cores at 100%.

If I comment out the function that writes to SQL Server, the app converts the flat-file data (37 fields, 387 characters total) into the format needed for the add/update at a rate of at least 10,000 records a second. When I enable the write to SQL Server, it starts out at about 600 records a second and slowly degrades to about 220 records per second after roughly 100,000 records.

So the question is: can I restrict SQL Server to a maximum use per core (not total CPU use), or force it to spread the threads/processing across all MAXDOP cores? When it maxes out one core, it does so for about 5 minutes, then switches to another core and maxes that one out at 100% while the previous core drops to almost nothing.

Thanks for reading this far. If you can offer any help, it would be really appreciated.

Best Answer

One row at a time is read in from a flat file and converted into a format that ADO will accept to add/update into SQL Server.

You are going to have a very hard time leveraging multiple SQL Server cores with this approach. Each single-row insert is executed by one core, sequentially. (MAXDOP is a red herring here: it caps parallelism within a single query, and a trivial single-row insert gets a serial plan, so no MAXDOP setting will spread this workload.) Each insert is also very likely to keep using the same core, because every insert arrives on the same connection. From Microsoft SQL Server 2012 Internals by Kalen Delaney, et al.:

Each SPID has a preferred scheduler, which is the one that most recently processed a request from the SPID.
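
You can watch this happening while the load runs. The query below is a generic sketch against the request DMVs (the session_id > 50 filter is just a rough heuristic for skipping system sessions, not anything specific to your setup); run it repeatedly and you should see your loader's session stuck on one scheduler_id for long stretches, then hopping to another:

    -- Show which scheduler (CPU) each user session's current request is on.
    SELECT r.session_id,
           r.scheduler_id,        -- the "preferred" scheduler in action
           r.status,
           r.cpu_time,            -- ms of CPU consumed by this request
           r.total_elapsed_time
    FROM sys.dm_exec_requests AS r
    WHERE r.session_id > 50;      -- rough filter for user sessions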

If it's not too much trouble, you might get a quick win in throughput by splitting the file into multiple files and then having multiple instances of your C++ application load them concurrently. Each instance gets its own connection, and therefore its own session with its own preferred scheduler, so the work can spread across cores. Of course, you might run into other bottlenecks, but it's worth a shot.

A side note: Dan Guzman mentioned:

The slowly degrading performance is symptomatic that the add/update query is performing a scan. Index/query tuning may avoid that and provide more linear and better performance overall.

If your table has additional indexes or foreign key/unique constraints, SQL Server has to check more and more data as the table fills up. You could tune the indexes so those checks become seeks, or first load an unindexed heap staging table from your app and then move the data to the real table in a set-based T-SQL step, as sketched below.
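
For illustration, here's a minimal sketch of that staging pattern. All table and column names are hypothetical placeholders, since the real schema isn't in the post:

    -- Heap staging table: no indexes or constraints, so inserts are cheap.
    CREATE TABLE dbo.StagingRows
    (
        KeyValue varchar(50)  NOT NULL,
        Payload  varchar(400) NOT NULL
    );

    -- (the C++ app, or a bulk load, fills dbo.StagingRows here)

    -- One set-based pass does the "update" half of the add/update...
    UPDATE t
    SET    t.Payload = s.Payload
    FROM   dbo.TargetTable AS t
    JOIN   dbo.StagingRows AS s ON s.KeyValue = t.KeyValue;

    -- ...and a second pass does the "add" half.
    INSERT INTO dbo.TargetTable (KeyValue, Payload)
    SELECT s.KeyValue, s.Payload
    FROM   dbo.StagingRows AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t
                       WHERE t.KeyValue = s.KeyValue);

Two set-based statements let SQL Server check constraints and maintain indexes far more efficiently than a million single-row round trips.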


Overall, to leverage more cores, you need to improve the concurrency of the loading side, either by using bulk-loading APIs or by inserting from multiple clients concurrently. For more stable throughput, avoid costly table scans, possibly by inserting into a staging table rather than directly into your app table.
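
As a concrete example of bulk loading on the T-SQL side, a single BULK INSERT can replace a million individual round trips. The file path, terminators, and target table below are assumptions for the sketch, not details from your post; adjust them to your file's actual layout:

    -- Load the whole flat file in one statement. TABLOCK allows
    -- minimally logged inserts into a heap under the simple or
    -- bulk-logged recovery model.
    BULK INSERT dbo.StagingRows
    FROM 'C:\load\input.dat'          -- hypothetical path
    WITH (
        FIELDTERMINATOR = ',',        -- adjust to the file's field layout
        ROWTERMINATOR   = '\n',
        TABLOCK
    );

If the load has to stay inside the C++ application, the rough equivalents are the bcp.exe utility or the ODBC driver's bulk copy API (bcp_init/bcp_bind/bcp_sendrow), which stream rows in batches instead of executing one statement per row.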