SQL Server – Multiple SQL queries fired asynchronously progressively slow down

sql server

We are using .NET Core Web API 2. A Vue client makes calls to a method on this API. We have a report dashboard page with 13 charts, which means 13 calls are made to an API endpoint to get the data for the charts.

If we run the 13 queries individually in SQL Server, all of them complete within a second or less. I also created a console app in which I ran the 13 queries in a loop and captured some timings; every query ran in less than 2 seconds.

I then captured some timings in the API method: the start time, end time and duration. All 13 calls were made within a second of each other. The bottleneck seemed to be around the code that ran the SQL query. The query timings ranged from under a second to over 12 seconds:

147
250
250
274
1499
6758
6795
7191
7671
7436
9275
9450
12481

Please note these times are in ms and capture only the time the query takes in the API method. I also checked the timings in SQL Profiler and they were similar.

It has something to do with making multiple API calls asynchronously. If I make the API calls one at a time, each query completes in less than a couple of seconds.

I am running the C# code in such a way that it does not lock the table/database; for example, I am using the SNAPSHOT isolation level. I have also tried the READ UNCOMMITTED isolation level and the problem is still there.
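For reference, the snapshot isolation attempt looks roughly like this (a minimal sketch; `MyDb` and `dbo.SomeChartTable` are placeholder names, not the real schema):

```sql
-- Snapshot isolation must first be enabled at the database level
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then each session opts in before running its chart query
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.SomeChartTable;  -- stand-in for one of the 13 chart queries
COMMIT;
```

Under SNAPSHOT, readers work from row versions and do not take shared locks, so blocking between the 13 concurrent queries should already be ruled out.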

We are running Windows Server 2019 with 2 processors and 10 GB of memory. The SQL Server version is 2017.

I have done a few days of research and tried a few things, but nothing seems to overcome this problem. I need some advice on what could be the cause of this issue and how to resolve it.

Just to add: each query returns a single row of data that is consumed by its chart. I also ran the 13 queries combined in SQL Server Management Studio and it took 8 seconds for the entire batch to complete.

I also noticed that when I access the dashboard, the server CPU usage goes through the roof, hitting 100% for a few seconds.

Update:

Execution plan when I run all 13 directly in SQL Management Studio:

https://www.brentozar.com/pastetheplan/?id=S1TFqgnL8

Execution plan for one of the queries triggered via the API code (i.e. when I access the dashboard); it took 12+ seconds to finish:

https://www.brentozar.com/pastetheplan/?id=HyIjKx3L8

Execution plan for the same query when I run it alone in SQL Server Management Studio; it takes 2 seconds:

https://www.brentozar.com/pastetheplan/?id=ByRCcxnUU

Best Answer

Those plans are pretty gnarly. And utterly inappropriate for such a small SQL Server. To be effective at that scale, you have to be efficient. You can do a lot with a little SQL Server, but you have to do things right.

It appears that you are storing your data in single-element JSON arrays instead of properly typed columns. You shouldn't do that; it will cause you no end of pain. Storing JSON docs in a column is fine when you have a legitimate need to, but here you are misusing the feature rather than using it for its designed purpose.

E.g. don't do this:

-- Anti-pattern: a lookup value stored as a single-element JSON array
drop table if exists foo;
create table foo (id int primary key, sometableOrgLevel1Lookup nvarchar(max));
insert into foo (id, sometableOrgLevel1Lookup) values (1, N'[1]');

-- Every query now has to parse the JSON just to get at the value
select t.*, j1.id
from foo t
OUTER APPLY OPENJSON(t.sometableOrgLevel1Lookup) WITH (id int '$') j1;

Instead, declare columns with proper data types and foreign key constraints, and add an index supporting each foreign key:

drop table if exists foo;

-- assumes the referenced lookup table already exists, e.g.:
-- create table sometableOrgLevel1 (Id int primary key);

create table foo
(
  id int primary key,
  sometableOrgLevel1Lookup int,  -- properly typed lookup column
  index ix_sometableOrgLevel1Lookup (sometableOrgLevel1Lookup),
  constraint fk_sometableOrgLevel1Lookup foreign key (sometableOrgLevel1Lookup) references sometableOrgLevel1(Id)
);
insert into foo (id, sometableOrgLevel1Lookup) values (1, 1);

select * from foo;

At the very least, get rid of all the APPLY operators that are parsing the JSON. SQL Server is guessing how many rows they will return, leading to poor cardinality estimates and probably excessive memory grants. Instead write the JOINs like this:

LEFT JOIN [sometableOrgLevel1] AS [org1] ON ([org1].[Id] = cast(json_value(t.sometableOrgLevel1Lookup, '$[0]') as int))
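Put together with the first `foo` example above (the JSON-column version), the rewritten query would look like this; `sometableOrgLevel1` here is a stand-in lookup table for illustration, not your real schema:

```sql
-- Stand-in lookup table (the real one comes from your schema)
drop table if exists sometableOrgLevel1;
create table sometableOrgLevel1 (Id int primary key, Name nvarchar(50));
insert into sometableOrgLevel1 (Id, Name) values (1, N'Level one');

-- json_value extracts the single element directly, so there is no
-- table-valued APPLY operator for the optimizer to guess row counts for
select t.id, org1.Name
from foo t
left join sometableOrgLevel1 as org1
  on org1.Id = cast(json_value(t.sometableOrgLevel1Lookup, '$[0]') as int);
```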

I'm not sure whether that's the only problem, but the first thing you should do is fix the table definitions and data storage formats, and then test the queries against the modified tables.
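If the existing data needs migrating rather than recreating, a conversion along these lines can be done in place (a sketch; the column and index names are hypothetical, mirroring the examples above):

```sql
-- 1. Add a properly typed column alongside the JSON one
alter table foo add sometableOrgLevel1Id int;
GO

-- 2. Backfill it by parsing the single-element JSON array one last time
update foo
set sometableOrgLevel1Id = cast(json_value(sometableOrgLevel1Lookup, '$[0]') as int);

-- 3. Once the queries have been switched over, drop the JSON column
--    and add the supporting index (and foreign key, as shown earlier)
alter table foo drop column sometableOrgLevel1Lookup;
create index ix_foo_sometableOrgLevel1Id on foo (sometableOrgLevel1Id);
```

The `GO` after step 1 matters: the `UPDATE` references the new column, so it must compile in a separate batch.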