Technically, to display every n-th value you use ranking functions like RANK, DENSE_RANK or ROW_NUMBER. Which one exactly depends on the requirements, but what you describe best matches ROW_NUMBER:
with cte as (
select row_number() over (order by Date) as rn, *
from table)
select *
from cte
where rn % 1000 = 0;
But you are saying that you want to do this for performance reasons, to avoid going through millions of rows. Such a query has already done the damage: it read all the rows from disk and had to sort them, so the performance price has already been paid.
A better alternative for selecting a random sample of data is the TABLESAMPLE clause, see Limiting Result Sets by Using TABLESAMPLE:
select *
from table tablesample (100 rows);
TABLESAMPLE will be way more efficient because it actually avoids reading all the data: it only samples some of the pages in the table and returns all rows from the sampled pages.
But consider that if you use TABLESAMPLE, your WHERE clause is applied after the sampling. So it may be that the sample does not contain any rows for the device/sensor you're interested in. This is especially true for sensors with little data.
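For example (SensorData and DeviceId are made-up names here):
select *
from SensorData tablesample (100 rows)
where DeviceId = 42; -- may return nothing even though device 42 has rows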
I am not sure what kind of performance you are looking for, but if CLR or an external app is not an option, a cursor is all that is left. On my aged laptop I get through 1,000,000 rows in about 100 seconds using the following solution. The nice thing about it is that it scales linearly, so I would be looking at a little over 20 minutes to run through the entire thing. With a decent server you will be faster, but not by an order of magnitude, so it would still take several minutes to complete. If this is a one-off process, you can probably afford the slowness. If you need to run this as a report or similar regularly, you might want to store the values in the same table and update them as new rows get added, e.g. in a trigger.
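That incremental variant could look something like this sketch; it assumes single-row inserts and that bucket and running_total columns have been added to the table:
CREATE TRIGGER dbo.trMyTable_Totals ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- assumes single-row inserts; batches would need the cursor logic instead
    DECLARE @id INT, @v NUMERIC(5,3);
    SELECT @id = Id, @v = v FROM inserted;

    DECLARE @prev_total NUMERIC(6,3) = 0, @prev_bucket INT = 1;
    SELECT TOP(1) @prev_total = running_total, @prev_bucket = bucket
    FROM dbo.MyTable
    WHERE Id < @id
    ORDER BY Id DESC;

    -- same rule as the cursor below: start a new bucket when the total would exceed 1
    UPDATE dbo.MyTable
    SET bucket        = CASE WHEN @prev_total + @v > 1 THEN @prev_bucket + 1 ELSE @prev_bucket END,
        running_total = CASE WHEN @prev_total + @v > 1 THEN @v ELSE @prev_total + @v END
    WHERE Id = @id;
END;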
Anyway, here is the code:
-- build a 1,000,000-row test table with random values between 0.00 and 0.99
IF OBJECT_ID('dbo.MyTable') IS NOT NULL DROP TABLE dbo.MyTable;
CREATE TABLE dbo.MyTable(
Id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
v NUMERIC(5,3) DEFAULT ABS(CHECKSUM(NEWID())%100)/100.0
);
-- MERGE against a large cross join so INSERT DEFAULT VALUES runs a million times
MERGE dbo.MyTable T
USING (SELECT TOP(1000000) 1 X
       FROM sys.system_internals_partition_columns A,
            sys.system_internals_partition_columns B,
            sys.system_internals_partition_columns C,
            sys.system_internals_partition_columns D) X
ON (1=0)
WHEN NOT MATCHED THEN
INSERT DEFAULT VALUES;
--SELECT * FROM dbo.MyTable
DECLARE @st DATETIME2 = SYSUTCDATETIME();
DECLARE cur CURSOR FAST_FORWARD FOR
SELECT Id,v FROM dbo.MyTable
ORDER BY Id;
DECLARE @id INT;
DECLARE @v NUMERIC(5,3);
DECLARE @running_total NUMERIC(6,3) = 0;
DECLARE @bucket INT = 1;
CREATE TABLE #t(
id INT PRIMARY KEY CLUSTERED,
v NUMERIC(5,3),
bucket INT,
running_total NUMERIC(6,3)
);
OPEN cur;
WHILE(1=1)
BEGIN
FETCH NEXT FROM cur INTO @id,@v;
IF(@@FETCH_STATUS <> 0) BREAK;
-- start a new bucket whenever the running total would exceed 1
IF(@running_total + @v > 1)
BEGIN
SET @running_total = 0;
SET @bucket += 1;
END;
SET @running_total += @v;
INSERT INTO #t(id,v,bucket,running_total)
VALUES(@id,@v,@bucket, @running_total);
END;
CLOSE cur;
DEALLOCATE cur;
SELECT DATEDIFF(SECOND,@st,SYSUTCDATETIME()); -- elapsed seconds
SELECT * FROM #t;
GO
DROP TABLE #t;
It drops and recreates the table MyTable, fills it with 1,000,000 rows and then goes to work.
The cursor copies each row into a temp table while running the calculations. At the end, the select returns the calculated results. You might be a little faster if you don't copy the data around but do an in-place update instead.
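The in-place variant would replace the INSERT inside the loop with something like this (again assuming bucket and running_total columns on dbo.MyTable):
UPDATE dbo.MyTable
SET bucket = @bucket, running_total = @running_total
WHERE Id = @id;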
If you have the option to upgrade to SQL Server 2012, you can look at the new window-spool-supported moving window aggregates, which should give you significantly better performance.
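For example, the plain running total then collapses into a single statement; the bucket-reset logic would still need extra work on top:
SELECT Id, v,
       SUM(v) OVER (ORDER BY Id
                    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total
FROM dbo.MyTable;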
On a side note: if you have an assembly installed with PERMISSION_SET = SAFE, you can do more bad stuff to a server with standard T-SQL than with that assembly, so I would keep working on removing that barrier. You have a good use case here where CLR really would help you.
Best Answer
Here's a stab at an algorithm. It's not perfect, and depending on how much time you want to spend refining it, there are probably some further small gains to be made.
Let's assume you have a table of tasks to be performed by four queues. You know the amount of work associated with performing each task, and you want all four queues to get an almost equal amount of work to do, so all queues will complete at about the same time.
First off, I'd partition the tasks using a modulous, ordered by their size, from small to large.
The ROW_NUMBER() orders every row by size, then assigns a row number, starting at 1. This row number is assigned a "group" (the grp column) on a round-robin basis: the first row is group 1, the second row group 2, the third group 3, the fourth group 0, and so on. For ease of use, I'm storing the time and grp columns in a table variable called @work.

Now, we can perform a few calculations on this data:
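Something along these lines, reusing @work from above (cte is the name the rest of this answer refers to):
WITH cte AS (
    SELECT *,
           SUM(time) OVER (PARTITION BY grp)
             - 1.0 * SUM(time) OVER () / 4 AS _grpoffset
    FROM @work)
SELECT * FROM cte;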
The column _grpoffset is how much the total time per grp differs from the "ideal" average. If the total time of all the tasks is 1000 and there are four groups, there should ideally be a total of 250 in each group. If a group contains a total of 268, that group's _grpoffset=18.
The idea is to identify the two best rows, one in a "positive" group (with too much work) and one in a "negative" group (with too little work). If we can swap groups on those two rows, we could reduce the absolute _grpoffset of both groups.

Example:
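For illustration, made-up group totals that match the description below could look like this:
grp   total time   _grpoffset
0        157         -24.75
1        222         +40.25
2        142         -39.75
3        206         +24.25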
With a grand total of 727, each group should have a score of about 182 for the distribution to be perfect. The difference between the group's score and 182 is what we're putting in the _grpoffset column.

As you can see now, in the best of worlds, we should move about 40 points' worth of rows from group 1 to group 2 and about 24 points from group 3 to group 0.
Here's the code to identify those candidate rows:
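A reconstruction along those lines, reusing the cte definition from above (the exact tie-breaking may differ):
WITH cte AS (
    SELECT *,
           SUM(time) OVER (PARTITION BY grp)
             - 1.0 * SUM(time) OVER () / 4 AS _grpoffset
    FROM @work)
SELECT TOP(1)
       pos.task AS pos_task, pos.grp AS pos_grp,
       neg.task AS neg_task, neg.grp AS neg_grp
FROM cte AS pos
JOIN cte AS neg
  ON  pos._grpoffset > 0 -- a group with too much work
  AND neg._grpoffset < 0 -- a group with too little work
  -- the swap must shrink both groups' absolute offsets:
  AND ABS(pos._grpoffset - (pos.time - neg.time)) < ABS(pos._grpoffset)
  AND ABS(neg._grpoffset + (pos.time - neg.time)) < ABS(neg._grpoffset)
ORDER BY ABS(pos._grpoffset - (pos.time - neg.time));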
I'm self-joining the common table expression that we created before, cte: on one side, groups with a positive _grpoffset; on the other side, groups with negative ones. To further filter out which rows are supposed to match each other, the swap of the positive and negative sides' rows must improve _grpoffset, i.e. get it closer to 0. The TOP 1 and ORDER BY select the "best" match to swap first.

Now, all we need to do is add an UPDATE and loop it until there's no more optimization to be found.

TL;DR - here's the query
Here's the complete code:
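A complete sketch under the same assumptions (dbo.Tasks(task, time) as the source, four queues); the original's exact query likely differs in its details:
DECLARE @work TABLE (task INT PRIMARY KEY, time INT NOT NULL, grp INT NOT NULL);

INSERT INTO @work (task, time, grp)
SELECT task, time, ROW_NUMBER() OVER (ORDER BY time) % 4
FROM dbo.Tasks;

DECLARE @pos_task INT, @pos_grp INT, @neg_task INT, @neg_grp INT;

WHILE (1 = 1)
BEGIN
    SELECT @pos_task = NULL, @neg_task = NULL;

    WITH cte AS (
        SELECT *,
               SUM(time) OVER (PARTITION BY grp)
                 - 1.0 * SUM(time) OVER () / 4 AS _grpoffset
        FROM @work)
    SELECT TOP(1)
           @pos_task = pos.task, @pos_grp = pos.grp,
           @neg_task = neg.task, @neg_grp = neg.grp
    FROM cte AS pos
    JOIN cte AS neg
      ON  pos._grpoffset > 0
      AND neg._grpoffset < 0
      AND ABS(pos._grpoffset - (pos.time - neg.time)) < ABS(pos._grpoffset)
      AND ABS(neg._grpoffset + (pos.time - neg.time)) < ABS(neg._grpoffset)
    ORDER BY ABS(pos._grpoffset - (pos.time - neg.time));

    -- stop when no swap improves both groups any further
    IF (@pos_task IS NULL) BREAK;

    -- swap the two rows' groups
    UPDATE @work
    SET grp = CASE task WHEN @pos_task THEN @neg_grp ELSE @pos_grp END
    WHERE task IN (@pos_task, @neg_task);
END;

SELECT grp, SUM(time) AS total_time
FROM @work
GROUP BY grp
ORDER BY grp;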