I would suggest trying an index like this:
```sql
CREATE NONCLUSTERED INDEX NCI_ProcessState_ProcessStateDateLastModified_SleepUntil
ON [TABLE] (ProcessState, ProcessStateDateLastModified, SleepUntil);
```
This way you can filter precisely on the state, while the other columns will produce larger result sets due to the <= filter.
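For context, here is a sketch of the kind of query such an index serves well. The table name and literal values are assumptions, not taken from your schema:

```sql
-- Hypothetical polling query: the equality filter on ProcessState is the
-- most selective predicate, so it leads the index key; the <= range
-- predicates on the two date columns follow it.
SELECT TOP (100) *
FROM dbo.ProcessQueue                           -- hypothetical table name
WHERE ProcessState = 1                          -- sharp (equality) filter
  AND ProcessStateDateLastModified <= GETUTCDATE()
  AND SleepUntil <= GETUTCDATE();
```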
You can check the index usage easily with this query:
```sql
SELECT obj.*, usage.*
FROM sys.dm_db_index_usage_stats AS usage
INNER JOIN sys.indexes AS obj
    ON usage.object_id = obj.object_id
   AND usage.index_id = obj.index_id
WHERE usage.database_id = DB_ID();
```
Just filter it on your index and take a look at the seek and update columns.
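A filtered variant, assuming the index name suggested above:

```sql
-- Narrow the usage stats down to the new index. user_seeks climbing while
-- user_updates stays proportionate suggests the index is earning its keep.
SELECT obj.name, usage.user_seeks, usage.user_scans, usage.user_updates
FROM sys.dm_db_index_usage_stats AS usage
INNER JOIN sys.indexes AS obj
    ON usage.object_id = obj.object_id
   AND usage.index_id = obj.index_id
WHERE usage.database_id = DB_ID()
  AND obj.name = N'NCI_ProcessState_ProcessStateDateLastModified_SleepUntil';
```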
First, we are not comparing like data types.
Bit is defined as:
An integer data type that can take a value of 1, 0, or NULL.
(from the bit documentation)
Whereas BIGINT is a larger integer type that consumes 8 bytes per value.
That means SQL Server can keep statistics on each of the BIT columns, but only one set of statistics for the packed BIGINT.
As you rightfully noted, BIT columns are optimized:
The SQL Server Database Engine optimizes storage of bit columns. If there are 8 or less bit columns in a table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, the columns are stored as 2 bytes, and so on.
(from the int, bigint, smallint, and tinyint documentation)
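To illustrate that storage math, here is a sketch (the table and column names are made up): eight BIT columns share a single byte per row.

```sql
-- Eight BIT columns share one byte of storage per row; a ninth flag would
-- grow the group to two bytes. A BIGINT always costs 8 bytes per row.
CREATE TABLE dbo.FeatureFlags   -- hypothetical table and column names
(
    Id         INT IDENTITY(1,1) PRIMARY KEY,
    IsActive   BIT NOT NULL,
    IsLocked   BIT NOT NULL,
    IsArchived BIT NOT NULL,
    IsPublic   BIT NOT NULL,
    IsDeleted  BIT NOT NULL,
    IsFlagged  BIT NOT NULL,
    IsVerified BIT NOT NULL,
    IsPinned   BIT NOT NULL     -- 8 flags: still only 1 byte for all of them
);
```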
there must be a reason you are even considering taking at least 8 to 10 binary values and packing them into a single number.
After all, why not just use INT and save half the space? Last I checked, 2,147,483,647 is 10 digits in just 4 bytes, and BIGINT has about 19, which technically is less space than BIT if you split those values out into columns.
But this is losing sight of what your data is. How does BIT answer questions about what it represents? 10010 is just a number greater than ten thousand, and yet in binary it actually represents something. If in "saving" space you are forcing a transformation before even using the data, will it still be efficient?
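As a sketch of that transformation cost (the table, column, and flag names here are assumptions): reading one flag out of a packed BIGINT requires bitwise arithmetic, while a BIT column answers the question directly.

```sql
-- Packed: "is the 5th bit set?" requires masking, and the predicate
-- is not sargable, so an index on PackedFlags cannot be seeked.
SELECT *
FROM dbo.Orders                      -- hypothetical table
WHERE PackedFlags & 16 = 16;         -- 5th bit (value 2^4) of a BIGINT

-- Separate BIT column: the intent is readable and the column is indexable.
SELECT *
FROM dbo.Orders
WHERE IsExpedited = 1;
```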
But please, do not conflate BIT with a numeric data type like TINYINT or BIGINT. They serve two different purposes.
Best Answer
To make this query run fast you will need the data ordered by timestamp.
You have a few options:
If you use a clustered columnstore index, you need to first order the data in the table by adding a clustered rowstore index on Timestamp, and then recreate the columnstore index using MAXDOP = 1 and WITH (DROP_EXISTING = ON).
Full syntax: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/performance-tuning-ordered-cci
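The two steps above might look like this sketch; the table and index names are assumptions:

```sql
-- 1. Order the data: build a clustered rowstore index on Timestamp.
CREATE CLUSTERED INDEX IX_MyTable_Timestamp
    ON dbo.MyTable ([Timestamp]);

-- 2. Convert it back to a columnstore. DROP_EXISTING reuses the same index
--    name; MAXDOP = 1 keeps a single thread so the sort order is preserved.
CREATE CLUSTERED COLUMNSTORE INDEX IX_MyTable_Timestamp
    ON dbo.MyTable
    WITH (DROP_EXISTING = ON, MAXDOP = 1);
```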
You could get rid of the columnstore index and just use a clustered index on Timestamp.
You could use a nonclustered index on Timestamp and include all the other columns in the table (might be OK if you only have 5 columns, depending on the data types).
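A sketch of that covering index (table and column names are placeholders for your own):

```sql
-- Timestamp orders the index; the remaining columns ride along at the leaf
-- level so the query is answered without touching the base table.
CREATE NONCLUSTERED INDEX IX_MyTable_Timestamp_Covering
    ON dbo.MyTable ([Timestamp])
    INCLUDE (Col1, Col2, Col3, Col4);   -- the table's other columns
```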
You could partition the table by Timestamp, but that is another, longer conversation.
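For completeness, the bare bones of that option look like this sketch; the boundary dates, names, and monthly granularity are all assumptions:

```sql
-- Monthly partitions; RANGE RIGHT means each boundary date starts a partition.
CREATE PARTITION FUNCTION pf_Timestamp (datetime2)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2023-02-01', '2023-03-01');

-- Map every partition to the same filegroup for simplicity.
CREATE PARTITION SCHEME ps_Timestamp
    AS PARTITION pf_Timestamp ALL TO ([PRIMARY]);

-- Place the table on the scheme via its clustered index.
CREATE CLUSTERED INDEX CI_MyTable_Timestamp
    ON dbo.MyTable ([Timestamp])
    ON ps_Timestamp ([Timestamp]);
```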