You need to qualify SlotId in your sub-query with SlotTable, or it will use the SlotId from the sub-query itself:
UPDATE [SlotTable]
SET [SlotVal1] = (SELECT s2.[SlotVal1]
                  FROM [SlotTable] AS s2
                  WHERE s2.[SlotId] = ([SlotTable].[SlotId] - @rowsToShift)),
    [SlotVal2] = (SELECT s2.[SlotVal2]
                  FROM [SlotTable] AS s2
                  WHERE s2.[SlotId] = ([SlotTable].[SlotId] - @rowsToShift))
WHERE [SlotId] >= (@firtRowToMove + @rowsToShift);
BTW, your CTE version does not work either.

Here is another way to do what this update statement does, a bit more efficiently (at least in my tests):
UPDATE s1
SET SlotVal1 = s2.SlotVal1,
    SlotVal2 = s2.SlotVal2
FROM SlotTable AS s1
INNER JOIN SlotTable AS s2
    ON s2.SlotId = s1.SlotId - @rowsToShift
WHERE s1.SlotId >= @firtRowToMove + @rowsToShift;
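(The join version likely performs better because the matching source row is located once per updated row, whereas the two correlated sub-queries can each trigger their own lookup.)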
You may be able to reduce index rebuild/reorg times by moving to NEWSEQUENTIALID().
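A minimal sketch of that switch, assuming the table and column from the index example below ([dbo].[Aleksander], a uniqueidentifier column [Key]); the constraint name is hypothetical, and any existing NEWID() default would have to be dropped first:

-- NEWSEQUENTIALID() is only valid as a DEFAULT constraint
-- on a uniqueidentifier column.
ALTER TABLE [dbo].[Aleksander]
    ADD CONSTRAINT [DF_Aleksander_Key] DEFAULT NEWSEQUENTIALID() FOR [Key];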
You are currently inserting around 4,500 rows into the table per day. The table has around 6.3 million rows at present, so that is around 0.07% of the table per day. Assuming every insert on the table results in an index-page split, and each split writes two pages, this will result in around 9,000 pages per day written to the storage system. This creates unnecessary I/O pressure both for inserts into the index and for rebuilds/reorgs. Switching the Key field to sequential ID generation will vastly reduce this I/O pattern.
If you switch to NEWSEQUENTIALID(), the code that performs INSERTs on the table will need to be sensitive to the possibility that it will attempt to insert a duplicate Key. This is a remote possibility, but it is still a possibility, and it is the same for both NEWID() and NEWSEQUENTIALID(), since nothing generated by a computer is truly random. Queries against the index could gain some benefit from making the index unique, as in:
CREATE UNIQUE NONCLUSTERED INDEX [idx_Robert] ON [dbo].[Aleksander]
(
    [Key] ASC
) WITH (
      PAD_INDEX = OFF
    , STATISTICS_NORECOMPUTE = OFF
    , SORT_IN_TEMPDB = OFF
    , DROP_EXISTING = OFF
    , ONLINE = OFF
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON
    , FILLFACTOR = 80
) ON [PRIMARY];
This change would make index scans less likely. Paul White has an excellent article about this at https://sqlkiwi.blogspot.com/2011/02/seeking-without-indexes.html
Since you haven't given any indication of the type of queries you run against this table, I will assume nothing about your situation. If your data is truly stored on a single spinning disk, then keeping the index non-fragmented should reduce the time queries need to return information.
Having said that, you could likely get a whole lot more for your money (time is money) by improving the I/O subsystem so index rebuilds/reorgs are unnecessary. See http://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/.
If you are interested in measuring the effects of the index settings you are using for the Key column, you could monitor page free percentage (among other things) for pages used by the index via the system DMV sys.dm_db_index_physical_stats. For example:
SELECT o.name, ps.avg_page_space_used_in_percent, ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('KeyTable', 'U'), NULL, NULL, 'DETAILED') ps
INNER JOIN sys.objects o ON ps.object_id = o.object_id;
(You may want to adjust the parameters passed to only show the index on the Key column.)
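For instance, a sketch of narrowing the output to a single index by joining sys.indexes (the index name [idx_Robert] is taken from the example above; substitute your own table and index names):

SELECT i.name, ps.avg_page_space_used_in_percent, ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('KeyTable', 'U'), NULL, NULL, 'DETAILED') ps
INNER JOIN sys.indexes i
    ON i.object_id = ps.object_id
    AND i.index_id = ps.index_id
WHERE i.name = 'idx_Robert';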
Once the avg_page_space_used_in_percent value crosses some boundary such as 95% (or more), you would then rebuild the index, which both defragments it and, with an appropriate FILLFACTOR setting, leaves some space free on each page for new rows with randomly generated NEWID() values.
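A minimal sketch of that rebuild (index and table names are from the example above; the FILLFACTOR matches the 80 used in the index definition):

-- Rebuild defragments the index and re-establishes free space per page.
ALTER INDEX [idx_Robert] ON [dbo].[Aleksander]
    REBUILD WITH (FILLFACTOR = 80);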
If you did decide to use NEWSEQUENTIALID(), your index could use a FILLFACTOR of 100%, since index pages would no longer be split on a frequent basis. This has the benefit of reducing the number of pages required to hold the index by 20% (against your current settings), thereby reducing the I/O required to rebuild the index when that does become necessary.
Since you are only inserting new rows at a rate of about 0.07% of the total table per day, it may take a very long time before the system automatically triggers a histogram update on the statistics for the table involved. You could use the STATS_DATE() function to ensure the statistics are never more than x days old. Up-to-date statistics are vital for the query engine to produce optimal query plans.
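For example, a sketch of flagging stale statistics (the table name and the 7-day threshold are assumptions; substitute your own):

-- List statistics on the table not updated in the last 7 days.
SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('KeyTable', 'U')
  AND STATS_DATE(s.object_id, s.stats_id) < DATEADD(DAY, -7, GETDATE());
-- Any statistics returned could then be refreshed, e.g.:
-- UPDATE STATISTICS KeyTable WITH FULLSCAN;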
I'd try a table structure along the lines of the sketch below. No matter how many groups there are or how many products fit a package, you can always add products to, or remove them from, a specific package.
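A minimal sketch of that kind of layout, where every table and column name is a hypothetical placeholder:

-- Packages and products in a many-to-many relationship, so products
-- can be added to or removed from a package freely.
CREATE TABLE dbo.Package (
    PackageId INT IDENTITY(1,1) PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);
CREATE TABLE dbo.Product (
    ProductId INT IDENTITY(1,1) PRIMARY KEY,
    GroupId INT NOT NULL, -- product group; FK target assumed elsewhere
    Name NVARCHAR(100) NOT NULL
);
CREATE TABLE dbo.PackageProduct (
    PackageId INT NOT NULL REFERENCES dbo.Package (PackageId),
    ProductId INT NOT NULL REFERENCES dbo.Product (ProductId),
    PRIMARY KEY (PackageId, ProductId)
);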