Very interesting approach. My upvote for the creativity.
Since you reclaimed the space, I assume the original indexes are no longer in place? The main downside of filtered indexes then applies: the optimizer can only match a filtered index when it can prove the query's predicate is covered by the index filter, and parameterized queries or variables usually prevent that proof.
In practical terms, this means that you have to be extremely careful with filtered indexes as they will often result in horrible query plans. I would not go so far as to call them useless, but I view them as an addition to traditional indexes, not as a replacement (as you are trying to do).
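As a concrete illustration of the matching problem (the table, column, and index names here are invented for the sketch):

```sql
-- Hypothetical example: a filtered index covering only 'Active' rows
CREATE INDEX IX_Orders_Active
    ON dbo.Orders (OrderDate)
    WHERE Status = 'Active';

-- Matches the filtered index: the literal predicate covers the filter
SELECT OrderDate
FROM dbo.Orders
WHERE Status = 'Active';

-- Usually does NOT match: the cached plan must be correct for any value
-- of @Status, so the optimizer cannot rely on the filtered index and
-- typically falls back to a scan (or a traditional index, if one exists)
DECLARE @Status varchar(10) = 'Active';

SELECT OrderDate
FROM dbo.Orders
WHERE Status = @Status;
```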
Below this section, the plan goes on to update every table in the view in exactly the same manner. I cannot see why, as I have specified Table_Year (the column the view is partitioned on) in the query text. Shouldn't SQL Server only update the necessary table?
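For reference, a minimal sketch of the kind of partitioned view the question describes. Only Table_Year and calStartDate come from the question; every other name, constraint, and value is invented:

```sql
-- Two member tables, partitioned on Table_Year via CHECK constraints
CREATE TABLE dbo.Data2023
(
    Table_Year   int  NOT NULL CHECK (Table_Year = 2023),
    calStartDate date NOT NULL
        CHECK (calStartDate >= '2023-01-01' AND calStartDate < '2024-01-01'),
    CONSTRAINT PK_Data2023 PRIMARY KEY (Table_Year, calStartDate)
);

CREATE TABLE dbo.Data2024
(
    Table_Year   int  NOT NULL CHECK (Table_Year = 2024),
    calStartDate date NOT NULL
        CHECK (calStartDate >= '2024-01-01' AND calStartDate < '2025-01-01'),
    CONSTRAINT PK_Data2024 PRIMARY KEY (Table_Year, calStartDate)
);
GO
CREATE VIEW dbo.AllYears
AS
SELECT Table_Year, calStartDate FROM dbo.Data2023
UNION ALL
SELECT Table_Year, calStartDate FROM dbo.Data2024;
GO
-- The UPDATE names Table_Year explicitly, yet the cached plan still
-- contains update operators for every member table of the view
DECLARE @Offset int = 7;

UPDATE dbo.AllYears
SET calStartDate = DATEADD(DAY, @Offset, calStartDate)
WHERE Table_Year = 2024;
```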
The view meets all the partitioning requirements for both Table_Year and calStartDate. The latter column is modified by the UPDATE statement, so the query optimizer has to produce a plan that is capable of moving rows between partitions.
In fact, rows could not move between partitions in this case, because there is a 1:1 relationship between Table_Year and the year-wise values of calStartDate, but the steps involved in that reasoning are far too opaque for the optimizer.
The new value for calStartDate is based on a complex expression that references variables. The query plan will be cached and could be reused when the variables have different values, which is another factor forcing the plan to be very general.
All these considerations lead to a plan that does not feature static partition elimination. It does, however, feature dynamic partition elimination:
On the reading side, the Filter operators immediately below the Concatenation are all startup filters. A startup filter evaluates its predicate before the subtree beneath it is executed; if the predicate evaluates to false, that subtree is not executed at all.
The overall effect is that only the tables under the view that could hold qualifying rows (given the runtime variable values) are accessed. Notice that the execution plan shows rows being read from only one of the base tables, and the Actual Executions property for all the other base tables is zero; those operators were never executed at runtime.
In the plan fragment below, the startup Filters ensure that only the green operators execute; the red ones never start at all:
On the writing side, the normal (not 'startup') Filter just to the right of each Clustered Index Update operator ensures that only changes for the current table are passed on. In the example plan, only one Clustered Index Update (and its associated nonclustered index maintenance operators) receives any rows:
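If dynamic elimination is not good enough, one common workaround (a suggestion of mine, not part of the original answer) is OPTION (RECOMPILE), which allows the optimizer to treat the current variable values as literals when compiling the plan. Whether this produces full static elimination for a given data-modification statement should be verified against the actual plan:

```sql
-- With OPTION (RECOMPILE), the plan is compiled for this execution only,
-- so the optimizer may embed @Year as the literal 2024 and remove the
-- other member tables from the plan entirely.
-- Trade-off: the statement is recompiled on every execution.
DECLARE @Year int = 2024;

UPDATE dbo.AllYears            -- hypothetical partitioned view name
SET calStartDate = DATEADD(DAY, 7, calStartDate)
WHERE Table_Year = @Year
OPTION (RECOMPILE);
```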
Best Answer
For consistency. Sometimes the access path uses only the data from the index; sometimes it starts from the index and then jumps to the table. In both cases the information in these two structures must be compatible.
An index is just a separate, ordered set of data from a table. When you remove a row from the table, the index entry pointing to that row must also be removed. The same applies when you update or insert. That is the burden of indexes: faster SELECT, slower DML (INSERT, UPDATE, DELETE).
Your result should be the same whether you have indexes or not. And the beauty of Database Management Systems is that they do that automatically for you. :)
You cannot prevent this update. The only time this will not happen is when you do not change the data referenced on the index. If you have a table with four columns (a, b, c, d), and two of them are indexed (a, b), updating only the other two (c, d) will not trigger this extra update.
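The four-column scenario above can be sketched as follows (the table and index names are invented):

```sql
-- Hypothetical table: columns a and b are indexed, c and d are not
CREATE TABLE dbo.T
(
    a int NOT NULL,
    b int NOT NULL,
    c int NULL,
    d int NULL
);

CREATE INDEX IX_T_ab ON dbo.T (a, b);

-- This update must also maintain IX_T_ab:
-- the index entry for the old (a, b) value has to move
UPDATE dbo.T SET a = a + 1 WHERE b = 42;

-- This update does not touch IX_T_ab at all:
-- only the base table rows are modified, so no extra index
-- maintenance is triggered
UPDATE dbo.T SET c = 0, d = 0 WHERE b = 42;
```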