In the absence of any other input, here's what I was able to deduce. Short story: this appears to work fine.
As far as I can see, enabling TDE for a database is a logged operation, but the actual data encryption is not logged (otherwise the transaction log would grow considerably, which it doesn't). I assume there's something like the differential change map, or a flag in the page headers to indicate if a page is encrypted or not, and the storage engine just chews through everything until it's fully encrypted.
When I enable TDE on the primary database, the secondary database ends up encrypted too (I have verified this with a hex editor). While encryption is in progress, the view sys.dm_database_encryption_keys shows percent_complete as 0 for all the secondary replicas, and this value never increases; the primary server updates this column normally. Each replica finishes encrypting at its own pace, at which point encryption_state changes to 3. Everything seems to work normally once it's finished.
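To watch that progress yourself, you can query the DMV directly on each replica (a sketch; the interpretation of encryption_state values follows the documented codes):

```sql
-- Run on each replica. In my tests, percent_complete only advances on the
-- instance actually performing the encryption scan.
SELECT DB_NAME(database_id) AS [Database],
       encryption_state,        -- 2 = encryption in progress, 3 = encrypted
       percent_complete,
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;
```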
Is there any difference between updating the statistics of a table using sp_updatestats without RESAMPLE and updating them using UPDATE STATISTICS without any sampling options (FULLSCAN, SAMPLE n PERCENT, RESAMPLE)?
Adding to what @Gameiswar already mentioned, the other difference I know of with sp_updatestats is that when you run it for all tables in a database, it ONLY updates statistics on tables that have had at least one row modified. BOL says:
For disk-based tables, sp_updatestats updates only the statistics that require updating based on the rowmodctr information in the sys.sysindexes catalog view, thus avoiding unnecessary updates of statistics on unchanged rows.
So when sp_updatestats reports that statistics for all tables have been updated, this is incorrect/misleading: statistics were only updated for tables that had at least one row modified.
With UPDATE STATISTICS, by contrast, you have quite a lot of options to control how the statistics are updated.
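For example, the sampling options mentioned in the question can each be specified explicitly (dbo.SalesOrder here is a hypothetical table, not one from the thread):

```sql
-- Pick one sampling option per statement.
UPDATE STATISTICS dbo.SalesOrder WITH FULLSCAN;          -- scan every row
UPDATE STATISTICS dbo.SalesOrder WITH SAMPLE 25 PERCENT; -- fixed sampling rate
UPDATE STATISTICS dbo.SalesOrder WITH RESAMPLE;          -- reuse the most recent sampling rate
```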
Also, when you rebuild an index (with a full scan, or with the default options), the statistics for that index are updated as part of the rebuild, so there is no need to update them again afterwards.
PS: Whatever the differences may be, I would not use sp_updatestats, because the mechanism it uses to decide what to update is not good. Why should I update statistics on a table of 100K rows just because a single row has changed? It just consumes resources and can cause more issues.
EDIT:
If you want to update statistics selectively and do not want to run sp_updatestats, use the query below to filter out outdated statistics. It uses the sys.dm_db_stats_properties DMF and works on SQL Server 2008 R2 SP2, SQL Server 2012 SP1, and above (copied from Erin Stellato's blog).
NOTE: Realize that different tables may have different thresholds and you will need to tweak the query below for your databases. For some tables, waiting until 15% or 20% of the rows have been modified may be OK. But for others, you may need to update at 10% or even 5%, depending on the actual values and their skew.
SELECT [sch].[name] + '.' + [so].[name] AS [TableName],
       [ss].[name] AS [Statistic],
       [sp].[last_updated] AS [StatsLastUpdated],
       [sp].[rows] AS [RowsInTable],
       [sp].[rows_sampled] AS [RowsSampled],
       [sp].[modification_counter] AS [RowModifications]
FROM [sys].[stats] [ss]
JOIN [sys].[objects] [so] ON [ss].[object_id] = [so].[object_id]
JOIN [sys].[schemas] [sch] ON [so].[schema_id] = [sch].[schema_id]
OUTER APPLY [sys].[dm_db_stats_properties]([so].[object_id], [ss].[stats_id]) [sp]
WHERE [so].[type] = 'U'
  AND [sp].[modification_counter] > 0 -- change accordingly
ORDER BY [sp].[last_updated] DESC;
If you want to avoid all this hassle, you can also use Ola Hallengren's index and statistics maintenance script, which takes care of a lot of these details for you.
Best Answer
The most reliable sequence I can find to cause an automatic statistics update is:
Update statistics sampling zero rows
This results in an empty statistic object.
Update the target column(s) in a single row of the table
This increments the column modification counter. The combination of an empty statistics object and an incremented modification counter enables a special case statistics update (it simulates creating statistics on an empty table, then adding a row).
Run the query with
OPTION (RECOMPILE)
This causes an automatic update of detected stale statistics, even if a matching plan for the query is already present in cache. The resulting statistics update will subsequently cause an optimality-based recompilation for the original cached plan, if it is matched again.
Demo
Using a similar AdventureWorks query as used in jyao's answer, the following script puts everything above together:
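A minimal sketch of the three-step sequence, using a hypothetical table dbo.Example with an integer column col1 and a statistics object stat_col1 (not the actual AdventureWorks objects from the demo), might look like:

```sql
-- Step 1: empty the statistics object by sampling zero rows
UPDATE STATISTICS dbo.Example (stat_col1) WITH SAMPLE 0 ROWS;

-- Step 2: modify the target column in a single row to increment
-- the column modification counter
UPDATE TOP (1) dbo.Example SET col1 = col1;

-- Step 3: run the query with a recompile hint; the empty statistics
-- object plus the nonzero modification counter triggers the
-- special-case automatic statistics update before compilation
SELECT COUNT_BIG(*)
FROM dbo.Example
WHERE col1 = 42
OPTION (RECOMPILE);
```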
Output
The DBCC SHOW_STATISTICS results show the original statistics header, the empty header, and the desired updated header at the end of the process:
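The header can be inspected at each stage with a call like the following (object and statistic names are the hypothetical ones used above):

```sql
-- STAT_HEADER limits the output to the header row
-- (rows, rows sampled, last updated, etc.)
DBCC SHOW_STATISTICS ('dbo.Example', stat_col1) WITH STAT_HEADER;
```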