There is always something else you should know, and almost as often, something you should consciously stop doing. That is especially true in the context of data warehousing, which is a relatively fledgling sector leveraging relatively new technologies.
In regard to what I've seen in the real world, walking into a company for the first time and finding a design like the one you describe would be genuinely tear-inducing: tears of joy and relief. From the outset, you are well on your way to a well-thought-out ( well-engineered ) ETL / data warehousing system. As with any software product, your mileage may vary as the solution grows and is consumed by the business, but fundamentally, you are on The Right Track™ ( and yes, you know what a natural key is ).
I've found there to be a number of challenges with these types of solutions, which I will touch upon to reinforce some of your decisions and perhaps lend some insight into the road ahead of you. The first is the misuse of control columns: I've lost count of the times I've found myself in a predicament because a developer ( even a fellow database administrator / data professional ) misunderstood the context of a control column, for example by running a process against the DateInserted column, a mere timestamp of insertion, instead of the DateReceived or similarly named column, which is intended to relate a row to a particular date of occurrence. So while I agree completely with the cautions @Aaron Bertrand raises, I feel that the prefixes on your control columns could actually be leveraged as a sort of flag to help prevent their misuse. Obvious should be obvious, of course, but much like writing code in general, explicit is preferable. That said, I would almost certainly leave such prefixes out of the indexes and the like ( probably even keys; PK types can and should stay, in my opinion, but unless there's a real threat of DWD_SubCategories and DWF_SubCategories existing in the same schema, the prefixes really are just fluff ). I think the concern about the DWD and DWF prefixes is valid, but they'll be living in the [NDS] catalog and would serve to indicate intent, making it completely fine to use the nomenclature in that manner.
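To make that concrete, here's a minimal sketch of the kind of explicit control-column flagging I have in mind; the ETL_ prefix, the column list and the constraint names are all my own hypothetical choices, not your actual design:

```sql
-- Hypothetical dimension table in the [NDS] catalog. The DWD_ prefix marks a
-- dimension table; the ETL_ prefix marks control columns that exist purely
-- for pipeline bookkeeping and must never drive business logic.
CREATE TABLE dbo.DWD_SubCategories
(
    SubCategoryKey    int IDENTITY(1, 1) NOT NULL,
    SubCategoryCode   varchar(20)   NOT NULL,  -- natural key from the source system
    SubCategoryName   nvarchar(100) NOT NULL,

    -- Control columns: when we inserted the row vs. when the row's data occurred.
    ETL_DateInserted  datetime2(3)  NOT NULL
        CONSTRAINT DF_DWD_SubCategories_ETL_DateInserted DEFAULT ( SYSUTCDATETIME() ),
    ETL_DateReceived  datetime2(3)  NOT NULL,

    CONSTRAINT PK_DWD_SubCategories PRIMARY KEY CLUSTERED ( SubCategoryKey )
);
```

With a prefix in place, a query like "process everything received yesterday" that actually filters on ETL_DateInserted stands out as suspicious in code review, which is exactly the flag I mean.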
The second ( and perhaps most infuriating ) challenge is cross-training your coworkers. All of the software engineering, usage flags and design-practice rules are completely for naught if your striving-for-paycheque-over-excellence colleagues get involved and do less than their very best ( or, to be fair, are simply having a bad day ). Do keep in mind that large projects generally have many fingers in the pot, so it is imperative that those fingers behave well.
The last thing I'll touch on here is to always keep in mind the actual value of any ETL system to the business. Of the Extract, Transform and Load paradigm, the first and final letters carry no business value on their own, so you will want to keep the development and maintenance of both the Extract and Load processes as minimal as possible. The "real" work is done in the Transform phase, so automate the E and L steps as much as you can and focus on making ( and keeping ) your solution valuable to the business unit by actively working on the transforms.
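As a rough sketch of what I mean by minimising E and L ( the metadata-driven approach here, and every name in it, is my own hypothetical illustration rather than anything from your design ):

```sql
-- Hypothetical control table: each extract/load feed is a row of metadata,
-- so adding a new source table becomes a data change instead of new code.
CREATE TABLE dbo.ETL_Feeds
(
    FeedId          int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    SourceObject    sysname       NOT NULL,  -- e.g. N'Sales.SubCategories'
    TargetObject    sysname       NOT NULL,  -- e.g. N'dbo.DWD_SubCategories'
    WatermarkColumn sysname       NOT NULL,  -- column used for incremental extracts
    LastWatermark   datetime2(3)  NOT NULL DEFAULT ( '1900-01-01' ),
    IsEnabled       bit           NOT NULL DEFAULT ( 1 )
);

-- A single generic loader can then iterate the enabled rows, build its
-- extract statements dynamically from this metadata and advance each
-- watermark, leaving your hand-written effort free for the Transform phase.
```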
All of that said, I've only had the opportunity to work on a handful of different warehousing solutions, so perhaps a more knowledgeable user could step in and remove my foot from my mouth if I need correcting. As I said initially, this is one of those areas where one can always learn or unlearn something, and I am absolutely no exception.
Oh, one more thing ( and probably the most important ): unit test! Once your E and L are working as intended and you've had the opportunity to put a few domains through your T solution, get somebody to vet the results. If they're good, save the result set somewhere, so that when you make changes ( and you will, without a doubt ) you can ensure you haven't broken something somewhere else. Again, automate this process as much as you possibly can ( it's another zero-value process to the business, until they have to go without it, at least ;) ). I generally set up a separate schema or catalog for this purpose; a sketch follows below.
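As a minimal sketch of that harness ( the Audit schema and every object name here are hypothetical ):

```sql
-- One-time: once a human has vetted the transform's output, capture it.
SELECT *
INTO Audit.SubCategories_Baseline
FROM dbo.DWF_SubCategories;

-- After every change to the transform, both queries should return zero rows.
-- Rows the new output has that the vetted baseline does not:
SELECT * FROM dbo.DWF_SubCategories
EXCEPT
SELECT * FROM Audit.SubCategories_Baseline;

-- Rows the vetted baseline has that the new output does not:
SELECT * FROM Audit.SubCategories_Baseline
EXCEPT
SELECT * FROM dbo.DWF_SubCategories;
```

Wrap those comparisons in a scheduled job that shouts when either row count is non-zero, and you have your automation.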
Hopefully some of what I've said will be useful to you!
As an update, @Aaron Bertrand's schema separation seems like it would be quite a good way to avoid unnecessary prefixing as well, so certainly consider that ( I know I will haha ).
To change these settings, you don't alter the job directly. In SSMS, right-click the primary database in the log shipping setup and choose Tasks, then Ship Transaction Logs.
From that page, click the backup settings button to open the backup configuration screen.
That screen allows you to change the UNC path to an IP-based address.
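If you'd rather script the change than click through the GUI, the msdb log shipping procedures should ( as far as I'm aware ) achieve the same thing; treat this as a sketch and verify the parameters against the documentation for your version first:

```sql
-- Run on the primary server. Points the backup share at an IP-based UNC path;
-- the database name and path below are placeholders.
EXEC msdb.dbo.sp_change_log_shipping_primary_database
    @database     = N'YourDatabase',
    @backup_share = N'\\192.168.1.50\LogShipping';
```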
Best Answer
For starters, here's the Microsoft Docs page.
As an aside, I suggest you do some reading into the mechanics of an availability group, purely so you can understand my answer. (Microsoft Docs Page)
As for the answer to your question: as is normally the case with questions about an RDBMS, it depends.
In this case, it depends on what you mean by "...for data to appear".
Let's have a look at the columns in your question:
It is also worth noting that the system DMV you are referring to ( sys.dm_hadr_database_replica_states ) operates at the database level.
last_sent_time
This time indicates the last time that the PRIMARY sent a log block to the available secondaries. This is the start of the data synchronisation process.
last_received_time
This indicates the last time that the secondary received a log block.
last_hardened_time
This indicates the last time that the secondary hardened the received log block, i.e. wrote it to its transaction log on disk.
last_redone_time
This is the time that the last LSN was redone on the target database.
last_commit_time
This is the time the last commit record was redone and reported back to the primary.
Summary
Of the above, there are various points at which the data enters the secondary system:
The data first enters the server into memory at last_received_time
The data first enters the server on disk at last_hardened_time
The data first enters the database data files at last_redone_time
The data first becomes committed and available for reading by queries (outside of strange NOLOCK situations) at last_commit_time
I suspect that the answer to your question is the last of the four concepts, last_commit_time. There is a small overhead in the time recorded in this column, however, due to the transmission time of the data between the SECONDARY and the PRIMARY. This is likely to be unimportant in calculations for determining the speed of data throughput, though.
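To make those columns concrete, here's a quick query sketch against sys.dm_hadr_database_replica_states; the lag figure at the end is only an illustrative approximation:

```sql
-- Show the synchronisation timestamps discussed above for each replica and
-- database, plus a rough send-to-commit lag in milliseconds.
SELECT
    ar.replica_server_name,
    DB_NAME(drs.database_id) AS database_name,
    drs.last_sent_time,
    drs.last_received_time,
    drs.last_hardened_time,
    drs.last_redone_time,
    drs.last_commit_time,
    DATEDIFF(MILLISECOND, drs.last_sent_time, drs.last_commit_time) AS approx_send_to_commit_ms
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON ar.replica_id = drs.replica_id
ORDER BY ar.replica_server_name, database_name;
```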