But, from reading into it more, separate physical disks for mdf and ldf files don't really apply when it comes to SSDs. Correct?
The original reason for splitting log and data files off onto separate disks was twofold: latency and bandwidth on the drives.
SSDs don't remove these restrictions, but they do raise the limits quite significantly (roughly 7.9 ms for a read on a single HDD vs 0.1 ms on a single SSD).
So ultimately yes and no - it doesn't apply AS MUCH as with HDDs, but those limits are still there and can still be hit. It all depends on your workload.
Is my "thinking" setup good or just simply a waste (i.e. no reason to separate out tempdb) where I now have an extra SSD to make use of elsewhere?
Assuming that
- You have 3 physical SSDs
- You have 1 physical HDD
- You need the data to be redundant, but not necessarily the system itself
Your proposed setup would have a few issues (as mentioned before), and a single drive failing is the main one.
You could go for something like this.
Single 7200rpm drive - Windows OS
RAID 5 array (3 SSDs) - broken down into 4 volumes (D for Data, L for Logs, S for Swap and T for Temp)
OR
Single 7200rpm drive - Windows OS
Single SSD - Temp and Swap
RAID 1 array (2 SSDs) - Data and Logs
It's a personal preference of mine to offload Windows onto a non-SSD drive when you only have a limited number of SSDs, but this entirely depends on what the server is doing and how much risk you're willing to take.
The size itself is something you will need to determine appropriately, both when setting the initial size and when growing. This is all part of properly sizing your database.
But you need to be careful with growing your files in such small increments. If your files grow often, you are forcing SQL Server to perform this relatively expensive operation repeatedly. Take, for instance, growth of the transaction log: while it happens, all write activity is impeded. On top of that, if you grow your transaction log in small increments and it grows often, you will end up with a large number of small VLFs (virtual log files).
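You can check how many VLFs your log has accumulated. As a minimal sketch, assuming SQL Server 2016 SP2 or later (older versions would use DBCC LOGINFO instead):

```sql
-- Count the VLFs in the current database's transaction log.
-- A count in the thousands is usually a sign of frequent small autogrowths.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```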
Instant File Initialization is a great help here, as it bypasses the zero-initialization of the newly allocated space when a data file is created or grown. This shaves off a lot of the overhead and performance impact, but it only applies to data files, not log files.
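You can verify whether Instant File Initialization is in effect for the service account. A quick check, assuming SQL Server 2016 SP1 or later (where this column was added):

```sql
-- Shows whether the SQL Server service account has the
-- "Perform volume maintenance tasks" privilege, i.e. IFI is enabled
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```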
As you can see, there are a handful of things to consider depending on your environment and how much your database grows. Size your database appropriately and find the right growth increment.
Note that autogrowth should be reserved for emergency situations; it is recommended to manually grow your files if at all possible. This gives you full control over all of the points above, as well as letting you determine when the operation occurs.
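Manually growing a file is a single ALTER DATABASE statement that you can run during a maintenance window. A sketch, with a hypothetical database and logical file name:

```sql
-- Grow the log file of a hypothetical database "Sales" to 8 GB
-- in one controlled operation, instead of relying on autogrowth
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_log, SIZE = 8192MB);
```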
As for your numbers, "space available" is just that: the free space you have for additional data before growth needs to happen. One strategy is to get notified when you reach a certain threshold of used space so that you can schedule and kick off growth of the appropriate file(s).
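The free space per file can be pulled from the catalog views, which is what you'd feed into such an alert. A minimal sketch:

```sql
-- Free space per file in the current database.
-- size is reported in 8 KB pages, so divide by 128 for megabytes.
SELECT name,
       size / 128.0 AS size_mb,
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS free_mb
FROM sys.database_files;
```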
Best Answer
If you're copying all of the same data, from all of the same tables, in your source database to your target database, then use your existing source database's file sizes and the provisioned disk behind them as a benchmark for what you'll likely roughly need for your target database.
Otherwise, Erik Darling wrote a good, brief article regarding this in No but really, how big should my log file be?. To summarize his recommendation: if your largest index is greater in size than 25% of your database size (MDF), then, under the assumption you're doing maintenance such as index rebuilds, you'll need at least that much log space plus additional space for whatever else could be running simultaneously. Otherwise, 25% of the MDF size is a decent starting point.
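Finding your largest index to compare against that 25% figure is straightforward from the partition stats DMV. A sketch:

```sql
-- Largest index in the current database, in megabytes,
-- to compare against 25% of your total MDF size
SELECT TOP (1)
       OBJECT_NAME(ps.object_id)       AS table_name,
       i.name                          AS index_name,
       SUM(ps.used_page_count) / 128.0 AS index_size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
GROUP BY ps.object_id, ps.index_id, i.name
ORDER BY SUM(ps.used_page_count) DESC;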
For new databases, I personally use the rule of halves: make the LDF half the size of the MDF, and set both files' growth rates to half of their sizes. For example, if I have a good idea my database will hold 1 GB of data, I'll set the MDF to 1 GB with a 512 MB growth rate, and the LDF to 512 MB with a 256 MB growth rate.
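The rule of halves for that 1 GB example translates directly into the CREATE DATABASE statement. A sketch, with an illustrative database name and file paths:

```sql
-- "Rule of halves" for a database expected to hold ~1 GB of data:
-- MDF = 1 GB growing by 512 MB, LDF = 512 MB growing by 256 MB
-- (name and paths are hypothetical)
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data, FILENAME = 'D:\Data\Sales.mdf',
     SIZE = 1024MB, FILEGROWTH = 512MB)
LOG ON
    (NAME = Sales_log, FILENAME = 'L:\Logs\Sales_log.ldf',
     SIZE = 512MB, FILEGROWTH = 256MB);
```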
Neither of these suggestions is perfect (the values are somewhat arbitrary), but I think they are good enough starting points.
The reason it's hard to predict a perfect answer for sizing your LDF is that it depends on precisely how many transactions you plan to run against the database between Transaction Log backups, and/or on how big the biggest transaction you run will be. That's hard enough for anyone to figure out proactively when provisioning their own database file sizes, let alone for someone else's database.
For example, if your target database copies all of the exact same data as your source database, but after it's copied you run some additional transformations that are transactionally heavy, then you'll likely need more space allocated to your target database's LDF than is allocated to your source database's.
Or if, for example, you scheduled your Transaction Log backups on the target database at a lower frequency than on the source database, you may also find you need a larger log file (LDF) for the target than for the source.
Best of luck!