Really, a reorganise-and-shrink is never recommended.
If you can take the apps the database is serving offline, you can speed up the process and reduce index fragmentation by removing all indexes and primary/foreign key constraints before the shrink, then recreating them all afterwards. With the indexes gone there is less data to move around: only the data pages get shuffled, not the now non-existent index pages, which speeds up the process.
Recreating the indexes after the shrink means they should not be significantly fragmented, and having them absent during the shrink means rebuilding them won't leave lots of small "holes" in the page allocation within the files that could invite fragmentation later.
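As a rough sketch of that workflow (all table, index and file names here are made up; a real script would generate the DROP/CREATE statements from sys.indexes and sys.foreign_keys):

    -- Drop foreign keys first (they depend on the indexes), then the indexes.
    ALTER TABLE dbo.OrderLines DROP CONSTRAINT FK_OrderLines_Orders;
    DROP INDEX IX_Orders_CustomerID ON dbo.Orders;

    -- Shrink the data file to a target size (in MB).
    DBCC SHRINKFILE (N'MyDb_Data', 10240);

    -- Recreate the indexes and keys afterwards.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);
    ALTER TABLE dbo.OrderLines
        ADD CONSTRAINT FK_OrderLines_Orders
        FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);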
Another option, if you can offline the applications, is to migrate all the data to a fresh database of the same structure. If your build process is solid, you should be able to build that blank DB quickly; if not, create one from the current DB (restore a backup of the current one, truncate/delete all the contents of the tables, and perform a full shrink).
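If you go the blank-a-copy route, the outline is something like the following; database names and paths are illustrative, and the undocumented sp_MSforeachtable is used only to keep the sketch short:

    -- Restore a copy of the current database under a new name.
    RESTORE DATABASE MyDb_New
    FROM DISK = N'X:\Backups\MyDb.bak'
    WITH MOVE N'MyDb_Data' TO N'Y:\Data\MyDb_New.mdf',
         MOVE N'MyDb_Log'  TO N'Y:\Logs\MyDb_New.ldf';

    -- Empty every table (FKs disabled so delete order doesn't matter),
    -- then shrink the now-empty files.
    USE MyDb_New;
    EXEC sys.sp_MSforeachtable N'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
    EXEC sys.sp_MSforeachtable N'DELETE FROM ?';
    EXEC sys.sp_MSforeachtable N'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
    DBCC SHRINKDATABASE (MyDb_New);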
You might still want to drop all the indexes in the destination and recreate them afterwards, as this can be a lot more efficient when changing a lot of the indexed data (100% of it, in this case). To speed up the copy, keep the datafile(s) of the destination database on different physical drives to the source (unless you are using SSDs, in which case head movement isn't a concern); you can move them to the source location when you are done.
Also, if creating the destination as new (rather than by blanking a copy of the source), create it with an initial size that will contain all the current data plus some months' worth of growth - that will make the data copy a little faster again, as it won't be allocating new space every now and again throughout the process.
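For example, something along these lines, with sizes, paths and names purely illustrative:

    CREATE DATABASE MyDb_New
    ON PRIMARY
        ( NAME = N'MyDb_New_Data',
          FILENAME = N'Y:\Data\MyDb_New.mdf',  -- different physical drive to the source
          SIZE = 60GB,                         -- current data plus a few months' growth
          FILEGROWTH = 1GB )
    LOG ON
        ( NAME = N'MyDb_New_Log',
          FILENAME = N'Y:\Logs\MyDb_New.ldf',
          SIZE = 8GB,
          FILEGROWTH = 1GB );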
This might be better than using shrink because migrating the data to a fresh database replicates the intended action of the shrink operation, but potentially with far less fragmentation (which is the unintended consequence of a reorganise-and-shrink). A shrink simply takes blocks from near the end of the file and puts them in the first space nearer the beginning, making no effort to keep related data together.
I suspect the result will be more efficient space-wise too, as there are likely to be fewer part-used pages afterwards. A shrink will just move part-used pages around; moving the data is more likely to result in full pages, especially if you insert into the destination in the order of a table's clustered key/index (where a table has one) and create other indexes after the data has all migrated.
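A sketch of such a copy, with hypothetical table and column names; the destination table already has its clustered index but no nonclustered ones yet:

    -- Copy in clustered key order so the destination pages fill densely.
    INSERT INTO MyDb_New.dbo.Orders WITH (TABLOCK)
        (OrderID, CustomerID, OrderDate, Amount)
    SELECT OrderID, CustomerID, OrderDate, Amount
    FROM MyDb.dbo.Orders
    ORDER BY OrderID;  -- the clustered key

    -- Nonclustered indexes only go on once the data has migrated.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON MyDb_New.dbo.Orders (CustomerID);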
Of course, if you can't take the apps offline at all, just performing a shrink is your only option, so if you really need to reclaim the space, go with that. Depending on your data, access patterns, common working set size, how much RAM the server has, and so forth, the extra internal fragmentation may not be all that significant in the end.
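If an in-place shrink it must be, note that the TRUNCATEONLY option just releases free space from the end of the file without moving any pages, so it causes no fragmentation; only fall back to a full shrink if that doesn't reclaim enough:

    -- Releases unused space from the end of the file; no pages are moved.
    DBCC SHRINKFILE (N'MyDb_Data', TRUNCATEONLY);

    -- Full shrink to a target size in MB; this is the variant that shuffles pages.
    DBCC SHRINKFILE (N'MyDb_Data', 10240);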
For the copy operation, either SSIS or base T-SQL would work just as well (the SSIS option might be less efficient, but potentially easier to maintain later). If you create the FK relationships at the end, along with the indexes, you can do a simple "for each table, copy" in either case. Of course, for a one-off, a shrink+reorganise is probably fine too, but I just like to scare people into never considering regular shrinks! (I've known people to schedule them daily.)
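In base T-SQL, the "for each table, copy" step can be as simple as generating INSERT...SELECT statements from sys.tables; this sketch assumes identical schemas in both databases and glosses over identity columns:

    USE MyDb;  -- the source database

    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql += N'INSERT INTO MyDb_New.' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                 + N' SELECT * FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                 + N';' + NCHAR(13)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON t.schema_id = s.schema_id;

    EXEC sys.sp_executesql @sql;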
Best Answer
If the goal is to make backups smaller, shrinking the data file(s) will not help accomplish that goal in this case, because all of those pages that were holding that max data were LOB pages, which should be deallocated anyway (potentially requiring a rebuild). So unless your drive is literally out of space, I don't see any reason to shrink the file from 8 GB, unless you are 100% confident it will never grow to that size again.
Note that when you shrink a data file you might actually increase fragmentation, which could decrease read performance, impact memory usage, and reduce concurrency. The knee-jerk reaction there is to rebuild indexes, which fixes the fragmentation but increases the file size to make room for the new copies of the index. When the rebuild is done, those pages are freed up, but the data file doesn't magically shrink again, so you're back where you started and have gained nothing.
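You can watch that happen by checking file sizes around a rebuild (the table name is hypothetical):

    SELECT name, size * 8 / 1024 AS size_mb  -- size is in 8 KB pages
    FROM sys.database_files;

    ALTER TABLE dbo.Documents REBUILD;

    SELECT name, size * 8 / 1024 AS size_mb
    FROM sys.database_files;
    -- The data file grows to hold the new copy of the table; the old pages
    -- are freed inside the file, but the file itself doesn't shrink back.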
If you delete 95% of the data, the backup size will go down, whether you shrink the MDF file(s) or not. An MDF file is just a container; its size does not affect the backup. The actual data affects the backup. More precisely, the number of pages allocated with any data affects the backup (which is why a table rebuild after a big delete might yield a difference in backup size).
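The actual backup sizes are easy to compare from the backup history in msdb (the database name is a placeholder):

    SELECT TOP (10)
           database_name,
           backup_finish_date,
           backup_size / 1048576.0 AS size_mb
    FROM msdb.dbo.backupset
    WHERE database_name = N'MyDb'  -- placeholder name
      AND type = 'D'               -- full database backups only
    ORDER BY backup_finish_date DESC;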
This is easy to test for yourself, but keep in mind the impact you observe will depend on a lot of things, including data types, fragmentation, free space, and a host of other variables. Just look at the size of a full backup after each of these steps:
1. after loading the table with the max data
2. after updating the column to NULL
3. after rebuilding the table (and after dropping the column, maybe)
4. after shrinking the data file(s)

My theory is that, in almost all cases I can think of, there will be no tangible difference between the backup size after 3. and after 4.
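A self-contained harness along those lines might look like this; every name and path is made up, and the idea is simply to compare the four .bak file sizes afterwards:

    USE MyDb;

    -- 1. Load the table with max data, then take a baseline full backup.
    CREATE TABLE dbo.BlobTest (id int IDENTITY PRIMARY KEY, payload varchar(max));
    INSERT INTO dbo.BlobTest (payload)
    SELECT REPLICATE(CONVERT(varchar(max), 'x'), 100000)  -- ~100 KB per row
    FROM sys.all_objects;
    BACKUP DATABASE MyDb TO DISK = N'X:\Scratch\step1.bak';

    -- 2. Update the column to NULL.
    UPDATE dbo.BlobTest SET payload = NULL;
    BACKUP DATABASE MyDb TO DISK = N'X:\Scratch\step2.bak';

    -- 3. Rebuild the table so the LOB pages are deallocated.
    ALTER TABLE dbo.BlobTest REBUILD;
    BACKUP DATABASE MyDb TO DISK = N'X:\Scratch\step3.bak';

    -- 4. Shrink the data file.
    DBCC SHRINKFILE (N'MyDb_Data', 1024);
    BACKUP DATABASE MyDb TO DISK = N'X:\Scratch\step4.bak';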
A different variable you might care about is the time it takes to restore that backup on another system, which will be dependent on the size of the MDF file(s) only in the following cases:

- instant file initialization is not enabled, so the restore has to zero-initialize the full size of the data file(s); or
- the destination drive doesn't have enough free space for the file(s) at their current size.

The first one is easy to fix: this is just a right (Perform Volume Maintenance Tasks) you can grant to the service account. The second one should be easy to fix too: add disk space. There's no reason to sacrifice your primary system because your secondary system is inadequate.
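On SQL Server 2016 SP1 and later you can check whether instant file initialization is already enabled straight from a DMV:

    SELECT servicename, instant_file_initialization_enabled
    FROM sys.dm_server_services;
    -- 'Y' means the service account already holds Perform Volume Maintenance
    -- Tasks, so restores won't have to zero-initialize the data file(s).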