Simply:

DBCC SHRINKDATABASE: shrinks all of the database's files.
DBCC SHRINKFILE: shrinks just one file.

For example, you may have a log backup issue that has let the log file grow out of control, so you run DBCC SHRINKFILE against that one file. You will almost never use DBCC SHRINKDATABASE.
Before you consider using either command, please read Paul Randal's blog on shrinking.
I'd shrink neither one of the files (mdf, ldf) unless there was a clear reason. The files are the size they are because they need to be. Any blogs suggesting to do so as part of regular maintenance probably don't understand how SQL Server works.
The benchmark you cited uses the term "select time" to mean the time that elapses between when the query is sent to the server for execution and when the first row is available for retrieval. The "fetch time" is the time between when the first row is available to the client and when the last row has been retrieved by the client.
"Select time" then means the amount of time the server spends identifying the rows that will be returned, and "Fetch time" means the amount of time required for the server to actually deliver all of the data to the client.
In the simplest query, SELECT * FROM t1, where the server has to do almost no work to identify which rows should be returned (all rows are returned, since there is no WHERE clause), we could expect the select time to be almost instantaneous, while the fetch time would depend on how fast the server can read the rows from its backing store (or cache) as well as the available network bandwidth and latency between the server and the client. In a more complicated query, where the server has to do more work to identify the rows, the select time could be much higher and the fetch time lower by comparison.
In summary, "Select time" is an estimate, based on observation from the client's perspective, of how fast the server can identify the rows to be returned, while "Fetch time" is how fast the server can actually deliver them once they have been identified.
The two times added together are the actual time required for your query to be fully answered. In light of that, the only thing that really matters is the total, since that's how long you're going to have to wait until you can use the entire result set.
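You can observe this split yourself from the client side. Here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for any client library (the timings are illustrative only; real numbers depend on the server, network, and query):

```python
import sqlite3
import time

# Build a throwaway table with enough rows that fetching takes measurable time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [(i, "x" * 100) for i in range(100_000)])

cur = conn.cursor()

start = time.perf_counter()
cur.execute("SELECT * FROM t1")   # "select time": identify the rows to return
select_time = time.perf_counter() - start

start = time.perf_counter()
rows = cur.fetchall()             # "fetch time": actually retrieve every row
fetch_time = time.perf_counter() - start

# For a bare SELECT with no WHERE clause, nearly all of the elapsed time
# shows up as fetch time; the sum is what you actually wait for.
print(f"select: {select_time:.4f}s  fetch: {fetch_time:.4f}s  "
      f"total: {select_time + fetch_time:.4f}s")
print(len(rows))
```

The same pattern applies to any driver that separates statement execution from row retrieval.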
I think you should first understand the difference between sync I/O and async I/O. The basic nature of both can be found in Bob Dorr's I/O presentation blog; see the section "Async vs Sync IO".

In very simple terms, with async I/O the program or calling code does not wait for the I/O operation to complete after issuing the request; it gets busy with other work and can come back later to check whether the request has finished. With sync I/O, the calling code waits for acknowledgement that the I/O request is done. Quoting from Bob Dorr's blog:
The wait time here represents the time spent waiting for the async or sync I/O request to complete.
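The difference between the two patterns can be sketched in Python (the scratch file is hypothetical, and a thread pool stands in for the OS-level async I/O machinery a database engine would use):

```python
import concurrent.futures
import os
import tempfile

# Create a scratch file to read (a stand-in for a database file).
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 1_000_000)
os.close(fd)

def read_file(p):
    with open(p, "rb") as f:
        return f.read()

# Sync I/O: the caller blocks here until the read completes.
data_sync = read_file(path)

# Async I/O: post the request, keep doing other work, check on it later.
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(read_file, path)   # issue the I/O request
    other_work = sum(range(1_000))          # caller stays busy meanwhile
    data_async = future.result()            # now wait for the acknowledgement

os.remove(path)
print(len(data_sync), len(data_async))
```

The wait the answer describes corresponds to the time spent blocked in `read_file` (sync) or in `future.result()` (async) before the data is available.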