The initial size is not just 3 MB; it is taken from the model database (if not specified during the creation of your user database). So, assuming you haven't specified an initial size when creating your user database, and you haven't altered the model database file sizes after creating your user database, you can do the following:
--Create testDB
CREATE DATABASE [TEST_100]
GO
--grow your database file size
ALTER DATABASE [TEST_100] MODIFY FILE ( NAME = N'TEST_100', SIZE = 100MB )
GO
--switch context
USE [TEST_100]
GO
--Find size of modelDB mdf file, this is your initial file size used for the userdb
DECLARE @TargetFileSize int
SELECT @TargetFileSize = (size * 8 / 1024)
FROM sys.master_files
WHERE database_id = 3 --model database
AND file_id= 1 --first file is mdf, assuming you have a model with just one mdf. If you have multiple files, change for the one you need to find.
--shrink the first file of your current database to the target size that you just found
DBCC SHRINKFILE (1,@TargetFileSize)
EDIT
Okay, some extra info is needed after your edits and comments.
First of all, I feel that the "Initial size" label that you see when you look at the file properties in SSMS is a misnomer. Basically, the initial size is just a concept: it's the first size used during creation of the database. You can either explicitly specify it in the CREATE DATABASE
statement, or you can have SQL Server implicitly copy it from the model database by omitting that information during creation.
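For example, a minimal sketch of specifying the initial size explicitly, so nothing is inherited from model (the file names and paths are illustrative):

```sql
-- Explicit sizes: nothing is copied from the model database
CREATE DATABASE [TEST_100]
ON PRIMARY
    (NAME = N'TEST_100', FILENAME = N'D:\Data\TEST_100.mdf',
     SIZE = 100MB, FILEGROWTH = 64MB)
LOG ON
    (NAME = N'TEST_100_log', FILENAME = N'D:\Data\TEST_100_log.ldf',
     SIZE = 16MB, FILEGROWTH = 64MB);
```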
However, once the database is created, from a DBA perspective there is no such thing as an "initial size"; there is only one property visible to a DBA, and that is the actual size. Even the "Initial size" property in SSMS just shows the actual size, not the initial size.
So how do DBCC SHRINKFILE
or DBCC SHRINKDATABASE
"know" the initial size then, even after you have changed the size? Interesting question.
The first page of a database file is the file header page. In there you have, among others, two properties: Size and MinSize.
At creation of the file, both file header properties get filled with the initial value:
DBCC TRACEON(3604)
--parameters for DBCC PAGE: (Dbname, fileID, pageID, outputTypeID)
DBCC PAGE('Test_100',1,0,3) WITH TABLERESULTS
Both sizes are expressed in data pages; in this case, 288 pages (288 x 8 KB = 2.25 MB).
Now if I alter the file size:
ALTER DATABASE [TEST_100] MODIFY FILE ( NAME = N'TEST_100', SIZE = 50MB )
You can see that the "Size" property is changed to reflect the new size. However, the "MinSize" property still contains the initial size: the minimum size down to which the shrink command will go.
Having said all this, I still don't understand why you want to complicate things by first altering the initial size and then shrinking to that initial size, instead of just shrinking directly to a target size.
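For comparison, shrinking directly to a chosen target (the 10 MB here is just an illustrative value):

```sql
USE [TEST_100];
GO
-- file_id 1, target size in MB
DBCC SHRINKFILE (1, 10);
```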
Anyway, to answer your question: the "initial" size is not exposed as a property to the user/DBA.
As stated in Paul's recommendations and in KB 2154845, use 8 files and adjust as needed.
As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels or make changes to the workload/code.
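The number of logical processors the rule refers to can be read from the instance itself:

```sql
SELECT cpu_count
FROM sys.dm_os_sys_info;
```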
Now also make sure that the tempdb files are sized so that you don't need autogrowth, and that they are all the same size with the same growth parameters, specified in MB rather than as a percentage.
Your tempdb is now ~140 GB, so that is about the size you need to provision for. Create 8 x 15-20 GB data files and a single log file (as the transaction log is used sequentially), and set all the data files with exactly the same growth parameters; pick something that makes sense on your storage, 512 MB is fine if you have instant file initialization. As your transaction log is just about 5 GB, create a single 6-8 GB file for it and set autogrowth in MB. Log growth is not affected by instant file initialization, so make that parameter smaller than the one on the data files.
If you still see contention, then go for 16 x 8-10 GB files and don't change the transaction log parameters.
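A sketch of that layout, assuming 8 x 16 GB data files and one 8 GB log; the temp2 logical name and the drive paths are placeholders for your environment (tempdev and templog are the default tempdb file names):

```sql
USE master;
GO
-- resize the existing data and log files
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 16GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 8GB,  FILEGROWTH = 256MB);
-- add the additional data files with identical size and growth
ALTER DATABASE tempdb ADD FILE
    (NAME = temp2, FILENAME = 'T:\tempdb\temp2.ndf', SIZE = 16GB, FILEGROWTH = 512MB);
-- repeat for temp3 through temp8
```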
Best Answer
No.
No.
SQL Server is an enterprise product and designed with stability and consistency in mind.
That said, unless you have Instant File Initialization on, your performance may suffer, potentially significantly, depending on how well your IO subsystem handles 100 GB of writes. I would suggest doing something of that magnitude (100 of 900 GB) during off hours, simply to avoid the load impact.
If you have Instant File Initialization off, turn it on. You can find instructions here.
Alternatively, you can just Google "SQL Server enable Instant File Initialization".
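On recent versions (SQL Server 2012 SP4, 2016 SP1 and later; this is an assumption about your build) you can also check the current state from the instance itself:

```sql
-- 'Y' means Instant File Initialization is enabled for the service account
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```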