In general, multiple instances do not increase security, only complexity. There are reasons to use multiple instances, but I don't think they fit your situation.
- You have different groups of users that need administrative (sysadmin, securityadmin, etc.) access.
- One of your databases needs to be on a different side of the firewall than the rest. I'm honestly not sure why this might be necessary but I've seen it done.
- DR/HA
- Wildly different use cases. For example:
  - Reporting data vs. OLTP (basically splitting a load)
  - One database is high transaction and you want `optimize for ad hoc workloads` turned off while for the rest you want it turned on (this requires different server-level settings for best performance; see the sketch after this list).
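As a minimal sketch of why that last point forces a separate instance: `optimize for ad hoc workloads` is an instance-wide `sp_configure` option, so every database hosted on the instance gets the same value.

```sql
-- Sketch only: "optimize for ad hoc workloads" is configured per instance,
-- not per database, so conflicting requirements can justify a second instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Enable it for this instance; there is no per-database override.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```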
You will note that only one of the reasons above has anything to do with security, and it's going to be pretty unusual. Generally those types of system privileges go to one team. Sometimes, however, you need to isolate an instance for a vendor package because you have no choice but to grant the application sysadmin.
As far as performance for multiple instances goes, there is one fairly obvious consideration: each instance has its own overhead. The amount of memory and system resources required for multiple instances will always be higher than for a single instance. If you are willing to pay the price for the additional hardware, that shouldn't be a big issue.

The other performance issue comes into play when you have data on multiple instances that needs to work together, for example writing a query that ties your log information to your login information. If the data is in two different databases on the same instance, you have no unusual performance issues. If, on the other hand, you are using two separate instances, you have to either use a linked server (performance and security issues) or load all of the data into your application and sort it out there (which can work for small amounts of data, but anything past that is going to give you a massive headache).
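A rough sketch of the difference; the database, table, and linked server names here are made up for illustration:

```sql
-- Same instance, two databases: an ordinary cross-database join works directly.
SELECT l.LoginName, a.EventTime, a.Message
FROM   SecurityDB.dbo.Logins AS l
JOIN   AppDB.dbo.AppLog      AS a ON a.LoginId = l.LoginId;

-- Two separate instances: the remote side has to go through a linked server
-- (four-part name), with the extra security surface and performance cost that brings.
SELECT l.LoginName, a.EventTime, a.Message
FROM   SecurityDB.dbo.Logins            AS l
JOIN   [OTHERINSTANCE].AppDB.dbo.AppLog AS a ON a.LoginId = l.LoginId;
```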
If you delete a row in SQL Server, the space it uses is freed. This makes space available within the 8 KB page. Whether the space will be reused automatically depends on how you insert and delete data.
If your `CLUSTERED INDEX` is an ascending value, perhaps using the `IDENTITY` property for an `INT` or `BIGINT` column, then the space will not readily be reused just based on deletions and insertions to the table.
If your `CLUSTERED INDEX` is based on some other data, such as `UserName`, then statistically a fair amount of the space will eventually be reused.
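One way to see whether that freed space is actually being reused is to look at how full the pages are. This is only a sketch; `dbo.Users` is a placeholder table name:

```sql
-- Average page fullness and page count for each index on the table.
-- SAMPLED (or DETAILED) mode is needed to populate avg_page_space_used_in_percent.
SELECT index_id,
       avg_page_space_used_in_percent,
       page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Users'), NULL, NULL, 'SAMPLED');
```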
That is just based on the behavior of insertion. However, you can schedule a period in which you alter the index to reorganize your data and reclaim the space from the empty pages.
There are tools that help with this, such as the maintenance scripts that https://ola.hallengren.com/ provides.
Or you can create targeted updates by choosing just certain indexes to be reorganized. See the details on your options at: https://msdn.microsoft.com/en-us/library/ms188388(v=sql.100).aspx
A sample from the MSDN post:
```sql
ALTER INDEX ALL ON Production.Product
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON,
              STATISTICS_NORECOMPUTE = ON);
```
Review the options that you would like to use from that page.
As Shanky described in his comments, shrinking files is a really bad choice.
Using the `REBUILD` or the `REORGANIZE` options of `ALTER INDEX` will give you better results. However, the `ALTER INDEX` should not need to be run frequently. Analyze the degree of fragmentation in order to choose the frequency and the window of time you will use.
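As a hedged sketch of that analysis (the table name `dbo.Users` is a placeholder, and the 5% / 30% cut-offs are the commonly cited guidance from the documentation linked above):

```sql
-- Leaf-level fragmentation per index, with a suggested action.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count,
       CASE
           WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ps.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'leave alone'
       END AS suggested_action
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Users'), NULL, NULL, 'LIMITED') AS ps
JOIN   sys.indexes AS i
       ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
```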
An `ALTER INDEX` with either `REBUILD` or `REORGANIZE` will order the data into the data pages while reserving the space indicated by the `FILLFACTOR`. This means that if many of the data pages are fragmented, perhaps due to many deletions, the data will be moved around so as to put it in `CLUSTERED INDEX` order.
While the data is being moved into logical order, it will empty pages and extents, which results in recovering space in the database.
EDIT: https://msdn.microsoft.com/en-us/library/ms189858(v=sql.120).aspx (for SQL Server 2014) explicitly makes the following comments:
"Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction."
"Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor value."
This happens (based on the discussion in comments) because the tables exist in the `model` system database. As per the documentation, any objects created in `model` are copied into every database that is subsequently created. As a solution, dropping the tables from `model` will stop this phenomenon.
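A small sketch of the behaviour and the fix; the table name `dbo.LeftoverTable` and database name `DemoNewDb` are made up for illustration:

```sql
-- Any user table present in model is copied into every new database.
USE model;
CREATE TABLE dbo.LeftoverTable (Id INT NOT NULL);
GO

CREATE DATABASE DemoNewDb;
GO
-- DemoNewDb now contains dbo.LeftoverTable, inherited from model.
SELECT name FROM DemoNewDb.sys.tables;

-- The fix: drop the stray table from model so future databases no longer inherit it.
USE model;
DROP TABLE dbo.LeftoverTable;
```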