When you drop a table, the space is freed within the database, but not returned to the OS, because the database keeps that space allocated to its files. You would need to shrink the database to release it, but that is usually a bad idea: keeping free space inside the database lets it grow without asking the OS to extend the file first, and repeated grow/shrink cycles can cause fragmentation.
SQL Server pre-allocates the space for the fixed-length datatypes in your table (datetime: 8 bytes per field per row; numeric(10-14, x): 9 bytes per field per row), and possibly 'stubs' for the variable-length datatypes. It doesn't matter whether they're null or not; this is by design, so that rows can be inserted and updated efficiently. Varchar fields with defaults will also have space allocated to them appropriate to the size of the default. It also means your table is going to get a LOT bigger once you start populating the currently-null varchar fields.
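You can see this per-row allocation directly by checking the average record size at the leaf level; a sketch using the physical-stats DMV, where `dbo.YourTable` is a placeholder name:

```sql
-- Average bytes per row at the leaf level, which includes the fixed-length
-- space reserved even for NULL columns. DETAILED mode scans every page.
SELECT i.name AS index_name,
       ps.avg_record_size_in_bytes,
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.YourTable'), NULL, NULL, 'DETAILED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.index_level = 0;  -- leaf pages only (the table data itself)
```

If `avg_record_size_in_bytes` is close to the sum of your fixed-length column sizes even though most columns are NULL, that's the pre-allocation at work.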
A clustered index and the 'hidden' index you refer to (a heap, in this case) aren't really indexes as such; they are the table itself, and so take up the space required to store the table data.
To compact your data, you can use smaller datatypes in places: for example, date and time in place of datetime where appropriate; smaller-precision numeric fields (numeric(9, x) and lower is 5 bytes); integer types tinyint/smallint/int instead of numeric(10,0) for whole numbers; and bit fields to represent true/false instead of varchar(5) (I count 11 true/false fields, which could be covered by 2 bytes' worth of bit fields instead of the 66-77 bytes they currently occupy).
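As a sketch of the kind of change involved (the table and column names here are hypothetical, not from your schema), shrinking a whole-number column and replacing a varchar(5) flag with a bit might look like:

```sql
-- Hypothetical example: numeric(10,0) -> int saves 5 bytes per row;
-- a varchar(5) 'true'/'false' flag -> bit saves several bytes per row per flag.
ALTER TABLE dbo.Invoice ALTER COLUMN line_count int NOT NULL;  -- was numeric(10,0)

ALTER TABLE dbo.Invoice ADD is_paid bit NOT NULL DEFAULT 0;    -- replacement flag
UPDATE dbo.Invoice
SET is_paid = CASE WHEN paid_flag = 'true' THEN 1 ELSE 0 END;  -- migrate values
ALTER TABLE dbo.Invoice DROP COLUMN paid_flag;                 -- drop old varchar(5)
```

Note that after dropping columns the space is only reclaimed once the clustered index is rebuilt.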
Beyond that, there appears to be a lot of normalisation that could be done on the table to reduce its size. For example, can you combine the invoice date and invoice time into a single datetime or datetime2 field? Addresses, staff/user details, comments and many others can be pushed out into lookup tables with a foreign key added to this table. There also seem to be a lot of calculated values, which may be better created as computed columns (or the logic moved into the app, if there is one) rather than stored as actual fields.
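For instance (again with hypothetical column names), the date/time merge and a derived total as a computed column could look like:

```sql
-- Combine separate date and time columns into one datetime value.
ALTER TABLE dbo.Invoice
  ADD invoice_at AS (CAST(invoice_date AS datetime) + CAST(invoice_time AS datetime));

-- Recompute a stored total on read instead of persisting it as a real field.
ALTER TABLE dbo.InvoiceLine
  ADD line_total AS (quantity * unit_price);
```

A non-persisted computed column like this takes no storage at all; add PERSISTED only if you need to index it or the expression is expensive.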
Finally, you can look into sparse columns, where null values do not take up any space. However, looking at the table, I don't think they'll help much: it looks like only a small number of columns will actually contain null data in normal use.
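Marking a column as sparse is a small change (the column here is hypothetical), but keep in mind that non-null values in a sparse column cost a few extra bytes each, so it only pays off when the column is mostly NULL:

```sql
-- NULLs in a sparse column take no space at all, but non-NULL values
-- cost roughly 4 extra bytes each, so use this only for mostly-NULL columns.
ALTER TABLE dbo.Invoice ALTER COLUMN dispute_reason ADD SPARSE;
```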
You are not going to get accurate figures for space used back in time, because how efficiently rows are laid out in the pages changes as your data changes over time.
Your "inserted date" column will give you an estimate of the number of rows present at a past date, by taking the current row count and subtracting the count of rows inserted since the date you are measuring for (or just counting those inserted on/before that date). It may give an accurate reading rather than an estimate if rows have only ever been changed via UPDATE or MERGE, and never via DELETE-then-re-INSERT. You can then use the row count at a date in the past to estimate the total size of the pages the table occupied by calculating:
(data_size_now * rows_then) / rows_now
That is for trying to look back in time when you haven't kept logs. Maintaining accurate records going forward is easy: set up a job that runs the analysis for each table and have it store the results for later review.
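Putting the estimate together as a sketch (the table name, the `inserted_date` column, and the date of interest are all placeholders):

```sql
-- Estimate the table's size at a past date from its current size
-- and the ratio of rows then to rows now.
DECLARE @as_of date = '2023-01-01';  -- hypothetical date of interest
DECLARE @rows_now bigint, @rows_then bigint, @data_kb_now bigint;

SELECT @rows_now  = COUNT_BIG(*) FROM dbo.Invoice;
SELECT @rows_then = COUNT_BIG(*) FROM dbo.Invoice WHERE inserted_date <= @as_of;

-- Current space used by the table across all its allocation units.
SELECT @data_kb_now = SUM(a.used_pages) * 8   -- pages are 8 KB each
FROM sys.partitions AS p
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
WHERE p.object_id = OBJECT_ID(N'dbo.Invoice');

SELECT (@data_kb_now * @rows_then) / @rows_now AS estimated_kb_then;
```

A scheduled job that runs the space query above and inserts the result with the current date into a logging table gives you the accurate going-forward history.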