No, In-Memory OLTP really is this unpolished. If you are familiar with Agile, you will know the concept of a "minimal shippable product"; In-Memory OLTP is that. I get the feeling that Microsoft needed a response to SAP HANA and its ilk, and this is what they could get debugged in the timeframe for a 2014 release.
As with anything else, In-Memory OLTP has costs and benefits associated with it. The major benefit is the throughput that can be achieved; one of the costs is the overhead of change management, as you mentioned. That doesn't make it a useless product, in my opinion, it just reduces the number of cases where it will provide a net benefit. Just as columnstore indexes are now updatable and indexes can now be filtered, I have no doubt that the functionality of In-Memory OLTP will improve over the coming releases.
SQL Server 2016 is now generally available. Just as I supposed, In-Memory OLTP has received a number of enhancements. Most of the changes implement functionality that traditional tables have enjoyed for some time. My guess is that future features will be released at the same time for both in-memory and traditional tables. Temporal tables are a case in point: new in this version, they are supported by both In-Memory and disk-based tables.
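As a sketch of that temporal support (table and column names here are my own invention), a memory-optimized table can be system-versioned in SQL Server 2016; note that the history table itself is disk-based, only the current table lives in memory:

```sql
-- Hypothetical example: a memory-optimized, system-versioned temporal table.
-- Requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.Account
(
    AccountId    int NOT NULL PRIMARY KEY NONCLUSTERED,
    Balance      decimal(18, 2) NOT NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA,
      SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountHistory));
```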
Is this a bug?
No, it is not a bug; it is by design. The files are kept for troubleshooting and supportability purposes.
From the SQL Server 2014 In-Memory OLTP white paper:
Database administrators do not need to maintain the files that are generated by native compilation. SQL
Server automatically removes generated files that are no longer needed, for example on table and
stored procedure deletion, and on drop database, but also on server or database restart.
I tried to reproduce your scenario on SQL Server 2014 RTM (build 12.0.2000.8), Developer Edition, by creating a test memory-optimized table and checking the loaded DLL using
SELECT name, description FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL'
After I dropped the table, the DLL still appeared in the output of the above SELECT statement, the files were still in the folder, and they were still there after a restart as well.
From Books Online:
No user interaction is needed to manage these files (.c, .obj, .xml, .pdb, .dll). SQL Server will create and remove the files as necessary.
So I guess we just have to follow what Microsoft says: SQL Server will manage them for us :-)
ONLY FOR EDUCATIONAL PURPOSES:
I managed to clean up the old files by:

- Issuing a manual `CHECKPOINT` on the database.
- Taking the database offline and then bringing it online.

Ideally, you should not need to restart the server instance; a manual checkpoint followed by taking the database offline and online will clear the files.
Example repro:
```sql
USE master;
GO
CREATE DATABASE db1;
GO
ALTER DATABASE db1 ADD FILEGROUP db1_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
-- adapt the filename to your environment
ALTER DATABASE db1 ADD FILE (NAME = 'db1_mod', FILENAME = 'D:\SQLServer2014\MSSQL12.SQL2014\MSSQL\DATA\db1_mod')
TO FILEGROUP db1_mod;
GO
USE db1;
GO
CREATE TABLE dbo.t1
(c1 int NOT NULL PRIMARY KEY NONCLUSTERED,
 c2 int)
WITH (MEMORY_OPTIMIZED = ON);
GO
-- now check whether the DLL is loaded
SELECT name, description FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL';
```
![enter image description here](https://i.stack.imgur.com/xEdMz.jpg)
```sql
-- now drop the table and issue a manual checkpoint
USE db1;
DROP TABLE dbo.t1;
CHECKPOINT;
```
The module is still loaded in memory (sometimes even a server restart will reload it):
![enter image description here](https://i.stack.imgur.com/vsREm.jpg)
The files (.c, .obj, .xml, .pdb, .dll) are still present in the folder:
![enter image description here](https://i.stack.imgur.com/xYgYW.jpg)
Now take the database offline and then bring it online; the .c, .obj, .xml, .pdb, and .dll files are all gone:
![enter image description here](https://i.stack.imgur.com/FjMMN.jpg)
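The offline/online step above can be scripted as follows (using the sample database name `db1` from the repro; the `ROLLBACK IMMEDIATE` option is my addition so the command does not block on open sessions):

```sql
-- take the database offline, rolling back any open transactions,
-- then bring it back online; this removes the leftover native-compilation files
ALTER DATABASE db1 SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE db1 SET ONLINE;
```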
Best Answer
Parallel Execution:
SQL Server 2014 supports parallel inserts into temp tables with the `SELECT INTO` syntax. SQL Server 2016 also supports parallel inserts into temp tables that are heaps; as of SQL Server 2016 SP1, parallel inserts into heaps require the `TABLOCK` hint. A query that modifies table variables will not contain any parallel zones; however, a query that merely references a table variable may run in parallel.
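As a sketch of those two patterns (the source table `dbo.SourceTable` and column names are placeholders of my own):

```sql
-- SELECT INTO can use a parallel plan when populating a temp table (SQL Server 2014+)
SELECT c1, c2
INTO #Staging
FROM dbo.SourceTable;

-- INSERT into a heap temp table may also run in parallel in SQL Server 2016;
-- on 2016 SP1 and later, the TABLOCK hint is required for the parallel insert
CREATE TABLE #Staging2 (c1 int NOT NULL, c2 int NULL);

INSERT INTO #Staging2 WITH (TABLOCK)
SELECT c1, c2
FROM dbo.SourceTable;
```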
Memory:
Yes, memory-optimized objects are fully resident in RAM and cannot spill to disk. See this article.
Performance Boost:
My advice would be to first measure the performance of your stored procedure to confirm that the temp tables are the source of the unacceptable performance. If you confirm that, consider the following simpler options first:
- Only save data to temp tables that you need. Apply filters when inserting data as soon as possible.
- Create columns with the right data types.
- Create indexes as appropriate for your temp tables.
- Drop your temp tables as soon as you no longer need them.
- Make sure that your tempdb database follows best practices. This could be a separate post, but here's one resource.
If none of that works, and you feel confident that you understand the limitations of memory-optimized table variables, go ahead and test your procedures with them. The previously referenced blog post does mention that as a use case:
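For reference, a memory-optimized table variable is declared via a memory-optimized table type; a minimal sketch (type and column names are my own, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup):

```sql
-- Hypothetical example: a memory-optimized table type usable as a table variable
CREATE TYPE dbo.StagingType AS TABLE
(
    c1 int NOT NULL PRIMARY KEY NONCLUSTERED,
    c2 int NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO
DECLARE @Staging dbo.StagingType;

INSERT INTO @Staging (c1, c2) VALUES (1, 10), (2, 20);
SELECT c1, c2 FROM @Staging;
```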