The plain answer is yes: if you have enabled trace flag 1118 on your instance, a rebuild of those indexes will move them to uniform (dedicated) extents instead of mixed extents. You can get more information on this from Paul Randal's post: Are mixed pages removed by an index rebuild?
A synopsis of the comment raised by Ray Herring on July 10, 2014, and Paul's response:
Ray asked whether enabling this flag and performing rebuilds would move all the pages in mixed extents to dedicated extents. Paul stated that yes, it would. However, one thing I would note is his second question: whether it was worth worrying about for existing tables and indexes. Paul stated no, since mixed pages really only make a difference in performance in tempdb.
So you are not likely to see a great deal of performance increase by worrying about the pages that are currently mixed. Just do your normal index maintenance and at some point they will get moved.
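As a minimal sketch (the table name is hypothetical): enable the trace flag globally, then rebuild. Note that DBCC TRACEON only lasts until the next restart; to make it permanent, add -T1118 as a startup parameter.

```sql
-- Enable uniform-extent allocations instance-wide (the -1 means "globally"):
DBCC TRACEON (1118, -1);

-- Rebuilding moves the index's pages off of mixed extents:
ALTER INDEX ALL ON dbo.MyTable REBUILD;
```

On SQL Server 2016 and later, uniform-extent allocation is already the default for user databases (controllable per database via ALTER DATABASE ... SET MIXED_PAGE_ALLOCATION OFF), so the trace flag is mainly relevant on older versions.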
Can I host the DLL on a different server so it will get run there?
How would this work? If you want to run code on a remote server, you would still need code on the local server to be able to make that remote call.
Although, you could set up an instance of Express Edition to host the SQLCLR code (it would need to be a Stored Procedure(s) and/or Function(s) instead of a Trigger), then create a Linked Server to that Express instance, and then have a T-SQL Trigger on the local table execute the SQLCLR Stored Procedure(s) and/or Functions over the Linked Server. In this manner, the local code calling the remote code is pure T-SQL. You wouldn't even need to enable CLR Integration on the local instance.
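The Linked Server approach might look something like the following sketch; the Linked Server, database, trigger, and procedure names are all hypothetical, and the Linked Server needs the "RPC Out" option enabled for the remote EXEC to work:

```sql
-- Local, pure T-SQL trigger calling a SQLCLR stored procedure that is
-- hosted on a remote Express instance (Linked Server = [ClrHost]):
CREATE TRIGGER dbo.trg_MyTable_AfterInsert
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- dbo.ExportRow is the SQLCLR proc living in UtilityDB on [ClrHost]
    EXEC [ClrHost].UtilityDB.dbo.ExportRow @SourceTable = N'dbo.MyTable';
END;
```

Keep in mind that the remote call is still synchronous: the DML statement waits for the Linked Server round-trip, so this moves the CPU load but not the latency.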
The reason I ask is that we don't want this code slowing down the database server, which it likely would by performing these other tasks within the code.
Whatever work you do inside of a Trigger, be it a T-SQL or SQLCLR Trigger, will hold up the completion of that DML statement. Using SQLCLR does not really change anything there.
What are the exact requirements of the Trigger with regards to:
do things other than database work, such as write a JSON string to a txt file and update a system log of a different type of database.
Do those steps / actions need to be bound by the transaction, such that if the INSERT operation fails, the txt file does not exist and the log entry is not there?
Finding a way to do this asynchronously would be ideal (as also mentioned by Jonathan). However, making a Web Service call removes the transactional aspect so you might get files and log entries for rows that do not actually exist.
Using an NT service would be the least impacting on the INSERT operation, and unless you are using WITH (NOLOCK) (which you shouldn't do), it will only see the results of successful INSERT operations. But the service is disconnected and will have more latency than the Trigger model, and you will need to find a means of tracking which entries are "new" (the NT service will need to keep track of either a DATETIME value, or maybe could use a ROWVERSION value and compare it to @@DBTS each time). If your table already has a DATETIME or DATETIME2 column being set upon INSERT, then you can leverage that existing column; otherwise you will need to add a DATETIME / DATETIME2 / ROWVERSION column. Or, if you have a PK based upon either an IDENTITY or a Sequence, then you can use that value instead of adding a column, and the NT service would just need to keep track of the most recently used ID. Where would you store that value? Text file? Registry? The "Cloud"? (just kidding about that last one ;-)
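The ROWVERSION-based tracking described above could be sketched like this (table and column names are assumptions); the service persists the highest value it processed and only reads rows newer than that on each poll:

```sql
-- @LastProcessed = the value the NT service saved from its previous poll
DECLARE @LastProcessed BINARY(8) = 0x0000000000000000;

SELECT   s.ID, s.SomePayloadColumn, s.RowVer
FROM     dbo.SourceTable s
WHERE    s.RowVer > @LastProcessed   -- only rows added/modified since last poll
ORDER BY s.RowVer;

-- After processing, persist MAX(RowVer) from this batch (or compare it
-- against @@DBTS) as the starting point for the next run.
```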
On the other hand, you could probably use Service Broker to tie the two pieces together. When the Trigger fires, it can add a message to the Service Broker queue, including any necessary info from the INSERTED table. This allows it to be asynchronous. Then, on the other end, the queue reader can use a SQLCLR stored procedure as the "activation procedure" to take that info and take the appropriate actions (creating the JSON file, logging the entry on the other system, etc).
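The sending side of the Service Broker approach might be sketched as follows; the service, contract, and message-type names are placeholders, and the queues, services, and contract are assumed to already exist:

```sql
-- Inside the AFTER INSERT trigger body:
DECLARE @Handle  UNIQUEIDENTIFIER,
        @Payload XML = (SELECT ID, SomeColumn FROM INSERTED FOR XML PATH('Row'));

BEGIN DIALOG CONVERSATION @Handle
    FROM SERVICE [//Export/Initiator]
    TO SERVICE   '//Export/Target'
    ON CONTRACT  [//Export/Contract]
    WITH ENCRYPTION = OFF;

-- The message is committed with the transaction, so it is only
-- delivered if the INSERT actually succeeds:
SEND ON CONVERSATION @Handle
    MESSAGE TYPE [//Export/RowInserted] (@Payload);
```

A nice property of this model is that SEND participates in the DML transaction, so you keep the transactional guarantee while still doing the heavy work asynchronously.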
This model might not require Service Broker. You can set up your own queue table, have a T-SQL Trigger dump entries into it, and then create a SQL Server Agent job that reads from the queue table and calls the SQLCLR Stored Procedure(s) and/or Function(s) asynchronously (whether they are located on the local server or across a Linked Server), and then removes the records when it has processed them successfully.
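The roll-your-own queue-table variant could be sketched like this (all object names are hypothetical); the T-SQL trigger does nothing but enqueue keys, and a scheduled Agent job step drains the queue:

```sql
CREATE TABLE dbo.ExportQueue
(
    QueueID    INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    SourceID   INT NOT NULL,
    EnqueuedAt DATETIME2 NOT NULL DEFAULT (SYSUTCDATETIME())
);
GO

CREATE TRIGGER dbo.trg_MyTable_Enqueue
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Cheap and transactional: just record the new keys
    INSERT INTO dbo.ExportQueue (SourceID)
    SELECT i.ID
    FROM   INSERTED i;
END;
GO

-- Agent job step (runs every N minutes): process and remove entries.
-- dbo.ExportRow is a hypothetical SQLCLR proc (local or via Linked Server).
DECLARE @QueueID INT, @SourceID INT;
WHILE 1 = 1
BEGIN
    SELECT TOP (1) @QueueID = QueueID, @SourceID = SourceID
    FROM dbo.ExportQueue
    ORDER BY QueueID;

    IF @@ROWCOUNT = 0 BREAK;   -- queue is empty

    EXEC dbo.ExportRow @ID = @SourceID;
    DELETE FROM dbo.ExportQueue WHERE QueueID = @QueueID;
END;
```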
Best Answer
SQLCLR only allows for managed code / pure-MSIL Assemblies. So .NET Visual C++ might work, and I think I have seen one example of it.
You can load / call unmanaged C++ DLLs in an UNSAFE C# / VB.NET Assembly, but that can be riskier than using "unsafe" .NET functionality. In theory, any .NET language should work IF it produces a pure-MSIL Assembly, but I have yet to see working examples of this. I have seen an example of Visual C++, and I have seen an example of someone coding actual MSIL (on Stack Overflow), though I'm still doubtful as to the supposed benefit of that.
Folks have tried F#, but you can't do that as SAFE since it requires an additional runtime library for F#:
F# with sqlclr in a reasonably safe way and scripted assembly
As to the updated question, which now also asks about the Extended Stored Procedure (XP) API: that API is deprecated and should be avoided.