I believe your best bet is going to be a TRY...CATCH. I've done two things here. First, declare and capture your "oldID" before the transaction begins; this keeps the transaction time down and avoids deadlocks as much as possible. Second, I've put the "core" of the rest of it into a TRY block and repeated the reseed in the CATCH block, so that regardless of the error the seed gets reset. I believe this will work even in a deadlock situation.
DECLARE @oldID bigint;
SELECT @oldID = MAX(id) FROM identtest WHERE id < 3000;

BEGIN TRANSACTION;
BEGIN TRY
    SET IDENTITY_INSERT identtest ON;
    INSERT INTO identtest (id, x) VALUES (4000, 'y');
    DBCC CHECKIDENT ('identtest', RESEED, @oldID) WITH NO_INFOMSGS;
    SET IDENTITY_INSERT identtest OFF;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    DBCC CHECKIDENT ('identtest', RESEED, @oldID) WITH NO_INFOMSGS;
END CATCH

IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;

DBCC CHECKIDENT ('identtest', NORESEED);
All of that being said, you may be better off switching your logic around a bit. Would it be possible for you to set your "normal" inserts to start at, say, 20000000 and your IDENTITY_INSERT rows to start at 10000000? That way, regardless of which kind of insert runs, the seed stays at the larger "normal" value. It sounds like you may be too far in to change this, but you never know.
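As a minimal sketch of that arrangement, using a hypothetical table named identtest2 (the names and ranges are illustrative, not your actual schema):
-- Hypothetical layout: the identity seed starts at 20000000, so "normal"
-- inserts never fall into the manually supplied range below it.
CREATE TABLE identtest2
(
    id bigint IDENTITY(20000000, 1) PRIMARY KEY,
    x  varchar(10) NOT NULL
);

-- Normal insert: generates 20000000, 20000001, ...
INSERT INTO identtest2 (x) VALUES ('normal');

-- Manual insert in the reserved lower range; no reseed is needed afterward
-- because the seed is already above this range.
SET IDENTITY_INSERT identtest2 ON;
INSERT INTO identtest2 (id, x) VALUES (10000000, 'manual');
SET IDENTITY_INSERT identtest2 OFF;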
The warnings you're seeing most likely come from the sys.sysdepends view.
If you script it out using
EXEC sys.sp_helptext @objname = N'sys.sysdepends'
The definition has a bunch of converts and other nonsense going on.
CREATE VIEW sys.sysdepends AS
SELECT
    id        = object_id,
    depid     = referenced_major_id,
    number    = convert(smallint,
                    case when objectproperty(object_id, 'isprocedure') = 1 then 1 else column_id end),
    depnumber = convert(smallint, referenced_minor_id),
    status    = convert(smallint, is_select_all * 2 + is_updated * 4 + is_selected * 8),
    deptype   = class,
    depdbid   = convert(smallint, 0),
    depsiteid = convert(smallint, 0),
    selall    = is_select_all,
    resultobj = is_updated,
    readobj   = is_selected
FROM sys.sql_dependencies
WHERE class < 2
UNION ALL
SELECT -- blobtype dependencies
    id = object_id, depid = object_id,
    number = convert(smallint, column_id), depnumber = convert(smallint, type_column_id),
    status = convert(smallint, 0), deptype = sysconv(tinyint, 1),
    depdbid = convert(smallint, 0), depsiteid = convert(smallint, 0),
    selall = sysconv(bit, 0), resultobj = sysconv(bit, 0), readobj = sysconv(bit, 0)
FROM sys.fulltext_index_columns
WHERE type_column_id IS NOT NULL
sys.objects, on the other hand, is fairly straightforward.
CREATE VIEW sys.objects AS
SELECT  name,
        object_id,
        principal_id,
        schema_id,
        parent_object_id,
        type,
        type_desc,
        create_date,
        modify_date,
        is_ms_shipped,
        is_published,
        is_schema_published
FROM    sys.objects$
The view definition for sys.sysdepends causes the same warnings when queried on its own.
SELECT *
FROM sys.sysdepends
In general, if you want to control data types and indexes, and to have some ability to tune performance when referencing system views or tables, your best bet is to dump them into a temp table first.
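As a minimal sketch of that approach, assuming you only need a couple of columns from sys.sysdepends and that dbo.SomeProc stands in for whatever object you are actually investigating:
-- Materialize the system view into a temp table with explicit data types,
-- then add whatever index the real query needs.
SELECT  id      = CONVERT(int, d.id),
        depid   = CONVERT(int, d.depid),
        deptype = CONVERT(int, d.deptype)
INTO    #depends
FROM    sys.sysdepends AS d;

CREATE CLUSTERED INDEX cx_depends ON #depends (id, depid);

-- Subsequent queries hit the temp table instead of the system view.
SELECT * FROM #depends WHERE id = OBJECT_ID(N'dbo.SomeProc');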
Best Answer
When you exhaust the upper bound of INT, every new insert will fail with an arithmetic overflow error.
Outside of the MDW use case
Converting to BIGINT is far safer (IMHO) than what some people do: go back to 0 and fill in the gaps, or go to -2 billion and just delay dealing with it until you use up the same number of values one more time. If 2 billion rows isn't enough, neither is 4 billion. If you use compression, changing the data type will actually save you some space for values below 2 billion.
However, you will run into issues if this column is part of a primary key constraint, is referenced by foreign keys, has other constraints, and so on. You'll need to perform some extra work in addition to just changing the data type.
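As a hedged sketch of the simple case, assuming a hypothetical dbo.identtest table whose identity column has no primary key, foreign key, index, or other constraint depending on it:
-- Widen the identity column in place; the IDENTITY property is preserved.
-- This is a size-of-data operation, so expect logging and blocking on large
-- tables, and it fails outright if constraints or indexes reference the column.
ALTER TABLE dbo.identtest ALTER COLUMN id bigint NOT NULL;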
For MDW specifically
The problem with the solution above, for MDW specifically, is that the data collector package (SSIS Packages/Data Collector/PerfCountersUpload) includes some logic that relies on the underlying data type being INT, so the collector jobs fail because of the mismatch. Reseeding into the negative range instead should buy you some time.
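A minimal sketch of that reseed, assuming the overflowing identity lives in snapshots.performance_counter_values inside a database named MDW (both names are assumptions; target whichever collection-set table and warehouse database are actually hitting the limit):
USE MDW;  -- assumed name of the management data warehouse database

-- Restart the identity at the lowest negative INT so new rows count upward
-- toward 0, buying roughly another 2 billion values while the column stays INT.
DBCC CHECKIDENT ('snapshots.performance_counter_values', RESEED, -2147483648);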
Between now and when you start approaching 0 (you should set up some kind of monitoring to check the current value and send an alert when you get close), go in, clean up, and make sure there are no positive values left in the table.
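A sketch of the kind of check that monitoring could run, using the same assumed table name as above (hook the result into whatever alerting you already have, such as a SQL Server Agent job that sends mail):
-- Alert once the identity value climbs to within 100 million of 0,
-- i.e. the negative range is close to being used up.
DECLARE @current bigint =
    CONVERT(bigint, IDENT_CURRENT('snapshots.performance_counter_values'));

IF @current > -100000000
    RAISERROR('Identity on performance_counter_values is approaching 0: %I64d', 16, 1, @current);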
Then, when you start approaching 2 billion again, clean out all the negative values, and reseed.
Is this annoying? Absolutely. I think this might be one of several reasons why you don't see very wide MDW adoption.