Since no one else has posted this feedback as an answer:
Why are the results different from a job and from SSMS?
Apparently SQL Agent uses EXECUTE AS USER to run job steps if you specify a user via the @database_user_name parameter of the sp_add_jobstep stored procedure. In my example above, this behaviour can be replicated in SSMS by logging in as a sysadmin and running this script:
use TestDatabase1;
GO
execute as user = 'TestUser';
GO
select top 1 * from TestDatabase2.dbo.TestTable;
GO
REVERT
GO
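For reference, this is roughly how such a job step ends up running under a database user context (a sketch; the job and step names are hypothetical, and @database_user_name is the parameter that triggers the impersonation):
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'TestJob',                -- hypothetical job name
    @step_name = N'Query TestDatabase2',   -- hypothetical step name
    @subsystem = N'TSQL',
    @database_name = N'TestDatabase1',
    @command = N'select top 1 * from TestDatabase2.dbo.TestTable;',
    @database_user_name = N'TestUser';     -- Agent runs the step via EXECUTE AS USER = 'TestUser'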
Note that if we change execute as user to execute as login in the SSMS snippet, the error goes away; but apparently SQL Agent uses execute as user.
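For comparison, a minimal sketch of the login variant (assuming a server-level login named TestLogin exists and maps to TestUser; login impersonation is server-scoped, which is why the cross-database query succeeds):
use TestDatabase1;
GO
execute as login = 'TestLogin';  -- TestLogin is an assumed login name
GO
select top 1 * from TestDatabase2.dbo.TestTable;  -- works: the login context spans databases
GO
REVERT
GO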
According to MSDN: While the [EXECUTE AS USER] context switch to the database user is active, any attempt to access resources outside of the database will cause the statement to fail. This includes USE database statements, distributed queries, and queries that reference another database that uses three- or four-part identifiers.
More information is available at @RemusRusanu's helpful link: Trouble With Database Permissions for sp_send_mail
How do I make the job work (instead of failing)?
ALTER DATABASE TestDatabase1 SET TRUSTWORTHY ON; -- takes effect immediately; RECONFIGURE is not needed for ALTER DATABASE
GO
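You can confirm the setting took effect by checking sys.databases:
SELECT name, is_trustworthy_on
FROM sys.databases
WHERE name = N'TestDatabase1';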
Other options are also considered in Remus Rusanu's link above.
If CreatedOn is not the clustered index (and especially if it is not indexed at all), and you are inserting new data sequentially (e.g. new rows have a newer CreatedOn), you should consider making that the clustered index. Then you shouldn't have any contention when you are deleting old data and inserting new data, since it will be on a completely different set of pages. Unless for some reason your inserts or the delete are escalating locks. If that is the case, then instead of one big transaction:
DELETE [massive amount of rows]
WHERE CreatedOn < [somedate];
You should consider breaking that up into chunks, as you suggested; but just doing that in a loop without any other changes won't have much of an effect, since it may still operate as a single transaction (and your problem might be exacerbated by transaction log writes, and perhaps poor autogrowth settings for the transaction log). My method is typically as follows (with no "outer" transaction at play): commit each set in its own transaction, and checkpoint or back up the log between transactions. This lets user queries get in between your transactions and also reduces the impact on the log. Note that I picked 1000 arbitrarily; that might not be the right number for your scenario, but I'm pretty sure 10 is not.
DECLARE @SomeDate DATE = DATEADD(DAY, -1, GETUTCDATE());

BEGIN TRANSACTION;
SELECT 1; -- prime @@ROWCOUNT so the loop body runs at least once
WHILE @@ROWCOUNT > 0
BEGIN
    -- commit the previous chunk first, so the DELETE is the last
    -- statement before the WHILE condition re-checks @@ROWCOUNT
    COMMIT TRANSACTION;
    -- if in simple recovery: CHECKPOINT;
    -- otherwise: BACKUP LOG ...;
    BEGIN TRANSACTION;
    DELETE TOP (1000) FROM dbo.MyTable
        WHERE CreatedOn < @SomeDate;
END
COMMIT TRANSACTION; -- close the transaction left open by the final, empty delete
But again, this will help your process most if CreatedOn is clustered, or at least indexed (having no context, I don't know if your overall workload would be better off with that as clustered, but I do know that this query will work much better if it is).
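If you do decide to cluster on CreatedOn, a minimal sketch (assuming dbo.MyTable currently has no clustered index; if your primary key is clustered, you would have to rebuild it as nonclustered first):
CREATE CLUSTERED INDEX CIX_MyTable_CreatedOn
    ON dbo.MyTable (CreatedOn);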
Best Answer
In your case, since you are just doing reporting, it won't make much difference whether you use inline SQL or a stored procedure.
Aaron Bertrand provides a really good answer to a related question:
Stored procedures vs. inline SQL