Personally, I think the biggest caveat would be the difficulty of keeping the list of jobs organized. As far as I'm aware, you can't create folders to organize jobs, so a large number of them would be cumbersome. I'm not 100% sure of this, though, since none of my servers has more than a dozen or so jobs. The Task Scheduler in Server 2008 and later allows for much easier organization, IMO, and in general has much better functionality than previous versions. I'm sure third-party apps do an even better job. I would cry if I had to use Server 2003's Task Scheduler or at.exe.
The second caveat I can think of would be potentially putting too much load on the SQL Server. The Agent itself is a small program, but running a long or complex task can easily consume a lot of resources, and those resources are then not available to the SQL engine. Since the SQL engine will, by default, claim most of the available system memory for itself, this could be a problem.
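One common mitigation for that is to cap the engine's memory with `sp_configure` so the OS and any Agent job steps keep some headroom. A minimal sketch; the 4096 MB figure is just a placeholder you would size for your own server:

```sql
-- Cap the buffer pool so memory is left over for Agent jobs and the OS.
-- 4096 MB is an example value only -- tune it for your hardware.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```

The setting takes effect without a restart, though SQL Server releases memory gradually rather than all at once.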
Third, backups may be an issue. You will need to back up not only the filesystem but also the msdb database to allow for recovery of the jobs (or use something to script the jobs out to a text file). This adds a layer of complexity to disaster recovery.
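For the msdb side of that, something along these lines would do; the backup path here is a placeholder, and SSMS can also script individual jobs out to .sql files for you:

```sql
-- Agent job definitions live in msdb, so include it in the backup routine.
-- The disk path is a placeholder -- point it at your backup location.
BACKUP DATABASE msdb
    TO DISK = N'D:\Backups\msdb.bak'
    WITH INIT, CHECKSUM;

-- Quick sanity check of what job definitions are stored there:
SELECT name, enabled, date_created
FROM msdb.dbo.sysjobs;
```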
Finally, you wouldn't want to be in a position where you're paying for a SQL Server license just to run SQL Server Agent. If the database gets decommissioned, you'll need a plan for migrating off SQL Server Agent.
You can use the query below to determine memory usage at the database level:
SELECT
    CASE WHEN [database_id] = 32767
         THEN N'Resource Database'
         ELSE DB_NAME ([database_id]) END AS [DatabaseName],
    COUNT (*) * 8 / 1024 AS [MBUsed],
    SUM (CAST ([free_space_in_bytes] AS BIGINT)) / (1024 * 1024) AS [MBEmpty]
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id];
GO
If you want to drill down further and see which objects in a given database from the query above are using the most memory, use the query below:
EXEC sp_MSforeachdb
N'IF EXISTS (SELECT 1 FROM (SELECT DISTINCT DB_NAME ([database_id]) AS [name]
FROM sys.dm_os_buffer_descriptors) AS names WHERE [name] = ''?'')
BEGIN
USE [?]
SELECT
''?'' AS [Database],
OBJECT_NAME (p.[object_id]) AS [Object],
p.[index_id],
i.[name] AS [Index],
i.[type_desc] AS [Type],
--au.[type_desc] AS [AUType],
--DPCount AS [DirtyPageCount],
--CPCount AS [CleanPageCount],
--DPCount * 8 / 1024 AS [DirtyPageMB],
--CPCount * 8 / 1024 AS [CleanPageMB],
(DPCount + CPCount) * 8 / 1024 AS [TotalMB],
--DPFreeSpace / 1024 / 1024 AS [DirtyPageFreeSpace],
--CPFreeSpace / 1024 / 1024 AS [CleanPageFreeSpace],
([DPFreeSpace] + [CPFreeSpace]) / 1024 / 1024 AS [FreeSpaceMB],
CAST (ROUND (100.0 * (([DPFreeSpace] + [CPFreeSpace]) / 1024) / (([DPCount] + [CPCount]) * 8), 1) AS DECIMAL (4, 1)) AS [FreeSpacePC]
FROM
(SELECT
allocation_unit_id,
SUM (CASE WHEN ([is_modified] = 1)
THEN 1 ELSE 0 END) AS [DPCount],
SUM (CASE WHEN ([is_modified] = 1)
THEN 0 ELSE 1 END) AS [CPCount],
SUM (CASE WHEN ([is_modified] = 1)
THEN CAST ([free_space_in_bytes] AS BIGINT) ELSE 0 END) AS [DPFreeSpace],
SUM (CASE WHEN ([is_modified] = 1)
THEN 0 ELSE CAST ([free_space_in_bytes] AS BIGINT) END) AS [CPFreeSpace]
FROM sys.dm_os_buffer_descriptors
WHERE [database_id] = DB_ID (''?'')
GROUP BY [allocation_unit_id]) AS buffers
INNER JOIN sys.allocation_units AS au
ON au.[allocation_unit_id] = buffers.[allocation_unit_id]
INNER JOIN sys.partitions AS p
ON au.[container_id] = p.[partition_id]
INNER JOIN sys.indexes AS i
ON i.[index_id] = p.[index_id] AND p.[object_id] = i.[object_id]
WHERE p.[object_id] > 100 AND ([DPCount] + [CPCount]) > 12800 -- Taking up more than 100MB
ORDER BY [FreeSpacePC] DESC;
END';
Wrap the queries above in stored procedures scheduled to run periodically so you collect the data over time; that will help you analyse trends rather than a single snapshot.
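As a sketch of that, the first query could be wrapped like this and then called from a scheduled Agent job; the table and procedure names here are placeholders, not anything standard:

```sql
-- Persist per-database buffer usage to a history table for trend analysis.
-- dbo.BufferUsageHistory and dbo.CollectBufferUsage are example names.
CREATE TABLE dbo.BufferUsageHistory (
    CollectedAt  DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName SYSNAME   NOT NULL,
    MBUsed       BIGINT    NOT NULL
);
GO

CREATE PROCEDURE dbo.CollectBufferUsage
AS
BEGIN
    INSERT INTO dbo.BufferUsageHistory (DatabaseName, MBUsed)
    SELECT
        CASE WHEN [database_id] = 32767
             THEN N'Resource Database'
             ELSE DB_NAME ([database_id]) END,
        COUNT (*) * 8 / 1024
    FROM sys.dm_os_buffer_descriptors
    GROUP BY [database_id];
END;
GO
```

An Agent job step running `EXEC dbo.CollectBufferUsage;` every few minutes would then build up the history.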