In my opinion, this is one of those operations that would definitely benefit from PowerShell. All you'd need to do is gather the list of job steps that have your old string (in this case, your old location) and replace it with the new string (read: new location).
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

$OldLocation   = "<YourOldLocation>"
$NewLocation   = "<YourNewLocation>"
$SqlServerName = "localhost" # by default this will look at the local default instance

$SqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server($SqlServerName)

# loop through all of the SQL Server Agent Jobs on the instance
foreach ($Job in $SqlServer.JobServer.Jobs) {
    # loop through each Job Step on the current Job
    foreach ($JobStep in $Job.JobSteps) {
        # test to see if the Job Step Command text contains your original string
        if ($JobStep.Command.Contains($OldLocation)) {
            $JobStep | Select-Object Parent, Name, Command

            # the below two lines would make the change to the new string and apply it
            #$JobStep.Command = $JobStep.Command.Replace($OldLocation, $NewLocation)
            #$JobStep.Alter()
        }
    }
}
NOTE: This is sample code and should be thoroughly tested in a non-production environment. Ensure that your previous data is backed up and that you are positive the change is the desired outcome.
All this code does is loop through all of the jobs on a specific instance. If a job step's command contains a certain string, it will show you which job the step belongs to, the name of the step, and the command containing the string. I have deliberately commented out the modification lines so that the code only identifies which steps contain your string.
You can also narrow your search by adding additional conditions to the if block. For instance, if you just want to check CmdExec job steps, add the following with an -and operator:
$JobStep.SubSystem -eq [Microsoft.SqlServer.Management.Smo.Agent.AgentSubSystem]::CmdExec
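Putting that together, the test inside the script would look something like this (a sketch, reusing the same $JobStep and $OldLocation variables from the script above; it needs a live SQL Server instance and SMO loaded to run):

```powershell
# only flag CmdExec job steps whose command text contains the old string
if ($JobStep.Command.Contains($OldLocation) -and
    $JobStep.SubSystem -eq [Microsoft.SqlServer.Management.Smo.Agent.AgentSubSystem]::CmdExec) {
    $JobStep | Select-Object Parent, Name, Command
}
```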
This is simply one way to do it. I will stress my point again: make sure you test this far away from production, and ensure that you can revert changes with the necessary backups.
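One way to keep such a backup is to script out the jobs before changing anything. A minimal sketch, assuming the same SMO $SqlServer object from the script above and a hypothetical C:\Temp output folder (job names containing characters that are invalid in file names would need extra handling):

```powershell
# script out every Agent job to a .sql file before changing anything,
# so each job can be recreated if a replacement goes wrong
foreach ($Job in $SqlServer.JobServer.Jobs) {
    $Job.Script() | Out-File -FilePath ("C:\Temp\" + $Job.Name + ".sql")
}
```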
Using partitioning is only going to help your query performance if the partitioning scheme is built to serve your specific queries.
You're going to have to review your query patterns and see how they are accessing the table in order to identify the best approach. The reason for this is you can only partition on a single column (the partitioning key) and this is what will be used for partition elimination.
There are two factors that affect whether partition elimination can occur and how well it will perform:
- Partition key - Partitioning can only occur on a single column, and your query must include that column. For example, if your table is partitioned on date and your query filters on that date column, then partition elimination should occur. However, if you don't include the partition key in the query predicate, the engine cannot perform elimination.
- Granularity - If your partitions are too big, you won't gain any benefit from elimination because the engine will still pull back more data than it needs. However, make them too small and they become difficult to manage.
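As a sketch of the first point, here is what date partitioning and an eliminating query might look like (the table, columns, and boundary values are hypothetical, and this assumes a SQL Server instance to run against):

```sql
-- hypothetical monthly partitioning on a date column
CREATE PARTITION FUNCTION pf_MonthlyDate (date)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2023-02-01', '2023-03-01');

CREATE PARTITION SCHEME ps_MonthlyDate
    AS PARTITION pf_MonthlyDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Sales (
    SaleDate date  NOT NULL,
    Amount   money NOT NULL
) ON ps_MonthlyDate (SaleDate);

-- the predicate references the partitioning key, so the engine
-- can eliminate every partition outside of February
SELECT SUM(Amount)
FROM dbo.Sales
WHERE SaleDate >= '2023-02-01' AND SaleDate < '2023-03-01';
```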
In many ways, partitioning is just like using any other index, with some added benefits. However, you don't realize those benefits unless you're dealing with incredibly large tables. Personally, I don't even consider partitioning until a table is over 250 GB in size. Most of the time, well-defined indexing will cover the use cases on tables smaller than that. Based on your description, you're not seeing huge data growth, so it could be that a properly indexed table will perform just fine.
I would strongly recommend that you review whether partitioning is actually necessary to solve your issues. One would usually partition a very large table for the purpose of:
- Distributing data between different types of disk so that more “active” data can be placed on faster, more expensive storage while less active data is placed on cheaper, slower storage. This is mostly a cost savings measure.
- Assisting in index maintenance for extremely large tables. Since you can rebuild partitions individually, this can assist in keeping indexes properly maintained with minimal impact.
- Leveraging partitioning for an improved archival process. See sliding window partitioning.
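The sliding window pattern in that last point relies on partition switching being a metadata-only operation. A minimal sketch (the table names and partition number are hypothetical; the staging table must be empty, match the source table's structure, and sit on the same filegroup as the partition being switched):

```sql
-- staging table matching dbo.Sales so the oldest partition can be
-- switched out; the switch itself is a metadata change, so it is fast
CREATE TABLE dbo.Sales_Archive (
    SaleDate date  NOT NULL,
    Amount   money NOT NULL
);

ALTER TABLE dbo.Sales
    SWITCH PARTITION 1 TO dbo.Sales_Archive;
```

Check the documented switching requirements (constraints, indexes, filegroup placement) before relying on this in an archival job.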
Based on the BOL article you referenced, the date is not going to change unless a Service Pack or CU creates new system objects. However, the date returned may not necessarily coincide with the date you install that CU or SP.
The only time you will notably see that value change is when you apply a service pack or upgrade SQL Server. It will likely change at the same time you see the ResourceVersion value change, and if I recall correctly, I have only seen that version number change with service packs. I have an instance of SQL Server 2008 R2 that shows a build of 10.50.6220, but the resource database shows 10.50.6000. The date returned is 08/09/2014 13:40:43, which is not when I applied the SP but probably the time frame in which Microsoft's SQL Server team finalized their testing; SP3 was released 09/26/2014.
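For reference, both of the resource database values discussed above can be pulled with SERVERPROPERTY (these two property names are documented; run this against your instance to compare with the server's build number):

```sql
-- version and last-updated date of the hidden resource database
SELECT SERVERPROPERTY('ResourceVersion')            AS ResourceVersion,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdateDateTime;
```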