SQL Server AlwaysOn Availability Groups – T-SQL Jobs Executed on Same Instance

availability-groups, jobs, sql-server, sql-server-2017

Recently, our blocked-processes dashboard has been reporting blocked processes around the time we run our statistics update.

The reason was quickly found: an update statistics job step (T-SQL) that starts on both the secondary and the primary SQL Server instance.
The job updates several statistics on the same database, which is part of an AlwaysOn Availability Group. I would expect this to fail on the secondary instance.
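
For context, below is a minimal sketch of the kind of query that surfaces the blocking while it happens; sys.dm_exec_requests and sys.dm_exec_sessions are standard DMVs, and the column list is only illustrative.

-- Sketch: requests that are currently blocked, together with their blocker.
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time,
    DB_NAME(r.database_id) AS database_name,
    s.program_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
WHERE r.blocking_session_id <> 0;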

A quick rundown of the failover history:

Server A, which should stay active due to licensing (referred to below as the Active Server), failed over unexpectedly to Server B (the Passive Server) on 20/02 at 9PM.

After the unplanned failover, we did another (but this time planned) manual failover back to the Active Server on 21/02 at 12PM.

Job history

Before the first failover, all was good, and the active server was the only one running the job.

[Screenshot: job history showing one job running.]
We see the stat updates running on the active side (which is the primary replica at the time).

During the short time that the passive server was the primary replica, we had no monitoring data, and the job history was cleared.
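
What history does survive on each instance can be pulled straight from msdb; a sketch, with the job name as a placeholder:

-- Sketch: remaining run history for the stats job on this instance.
SELECT j.name, h.step_id, h.step_name, h.run_date, h.run_time, h.run_status
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = h.job_id
WHERE j.name = N'Update Statistics'   -- placeholder job name
ORDER BY h.instance_id DESC;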

After the failover back to the 'normal' state (the passive node was primary for less than 24 hours), the job step on the passive instance has also been starting, and it runs against the active instance.

[Screenshot: both jobs shown as running. I killed the sessions.]

Now the interesting part, to me, is that both jobs run against the active server, as if the job were using the listener to reach the database. But it could be an entirely different reason.
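
If it happens again, one way to test the listener theory is to look at where the Agent sessions actually come from while they are running; a sketch, assuming the sessions are identifiable by their program_name:

-- Sketch: origin of sessions started by SQL Server Agent T-SQL job steps.
SELECT
    s.session_id,
    s.host_name,           -- machine the session was started from
    s.program_name,        -- 'SQLAgent - TSQL JobStep (Job 0x...)'
    c.client_net_address,  -- IP the connection came from
    c.local_net_address    -- server-side IP that accepted it (a listener IP would show up here)
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
WHERE s.program_name LIKE N'SQLAgent - TSQL JobStep%';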

There is a copy job PowerShell task running nightly at 01 AM (dbatools):

powershell.exe Copy-DbaAgentJob -ExcludeJob "CopyJobs,CopyLogins" -Source INDCSPSQLA01 -Destination INDCSPSQLP01 -Force

My guess is now aimed at the one time the job copy ran from the active node (then secondary) to the passive node (then primary) with -Force. This happened on 21/02 at 01 AM.
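
To see what that copy may have changed, the stored job definition on each node can be compared; a sketch, with the job name as a placeholder:

-- Sketch: job step definition as stored on this instance, for comparison between nodes.
SELECT j.name, j.date_modified, js.step_id, js.subsystem, js.database_name, js.server, js.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS js
    ON js.job_id = j.job_id
WHERE j.name = N'Update Statistics';  -- placeholder job name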

The Question

Why is the job step on the passive instance executing on the active instance's database?

Checklist

On both instances, the job target is local:


EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
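
The same targeting can be double-checked in msdb, where server_id = 0 means the local server; a sketch, again with a placeholder job name:

-- Sketch: confirm the job is targeted at the local server (server_id = 0).
SELECT j.name, js.server_id
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobservers AS js
    ON js.job_id = j.job_id
WHERE j.name = N'Update Statistics';  -- placeholder job name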

The server name is correct:

select name from sys.servers 
select @@SERVERNAME

Both return the passive server.
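
As an extra sanity check (a sketch), @@SERVERNAME can also be compared with the name the instance reports for itself:

-- Sketch: both values should be the passive server's name.
SELECT @@SERVERNAME                  AS servername_variable,
       SERVERPROPERTY('ServerName')  AS serverproperty_name;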

The job IDs of active and passive are different:

--08C63F07-0853-41DA-BC88-8FDF44AE491F -- passive
--E8C88965-C581-4E06-B651-CC10637FCEEF -- active

Both jobs use the database in question in their step:

@database_name=N'DB1', 

–> The database should not be accessible on the passive instance, so the step should fail there.

No readable secondaries


The database is not accessible on the passive instance.

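The local replica role and readable-secondary setting can also be confirmed with a query on each node; a sketch using the standard AG catalog view and DMV:

-- Sketch: role and readable-secondary setting of the local replica.
SELECT ar.replica_server_name,
       ars.role_desc,
       ar.secondary_role_allow_connections_desc
FROM sys.availability_replicas AS ar
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON ars.replica_id = ar.replica_id
WHERE ars.is_local = 1;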

Version of both servers: 14.0.3030.27

T-SQL Job step command example

@subsystem=N'TSQL',
@command=N'update statistics dbo.table with fullscan ...'

Nothing is running on the passive instance when the job starts.

EDIT:

Restarting the agent on the passive node 'fixes' this, resulting in a new error when the step executes:

Unable to connect to SQL Server 'INDCSPSQLP01'. The step failed.

As a result, it no longer updates statistics on the primary instance.

Hostnames = Passive & Active server; the jobs were visibly running.

Application info

SQLAgent - TSQL JobStep (Job 0x9D358B2EF6C53C4BAD6A61CA87D51BF5 : Step 1)   
SQLAgent - TSQL JobStep (Job 0x6589C8E881C5064EB651CC10637FCEEF : Step 1)   
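
The hex token in program_name is the job_id in binary form, so it can be mapped back to the job IDs listed above; a quick sketch:

-- Sketch: map the program_name token back to a job.
-- 0x6589C8E8... converts to E8C88965-C581-4E06-B651-CC10637FCEEF, i.e. the active job's ID above.
SELECT name, job_id
FROM msdb.dbo.sysjobs
WHERE job_id = CONVERT(uniqueidentifier, 0x6589C8E881C5064EB651CC10637FCEEF);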

Best Answer

I don't have a diagnosis for why this problem occurred, but if you have jobs running against databases that are in availability groups, it's best to include a first step that uses sys.fn_hadr_is_primary_replica to check whether the job is running on the primary or a secondary replica.

IF (sys.fn_hadr_is_primary_replica('DB1') <> 1)
  BEGIN
    RAISERROR('%s is secondary', 11, 1, @@servername );
  END

Configure this step to quit the job on failure. This is better than trying to run something that fails because it's hitting a secondary.
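
A sketch of how such a guard could be added as the first step of the existing job (the job and step names are placeholders; @on_fail_action = 2 quits the job and reports failure when the check raises its error):

-- Sketch: insert a 'primary replica?' guard as step 1 of the stats job.
EXEC msdb.dbo.sp_add_jobstep
    @job_name          = N'Update Statistics',       -- placeholder job name
    @step_name         = N'Check primary replica',
    @step_id           = 1,
    @subsystem         = N'TSQL',
    @database_name     = N'master',
    @command           = N'IF (sys.fn_hadr_is_primary_replica(''DB1'') <> 1)
                               RAISERROR(''%s is secondary'', 11, 1, @@SERVERNAME);',
    @on_success_action = 3,   -- go to the next step (the actual statistics update)
    @on_fail_action    = 2;   -- quit the job, reporting failure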