If the secondary is up and running, then when a log block is flushed to disk (either because it is full or because of a commit), the record is pushed to the log writer on the primary and, simultaneously, to the log scanner (log reader) process on the primary. The log scanner then communicates with the secondary, and the secondary pulls the transaction from the log scanner on the primary and processes the log record. The primary log writer doesn't push transactions across; it only communicates with the secondary to check that it is up, so that it knows it doesn't have to mark the replica as NOT SYNCHRONIZED.
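For illustration, you can see the synchronization state the primary has recorded for each replica through the documented DMVs:

-- Replica-level view of what the primary thinks each replica's state is.
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
    ON ars.replica_id = ar.replica_id;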
When the secondary is not up, the log writer can't communicate with it, so it marks the replica as NOT SYNCHRONIZED and the records accumulate in the transaction log on the primary. If you look at the log_reuse_wait_desc column in sys.databases, it should show AVAILABILITY_REPLICA, which means the primary is hanging on to all the records.
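For example, a check like this (MyDb is a placeholder for your database name) should report AVAILABILITY_REPLICA while the secondary is down:

-- MyDb is a placeholder; substitute your own database name.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDb';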
Once the secondary is back up, it communicates with the primary to request a log scan. It then processes the transactions and sends progress messages back to the primary indicating the hardened LSN. Presumably the primary then adjusts its MinLSN accordingly, which in turn means the records prior to MinLSN get deleted as checkpoints happen, and hence the VLFs get truncated, releasing space when you do a log backup.
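You can watch that catch-up from the primary; a minimal sketch using the documented per-database DMV:

-- last_hardened_lsn should advance as the secondary hardens log blocks,
-- and truncation_lsn shows how far the primary is allowed to truncate.
SELECT DB_NAME(database_id) AS database_name,
       synchronization_state_desc,
       last_hardened_lsn,
       truncation_lsn
FROM sys.dm_hadr_database_replica_states;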
But yes, the short answer is: if your secondary is down, you need as big a log file as the workload demands for as long as it is down. Once the secondary is back up and synced, you may at some point need to remove the database from the Always On availability group to shrink the log, if it is humongous and you don't want it that big, as sketched below.
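A minimal sketch of that cleanup, assuming placeholder names AGName, MyDb, and a logical log file called MyDb_log:

-- Run on the primary; all object names here are placeholders.
ALTER AVAILABILITY GROUP [AGName] REMOVE DATABASE [MyDb];
GO
USE [MyDb];
GO
-- Shrink the log to roughly 1 GB; pick a target that suits your workload.
DBCC SHRINKFILE (N'MyDb_log', 1024);
GO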
Msg 19405, Level 16, State 17, Line 3 Failed to create, join or add
replica to availability group 'AGName', because node 'Node3' is a
possible owner for both replica 'Node3\ReadOnly' and
'Primary/Primary'. If one replica is failover cluster instance, remove
the overlapped node from its possible owners and try again.
This happens for two main reasons that I've witnessed.
Reason #1 - The resource/group has possible ownership set on the node in error
For a multitude of reasons, resources and resource groups in Windows clustering won't always have the same ownership. The best way to diagnose this error is to first check what SQL Server (which calls the Windows clustering APIs) thinks the cluster nodes are:
SELECT * FROM sys.dm_os_cluster_nodes
Once we know what is in the cluster, check via PowerShell to see what the cluster thinks the ownership is for the FCI:
Get-ClusterOwnerNode -Resource "SQLFCIInstanceName"
This will return the nodes that could own the cluster resource. Chances are it'll include the name of a node that we know shouldn't really be there.
To fix this, run the following PowerShell command:
Get-ClusterResource -Name "SQLFCIInstanceName" | Set-ClusterOwnerNode -Owners NodeName1,NodeName2
Double-check by running the first PowerShell command to verify ownership, then try to add the replica to the AG again.
Reason #2 - Node Names + Language != Node Names
If the language used isn't US English, there's a good chance that the node names won't necessarily compare properly against each other. This would cause a whole bunch of other issues with the cluster outside of the AG (and it does).
This can be tested by taking the node names, converting them to upper or lower case, and then comparing them against themselves. It sounds like it should always work... but some languages have special characters that don't survive the UPPER and LOWER conversions well. A sketch of the test follows.
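A minimal sketch of that round-trip test, reusing the DMV from earlier (illustrative only; a genuinely broken collation may need deeper investigation):

-- Round-trip each node name through UPPER and LOWER and compare against
-- the original; a mismatch hints at a problematic case conversion.
SELECT NodeName,
       UPPER(NodeName) AS UpperName,
       LOWER(NodeName) AS LowerName,
       CASE WHEN LOWER(UPPER(NodeName)) = LOWER(NodeName)
            THEN 'round-trips cleanly'
            ELSE 'case-conversion mismatch'
       END AS CaseCheck
FROM sys.dm_os_cluster_nodes;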
Best Answer
I think the question rests on a wrong assumption.
...
If you have an AG running on a single node, you can remove the database from the AG completely AND still connect to the database via the listener (as long as the clustering part is online, either forced or using Windows Server 2012+ with dynamic quorum).
For a normal connection without any application intent, the listener just functions as a dumb alias for the server. It doesn't check whether the database you're connecting to is part of its AG or not.
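As a quick illustration (MyDb is a placeholder), connect through the listener and run something like this; replica_id in sys.databases is NULL for a database that isn't part of any AG, yet the connection works all the same:

-- Run through the listener after removing the database from the AG.
-- replica_id comes back NULL once the database no longer belongs to an AG.
SELECT name, replica_id
FROM sys.databases
WHERE name = N'MyDb';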