The fastest way would be to create a cluster and join your current standalone server to the cluster as a new node. Set the database to the full recovery model, take a full backup and a transaction log backup, and restore them to the instance on the other node WITH NORECOVERY.
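A sketch of those steps in T-SQL; the database name and backup paths are placeholders:

```sql
-- On the current standalone server: switch to full recovery,
-- then take a full backup followed by a log backup.
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'\\backupshare\MyDb_full.bak';
BACKUP LOG MyDb TO DISK = N'\\backupshare\MyDb_log.trn';

-- On the instance on the other node: restore both,
-- leaving the database in a restoring state.
RESTORE DATABASE MyDb FROM DISK = N'\\backupshare\MyDb_full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'\\backupshare\MyDb_log.trn' WITH NORECOVERY;
```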
If you have a good (most recent) full backup, it would be possible to set the recovery model to full, take a differential backup, and then take a log backup, using the last full backup from the simple recovery model as your base. The differential should bridge the LSN gap.
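A minimal sketch of that approach, assuming the last full backup (taken under simple recovery) has already been restored WITH NORECOVERY on the target; names and paths are placeholders:

```sql
-- On the source: switch to full recovery, then take a differential
-- (based on the existing full backup) plus a log backup.
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'\\backupshare\MyDb_diff.bak' WITH DIFFERENTIAL;
BACKUP LOG MyDb TO DISK = N'\\backupshare\MyDb_log.trn';

-- On the target, where the old full backup was restored WITH NORECOVERY:
RESTORE DATABASE MyDb FROM DISK = N'\\backupshare\MyDb_diff.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'\\backupshare\MyDb_log.trn' WITH NORECOVERY;
```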
To answer your questions:
I'm not sure why you're switching the database into SINGLE_USER, except that you want no changes to be made. If you add the server as a node as I've set forth, this won't be an issue. Otherwise, yes, you'll need downtime to migrate the database. Your other option would be to set the recovery model to full, set up database mirroring to the new instance on the cluster, and then cut over during a downtime window, which would require much less time.
That's entirely possible to do, but in your initial question you said you didn't want to do this.
You must restore at least one log or differential backup to close the LSN gap as much as possible.
I would not use the wizard. "Join only" means you've pre-staged a database that is currently in NORECOVERY and it only needs to join the availability group. The wizard will want to pre-stage the databases for you and will ask you for shared locations, etc. Like I said, it's best to pre-stage and not use the wizard, for maximum flexibility.
Msg 19405, Level 16, State 17, Line 3 Failed to create, join or add
replica to availability group 'AGName', because node 'Node3' is a
possible owner for both replica 'Node3\ReadOnly' and
'Primary/Primary'. If one replica is failover cluster instance, remove
the overlapped node from its possible owners and try again.
This happens for two main reasons that I've witnessed.
Reason #1 - The resource/group is set to have ownership on the node in error
Sometimes (for a multitude of reasons) resources and resource groups in Windows clustering won't have the same ownership. The best way to diagnose this error is to first check what SQL Server (which calls the Windows clustering APIs) thinks the cluster nodes are:
SELECT * FROM sys.dm_os_cluster_nodes
Once we know what is in the cluster, check via PowerShell what the cluster thinks the ownership is for the FCI:
Get-ClusterOwnerNode -Resource "SQLFCIInstanceName"
This will return the nodes that could own the cluster resource. Chances are it'll include the node name of a node that we know shouldn't really be there.
To fix this, run the following PowerShell command:
Get-ClusterResource -Name "SQLFCIInstanceName" | Set-ClusterOwnerNode -Owners NodeName1,NodeName2
Double-check by running the first PowerShell command to verify ownership, then try to add the replica to the AG again.
Reason #2 - Node Names + Language != Node Names
If the language used wasn't US English, there's a good chance that the node names (when compared to each other) won't compare properly. This causes a whole bunch of other issues with the cluster outside of the AG as well (and it does).
This can be tested by taking the node names, converting them to upper or lower case, and then comparing them against themselves. That sounds like it should always work... but some languages have special characters that don't survive the UPPER and LOWER conversions well.
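A minimal T-SQL sketch of that test, using sys.dm_os_cluster_nodes from above; the binary collation named here is just one way to force an exact comparison:

```sql
-- Compare every pair of node names: flag pairs that match after UPPER()
-- but differ byte-for-byte under a binary collation. Any rows returned
-- point at characters that don't round-trip case conversion cleanly.
SELECT n1.NodeName AS node_a,
       n2.NodeName AS node_b
FROM sys.dm_os_cluster_nodes AS n1
CROSS JOIN sys.dm_os_cluster_nodes AS n2
WHERE UPPER(n1.NodeName) = UPPER(n2.NodeName)
  AND n1.NodeName <> n2.NodeName COLLATE Latin1_General_BIN2;
```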
To perform a server migration to an AG using log shipping, you should log ship to both servers. When you're ready to actually cut over to the new AG, follow these steps:
1. Take a tail-of-the-log backup on the source: BACKUP LOG MyDb...WITH NORECOVERY (this leaves the source database in a restoring state, so no further changes can be made).
2. Restore that final log backup to both new servers: RESTORE LOG MyDb...WITH NORECOVERY.
3. On the server that will be the new primary, bring the database online: RESTORE DATABASE MyDb WITH RECOVERY.
I'd recommend practicing this routine on your QA system before the real production upgrade. There are a lot of scripts/statements to be run, and lots of switching between servers to do it. It's best to have those steps carefully documented and the scripts staged in advance.