First, you can technically do it with DTC or linked server transactions - it's just not supported, and it's a bad idea for performance anyway. Let's talk about a better method.
The way the question is written, it sounds like you're doing archival - moving data from the "current" table (Server1.Table1) to an "archive" table (Server2.Table2). In that case, it's okay if the data lives in both places for a short period of time, so I'd break it up into two separate tasks:
Phase 1: The Copy - Insert old data into server2.table2 from server1.table1. Assuming this is an archival process, you could have this as a scheduled job that runs nightly at midnight (or whatever day/times work for you.)
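The copy phase might look something like this - a hedged sketch, assuming a linked server named [Server2], an ID-based high-water mark, and hypothetical table/column names (ArchiveDB, Table1/Table2, ID, CreatedDate, Payload):

```sql
-- Phase 1 sketch: copy rows past the cutoff to the archive server.
-- Using the archive's max ID as a high-water mark keeps reruns idempotent
-- without an expensive cross-server NOT EXISTS check.
DECLARE @MaxArchivedID bigint;

SELECT @MaxArchivedID = ISNULL(MAX(ID), 0)
FROM [Server2].[ArchiveDB].dbo.Table2;

INSERT INTO [Server2].[ArchiveDB].dbo.Table2 (ID, CreatedDate, Payload)
SELECT ID, CreatedDate, Payload
FROM dbo.Table1
WHERE ID > @MaxArchivedID
  AND CreatedDate < DATEADD(DAY, -90, SYSDATETIME());  -- your retention window
```

Run that as a SQL Server Agent job step on whatever schedule fits your archival window.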
Phase 2: The Delete - Reading from the read-only replica of server2.table2, delete data that still exists in server1.table1. Do this as a scheduled job that runs X minutes after phase 1, every X minutes, for an hour (or however often you want to retry.) This way, if the read-only replica is running several minutes behind, your delete will still happen eventually.
It's important to read from the read-only replica rather than server2 directly - after all, your goal is to make sure the data is actually protected. (This is why the cross-database stuff isn't supported.)
To design this for performance, I wouldn't actually join between server2.table2 and server1.table1, either - that's a recipe for a blocking mess. Instead, grab the max ID in server2.table2, and that will tell you the data to delete on server1. If record ID# 500 has made it to server2.table2, then you can delete all IDs <= 500 on server1.table1. You can use the same trick with dates. Bonus points if you use the fast ordered delete technique.
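Putting the max-ID trick and the fast ordered delete together, the delete phase could be sketched like this - assuming a hypothetical linked server [Server2RO] that points at the readable secondary, and the same hypothetical table names as above:

```sql
-- Phase 2 sketch: get the high-water mark from the READ-ONLY replica,
-- then delete on Server1 in small ordered batches - no cross-server join.
DECLARE @MaxArchivedID bigint;

SELECT @MaxArchivedID = MAX(ID)
FROM [Server2RO].[ArchiveDB].dbo.Table2;

IF @MaxArchivedID IS NULL RETURN;  -- nothing archived yet

WHILE 1 = 1
BEGIN
    -- Fast ordered delete: delete through a TOP (...) ORDER BY CTE so the
    -- delete walks the clustered index in order instead of scanning.
    WITH del AS
    (
        SELECT TOP (5000) *
        FROM dbo.Table1
        WHERE ID <= @MaxArchivedID
        ORDER BY ID
    )
    DELETE FROM del;

    IF @@ROWCOUNT = 0 BREAK;  -- done - everything <= the mark is gone
END;
```

Batch size is a tuning knob: small enough to keep lock escalation and log growth in check, large enough to finish in your window.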
If you're archiving individual records rather than going by an ID or date field, you'll still have to join directly between the tables - just don't join on server2 directly, but instead its read-only replica.
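For that record-by-record case, a sketch of the join - again assuming a hypothetical [Server2RO] linked server whose connection targets the readable secondary (for example, via ApplicationIntent=ReadOnly against the AG listener):

```sql
-- Delete only rows that have verifiably landed on the read-only replica.
-- Expect this to be slower than the max-ID approach - the remote join has
-- to pull IDs across the wire.
DELETE t1
FROM dbo.Table1 AS t1
INNER JOIN [Server2RO].[ArchiveDB].dbo.Table2 AS t2
    ON t2.ID = t1.ID;
```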
Yes, this is more work than a single delete statement wrapped in a transaction, but as you start to push performance and availability boundaries (like scaling data across multiple servers with AlwaysOn AGs), you have to do more of this stuff yourself rather than hitting the easy button, heh.
Best Answer
The article isn't well worded, but here's what I think it's saying.
Beginning scenario - DatabaseA is in an AG, is currently primary on Server1, and can fail over to Server2. DatabaseB can only run on Server1.
T1 - Transaction begins
T2 - Row inserted into DatabaseA (log not yet sent to Server2 though)
T3 - Row inserted into DatabaseB
T4 - Transaction commit called
T5 - DatabaseB commits transaction (since the two DBs commit individually)
T6 - DatabaseA fails over before log is sent
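The timeline above corresponds to an ordinary cross-database transaction on Server1 - a sketch with hypothetical database and table names:

```sql
-- Both databases live on Server1; DatabaseA is in the AG, DatabaseB is not.
BEGIN TRANSACTION;                               -- T1
INSERT INTO DatabaseA.dbo.TableA (ID) VALUES (1); -- T2: log block not yet sent to Server2
INSERT INTO DatabaseB.dbo.TableB (ID) VALUES (1); -- T3
COMMIT TRANSACTION;                              -- T4: each database commits on its own
-- T5: DatabaseB hardens its commit locally
-- T6: DatabaseA fails over before its log reaches Server2 - its row is lost there
```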
Since the log was never sent, DatabaseA wouldn't have the row but DatabaseB would. If synchronous mode is being used, that particular sequence can't happen, because the commit can't complete until the log for the row has been sent. However, it may still be possible for the commit on DatabaseB to succeed and for DatabaseA to fail over before the commit happens there, which could lead to a rollback in DatabaseA instead of a commit. I can't say with certainty that you can get into the same situation with synchronous mode, but if you can, that's how it would likely happen.