Can I relate the crash to availability mode settings?
No. By the looks of your log messages, you actually lost cluster quorum due to the removal of two nodes' votes and then your witness (file share). Is this a three-node cluster with a file share witness, perchance? In that case, if you pulled these events from one node's event log, it may appear to each node that it has lost communication with all voters. That would generate a similar, if not identical, error footprint to the one you have above. Nobody can talk to anybody, if that is the case.
During that time, quorum will be lost, as you are currently seeing. There is a degree of assumption here, as I'd need far more diagnostic information to pinpoint the cause of the voter removal, but that is why quorum was lost.
Regardless, this appears to be a problem that surfaced in a down cluster, in which case your availability mode would have nothing to do with the WSFC failing.
As for "best practices" for the availability mode to go with, you need to determine requirements for data loss, performance impact, and a few other factors that are best described in this BOL reference on Availability Modes.
Clustering is complex, and there are lots of moving parts (no pun intended). Let me try to break this down into more manageable chunks:
From a terminology perspective, there's your Windows Server Failover Cluster (WSFC), and your SQL Server Failover Cluster Instances (FCI). I try to avoid saying "Cluster" and use these acronyms to avoid ambiguity.
Quorum:
The quorum is the number of votes necessary to transact business on your WSFC. Depending on your WSFC configuration, voters can be nodes (servers), a drive, or a file share. You need more than 50% of your votes in order for the WSFC to be online. If you lose 50% or more of your voters, then the WSFC and all clustered services (including your FCI) will go offline and not come back until you have (or force) quorum.
In your configuration, you have two nodes, and one file share for a total of three votes. Any one of those voters can go offline. When you lost the file share, you still had two nodes online, so your WSFC and all clustered services stayed online.
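To make that vote math concrete, here's a minimal Python sketch of the majority rule (my own illustration of the arithmetic, not actual cluster service code):

```python
# Illustrative sketch of WSFC-style quorum arithmetic -- not cluster internals.
def has_quorum(total_votes: int, online_votes: int) -> bool:
    """Quorum requires a strict majority: more than half of all configured votes."""
    return online_votes > total_votes // 2

# Two nodes + one file share witness = three configured votes.
TOTAL = 3
print(has_quorum(TOTAL, online_votes=3))  # True  -- everything healthy
print(has_quorum(TOTAL, online_votes=2))  # True  -- one voter down (e.g. the FSW)
print(has_quorum(TOTAL, online_votes=1))  # False -- two voters down; the WSFC goes offline
```

That is exactly the scenario above: losing the file share alone leaves two of three votes online, so quorum holds and everything stays up.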
Cluster Owner/Host Server:
When you say that "Node2 was now specified as the active node by Windows", I suspect you are referring to the "Current Host Server" for the cluster. So what is that?
Your WSFC has a network name and an IP address. That name and IP have to be tied to a machine that is part of your cluster; more specifically, they can be tied to any one machine in your cluster. This is part of your WSFC, but not your FCI.
In your scenario, you have three FCIs on a two-node WSFC. It would be perfectly valid to have one FCI on Node1 and two FCIs on Node2, and the "Current Host Server" for the WSFC could be either node. SQL Server won't care.
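As a rough sketch of that layout (hypothetical names and plain Python data, not any real cluster API):

```python
# Hypothetical model of the ownership layout described above -- not a real API.
cluster = {
    "wsfc_network_name": "CLUSTER01",   # the WSFC name & IP are owned...
    "current_host_server": "Node2",     # ...by exactly one node at a time
    "fci_owners": {                     # each FCI is placed independently
        "FCI-A": "Node1",
        "FCI-B": "Node2",
        "FCI-C": "Node2",
    },
}

# Moving the WSFC's current host server changes only one entry; it doesn't
# move any FCI, which is why SQL Server doesn't care where that name lives.
cluster["current_host_server"] = "Node1"
```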
So what happened: As you said, there were no adverse effects on the databases. I'd expect that, because SQL Server isn't tied to that WSFC host server. I wouldn't have expected the host server to move when the file share failed--but I'd let your Windows guys dig into that more. From a SQL perspective, everything worked as expected.
Best Answer
It's completely fine to have the FSW on a different subnet; there is absolutely nothing wrong with that. There is no need to have it on the same subnet; in fact, there is an Azure witness which definitely won't be on the same subnet, and it works without issue.
This seems to point either to something in the network being an issue or, if this is on a virtual machine, to something happening at the guest/host level that is giving you this trouble. Given that there is a plethora of in-depth configuration settings at the host, guest, and OS level that can contribute to this, I won't go into any further depth, as it'd be out of scope for this site.
This means that whichever node attempted to arbitrate for the witness was only one vote short of having quorum for the cluster. Since it's a two-node cluster, if the nodes couldn't talk to each other, they'd be in this exact situation.
If the nodes can't talk to each other (obviously an issue) and neither node can talk to the FSW (another issue), that makes me wonder what's broken in the infrastructure - again, either at the virtual layer or the physical (network) layer. It's clear something is going on to cause this, and it's specific to your environment, not SQL Server.
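To show why that total communication failure produces "one vote short" messages, here's a simplified Python sketch (my own model of the situation, not cluster internals) of what each isolated partition sees:

```python
# Simplified model: in a partition, each island of voters that can still see
# each other counts only its own votes against the majority threshold.
def majority(total_votes: int) -> int:
    return total_votes // 2 + 1

TOTAL = 3  # node1 + node2 + file share witness

# Healthy: one partition containing all three voters.
print(len({"node1", "node2", "fsw"}) >= majority(TOTAL))  # True

# Nobody can talk to anybody: three islands of one vote each.
for island in ({"node1"}, {"node2"}, {"fsw"}):
    # Each island has 1 vote and needs 2: exactly one vote short of quorum.
    print(len(island) >= majority(TOTAL))  # False, False, False
```

No partition reaches two votes, so no side is allowed to stay up and the whole WSFC goes offline.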
Yes, however I'm betting the nodes lost connectivity to each other. There are probably some entries about missed heartbeats, connectivity on port 3343, regroups, etc., in the cluster log.
Connectivity doesn't mean votes; connectivity means health checks. Once health checks fail, nodes become partitioned, and that's when these events occur. You'll need to find out what occurred in your environment around the time this happened. If it happens often and on a schedule, then it's some task or software in the environment; if it happens randomly, then it's most likely an infrastructure issue such as networking; if it happens under load, look at Host/Guest/OS settings.
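As a rough illustration of that "health checks first, votes second" ordering, here's a toy Python sketch (hypothetical numbers; the real WSFC knobs are settings like SameSubnetDelay and SameSubnetThreshold, whose defaults vary by Windows version):

```python
# Toy model of heartbeat-based health checking -- not the actual cluster service.
class HealthCheck:
    def __init__(self, threshold: int):
        self.threshold = threshold  # consecutive misses before declaring the peer down
        self.missed = 0

    def heartbeat(self, received: bool) -> bool:
        """Record one heartbeat interval; return True while the peer is healthy."""
        self.missed = 0 if received else self.missed + 1
        return self.missed < self.threshold

hc = HealthCheck(threshold=5)
for _ in range(4):
    print(hc.heartbeat(received=False))  # True x4: degraded, but votes unaffected
print(hc.heartbeat(received=False))      # False: peer declared down, node partitioned
# Only AFTER this point does the cluster re-count votes to decide who keeps quorum.
```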