Can I relate the crash to availability mode settings?
No. By the looks of your log messages, you actually lost cluster quorum due to the removal of two nodes' votes, followed by your file share witness's vote. Is this a 3-node cluster with a file share witness, perchance? In that case, if you pulled these events from one node's event log, then it may appear to each of the nodes that there is a lack of communication with all voters. That would generate a similar, if not identical, error footprint to the one you have above. Nobody can talk to anybody, if that is the case.
During that time, quorum is lost, as you are currently seeing. There is some assumption here, as I'd need far more diagnostic information to pinpoint the cause of the voter removal, but that is why quorum was lost.
Regardless, this appears to be a problem that surfaced in a down cluster, in which case your availability mode would have nothing to do with the WSFC failing.
As for the "best practice" availability mode to go with, you need to determine your requirements for data loss, performance impact, and a few other factors that are best described in the BOL reference on Availability Modes.
The concepts of quorum and ownership are separate topics. Just because a member of the WSFC doesn't get a vote toward quorum does not mean it can't own a resource. Additionally, SQL Server doesn't really play a role here at all; the same concepts apply regardless of what type of clustered resource you're dealing with.
Quorum:
The quorum is the number of votes necessary to transact business on your WSFC. Depending on your WSFC configuration, voters can be nodes (servers), a drive, or a file share. You need more than 50% of your votes in order for the WSFC to be online. If you lose 50% or more of your voters, then the WSFC and all clustered services (including your FCI) will go offline and not come back until you have (or force) quorum.
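The majority rule above boils down to a one-line comparison. Here is a minimal sketch of it in Python (purely illustrative; `has_quorum` is a made-up helper, not a cluster API):

```python
def has_quorum(votes_online: int, total_votes: int) -> bool:
    # WSFC needs a strict majority: more than 50% of all configured
    # votes must be reachable for the cluster to stay online.
    return votes_online > total_votes / 2

# Example: 3 nodes + 1 file share witness = 4 total votes.
print(has_quorum(3, 4))  # True: losing one voter still leaves a majority
print(has_quorum(2, 4))  # False: exactly 50% is NOT a majority; quorum is lost
```

Note the strict inequality: with an even number of votes, losing exactly half is enough to take the cluster down, which is why an odd total (e.g. adding a witness) is the usual goal.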
In your configuration, you have a File Share Witness (hopefully in an impartial location reachable by both primary & DR sites), and you've also changed the NodeWeight to 0 for your DR servers. Rather than thinking of NodeWeight as "The DR servers don't get to vote," you should think of it as "The DR servers get to vote, but their vote doesn't count." They are still there, they're still part of the WSFC, it's just that the WSFC doesn't listen.
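That "vote that doesn't count" idea can be sketched numerically (a hypothetical Python sketch, not a cluster API; the node names and `voters` mapping are made up for illustration):

```python
# Hypothetical vote configuration: DR nodes remain cluster members,
# but with NodeWeight 0 their votes simply don't count.
voters = {
    "PRIMARY1": 1,
    "PRIMARY2": 1,
    "DR1": 0,   # still in the WSFC; the WSFC just doesn't listen
    "DR2": 0,
    "FSW": 1,   # file share witness
}

total_votes = sum(voters.values())
print(total_votes)  # 3: only the two primary nodes and the FSW count
```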
Even though the FSW isn't hosted on the cluster (or, perhaps, more accurately, because the FSW isn't hosted on the cluster!), it still has to be "present" to vote. Which brings us to the concept of ownership...
Cluster Owner/Host Server:
Your WSFC has a network name and an IP address. That name & IP have to be tied to a machine that is part of your cluster; more specifically, they can be tied to any one machine in your cluster at a time. This is part of your WSFC.
In your scenario, your DR servers have no vote for quorum, but they still must be possible owners of the WSFC Host Server. If you manually fail over to your DR site (which will involve forcing quorum because you have zero voters in DR), then one of your DR servers must host the cluster name & IP. If it doesn't, then your cluster can't come online.
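To make that concrete, here is the same hypothetical vote layout showing why coming up in the DR site means forcing quorum (illustrative Python only; names and helpers are made up, not cluster APIs):

```python
def surviving_votes(voters: dict, online_nodes: set) -> int:
    # Count only the votes of members that are still reachable.
    return sum(w for node, w in voters.items() if node in online_nodes)

voters = {"PRIMARY1": 1, "PRIMARY2": 1, "DR1": 0, "DR2": 0, "FSW": 1}
total_votes = sum(voters.values())  # 3

# Primary site and FSW are gone; only the DR nodes survive.
remaining = surviving_votes(voters, {"DR1", "DR2"})
print(remaining)                    # 0: DR contributes no votes
print(remaining > total_votes / 2)  # False: no majority, so quorum must be forced
```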
Your FSW is "owned" by the same server that owns/hosts the Cluster itself. Therefore it must have all servers, including DR, as possible owners. When you do force quorum and come online in your DR site, the WSFC is going to want to continue talking to the FSW.
Your original question:
...what would be the effect if I limited the possible owners to just
those nodes at the primary site?
I suspect that you would have problems when you tried to force quorum and bring your WSFC online in the DR site--though, that's just a guess.
From a SQL Server perspective, you just care that you have quorum and the WSFC is reliably up. If you're having issues in that area, I'd look at the specific causes of the unreliability. In reality, you probably don't really care which server hosts your WSFC (and FSW).
Best Answer
Generally, when you have geographically distributed clusters, a disk witness does not make sense. It would require block-level synchronous disk replication between all sites while preserving things like write ordering and SCSI reservations.
In almost all geo-cluster scenarios, it's best to go with a file share witness. The two types of witness do the same thing (there are subtle differences, such as how arbitration happens, etc.).
Where should you place the witness? The best place would be a neutral third site, but most environments won't have one available. My recommendation is to place it at the site you want to remain primary in the event of a communications loss between the nodes at each site. In your case, I'd likely put it at the primary site. It should be somewhere that is as highly available as possible.