If you want to be able to survive the failure of two nodes within the failover cluster, then you'll need to ensure that you have five voters for quorum.
You currently have three nodes, and even adding a disk witness wouldn't change your failure tolerance: with three voters you need two votes for quorum and can lose only one voter, and with four voters you need three votes and can still lose only one (quorum always requires more than half of the votes).
I've never heard the terminology of a "witness server" before, and I think that person is just trying to give a logical name to simply adding another node to your cluster.
In other words, if you move to a four-node cluster with a disk witness, that'll be a total of five voters. Five voters will allow you to survive the failure of two nodes (provided your disk witness continues to function properly).
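To make that majority math concrete, here's a tiny sketch (purely illustrative arithmetic, not any cluster API):

```python
def votes_needed(total_voters: int) -> int:
    """Quorum requires strictly more than half of the votes."""
    return total_voters // 2 + 1

def failures_tolerated(total_voters: int) -> int:
    """How many voters can be lost while still keeping quorum."""
    return total_voters - votes_needed(total_voters)

# Three voters (your current three nodes): only one failure tolerated.
print(votes_needed(3), failures_tolerated(3))  # 2 votes needed, 1 failure OK

# Four voters (three nodes + disk witness): still only one failure.
print(votes_needed(4), failures_tolerated(4))  # 3 votes needed, 1 failure OK

# Five voters (four nodes + disk witness): two failures tolerated.
print(votes_needed(5), failures_tolerated(5))  # 3 votes needed, 2 failures OK
```

Notice that going from three voters to four buys you nothing in failure tolerance, which is exactly why you want an odd number of voters.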
With that being said, you also need to answer a few questions for yourself:
Do you really need three separate instances in this cluster? What is the driving force behind that?
Can any one of the possible owning nodes for the FCIs sustain all three instances of SQL Server? (Think about it: there could be a lot of serious contention there.)
Do all of these instances need to be in the same cluster?
And I do want to echo what @AaronBertrand has said above in his comment on your question. There is no such thing as an active / active [ / n [ / n ... ] ] cluster for SQL Server. It's extremely misleading to think of it that way, and that terminology has traditionally caused a lot of confusion (not to mention it's simply the wrong name for what's actually happening).
Clustering is complex, and there are lots of moving parts (no pun intended). Let me try to break this down into more manageable chunks:
From a terminology perspective, there's your Windows Server Failover Cluster (WSFC), and your SQL Server Failover Cluster Instances (FCI). I try to avoid saying "Cluster" and use these acronyms to avoid ambiguity.
Quorum:
The quorum is the number of votes necessary to transact business on your WSFC. Depending on your WSFC configuration, voters can be nodes (servers), a drive, or a file share. You need more than 50% of your votes in order for the WSFC to be online. If you lose 50% or more of your voters, then the WSFC and all clustered services (including your FCI) will go offline and not come back until you have (or force) quorum.
In your configuration, you have two nodes, and one file share for a total of three votes. Any one of those voters can go offline. When you lost the file share, you still had two nodes online, so your WSFC and all clustered services stayed online.
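To make the "more than 50%" rule concrete for this configuration (again, just illustrative arithmetic, not a cluster API):

```python
def has_quorum(online_votes: int, total_votes: int) -> bool:
    """The WSFC stays online only with strictly more than half of the votes."""
    return online_votes > total_votes / 2

total = 3  # two nodes + one file share witness

# File share lost: 2 of 3 votes remain, so the WSFC stays online.
print(has_quorum(2, total))  # True

# If a node were ALSO lost: 1 of 3 votes is not a majority, WSFC goes offline.
print(has_quorum(1, total))  # False
```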
Cluster Owner/Host Server:
When you say that "Node2 was now specified as the active node by Windows", I suspect you are referring to the "Current Host Server" for the cluster. So what is that?
Your WSFC has a network name and an IP address. That name and IP have to be tied to a machine that is part of your cluster; more specifically, they can be tied to any one machine in your cluster. This is part of your WSFC, but not your FCI.
In your scenario, you have three FCIs on a two-node WSFC. It would be perfectly valid to have one FCI on Node1 and two FCIs on Node2. And the "Current Host Server" for the WSFC could be either node. SQL Server won't care.
So what happened: As you said, there were no adverse effects on the databases. I'd expect that, because SQL Server isn't tied to that WSFC host server. I wouldn't have expected the host server to move when the file share failed, though; I'd let your Windows folks dig into that more. From a SQL perspective, everything worked as expected.
Well, what you have asked is really debatable, and I should also add that quorum has little bearing on the cluster network configuration as such. Starting with Windows Server 2008, Microsoft says you can go ahead and configure a WSFC without any dedicated heartbeat network connection. If you have not configured a dedicated network for the heartbeat, the cluster validation wizard will only give you a warning, which means your cluster is still supported. But that does not mean it's all good. Allow me to quote a reason for a dedicated NIC for the heartbeat (Source)
If you read the above, you can get a fair idea of why heartbeat communication might still be important.
Since you have Windows Server 2016, you can easily go with a heartbeat network without worrying about the network binding order (the order that tells which network/route should be given priority). By default, Windows Server 2016 uses the Interface Metric property of a network adapter to determine which route has the highest priority: the lower the Interface Metric value, the higher the priority. There is more information in this support article.

I also believe it is not too much overhead to configure a heartbeat network. If the heartbeat network goes down, the WSFC will start using the public network for cluster communication, and cluster communication will still go on. I think it is more a matter of segregating things and making cluster communication more secure with a heartbeat network. BUT if your public network is teamed well and has enough bandwidth to easily accommodate both cluster and client communication, by all means go ahead without a heartbeat network. Please note that if you have only the public network, all client and internal cluster communication will go through that one link, so it has to be robust.
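The route-selection rule described above can be sketched as follows (the adapter names and metric values here are hypothetical, purely to illustrate "lowest Interface Metric wins"):

```python
# Hypothetical adapters; names and interface_metric values are made up
# for illustration, not read from any real system.
adapters = [
    {"name": "Public",    "interface_metric": 15},
    {"name": "Heartbeat", "interface_metric": 40},
]

# Lower Interface Metric value = higher route priority.
preferred = min(adapters, key=lambda a: a["interface_metric"])
print(preferred["name"])  # Public
```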
Here is what MVP and MCM Edwin Sarmiento has to say about the heartbeat network (Source)
I believe he is correct: the focus should be on making the complete network redundant, not just part of it.
If by "network" you mean the complete public network is down, then that could be a single point of failure bringing down the whole WSFC, and this is precisely what Edwin emphasized in the quote above. If, instead, you are saying that a network issue removed one of the nodes from cluster membership, forcing the cluster to recalculate quorum and fail over, then since you still have two votes (the surviving node and the quorum disk, which is more than 50%), the WSFC will remain online and complete the failover. The network issue would not affect the disks/storage, since they are connected via the SAN, not via the cluster's public network.
Additional reading:
Windows Server 2008 networking 3 part series
Disclaimer: I must add that I am not a network engineer, and a detailed discussion of network configuration for a WSFC is beyond my scope of knowledge; I believe a network engineer could definitely add more to this answer. I tried to answer your question to the best of my knowledge. Hope this helps.