If you want to be able to sustain the failure of two nodes within the failover cluster, then you'll need to ensure that you have five voters for quorum.
You currently have three nodes, and even if you were to add a disk witness, you'd be in the same position: with three voters you need two votes for quorum, and with four voters you need three (you always need more than half of the votes), so either way losing two nodes drops you below quorum.
I've never heard the term "witness server" before, and I think that person is just trying to give a logical name to simply adding another node to your cluster.
In other words, if you then have a four-node cluster with a disk witness, that'll be a total of five voters. Five voters will allow you to sustain the failure of two nodes (provided your disk witness continues to function properly).
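If it helps, here's a quick sketch of that vote math in Python (purely illustrative arithmetic; it doesn't talk to any WSFC API):

```python
def majority(total_voters):
    # Quorum requires more than half of the configured votes.
    return total_voters // 2 + 1

def survives(total_voters, failures):
    # The WSFC stays online only while the surviving voters
    # still form a majority of the configured total.
    return total_voters - failures >= majority(total_voters)

for voters in (3, 4, 5):
    status = "online" if survives(voters, 2) else "offline"
    print(f"{voters} voters, 2 failures -> {status}")

# 3 voters, 2 failures -> offline (1 remaining, 2 needed)
# 4 voters, 2 failures -> offline (2 remaining, 3 needed)
# 5 voters, 2 failures -> online  (3 remaining, 3 needed)
```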
With that being said, you also need to answer a few questions for yourself:
Do you really need three separate instances in this cluster? What is the driving force behind that?
Can one of your possible owning nodes for the FCIs sustain all three instances of SQL Server? (Think about it: there could be a lot of serious contention there.)
Do all of these instances need to be in the same cluster?
And I do want to echo what @AaronBertrand has said above in his comment on your question. There is no such thing as an active / active [ / n [ / n ... ] ] cluster for SQL Server. It's extremely misleading to think of it that way, and that terminology has traditionally caused a lot of confusion (not to mention the basic problem of calling something by the wrong name).
Clustering is complex, and there are lots of moving parts (no pun intended). Let me try to break this down into more manageable chunks:
From a terminology perspective, there's your Windows Server Failover Cluster (WSFC), and your SQL Server Failover Cluster Instances (FCI). I try to avoid saying "Cluster" and use these acronyms to avoid ambiguity.
Quorum:
The quorum is the number of votes necessary to transact business on your WSFC. Depending on your WSFC configuration, voters can be nodes (servers), a drive, or a file share. You need more than 50% of your votes in order for the WSFC to be online. If you lose 50% or more of your voters, then the WSFC and all clustered services (including your FCI) will go offline and not come back until you have (or force) quorum.
In your configuration, you have two nodes, and one file share for a total of three votes. Any one of those voters can go offline. When you lost the file share, you still had two nodes online, so your WSFC and all clustered services stayed online.
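To put your exact configuration through the same majority rule, here's a small Python illustration of the static vote counting (the voter names are placeholders, and real behavior also depends on your Windows version and features like dynamic quorum):

```python
# Your voters: two nodes plus the file share witness = 3 configured votes.
voters = {"Node1": True, "Node2": True, "FileShareWitness": True}

def has_quorum(voters):
    # More than half of the *configured* votes must still be online.
    return sum(voters.values()) > len(voters) / 2

voters["FileShareWitness"] = False   # the file share fails...
print(has_quorum(voters))            # True: 2 of 3 votes -> WSFC stays online

voters["Node1"] = False              # ...and then a node fails too
print(has_quorum(voters))            # False: 1 of 3 votes -> WSFC goes offline
```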
Cluster Owner/Host Server:
When you say that "Node2 was now specified as the active node by Windows", I suspect you are referring to the "Current Host Server" for the cluster. So what is that?
Your WSFC has a network name and an IP address. That name and IP have to be tied to a machine that is part of your cluster; more specifically, they can be tied to any one machine in your cluster. This is part of your WSFC, but not your FCI.
In your scenario, you have three FCIs on a two-node WSFC. It would be perfectly valid to have one FCI on Node1, and two FCIs on Node2. And the "Current Host Server" for the WSFC could be either node. SQL Server won't care.
So what happened: As you said, there were no adverse effects on the databases. I'd expect that, because SQL Server isn't tied to that WSFC host server. I wouldn't have expected the host server to move when the file share failed, but I'd let your Windows guys dig into that more. From a SQL perspective, everything worked as expected.
Regardless of the quorum model, if a SQL Server 2012 availability group secondary becomes disconnected, it will go offline. This was changed in SQL Server 2014, where readable secondaries remain available for read workloads even when disconnected from the primary or during quorum loss:
What's New for SQL Server 2014