Clustering is complex, and there are lots of moving parts (no pun intended). Let me try to break this down into more manageable chunks:
From a terminology perspective, there's your Windows Server Failover Cluster (WSFC) and your SQL Server Failover Cluster Instances (FCIs). I try to avoid saying "Cluster" and use these acronyms instead to avoid ambiguity.
Quorum:
The quorum is the number of votes necessary to transact business on your WSFC. Depending on your WSFC configuration, voters can be nodes (servers), a drive, or a file share. You need more than 50% of your votes in order for the WSFC to be online. If you lose 50% or more of your voters, then the WSFC and all clustered services (including your FCI) will go offline and not come back until you have (or force) quorum.
In your configuration, you have two nodes, and one file share for a total of three votes. Any one of those voters can go offline. When you lost the file share, you still had two nodes online, so your WSFC and all clustered services stayed online.
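If you want to verify the vote configuration yourself, the FailoverClusters PowerShell module (part of the failover clustering management tools) can show it. A quick sketch, run from one of the cluster nodes:

```powershell
Import-Module FailoverClusters

# Shows the quorum type and the witness resource (your file share witness)
Get-ClusterQuorum

# Shows each node and whether it currently carries a vote (NodeWeight = 1)
Get-ClusterNode | Select-Object Name, State, NodeWeight
```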
Cluster Owner/Host Server:
When you say that "Node2 was now specified as the active node by Windows", I suspect you are referring to the "Current Host Server" for the cluster. So what is that?
Your WSFC has a network name and an IP address. That name and IP have to be tied to a machine that is part of your cluster; more specifically, they can be tied to any one machine in your cluster. This is part of your WSFC, but not your FCI.
In your scenario, you have three FCIs on a two-node WSFC. It would be perfectly valid to have one FCI on Node1 and two FCIs on Node2, and the "Current Host Server" for the WSFC could be either node. SQL Server won't care.
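If you want to see which node that is, the "Current Host Server" is just the owner node of the core cluster group. A minimal sketch, assuming the FailoverClusters module is available:

```powershell
# The core "Cluster Group" owns the WSFC name and IP; its OwnerNode is
# what Failover Cluster Manager displays as "Current Host Server"
Get-ClusterGroup -Name "Cluster Group" | Select-Object Name, OwnerNode, State
```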
So what happened: As you said, there were no adverse effects on the databases. I'd expect that, because SQL Server isn't tied to that WSFC host server. I wouldn't have expected the host server to move when the file share failed, but I'd let your Windows guys dig into that more. From a SQL perspective, everything worked as expected.
Best Answer
The SQL Resource should have an associated Virtual Network Name (or Client Access Name). This resource has an associated IP address that moves between nodes during failover. This is what your connection strings should point to because this Name/IP will always point to the node that is the owner of the SQL instance resource.
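For illustration, here's what that looks like from PowerShell using the SqlServer module; the virtual network name SQLVNN1 and instance INST1 are made-up examples, and note that no node name appears anywhere in the connection:

```powershell
# Connect through the SQL network name, never a node name. For an FCI,
# @@SERVERNAME returns the virtual name no matter which node owns the instance.
Invoke-Sqlcmd -ServerInstance "SQLVNN1\INST1" -Query "SELECT @@SERVERNAME"
```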
Use the following PowerShell to get the list of network names present in a cluster. The one in the "Cluster Group" cluster group will be the cluster network name, and the one in your SQL cluster group will be the SQL network name; that is the name to use in your connection strings.
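Something along these lines will do it (assumes the FailoverClusters module; it filters the cluster resources down to the "Network Name" type):

```powershell
# List every network name resource and the cluster group that owns it
Get-ClusterResource |
    Where-Object { $_.ResourceType -eq "Network Name" } |
    Select-Object Name, OwnerGroup, OwnerNode, State
```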
This is because you're failing over the clustered SQL instance, but the core cluster resources (cluster name/IP and quorum resource) remain owned by the original owner node. It is possible to fail these over as well, but that generally doesn't need to be done manually when you're only testing failover of the SQL resource.
Use the PowerShell cmdlet Get-ClusterGroup to get a handle on the different collections of resources in your clusters. When you run that cmdlet against a local cluster you'll see 3 or more cluster groups returned:
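On a simple single-instance cluster, the output looks something like this (node and instance names are placeholders):

```
Name                        OwnerNode    State
----                        ---------    -----
Available Storage           NODE1        Offline
Cluster Group               NODE1        Online
SQL Server (MSSQLSERVER)    NODE2        Online
```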
When you perform a manual failover, you're only moving the cluster group associated with those resources to the new node. An automatic failover may move the Cluster Group if there is a failure of the owner node, but if the automatic failover was SQL related, you may only see the SQL cluster group move to the new node.
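To see that distinction in practice, here's a sketch of a manual move of just the SQL role (group and node names are hypothetical):

```powershell
# Move only the SQL Server role; "Cluster Group" stays on its current owner
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node NODE2

# Confirm: the SQL group moved, but the core cluster group did not
Get-ClusterGroup | Select-Object Name, OwnerNode, State
```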