Does the Windows Failover Cluster for a multi-subnet SQL Server
Availability Group require a static IP entry for each subnet?
The CNO will require an IP address for every subnet it could reside in.
I am running SQL Server 2012 on Windows Server 2012 Hyper-V VMs in two
separate subnets in the same domain. I understand that I will need an
IP from each subnet when I create the listener for my AAG. What I am
unclear on is the configuration of IPs on the underlying Windows
Failover Cluster.
For the underlying WSFC you'll need at a minimum:
Node1 - IP Address for each unique subnet for each network interface
Node2 - IP Address for each unique subnet for each network interface
CNO - IP Address for each unique subnet
EX: 2 nodes, 2 subnets, 1 interface per node, subnets 192.168.1.0/24 and 192.168.2.0/24
Node1: 192.168.1.10
Node2: 192.168.2.10
CNO: 192.168.1.20, 192.168.2.20
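The tally above can be sketched as a quick check using Python's standard `ipaddress` module. The addresses are the illustrative ones from the example, not from any real cluster; the point is simply that the CNO needs exactly one address in every subnet it could come online in, while each node needs one per attached subnet.

```python
# Sketch of the static-IP tally for the 2-node, 2-subnet example above.
# All names and addresses are illustrative.
import ipaddress

subnets = [ipaddress.ip_network("192.168.1.0/24"),
           ipaddress.ip_network("192.168.2.0/24")]

addresses = {
    "Node1": ["192.168.1.10"],                  # one interface, one subnet
    "Node2": ["192.168.2.10"],
    "CNO":   ["192.168.1.20", "192.168.2.20"],  # one address per subnet
}

# Verify the CNO has exactly one address in each subnet it could
# reside in -- the rule stated at the top of this answer.
for net in subnets:
    owned = [ip for ip in addresses["CNO"]
             if ipaddress.ip_address(ip) in net]
    print(net, "->", owned)
```

Running this prints one CNO address per subnet; if either list came back empty, the cluster name could not come online in that subnet.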
Also, if the server hosting the secondary replica does require its own
IP, does it also require its own unique cluster name (and can you
explain why this is necessary)?
I'm not sure I understand this part of the question. All of the resources can belong to only a single cluster--there is no such thing as a cluster inside of a cluster.
Edit - I looked at the link that you posted and I'm not sure why the author stated "•Cluster name for each node". My best guess is that they meant each node needs its own name and IP address. Otherwise it's not a correct statement, and the author should probably be contacted.
The concepts of quorum and ownership are separate topics. Just because a member of the WSFC doesn't get a vote in quorum does not mean it can't own a resource. Additionally, SQL Server doesn't really play a role here at all--the same concepts apply regardless of what type of clustered resource you're dealing with.
Quorum:
The quorum is the number of votes necessary to transact business on your WSFC. Depending on your WSFC configuration, voters can be nodes (servers), a drive, or a file share. You need more than 50% of your votes in order for the WSFC to be online. If you lose 50% or more of your voters, then the WSFC and all clustered services (including your FCI) will go offline and not come back until you have (or force) quorum.
In your configuration, you have a File Share Witness (hopefully in an impartial location reachable by both primary & DR sites), and you've also changed the NodeWeight to 0 for your DR servers. Rather than thinking of NodeWeight as "The DR servers don't get to vote," you should think of it as "The DR servers get to vote, but their vote doesn't count." They are still there, they're still part of the WSFC, it's just that the WSFC doesn't listen.
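The vote arithmetic described above can be sketched in a few lines. The node names, weights, and counts below are illustrative (a two-node primary site, two DR nodes with NodeWeight 0, and a File Share Witness), not taken from any real cluster:

```python
# Minimal sketch of WSFC quorum arithmetic: the cluster stays online
# only while strictly more than half of the counted votes are reachable.

def has_quorum(votes_online, votes_total):
    """True if strictly more than 50% of total votes are online."""
    return votes_online > votes_total / 2

# Hypothetical configuration: 2 primary nodes (weight 1), 2 DR nodes
# with NodeWeight = 0 ("they vote, but their vote doesn't count"),
# plus a File Share Witness worth 1 vote.
voters = {"Node1": 1, "Node2": 1, "DR1": 0, "DR2": 0, "FSW": 1}
total = sum(voters.values())   # 3 counted votes

# Primary site up and FSW reachable: 3 of 3 votes -> quorum held.
print(has_quorum(3, total))    # True

# Both primary nodes lost: only the FSW's vote remains -> no quorum;
# the WSFC and its clustered services go offline until quorum is
# restored or forced.
print(has_quorum(1, total))    # False
```

Note that losing exactly half the votes is enough to take the cluster down--"more than 50%" is a strict inequality.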
Even though the FSW isn't hosted on the cluster (or, perhaps, more accurately, because the FSW isn't hosted on the cluster!), it still has to be "present" to vote. Which brings us to the concept of ownership...
Cluster Owner/Host Server:
Your WSFC has a network name and an IP address. That name & IP have to be tied to a machine that is part of your cluster--more specifically, they can be tied to any one machine in your cluster at a time. This is part of your WSFC.
In your scenario, your DR servers have no vote for quorum, but they still must be possible owners of the WSFC Host Server. If you manually fail over to your DR site (which will involve forcing quorum because you have zero voters in DR), then one of your DR servers must host the cluster name & IP. If it doesn't, then your cluster can't come online.
Your FSW is "owned" by the same server that owns/hosts the Cluster itself. Therefore it must have all servers, including DR, as possible owners. When you do force quorum and come online in your DR site, the WSFC is going to want to continue talking to the FSW.
Your original question:
...what would be the effect if I limited the possible owners to just
those nodes at the primary site?
I suspect that you would have problems when you tried to force quorum and bring your WSFC online in the DR site--though, that's just a guess.
From a SQL Server perspective, you just care that you have quorum and the WSFC is reliably up. If you're having issues in that area, I'd look into the specific causes of your reliability problems. In reality, you probably don't really care which server hosts your WSFC (and FSW).
Best Answer
Yes, you need to add a DR site IP address for the cluster name. It may not be critical if you aren't using the cluster name for anything, but it is certainly a best practice and considered the correct configuration.
You don't really need to be concerned about the core cluster resources (cluster name, fileshare witness, etc.) failing over to the DR site. The cluster will run just fine if that occurs, and if the network to the DR site is interrupted, the two nodes in the local data center will arbitrate for ownership of the core resources and one will bring them online. This will not cause the availability groups to go offline and come back online--that will only occur if the AG primary can't retain ownership of the AG resources.
Having said that, you can set the possible owners of the cluster name resource on the Advanced tab, but in the case where you need to fail over to the DR node, you would need to change it again in order for the resources to come online on the DR node.