The InnoDB buffer pool caches table and index data (pages), not query results; if one of the nodes sees less usage, it will have fewer pages cached.
If you're not load-balancing queries evenly, the amount of cache in use on each node will differ. Likewise, a node you have just brought up and added to the cluster will have a much colder cache than the other nodes that have been running for a long time.
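To see this difference in practice, you can compare the buffer pool counters on each node; a quick sketch using the standard InnoDB status variables:

```sql
-- Run on each node and compare: a freshly joined node will show
-- far fewer cached data pages (and more free pages) than a
-- long-running node serving the same workload.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';
```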
It doesn't matter whether you use Galera in this case, as this is purely an InnoDB question; sure, you could be using Galera, but it has nothing to do with the caching behaviour you've asked about.
The biggest caveat with only 2 machines in a cluster is the possibility of "split-brain", in which the nodes lose communication with one another and each assumes that it is now the Master, possibly getting out of sync with the other. This risk is mitigated only if you can absolutely ensure that writes will not go to the 2 Masters separately while they are out of communication. It is generally advisable to have at least 3 nodes (and preferably always an odd number) to prevent these situations.
I did something similar to what you are proposing (with 3 nodes, not just two) as a MariaDB/Galera Cluster. In actuality, it is a Multi-Master setup, in which a write to any server will be propagated to the others, but in practice I treat it as a Master-Slave with automatic failover.
For context, I will describe how I do my Master-Slave automatic failover using HAProxy:
I use HAProxy on my application servers and set up two ports: one that my application treats as the Master for writes, and one that it treats as the Slave for reads. In my HAProxy, the Master port is set to use only the Master, but to fail over to the Slaves in a particular order. If the Master goes down, HAProxy detects it and redirects writes to the next server, which becomes the de facto Master.
My HAProxy Slave port balances between the Slave nodes (you could include the Master, too) using one of several balancing algorithms. If one of the Slaves goes down, HAProxy continues balancing across whatever is available until that Slave comes back up.
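A minimal sketch of what this two-port setup looks like in haproxy.cfg — the node names, addresses, ports, and check user here are all placeholders, not my actual config:

```
# Write port: only node1 takes writes; node2/node3 are ordered backups,
# so HAProxy promotes the next one only if node1's health check fails.
listen mysql_write
    bind *:3306
    mode tcp
    option mysql-check user haproxy_check
    server node1 10.0.0.1:3306 check
    server node2 10.0.0.2:3306 check backup
    server node3 10.0.0.3:3306 check backup

# Read port: balance reads across the remaining nodes.
listen mysql_read
    bind *:3307
    mode tcp
    balance leastconn
    option mysql-check user haproxy_check
    server node2 10.0.0.2:3306 check
    server node3 10.0.0.3:3306 check
```

The `backup` keyword is what gives the ordered failover on the write port, while `balance leastconn` spreads the read traffic.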
If the Master goes down, HAProxy redirects and the application just works. When the Master comes back up, it syncs with the other machines before accepting connections. Once it begins accepting connections, HAProxy starts sending writes to it again, it becomes the Master once more, and all is right with the world.
In your situation, you can do something similar, using HAProxy as your connection/failover tool. You do not have to split out your reads and writes, but I do so to take advantage of the extra machines. I do like the MariaDB/Galera solution. I would hesitate to run only 2 nodes, though. It feels dangerous.
Best Answer
I don't know your application/workload, but generally you should try:
wsrep_slave_threads=N

(Do not use a value for wsrep_slave_threads higher than the average given by the wsrep_cert_deps_distance status variable.)

innodb_flush_log_at_trx_commit=0

(The default is 1.)
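Put together, a my.cnf fragment might look like the sketch below. The value 8 is just a placeholder for N — check `SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance';` on your own cluster and tune from there:

```
[mysqld]
# Number of parallel applier threads; keep at or below the average
# wsrep_cert_deps_distance observed on this node.
wsrep_slave_threads = 8

# 0 = flush the log to disk about once per second instead of at every
# commit (default 1); faster, but up to ~1 second of transactions can
# be lost on a crash of this node.
innodb_flush_log_at_trx_commit = 0
```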