You should put everything on a level playing field. How?
Without proper tuning, it is possible for older versions of MySQL to outrun and outgun newer versions.
Before running SysBench on the three environments:
- Make sure all InnoDB settings are identical for all DB Servers (see the quick check after this list)
- For the Master/Slave, run
STOP SLAVE;
on the Slave
- For PXC (Percona XtraDB Cluster), shut down two of the Masters
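A quick way to verify the first point (a sketch only; diff the output however you like) is to capture the InnoDB settings on each server and compare them:

SHOW GLOBAL VARIABLES LIKE 'innodb%';

Run this on all three servers; any mismatch can skew the benchmark before it even starts.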
Compare the speeds of just standalone MySQL, Percona, and MariaDB.
ANALYSIS
If MySQL is best (Percona people, please don't throw rotten vegetables at me just yet; this is just conjecture), run START SLAVE; on the Slave and rerun SysBench against the Master/Slave pair. If the performance is significantly slower, you may have to implement semisynchronous replication.
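If you go that route, here is a minimal sketch of enabling the stock semisynchronous replication plugins (the .so names below are the standard ones shipped with MySQL on Linux; adjust for your platform):

INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'; -- on the Master
SET GLOBAL rpl_semi_sync_master_enabled = 1;                     -- on the Master
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';   -- on the Slave
SET GLOBAL rpl_semi_sync_slave_enabled = 1;                      -- on the Slave
STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;                     -- on the Slave, to pick up the change

Then rerun SysBench to see what the acknowledgment round trip costs you.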
If PXC is best, you may need to tune the wsrep settings or the network itself.
If MariaDB is best, you could switch to MariaDB Galera Cluster (if you have the money) or set up Master/Slave with MariaDB. Run SysBench. If the performance is significantly slower, you may need to tune the wsrep settings or the network itself.
Why tune wsrep settings? Keep in mind that Galera wsrep (WriteSet Replication) uses virtually synchronous commits and rollbacks. In other words, either all nodes commit or all nodes roll back. In this instance, the weakest link would have to be one of the following (see the sketch after this list):
- how fast the communication between Nodes happens (especially true if the Nodes are in different data centers)
- whether any one node has underconfigured hardware settings
- whether any one node communicates more slowly than the other nodes
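As a starting point (gcs.fc_limit and gcs.fc_factor are real Galera flow-control knobs, but the values below are purely illustrative), check how often flow control is stalling writes before loosening anything:

SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused'; -- fraction of time replication was paused since the last FLUSH STATUS
SET GLOBAL wsrep_provider_options = 'gcs.fc_limit=256; gcs.fc_factor=0.99';

If wsrep_flow_control_paused stays near 0.0 and writes are still slow, suspect the network between the nodes rather than wsrep.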
Side Note: You should also make sure to tune MySQL for multiple CPUs.
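For example (values illustrative; the I/O thread settings are read-only at runtime and belong in my.cnf, followed by a restart):

SELECT @@innodb_thread_concurrency, @@innodb_read_io_threads, @@innodb_write_io_threads;
SET GLOBAL innodb_thread_concurrency = 0; -- 0 lets InnoDB self-regulate on modern multi-core boxes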
UPDATE 2014-11-04 21:06 EST
Please keep in mind that Percona XtraDB Cluster does not scale writes very well to begin with. Note what the documentation says under its drawbacks (second drawback):
This can’t be used as an effective write scaling solution. There might be some improvements in write throughput when you run write traffic to 2 nodes vs all traffic to 1 node, but you can’t expect a lot. All writes still have to go on all nodes.
SUGGESTION #1
For PXC, turn off one node. Run SysBench against a two-node cluster. If the write performance is better than with a three-node cluster, then it is obvious that the communication between the nodes is the bottleneck.
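Before and after shutting the node down, you can confirm what the cluster thinks its size is (these are standard wsrep status variables):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';        -- should drop from 3 to 2
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'; -- remaining nodes should report Synced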
SUGGESTION #2
I noticed you have a 42GB Buffer Pool, which is more than half the server's RAM. You need to partition the buffer pool by setting innodb_buffer_pool_instances to 2 or more. Otherwise, you can expect some swapping.
SUGGESTION #3
Your innodb_log_buffer_size is 8M by default. Try making it 256M to increase log write performance.
SUGGESTION #4
Your innodb_log_file_size is 512M. Try making it 2G to increase log write performance. If you apply this setting, then set innodb_log_buffer_size to 512M.
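All three settings from Suggestions #2 through #4 are read-only at runtime in the MySQL versions of that era, so put them in my.cnf under [mysqld] and restart mysqld. As a sanity check afterwards (targets per the suggestions above):

SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS bp_gb, -- expect 42
       @@innodb_buffer_pool_instances AS bp_instances,          -- expect 2 or more
       @@innodb_log_buffer_size / 1024 / 1024 AS log_buffer_mb, -- expect 256 (512 with the 2G redo log)
       @@innodb_log_file_size / 1024 / 1024 AS log_file_mb;     -- expect 2048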
I feel a bit stupid, but the answer to that strange behavior was simple:
The old node1 cluster node was trying to connect to the cluster again after a reboot. I had forgotten to uninstall the mysql installation on that node and had instead only stopped the mysql server. That was fine as long as the system did not restart mysql.
To fix the situation, I removed the old config files on node1 and also uninstalled the Percona cluster packages.
Everything is peachy now.
Best Answer
It is not advisable to use RAM for the temporary directory. At times, files can grow large enough to overflow RAM, and this would be true even with plain MySQL. RAM is best managed by the InnoDB buffer pool. You can set up a tmpfs instead, which should be good enough with decent performance.
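A minimal sketch of that, assuming a hypothetical tmpfs mount at /var/lib/mysql-tmp created outside MySQL (for example via /etc/fstab) and tmpdir=/var/lib/mysql-tmp set under [mysqld] in my.cnf before restarting. You can then verify from within MySQL:

SELECT @@tmpdir;                                   -- should report /var/lib/mysql-tmp
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'; -- these "disk" temp tables now land on tmpfs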