You typically have to weigh the cost of doing so against the benefits ... but the benefit of risk management is difficult to quantify.
Basically, it comes down to what the cost of an exploit would be, and what the likelihood of it happening is.
So, say someone manages to drop a table, creating a denial of service, and you have to restore from backup. Being down for a day has a cost to the company in terms of the profit they'd have made in that time, but there's also the issue of reputation loss (i.e., customers/users who stop doing business with you, or potential users who are less likely to do business with you in the future) ... but we have to balance this against the likelihood of someone successfully attacking the site and causing this.
If you're not storing credit cards, and you're not a big target (the type of site people would brag about taking out), you're less likely to be hacked ... although, if you're running commonly distributed software, you still risk attacks by script kiddies who are just looking for people running software with a known exploit.
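The cost-vs-likelihood trade-off above is essentially an expected-loss calculation. Here's a back-of-envelope sketch; all the dollar figures and probabilities are hypothetical, made up purely to illustrate the arithmetic:

```python
# Hypothetical numbers for illustration only -- plug in your own estimates.

def expected_annual_loss(incident_cost, annual_probability):
    """Cost of one incident times the chance it happens in a given year."""
    return incident_cost * annual_probability

# One day of downtime: lost profit plus an estimate for reputation damage.
downtime_loss = 50_000 + 20_000   # hypothetical

ale_before = expected_annual_loss(downtime_loss, 0.10)  # 10%/yr chance, unmitigated
ale_after  = expected_annual_loss(downtime_loss, 0.01)  # 1%/yr after the security change
mitigation_cost = 4_000                                 # annual cost of the change

risk_reduction = ale_before - ale_after
worthwhile = risk_reduction > mitigation_cost
print(ale_before, ale_after, worthwhile)
```

If the risk reduction exceeds what the change costs you (in money, developer time, or user friction), it's worth doing; otherwise that effort probably has a better use elsewhere.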
...
What our security folks don't seem to understand is that it's a balancing act -- some changes for security will create a burden on your users. And sometimes the security itself will cause outages (e.g., one of our external partners moves IP ranges ... but the new holes in the firewall weren't made, and due to a "network hold" we can't get any changes made for over a week) or just performance degradation.
Sometimes it's just that it takes longer to code, or causes more headaches to maintain, etc.
But it's something you have to answer for yourself -- is the cost worth the benefit of having made the change? (And sometimes, if the cost is just in manpower, was there an opportunity cost; i.e., could you have been doing something else that would derive even more benefit from your time?)
How far apart (ping time) are the two cities? 80ms is what we experience going across the US. It is not bad.
Writing to both heads of Master-Master is possible, but has lots of pain points.
NDB Cluster allows hot-hot, but (as you say) requires some conversion.
So, back to what I see as the only viable solution: A single writable master, plus any number of slaves.
One thing that can make a remote master painful is if the user's "unit" of action translates into many SQL statements. That can/should be solved by
(1) Rethink the code to use fewer statements
(2) Use a Stored Procedure to encapsulate as many of the SQL statements as possible, then deploy that on the remote Master.
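To see why fewer round trips matter, here's a rough latency sketch. The 0.080 s figure is the ~80 ms cross-US ping mentioned above; the statement count is an arbitrary example:

```python
# Back-of-envelope: network cost of one user action against a remote master.
RTT = 0.080  # seconds per client<->master round trip (~80 ms cross-US, from above)

def per_statement_latency(num_statements, rtt=RTT):
    """Each SQL statement issued from the app costs one round trip."""
    return num_statements * rtt

def stored_proc_latency(rtt=RTT):
    """A stored procedure on the master wraps everything in a single CALL."""
    return 1 * rtt

print(per_statement_latency(12))  # 12 statements -> ~0.96 s of pure network wait
print(stored_proc_latency())      # one CALL      -> ~0.08 s
```

So an action that runs a dozen statements spends roughly a second just waiting on the wire, while the same work behind a `CALL my_proc(...)` pays for one round trip. The server-side execution time is the same either way; only the network overhead changes.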
Reads (other than "critical reads") can/should go to a slave, behind a load balancer. And some mechanism should ensure that reads are usually "local".
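A minimal routing sketch for that read/write split -- host names are placeholders, and the write-detection here is deliberately naive (first keyword only):

```python
# Route writes and "critical reads" to the single writable master;
# everything else goes to a local endpoint (e.g. a load balancer in
# front of nearby slaves). Host names below are hypothetical.
MASTER = "master.db.example.com"
LOCAL_READ_LB = "slave-lb.local.example.com"

def pick_endpoint(sql, critical_read=False):
    """Return the host a statement should be sent to."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    is_write = first_word not in ("SELECT", "SHOW")
    if is_write or critical_read:
        return MASTER          # must see the freshest data
    return LOCAL_READ_LB       # replication lag is acceptable here

print(pick_endpoint("SELECT * FROM orders"))                     # local slave pool
print(pick_endpoint("UPDATE orders SET status = 'x'"))           # master
print(pick_endpoint("SELECT balance FROM acct", critical_read=True))  # master
```

The "critical read" flag is the application's way of saying "I just wrote this and must read it back" -- those have to go to the master, since a slave may not have replicated the write yet.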
You can take a look at MySQL Fabric (Official Doc), but it requires more database servers.
I have tried this tool only in an R&D environment, testing basic HA.
It supports some sharding scenarios.
Here are some high-level pros and cons:
Pros:
Cons: