You are right: PostgreSQL's built-in replication (aka Hot Standby / streaming replication) replicates the whole cluster - so it's not suitable in your case.
You will need some trigger-based solution. For example,
- Slony-I - most mature, written in C, flexible, good for complex setups
- SkyTools PgQ + Londiste - C + Python, more lightweight than Slony-I
- RubyRep - Ruby/JRuby, simple, easy to set up, not as mature
Please note that triggers always add some extra load on writes, and the initial sync is equivalent to dumping all replicated tables.
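To illustrate roughly what these trigger-based tools do under the hood (each uses its own queue schema - the table, function, and trigger names below are made up for the example): every change is captured by a trigger and queued into a log table, which a daemon then replays on the other side.

```sql
-- Hypothetical change-log table; the real tools use their own schemas.
CREATE TABLE replication_log (
    id         bigserial PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,
    row_data   text,
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    -- Every write now pays for this extra INSERT - that is the
    -- "extra load" mentioned above.
    INSERT INTO replication_log (table_name, operation, row_data)
    VALUES (TG_TABLE_NAME, TG_OP, COALESCE(NEW, OLD)::text);
    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

-- Attach one trigger per replicated table:
CREATE TRIGGER replicate_orders
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE PROCEDURE log_change();
```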
Hope this helps - feel free to come back with more detailed questions if you run into problems.
Good luck!
(I'm putting this in an answer, as it's way too long for a comment.)
We have a scenario similar to yours for our bug tracking system. We use it internally, of course, but customers can also submit issues through a page we created on our customer SharePoint site.
What we decided to do was host the database and website only at the office and provide external access from there (which we were already doing for some of our SaaS customers). If the internet totally bombs out (rare), it's more important that we can continue to work than for our customers to be able to submit new issues.
In your scenario, I don't know how critical the data is, how much data there is, or how important it is for external users to be able to write data.
Perhaps you could consider using a database at the alternate location as a read-only secondary, but direct all writes to the primary. While this will probably involve some application changes to separate read-only and read-write connections, this type of solution might be enough to satisfy the requirements for the small amount of time the office internet is down.
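At the application level, the read/write split can be as simple as routing each statement to one of two connection strings. A minimal sketch (the DSNs and function name here are made up for illustration - a real app would route per transaction, not per statement):

```python
# Two hypothetical connection strings.
PRIMARY_DSN = "host=office-db dbname=app"   # read-write, at the primary site
REPLICA_DSN = "host=remote-db dbname=app"   # read-only secondary

def pick_dsn(sql: str) -> str:
    """Route plain SELECTs to the replica; everything else to the primary."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return REPLICA_DSN if first_word == "SELECT" else PRIMARY_DSN

print(pick_dsn("SELECT * FROM issues"))           # -> replica DSN
print(pick_dsn("INSERT INTO issues VALUES (1)"))  # -> primary DSN
```

If the office connection drops, reads at the alternate site keep working against the secondary; only writes are blocked until the primary is reachable again.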
Regardless, I stick by my recommendation not to mix MySQL/SQL Server if you can avoid it. IMO, you'll be better off long-term by directing resources into proceeding with the existing migration plan, and holding off on developing a more robust replication solution until that stage of the project is complete.
Also, definitely try your best to avoid any master-master replication scenario. These can be highly non-trivial to configure and support at the best of times. The money and time spent developing and debugging a solution involving heterogeneous master-master replication will be astronomical, and it probably won't ever work correctly 100% of the time (actually, probably nowhere close to that). Not that a built-in homogeneous replication solution will be perfect either, but at least in that case you can call customer support if something blows up and you don't know how to fix it; if you roll your own solution, you're on your own.
Yes, you can run all the replication agents from the machine inside the firewall. After everything is set up normally, just disable the SQL Agent jobs that run the agents on the DMZ machine, then create the same jobs on the machine inside the network. Enable the jobs inside the network and start them.
The push and pull terminology in SQL Server replication just refers to the machine that actually runs the job. There are no other differences besides the job location.
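The disable/enable step can be scripted against msdb on each machine with `sp_update_job` and `sp_start_job` (the job name below is hypothetical - use the actual agent job names from your setup):

```sql
-- On the DMZ machine: disable the existing agent job.
EXEC msdb.dbo.sp_update_job
    @job_name = N'REPL-Distribution-MyPublication',
    @enabled  = 0;

-- On the machine inside the firewall: after recreating the same job,
-- enable it and kick it off.
EXEC msdb.dbo.sp_update_job
    @job_name = N'REPL-Distribution-MyPublication',
    @enabled  = 1;
EXEC msdb.dbo.sp_start_job
    @job_name = N'REPL-Distribution-MyPublication';
```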