Having done this type of project before, here are some of the practical gotchas to look out for:
"2. This master then does one-way slave propagation to up to 200 read-only servers via VPN."
Think carefully about the network traffic required here. Every change has to be sent to every replica, so in the simplest terms, if the insert/update/delete load on the database generates change traffic equal to just 1/200th of a network link's throughput, and your master server only has one network connection, that connection will be saturated simply by fanning the changes out to 200 read-only servers. (In practical terms, it would be saturated long before that, since the same link also has to accommodate query load and backups.)
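To put hypothetical numbers on it (these figures are assumptions, not measurements): a change stream averaging 5 Mbit/s, fanned out to 200 replicas, is roughly 5 Mbit/s × 200 = 1 Gbit/s of outbound traffic, i.e. the entire capacity of a single gigabit NIC before a single query or backup has been served.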
Next, think about the logging requirements. Every transaction has to be logged, and the logged changes are sent to the read-only servers as they happen. If any one read-only server drops offline, the master has to retain all of the logs until that server comes back online and consumes the changes. In a high-volume change environment like you're describing, the retained logs can easily grow larger than the original data set itself (since some records get updated over and over, or deleted). You need the ability to recognize when a replica has fallen further behind than it can realistically catch up from, and then the ability to reinitialize that replica from a backup/restore rather than from replication synchronization.
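As one concrete PostgreSQL illustration of recognizing a replica that has fallen too far behind, you could watch how much WAL each replication slot is holding back; the 50 GB threshold below is purely an assumption to tune for your environment:

    -- Sketch: flag replication slots whose retained WAL exceeds an
    -- arbitrary 50 GB threshold (the threshold is an assumption; pick
    -- whatever your replicas can realistically catch up from).
    SELECT slot_name,
           active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal,
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) > 50 * 1024^3 AS needs_reinit
    FROM pg_replication_slots;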
Since you want to be able to fail over from one master to another, every master would also need to be aware of the replication synchronization state of every read-only replica.
With these limitations, you can see why you usually want to separate the work of distribution out onto other servers: the masters handle writes and just batch the changes off to a set of distributor servers, which are responsible for tracking replication synchronization state across the many read-only servers. (This is how Microsoft SQL Server's transactional replication works, and I would imagine most other DBMSs have similar distributor architectures available as well.)
Schemas are used for organizational structure and namespacing. By namespacing I mean that you can have tables with the same name in several places. Another feature of schemas is an additional level of access control at the schema level. Whether you will have joins across schemas depends on your data model. You are already using schemas implicitly: PostgreSQL defines a "search path" of schemas for every session.
So every time you type create table X without qualifying the name, the table goes into the default schema (the first schema on the search path, normally "public").
And for queries you should use table aliases: then schema-qualifying a table only changes the FROM clause, and the rest of the query reads the same with or without schema names.
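A minimal sketch of the above, with made-up schema, table, and role names:

    -- Two schemas can hold tables with the same name (namespacing),
    -- and access can be granted per schema.
    CREATE SCHEMA london;
    CREATE SCHEMA chicago;
    CREATE TABLE london.accounts (id int PRIMARY KEY, name text);
    CREATE TABLE chicago.accounts (id int PRIMARY KEY, name text);
    GRANT USAGE ON SCHEMA london TO london_app;  -- role name is hypothetical

    -- An unqualified CREATE TABLE goes into the default schema
    -- (the first schema on the search path, normally "public").
    SHOW search_path;
    CREATE TABLE x (id int);  -- ends up as public.x

    -- With an alias, only the FROM clause changes when you add the
    -- schema name; the rest of the query stays the same.
    SELECT a.id, a.name
    FROM london.accounts AS a
    WHERE a.name = 'Alice';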
I don't have experience with the other parts (multitenant databases and Amazon and ...).
Best Answer
I don't know how these things apply to RDS in particular, but regarding community PostgreSQL at large, there are a few options:
Use logical replication to replicate data from the other servers into a single PostgreSQL database. A major caveat here is that the tables on the central server need to have the exact same table name, including schema name, as they do on the outlying servers. So if the outlying servers have tables with the same schema and table names as each other, that will be a problem. If you are starting from scratch, it is pretty easy to solve this by having each server include its server description in all of its schema names so that all schema names are unique (e.g. "london_accounts" and "chicago_accounts", rather than just "accounts" on both servers). But if you are adapting existing systems which didn't abstract away the schema names, it can be a major pain to have to change everything to fit this pattern.
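A rough sketch of that logical replication setup; the publication/subscription names, connection string, and schema-qualified table names are all made up for illustration:

    -- On the London server: publish the tables to be centralized.
    CREATE PUBLICATION london_pub
        FOR TABLE london_accounts.customers, london_accounts.invoices;

    -- On the central server: tables with exactly the same schema and
    -- table names must already exist, then subscribe to the publication.
    CREATE SUBSCRIPTION london_sub
        CONNECTION 'host=london.example.com dbname=app user=replicator'
        PUBLICATION london_pub;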
Or, you could just leave the data where it is and use the foreign data wrapper postgres_fdw to query it from the central location. Caveats here are that all the outlying servers have to be online for this to work, the high ping time and possibly low network throughput to them can be a major performance problem, and FDW queries have higher overhead and often get worse plans than queries against native local data.
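A sketch of the postgres_fdw approach; the server, schema, and credential names are assumptions:

    -- On the central server:
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER chicago_srv
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'chicago.example.com', port '5432', dbname 'app');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER chicago_srv
        OPTIONS (user 'reporting', password 'secret');

    -- Pull the remote tables in as foreign tables under a local schema.
    CREATE SCHEMA chicago_accounts;
    IMPORT FOREIGN SCHEMA accounts
        FROM SERVER chicago_srv INTO chicago_accounts;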
You can overcome the uptime, ping, and throughput issues by using physical replication of each outlying server to the central location (e.g. three database instances running on three different port numbers on the same piece of hardware), and then using FDW from a 4th server to each replica to tie them together. You might still suffer from some overhead and planning issues.
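In that layout, the FDW definitions on the 4th server just point at the local replicas on their different ports, set up otherwise exactly as in the previous sketch (host name, ports, and server names below are illustrative):

    -- One foreign server per physical replica, all on the same host
    -- but listening on different ports.
    CREATE SERVER london_replica
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'replica-host', port '5433', dbname 'app');
    CREATE SERVER chicago_replica
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'replica-host', port '5434', dbname 'app');
    CREATE SERVER berlin_replica
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'replica-host', port '5435', dbname 'app');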