I do it all the time with my T-SQL modules.
Essentially, all you need to do is run your modules from two or more connections in a loop for a couple of minutes. Typically, all potential problems are exposed in a few minutes, assuming you have a SQL Server box with decent CPUs.
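For illustration, a minimal harness can look like the sketch below; dbo.usp_TransferFunds and its parameters are placeholders for whatever module you want to exercise. Paste the same batch into two or more query windows and run them at the same time.

```sql
-- Concurrency smoke test: run this identical batch from several connections
-- simultaneously and watch for deadlocks, blocking and constraint violations.
-- dbo.usp_TransferFunds and its parameters are placeholders.
DECLARE @stop datetime2 = DATEADD(MINUTE, 2, SYSDATETIME());

WHILE SYSDATETIME() < @stop
BEGIN
    BEGIN TRY
        EXEC dbo.usp_TransferFunds @FromAccount = 1, @ToAccount = 2, @Amount = 10;
    END TRY
    BEGIN CATCH
        -- 1205 = chosen as deadlock victim; log anything else for investigation.
        IF ERROR_NUMBER() <> 1205
            PRINT CONCAT('Error ', ERROR_NUMBER(), ': ', ERROR_MESSAGE());
    END CATCH
END;
```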
I wrote a few examples here and here.
Managing environments
I think you definitely don't want to be forced into a single database version. You've got enough developers that you will inevitably have multiple development work streams, and requirements to apply patches to the current production environment independently of those work streams.
You can use Liquibase or a manual process to produce patch scripts to upgrade versions. I suggest starting out with a manual process and using the schema comparison tool for QA on the patches. Clean, automated, transparent synchronisation of a nontrivially complex database is a bit utopian.
Your central data model can be kept in whatever system takes your fancy. I've used everything from tedious enterprise repository tools to plain CREATE TABLE scripts. CREATE TABLE scripts play nicely with ordinary source control tools such as Subversion, and not all repository tools do a good job of versioning.
Whatever you use as your master data model repository, you need a fairly clean mechanism for deploying an environment from that model. It should be structured so that rollouts to an environment are easy. You also need a mechanism to patch from one released version to the next.
What I've done
I've done the following in the past when I was managing development environments. It's not particularly high-tech, but it's amenable to version control and automated builds, so rolling out an environment at a specific version is easy and maintaining a large number of environments is quite practical.
Maintain a central repository: This could be a set of database creation scripts held in a version control system, or a repository model in a data modelling tool. Take your pick. This model should have a build mechanism that allows an environment to be rolled out from the scripts without a lot of manual intervention.
If you have a lot of reference data you will need a load mechanism for it. Depending on how you want to do it, you could keep this in a database or in a set of files. The advantage of files is that they can also be versioned and labelled from the same version control system as your code base. A bunch of CSV files and bulk loader scripts in a source control repository can do this easily enough.
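For example, a reference data load script for SQL Server could be as simple as the following sketch; the table name and file path are placeholders that the build would substitute.

```sql
-- Load a reference table from a versioned CSV file held in source control.
-- dbo.RefCountry and the file path are illustrative.
TRUNCATE TABLE dbo.RefCountry;

BULK INSERT dbo.RefCountry
FROM 'C:\build\refdata\RefCountry.csv'
WITH (
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK
);
```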
One option for deploying development environments is to take backups of the production database patched to the appropriate version and make them available for devs to restore into a development environment.
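If you go the backup route, the restore itself is routine; something like the sketch below, where the database name, backup file and logical file names are all placeholders.

```sql
-- Restore a versioned production backup into a developer sandbox database.
-- Logical file names ('AppDb', 'AppDb_log') and paths are illustrative.
RESTORE DATABASE DevSandbox_Alice
FROM DISK = 'D:\backups\AppDb_v1.5.bak'
WITH MOVE 'AppDb'     TO 'D:\data\DevSandbox_Alice.mdf',
     MOVE 'AppDb_log' TO 'D:\data\DevSandbox_Alice.ldf',
     REPLACE;
```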
Make it easy to roll out: Like any CI build process, the database should be deployable from a single script. Set it up so that database connections can be parameterised, or the script is location independent and can just be run through the connection.
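On SQL Server, one way to do this is with sqlcmd scripting variables; the script below is a hypothetical top-level build script where the target database name is supplied from the command line, and the referenced file names are illustrative.

```sql
-- deploy.sql: run in sqlcmd mode, e.g.
--   sqlcmd -S myserver -i deploy.sql -v TargetDb="DevBuild_42"
-- The database name arrives as a scripting variable, so the same script can
-- be pointed at any environment.
IF DB_ID(N'$(TargetDb)') IS NULL
    CREATE DATABASE [$(TargetDb)];
GO

USE [$(TargetDb)];
GO

-- Pull in the versioned object and reference data scripts.
:r .\tables\create_tables.sql
:r .\refdata\load_refdata.sql
```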
Patch scripts: You will need roll-forward and probably roll-back scripts for each released version.
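A roll-forward patch script can be as simple as the sketch below; the SchemaVersion table, version numbers and the ALTER are all illustrative, and the matching roll-back script would reverse them.

```sql
-- patch_1.4_to_1.5.sql: hypothetical roll-forward patch.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = '1.4')
BEGIN
    RAISERROR('Expected schema version 1.4 - aborting patch.', 16, 1);
    RETURN;
END;

BEGIN TRANSACTION;

ALTER TABLE dbo.Customer ADD PreferredLanguage char(2) NULL;

UPDATE dbo.SchemaVersion
SET Version = '1.5', AppliedAt = SYSDATETIME();

COMMIT TRANSACTION;
```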
Build test environments from the repository model: This ensures that development on environments that are out of sync with the repository gets caught in testing.
Test the deployment process: Automated patching scripts, however they are created, should be testable. Schema comparison tools are quite good for this, even if you don't use them to generate the patch scripts.
Create a reference environment with the repository model build you tested against.
Create a smoke test environment with either a backup of your production release or a build based on the current released version.
Run the patch script against the smoke test environment.
Use the schema comparison tool to compare the smoke test environment with the reference environment. The patch script should result in the two databases being identical, so you can investigate any differences.
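If you don't have a schema comparison tool to hand, a crude substitute is to script the metadata from both databases and diff the output; for example, on SQL Server:

```sql
-- Run against both the smoke test and reference databases, save the results,
-- and diff the two files. A proper schema comparison tool does this better,
-- but this catches the obvious drift.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION,
       DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;
```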
What I like about this process
This is a bit heavyweight, and was designed for deploying into fairly bureaucratic and opaque production environments. However, it has the following strengths:
Developers can tinker where they need to.
Multiple branches and streams of development can be accommodated.
The deployment scripts themselves can be tested in a repeatable manner. This is very helpful for shutting down deployment bunfights, as the repeatability can be demonstrated.
The schema comparison tools provide QA on the deployment itself.
Testing is always done against what is expected to be released, and this will catch issues arising from environments being out of sync.
Deployment based on repository models and patch scripts means that uncontrolled rubbish doesn't get accidentally migrated from development environments into production.
A lot of the process can be automated, although it is often desirable to prepare and test the deployment patch scripts manually.
Environments are cheap and easy to deploy without having to jump through hoops.
Developers are forced to make a system that is amenable to a simple build and deployment process.
Developers are forced to learn basic database administration tasks, but test and production environments are insulated from noob mistakes.
How it addresses your requirements
Developers have local copy of data to run development code against
The deployment scripts or DB images mean that they can set up an environment from any version that's available.
Able to rollback database structure to a previous changeset
Again, sorted by the deployment scripts. Either through DDL or test database backup images created through a controlled process, developers can bring up an environment to any specific version that you have.
Able to separate new feature schema changes vs schema design fix changes
Patches to a common version can be maintained in a separate fork in the svn tree. If database backups are used as the reference environments, they need to be stored somewhere with the same folder structure as the branching of the source control projects.
Able to modify database structure locally for testing
The simple deployment process allows devs to tinker, easily restore a local environment to a known state, or bring up a reference environment to do comparisons and make change sets against.
Best Answer
One option is to use a "hot standby" server, either with log shipping or with streaming replication (or both). This will automatically apply changes from the master to the replica whenever there is connectivity. The problem is that the hot standby server is read-only. If your Django app never needs to write to the database, this is fine, but most applications write to the database when someone logs in, even if they are only used for reporting, so this will likely not work for you. Another potential problem is that unlogged tables are not available on the hot standby.
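You can see the read-only behaviour for yourself on the standby:

```sql
-- Returns true on a hot standby; any INSERT/UPDATE/DELETE there fails with
-- "cannot execute ... in a read-only transaction".
SELECT pg_is_in_recovery();
```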
Another possibility is to set up a standby server and then promote it when you need to use it. When you promote it, it becomes writable, but it severs the connection to the master and so will slowly diverge. You will have to recreate it every now and then to bring the two back into sync. If you need to minimize downtime, you can keep using the old promoted copy while you re-establish a new standby copy in the background. This is a bit more work, and requires enough storage for two full copies of the database (although it could be less if your master changes slowly and your file system supports fancy deduplication).
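On PostgreSQL 12 and later the promotion itself is a one-liner (older versions use pg_ctl promote instead):

```sql
-- Run on the standby. Once promoted it accepts writes and permanently
-- diverges from the master until you rebuild it, e.g. with pg_basebackup.
SELECT pg_promote();
```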
You could use logical replication to replicate changes from the master into a "live" writable test database. This is a lot more work, as you need to go through all the tables and decide which ones should keep receiving changes from the master, which should stand alone on the replica after the initial sync, and which need to do both with some kind of conflict resolution. And if the purpose of this dev database is to test work that is likely to have bugs and push bad data into your database, and you want to protect production from that corruption, this won't work very well: locally caused corruption will remain in the local database, replicating new changes from the master is unlikely to fix the bad local data, and you will need to do a full refresh to get rid of it. So overall, this method will probably be more work but no more effective than the previous solution. It also has the problem that it will not replicate schema changes, so if your master schema has not stabilized and is still getting new tables, new columns, etc., this will be a nightmare.
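For what it's worth, the selective setup itself is only a couple of statements; a minimal sketch, assuming the table names and connection details are placeholders and the schema already exists on the test side (logical replication will not create it for you):

```sql
-- On the master: publish only the tables whose changes should keep flowing
-- into the test database.
CREATE PUBLICATION test_feed FOR TABLE public.customers, public.orders;

-- On the test database: subscribe. The initial sync copies existing rows,
-- then changes stream in continuously.
CREATE SUBSCRIPTION test_feed_sub
    CONNECTION 'host=prod-db dbname=appdb user=replicator'
    PUBLICATION test_feed;
```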
Either create a demo database which has enough data to serve its purpose and forget about trying to keep it up to date, or get a bigger laptop. If you really do need all the latest data from the master but don't need the old stuff, you can use logical replication for that: create your subscription with copy_data = false to avoid copying the historical data. Presumably you need at least some history, but it would then be on you to figure out how much and how to get it.
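For example, the subscription would look something like this (names and connection string are placeholders):

```sql
-- Skip the initial copy of historical rows; only changes made after the
-- subscription is created will be replicated.
CREATE SUBSCRIPTION recent_only_sub
    CONNECTION 'host=prod-db dbname=appdb user=replicator'
    PUBLICATION test_feed
    WITH (copy_data = false);
```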