I would like to know if it is possible to share the structure and data between the two systems without duplicating it and/or having to copy everything each time I boot into the other OS. Effectively, I would like all the data created under Windows to be available to Ubuntu, and vice versa.
You can't. Just not supported, not possible. No can do. PostgreSQL does not attempt to make the data directory portable across architectures, operating systems, etc.
The only way you can do this is to run a Windows virtual machine inside Ubuntu when you want to access a Windows data directory, or a Linux virtual machine inside Windows when you want to access a Linux data directory.
You might want to run a VM image that's shared between both platforms and have it contain the database - but be careful: Linux's NTFS driver isn't great for performance and may not have all the wrinkles ironed out when it comes to concurrent I/O, so I'd be leery of running something like a VM image off it. FAT32 and exFAT are not crash-safe. So there aren't really any good choices for a file system both systems can share to run "real work" off.
So personally... I'd just use a small separate machine to host the database. Or I'd keep a dump of the DB and reload it after each reboot.
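A minimal sketch of that dump-and-reload routine, assuming a database named mydb and a dump file kept somewhere both systems can read (the database name and the /shared path are hypothetical):

    # Before shutting down one OS: dump the database to a single
    # portable archive file. pg_dump output is cross-platform.
    pg_dump --format=custom --file=/shared/mydb.dump mydb

    # After booting into the other OS: recreate the database and reload it.
    createdb mydb
    pg_restore --dbname=mydb /shared/mydb.dump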
(Additionally, if running PostgreSQL on Linux's NTFS file system driver works at all, that's a little surprising, and I certainly wouldn't rely on it for anything I cared about.)
SQL Server has JSON support, though it seems to be a variation on its XML support.
Postgres has had JSON support since 9.2 (and the indexable jsonb type since 9.4).
Teradata has had it since 2014.
This makes them hybrid stores. Obviously Postgres is the open-source one.
It really depends on what you need from JSON support. If it is just to return a document by its key, then that has always been possible without explicit JSON support.
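For instance, a sketch of that key-lookup case (table and column names are hypothetical), where the document is just opaque text:

    -- No JSON features involved: the document is stored verbatim
    -- and returned by its key like any other column.
    CREATE TABLE documents (
        doc_id bigint PRIMARY KEY,
        body   text NOT NULL  -- the JSON payload, stored as plain text
    );

    SELECT body FROM documents WHERE doc_id = 42;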
If you want to index certain parts of the JSON document, then that does require specific JSON support. There is also the question of how sparse the attribute you want to index is. If it is present in the vast majority of cases (or is even a mandatory attribute), then I would break it out of the JSON and make it an explicit attribute in a hybrid store.
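A sketch of both approaches in PostgreSQL's jsonb syntax (9.4+), with hypothetical table and attribute names: a GIN index over the whole document for ad hoc queries, a partial expression index for a sparse attribute, and a mandatory attribute broken out as a real column:

    -- Hybrid layout: the always-present attribute (customer_id) is a
    -- real column; everything else stays inside the jsonb document.
    CREATE TABLE events (
        event_id    bigserial PRIMARY KEY,
        customer_id bigint NOT NULL,  -- broken out of the JSON
        payload     jsonb NOT NULL
    );

    -- GIN index: supports containment queries anywhere in the document.
    CREATE INDEX events_payload_idx ON events USING gin (payload);

    -- Sparse attribute: a partial expression index covers only the
    -- rows that actually carry it.
    CREATE INDEX events_coupon_idx
        ON events ((payload ->> 'coupon_code'))
        WHERE payload ? 'coupon_code';

    -- Example containment query served by the GIN index:
    SELECT event_id FROM events WHERE payload @> '{"status": "failed"}';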
If by "parallelize or die" you mean "support a distributed dataset or die", then I'm not sure that I agree. For OLTP work you would have to be in a Times Top 100 company to approach the limits of the traditional RDBMS. Most companies just don't generate as much useful data as they would like to think.
Companies like Facebook, Twitter and Netflix are dealing with data at a scale orders of magnitude beyond what most people will ever see.
For high-end web analytics work, then yes, you might want a NoSQL product as a collector. Cassandra is useful in that respect, and it has tunable consistency.
The big gotcha in distributed systems is Brewer's CAP theorem: you can have at most two of Consistency, Availability, and Partition tolerance. It's a bit more blurred than that, but the two-of-three rule is generally true.
There's also the challenge of how you handle referential integrity, and in some cases how you honour primary key constraints. Your data might be splashed across many servers, so honouring a foreign key might require a lookup from one server to another. Even if this were a supported feature, it would seriously impact performance.
If people want to interact with an API that only talks JSON, then I see no problem with that, provided there is an underlying DAL that hides whether a data attribute lives inside a JSON document or as an explicit attribute in a hybrid design.
Best Answer
In theory, YES... if your mongod dbpath points at a partition that both sides can read and write. That rules out Linux file systems (ext2, ext3, ext4, ...), because Windows doesn't know how to handle them. It's also better not to use NTFS, because Linux doesn't always handle NTFS right. So the solution is to use the older FAT32 (vfat) partition type, which both sides handle well.
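If you do try it, the only moving part is pointing both installations at the same dbpath. A sketch, where /mnt/shared and D: are hypothetical mount points of the same FAT32 partition (and, this being dual-boot, only one mongod ever runs at a time):

    # Linux side: with the shared FAT32 partition mounted at /mnt/shared
    mongod --dbpath /mnt/shared/mongodata

    REM Windows side: the same partition, mounted as drive D:
    mongod.exe --dbpath D:\mongodata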