The notation that consists of sticking the letters "P" for Primary, "K" for Key and "F" for Foreign Key in front of attribute names is crippled, flawed and confusing.
It is crippled because it suggests that "foreign keys" are keys in a sense similar to "primary keys" and/or "candidate keys". They are not.
It is also crippled because the notation tends to instill the belief that there is somehow an important distinction between "primary" keys and "non-primary" keys. There is not.
It is flawed because it ignores the fact that any attribute can be part of any number of keys. The notation typically allows an attribute to participate in at most "the" "primary" key and only one other "non-primary" key.
It is confusing because it is both crippled and flawed. Here are the facts about keys:
Foreign keys are not "keys". The term is a blast from the past, and a gross misnomer. They denote an inclusion dependency from the table that the "foreign key" is on, to some other table. There can be any number of "foreign keys" on a table, to any number of other tables, and the very same set of attributes can even be a foreign key twice, or thrice, or any number of times (perhaps to just as many distinct other tables). Heck, the very same set of attributes can even be a "foreign key" twice or more to the very same table (with a different mapping of from/to attributes). Anyhow, any attribute can participate in any number of "foreign keys".
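To make the "same attribute set, two foreign keys, different mapping" case concrete, here is a hypothetical sketch (table and column names are made up for illustration). Note that the reversed target column list needs its own uniqueness declaration to be a valid referenced key:

```sql
CREATE TABLE conversion (
    from_unit text,
    to_unit   text,
    PRIMARY KEY (from_unit, to_unit),
    -- the reversed column order must itself be declared unique
    -- so that the second foreign key below has a valid target
    UNIQUE (to_unit, from_unit)
);

CREATE TABLE measurement (
    u1 text,
    u2 text,
    -- the very same attribute set {u1, u2}, twice a "foreign key"
    -- to the very same table, with a different from/to mapping
    FOREIGN KEY (u1, u2) REFERENCES conversion (from_unit, to_unit),
    FOREIGN KEY (u1, u2) REFERENCES conversion (to_unit, from_unit)
);
```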
Primary keys are just keys. There is nothing special about them in comparison with "non-primary" keys. All of them denote nothing more than a rule that, for the set of attributes they consist of, the combinations of values that appear for these attributes in the table must be unique. The fact that most SQL engines force you to declare a primary key is, once again, a sad and sorry blast from the past. The idea of singling out one key and declaring it "more important than the others" is merely psychological. It might suggest that this is the preferred key to use for the table at hand. Don't make the mistake of thinking that this constitutes an obligation for other database designers to reference only this key in the tables they design.
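A minimal sketch of the point (names are illustrative): which of the three keys below gets the label "primary" is an arbitrary choice; each one states exactly the same kind of uniqueness rule, and a referencing table may target any of them.

```sql
CREATE TABLE employee (
    emp_id integer,
    email  text     NOT NULL,
    ssn    char(11) NOT NULL,
    PRIMARY KEY (emp_id),  -- one key, arbitrarily labelled "primary"
    UNIQUE (email),        -- a second key, no less a key than the first
    UNIQUE (ssn)           -- a third
);
```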
In brief: there can be any number of "foreign keys", and any attribute can participate in any of them. There can be any number of keys, and any attribute can participate in any of them.
A notation or tool that respects these facts will, at the very least, allow each key to be identified properly (e.g. using labels such as "K1", "K2", ..., "F1", "F2", ...) and allow each attribute to be accompanied by more than one of these labels (e.g. "K1,K2,F2").
Furthermore, with respect to "foreign keys" specifically, such a tool will also allow the exact and complete nature of the "foreign key" to be documented: that is, it will allow the target table of the foreign key to be identified, and it will allow the mapping between the attributes of the "source" and "target" tables to be specified.
Some thoughts....
Typically one does not want to store pieces of tightly interrelated information in different systems. The chances of things getting out of sync are significant, and then instead of one problem on your hands you have two. One thing you can do with Mongo, though, is use it to pipeline your data in or out. My preference is to keep everything in PostgreSQL to the extent this is possible. However, I would note that doing so really requires expert knowledge of PostgreSQL programming and is not for shops unwilling to commit to using advanced features. I see a somewhat different set of options than you do; since my preference is not among the ones you list, I will give it to you.
You can probably separate your metadata into common data, data required for classes, and document data. In this approach you would have a general catalog table with the basic common information, plus one table per class. Each per-class table would have an hstore, json, or xml field storing the rest of the data, along with ordinary columns for data that must be tightly constrained. This reduces what you need to put in these tables per class, but still lets you leverage constraints however you like. The three options have different issues and are worth considering separately:
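A hypothetical sketch of that layout, using hstore for the semi-structured part (table and column names are invented for illustration):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

-- general catalog: the basic common information
CREATE TABLE catalog (
    doc_id serial PRIMARY KEY,
    title  text NOT NULL,
    class  text NOT NULL
);

-- one table per class, e.g. for an "invoice" class
CREATE TABLE invoice_meta (
    doc_id   integer PRIMARY KEY REFERENCES catalog (doc_id),
    -- attributes that must be constrained significantly get real columns
    amount   numeric NOT NULL CHECK (amount >= 0),
    currency char(3) NOT NULL,
    -- everything else goes into the semi-structured field
    extra    hstore
);
```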
hstore is relatively limited but used by a lot of people. It isn't especially new, but it is only a flat key/value store and, unlike json and xml, is incapable of representing nested data structures.
json is quite new and doesn't really do a lot right now. This doesn't mean you can't do a lot with it, but you aren't going to get much out of the box; expect to do a significant amount of programming, probably in plv8js or, if you want to stick with older environments, plperlu or plpython. json is better supported in 9.3, at least in current development snapshots, so things will get better when that version is released.
xml is the best supported of the three, with the most features, and the longest support history. Then again, it is XML.....
However, if you do decide to use Mongo and PostgreSQL together, note that PostgreSQL supports two-phase commit. This means you can run the write operations and then issue PREPARE TRANSACTION; if that succeeds, do your atomic writes in Mongo; and if those succeed, issue COMMIT PREPARED in PostgreSQL.
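The PostgreSQL side of that sequence looks roughly like this (the transaction identifier is arbitrary, and the server must have max_prepared_transactions set above zero for PREPARE TRANSACTION to work):

```sql
BEGIN;
-- ... your write operations ...
PREPARE TRANSACTION 'sync_with_mongo';    -- phase 1: the transaction is now
                                          -- durable and survives a crash

-- now perform the corresponding atomic writes in Mongo, then:
COMMIT PREPARED 'sync_with_mongo';        -- success path
-- ROLLBACK PREPARED 'sync_with_mongo';   -- failure path, if the Mongo writes failed
```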
The data you are generating from the Raspberry Pi's temperature and humidity sensor is time-series data. MongoDB has documented some best practices and key tips for storing time-series data.
Time Series Data and MongoDB: Part 2 – Schema Design Best Practices
This explains the following schema patterns for time-series data:
Scenario 1: One document per data point
Scenario 2: Time-based bucketing of one document per minute
Scenario 3: Size-based bucketing
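As an illustration of scenario 2 (time-based bucketing), one document per sensor per minute can be maintained with an upserting update that appends each reading to an array. This is a sketch only; the collection and field names are invented, not taken from the MongoDB article:

```javascript
// One bucket document per sensor per minute; each reading is
// pushed into the "samples" array of its minute's bucket.
db.readings.updateOne(
  { sensor_id: "rpi-1", minute: ISODate("2023-05-01T12:34:00Z") },
  {
    $push: {
      samples: {
        ts: ISODate("2023-05-01T12:34:17Z"),
        temp_c: 21.4,
        humidity: 48.2
      }
    },
    $inc: { count: 1 }   // running count of readings in this bucket
  },
  { upsert: true }        // create the bucket on the first reading of the minute
);
```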