Store the data in flat text files, uniquely named per system, rather than in a database.
Databases (particularly relational ones) excel at storing data in tables with a consistent schema, relating those tables to one another, and querying them in arbitrary ways. What you are describing is (based on my reading) basically a web page that displays text files. There is only one way to access the data (via what is effectively a unique key, the system name), there don't appear to be any relationships, and the data will vary considerably from system to system (hence your issue with lots of columns).
So I don't see much benefit in using a relational database (a document database such as RavenDB, CouchDB or MongoDB may be a better fit, but even they are likely to be overkill for what you describe).
If you're really keen on using MySQL, just store the contents of the file in a table you can look up by name:
system_name VARCHAR(255) NOT NULL PRIMARY KEY
file_type CHAR(2) NOT NULL = 'F' or 'IF' (for frequent or infrequent)
data LONGTEXT NULL
And parse your textual data in PHP (or simply display it).
You could store your data exactly as you receive it, or process it into XML, JSON or YAML to standardise it.
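As a concrete sketch of the table described above (the table name and the CHECK clause are my additions; the CHECK requires MySQL 8.0.16+ and can be dropped on older versions):

```sql
-- Hypothetical MySQL table for the raw per-system data.
CREATE TABLE system_data (
    system_name VARCHAR(255) NOT NULL,
    file_type   CHAR(2)      NOT NULL,  -- 'F' (frequent) or 'IF' (infrequent)
    data        LONGTEXT     NULL,      -- raw file contents, or XML/JSON/YAML
    PRIMARY KEY (system_name),
    CHECK (file_type IN ('F', 'IF'))
);
```

Lookup is then a single primary-key read: SELECT data FROM system_data WHERE system_name = 'web01';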
I just noticed the "show common properties" requirement.
Perhaps you could have columns in your table for those common properties, but not all the properties. You can quickly query the common properties while still being able to view the full data. (I don't have direct experience with any of the above document databases, but I believe all of them will let you index arbitrary fields in your documents, which will be much more flexible than a fixed number of columns).
If you really want to stay with MySQL, you may want to look into the Entity-Attribute-Value pattern. (IMO, EAV is an anti-pattern, but it should be OK just for the common properties).
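For reference, a minimal EAV sketch covering just the common properties (table, column and attribute names here are purely illustrative):

```sql
-- Hypothetical EAV layout: one row per (system, property) pair.
CREATE TABLE system_property (
    system_name VARCHAR(255) NOT NULL,
    attribute   VARCHAR(64)  NOT NULL,  -- e.g. 'os', 'ram_gb'
    value       VARCHAR(255) NULL,      -- everything stored as text: the EAV trade-off
    PRIMARY KEY (system_name, attribute)
);

-- Query one common property across all systems:
SELECT system_name, value
FROM   system_property
WHERE  attribute = 'os';
```

The composite primary key keeps one value per property per system and makes the query above an index lookup; the downside, as with all EAV, is that types and constraints on the values are lost.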
Of course, if display time isn't really a big worry, just opening and parsing 1,000 text files won't take too long (I'd expect ~10 seconds).
I think it's faster (feel free to test) to keep the record in the same table while processing, but be sure to set an index on the column(s) that track which processor is using them.
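A sketch of what that could look like, assuming the data table is called system_data and using a hypothetical processor_id column to mark which worker has claimed each row:

```sql
-- Hypothetical: track which processor owns each row, and index that column.
ALTER TABLE system_data
    ADD COLUMN processor_id INT NULL;

CREATE INDEX idx_system_data_processor
    ON system_data (processor_id);

-- A worker claims unclaimed rows, then finds its batch via the index:
UPDATE system_data
SET    processor_id = 42
WHERE  processor_id IS NULL
LIMIT  100;

SELECT system_name
FROM   system_data
WHERE  processor_id = 42;
```

Without the index, every worker's SELECT and UPDATE would scan the whole table.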
Also, use MyISAM tables if you don't require InnoDB's features. An archive table is best given the ARCHIVE storage engine, since it does not leave space for updates (assuming you don't need to write to it).
Best Answer
The only way that I can think of to do this (without using triggers) is to use Firebird. This doesn't really answer the question (which asked about MySQL), but it is interesting functionality to explore - see the fiddle here!
It does make the table declaration more complex (con) but with the massive pro of not having to maintain triggers oneself. It would appear from the error messages that Firebird uses triggers in the background - better that the server maintain them than me!
Then, to test (standard functionality first):
fails with the error:
Look at the quality of that error message - even telling you the offending value!
Then we try (again, standard CHECK constraint stuff). Error message:
Now, we test the SQL within the CHECK constraint! This gives the error:
The SQL in the CHECK constraint works the other way round also - see the fiddle. Note the part of the error message where it says "table TEST At trigger 'CHECK_3'" - triggers at work in the background! This is really powerful stuff! It's such a shame that Firebird isn't more popular/widespread - it really deserves the no. 2 spot in the F/LOSS database ecosphere!
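To give a flavour of the kind of thing being tested (this is my own illustrative sketch, not the SQL from the fiddle - names and values are invented): Firebird accepts a subquery inside a CHECK constraint, which the other servers reject.

```sql
-- Firebird: a CHECK constraint containing SQL (a subquery).
-- Firebird enforces it via system-generated triggers behind the scenes.
CREATE TABLE test (
    x INTEGER,
    CONSTRAINT test_x_chk
        CHECK (x > (SELECT COALESCE(MAX(t.x), 0) FROM test t))
);

-- INSERT INTO test VALUES (5);  -- accepted (table was empty, so 5 > 0)
-- INSERT INTO test VALUES (3);  -- rejected by the CHECK (3 is not > 5)
```

Try the equivalent in MySQL or PostgreSQL and you get a syntax/feature error, since neither allows subqueries in CHECK constraints.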
Incredibly, MySQL only recently introduced CHECK constraints (some 25 years after all its other major - and not so major - competitors). But now it appears to be making serious efforts to catch up! However, nobody else (AFAIK) has SQL in CHECK constraints, not even the mighty Oracle, MS SQL Server or PostgreSQL (note to self - suggest this on the lists!).