Store the data in flat text files which are uniquely named based on each system, not in a database.
Databases (particularly relational ones) excel at storing data in tables with a consistent schema, relating those tables to one another, and querying them in arbitrary ways. What you are describing is (based on my reading) basically a web page that displays text files. There is only one way to access the data (via what is effectively a unique key, the system name), there don't appear to be any relationships, and the data will vary considerably from system to system (hence your issue with lots of columns).
So I don't see much benefit in using a relational database (a document database such as RavenDB, CouchDB or MongoDB may be a better fit, but even they are likely to be overkill for what you describe).
If you're really keen on using MySQL, just store the contents of the file in a table you can look up via a name:
system_name  varchar(255)  NOT NULL  PRIMARY KEY
file_type    char(2)       NOT NULL  -- 'F' or 'IF' (for frequent or infrequent)
data         longtext      NULL
And parse your textual data in PHP (or simply display it).
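As a minimal sketch of that single-table approach, here it is in Python with SQLite standing in for MySQL (the schema mirrors the one above; I've assumed a system can have both a frequent and an infrequent file, so the primary key is widened to cover both columns — drop `file_type` from the key if one row per system is enough):

```python
import sqlite3

# SQLite stands in for MySQL purely for illustration; the table and
# column names follow the schema sketched above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE system_files (
        system_name TEXT NOT NULL,
        file_type   TEXT NOT NULL,   -- 'F' (frequent) or 'IF' (infrequent)
        data        TEXT,
        PRIMARY KEY (system_name, file_type)
    )
""")

# Store a file's contents verbatim, then look it up by system name.
conn.execute(
    "INSERT INTO system_files (system_name, file_type, data) VALUES (?, ?, ?)",
    ("web01", "F", "cpu=4\nram=16GB"),
)
row = conn.execute(
    "SELECT data FROM system_files WHERE system_name = ? AND file_type = ?",
    ("web01", "F"),
).fetchone()
print(row[0])
```

The system name `web01` and the file contents are invented for the example; the lookup itself is just a primary-key `SELECT`.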
You could store your data exactly as you receive it, or process it into XML, JSON or YAML to standardise it.
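For the "process it into JSON" option, a sketch like this would do (the `key=value` input format is an assumption — substitute whatever your systems actually emit):

```python
import json

# Hypothetical raw report as it might arrive from a system; the
# key=value-per-line format is invented for illustration.
raw = "hostname=web01\ncpu_count=4\nos=Debian"

# Normalise the lines into a dict, then serialise to JSON for storage.
props = dict(line.split("=", 1) for line in raw.splitlines() if "=" in line)
standardised = json.dumps(props, sort_keys=True)
print(standardised)
```

Storing the standardised form means the display code only ever has to parse one format, whatever the systems send.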
I just noticed the "show common properties" requirement.
Perhaps you could have columns in your table for those common properties, but not for all the properties. That way you can quickly query the common properties while still being able to view the full data. (I don't have direct experience with any of the above document databases, but I believe all of them will let you index arbitrary fields in your documents, which will be much more flexible than a fixed number of columns.)
If you really want to stay with MySQL, you may want to look into the Entity-Attribute-Value pattern. (IMO, EAV is an anti-pattern, but it should be OK just for the common properties).
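A rough sketch of EAV restricted to the common properties, again using SQLite in place of MySQL (attribute names and values are invented):

```python
import sqlite3

# One row per (system, attribute) pair: the Entity-Attribute-Value shape.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE common_properties (
        system_name TEXT NOT NULL,
        attribute   TEXT NOT NULL,
        value       TEXT,
        PRIMARY KEY (system_name, attribute)
    )
""")
conn.executemany(
    "INSERT INTO common_properties VALUES (?, ?, ?)",
    [
        ("web01", "os", "Debian"),
        ("web01", "ram_gb", "16"),
        ("db01",  "os", "Debian"),
    ],
)

# Querying one common property across all systems is a simple filter.
debian = conn.execute(
    "SELECT system_name FROM common_properties"
    " WHERE attribute = 'os' AND value = 'Debian'"
    " ORDER BY system_name"
).fetchall()
print([r[0] for r in debian])  # ['db01', 'web01']
```

Because only a handful of common properties live here (the full file stays elsewhere), the usual EAV drawbacks — no type safety, awkward multi-attribute queries — are contained.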
Of course, if display time isn't really a big worry, just opening and parsing 1000 text files isn't going to take too long (I'd expect ~10 seconds).
What I have done in the past is to use something like this:
1) Each document has a logical type associated with it.
2) Set up a metadata table for each logical type. Each row stores all the metadata associated with one document of that type.
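A sketch of that per-type layout, using SQLite and an invented "invoice" document type with made-up metadata columns:

```python
import sqlite3

# Approach above: a documents table plus one metadata table per logical
# type, with a real column per attribute of that type.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (
        doc_id   INTEGER PRIMARY KEY,
        doc_type TEXT NOT NULL          -- e.g. 'invoice', 'report'
    );
    CREATE TABLE invoice_metadata (
        doc_id      INTEGER PRIMARY KEY REFERENCES documents(doc_id),
        customer    TEXT,
        total_cents INTEGER
    );
""")
conn.execute("INSERT INTO documents VALUES (1, 'invoice')")
conn.execute("INSERT INTO invoice_metadata VALUES (1, 'ACME', 12500)")

# One row holds all metadata for the document; joins stay trivial.
row = conn.execute("""
    SELECT d.doc_id, m.customer, m.total_cents
    FROM documents d JOIN invoice_metadata m USING (doc_id)
""").fetchone()
print(row)  # (1, 'ACME', 12500)
```

The trade-off is a schema change whenever a type gains an attribute, in exchange for typed columns and straightforward queries.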
The other approach is key-value modelling, which can work well or badly depending on what you are doing with it. In this case you have a single metadata table which stores all metadata, one value per record, for all documents. This works best if you find a reasonable way to aggregate the data in result sets, and if you aren't doing complex searches across multiple metadata fields.
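One reasonable way to aggregate those one-value-per-record rows back into result sets is conditional aggregation, sketched here with SQLite and invented attribute names:

```python
import sqlite3

# Single key-value metadata table shared by all documents.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metadata (doc_id INTEGER, attr TEXT, value TEXT)")
conn.executemany("INSERT INTO metadata VALUES (?, ?, ?)", [
    (1, "author", "alice"), (1, "year", "2011"),
    (2, "author", "bob"),   (2, "year", "2012"),
])

# Conditional aggregation pivots the rows into one row per document.
rows = conn.execute("""
    SELECT doc_id,
           MAX(CASE WHEN attr = 'author' THEN value END) AS author,
           MAX(CASE WHEN attr = 'year'   THEN value END) AS year
    FROM metadata
    GROUP BY doc_id
    ORDER BY doc_id
""").fetchall()
print(rows)  # [(1, 'alice', '2011'), (2, 'bob', '2012')]
```

Each attribute you want as a column needs its own `CASE` branch, which is exactly why this shape suits display and simple filters better than complex multi-field searches.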
Best Answer
It sounds like you want to read up on Joins: http://en.wikipedia.org/wiki/Join_%28SQL%29
A "join" basically lets you have a query that returns data from multiple tables, based on some criteria, such as records in two tables having a certain matching value. If you want to give the appearance of having this combined data all in one table, you could create a View ( http://en.wikipedia.org/wiki/View_%28database%29 ) of the query with all the joins. MS Access lets you use views to drive reports as easily as it lets you use tables for that purpose.
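To make the join-plus-view idea concrete, here is a small sketch in Python with SQLite (the same SQL works, with minor dialect differences, in MySQL or Access via SQL; table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE systems (
        system_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL
    );
    CREATE TABLE reports (
        report_id INTEGER PRIMARY KEY,
        system_id INTEGER REFERENCES systems(system_id),
        body      TEXT
    );
    -- The view wraps the join so the combined data reads like one table.
    CREATE VIEW system_reports AS
        SELECT s.name, r.body
        FROM systems s
        JOIN reports r ON r.system_id = s.system_id;
""")
conn.execute("INSERT INTO systems VALUES (1, 'web01')")
conn.execute("INSERT INTO reports VALUES (10, 1, 'all ok')")

# Query the view exactly as you would a plain table.
view_rows = conn.execute("SELECT name, body FROM system_reports").fetchall()
print(view_rows)  # [('web01', 'all ok')]
```

Anything that can read a table — a report, an export, a display page — can read the view, without knowing a join is happening underneath.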