Yes, there is a pretty big pitfall you're going to run into fairly quickly, and that is with the size and maintenance of the tables. You are somewhat on the right track by saying that you want to put your data into a temporary table daily, and then move it into your permanent table, but you'll soon run into trouble with this scheme.
For example, let's say you want to "roll off" the oldest month's worth of data after two years. In your design, you would have to issue a DELETE statement against your big, big table. That will likely be slow, depending on the number of indexes you have. It will also cause index fragmentation, and the only way to fix that is to rebuild or reorganize the indexes on this very large table, which creates performance problems of its own. There is a whole host of other issues with a single-big-table design as well. For example, with one big table you can't take advantage of FILEGROUP-based backups, which means that any full backup of your database is gonna be BIG, and it's gonna take a LONG time to complete.
What's the solution? Table partitioning. Read about this in depth, in as many places as you can. Basically, partitioning allows you to split up your data into "tables within tables" -- each partition shares the same schema and is accessed through the table object, but can be indexed and maintained differently. Partitions are basically tables, cut up by some useful key; in your case it will likely be date. They can be dropped just like (and just as fast as) tables, which means that if you partition your big data tables by date, you can simply drop old partitions instantly, with no adverse effect on the indexes of any of the other partitions. You can put partitions on different filegroups, which means that older partitions can be rolled off, or moved onto cheaper commodity storage if they're not commonly used. Last but not least, in SQL 2012 you'll be able to create COLUMNSTORE indexes on your older, read-only partitions, while keeping a different, more insert-oriented indexing scheme on the active partition where you're inserting all your sensor data.
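As a rough sketch (SQL Server 2008+ syntax; every object name here is made up for illustration), a date-partitioned sensor table with a monthly "roll off" might look something like this:

```sql
-- One partition per month, keyed on the reading date.
CREATE PARTITION FUNCTION pfSensorDate (date)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2011-02-01', '2011-03-01' /* ... */);

CREATE PARTITION SCHEME psSensorDate
AS PARTITION pfSensorDate ALL TO ([PRIMARY]);  -- or spread across filegroups

CREATE TABLE dbo.SensorData
(
    SensorId    int            NOT NULL,
    ReadingDate date           NOT NULL,
    Payload     varbinary(max) NULL,
    CONSTRAINT PK_SensorData PRIMARY KEY (ReadingDate, SensorId)
) ON psSensorDate (ReadingDate);

-- Roll off the oldest month: a metadata-only SWITCH to an identically
-- structured, empty staging table on the same filegroup, then drop it.
ALTER TABLE dbo.SensorData SWITCH PARTITION 1 TO dbo.SensorData_Staging;
DROP TABLE dbo.SensorData_Staging;
-- Optionally merge away the now-empty boundary point:
ALTER PARTITION FUNCTION pfSensorDate() MERGE RANGE ('2011-01-01');
```

The SWITCH/DROP pair is what makes the purge near-instant: no rows are deleted, only metadata changes hands.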
Hope this helps. You have a good amount of research to do regarding partitioning and partitioning schemes, but hopefully now you know the direction you need to be looking.
P.S.:
Oh, and I forgot your bulleted list of questions...
Answers 1, 2, and 5: see above. Answer 3: in SQL Server you can compress on a partition-by-partition basis, so compress your older partitions aggressively using PAGE compression. I believe your out-of-row large data types will not be compressed if you do this, though -- again, you may want to alleviate that problem by normalizing some of your sensor values. Answer 4: absolutely not, but if all you want to do is store static data by day and never search it any other way, compressed flat files may be a much easier way to go.
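For illustration (table name assumed from the earlier sketch), per-partition PAGE compression of the older, read-mostly partitions looks like:

```sql
-- Rebuild all partitions, applying PAGE compression only to the
-- older ones (here, hypothetically, partitions 1 through 12).
ALTER TABLE dbo.SensorData
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS (1 TO 12));
```

The active insert partition is left uncompressed so writes stay cheap.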
P.P.S:
Oh, and another thing. You don't need your two-table solution to make this all work. Large binary sensor data should be of type VARBINARY(MAX) because its values can be stored "out of row" but still be a column in a single table (see the sp_tableoption documentation). You may want to consider normalizing some of your sensor data out of the binary data you have in the table, though, because your database won't be good for much beyond retrieving chunks of sensor data by time if you don't.
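Enabling out-of-row storage is a one-line table option (table name assumed for illustration; see the sp_tableoption documentation):

```sql
-- Store large-value types (varbinary(max), varchar(max), etc.) off the
-- data page, leaving only a 16-byte pointer in the row.
EXEC sp_tableoption 'dbo.SensorData', 'large value types out of row', 1;
```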
The MySQL SET datatype is a disaster - as are array datatypes of any kind (unfortunately supported by many RDBMSs). Check out my answer to another question here. As I note, MySQL's SET is a breach of Codd's second rule - no repeating group datatypes. It is also completely non-portable should you wish to change RDBMS.
If you wish to store that data, the way to do it is to have a BOOK table with book_id, name, author, country_of_publication, subject.... Then have a CHAPTER table with book_id, chapter_id, chapter_name, chapter_sub_subject...., then a PAGE table with book_id, chapter_id, page_id, page_number, ... images... other stuff..., and then a LINE table with book_id, chapter_id, page_id, line_id, line_text...
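A minimal sketch of that schema in MySQL (column types are assumptions; adjust to your data):

```sql
CREATE TABLE book (
    book_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name      VARCHAR(255) NOT NULL,
    author    VARCHAR(255),
    country_of_publication VARCHAR(100),
    subject   VARCHAR(100)
);

CREATE TABLE chapter (
    book_id      INT UNSIGNED NOT NULL,
    chapter_id   INT UNSIGNED NOT NULL,
    chapter_name VARCHAR(255),
    chapter_sub_subject VARCHAR(100),
    PRIMARY KEY (book_id, chapter_id),
    FOREIGN KEY (book_id) REFERENCES book (book_id)
);

CREATE TABLE page (
    book_id     INT UNSIGNED NOT NULL,
    chapter_id  INT UNSIGNED NOT NULL,
    page_id     INT UNSIGNED NOT NULL,
    page_number INT,
    PRIMARY KEY (book_id, chapter_id, page_id),
    FOREIGN KEY (book_id, chapter_id) REFERENCES chapter (book_id, chapter_id)
);

CREATE TABLE line (
    book_id    INT UNSIGNED NOT NULL,
    chapter_id INT UNSIGNED NOT NULL,
    page_id    INT UNSIGNED NOT NULL,
    line_id    INT UNSIGNED NOT NULL,
    line_text  TEXT,
    PRIMARY KEY (book_id, chapter_id, page_id, line_id),
    FOREIGN KEY (book_id, chapter_id, page_id)
        REFERENCES page (book_id, chapter_id, page_id)
);
```

One set of tables holds every book; book_id distinguishes them.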
This conforms to the relational model. I'm not clear on why you'd want to do this. What's wrong with TEXT datatype? Also, you might want to look into FTS (Full Text Search) - MySQL's native FTS (available on both MyISAM and InnoDB now). There's also Lucene, Sphinx and Spider (there may well be others - Google is your friend).
[EDIT in response to OP's comment]
I had the feeling it was wrong to make identically-structured tables
for each book
The whole point is that you should have identically structured tables for all books - you make the structure general enough to cope with (virtually) every possibility.
but I also thought it would be slower to search through larger table
(containing ALL sentences not just 1book's worth) to (re)construct a
page, as well as doing an extra WHERE book_id = 'N' match...
Use indexes on book_id, chapter_id and maybe even page_id. Also, if you're going to be searching your entire corpus for a particular word, then consider Full Text Indexing.
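For example, with the line table sketched above (names are illustrative), the page-reconstruction lookup and a corpus-wide word search could be supported like this:

```sql
-- Composite index covering the typical "fetch one page" lookup path.
CREATE INDEX idx_line_location ON line (book_id, chapter_id, page_id);

-- Full-text index for searching the whole corpus (InnoDB since MySQL 5.6).
CREATE FULLTEXT INDEX ft_line_text ON line (line_text);

-- Find every line, in any book, containing a phrase.
SELECT book_id, chapter_id, page_id, line_id
FROM line
WHERE MATCH(line_text) AGAINST ('whiteness of the whale' IN NATURAL LANGUAGE MODE);
```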
Presumably this extra overhead is near-identical to me first sending
mysql to the single-book table. This leaves me with the query of how
to store a small array for use in PHP --- I'm leaning towards VARCHAR
with (un)serialize.
Be aware that the maximum size of VARCHAR is 65535 bytes - some of James Joyce's sentences might go beyond that :-), but it's unlikely that even Joyce went beyond the 4GB of a TEXT datatype :-)
I Googled "storing books in MySQL" and came up with this which you may find of interest, and interestingly up popped this on the first page of the same search.
Best Answer
What I would do is create a table with the stock_id (which can be the alphanumeric code or an integer), the timestamp of the measurement, and the current value. That is your entry data: 3 columns.
From that point you can add columns for calculations (the difference, absolute or percent) against the previous value. Having everything in the same table will simplify the model and ease your queries. Try to create a date (not timestamp) column and partition by it. That can speed up access to the table, as long as your queries filter on that date column so the partitions can be pruned.
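A hedged sketch of that table, assuming MySQL (all names and types here are illustrative; note that MySQL requires the partitioning column to appear in every unique key, hence the date column in the primary key):

```sql
CREATE TABLE stock_price (
    stock_id      INT UNSIGNED  NOT NULL,
    measured_at   TIMESTAMP     NOT NULL,
    measured_date DATE          NOT NULL,  -- derived from measured_at, used for partitioning
    price         DECIMAL(12,4) NOT NULL,
    diff_abs      DECIMAL(12,4),           -- change vs. previous value, absolute
    diff_pct      DECIMAL(8,4),            -- change vs. previous value, percent
    PRIMARY KEY (stock_id, measured_at, measured_date)
)
PARTITION BY RANGE COLUMNS (measured_date) (
    PARTITION p2023 VALUES LESS THAN ('2024-01-01'),
    PARTITION p2024 VALUES LESS THAN ('2025-01-01'),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```

Queries that include a condition on measured_date can then touch only the relevant partitions instead of scanning the whole table.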