MySQL – fixed-length table

Tags: database-size, mysql, table, timestamp

I have only ever used MySQL and stored data in .txt files.

Currently I'm working with tick data (I have Node.js on my server). My custom setup so far stores a fixed amount of tick data as comma-separated values in a .txt file (I use JavaScript to append new values and delete old ones). The setup can't safely collect as much data as I would like, and occasionally a server hiccup loses or damages the data. It takes a long time to re-grow all the data, so I need something more robust; this approach is too fragile.

How can I achieve a data structure that fits this description?

The database should store prices associated with Unix timestamps, ordered oldest to newest. The database should have a maximum number of rows (100,000). If adding one new row would put the database over its maximum size, it should delete its oldest row.

My thoughts are:
  • Could I set the primary key to be the timestamp?
  • How would I handle keeping the database to a set number of rows?
  • What would the speed be if I get all the rows at once in a query?
  • What database language would be suited for this?

I only have 3-4 users that will access this app, so there won't be a lot of queries. Txt files are definitely not a good way (for me) to store large amounts of data.

Any advice?

Best Answer

Could I set the primary key to be the timestamp?

Yes, the primary key can be any column, but I think you'd be better off using an ID column for the primary key to ensure it's unique (which is technically impossible to guarantee with a timestamp primary key, since two ticks can land on the same timestamp). As ypercube suggests, a composite PK of (timestamp, id), combined with millisecond precision on the time value, is also a good option when using InnoDB on MySQL 5.6+.

You can still create an index on the timestamp column, which will give similar performance; though if you're pulling back all rows every time, it's hardly an issue to begin with.
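For illustration, such a table might look like the sketch below; the table and column names (ticks, ts, price) are mine, not the asker's:

    -- A sketch, not a prescription: surrogate id PK plus a secondary
    -- index on the timestamp. TIMESTAMP(3) gives millisecond precision
    -- (MySQL 5.6+); a BIGINT Unix-epoch-in-ms column would also work.
    CREATE TABLE ticks (
        id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        ts    TIMESTAMP(3)    NOT NULL DEFAULT CURRENT_TIMESTAMP(3),
        price DECIMAL(18,8)   NOT NULL,
        PRIMARY KEY (id),
        KEY idx_ts (ts)
    ) ENGINE=InnoDB;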

How would I handle keeping the database to a set number of rows?

You can use a stored procedure to bundle several pieces of logic (a sketch follows the list):

  • Insert the new row
  • Check the table's row count
    • If it exceeds 100,000, delete the oldest row
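A minimal sketch of such a procedure, assuming the illustrative ticks table above:

    DELIMITER //

    -- Insert one tick, then trim the oldest row if the cap is exceeded.
    CREATE PROCEDURE insert_tick(IN p_ts TIMESTAMP(3), IN p_price DECIMAL(18,8))
    BEGIN
        INSERT INTO ticks (ts, price) VALUES (p_ts, p_price);

        IF (SELECT COUNT(*) FROM ticks) > 100000 THEN
            DELETE FROM ticks ORDER BY ts ASC, id ASC LIMIT 1;
        END IF;
    END//

    DELIMITER ;

Your app would then run CALL insert_tick(NOW(3), 123.45); instead of a plain INSERT. Note that COUNT(*) is a full scan on InnoDB, but at 100,000 small rows that is cheap.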

Or, you can use a trigger with similar logic that fires automatically on every insert. One caveat: a MySQL trigger cannot modify the table it is defined on, so to make this work the trigger would have to live on a different table (for example, a staging table that receives the raw inserts and forwards them).
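A hypothetical version of that staging-table workaround, again using the illustrative names from above:

    -- The staging table mirrors ticks; the app inserts here instead.
    CREATE TABLE ticks_staging LIKE ticks;

    DELIMITER //

    -- Defined on ticks_staging, so it is free to modify ticks.
    CREATE TRIGGER trg_ticks_cap AFTER INSERT ON ticks_staging
    FOR EACH ROW
    BEGIN
        INSERT INTO ticks (ts, price) VALUES (NEW.ts, NEW.price);

        IF (SELECT COUNT(*) FROM ticks) > 100000 THEN
            DELETE FROM ticks ORDER BY ts ASC, id ASC LIMIT 1;
        END IF;
    END//

    DELIMITER ;

Keep in mind the staging table itself accumulates rows and would need its own periodic cleanup, which is why the stored procedure is usually the simpler route here.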

What would the speed be if I get all the rows at once in a query?

100,000 rows isn't many in relational-database terms, so retrieval time shouldn't be an issue.
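For reference, fetching everything is a single ordered scan (same illustrative names as above):

    -- All ticks, oldest to newest; typically very fast at this row count.
    SELECT ts, price FROM ticks ORDER BY ts ASC;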

What database language would be suited for this?

Again, it's a small data set, so pretty much anything will work. MySQL seems like an obvious choice since you already have experience with it.