What type of datastore should I use for high volume time series data

database-design database-recommendation

Imagine a project that needs to be able to process 50,000+ writes per second (tiny ints). The data, once inserted, won't change. Each value is a measurement taken at a particular time. Sub-second resolution is not required (two measurements from the same device can be a second apart).

We are willing to buy/rent more hardware but not more expensive hardware (no fancy RAID etc).

The data is organised into sets (one per device), a couple of hundred in total, and should be displayed/graphed per device. There is no relation between devices, but we do want to group them (showing, for example, 10 graphs).

The only other requirements are that the datastore is open source, runs on *nix, and has reasonable (community) support.

What kind of datastore would you use?

Best Answer

A 'data logging' approach to the problem

For anything involving 50,000 inserts a second you will need to batch your inserts. If you have to run it on cheap hardware then you need to look at the efficiency of your I/O.

If you assume:

  • Up-to-the-second latency is not necessary (i.e. batched writes are acceptable)

  • You have no requirement for analytics within the application except your graphs by device.

  • Any further analytic requirements can be met by batching data into an external system

  • Some administrative overhead is acceptable in adding devices to the system (i.e. they have to be registered).

  • Reads polled once per second are acceptable - incoming items can be batched and recorded on a per-second basis, with a null recorded if a given device produces no datum within that period.

Then you can store one row per second, with all the devices de-normalised into an array of readings stored as a BLOB. The application maintains a registry of offsets into this array by device. As you add more devices, the array simply expands.

This changes your problem to one of storing one 50K-ish BLOB per second, indexed on the time, which can be done by pretty much any DBMS platform that supports BLOBs. You might even be able to use a key-value pair system such as Berkeley DB.
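As a rough illustration only, here is a minimal Python sketch of the packing step. SQLite is used purely as a stand-in for whatever BLOB-capable store (or KVP system) you end up choosing; the table layout, sentinel value and device count are assumptions, not a prescription.

```python
import sqlite3
import struct
import time

SENTINEL = -128        # assumed "no reading this second" marker for a signed tiny int
DEVICE_COUNT = 50_000  # one array slot per registered device

conn = sqlite3.connect("readings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings (epoch_second INTEGER PRIMARY KEY, data BLOB)"
)

def flush_second(epoch_second, batch):
    """Write one row: every device's reading for this second, nulls as SENTINEL."""
    values = [batch.get(offset, SENTINEL) for offset in range(DEVICE_COUNT)]
    blob = struct.pack(f"{DEVICE_COUNT}b", *values)   # ~50 KB per second
    conn.execute("INSERT INTO readings VALUES (?, ?)", (epoch_second, blob))
    conn.commit()

# batch maps array offset (assigned at device registration) -> reading
flush_second(int(time.time()), {0: 17, 1: -3, 42: 99})
```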

50KB per second is 180MB per hour, 4.3GB per day and approximately 1.5TB per year. This should be possible to manage even with fairly modest hardware. Depending on how much you need to archive you can periodically clear down historical data. You will need something that supports partitioned tables to do this efficiently, though.
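For reference, those figures follow directly from one byte per device per second:

```python
# Back-of-envelope check of the storage figures above (1 byte x 50,000 devices).
bytes_per_second = 50_000
per_hour = bytes_per_second * 3600   # 180,000,000 bytes  ~ 180 MB
per_day = per_hour * 24              # ~ 4.3 GB
per_year = per_day * 365             # ~ 1.6 TB
print(per_hour / 1e6, per_day / 1e9, per_year / 1e12)   # 180.0 4.32 1.5768
```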

Getting the data back out

One disadvantage of this approach is that you would have to read your entire data set to query the statistics for any given device, which would imply scanning 4GB of data for a single day. If you need to support a lot of users querying the devices on an ad-hoc basis you will need to find a way to supplement this store with a fast querying capability. Some possibilities for this are:

  • One hour's data is approximately 180MB in memory. Cache a rolling period of a few hours - either within your application, or in leading partitions on a fast store (maybe an SSD) that can be archived off on an hourly basis.

    If your applications that displayed the graphs queried this on a rolling basis (i.e. only got the most recent data) then this structure could be keyed on time and still support a significant query workload.

    You may need to benchmark the BLOB reader performance of your platform to judge whether you need to build additional caching within your middle-tier, although you will almost certainly need a middle tier server application of some description to avoid shipping the entire BLOB record out to the client.

  • If you only expect to monitor a small subset of sensors at any given time, you could cache on a per-sensor basis, with each cache living for some period (e.g. 10 minutes) after the last read of that sensor. The main cache would push values out into the per-sensor caches until they expire. A per-sensor cache can be implemented fairly efficiently as a ring buffer holding 1-2 hours (or whatever period seems appropriate) of readings, plus a little supplementary data describing the start and end of the window; a time period then resolves directly to an offset within the ring buffer (see the sketch after this list). The memory overhead per sensor amounts to a few bytes of header information.

  • Calculate aggregates of average readings for each minute, 10 minutes, hour or suitable period, and store those in a supplementary table. Graphing data over longer time periods can use the aggregates.

  • Take the aggregate values and un-pivot them into a dimensional structure in a supplementary data mart (i.e. with time and sensor ID dimensions). This can be done on (for example) a nightly basis. Ad-hoc queries on historical data can come from the data mart.
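To make the ring-buffer idea concrete, here is a minimal sketch of a per-sensor cache, assuming one reading per second and a two-hour window; the class and method names are illustrative only.

```python
import time

class SensorRingBuffer:
    """Fixed window of one-reading-per-second values for a single sensor."""

    def __init__(self, window_seconds=7200):   # assumed 2-hour window
        self.window = window_seconds
        self.values = [None] * window_seconds  # None = no reading for that second
        self.latest = 0                        # epoch second of the newest slot

    def push(self, epoch_second, reading):
        """Called as the main cache unpacks each second's array of readings."""
        # blank any skipped seconds so stale values can't be returned later
        for s in range(max(self.latest + 1, epoch_second - self.window), epoch_second):
            self.values[s % self.window] = None
        self.latest = max(self.latest, epoch_second)
        self.values[epoch_second % self.window] = reading

    def read(self, epoch_second):
        """A time resolves directly to an offset; None if outside the window."""
        if epoch_second > self.latest or self.latest - epoch_second >= self.window:
            return None
        return self.values[epoch_second % self.window]

buf = SensorRingBuffer()
now = int(time.time())
buf.push(now, 42)
print(buf.read(now))        # 42
print(buf.read(now - 10))   # None - no reading was recorded for that second
```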

Some combination of these approaches should give you quick access to near-realtime data under a fair amount of read traffic, even if you have to cache it within your application and hand-build something to do it.

Administrative overhead

A sensor will have to be registered with the system and given an ID (essentially the ID is - or maps to - an array index within the BLOB). As the BLOB is opaque the database doesn't need to care about columns. A KVP store that can deal with BLOBs efficiently may be sufficient.

A sensor can be registered with the system, which will then start recording its data. You may need to explicitly record the start/end dates of sensor connections, and maintain metadata that links sensor IDs to offsets within the array. Because the array is a static structure you will also need to recycle array slots as sensors are retired. Any middle-tier caching will have to be aware of this as well.
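A registry along these lines could be as simple as the following sketch; the names, capacity and recycling policy are assumptions made for illustration.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Registration:
    offset: int                       # index into the per-second array of readings
    registered_at: float              # start of the sensor connection
    retired_at: Optional[float] = None

class DeviceRegistry:
    """Maps sensor IDs to array offsets; recycles slots of retired sensors."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.by_sensor = {}
        self.free_offsets = []
        self.next_offset = 0

    def register(self, sensor_id):
        if self.free_offsets:
            offset = self.free_offsets.pop()          # reuse a retired slot
        elif self.next_offset < self.capacity:
            offset = self.next_offset
            self.next_offset += 1
        else:
            raise RuntimeError("array full - expand the BLOB before registering more")
        self.by_sensor[sensor_id] = Registration(offset, time.time())
        return offset

    def retire(self, sensor_id):
        reg = self.by_sensor[sensor_id]
        reg.retired_at = time.time()            # record the end of the connection
        self.free_offsets.append(reg.offset)    # slot becomes available for reuse

registry = DeviceRegistry(capacity=50_000)
slot = registry.register("device-0001")
```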

Message queues

You may wish to consider an asynchronous architecture using a transactional queue manager such as RabbitMQ to absorb spikes in the read workload alongside the regular writes. The logging process and a query-fronting process would collect requests to be processed via a batch queue, which would smooth out spikes in system load. If you anticipate a significant ad-hoc query workload, this is worth looking into.
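As an illustration only, the plumbing might look something like this with the pika client for RabbitMQ; the queue name, message format and single batching consumer are assumptions rather than a recommended design.

```python
import pika

# In practice the producer and consumer would run as separate processes; they
# are shown together here only to illustrate both ends of the queue.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="sensor_readings", durable=True)

def publish(sensor_offset, reading):
    """Producer side: whatever receives data from a device publishes it here."""
    channel.basic_publish(exchange="",
                          routing_key="sensor_readings",
                          body=f"{sensor_offset}:{reading}".encode())

batch = {}   # sensor offset -> latest reading; flushed once per second elsewhere

def on_message(ch, method, properties, body):
    """Consumer side: accumulate readings into the current second's batch."""
    offset, reading = body.decode().split(":")
    batch[int(offset)] = int(reading)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="sensor_readings", on_message_callback=on_message)
channel.start_consuming()   # blocks; a timer would hand the batch to the BLOB writer
```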

Pros and Cons - summary

This structure is optimised to minimise the storage space requirements, which should in turn minimise the I/O overhead on the system and allow it to run on relatively cheap hardware - in particular hardware with relatively cheap storage subsystems. In principle an ordinary PC should be able to handle the write workload from the sensors, although you may need faster disk hardware or SSDs to cope with any substantial query workload.

However, this architecture requires a fair bit of plumbing to get the data back out. The sensor ID cannot be indexed in the database, as it is just an offset into an array of sensor readings.¹ This means an entire period's worth of data must be read to get a single sensor reading out. The querying functionality may therefore require a fairly sophisticated caching layer in the middle tier to get decent performance, so this approach comes at the price of some development overhead.

¹ This is an egregious violation of first normal form, although a relational store may not be necessary or even desirable for this application.