I have a query that does this sort of thing:
(SELECT rel_id, timestamp, y FROM table_1 AS full_
WHERE full_.timestamp BETWEEN %s AND %s
ORDER BY full_.timestamp)
UNION ALL
(SELECT rel_id, timestamp, y FROM table_2 AS full_
WHERE full_.timestamp BETWEEN %s AND %s
ORDER BY full_.timestamp)
UNION ALL
...
Which results in this form:
rel_id timestamp y
1 2013-01-01 a
1 2013-01-02 b
1 2013-01-03 c
1 2013-01-04 d
2 2013-01-01 e
2 2013-01-04 f
2 2013-01-06 g
What I'm doing here is selecting directly from the partition tables, which are partitioned on the ID of a related table (so the first part selects from table_1, which holds all the rows related to the other table's row with ID 1, and so on). This works very well and is fast, and it avoids making the query planner figure out which partition it should be looking at.
The problem is, to process it (using python pandas library), I need it in this form:
timestamp 1 2
2013-01-01 a e
2013-01-02 b null
2013-01-03 c null
2013-01-04 d f
2013-01-06 null g
I know it is possible to select the y column of each subquery with the rel_id as its alias (so that the column name matches the relationship ID) – e.g. SELECT timestamp, y AS "1"... – but I don't know how to 'merge' the resulting series of timestamp/value pairs together in this way, so that there are null values wherever a series is missing a value at a given timestamp.
The Pandas library can do this really easily, but it is pretty slow (slower than fetching the data in the first place), so I want to see if it is faster to make use of postgres's power…
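For reference, the pandas-side pivot mentioned above can be sketched like this (a minimal sketch; the sample data mirrors the result shape shown earlier):

```python
import pandas as pd

# Sample rows in the shape returned by the UNION ALL query.
df = pd.DataFrame({
    "rel_id":    [1, 1, 1, 1, 2, 2, 2],
    "timestamp": ["2013-01-01", "2013-01-02", "2013-01-03",
                  "2013-01-04", "2013-01-01", "2013-01-04", "2013-01-06"],
    "y":         ["a", "b", "c", "d", "e", "f", "g"],
})

# Pivot: one column per rel_id, indexed by timestamp; combinations with
# no row become NaN (the "null" cells in the desired output).
wide = df.pivot(index="timestamp", columns="rel_id", values="y")
print(wide)
```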
- Is this possible?
- Is it likely to be fast?
- Is it likely to be faster than processing in Python (my feeling is it will be)?
Best Answer
Whether it is faster you will have to measure yourself. However, doing it on the database side sends less data across the wire, so I would expect it to be faster.
The pivot itself is fairly simple. I put your query's result in a table to make the example simpler.
SQL Fiddle
PostgreSQL 9.1.9 Schema Setup:
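The fiddle's setup isn't reproduced here, so as a sketch of what it presumably contained (the table name your_query and the column types are assumptions), storing the query's result in a table could look like this:

```sql
-- Assumed setup: the original query's result stored in a plain table.
CREATE TABLE your_query (rel_id int, timestamp date, y text);
INSERT INTO your_query VALUES
  (1, '2013-01-01', 'a'), (1, '2013-01-02', 'b'),
  (1, '2013-01-03', 'c'), (1, '2013-01-04', 'd'),
  (2, '2013-01-01', 'e'), (2, '2013-01-04', 'f'),
  (2, '2013-01-06', 'g');
```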
The first step is to return only one row per date. That is simply done with a group by.
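A sketch of that first query, assuming the result rows sit in a table called your_query:

```sql
-- One row per date, regardless of how many rel_ids have a value there.
SELECT timestamp
FROM your_query
GROUP BY timestamp
ORDER BY timestamp;
```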
Now we need to pull the "correct" value into each column. For that we combine an aggregate with a CASE. The CASE returns null for all rows where the condition is not met; the aggregate ignores nulls. That leaves exactly the one value we are looking for.
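The aggregate-plus-CASE pivot can be sketched like this (again assuming the your_query table; one CASE branch per rel_id, so the columns must be listed out by hand):

```sql
-- max() ignores the nulls produced by the non-matching CASE branches,
-- leaving the single y value for that rel_id at that timestamp.
SELECT timestamp,
       max(CASE WHEN rel_id = 1 THEN y END) AS "1",
       max(CASE WHEN rel_id = 2 THEN y END) AS "2"
FROM your_query
GROUP BY timestamp
ORDER BY timestamp;
```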
To make this work with your original query, just replace your_query in my example with your original UNION ALL query.
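Put together, a sketch of the combined statement (using a CTE so the original query slots in unchanged; the per-branch ORDER BYs can be dropped, since the outer query sorts the final result):

```sql
WITH your_query AS (
  SELECT rel_id, timestamp, y FROM table_1 AS full_
  WHERE full_.timestamp BETWEEN %s AND %s
  UNION ALL
  SELECT rel_id, timestamp, y FROM table_2 AS full_
  WHERE full_.timestamp BETWEEN %s AND %s
  -- ... one branch per partition table
)
SELECT timestamp,
       max(CASE WHEN rel_id = 1 THEN y END) AS "1",
       max(CASE WHEN rel_id = 2 THEN y END) AS "2"
FROM your_query
GROUP BY timestamp
ORDER BY timestamp;
```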