Postgresql – Postgres jsonpath ANY Equivalent

postgresql

Consider the following JSON:

{
  "floor": [
    {
      "level": 1,
      "apt": [
        {
          "no": 1,
          "area": 40,
          "rooms": 1
        },
        {
          "no": 2,
          "area": 80,
          "rooms": 3
        },
        {
          "no": 3,
          "area": null,
          "rooms": 2
        }
      ]
    },
    {
      "level": 2,
      "apt": [
        {
          "no": 4,
          "area": 100,
          "rooms": 3
        },
        {
          "no": 5,
          "area": 60,
          "rooms": 2
        }
      ]
    }
  ]
}

Using Postgres jsonpath, if I wanted to get the apartments with 3 rooms, the following would suffice:

'$.floor[*].apt[*] ? (@.rooms == 3)'
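
For context, in a full query that path expression would typically be handed to jsonb_path_query() (the table and column names here are just placeholders):

select jsonb_path_query(doc, '$.floor[*].apt[*] ? (@.rooms == 3)')
from   buildings;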

My question is: how would I do the same if I wanted to compare against an array, so that I could provide the acceptable numbers of rooms as an array, similar to ANY in SQL? Something like:

'$.floor[*].apt[*] ? (@.rooms = ANY{3,5,6,10,11} )'

Using Postgres 12.2 on Ubuntu 18.04, 64-bit.

Best Answer

I don't know of any way to do that with jsonpath.

I'd suggest pulling apartments with any number of rooms and filtering that list on the SQL side, but I suspect that list would get quite large as the dataset grows ...
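
As a minimal sketch of that "pull everything, filter in SQL" approach, assuming the document lives in a jsonb column doc of a table buildings (both names are mine):

select apt
from   buildings,
       jsonb_path_query(doc, '$.floor[*].apt[*]') as apt
where  (apt ->> 'rooms')::int = any ( array[ 3, 5, 6, 10, 11 ] ) ;

Every apartment object gets unnested first, and the room filter is applied by plain SQL rather than inside the jsonpath expression.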

... but then, if you're talking about large data volumes here, I would have to ask why you're not doing this in a proper, relational way? Yes, you can get Postgres to do this with JSON data, but should you? This is a very simple data structure that you could represent in a single table which, properly indexed, will cope with huge data volumes.
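
By way of illustration only, such a table could be as simple as this (column names chosen to match the output below; the index name is mine):

create table apts (
    level  int not null,   -- floor level
    no     int not null,   -- apartment number
    size   int,            -- area; nullable, as in the JSON sample
    rooms  int not null,
    primary key ( level, no )
);

-- an index on rooms keeps the filter below cheap as the table grows
create index apts_rooms_idx on apts ( rooms );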

select level, no, size, rooms 
from apts
order by 1, 2 ; 

+-------+----+------+-------+
| level | no | size | rooms |
+-------+----+------+-------+
|     1 |  1 |   40 |     1 |
|     1 |  2 |   80 |     3 |
|     1 |  3 | null |     2 | 
|     2 |  4 |  100 |     3 |
|     2 |  5 |   60 |     2 |
+-------+----+------+-------+

select * 
from apts 
where rooms in ( 3, 5, 6, 10, 11 ) ; 

+-------+----+------+-------+
| level | no | size | rooms |
+-------+----+------+-------+
|     1 |  2 |   80 |     3 |
|     2 |  4 |  100 |     3 |
+-------+----+------+-------+
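
And since the question was specifically about ANY: in Postgres an IN list is interchangeable with = ANY against an array, so the same filter can be written as:

select * 
from apts 
where rooms = any ( array[ 3, 5, 6, 10, 11 ] ) ; 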

Relational databases are really good at finding small bits of [related] data and bolting them back together again. They're generally pretty rubbish at taking large blocks of data, like chunks of JSON, and pulling them apart again. Yes, RDBMS providers are trying to improve this, but I would suggest it'll still be way slower than doing it the "old-fashioned" way.