Just a slight variation to Chris's answer:
SELECT a, translate(b, '[]', '{}')::text[] AS b, d
FROM json_to_record('{"a": 1, "b": ["hello", "There"], "c": "bar"}')
AS x(a int, b text, d text);
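For illustration, here is what the `translate()` step does to the raw JSON array text on its own (a minimal sketch; note that `translate()` replaces *every* `[` and `]` character, so this only works cleanly when the array elements themselves contain no brackets):

```sql
-- '["hello", "There"]' becomes '{"hello", "There"}', a valid array literal
SELECT translate('["hello", "There"]', '[]', '{}')::text[] AS arr;
-- arr = {hello,There}
```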
The idea is the same: massage the JSON array into a PostgreSQL array - in this case, via an array literal. Besides somewhat cleaner-looking code (though I love regex, it rarely helps in that regard :), it seems slightly faster, too:
CREATE TABLE jsonb_test (
id serial,
data jsonb
);
INSERT INTO jsonb_test (id, data)
SELECT i, format('{"a": %s, "b": ["foo", "bar"], "c": "baz"}', i::text)::jsonb
FROM generate_series(1,10000) t(i);
SELECT a, string_to_array(regexp_replace(b, '\[*\"*\s*\]*','','g'),',') AS b, d
FROM jsonb_test AS j,
LATERAL json_to_record(j.data::json) AS r(a int, b text, d text);
-- versus
SELECT a, translate(b, '[]', '{}')::text[] AS b, d
FROM jsonb_test AS j,
LATERAL json_to_record(j.data::json) AS r(a int, b text, d text);
On this dataset and on my test box, the regex version shows an average execution time of 300 ms, while my version shows 210 ms.
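Timings like these can be reproduced with `EXPLAIN ANALYZE` against the test table above (a sketch; absolute numbers will vary by machine):

```sql
EXPLAIN (ANALYZE)
SELECT a, translate(b, '[]', '{}')::text[] AS b, d
FROM jsonb_test AS j,
LATERAL json_to_record(j.data::json) AS r(a int, b text, d text);
-- look at the "Execution Time" line at the bottom of the plan
```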
Use the "overlap" operator (`&&`):
SELECT *
FROM queryables
WHERE ARRAY[users_who_like_ids] && ARRAY['1','78','80'];
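As a quick illustration of the `&&` semantics: it returns true whenever the two arrays share at least one element:

```sql
SELECT ARRAY['1','78','80'] && ARRAY['78'];  -- true: '78' appears in both
SELECT ARRAY['1','78','80'] && ARRAY['99'];  -- false: no common element
```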
Your design is really questionable: first, why are you storing numbers as strings? And second, why don't you store this as a properly normalized one-to-many relationship?
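A normalized alternative might look like this (table and column names here are hypothetical, not taken from the question):

```sql
-- one row per (queryable, user) pair instead of an array of user ids
CREATE TABLE queryable_likes (
    queryable_id int NOT NULL REFERENCES queryables (id),
    user_id      int NOT NULL,
    PRIMARY KEY (queryable_id, user_id)
);

-- the overlap query then becomes a plain semi-join:
SELECT q.*
FROM queryables q
WHERE EXISTS (SELECT 1
              FROM queryable_likes ql
              WHERE ql.queryable_id = q.id
                AND ql.user_id IN (1, 78, 80));
```

This also lets you store the user ids as integers and index them properly.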
Best Answer
I was close; unlike JavaScript and PHP, it seems PostgreSQL array indexes start with 1, not 0.
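A quick check of the 1-based indexing:

```sql
SELECT (ARRAY['a','b','c'])[1];  -- 'a' (not 'b', as it would be in 0-based languages)
SELECT (ARRAY['a','b','c'])[0];  -- NULL: index 0 is outside the default bounds
```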