Foreign data wrappers are currently read-only. This will change in the future, but that's how it stands today. You can also use dblink to access other databases. As I understand it, a big issue will be commit handling. I would recommend against this approach unless you are trying to aggregate lots of data that really is external: it carries a huge complexity cost, and the benefits have to be worth it.
Edit: Note that foreign tables and foreign servers are how you use foreign data wrappers to map foreign data into your application. You can't choose between them; it's all three or nothing.
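To make "all three or nothing" concrete, here is a sketch of a full postgres_fdw setup (server, credentials, and table names are all illustrative, and the extension must be available on your installation):

```sql
-- All three pieces are required: the wrapper (provided by the
-- extension), a foreign server, and a foreign table -- plus a
-- user mapping so Postgres knows how to log in remotely.
CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_db
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'otherdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_db
    OPTIONS (user 'remote_user', password 'secret');

CREATE FOREIGN TABLE remote_items (
    id   int,
    name text
) SERVER remote_db OPTIONS (table_name 'items');

-- Reads go through the wrapper; writes are not supported here.
SELECT * FROM remote_items;
```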
You are forming an ad-hoc row type (effectively an anonymous record) with this expression:
(media_files.position, media_files.token, media_files.title)
in your aggregate function call:
ARRAY_AGG((media_files.position, media_files.token, media_files.title)
ORDER BY media_files.position) AS media_files
Array types can only be built upon well-known types. Your option is to announce such a type to the system and cast the record to it before forming the array.
Create a well-known composite type:
CREATE TYPE my_type AS (
position int -- data type?
,token text
,title text
);
I am guessing data types for lack of information here. Fill in your actual types.
Creating a table has the same effect: it indirectly announces a well-known composite type to the system as well. For this reason, you can (ab)use a temporary table to register a composite type for the duration of the session:
CREATE TEMP TABLE my_type (
position int -- data type?
,token text
,title text
);
Either way, you can then cast your record:
ARRAY_AGG((media_files.position, media_files.token, media_files.title)::my_type
ORDER BY media_files.position) AS media_files
Then you can reference elements of the (now well-known) type by name:
SELECT (media_files[1]).position, (media_files[1]).token
FROM (
...
,ARRAY_AGG((media_files.position, media_files.token, media_files.title)::my_type
ORDER BY media_files.position) AS media_files
...
FROM ....
GROUP BY ...
) sub;
Now, Postgres can use these names when building a JSON value. Voilà.
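As a sketch of that last step (assuming the my_type cast from above; the grouping column some_group is made up for illustration), to_json picks up the composite type's field names as JSON keys:

```sql
SELECT to_json(media_files) AS media_files_json
FROM (
   SELECT ARRAY_AGG((f.position, f.token, f.title)::my_type
                    ORDER BY f.position) AS media_files
   FROM   media_files f
   GROUP  BY f.some_group  -- hypothetical grouping column
) sub;
-- Each array element becomes a JSON object whose keys are
-- the field names of my_type: position, token, title.
```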
Best Answer
It's an implementation limitation. It's theoretically possible, of course, but nobody's written the code to handle it yet.
To cope with column removals or type changes, PostgreSQL would have to scan every view that references the view being modified (using pg_catalog.pg_depend) to see if any of them relied on the column. It'd also need to look for whole-row references and disallow changes in those cases.

It's less clear why adding a column is not permitted. Again, I suspect that's down to whole-row references. If pg_depend were checked for whole-row references without finding any, and the new column appeared at the end, it'd be OK to add it.

However, views created with SELECT * FROM wouldn't "inherit" the new column, because * gets expanded into a column list during view creation. So if you had view_A that does a SELECT * FROM view_B, and you added a column to view_B, it wouldn't appear in view_A. Yet if you dropped and re-created view_A, the column would appear. Needless to say, that's not good.

To cope with that, PostgreSQL would have to keep track of whether a given view's column list ("targetlist" in PostgreSQL internal terms) came from a * wildcard. Which is more complicated than you'd think, because you can write someview.* too.

All in all - it's complicated, and nobody's wanted it enough to do the work to implement it.
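The * expansion behavior described above can be demonstrated directly (a sketch; table and view names are made up):

```sql
CREATE TABLE t (a int);
CREATE VIEW view_B AS SELECT * FROM t;
CREATE VIEW view_A AS SELECT * FROM view_B;

-- The * in each view was expanded into an explicit column list
-- at creation time, so the new column does not propagate:
ALTER TABLE t ADD COLUMN b int;

SELECT * FROM view_A;  -- still shows only column a
SELECT * FROM view_B;  -- likewise only column a

-- Only dropping and re-creating the views picks up column b
-- (view_A depends on view_B, so re-creating view_B means
-- dropping both, e.g. with CASCADE, and rebuilding them).
```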