\copy can use a temporary table.
First I tested and confirmed this with version 9.0 at the command line.
Then I created a file with SQL and the psql meta-command \copy,
using multiple temporary tables. That worked for me, too.
CREATE TEMP TABLE tmp as SELECT * FROM tbl;
\copy (SELECT * FROM tmp JOIN tbl USING (id)) TO '/var/lib/postgres/test1.csv';
Call:
psql -p5432 mydb -f test.sql
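For illustration, a sketch of what such a file with multiple temporary tables might look like; table names, the filter condition, and output paths are placeholders, not taken from the original test:

```sql
-- test.sql (hypothetical sketch; tbl and the paths are placeholders)
CREATE TEMP TABLE tmp1 AS SELECT * FROM tbl;
CREATE TEMP TABLE tmp2 AS SELECT id FROM tbl WHERE id > 10;

\copy (SELECT * FROM tmp1 JOIN tmp2 USING (id)) TO '/var/lib/postgres/test2.csv'
```

Both temporary tables live in the same session, so the \copy at the end can see them.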
Note the terminating semicolon. It is optional at the end of a file (the statement is terminated implicitly), but required after any other SQL statement, and also after the last one when executed interactively in psql.
Normally, psql meta-commands cannot be mixed with SQL on the same line in a file executed via psql -f. I quote the manual on psql:
Parsing for arguments stops at the end of the line, or when another unquoted backslash is found. An unquoted backslash is taken as the beginning of a new meta-command. The special sequence \\ (two backslashes) marks the end of arguments and continues parsing SQL commands, if any. That way SQL and psql commands can be freely mixed on a line. But in any case, the arguments of a meta-command cannot continue beyond the end of the line.
Different rules apply after \copy, though. Essentially, psql switches back to SQL mode automatically after \copy.
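That switch back to SQL mode is why a plain SQL statement can follow a \copy line directly in a script. A minimal sketch, with table and file names as placeholders:

```sql
-- tmp and the input path are hypothetical
\copy tmp FROM '/var/lib/postgres/in.csv'
SELECT count(*) FROM tmp;
```

The SELECT on the next line is parsed as ordinary SQL, not as further arguments to the meta-command.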
But you wrote you had all commands on separate lines. So that cannot be the explanation in your case.
All that aside, have you considered using COPY (the SQL command) instead of \copy (the psql meta-command)?
Of course, the target file would have to be local to the server, not the client, in this case. And different file privileges apply. The manual:
Files named in a COPY command are read or written directly by the server, not by the client application. Therefore, they must reside on or be accessible to the database server machine, not the client. They must be accessible to and readable or writable by the PostgreSQL user (the user ID the server runs as), not the client.
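A sketch of the server-side equivalent of the earlier \copy, assuming the path is writable by the OS user the server runs as and that you have the privileges COPY TO a file requires:

```sql
-- same query as the \copy example; the path now refers to the server's filesystem
COPY (SELECT * FROM tmp JOIN tbl USING (id))
TO '/var/lib/postgres/test1.csv' (FORMAT csv);
```

Since the temporary table is visible within the same session, COPY can read from it just like \copy.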
I think it's an implementation detail.
A conforming implementation is not required to perform the exact sequence of actions defined in the General Rules, provided its effect on SQL-data and schemas, on host parameters and host variable, and on SQL parameters and SQL variables is identical to the effect of that sequence. The term effectively is used to emphasize actions whose effect might be achieved in other ways by an implementation. [1]
I think an implementer could evaluate a common table expression 20 times, even in 20 different ways, and still have a conforming implementation. The only relevant issue is whether "its effect . . . is identical to the effect" of the sequence of actions defined in the General Rules.
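The number of evaluations can even be observable. In this hypothetical illustration with a volatile function, evaluating the CTE once makes the comparison true, while evaluating it per reference could make it false, and as I read the quoted rule, either could still conform as long as the defined effect is preserved:

```sql
-- whether r is computed once or once per reference is visible here
WITH r AS (SELECT random() AS x)
SELECT a.x = b.x AS same
FROM r AS a, r AS b;
```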
[1] Section 6.3.3.3, "Rule evaluation order", in a draft of the SQL:2008 standard, having the local filename 5CD2-01-Framework-2006-01.pdf, p. 41. I have no idea where I got it; Google might know.
Best Answer
To add to what others have said. Temporary tables can also have primary keys, constraints, indexes, etc. whereas CTEs cannot.
On the flip side, you can do some pretty neat tricks with CTEs that would be harder, I think, with temporary tables, such as chaining them to perform deletes, inserts, and selects all in one statement. There are some nice examples of "cool-CTE-tricks" here.
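One such trick, sketched with placeholder table names (queue and archive are assumptions for the example): moving rows between tables in a single statement by chaining a DELETE ... RETURNING into an INSERT.

```sql
-- delete matching rows and archive them in one atomic statement
WITH moved AS (
    DELETE FROM queue
    WHERE processed
    RETURNING *
)
INSERT INTO archive
SELECT * FROM moved;
```

Because the whole thing is one statement, no other transaction can see a state where the rows are gone from queue but not yet in archive.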