\copy can use a temporary table.
First, I tested and confirmed this with version 9.0 at the command line.
Then I created a file with SQL and the psql meta-command \copy using multiple temporary tables. That worked for me, too.
CREATE TEMP TABLE tmp AS SELECT * FROM tbl;
\copy (SELECT * FROM tmp JOIN tbl USING (id)) TO '/var/lib/postgres/test1.csv';
Call:
psql -p5432 mydb -f test.sql
Note the terminating semicolon: it is optional at the very end of a file (the statement is terminated implicitly), but required after every other SQL statement, and also after the last one when you run the commands interactively in psql.
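To make that concrete, a minimal sketch of such a file, reusing tbl and tmp from above:
CREATE TEMP TABLE tmp AS SELECT * FROM tbl;  -- semicolon required: another command follows
SELECT count(*) FROM tmp                     -- last statement in the file: terminated implicitly
Run interactively in psql, that last SELECT would just sit in the query buffer until you type the semicolon (or \g).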
Normally, psql meta-commands cannot be mixed with SQL on the same line in a file executed with psql -f. I quote the manual on psql:
Parsing for arguments stops at the end of the line, or when another unquoted backslash is found. An unquoted backslash is taken as the beginning of a new meta-command. The special sequence \\ (two backslashes) marks the end of arguments and continues parsing SQL commands, if any. That way SQL and psql commands can be freely mixed on a line. But in any case, the arguments of a meta-command cannot continue beyond the end of the line.
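As a hedged illustration of that last rule (assuming the table tbl from above exists):
\timing on \\ SELECT count(*) FROM tbl;
Here \\ ends the arguments of \timing, so the SELECT on the same line is parsed and executed as SQL.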
Different rules apply after \copy, though. Essentially, psql switches back to SQL mode automatically after \copy. See:
But you wrote you had all commands on separate lines. So that cannot be the explanation in your case.
All that aside, have you considered using COPY (the SQL command) instead of \copy (the psql meta-command)?
Of course, the target file would have to be local to the server, not the client, in this case. And different file privileges apply. The manual:
Files named in a COPY command are read or written directly by the server, not by the client application. Therefore, they must reside on or be accessible to the database server machine, not the client. They must be accessible to and readable or writable by the PostgreSQL user (the user ID the server runs as), not the client.
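For example, the export above could be rewritten with the SQL command like this (a sketch, not a drop-in replacement: the path is now resolved on the server and the file is written by the server's OS user):
CREATE TEMP TABLE tmp AS SELECT * FROM tbl;
COPY (SELECT * FROM tmp JOIN tbl USING (id)) TO '/var/lib/postgres/test1.csv' (FORMAT csv);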
First, let's assume that work_mem is at 1024MB, and not the impossible 1024GB reported (impossible with a total of 3GB on the machine).
Anyway, it's much too high. As stated under Resource Consumption in the PostgreSQL documentation, with some emphasis added:
work_mem (integer)
Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The
value defaults to one megabyte (1MB). Note that for a complex query,
several sort or hash operations might be running in parallel; each
operation will be allowed to use as much memory as this value
specifies before it starts to write data into temporary files. Also,
several running sessions could be doing such operations concurrently.
Therefore, the total memory used could be many times the value of
work_mem; it is necessary to keep this fact in mind when choosing the
value. Sort operations are used for ORDER BY, DISTINCT, and merge
joins. Hash tables are used in hash joins, hash-based aggregation, and
hash-based processing of IN subqueries.
The query whose plan is shown is complex and requires several levels of hashing, so you're clearly in the case the doc is warning against.
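If you want to verify this, EXPLAIN (ANALYZE, BUFFERS) reports for each sort node whether it stayed in memory or spilled to disk. A hedged sketch (table and column names are placeholders, not from your query):
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, sum(amount)
FROM   orders
GROUP  BY customer_id
ORDER  BY sum(amount) DESC;
-- typical fragments in the output:
--   Sort Method: external merge  Disk: 10240kB   (did not fit in work_mem)
--   Sort Method: quicksort  Memory: 2048kB       (fit in work_mem)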
The default 1MB is conservative, but I wouldn't raise work_mem above 128MB for a 3GB instance. shared_buffers, on the other hand, could be set to 1024MB: it is allocated only once and kept for the entire lifetime of the instance.
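To make the numbers concrete, a sketch of the corresponding settings (the values are the ones discussed above, not universal recommendations):
-- postgresql.conf (shared_buffers can only be changed with a server restart):
-- shared_buffers = 1024MB
-- work_mem = 128MB
-- or raise work_mem only in the session that runs the heavy query:
SET work_mem = '128MB';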
Best Answer
According to the error message, the offending row at this line number in the file is too large to be imported.
COPY uses an internal buffer per row that can only grow to 1GB. It is impossible to import or export larger rows with COPY in existing versions of PostgreSQL (up to and including version 12).
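On the export side, a hedged sketch for spotting oversized rows in an existing table (tbl and id stand in for your names; pg_column_size reports the stored, possibly compressed size, so it only approximates what COPY has to build per row):
SELECT id, pg_column_size(t.*) AS row_bytes
FROM   tbl t
ORDER  BY row_bytes DESC
LIMIT  5;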