You've defined an input parameter and a result column with the same name, so PL/pgSQL can't disambiguate the reference, as the error message is telling you.
You can change it to returns table (c integer, d integer)
so the names differ, or you can create a composite type for this, like CREATE TYPE mytype AS (a integer, b integer)
and then use it in your function as returns setof mytype
, which should give you the behavior you're looking for, without colliding names.
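A minimal sketch of both fixes. The function and parameter names here (pairs, _a, _b) are illustrative, not taken from the question:
```sql
-- Variant 1: RETURNS TABLE with column names distinct from the parameters
CREATE OR REPLACE FUNCTION pairs(_a integer, _b integer)
  RETURNS TABLE (c integer, d integer) AS
$func$
BEGIN
   RETURN QUERY SELECT _a, _b;  -- no name collision: outputs are c, d
END
$func$ LANGUAGE plpgsql;

-- Variant 2: a composite type, returned as a set
CREATE TYPE mytype AS (a integer, b integer);

CREATE OR REPLACE FUNCTION pairs2(_a integer, _b integer)
  RETURNS SETOF mytype AS
$func$
BEGIN
   RETURN QUERY SELECT _a, _b;  -- type columns a, b are not variables, so no clash
END
$func$ LANGUAGE plpgsql;
```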
To avoid ambiguity in the target table name, provide the names of schema and table separately. For existing tables, Postgres can use the current search_path
to resolve unqualified names, defaulting to the first schema holding an object of that name. Since that's obviously not possible for (yet) non-existing tables, we have to be more explicit there.
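To illustrate how resolution via the search_path behaves (assuming a table ex.test exists and schema ex is not on the search_path):
```sql
SHOW search_path;            -- e.g. "$user", public

SELECT 'ex.test'::regclass;  -- schema-qualified: always unambiguous

SELECT to_regclass('test');  -- NULL unless some schema on the search_path
                             -- contains a relation named "test"
```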
If you typically want to place the target table in the same schema as the source, you can still omit the schema name and make the function default to the schema of the (existing!) source table for convenience. Like:
CREATE OR REPLACE FUNCTION copy_table(_source_tbl regclass
                                    , _target_tbl text
                                    , _target_schema text = NULL)
  RETURNS bool AS
$func$
DECLARE
   query_str text;
BEGIN
   IF _target_schema IS NULL THEN                 -- if no target schema provided ...
      SELECT c.relnamespace::regnamespace::text   -- ... default to schema of input table
      FROM   pg_class c
      WHERE  c.oid = _source_tbl
      INTO   _target_schema;
   END IF;

   query_str := format('DROP TABLE IF EXISTS %1$I.%2$I;
CREATE TABLE %1$I.%2$I AS (TABLE %3$s);'
                     , _target_schema
                     , _target_tbl
                     , _source_tbl);

   RAISE NOTICE '%', query_str;  -- report the generated SQL before running it
   EXECUTE query_str;
   RETURN true;
END
$func$ LANGUAGE plpgsql;
Call:
SELECT copy_table('ex.test', 'test1', 'ex');
Since the 3rd (last) parameter _target_schema
has a default value, we can omit it, in which case the function defaults to the schema of the source table:
SELECT copy_table('ex.test', 'test1');
This works even if we don't provide the source schema explicitly, relying on the current search_path
to resolve the name:
SELECT copy_table('test', 'test1');
Response from AWS Support:
tl;dr Bug in Aurora 2.0 (PostgreSQL 10.4), fixed in Aurora 2.1 (PostgreSQL 10.5).
EDIT: Along with JohnC, I saw this error arise again in Aurora 3.0.0 (PostgreSQL 11.4). Support response Jan 11, 2020: