Given the following setup:
regress=> CREATE SCHEMA A;
CREATE SCHEMA
regress=> CREATE SCHEMA B;
CREATE SCHEMA
regress=> SET search_path = B, public;
SET
regress=> CREATE TABLE bar(email text);
CREATE TABLE
regress=> CREATE UNIQUE INDEX index_bar_on_email ON bar USING btree (email);
CREATE INDEX
I cannot reproduce the problem you report in PostgreSQL 9.2:
regress=> SET search_path = A, B;
SET
regress=> CREATE TABLE bar(email text);
CREATE TABLE
regress=> CREATE UNIQUE INDEX index_bar_on_email ON bar USING btree (email);
CREATE INDEX
However, rather than relying on the search_path, it's safer to use explicit schema qualification. For example, I'd rewrite the above as:
regress=> RESET search_path;
RESET
regress=> SHOW search_path ;
search_path
----------------
"$user",public
(1 row)
CREATE TABLE B.bar(email text);
CREATE UNIQUE INDEX index_bar_on_email ON b.bar USING btree (email);
CREATE TABLE A.bar(email text);
CREATE UNIQUE INDEX index_bar_on_email ON A.bar USING btree (email);
The indexes are automatically created in the schema of their associated table; see:
regress=> \di B.
List of relations
Schema | Name | Type | Owner | Table
--------+--------------------+-------+-------+-------
b | index_bar_on_email | index | craig | bar
(1 row)
regress=> \di A.
List of relations
Schema | Name | Type | Owner | Table
--------+--------------------+-------+-------+-------
a | index_bar_on_email | index | craig | bar
(1 row)
Update based on question change:
Yes, what you've shown does look like a Rails adapter issue. It's checking whether the index exists in any schema, when it should be checking whether the first table of the given name on the search_path has the named index.
I would write the query differently. I'd leave off the join on pg_class entirely and instead use a cast to regclass to handle search_path resolution for me, then use the resulting oid to look up the index. Compare the original query, then the updated one, below. Note that the updated query does require search_path to be set first.
regress=> SELECT DISTINCT i.relname, d.indisunique, d.indkey, t.oid, am.amname
FROM pg_class t, pg_class i, pg_index d, pg_am am
WHERE i.relkind = 'i'
AND d.indexrelid = i.oid
AND d.indisprimary = 'f'
AND t.oid = d.indrelid
AND t.relname = 'bar'
AND i.relnamespace IN (SELECT oid FROM pg_namespace WHERE nspname IN ('b','a') )
AND i.relam = am.oid
ORDER BY i.relname
;
relname | indisunique | indkey | oid | amname
--------------------+-------------+--------+-------+--------
index_bar_on_email | t | 1 | 28585 | btree
index_bar_on_email | t | 1 | 28592 | btree
(2 rows)
regress=> SELECT DISTINCT i.relname, d.indisunique, d.indkey, 'bar'::regclass::oid, am.amname
FROM pg_class i, pg_index d, pg_am am
WHERE i.relkind = 'i'
AND d.indexrelid = i.oid
AND d.indisprimary = 'f'
AND 'bar'::regclass = d.indrelid
AND i.relam = am.oid
ORDER BY i.relname
;
relname | indisunique | indkey | regclass | amname
--------------------+-------------+--------+----------+--------
index_bar_on_email | t | 1 | 28585 | btree
(1 row)
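A quick way to see that resolution at work: a regclass cast resolves an unqualified name against the current search_path, so the same literal can name different tables. A minimal sketch against the a and b schemas created above:
regress=> SET search_path = a, b;
SET
regress=> SELECT 'bar'::regclass::oid;  -- resolves to a.bar
regress=> SET search_path = b, a;
SET
regress=> SELECT 'bar'::regclass::oid;  -- now resolves to b.bar
This is why the updated query only makes sense once search_path has been set appropriately.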
VACUUM rewrites the entire block, efficiently packing the remaining rows so that the free space forms a single contiguous region. That space isn't zeroed, though, so the physical disk file may still contain remnants of deleted rows; these are, of course, in no way visible to the database user.
test schema:
--#psql postgres postgres
select oid from pg_database where datname='postgres';
/*
oid
-------
12035
*/
create schema stack;
set search_path=stack;
create table foo(bar text);
insert into stack.foo(bar) values('row 1');
insert into stack.foo(bar) values('row 2');
checkpoint;
select * from foo;
/*
bar
-------
row 1
row 2
*/
select relfilenode
from pg_class c join pg_namespace n on n.oid=c.relnamespace
where nspname='stack' and relname='foo';
/*
relfilenode
-------------
446488
*/
physical contents of table backing file:
xxd -a /var/lib/postgresql/9.3/main/base/12035/446488
0000000: 3800 0000 f093 9d83 0000 0000 2000 c01f 8........... ...
0000010: 0020 0420 0000 0000 e09f 3c00 c09f 3c00 . . ......<...<.
0000020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
*
0001fc0: 2176 4c14 0000 0000 0000 0000 0000 0000 !vL.............
0001fd0: 0200 0100 0208 1800 0d72 6f77 2032 0000 .........row 2.. <---- this is row 2
0001fe0: 2176 4c14 0000 0000 0000 0000 0000 0000 !vL.............
0001ff0: 0100 0100 0208 1800 0d72 6f77 2031 0000 .........row 1..
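The page header can also be decoded without reading hex: pd_lower and pd_upper bound the hole of free space in the middle of the page. A sketch using the pageinspect contrib extension (assuming it is installed); the values match the dump above, where 32 is the end of the two-entry line pointer array and 8128 (0x1fc0) is where the tuple data begins:
create extension pageinspect;
select lower, upper
from page_header(get_raw_page('stack.foo', 0));
/*
 lower | upper
-------+-------
    32 |  8128
*/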
delete and vacuum:
delete from foo where bar='row 1';
vacuum stack.foo;
checkpoint;
physical contents of table backing file:
xxd -a /var/lib/postgresql/9.3/main/base/12035/446488
0000000: 3800 0000 e8b5 9e83 0000 0500 2000 e01f 8........... ...
0000010: 0020 0420 0000 0000 0000 0000 e09f 3c00 . . ..........<.
0000020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
*
0001fc0: 2176 4c14 0000 0000 0000 0000 0000 0000 !vL.............
0001fd0: 0200 0100 0209 1800 0d72 6f77 2032 0000 .........row 2.. <---- this is free space
0001fe0: 2176 4c14 0000 0000 0000 0000 0000 0000 !vL.............
0001ff0: 0200 0100 0209 1800 0d72 6f77 2032 0000 .........row 2..
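pageinspect can also confirm the line-pointer state after the vacuum: slot 1 is now marked unused (lp_flags = 0), while slot 2 still points at 'row 2', compacted up to offset 8160 (0x1fe0). A sketch, again assuming pageinspect is installed; the values correspond to the dump above:
select lp, lp_off, lp_flags, lp_len
from heap_page_items(get_raw_page('stack.foo', 0));
/*
 lp | lp_off | lp_flags | lp_len
----+--------+----------+--------
  1 |      0 |        0 |      0
  2 |   8160 |        1 |     30
*/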
insert new row:
insert into stack.foo(bar) values('row 3');
checkpoint;
final physical contents of table backing file:
xxd -a /var/lib/postgresql/9.3/main/base/12035/446488
0000000: 3800 0000 c8ec 9e83 0000 0100 2000 c01f 8........... ...
0000010: 0020 0420 0000 0000 c09f 3c00 e09f 3c00 . . ......<...<.
0000020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
*
0001fc0: 2476 4c14 0000 0000 0000 0000 0000 0000 $vL.............
0001fd0: 0100 0100 0208 1800 0d72 6f77 2033 0000 .........row 3.. <---- this is row 3
0001fe0: 2176 4c14 0000 0000 0000 0000 0000 0000 !vL.............
0001ff0: 0200 0100 0209 1800 0d72 6f77 2032 0000 .........row 2..
clean up:
drop schema stack cascade;
Best Answer
I can't tell, off the top of my head, the exact sequence of events in case of an exception. But I can tell you that it is comparatively expensive and should be avoided unless actually needed. An exception (error) causes the surrounding transaction to be rolled back, unless it is trapped.
But to trap an error, you need an EXCEPTION clause in a PL/pgSQL block (in a function, procedure, or DO statement). And it is expensive even to have an EXCEPTION clause in a PL/pgSQL function block to begin with (before any exception is ever raised), because such a block effectively forms a subtransaction (one that can potentially be rolled back), incurring substantially more overhead. Consequently, the manual warns: