If you want to catch NULL violations too, you have to cover the UPDATE as well, which makes the function a little more expensive.
SET search_path = netcen;
CREATE OR REPLACE FUNCTION fun_test(myobj test, OUT myoutput text)
  RETURNS text AS
$func$
BEGIN
   UPDATE test
   SET    tes      = myobj.tes,
          testname = myobj.testname
   WHERE  testkey = myobj.testkey;

   IF FOUND THEN
      myoutput := 'UPDATE successful. testkey = ' || myobj.testkey;
      RETURN;
   END IF;

   INSERT INTO test
   SELECT (myobj).*;

   myoutput := 'INSERT successful. testkey = ' || myobj.testkey;

EXCEPTION
   WHEN null_value_not_allowed THEN
      RAISE NOTICE 'null_value_not_allowed occurred.';
      myoutput := 'A field was null.';
   WHEN not_null_violation THEN
      RAISE NOTICE 'not_null_violation occurred.';
      myoutput := 'A field was null.';
   WHEN unique_violation THEN
      RAISE NOTICE 'unique_violation occurred.';
      myoutput := 'Duplicate value.';
END
$func$ LANGUAGE plpgsql;
I used an OUT parameter and made some simplifications and clarifications.
I would just let the regular EXCEPTION happen. People are not supposed to enter NULL values for NOT NULL columns. This way, the UPDATE can run outside the EXCEPTION block.
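A sketch of that leaner variant, using the same hypothetical test table as above: the UPDATE runs in the outer block, and only the INSERT sits inside a nested block with an EXCEPTION clause (here trapping just the duplicate-key case), so NULL violations surface as regular errors:

```sql
CREATE OR REPLACE FUNCTION fun_test(myobj test, OUT myoutput text)
  RETURNS text AS
$func$
BEGIN
   -- UPDATE runs outside any EXCEPTION block: cheaper, and a
   -- NOT NULL violation here raises a normal error.
   UPDATE test
   SET    tes      = myobj.tes,
          testname = myobj.testname
   WHERE  testkey = myobj.testkey;

   IF FOUND THEN
      myoutput := 'UPDATE successful. testkey = ' || myobj.testkey;
      RETURN;
   END IF;

   BEGIN  -- nested block: only the INSERT is guarded
      INSERT INTO test SELECT (myobj).*;
      myoutput := 'INSERT successful. testkey = ' || myobj.testkey;
   EXCEPTION
      WHEN unique_violation THEN
         myoutput := 'Duplicate value.';
   END;
END
$func$ LANGUAGE plpgsql;
```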
The approach you're using is unnecessarily complex, and very inefficient. Instead of the first function, use:
create or replace function compute_pair_id_value(id bigint, value integer)
returns setof pair_id_value
as $$
SELECT $1, generate_series(0,$2);
$$
language sql;
or better, get rid of it entirely and write the whole operation like this:
-- Sample data creation:
CREATE TABLE my_obj(id bigint, obj_value integer);
insert into my_obj(id,obj_value) VALUES (1712437,2),(17000,5);
-- and the query:
SELECT id, generate_series(0,obj_value) FROM my_obj;
Resulting in:
regress=> SELECT id, generate_series(0,obj_value) FROM my_obj;
id | generate_series
---------+-----------------
1712437 | 0
1712437 | 1
1712437 | 2
17000 | 0
17000 | 1
17000 | 2
17000 | 3
17000 | 4
17000 | 5
(9 rows)
This exploits PostgreSQL's behaviour with set-returning functions called in the SELECT
list. Once PostgreSQL 9.3 comes out it can be replaced with a standards-compliant LATERAL
query.
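For reference, a hedged sketch of what that LATERAL form could look like on 9.3+ (the column alias n is made up for this example):

```sql
-- LATERAL lets the set-returning function refer to columns of my_obj,
-- producing one output row per generated value, not per output column.
SELECT m.id, g.n
FROM   my_obj m,
       LATERAL generate_series(0, m.obj_value) AS g(n);
```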
Since it turns out your question was a simplified version of the real problem, let's tackle that. I'll work with the simplified compute_pair_id_value
above to avoid the hassle of plpython3. Here's how to do what you want:
SELECT (compute_pair_id_value(id,obj_value)).* FROM my_obj;
Result:
regress=> SELECT (compute_pair_id_value(id,obj_value)).* FROM my_obj;
id | value
---------+-------
1712437 | 0
1712437 | 1
1712437 | 2
17000 | 0
17000 | 1
17000 | 2
17000 | 3
17000 | 4
17000 | 5
(9 rows)
but again, be warned that compute_pair_id_value
will be called more than once. This is a limitation of PostgreSQL's query executor that can be avoided in 9.3 with LATERAL
support, but as far as I know you're stuck with it in 9.2 and below. Observe:
create or replace function compute_pair_id_value(id bigint, value integer)
  returns setof pair_id_value
as $$
BEGIN
   RAISE NOTICE 'compute_pair_id_value(%,%)', id, value;
   RETURN QUERY SELECT $1, generate_series(0,$2);
END;
$$
language plpgsql;
output:
regress=> SELECT (compute_pair_id_value(id,obj_value)).* FROM my_obj;
NOTICE: compute_pair_id_value(1712437,2)
NOTICE: compute_pair_id_value(1712437,2)
NOTICE: compute_pair_id_value(17000,5)
NOTICE: compute_pair_id_value(17000,5)
id | value
---------+-------
1712437 | 0
1712437 | 1
1712437 | 2
17000 | 0
17000 | 1
17000 | 2
17000 | 3
17000 | 4
17000 | 5
(9 rows)
See how compute_pair_id_value
is called once per output column?
There is a workaround: add another layer of subquery to unpack the composite-type result. See:
regress=> SELECT (val).* FROM (SELECT compute_pair_id_value(id,obj_value) FROM my_obj) x(val);
NOTICE: compute_pair_id_value(1712437,2)
NOTICE: compute_pair_id_value(17000,5)
id | value
---------+-------
1712437 | 0
1712437 | 1
1712437 | 2
17000 | 0
17000 | 1
17000 | 2
17000 | 3
17000 | 4
17000 | 5
(9 rows)
You can use the same technique in your code if you really must LOOP
over the results (it's slow to do that, so avoid it if you can).
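If you do need a loop, the same single-call trick carries over. A sketch using an anonymous DO block (the RAISE NOTICE is just for illustration):

```sql
DO $$
DECLARE
   rec pair_id_value;  -- row variable of the function's return type
BEGIN
   -- The subquery layer ensures compute_pair_id_value runs once per input row.
   FOR rec IN
      SELECT (val).*
      FROM  (SELECT compute_pair_id_value(id, obj_value) FROM my_obj) x(val)
   LOOP
      RAISE NOTICE 'id = %, value = %', rec.id, rec.value;
   END LOOP;
END;
$$;
```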
There are two ways to store large objects (LOBs), such as photos and images, in your property table. You can use the BYTEA data type, which is limited to 1 GB per value. The other option is OID, which keeps only a reference in your table and stores the data in a separate large-object structure; each large object is limited to 2 GB.
OID
Here is the relevant documentation from PostgreSQL regarding OIDs.
BYTEA
Here is a great example from Jack Douglas on how to use a function to import data into a bytea column. Just remember that, in order to use the function, you first have to activate plpgsql and load the function definition into your database (where bytea_import_function.sql contains the code from Jack's example).
Now connect to psql and your database
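The exact commands depend on your setup; a hedged sketch, assuming a database named mydb and a pre-9.0 server (plpgsql is installed by default from PostgreSQL 9.0 on), might look like:

```shell
# Hypothetical database name; adjust to your environment.
createlang plpgsql mydb                      # activate plpgsql (only needed before 9.0)
psql -d mydb -f bytea_import_function.sql    # load the function from Jack's example
```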
One advantage of OID over BYTEA: BYTEA values are read into memory in full before being sent to the client, whereas a large object referenced by an OID column can be streamed to the client in chunks.
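As a rough illustration of the OID approach (the table and file paths are made up for this sketch; lo_import and lo_export run on the server and require appropriate file-access privileges):

```sql
-- Hypothetical table; the photo column holds only an OID reference.
CREATE TABLE property_photo (id serial PRIMARY KEY, photo oid);

-- Import a file server-side into the large-object store; returns the new OID.
INSERT INTO property_photo (photo)
VALUES (lo_import('/path/to/photo.jpg'));

-- Write the large object back out to a server-side file.
SELECT lo_export(photo, '/tmp/photo_copy.jpg')
FROM   property_photo
WHERE  id = 1;
```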