Just from a logical standpoint, you're going to need either two copies of the data or a log of each change (including the deletes/inserts). You can use the tables/triggers approach you described, or use flashback query (which will work on 11g, by the way, assuming the undo data for the day is still available; in my experience it's not that expensive a query, depending on what you ask for).
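For example, a flashback query can diff the table against its own state a day ago (the table name hours here is hypothetical, and the retention window depends on your undo settings):

```sql
-- Rows as they existed 24 hours ago that have since been edited or deleted.
-- "hours" is a hypothetical table name; requires sufficient undo retention.
select * from hours as of timestamp (systimestamp - interval '1' day)
minus
select * from hours;
```

Swap the two halves of the MINUS to get the rows that were inserted or edited instead.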
You can also approach it a little differently. This method is more complex, but it might be faster, especially if there is some limit to which rows are "in scope" -- that is, if you can write some logic to ignore the vast majority of the rows in the table (for example, a rule that users cannot update hours more than a month old, or something similar).
You create a second table that holds all the rows that are "in scope" for updates -- it's just a plain copy of the other table as it sits right now.
When your job runs, it first deletes the records that have fallen "out of scope" from the second table, and then runs a query that finds the edited/deleted rows:
select t2.id, t2.a as a_old, t1.a as a_new,
       t2.b as b_old, t1.b as b_new
from t2                        -- the snapshot copy
left join t1 on t1.id = t2.id  -- the live table
where t1.id is null            -- deleted since the snapshot
   or t1.a <> t2.a or t1.b <> t2.b  -- edited (NULLable columns need extra handling)
...
Start building your file with this info
Be sure to UPDATE t2 with the new values (and delete from it the records that were deleted from t1)
Then you run another query that finds the inserted rows
Add this info to your file as well
Likewise, insert those rows into t2.
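The inserted-rows step can be sketched like this, using the same convention (t1 is the live table, t2 is the snapshot copy):

```sql
-- Rows present in the live table but not yet in the snapshot copy
select t1.*
from t1
left join t2 on t2.id = t1.id
where t2.id is null;
```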
In the end, it's a coin flip whether this is faster -- but at least it will run when you can schedule it. The flashback query is doing basically the same thing, just letting Oracle handle it.
In my opinion, it's much simpler, and probably less error prone, to just use flashback query.
If you still want this (even though it's not THAT vital), here is one solution:
CREATE TABLE rows_affected_in_session (
    session_pid   int         NOT NULL,
    session_time  timestamptz NOT NULL,
    table_name    text        NOT NULL,
    rows_affected int         NOT NULL,
    PRIMARY KEY (session_pid, session_time, table_name)
);

CREATE OR REPLACE FUNCTION f_update_rows_affected_in_session(p_table regclass, p_rows_affected int)
RETURNS void AS
$BODY$
BEGIN
    IF NOT EXISTS (SELECT 1
                   FROM rows_affected_in_session
                   WHERE session_pid = pg_backend_pid()
                     AND session_time = current_timestamp
                     AND table_name = p_table::text) THEN
        INSERT INTO rows_affected_in_session (session_pid, session_time, table_name, rows_affected)
        VALUES (pg_backend_pid(), current_timestamp, p_table::text, p_rows_affected);
    ELSE
        UPDATE rows_affected_in_session
        SET    rows_affected = rows_affected + p_rows_affected
        WHERE  session_pid = pg_backend_pid()
          AND  session_time = current_timestamp
          AND  table_name = p_table::text;
    END IF;
END;
$BODY$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE OR REPLACE FUNCTION f_rows_affected_in_session()
RETURNS TABLE (table_name text, rows_affected int) AS
$BODY$
    SELECT r.table_name, r.rows_affected
    FROM   rows_affected_in_session r
    WHERE  r.session_pid = pg_backend_pid()
      AND  r.session_time = current_timestamp
    ORDER  BY 1
$BODY$ LANGUAGE sql SECURITY DEFINER;
You use it by calling f_update_rows_affected_in_session(<table>, <rows_affected>)
after every successful insert inside your triggers (you can obtain the count from ROW_COUNT via GET DIAGNOSTICS), and then call f_rows_affected_in_session()
to retrieve the rows affected within the session (from that you can determine exactly which child tables were affected, or sum the results for a grand total). Example:
SELECT f_update_rows_affected_in_session('mytable'::regclass, 1);
SELECT f_update_rows_affected_in_session('mytable'::regclass, 1);
SELECT f_update_rows_affected_in_session('mytable'::regclass, 1);
SELECT * FROM f_rows_affected_in_session();
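A minimal sketch of the trigger side, assuming a table named mytable (the trigger and function names here are hypothetical, counting one row per insert):

```sql
-- Hypothetical row-level trigger function: count one row per insert.
-- TG_RELID is the OID of the table the trigger fired on.
CREATE OR REPLACE FUNCTION trg_count_insert()
RETURNS trigger AS
$BODY$
BEGIN
    PERFORM f_update_rows_affected_in_session(TG_RELID::regclass, 1);
    RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_count_insert
AFTER INSERT ON mytable
FOR EACH ROW EXECUTE PROCEDURE trg_count_insert();
```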
Best Answer
If you are running a mixed workload (OLTP on the table and analysis on the snapshot), you could consider replication and a hot standby.
This reduces the impact on the heavily loaded server to practically zero -- you need to be on 9.0 or later, though.
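A minimal streaming-replication sketch for 9.0 (the hostname and user are hypothetical; you would also need an appropriate pg_hba.conf entry and a base backup on the standby):

```
# primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# standby: postgresql.conf
hot_standby = on

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
```

Analysis queries then run against the standby, leaving the primary to handle the OLTP load alone.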