While I'm not sure it's the easiest way, you can use an AFTER SERVERERROR ON DATABASE trigger to log all errors to a table.
From Log all database errors to a table by Ferdous Nadvi:
drop trigger log_errors_trig;
drop table log_errors_tab;

create table log_errors_tab (
    error     varchar2(30),
    timestamp date,
    username  varchar2(30),
    osuser    varchar2(30),
    machine   varchar2(64),
    process   varchar2(8),
    program   varchar2(48));

create or replace trigger log_errors_trig
after servererror on database
declare
    var_user    varchar2(30);
    var_osuser  varchar2(30);
    var_machine varchar2(64);
    var_process varchar2(8);
    var_program varchar2(48);
begin
    -- Identify the session that raised the error.
    select username, osuser, machine, process, program
      into var_user, var_osuser, var_machine, var_process, var_program
      from sys.v_$session
     where audsid = userenv('sessionid');
    -- server_error(1) returns the error number at the top of the error stack.
    insert into log_errors_tab
    values (dbms_standard.server_error(1), sysdate, var_user,
            var_osuser, var_machine, var_process, var_program);
end;
/
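To verify the trigger fires, you can raise an error from any session and then query the log table (a minimal sketch; the failing statement is only an illustration, and the error column will hold the error number, since dbms_standard.server_error(1) returns a number):
select * from no_such_table;  -- raises ORA-00942 and fires the trigger
select error, timestamp, username, machine
  from log_errors_tab
 order by timestamp desc;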
See also Oracle System Event Triggers on PSOUG and the official documentation for system event triggers.
DML on an In-Memory table involves new structures such as the transaction journal and the In-Memory Transaction Manager. When you insert into a table enabled for In-Memory, these structures are updated to keep the transaction consistent, and there is overhead involved in keeping each In-Memory Compression Unit (IMCU) transactionally consistent.
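For context, a table is typically enabled for the In-Memory column store with a statement along these lines (the table name is illustrative, and the clauses mirror the options discussed below):
alter table test_table inmemory
    memcompress for query low
    priority high;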
Some of the factors that affect performance:
- The rate of change
- The in-memory compression level chosen for the table
- The location of the changed rows
- The type of operations being performed
For example, tables with higher compression levels incur more overhead than tables with lower compression levels.
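You can check the compression level in use per populated segment with a query like this (a sketch against V$IM_SEGMENTS, which exposes an INMEMORY_COMPRESSION column in 12c):
select segment_name, inmemory_compression
  from v$im_segments;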
A few other deciding factors that can be fine-tuned:
- The In-Memory Coordinator (IMCO) process wakes up every two minutes to load data into the In-Memory area
- The number of IMCUs repopulated via trickle repopulate in a given two-minute window is limited by the new initialization parameter INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT (see the query after this list)
- The overhead of updating indexes during DML
- The DISTRIBUTE or DUPLICATE clause used for the table in a RAC environment
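To see how the trickle-repopulate limit is currently set, you can check the parameter (a minimal sketch, assuming your version exposes it):
select name, value
  from v$parameter
 where name = 'inmemory_trickle_repopulate_servers_percent';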
All of these combined decide the runtime for an insert. Also, do look at the bundle patch updates targeted at OLTP and DW performance improvements.
To improve performance in your case, you can try changing the compression level used for the table to memcompress for dml:
alter table test_table inmemory priority critical memcompress for dml duplicate all;
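Changing the in-memory attributes causes the segment to be repopulated, so you can watch for that to complete before re-testing (a sketch against V$IM_SEGMENTS; the segment name matches the example above):
select segment_name, populate_status, bytes_not_populated
  from v$im_segments
 where segment_name = 'TEST_TABLE';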
Regarding the methods you had tried:
- Stop one instance on the RAC (2-node RAC)
Stopping one instance doesn't affect the performance, as RAC nodes are mainly used for In-Memory fault tolerance through the DUPLICATE/DISTRIBUTE clause used for the in-memory table. So it won't help improve the insert performance.
- Change the inmemory priority to "high" then "low".
The priority clause mainly governs populating/repopulating the table into the In-Memory area. When many tables are enabled for In-Memory, the IMCO picks them up based on the priority setting (CRITICAL, HIGH, MEDIUM, LOW and NONE) and loads them into the In-Memory area. So keeping the table priority at high or above matters when the expectation is to see the updated version of the table immediately after an insert; for that, the IMCO duration needs to be fine-tuned.
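To confirm which priority and compression level each table currently has, you can query the dictionary (a sketch; USER_TABLES exposes these columns from 12c onwards):
select table_name, inmemory, inmemory_priority, inmemory_compression
  from user_tables
 where inmemory = 'ENABLED';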
Best Answer
SET FEEDBACK OFF
suppresses the insert notifications. See the documentation.
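In a SQL*Plus session that looks like this (the insert is illustrative):
set feedback off
insert into test_table values (1);
-- no "1 row created." message is printed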
You can also use
SET ERRORLOGGING
to log errors to a table of your choice. Again, see the documentation.
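For example (a sketch; by default SQL*Plus logs to a table named SPERRORLOG, and the TABLE option lets you point it at a table of your own):
set errorlogging on
-- failed statements are now recorded; inspect them with:
select timestamp, username, statement, message from sperrorlog;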