Slow inserts into an Oracle 12c in-memory table

bulk-insert · in-memory-database · oracle · oracle-12c · performance

We are running some tests to check the in-memory performance on a 12c (12.1.0.2.4 EE) RAC. The servers have 56 GB of memory and 20-core CPUs.

Our plan is to have a few read-performance-critical tables in memory and the rest on disk. The test was to first populate the tables using our insert tool and then run queries against them using JMeter (a web application benchmarking tool).

The insert tool basically reads records from a file, inserts them into the DB in blocks, and commits periodically.
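For reference, a minimal PL/SQL sketch of the same pattern (assuming synthetic data instead of the file, and a hypothetical cut-down table with just an id and a payload column; the real table has 90 columns, a trigger and 15 indexes):

declare
  c_block_size constant pls_integer := 500;
begin
  -- 2000 blocks of 500 rows each
  for blk in 1 .. 2000 loop
    for i in 1 .. c_block_size loop
      insert into test_table (id, payload)
      values ((blk - 1) * c_block_size + i, rpad('x', 100, 'x'));
    end loop;
    -- commit every 2 blocks, as in the tool (1000 commits in total)
    if mod(blk, 2) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/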

We started testing with one table and observed slow insert rates straight away. But when the table was switched back to NO INMEMORY, the insert rates were fine.

The table has 90 columns, 1 trigger, 15 indexes.

The test preparation and results are given below.

Preparation


1) Create the table, trigger, indexes.

2) Make table in-memory using "alter table test_table inmemory priority critical"
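(Before running the query tests we can also confirm the table was actually populated into the column store; assuming access to the V$ views, a check along these lines works:)

select segment_name, populate_status, inmemory_size, bytes_not_populated
from   v$im_segments
where  segment_name = 'TEST_TABLE';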

Results


Without Inmemory option (~7000 recs/sec)

Avg time to read 1 record  = [0.0241493] ms
Avg time to insert 1 record  = [0.141788] ms
Avg time to insert 1 block of 500 number of rows  = [70.894] ms
Avg time to commit 2 blocks(500 rows per block)  = [3.888] ms
Total time for 2000 blocks of inserts  = [141.788] s, at [7052.78] recs/s
Total time for 1000 number of commits  = [3.888] s
Total time for 2000 blocks of inserts + 1000 number of commits  = [145.676] s
Total time to read 1000000 number of records from file  = [24.1493] s
Total time to read 1000000 number of records + 2000 blocks of inserts + 1000 number of commits  = [169.825] s

With Inmemory option (~200 recs/sec)

Avg time to read 1 record  = [0.0251651] ms
Avg time to insert 1 record  = [4.62541] ms
Avg time to insert 1 block of 500 number of rows  = [2312.7] ms
Avg time to commit 2 blocks(500 rows per block)  = [3.32] ms
Total time for 200 blocks of inserts  = [462.541] s, at [216.197] recs/s
Total time for 100 number of commits  = [0.332] s
Total time for 200 blocks of inserts + 100 number of commits  = [462.873] s
Total time to read 100000 number of records from file  = [2.51651] s
Total time to read 100000 number of records + 200 blocks of inserts + 100 number of commits  = [465.39] s

The memory parameters of the DB are given below.

NAME                            TYPE        VALUE

lock_sga                        boolean     FALSE
pre_page_sga                    boolean     TRUE
sga_max_size                    big integer 30G
sga_target                      big integer 30G
unified_audit_sga_queue_size    integer     1048576
inmemory_clause_default         string  
inmemory_force                  string      DEFAULT
inmemory_max_populate_servers   integer     8
inmemory_query                  string      ENABLE
inmemory_size                   big integer 10G
inmemory_trickle_repopulate_servers_percent   integer     1
optimizer_inmemory_aware        boolean     TRUE
buffer_pool_keep                string  
buffer_pool_recycle             string  
db_block_buffers                integer     0
log_buffer                      big integer 1048552K
use_indirect_data_buffers       boolean     FALSE
memory_max_target               big integer 0
memory_target                   big integer 0
pga_aggregate_limit             big integer 8G
pga_aggregate_target            big integer 4G

We also tried the following, but the results were the same.

1) Stop one instance on the RAC (2 node RAC)

2) Change the inmemory priority to "high" then "low".

Hope someone can point me in the right direction.

Best Answer

DML on an In-Memory table involves new structures such as the In-Memory transaction journal and the In-Memory transaction manager. When there is an insert on a table enabled for In-Memory, these structures are updated to keep the transaction consistent, and there is overhead involved in keeping each In-Memory Compression Unit (IMCU) transactionally consistent.
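One way to observe this journal activity from the inserting session (assuming access to the V$ views; the exact statistic names vary between patch levels, hence the broad LIKE filters) is:

select sn.name, ms.value
from   v$mystat ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name like 'IM %journal%'
   or  sn.name like 'IM transactions%'
order  by sn.name;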

The following are some of the factors that affect the performance:

  • The rate of change
  • The inmemory compression level chosen for a table
  • The location of the changed rows
  • The type of operations being performed

For example, tables with a higher compression level will incur more overhead than tables with a lower compression level.
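For reference, the available MEMCOMPRESS levels can be set on the table used above, roughly from most DML-friendly to most space-efficient (FOR QUERY LOW is the default):

alter table test_table inmemory memcompress for dml;
alter table test_table inmemory memcompress for query low;
alter table test_table inmemory memcompress for query high;
alter table test_table inmemory memcompress for capacity low;
alter table test_table inmemory memcompress for capacity high;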

The following are a few other deciding factors that can be fine-tuned:

  • The In-Memory Coordinator (IMCO) background process kicks in every two minutes to populate/repopulate the data into the In-Memory column store
  • The number of IMCUs repopulated via trickle repopulate in a given two-minute window is limited by the new initialization parameter INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT (see the sketch after this list)
  • The overhead of updating the indexes during DML
  • The DISTRIBUTE or DUPLICATE clause used for the table in a RAC environment
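The trickle-repopulate budget mentioned above is adjusted at the system level; as a sketch (the value here is purely illustrative, not a recommendation):

alter system set inmemory_trickle_repopulate_servers_percent = 5 scope=both sid='*';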

So, all of these combined decide the runtime of an insert. Also, do look at the bundle patch updates, which target OLTP and DW performance improvements.
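To see which bundle patches are already applied to the database (assuming DBA access), something like this can be used:

select patch_id, description, status, action, action_time
from   dba_registry_sqlpatch
order  by action_time;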

To improve performance in your case, you can try changing the compression level used for the table to MEMCOMPRESS FOR DML:

alter table test_table inmemory priority critical memcompress for dml duplicate all
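The resulting settings can then be verified from the data dictionary (these columns exist in the *_TABLES views from 12.1.0.2 onwards):

select inmemory, inmemory_priority, inmemory_compression,
       inmemory_distribute, inmemory_duplicate
from   user_tables
where  table_name = 'TEST_TABLE';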

Regarding the methods you had tried:

  1. Stop one instance on the RAC (2 node RAC)

Stopping one instance doesn't affect the performance, as the additional RAC nodes are mainly used for In-Memory fault tolerance via the DUPLICATE/DISTRIBUTE clause on the in-memory table. So it won't help improve the insert performance.

  2. Change the inmemory priority to "high" then "low".

The priority clause mainly controls when a table is populated/repopulated into the In-Memory column store. When many tables are enabled for In-Memory, IMCO picks up the tables based on the priority setting (CRITICAL, HIGH, MEDIUM, LOW and NONE) and loads them into the In-Memory area. So, keeping the table priority at HIGH or above only matters when the expectation is to see the updated version of the table in memory immediately after an insert; for that, the IMCO cycle needs to be fine-tuned. It does not reduce the cost of the insert itself.