HiveBrain v1.2.0

Slow inserts in Oracle 12c in-memory table

Submitted by: @import:stackexchange-dba
Tags: inmemory, inserts, 12c, slow, oracle, table

Problem

We are running some tests to check in-memory performance on a 12c (12.1.0.2.4 EE) RAC. The servers have 56 GB of memory and 20-core CPUs.

Our plan is to keep a few read-performance-critical tables in memory and the rest on disk. The test was to first populate the tables using our insert tool and then run queries against them using JMeter (a web application benchmark tool).

The insert tool reads records from a file, inserts them into the DB in blocks, and commits.

We started testing with one table and observed slow insert rates straight away. But when the table was altered to NO INMEMORY, the insert rates were fine.

The table has 90 columns, 1 trigger, and 15 indexes.

The test preparation and results are given below.

Preparation

1) Create the table, trigger, indexes.

2) Make the table in-memory using "alter table test_table inmemory priority critical"
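A condensed sketch of the preparation steps above. The column, index, and trigger definitions here are illustrative stand-ins (the real table has 90 columns, 15 indexes, and its own trigger logic); only the final ALTER matches the question:

```sql
-- Hypothetical miniature version of the test schema
create table test_table (
  id         number primary key,   -- backed by one of the indexes
  payload    varchar2(100),
  created_at date
);

create index test_table_ix1 on test_table (created_at);

create or replace trigger test_table_trg
  before insert on test_table
  for each row
begin
  :new.created_at := sysdate;      -- stand-in for the real trigger logic
end;
/

-- Step 2: enable the In-Memory attribute with the highest population priority
alter table test_table inmemory priority critical;
```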

Results

Without Inmemory option (~7000 recs/sec)

```
Avg time to read 1 record = [0.0241493] ms
Avg time to insert 1 record = [0.141788] ms
Avg time to insert 1 block of 500 number of rows = [70.894] ms
Avg time to commit 2 blocks(500 rows per block) = [3.888] ms
Total time for 2000 blocks of inserts = [141.788] s, at [7052.78] recs/s
Total time for 1000 number of commits = [3.888] s
Total time for 2000 blocks of inserts + 1000 number of commits = [145.676] s
Total time to read 1000000 number of records from file = [24.1493] s
Total time to read 1000000 number of records + 2000 blocks of inserts + 1000 number of commits = [169.825] s
```


With Inmemory option (~200 recs/sec)

```
Avg time to read 1 record = [0.0251651] ms
Avg time to insert 1 record = [4.62541] ms
Avg time to insert 1 block of 500 number of rows = [2312.7] ms
Avg time to commit 2 blocks(500 rows per block) = [3.32] ms
Total time for 200 blocks of inserts = [462.541] s, at [216.197] recs/s
Total time for 100 number of commits = [0.332] s
Total time for 200 blocks of inserts + 100 number of commits = [462.873] s
Total time to read 1
```

Solution

DML on an In-Memory table involves new features such as the journal and the In-Memory Transaction Manager. When an insert is performed on a table enabled for In-Memory, these structures are updated to keep transactions consistent. There can be overhead involved in keeping each In-Memory Compression Unit (IMCU) transactionally consistent.

Following are some of the factors that affect performance:

  • The rate of change
  • The in-memory compression level chosen for the table
  • The location of the changed rows
  • The type of operations being performed

For example, tables with higher compression levels will incur more overhead than tables with lower compression levels.

Following are a few other deciding factors that can be fine-tuned:

  • The In-Memory Coordinator (IMCO) process kicks in every two minutes to load data into the in-memory area
  • The number of IMCUs repopulated via trickle repopulate in a given two-minute window is limited by the new initialization parameter INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT
  • The overhead of updating the indexes during DML
  • The DISTRIBUTE or DUPLICATE clause used for the table in a RAC environment
So, all of these combined decide the runtime of an insert. Also look at the bundle patch updates, which are targeted at OLTP and DW performance improvements.
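To see how trickle repopulate is currently configured, you can inspect the In-Memory initialization parameters. A sketch (the value 5 below is only an example; the parameter is a percentage of the population servers set by INMEMORY_MAX_POPULATE_SERVERS):

```sql
-- List the In-Memory related parameters and their current values
select name, value
from   v$parameter
where  name like 'inmemory%';

-- Example: let trickle repopulate use a larger share of the
-- population servers per two-minute IMCO window
alter system set inmemory_trickle_repopulate_servers_percent = 5;
```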

To improve performance in your case, you can try changing the compression level used for the table to MEMCOMPRESS FOR DML:

```
alter table test_table inmemory memcompress for dml priority critical duplicate all;
```
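After changing the compression level, you can verify what is actually recorded for the segment and how much of it is populated, using the standard dictionary and performance views (a sketch; assumes the table is named TEST_TABLE in the current schema):

```sql
-- Confirm the compression and priority now set on the table
select table_name, inmemory, inmemory_compression, inmemory_priority
from   user_tables
where  table_name = 'TEST_TABLE';

-- Check how much of the segment is populated in the IM column store
select segment_name, populate_status, inmemory_size, bytes_not_populated
from   v$im_segments
where  segment_name = 'TEST_TABLE';
```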

Regarding the methods you had tried:

  • Stop one instance on the RAC (2-node RAC)

Stopping one instance doesn't affect performance, as the RAC nodes are mainly used for In-Memory fault tolerance through the DUPLICATE/DISTRIBUTE clause used for the in-memory table. So it won't help improve insert performance.

  • Change the in-memory priority to "high", then "low"

The priority clause mainly controls populating/repopulating the table into the in-memory area. When many tables are enabled for In-Memory, the IMCO picks up tables based on the priority setting (CRITICAL, HIGH, MEDIUM, LOW, or NONE) and loads them into the in-memory area. So keeping the table priority at HIGH or above matters when the expectation is to see the updated version of the table immediately after an insert; for that, the IMCO interval needs to be fine-tuned.
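If you do want different population behavior, the priority can be changed on its own (a sketch; with PRIORITY NONE the segment is populated only when it is first scanned, rather than eagerly by IMCO):

```sql
-- Populate eagerly, but after CRITICAL tables
alter table test_table inmemory priority high;

-- Populate lazily, only after the table is first scanned
alter table test_table inmemory priority none;
```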

Context

StackExchange Database Administrators Q#112375, answer score: 2

Revisions (0)

No revisions yet.