Performance improvement by placing tables on a virtual RAM disk

[configuration] [oracle] [oracle-11g-r2] [performance] [storage]

I am trying to improve query performance by placing tables on a virtual RAM disk. I am using Oracle 11.2 Standard Edition with 4 GB of SGA and 2 GB of PGA.

I created a 140 GB virtual RAM disk using ImDisk, then moved a single table, which has 350 million rows and is 32 GB in size, onto this RAM disk.
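For context, the move itself is just a tablespace relocation. A rough sketch, assuming the ImDisk volume is mounted as R: and formatted (the drive letter, tablespace name, and file size are placeholders, not my actual values):

-- Sketch only: names, paths and sizes are placeholders.
CREATE TABLESPACE ramdisk_ts
  DATAFILE 'R:\ORADATA\RAMDISK01.DBF' SIZE 40G;

ALTER TABLE test_table MOVE TABLESPACE ramdisk_ts;

-- A move leaves the table's indexes UNUSABLE; rebuild them afterwards:
-- ALTER INDEX <index_name> REBUILD;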

Now I am accessing this table with a simple, straightforward query:

SELECT PRICE, QUANTITY 
FROM TEST_TABLE;

It is taking around 5-6 minutes to read the entire table.

There is no difference in performance whether the table is on the hard disk (SSD) or on the RAM disk. Given that RAM read speed is many times faster than that of a hard disk, why is this not reflected in the query performance?

Am I missing something here?

Any help would be appreciated.

Explain Plan
Plan hash value: 2982911821

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |   348M|    11G|  1311K  (2)| 04:22:14 |
|   1 |  TABLE ACCESS FULL| TEST_TABLE |   348M|    11G|  1311K  (2)| 04:22:14 |
--------------------------------------------------------------------------------

This is the output of the runtime statistics:

| Id  | Operation                          | Name           | Starts | E-Rows | A-Rows | A-Time      | Buffers |
|   0 | SELECT STATEMENT                   |                |      1 |        |     13 | 00:00:00.04 |       4 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH | DISPLAY_CURSOR |      1 |    136 |     13 | 00:00:00.04 |       4 |

I am trying to see whether a RAM disk can really improve performance. Since this is a full table scan that fetches so many rows, I thought it would be an appropriate use case for testing the effects of a RAM disk with Oracle.

If moving tables to RAM works, we plan to keep our entire database in RAM (our database size is 500 GB) so that we can avoid the I/O bottleneck, and to make sure we have the best hardware in place for I/O so that nothing more can be done on that front. Right now I am focusing only on the I/O cost, and I thought moving tables to RAM would solve those problems.

Moreover, we already have a mechanism in place to handle the volatility of RAM, so that is not a problem for me right now. Also, the entire database is already on SSD, and we are still not satisfied with the performance. Data should be fetched far faster from RAM than from SSD; the one question I cannot answer is why that is not reflected here, even though the sample query is I/O-bound. This is an exercise I am doing in order to make an important decision about using RAM and about fine-tuning other queries.

Best Answer

Why bother with a RAM disk when you can just increase the SGA, increase the buffer cache, or define a KEEP pool, using supported, built-in tools?
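Caching this one table with the supported tooling could look roughly like this (a sketch: the 36G figure is a placeholder sized to hold the 32 GB table, and the SGA itself would have to be grown well beyond the current 4 GB before such a KEEP pool could fit):

-- Sketch: sizes are placeholders and must fit within the SGA.
ALTER SYSTEM SET db_keep_cache_size = 36G SCOPE = SPFILE;

-- After a restart, assign the table to the KEEP pool:
ALTER TABLE test_table STORAGE (BUFFER_POOL KEEP);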

Why lie to the database, making it think it uses disk and perform all the related, unnecessary extra operations, when it ends up using memory indirectly and inefficiently?

Why risk database corruption and data loss by using memory as "persistent" storage?

What makes you think the database worked for 5-6 minutes? Did you check what the database really did?

Even if the database read the whole table in, say, 20 seconds, fetching and displaying 350 million rows takes a lot longer than that. People often assume wall-clock time equals execution time, but that is not true.
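You can take the client display out of the picture to see this. A SQL*Plus sketch, assuming the PLUSTRACE role is available (AUTOTRACE TRACEONLY STATISTICS still fetches every row but does not render them, and a larger ARRAYSIZE reduces round trips):

SET TIMING ON
SET ARRAYSIZE 1000
SET AUTOTRACE TRACEONLY STATISTICS

SELECT PRICE, QUANTITY
FROM TEST_TABLE;

The difference between this elapsed time and the time with the rows printed to the screen is pure fetch-and-display overhead, not database work.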

Forget EXPLAIN PLAN; that is just an estimate. At the very least, check the following:

alter session set statistics_level=all;
SELECT PRICE, QUANTITY 
FROM TEST_TABLE;
select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

Run the above for both tests, and compare the results.
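Note that the statistics pasted in the question are for the dbms_xplan.display_cursor call itself (hence the PICKLER FETCH on DISPLAY_CURSOR and only 13 rows), not for the query against TEST_TABLE. If that happens, pass the sql_id explicitly; a sketch, with the LIKE filter and <sql_id> as placeholders:

-- Find the cursor for the test query:
SELECT sql_id, child_number
FROM v$sql
WHERE sql_text LIKE 'SELECT PRICE, QUANTITY%';

-- Then display its actual runtime statistics:
SELECT * FROM table(dbms_xplan.display_cursor('<sql_id>', NULL, 'allstats last'));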

Or enable SQL tracing. At a minimum, compare the "disk I/O" statistics between the two tests. Measure your tests.
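Enabling the trace for a single session could look like this (a sketch using event 10046 at level 8 so that wait events are included; the tracefile identifier is a placeholder):

ALTER SESSION SET tracefile_identifier = 'ramdisk_test';
ALTER SESSION SET events '10046 trace name context forever, level 8';

SELECT PRICE, QUANTITY
FROM TEST_TABLE;

ALTER SESSION SET events '10046 trace name context off';

Process the resulting trace file with tkprof and compare the "disk" column and the wait events (for example "db file scattered read") between the two runs.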