Here are some SGA-dependent pool sizes based on your sga_max_size=11516192768 parameter. You can tune:
db_cache_size=3221225472
java_pool_size=16777216
large_pool_size=16777216
shared_pool_size=2147483648
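As a quick sanity check (a sketch using only the values suggested above), the listed components should sum to well under sga_max_size, leaving headroom for the remaining SGA areas:

```python
# Suggested SGA component sizes in bytes, taken from the values above
sga_max_size = 11516192768          # ~10.7 GiB

components = {
    "db_cache_size": 3221225472,    # 3 GiB
    "java_pool_size": 16777216,     # 16 MiB
    "large_pool_size": 16777216,    # 16 MiB
    "shared_pool_size": 2147483648, # 2 GiB
}

total = sum(components.values())
headroom = sga_max_size - total

print(f"components total: {total / 2**30:.2f} GiB")
print(f"headroom left in SGA: {headroom / 2**30:.2f} GiB")
assert total < sga_max_size, "components must fit inside sga_max_size"
```

The components here total about 5 GiB, so they fit comfortably inside the ~10.7 GiB SGA.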
Before tuning init.ora, take a backup of the master DB and also keep a separate backup of the control files.
Give it a try.
Why bother with a RAM disk, when you can just increase SGA, increase buffer cache, define KEEP pool, use supported, built-in tools?
Why lie to the database and make it think it uses disk, performing all the related, unnecessary extra operations, while it actually uses memory indirectly and ineffectively?
Why risk database corruption and losing the data by using memory as "persistent" storage?
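A hedged sketch of the built-in alternative mentioned above (the cache size and table name are placeholders; adjust to your workload):

```sql
-- Reserve a KEEP buffer pool so hot blocks stay cached (size is illustrative)
ALTER SYSTEM SET db_keep_cache_size = 4G SCOPE = BOTH;

-- Assign the hot table to the KEEP pool instead of faking memory as disk
ALTER TABLE test_table STORAGE (BUFFER_POOL KEEP);
```

This keeps the data cached in memory through supported mechanisms, while the datafiles remain safely on persistent storage.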
What makes you think the database worked for 5-6 minutes? Did you check what the database really did?
Even if the database read the whole table in, let's say, 20 seconds, fetching and displaying 350 million rows takes a lot longer. People often assume wall clock time = execution time, but that is not true.
Forget EXPLAIN PLAN, that is just an assumption. At least, check the following:
alter session set statistics_level=all;
SELECT PRICE, QUANTITY
FROM TEST_TABLE;
select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
Run the above for both tests, and compare the results.
Or enable SQL tracing. At a minimum, compare the "disk I/O" statistics between the tests. Measure your tests.
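A minimal sketch of session-level SQL tracing (standard Oracle commands; the trace file location depends on your diagnostic_dest setting):

```sql
-- Turn on SQL trace for the current session
ALTER SESSION SET sql_trace = TRUE;

-- Run the test query, then turn tracing off
ALTER SESSION SET sql_trace = FALSE;
-- Analyze the resulting trace file with tkprof to see real I/O and timings
```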
Best Answer
What version of Oracle are you using? Are you actually setting values for the SGA components and/or the PGA separately? If so, why? If you're using a recent version of Oracle, the starting point would normally be to set memory_target and let Oracle size the various memory components automatically. Unless, of course, you have some reason to need to set minimum values for certain memory components.
As for what to set memory_target to, you just need to leave enough memory for the OS and any other processes that need to run on the machine (including things like any backup manager you might be using, any monitoring software that runs on the server, etc.). It seems unlikely to me that you'd need 20% of 128 GB for that software. I'd guess that 8 GB would be more than enough, leaving 120 GB for Oracle's memory_target. You could do better than guessing by taking a look at the actual memory utilization on the server over time.
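A hedged example of what that might look like on a 128 GB server, leaving ~8 GB for the OS (the figures come from the estimate above; verify against your actual memory utilization first):

```sql
-- memory_max_target caps how high memory_target can later be raised online;
-- both require an instance restart when set via the spfile
ALTER SYSTEM SET memory_max_target = 120G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 120G SCOPE = SPFILE;
```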