It depends. For a data warehouse, if you don't have a specific design in mind, Automatic Storage Management (ASM) may be an excellent route.
Consider the discussion at AskTom, OTN Forums, OTN Forums 2, and OTN Forums 3.
There is no one right way to deal with things, and the answers change based on a host of hardware and network factors. To discover for yourself, preload a sample data warehouse (only a gig or two, enough to play with) on an ASM-based machine, on a SAN with the RAID virtualized by Linux, and on a hardware-RAID machine.
By timing the same queries in all three environments, you'll be able to discover which approach performs best for you. I've deployed databases using ASM and Linux-based virtual RAIDs, and the virtual RAID behaved slightly better (a few years ago). However, I suspect that was partly due to the way the drives were set up.
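For the timing step, SQL*Plus can report elapsed time per statement. A minimal sketch (the `fact_sales` table and the query itself are hypothetical placeholders for your own representative workload):

```sql
-- Enable per-statement elapsed-time reporting in SQL*Plus.
SET TIMING ON

-- Run the same representative warehouse query on each environment
-- and compare the reported elapsed times.
SELECT COUNT(*), SUM(f.amount)
  FROM fact_sales f;
```

Run an identical script on all three environments so the comparison is apples to apples.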
There is no single right answer. If you can give us more details about the size and performance requirements, it may be possible to explore various test cases.
--Edit--
Every "disk group" may be made up of one or more disks, directories, or files on the appropriate subsystem. For files placed on a filesystem, Oracle recommends: "For best performance and reliability, choose a RAID device or a logical volume on more than one physical device and implement the stripe-and-mirror-everything (SAME) methodology." That reads as if Oracle is recommending RAID 1+0.
ASM-managed disk groups, on the other hand, apparently provide mirroring automatically: "A normal redundancy disk group requires a minimum of two failure groups (or two disk devices) if you are using two-way mirroring. The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices."
These devices can themselves be composed of RAID devices, and so on. In practical tests when I was setting up RAIDed data warehouses, a simple virtual RAID 5 on the filesystem provided acceptable performance, and layering ASM on top added no performance benefit. In this kind of optimization task, first identify your resources, then test every feasible configuration; the results can be extremely counterintuitive.
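To illustrate the two-way mirroring quoted above, a normal redundancy disk group with two failure groups might be created like this (run against the ASM instance, not the database instance; the disk paths and names here are hypothetical):

```sql
-- Each failure group holds one copy of the data, so losing
-- either device leaves a full mirror intact.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/sdb1'
  FAILGROUP fg2 DISK '/dev/sdc1';
```

Per the quoted documentation, the usable space in this group is roughly half the raw capacity of the two devices.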
First, look at which other parameters are set:
CREATE pfile='pfile.txt' FROM spfile;
Maybe db_cache_size, shared_pool_size, sga_target, or other memory-related parameters are set to nonzero values? Remember that when using AMM, those parameters specify the minimum memory allocated to the particular pool. So if sga_target is 6 GB, you will not be allowed to set memory_target to 4 GB.
Also, the sum of the internal variables __sga_target, __db_cache_size, etc. may exceed your specified value of 4 GB.
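For example, the generated pfile.txt might contain entries like these (the SID `orcl` and all values are illustrative). The double-underscore lines record the sizes AMM last auto-tuned, and they are the ones to remove when cleaning up the pfile:

```
orcl.__sga_target=6442450944
orcl.__db_cache_size=3221225472
orcl.__shared_pool_size=2147483648
orcl.__pga_aggregate_target=2147483648
*.memory_target=4294967296
```

If those double-underscore values sum to more than your intended memory_target, the startup will fail until they are removed or reduced.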
If you see those symptoms, you can clean up the pfile, bounce Oracle using the pfile, and recreate the spfile. In the same step you can also set PGA_AGGREGATE_TARGET to zero.
STARTUP pfile='pfile.txt'
CREATE spfile from pfile='pfile.txt';
Forget memory_target and memory_max_target. AMM has always been a feature that worked on paper but failed in practice; Oracle finally admitted it semi-officially as of the release of 12.2. When I have no information about earlier memory usage, I just go with a 1:4 ratio for the PGA:SGA targets.
Setting SGA_TARGET and PGA_AGGREGATE_TARGET explicitly, with MEMORY_TARGET left at zero, implicitly disables AMM in favor of Automatic Shared Memory Management (ASMM).
Then later you can tune if needed, after observing the workload and memory usage.
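As a sketch of that 1:4 split (the 8 GB / 2 GB values are purely illustrative, assuming roughly 10 GB is available for the instance):

```sql
-- Disable AMM and size SGA and PGA explicitly (values are illustrative).
ALTER SYSTEM SET memory_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=8G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target=2G SCOPE=SPFILE;

-- Restart the instance so the SPFILE changes take effect.
SHUTDOWN IMMEDIATE
STARTUP
```

After the restart, ASMM tunes the individual SGA pools within the 8 GB target while the PGA is managed separately against its own 2 GB target.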