I found these articles helpful:
Further reading suggests that the benefit of using RAID1 for log files (assuming they are isolated from data files) is lost when more than one log file exists on that RAID1 array. Transaction log writes are sequential, and that sequential-write benefit disappears when multiple log files are accessed on the same RAID1 array, because a spinning disk then has to seek between them in effectively random fashion. This would suggest that RAID10 is the better choice in a multiple-DB environment unless you have enough disks to isolate each log file.
The stats below sold me on proposal 3: isolating tempdb on RAID1 by taking 2 disks from the log array, which moves the log array from RAID10 to RAID1. Much of this is based on RAID1's ability to maintain good write speed.
tempdb is clearly under more stress than I had realised.
These table rankings ring true for snapshot values during normal operation (not just the accumulated totals), as we do have intensive out-of-hours routines.
TOTAL IO:
db.tempdb.mdf = 144,747,290,352
db.2.mdf = 100,482,243,080
db.2.ldf = 2,571,065,773
db.s.mdf = 1,702,508,040
db.s.ldf = 223,032,162
TOTAL READS:
db.2.mdf = 84,851,614,280
db.tempdb.mdf = 72,271,813,552
db.s.mdf = 1,691,504,864
db.2.ldf = 93,822,304
TOTAL WRITES:
db.tempdb.mdf = 72,475,476,800
db.2.mdf = 15,630,628,800
db.2.ldf = 2,477,243,469
db.tempdb.ldf = 222,946,079
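For reference, per-file totals like these can be pulled from the sys.dm_io_virtual_file_stats DMV; a sketch (counters are cumulative since the last instance restart):
select db_name(vfs.database_id) as database_name,
       mf.physical_name,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_written,
       vfs.num_of_bytes_read + vfs.num_of_bytes_written as total_io_bytes
from sys.dm_io_virtual_file_stats(null, null) as vfs
join sys.master_files as mf
  on vfs.database_id = mf.database_id
 and vfs.file_id = mf.file_id
order by total_io_bytes desc;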
One possible concern may be the addition of the tempdb ldf and mdf on the same RAID1 array, but if this turns out to be a problem, tempdb.ldf can be moved to the log array.
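That move would be a one-line change plus a service restart; a sketch, assuming tempdb's default logical log file name of templog and a hypothetical path on the log array:
alter database tempdb
modify file (name = templog, filename = 'L:\Logs\templog.ldf');
-- the file is created at the new location on the next SQL Server restart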
These are helpful links that explain tempdb usage, what it does, and how it may affect my apps:
You are receiving the Oracle ORA-03297 error ("file contains used data beyond requested RESIZE value") because the high water mark (HWM) of the datafile, i.e. its highest allocated extent, is beyond the size you tried to shrink it to.
First, try a "shrink space" on each affected table:
alter table fragmentedtable enable row movement;
alter table fragmentedtable shrink space;
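If dependent indexes are fragmented too, the same statement accepts a CASCADE clause that shrinks them in one pass:
alter table fragmentedtable shrink space cascade;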
Next, check to see how much space can be freed from each of the existing tablespace files without moving the current database objects.
The following (taken from Ask Tom & assuming a block size of 8k) tells you how much space you can currently free from each datafile, in megabytes:
select file_name,
       ceil( (nvl(hwm,1)*8192)/1024/1024 ) SHRINK_TO,
       ceil( blocks*8192/1024/1024 ) CURRENT_SIZE,
       ceil( blocks*8192/1024/1024 ) -
       ceil( (nvl(hwm,1)*8192)/1024/1024 ) SAVINGS
  from dba_data_files a,
       ( select file_id, max(block_id+blocks-1) hwm
           from dba_extents
          group by file_id ) b
 where a.file_id = b.file_id(+);
For example:
FILE_NAME SHRINK_TO CURRENT_SIZE SAVINGS
-------------------------------------------------- ---------- ------------ ----------
/u01/app/oracle/oradata/PHIL112/system01.dbf 696 700 4
/u01/app/oracle/oradata/PHIL112/sysaux01.dbf 579 610 31
/u01/app/oracle/oradata/PHIL112/users01.dbf 85 93 8
/u01/app/oracle/oradata/PHIL112/undotbs01.dbf 65 120 55
To resize, use:
alter database datafile '/path/to/data/file.dbf' resize 100m;
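For instance, taking the undo datafile from the example output above, which can shrink from 120 MB down to its 65 MB high water mark:
alter database datafile '/u01/app/oracle/oradata/PHIL112/undotbs01.dbf' resize 65m;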
The next option you have is to create a new tablespace (and associated datafile(s)) and move the current database objects to the new tablespace. You can do this by using datapump, or do it manually. Tim Hall has an excellent article on how to perform this here.
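If you go the manual route, a minimal sketch; the tablespace name, datafile path, and index name here are placeholders, and note that a move leaves indexes UNUSABLE:
create tablespace users_new
  datafile '/u01/app/oracle/oradata/PHIL112/users_new01.dbf' size 100m autoextend on;
alter table fragmentedtable move tablespace users_new;
-- any index on the moved table is now UNUSABLE and needs rebuilding
alter index fragmentedtable_idx rebuild tablespace users_new;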
There's some information missing that would be useful, but my first approach would be to concatenate the files into a smaller number of large input files to cut down the file-handling overhead. Then look at the load method and check whether the data profile supports SQL*Loader direct-path loads. If SQL*Loader is not appropriate, consider external tables with the APPEND hint on the INSERT.
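As a sketch of the external-table route (the directory path, file name, and columns are placeholders for your own layout):
create or replace directory load_dir as '/data/incoming';

create table ext_stage (
  id      number,
  payload varchar2(4000)
)
organization external (
  type oracle_loader
  default directory load_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('input_0001.dat')
)
reject limit unlimited;

-- direct-path insert into the target table
insert /*+ APPEND */ into target_tab
select * from ext_stage;
commit;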
You may also want to think about how the table is partitioned and whether the partitioning is reflected in the files, i.e. each file being relevant to one partition. This might give you scope for parallelising the load based on the partition criteria.
The above assumes that you are inserting data, but if you have updates in there as well, then I would go for external tables with a MERGE statement. This worked for me on 10g in the past: I had a shell script loading each file based on its suitability for each loading method. We loaded up to 80 million records in around 3 to 4 hours from, in our case, hundreds of files rather than millions. Taking a complete guess at 100 records per file, you should be able to load them in less than a day.
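A sketch of that merge, reusing the hypothetical ext_stage external table and target_tab from above:
merge into target_tab t
using ext_stage s
on (t.id = s.id)
when matched then
  update set t.payload = s.payload
when not matched then
  insert (id, payload)
  values (s.id, s.payload);
commit;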