Firstly, I will say that this workload is by no means heavy by Oracle standards; thousands of commits/sec are easily possible. Secondly, however, what matters here is not your database and not your server: it's your storage. There are many options here; you won't go far wrong with something like NetApp (I don't work for them, just a satisfied user), and the question is: what size? Here ORION is your friend. Whatever storage array you choose, it will take care of your first level of resilience: if your first node fails, you simply mount the disks on another node and start back up again, and Oracle will perform crash recovery, so no data will be lost.
My advice is: get some numbers at a lower level (MB/s, IOPS) by running a representative benchmark, take those to your nearest storage vendor, and ask them what they've got. With NetApp at least, you can start fairly small and grow: add another head for more resilience/better performance, add shelves for more capacity, etc. Then test the crap out of it with ORION!
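To make those lower-level numbers concrete, here is a rough back-of-envelope sizing sketch in Python. The commit rate and redo-per-commit figures are assumptions for illustration only; substitute your own benchmark results before talking to a vendor.

```python
# Back-of-envelope storage sizing from a commit-rate target.
# Both inputs are illustrative assumptions -- replace them with
# your own measured numbers.

commits_per_sec = 2000        # assumed target commit rate
redo_bytes_per_commit = 4096  # assumed average redo written per commit

# Each commit forces at least one redo (log file sync) write,
# so the commit rate puts a floor on write IOPS and MB/s.
write_iops = commits_per_sec
write_mb_per_sec = commits_per_sec * redo_bytes_per_commit / (1024 * 1024)

print(f"minimum write IOPS : {write_iops}")
print(f"minimum redo MB/s  : {write_mb_per_sec:.1f}")
```

These are floors, not targets: data-file writes, reads, and bursts come on top, which is exactly what a representative benchmark (and then ORION against the candidate array) will tell you.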
The logic with 'A' and 'B' might be "hidden" behind a virtual column on which you could do the partitioning:
alter session set nls_date_format = 'yyyy-mm-dd';
drop table tq84_partitioned_table;
create table tq84_partitioned_table (
   status       varchar2(1)   not null check (status in ('A', 'B')),
   date_a       date          not null,
   date_b       date          not null,
   date_too_old date as (
      case status
         when 'A' then add_months(date_a, -7*12) -- shift 'A' back 7 years (10 - 3)
         when 'B' then date_b
      end
   ) virtual,
   data         varchar2(100)
)
partition by range (date_too_old) (
   partition p_before_2000_10 values less than (date '2000-10-01'),
   partition p_before_2000_11 values less than (date '2000-11-01'),
   partition p_before_2000_12 values less than (date '2000-12-01'),
   --
   partition p_before_2001_01 values less than (date '2001-01-01'),
   partition p_before_2001_02 values less than (date '2001-02-01'),
   partition p_before_2001_03 values less than (date '2001-03-01'),
   partition p_before_2001_04 values less than (date '2001-04-01'),
   partition p_before_2001_05 values less than (date '2001-05-01'),
   partition p_before_2001_06 values less than (date '2001-06-01'),
   -- and so on and so forth..
   partition p_ values less than (maxvalue)
);
insert into tq84_partitioned_table (status, date_a, date_b, data) values
('B', date '2008-04-14', date '2000-05-17',
'B and 2000-05-17 is older than 10 yrs, must be deleted');
insert into tq84_partitioned_table (status, date_a, date_b, data) values
('B', date '1999-09-19', date '2004-02-12',
'B and 2004-02-12 is younger than 10 yrs, must be kept');
insert into tq84_partitioned_table (status, date_a, date_b, data) values
('A', date '2000-06-16', date '2010-01-01',
'A and 2000-06-16 is older than 3 yrs, must be deleted');
insert into tq84_partitioned_table (status, date_a, date_b, data) values
('A', date '2009-06-09', date '1999-08-28',
'A and 2009-06-09 is younger than 3 yrs, must be kept');
select * from tq84_partitioned_table order by date_too_old;
-- drop partitions older than 10 or 3 years, respectively:
alter table tq84_partitioned_table drop partition p_before_2000_10;
alter table tq84_partitioned_table drop partition p_before_2000_11;
alter table tq84_partitioned_table drop partition p_before_2000_12;
select * from tq84_partitioned_table order by date_too_old;
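The trick is that the 7-year offset (the 10-year cutoff minus the 3-year retention) maps both retention rules onto a single timeline, so one partition drop enforces both. A minimal Python sketch of that arithmetic, where `add_months` is a simplified stand-in for Oracle's `ADD_MONTHS` (it ignores end-of-month adjustment, which doesn't matter here):

```python
from datetime import date

def add_months(d, months):
    # Simplified stand-in for Oracle's ADD_MONTHS; ignores the
    # end-of-month adjustment, which is fine for this sketch.
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, d.day)

def date_too_old(status, date_a, date_b):
    # Mirrors the virtual column: 'A' dates are shifted back
    # 7 years (10-year cutoff minus 3-year retention) so both
    # statuses share one partition timeline.
    return add_months(date_a, -7 * 12) if status == 'A' else date_b

cutoff = date(2000, 12, 1)  # same cutoff the partition drops enforce

# 'A' row from 2009-06-09 lands at 2002-06-09: after the cutoff, kept
print(date_too_old('A', date(2009, 6, 9), None) >= cutoff)   # True
# 'A' row from 2000-06-16 lands at 1993-06-16: before the cutoff, dropped
print(date_too_old('A', date(2000, 6, 16), None) < cutoff)   # True
# 'B' row from 2000-05-17 is used as-is: before the cutoff, dropped
print(date_too_old('B', None, date(2000, 5, 17)) < cutoff)   # True
```

This matches the four test rows inserted above: the two "must be deleted" rows fall into the dropped partitions, the two "must be kept" rows do not.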
Best Answer
You didn't mention how the data comes in, but I would hazard a guess that it is through flat files. If that's the case: I worked at an order-processing company that used Oracle as the back end of the WMS we used. We regularly received tens of millions of records a day, and we used SQL*Loader scripts to load the data. The caveat is that the data needs to be in the proper format. See here:
http://knoworacle.blogspot.com/2012/05/oracle-sql-loader-with-example.html
And of course from the makers:
http://docs.oracle.com/cd/B10501_01/text.920/a96518/aload.htm
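For a feel of what's involved: a SQL*Loader job is driven by a control file. The table and column names below are hypothetical, just to show the shape of one:

```
LOAD DATA
INFILE 'orders.dat'
APPEND
INTO TABLE orders
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(order_id,
 order_date DATE 'YYYY-MM-DD',
 amount)
```

You run it with something like `sqlldr userid=... control=orders.ctl log=orders.log`, and at tens of millions of rows a day you'll almost certainly want `direct=true` (direct-path load), which bypasses conventional inserts and is much faster for bulk loads.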