You've got two questions in here:
Is the above data sufficient information to base an index rebuild
decision upon or would I need to have additional metrics involved?
One missing link is the size of the index. If you're talking about an object with less than, say, 1000 pages, then index rebuilds aren't all that critical.
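If you want to check that quickly, something along these lines pulls the page count and fragmentation for one index from sys.dm_db_index_physical_stats (the table and index names here are placeholders, not from your question):

-- Hypothetical names: dbo.Offers / IX_Offers_Pending stand in for your table and filtered index.
SELECT i.name AS index_name,
       ps.page_count,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Offers'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE i.name = 'IX_Offers_Pending';

If page_count comes back in the hundreds, fragmentation is mostly noise.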
Another missing link would be the churn of the index. Typically I see filtered indexes used when they're a very, very small subset of the entire table, and the subset changes fast. Guessing by the name of your filtering field (OfferStatus = 0), it sounds like you're indexing just the rows where you haven't made an offer yet, and then you're going to immediately turn around and make an offer. In situations like that, the data's changing so fast that index rebuilds usually don't make sense.
Or, for filtered indexes do we even care about fragmentation and
should just explicitly update statistics at X interval?
SQL Server updates stats on objects when ~20% of the data changes, but filtered indexes & stats are a special case. They're also updated when 20% of the data changes - but it's 20% of the base table, not 20% of the filtered subset. Because of that, you probably want to manually update stats on them periodically. I love Ola Hallengren's maintenance scripts for this - the index maintenance stored proc has a parameter for updating statistics, and another parameter for choosing what level of sampling you want, and another parameter for choosing whether to update stats on all objects or only the ones with changed rows. It's fantastic.
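As a rough sketch of what that periodic job could do (object names are made up again): check how stale the filtered stat is via sys.dm_db_stats_properties, then refresh it on your own schedule rather than waiting for the auto-update threshold.

-- How stale is the filtered stat? modification_counter tracks changes to the leading
-- column across the whole table, which is why the auto-update threshold is so hard to hit.
SELECT s.name, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Offers')
  AND s.has_filter = 1;

-- Refresh it explicitly instead of waiting for the 20%-of-base-table trigger.
UPDATE STATISTICS dbo.Offers IX_Offers_Pending WITH FULLSCAN;

If you go with Ola's scripts, a call along these lines does the same job across the board (check his documentation for the exact parameter values you want):

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';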
For what you want to do, I would recommend the following (which is pretty much what you were thinking).
1> Create history tables for the historic data you have - keep the schemas as similar as possible. Split them up by some logical grouping (such as year/month) based on how they are going to be queried (say you need to report within a month/year as well as across everything). Do not worry about the table size of the splits unless they are getting into the TB range (your DBMS should handle it); just make sure that they are appropriately indexed for the queries that need to be run. You should consider putting these onto a different disk from the active data if performance is an issue.
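A sketch of what one monthly table might look like (the column names are invented for illustration; mirror whatever your live table actually has):

-- Hypothetical monthly archive table mirroring the live table's schema.
CREATE TABLE dbo.data_Jan12
(
    OfferId     INT          NOT NULL PRIMARY KEY,
    CustomerId  INT          NOT NULL,
    OfferStatus TINYINT      NOT NULL,
    CreatedDate DATETIME2(0) NOT NULL
);

-- Index it for the reporting queries you expect to run against the archive.
CREATE INDEX IX_data_Jan12_CustomerId
    ON dbo.data_Jan12 (CustomerId) INCLUDE (OfferStatus, CreatedDate);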
2> Create a routine to move data from the active table to the relevant historic table, and run it periodically. As good practice, rebuild the indexes on the table that has had the data removed from it, and perhaps update its statistics as well. The easiest way to do this is to write a SQL script.
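A minimal sketch of that move, using the same made-up names and a made-up cut-off column: delete from the active table and land the removed rows in the archive in a single statement, then tidy up the source table.

-- Move January 2012 rows out of the live table and into the archive atomically.
DELETE FROM dbo.Offers
OUTPUT DELETED.OfferId, DELETED.CustomerId, DELETED.OfferStatus, DELETED.CreatedDate
    INTO dbo.data_Jan12 (OfferId, CustomerId, OfferStatus, CreatedDate)
WHERE CreatedDate >= '20120101'
  AND CreatedDate <  '20120201';

-- Tidy up the table the rows came out of.
ALTER INDEX ALL ON dbo.Offers REBUILD;
UPDATE STATISTICS dbo.Offers;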
3> Consider the reporting you want to do. If you only want to deal with one table when writing queries, create a view that unions the archived tables together, and create indexes on all the tables to suit the view. That way, if you want all the data, select from the view; if you want data from a specific year/month, query that table directly. The view will look something like:
create view view_all_data as
select 'Jan12' as [month], a.* from data_Jan12 a
union all
select 'Feb12' as [month], b.* from data_Feb12 b
....
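(UNION ALL rather than UNION keeps SQL Server from doing a needless duplicate-removal step, since each archive table already holds a distinct month.) Usage is then just:

select count(*) from view_all_data;   -- all history
select count(*) from data_Jan12;      -- just January 2012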
I am assuming here that the system is not a heavily used transactional system and that you have windows of low usage in which to run the analysis queries. If you need to maintain high levels of performance, you may want to do the above in a separate database (on separate hardware) and port across the new data from backups.