The following is just insane ranting and raving...
If you leave all the data in one table (no partitioning), a key lookup costs O(log n). Take the worst index in the world, the binary tree, where each tree node holds exactly one key. A perfectly balanced binary tree with 268,435,455 (2^28 - 1) nodes would have a height of 28. If you split that tree into 16 separate trees, you get 16 binary trees with 16,777,215 (2^24 - 1) nodes each, for a height of 24. The search path is shortened by 4 nodes, a 14.2857% height reduction. If the search time is measured in microseconds, a 14.2857% reduction is nil-to-negligible.
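To make the arithmetic explicit, this is just the textbook height relation for a perfectly balanced binary tree (not something from the original question):

    h = \log_2(n + 1), \qquad \log_2(2^{28}) = 28, \qquad \log_2(2^{24}) = 24, \qquad \frac{28 - 24}{28} \approx 14.29\%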
Now, in the real world, a BTREE index has tree nodes holding multiple keys. Each BTREE search performs a binary search within a page, with a possible descent into another page. For example, if each BTREE page held 1024 keys, a tree height of 3 or 4 would be the norm, a short tree indeed.
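To sanity-check that height claim (my arithmetic, under the stated assumption of 1024 keys per page):

    h \approx \lceil \log_{1024} n \rceil, \qquad 1024^3 = 2^{30} \approx 1.07 \times 10^9

so a mere three levels already address about a billion keys.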
Notice that partitioning a table does not reduce the height of the BTREE, which is already small. Given a partitioning of 260 million rows, there is even a strong likelihood of ending up with multiple BTREEs of the same height. Searching for a key may then pass through the root page of every partition's BTREE, even though only one partition can satisfy the needed search range.
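You can observe this yourself. As a hedged illustration in MySQL-style syntax (the table name and id value are made up), the partitions column of the plan shows whether the optimizer prunes to a single partition or has to visit them all:

    EXPLAIN PARTITIONS
    SELECT * FROM mytable WHERE id = 123456789;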
Now expand on this: all the partitions live on the same machine. If you do not have a separate disk for each partition, disk I/O and spindle rotations become an automatic bottleneck, independent of how fast each partition can be searched.
In this case, partitioning by database does not buy you anything either, if id is the only search key being utilized.
Partitioning should serve to group data that logically and cohesively belong to the same class. Search performance within each partition need not be the main consideration, as long as the data is grouped correctly. Once you have achieved the logical partitioning, then concentrate on search time. If you are separating data by id alone, it is possible that many rows are never accessed for reads or writes. Now, that should be a major consideration: locate the ids that are most frequently accessed and partition by those. All less frequently accessed ids should reside in one big archive table that is still reachable by index lookup for that 'once in a blue moon' query.
The overall result should be at least two partitions: one for the frequently accessed ids, and the other for the rest. If the set of frequently accessed ids is fairly large, you could optionally partition that one further; a sketch of the basic split follows.
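As a minimal sketch of that two-way split (MySQL-style syntax; the table name, the id column, and the cutoff value are assumptions, and it assumes for simplicity that the hot ids happen to form a contiguous range):

    -- Clone the structure of the original table twice.
    CREATE TABLE mytable_hot     LIKE mytable;
    CREATE TABLE mytable_archive LIKE mytable;

    -- Route frequently accessed ids to the hot table, the rest to the archive.
    INSERT INTO mytable_hot     SELECT * FROM mytable WHERE id >= 90000000;
    INSERT INTO mytable_archive SELECT * FROM mytable WHERE id <  90000000;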
What do you expect your real-life volume of data to be?
For 10 million rows, I wouldn't bother with partitioning. The overhead far outweighs the benefits: partitioning isn't a silver bullet to cure performance issues.
To answer your points:
Point 1: on the first run, data needs to be loaded into memory (the "buffer pool") and will stay cached until evicted, based on memory pressure and usage patterns. Personally, I'd test with the cache warm, because you'd expect your application to need that data often, especially if you think partitioning is the solution to some problem.
For point 2: what queries do you expect to run in production? Your test queries should be representative of that production load, but they should also exercise different realistic filter combinations, at the very least both with and without the partition key.
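For instance (hypothetical queries, assuming a table partitioned by id and a created_at column that is not the partition key):

    -- Filter includes the partition key: the optimizer can prune to one partition.
    SELECT * FROM mytable WHERE id = 12345;

    -- Filter omits the partition key: every partition must be searched.
    SELECT * FROM mytable WHERE created_at >= '2023-01-01';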
I have an answer in a previous question that talks about the different forms of partitioning.
Not knowing whether you have the Database Partitioning Feature (DPF) enabled (since it is a separate license for DB2), your biggest bang for the buck would be table partitioning and/or a multidimensional clustering (MDC) table. With MDCs, you want to cluster on columns with low cardinality, so don't use your date field for that; but you may be able to use other fields from the query.
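As a minimal sketch of an MDC definition (DB2 syntax; the table and column names are assumptions for illustration):

    -- Rows are physically clustered by the low-cardinality dimension columns.
    CREATE TABLE fact_sales (
        sale_date    DATE          NOT NULL,
        region       CHAR(2)       NOT NULL,
        product_type CHAR(4)       NOT NULL,
        amount       DECIMAL(10,2)
    )
    ORGANIZE BY DIMENSIONS (region, product_type);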
As for table partitioning, you will first need to create your table with partitions defined (you unfortunately can't partition an existing non-partitioned table in place). In the definition you can declare all the partitions you want, and you can always attach new ones later. Once the partitioned table is defined, you can move the data from the old table to the new table, or simply reload your fact table (depending on your business case).
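A minimal sketch of such a definition (DB2 range-partitioning syntax; the names and the monthly ranges are assumptions for illustration):

    -- One partition per month of 2023; further partitions can be attached later.
    CREATE TABLE fact_sales_part (
        sale_date DATE          NOT NULL,
        amount    DECIMAL(10,2)
    )
    PARTITION BY RANGE (sale_date)
        (STARTING '2023-01-01' ENDING '2023-12-31' EVERY 1 MONTH);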
Now, here is another thought on defining the partitioning. IBM recommends putting each partition in its own tablespace (for performance and disaster-recovery reasons). To do this, you would define a separate table for each partition range, assign each to its own tablespace, and then run
ALTER TABLE ... ATTACH PARTITION
to attach each such table to the "base table". If you are loading one day of data at a time, this could get costly, so you may prefer to partition monthly or quarterly instead; a sketch of the whole flow follows.
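As a minimal sketch of that attach flow (DB2 syntax; the table, tablespace, and date-range names are assumptions for illustration):

    -- 1. Build the new period's data as a standalone table in its own tablespace.
    CREATE TABLE stage_2024q1 (
        sale_date DATE          NOT NULL,
        amount    DECIMAL(10,2)
    ) IN ts_2024q1;

    -- 2. After loading stage_2024q1, roll it into the partitioned base table.
    ALTER TABLE fact_sales_part
        ATTACH PARTITION STARTING '2024-01-01' ENDING '2024-03-31'
        FROM stage_2024q1;

    -- 3. Newly attached rows stay invisible until integrity checking completes.
    SET INTEGRITY FOR fact_sales_part IMMEDIATE CHECKED;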