It is not completely clear what you're after with your question. I assume you want to know if, and more importantly when, an alter index .. coalesce was performed.
I think that without an auditing setup (as already mentioned in other answers) it is impossible to recover such a date (although I would like to know if someone knows better).
That said, the effects of an alter index .. coalesce can be made visible with an analyze index .. compute statistics.
create table ix_test (
  a number
);

create unique index ix_test_ix on ix_test(a);

insert into ix_test
select rownum
from   dba_objects, dba_objects, dba_objects
where  rownum < 1e7;

commit;
Now that the table is filled, the index is analyzed and some index related values extracted:
analyze index ix_test_ix compute statistics;
select
LEAF_BLOCKS,
AVG_LEAF_BLOCKS_PER_KEY,
CLUSTERING_FACTOR,
SAMPLE_SIZE
from
user_indexes
where
index_name = 'IX_TEST_IX';
LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY CLUSTERING_FACTOR SAMPLE_SIZE
----------- ----------------------- ----------------- -----------
2 19875 1 15152
Next, all but one row is deleted and the index analyzed again:
delete from ix_test where a != 500000;
analyze index ix_test_ix compute statistics;
The query above returns the same values.
Coalescing the index:
alter index ix_test_ix coalesce;
analyze index ix_test_ix compute statistics;
The query above now returns
LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY CLUSTERING_FACTOR SAMPLE_SIZE
----------- ----------------------- ----------------- -----------
1 1 1 1
So there is a way to find out whether a coalesce was done, but it is not straightforward and, depending on your context, probably not useful.
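For completeness: if you need the actual timestamp of future coalesce operations, a classic standard-auditing setup along these lines could capture it (a sketch only - it assumes standard auditing is available and AUDIT_TRAIL is enabled on your version, and it only records operations from the moment it is switched on):

```
-- Sketch: audit index DDL (create/alter/drop) from now on.
audit index;

-- Later, look up when an ALTER INDEX was issued against our test index:
select username, obj_name, action_name, timestamp
from   dba_audit_trail
where  action_name = 'ALTER INDEX'
and    obj_name    = 'IX_TEST_IX';
```

This still won't tell you about coalesces that happened before auditing was enabled, which is the crux of the problem above.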
After much thought, I think I've managed it. The only drawback is that you'll need double the space of the underlying table to do it...
I don't think there's any other way, as you can't redefine PKs without dropping and recreating them (correct me if I'm wrong).
PHIL@PHILL11G2 > alter table bigtable add constraint bigtable_pk primary key (object_id);
Table altered.
PHIL@PHILL11G2 > commit;
Commit complete.
PHIL@PHILL11G2 > create table bigtable_interim as ( select * from bigtable where 1=0 );
Table created.
PHIL@PHILL11G2 > EXEC dbms_redefinition.can_redef_table('PHIL', 'BIGTABLE');
PL/SQL procedure successfully completed.
PHIL@PHILL11G2 > exec DBMS_REDEFINITION.start_redef_table(uname => 'PHIL',orig_table => 'BIGTABLE',int_table => 'BIGTABLE_INTERIM');
PL/SQL procedure successfully completed.
PHIL@PHILL11G2 >
PHIL@PHILL11G2 > create unique index bigtable_int_hash on bigtable_interim (object_id) global partition by hash (object_id) (partition p1 tablespace users, partition p2 tablespace users) online;
Index created.
PHIL@PHILL11G2 > exec dbms_redefinition.finish_redef_table(uname => 'PHIL',orig_table => 'BIGTABLE',int_table => 'BIGTABLE_INTERIM');
PL/SQL procedure successfully completed.
PHIL@PHILL11G2 > select count(*) from bigtable_interim;
COUNT(*)
----------
2300640
PHIL@PHILL11G2 > select count(*) from bigtable;
COUNT(*)
----------
2300640
PHIL@PHILL11G2 >
PHIL@PHILL11G2 > alter table bigtable add constraint bigtable_hashed_pk primary key (object_id) ;
Table altered.
PHIL@PHILL11G2 > -- that reused the hashed index i created
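To double-check that the new constraint really did pick up the hash-partitioned index rather than silently creating a fresh one, a dictionary query along these lines should confirm it (a sketch; standard USER_CONSTRAINTS/USER_INDEXES views):

```
-- The PK's supporting index should be the hash-partitioned one
-- (PARTITIONED = 'YES', INDEX_NAME = 'BIGTABLE_INT_HASH').
select c.constraint_name, c.index_name, i.partitioned
from   user_constraints c
join   user_indexes     i on i.index_name = c.index_name
where  c.table_name      = 'BIGTABLE'
and    c.constraint_type = 'P';
```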
Feel free to correct any mistakes I've made :)
Phil
Best Answer
The optimal number of columns to compress depends on several factors, which can be estimated for the table.
The aim is to maximize the size of the compressed prefix whilst minimizing the number of blocks needed to hold all rows with the same prefix.
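As a starting point, the number of distinct prefixes (and hence rows per prefix) for each candidate prefix length can be estimated with a single pass over the table. This is only a sketch - col1, col2, col3 stand in for the leading key columns of your IOT, and the '|' separator assumes that character doesn't occur in the data:

```
-- One row per distinct prefix is stored when that prefix is compressed,
-- so rows/prefixes approximates the repetition factor for each candidate.
select count(*)                                           total_rows,
       count(distinct col1)                               prefixes_1,
       count(distinct col1 || '|' || col2)                prefixes_2,
       count(distinct col1 || '|' || col2 || '|' || col3) prefixes_3
from   my_iot;
```

A longer prefix saves more bytes per repetition but tends to have fewer repetitions per block, which is exactly the trade-off described above.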
Assuming that the data is uniform at least to a degree, and ignoring the small amount of overhead compression introduces, you could attempt to implement this approach like this:
helper functions:
test IOT:
query:
result:
check:
The check above roughly corresponds with the prefix length that has the maximum rows_per_block in the calculation - but I suggest you check my working carefully for yourself before trusting it :)

I am assuming the table is so large that you can't just take a copy and try out different prefix lengths. Another approach would be to do just that on a sample of the data - the sample should be chosen as a random selection of prefixes for a given compression candidate (rather than just a random selection of rows).
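That sampling idea could be sketched as follows - pick a random subset of the distinct prefix values and copy every row belonging to those prefixes into a test IOT built with the candidate compression (my_iot, col1..col3 and the 1000-prefix sample size are all hypothetical; adjust to your key structure):

```
-- 1. pick a random subset of the distinct prefix values
create table sample_prefixes as
select col1
from  (select distinct col1 from my_iot order by dbms_random.value)
where  rownum <= 1000;

-- 2. copy all rows for those prefixes into a test IOT with the
--    candidate prefix compression (compress 1 here)
create table iot_sample (
  col1, col2, col3,
  constraint iot_sample_pk primary key (col1, col2, col3)
)
organization index
compress 1
as
select col1, col2, col3
from   my_iot
where  col1 in (select col1 from sample_prefixes);
```

Repeating step 2 with different compress settings and comparing the segment sizes (e.g. via user_segments) then gives an empirical answer, because taking whole prefixes preserves the repetition pattern that compression exploits.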