I have a fairly large Data Pump dump file and tried to load it into a tablespace created with "DEFAULT COMPRESS FOR OLTP" because of a lack of free space. However, when I look at the "compressed" segments in DBA_SEGMENTS and compare them with the source segments, they are the same size.
How does this compression feature work? I expected the compressed tables to be smaller, but it doesn't seem to work that way. P.S. The Data Pump job did not finish because there was no free space left.
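For reference, the tablespace was created roughly like this (the name, datafile path, and size are illustrative, not the actual values):

```sql
-- Tablespace whose default table compression is OLTP compression (11g syntax).
-- Segments created here without an explicit COMPRESS/NOCOMPRESS clause
-- inherit this default.
CREATE TABLESPACE comp_data
  DATAFILE '/u01/oradata/ORCL/comp_data01.dbf' SIZE 10G
  DEFAULT COMPRESS FOR OLTP;
```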
COMPRESS TABLESPACE 11g
compression, oracle
Best Answer
The Data Pump export contains the complete DDL for every object you exported. So while the default for the tablespace is to compress, that default can (and will) be overridden by any explicit compression setting on a table.
Check the COMPRESSION column on xxx_TABLES for the schema(s) you are importing to see if you are truly getting any compression.
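For example, something along these lines shows which tables actually ended up compressed (using the DBA_TABLES variant; the ALL_/USER_ views work the same way, and the schema name is a placeholder):

```sql
-- COMPRESSION shows ENABLED/DISABLED; COMPRESS_FOR shows the kind of
-- compression in effect (e.g. OLTP). Replace SCOTT with the schema
-- you imported into.
SELECT table_name, compression, compress_for
  FROM dba_tables
 WHERE owner = 'SCOTT'
 ORDER BY table_name;
```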
You will need to use the "transform=segment_attributes" option (e.g. TRANSFORM=SEGMENT_ATTRIBUTES:N) on import to strip the storage clauses from your objects, so that they pick up the defaults you set at the tablespace level.
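A minimal impdp invocation might look like the sketch below (credentials, directory object, and dump file name are placeholders):

```shell
# With transform=segment_attributes:n the per-object storage and tablespace
# clauses are stripped, so objects are created in the importing user's
# default tablespace and inherit its DEFAULT COMPRESS FOR OLTP setting --
# make sure that default tablespace is the compressed one.
impdp scott/password \
  directory=DATA_PUMP_DIR dumpfile=export.dmp \
  transform=segment_attributes:n
```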