Actually, I am a bit confused about my database export. While exporting, it shows "Total estimation using BLOCKS method: 20.83 GB", but my export dump is only 6 GB. A few days back we deleted some unwanted files and data from our database, but I don't understand this huge difference. Also, the export log files do not show any kind of error messages.
After importing this dump on another server, I checked the segment sizes using the query below:

```sql
SELECT owner, SUM(bytes)/1024/1024 AS size_mb
FROM   dba_segments
GROUP  BY owner;
```

The data seems to be fine, but the size shows a large difference.
Please advise if anybody has an idea about this?
Best Answer
By default, Data Pump uses the BLOCKS method to estimate the size of the dump during the export. The estimate is calculated by multiplying the number of database blocks used by the source objects by the database block size.
If the tables are fragmented, the actual size may differ.
In your case, as you have said, you had heavy deletions in the past, which did not lower the high-water mark (HWM), so Data Pump counted the empty blocks as well during the estimation.
The dump file contains only the DDL for indexes (not the index data itself); these DDL statements run during the import and rebuild the indexes, which results in a bigger database size after import.
Demonstration:
For the demonstration, I created a table by selecting from the all_objects view. The newly created table has 1664 blocks; with an 8 KB block size, that is (1664 * 8) / 1024 = 13 MB.
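The setup could look like the sketch below. The table name T1 is hypothetical, and the 8 KB block size is an assumption (check yours with SHOW PARAMETER db_block_size):

```sql
-- Hypothetical table name; block size assumed to be 8 KB
CREATE TABLE t1 AS SELECT * FROM all_objects;

-- Check how many blocks the segment occupies
SELECT blocks, (blocks * 8) / 1024 AS size_mb
FROM   user_segments
WHERE  segment_name = 'T1';
-- 1664 blocks * 8 KB / 1024 = 13 MB
```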
Now let's estimate the dump size of this table.
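A minimal sketch of the estimate run; the credentials are placeholders, and ESTIMATE_ONLY=YES reports the estimate without writing a dump file:

```shell
# Placeholder credentials; no dump file is written with ESTIMATE_ONLY=YES
expdp scott/tiger tables=t1 estimate_only=yes
# The default method is ESTIMATE=BLOCKS
```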
It's exactly 13 MB.
Next, let's delete all the records from the table and try to estimate the dump size again.
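Assuming the same hypothetical table T1, the deletion step would be:

```sql
-- DELETE empties the table but does not lower the high-water mark;
-- TRUNCATE or ALTER TABLE ... SHRINK SPACE would reclaim the blocks.
DELETE FROM t1;
COMMIT;
```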
Let's run expdp again: it's still 13 MB.
So such fragmentation may exist in the tables we try to export, and the estimated size can be misleading.
A better way to estimate the size of the dump file is to use ESTIMATE=STATISTICS in the expdp command. But you should gather table statistics first to get an accurate estimate. Again, let's estimate the dump size.
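A sketch of that step, again with placeholder credentials and the hypothetical table T1:

```shell
# Gather statistics first so the STATISTICS method has accurate input
sqlplus scott/tiger <<'EOF'
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T1');
EXIT
EOF

# Re-run the estimate using the statistics-based method
expdp scott/tiger tables=t1 estimate_only=yes estimate=statistics
```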
Now it shows an accurate result: 11.43 KB, which is the metadata only, since the table is empty.