You could do that in Oracle 11gR2 with expdp and the REMAP_DATA option.
- Create a function in a package that takes a blob as argument, and returns null (or an empty blob, perhaps).
- Call expdp as usual, adding:
  REMAP_DATA=SCHEMA.TAB.BLOB_COLUMN:SCHEMA.PACKAGE.YOUR_FUNCTION
Short example (schema: mat):
create or replace package remap as
  function null_lob(col blob) return blob;
end remap;
/
create or replace package body remap as
  -- Ignores its input and always returns null; expdp calls this
  -- through REMAP_DATA for every row, blanking out the blob column.
  function null_lob(col blob) return blob is
  begin
    return null;
  end null_lob;
end remap;
/
create table foo(a number, b blob);
-- insert a line with a 10M blob, commit
select a, length(b) from foo;
         A  LENGTH(B)
---------- ----------
         1   10485760
Plain export:
$ expdp mat/*** DIRECTORY=EXP_DIR TABLES=MAT.FOO
...
Total estimation using BLOCKS method: 11.06 MB
...
. . exported "MAT"."FOO" 10.00 MB 1 rows
...
Dump file is ~11M.
With the data remap:
$ expdp mat/*** DIRECTORY=EXP_DIR TABLES=MAT.FOO \
REMAP_DATA=MAT.FOO.B:MAT.remap.null_lob
...
Total estimation using BLOCKS method: 11.06 MB
...
. . exported "MAT"."FOO" 5.398 KB 1 rows
...
Dump file is ~100k, and importing that dump does indeed load a null into the column.
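For completeness, the matching import is a plain impdp; a minimal sketch (it relies on the default dump file name, since the exports above don't set DUMPFILE, and TABLE_EXISTS_ACTION=REPLACE is only needed if the table already exists on the target):
$ impdp mat/*** DIRECTORY=EXP_DIR TABLES=MAT.FOO TABLE_EXISTS_ACTION=REPLACE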
Another option would be to use an external table to do the load/unload, with the ORACLE_DATAPUMP driver (I think this was introduced in 10g; see the external table concepts documentation).
For the unload, use something like:
create table foo_unload
organization external (
type oracle_datapump
default directory exp_dir
location ('foo.dmp')
)
as select a, empty_blob() b from foo;
(You can drop the external table right after creating it; by default, at least, that doesn't delete the dump file.)
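For the file move itself, any binary-safe copy works; a sketch, assuming exp_dir maps to /u01/exp_dir on both hosts (host name and paths are placeholders):
$ scp /u01/exp_dir/foo.dmp oracle@target-host:/u01/exp_dir/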
Once foo.dmp is in the target server's directory, create the external table there:
create table foo_load (a number, b blob)
organization external (
  type oracle_datapump
  default directory exp_dir
  location ('foo.dmp')
);
And finally load your target table (CTAS or truncate ...; insert /*+ append */ ... for instance; both are shown below):
create table foo as select * from foo_load;
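If foo already exists on the target, the truncate-and-direct-path-insert variant mentioned above would be:
truncate table foo;
insert /*+ append */ into foo select * from foo_load;
commit;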
Drop the external loader table once that's done.
I suspect this may have something to do with you starting impdp as sysdba. Please see the note in http://docs.oracle.com/database/121/SUTIL/dp_export.htm#SUTIL824:
"Do not start Export as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users."
Try your command as SYSTEM instead.
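A minimal sketch of what that might look like (directory and dump file names are placeholders, not taken from your command):
$ impdp system/*** DIRECTORY=DATA_PUMP_DIR DUMPFILE=your_dump.dmp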
Best Answer
You are right when you state "I thought that by using INCLUDE=TABLE only the tables would be included": it includes all related objects too. This means that constraints and triggers will be created for you when you use INCLUDE=TABLE when importing.
Possible Solution
If you have the possibility to re-create the dump using the expdp command, then you might want to export just the tables of the required schema and use the INCLUDE parameter.
exp_include.par
This is basically the expdp parameter file.
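The original file contents aren't reproduced here; a minimal sketch of what it might contain, with the schema, directory, and file names all being placeholders:
SCHEMAS=MYSCHEMA
INCLUDE=TABLE
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=myschema_tables.dmp
LOGFILE=exp_include.log
You would then run expdp system/*** PARFILE=exp_include.par.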
This will create a dump of the required schema, containing only tables and dependent objects.
After you have generated the required *.dmp file, import using the EXCLUDE=... option.
imp_exclude.par
This is the parameter file for the impdp.
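Again, the original contents aren't shown; a sketch of what such a file could look like if the goal is to skip constraints and triggers (names are placeholders):
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=myschema_tables.dmp
EXCLUDE=CONSTRAINT
EXCLUDE=TRIGGER
LOGFILE=imp_exclude.log
Run it with impdp system/*** PARFILE=imp_exclude.par.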
This should result in the tables being imported correctly.