You could do that in Oracle 11gR2 with expdp and the REMAP_DATA option.
- Create a function in a package that takes a blob as argument and returns null (or perhaps an empty blob).
- Call expdp as usual, adding: REMAP_DATA=SCHEMA.TAB.BLOB_COLUMN:SCHEMA.PACKAGE.YOUR_FUNCTION
Short example (schema: mat):
create or replace package remap as
  function null_lob(col blob) return blob;
end remap;
/
create or replace package body remap as
  function null_lob(col blob) return blob is
  begin
    -- discard the input entirely; return empty_blob() here instead if you prefer
    return null;
  end null_lob;
end remap;
/
create table foo(a number, b blob);
-- insert a row with a 10M blob and commit (one way to do this is sketched below)
select a, length(b) from foo;
A LENGTH(B)
---------- ----------
1 10485760
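The sketch mentioned above: just one way to build that 10M row, using DBMS_LOB.WRITEAPPEND with an arbitrary 16 KB chunk; adapt freely.
declare
  l_blob  blob;
  l_chunk raw(16384) := utl_raw.cast_to_raw(rpad('x', 16384, 'x'));
begin
  -- create the row with an empty lob and grab its locator
  insert into foo (a, b) values (1, empty_blob())
    returning b into l_blob;
  -- 640 chunks of 16 KB = 10 MB
  for i in 1 .. 640 loop
    dbms_lob.writeappend(l_blob, 16384, l_chunk);
  end loop;
  commit;
end;
/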
Plain export:
$ expdp mat/*** DIRECTORY=EXP_DIR TABLES=MAT.FOO
...
Total estimation using BLOCKS method: 11.06 MB
...
. . exported "MAT"."FOO" 10.00 MB 1 rows
...
The dump file is ~11M.
With the data remap:
$ expdp mat/*** DIRECTORY=EXP_DIR TABLES=MAT.FOO \
REMAP_DATA=MAT.FOO.B:MAT.remap.null_lob
...
Total estimation using BLOCKS method: 11.06 MB
...
. . exported "MAT"."FOO" 5.398 KB 1 rows
...
The dump file is ~100k, and importing that dump does indeed load a null into the column.
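On the target side the import is a plain impdp, for example (TABLE_EXISTS_ACTION=REPLACE is just one choice here, adjust to your situation):
$ impdp mat/*** DIRECTORY=EXP_DIR TABLES=MAT.FOO TABLE_EXISTS_ACTION=REPLACE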
Another option would be to use an external table to do the load/unload, with the ORACLE_DATAPUMP driver (I think this was introduced in 10g, see the external table concepts documentation).
For the unload, use something like:
create table foo_unload
organization external (
  type oracle_datapump
  default directory exp_dir
  location ('foo.dmp')
)
as select a, empty_blob() b from foo;
(You can drop the external table right after creating it; by default that doesn't delete the dump file.)
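For example, right after the CTAS above:
drop table foo_unload;
-- foo.dmp remains in exp_dir, ready to be moved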
Then move the foo.dmp file to your target server and directory, and create the external table there:
create table foo_load (a number, b blob)
organization external (
  type oracle_datapump
  default directory exp_dir
  location ('foo.dmp')
);
And finally, load your target table (CTAS, or truncate ...; insert /*+ append */ ... as sketched below, for instance):
create table foo as select * from foo_load;
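If the target table already exists, the truncate-and-append variant would look like this (same table names as above):
truncate table foo;
insert /*+ append */ into foo select * from foo_load;
commit;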
Drop the external loader table once that's done.
If you grant privileges (as you mention above), the problem may seem to go away, but your code executing in SCHEMA_B may then be accessing data from tables in SCHEMA_A, unless that is what you want.
Unfortunately there are no easy answers to this problem, since the remap_schema option of Data Pump doesn't really support what you are trying to do. However, you may try the following approach (I have done it before):
- Extract all PL/SQL code and/or view definitions (Data Pump can do this for you, using impdp with the sqlfile option).
- Use sed or any other tool of your choice to replace SCHEMA_A. with SCHEMA_B. wherever objects are prefixed. Be careful: occasionally unwanted lines get changed too, so you will have to track that and work around it.
- impdp the tables, indexes and constraints into SCHEMA_B.
- Using sqlplus, connect to SCHEMA_B and run the modified scripts from step 2 to load all PL/SQL code, create views and other dependent objects, etc. (A rough sketch of the first three steps follows below.)
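A rough sketch of steps 1 to 3; the dump file name, directory and sed pattern are assumptions you will need to adapt:
$ # step 1: write the DDL to a script instead of executing it
$ impdp system/*** DIRECTORY=EXP_DIR DUMPFILE=schema_a.dmp SQLFILE=schema_a_ddl.sql
$ # step 2: rewrite the schema prefix (then review the file by hand)
$ sed -i 's/"SCHEMA_A"\./"SCHEMA_B"./g' schema_a_ddl.sql
$ # step 3: import the tables (their indexes and constraints come along) into SCHEMA_B
$ impdp system/*** DIRECTORY=EXP_DIR DUMPFILE=schema_a.dmp REMAP_SCHEMA=SCHEMA_A:SCHEMA_B INCLUDE=TABLE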
It is cumbersome, but short of fixing your source schema, I don't think you have many choices. Luckily, if you are in a situation where the code doesn't change that often, you can hold on to the converted files and avoid these manual steps until something breaks.
Oh, by the way: if your PL/SQL code looks like "create or replace package schema_a.package_name", bad luck; remap_schema won't rewrite that hard-coded prefix either (as of 11.2, at least; I have not tested 12c yet).
It is enough to specify the schemas parameter. If you want to export only the tables, you can also add include=table, as in the sketch below. That exports the tables with their indexes and triggers, but skips procedures, sequences and other objects not directly related to tables.
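A minimal sketch (the user, directory and schema names are assumptions):
$ expdp system/*** DIRECTORY=EXP_DIR SCHEMAS=SCHEMA_A INCLUDE=TABLE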