This is largely dependent on the differences in Oracle version and OS version. IF both are at the same version, and Oracle is patched in the same manner, then the answer becomes a factor of:
IF all of the database-related files will reside in the same place THEN
...do a cold clean shutdown of the database
...copy the datafiles, temp, undo, redo logs, control files AND instance-related files (initINST.ora, spfileINST.ora, orapwINST, etc.) in $ORACLE_HOME/dbs
...set up the instance in the oratab file (or, if it is Windows, use ORADIM to create the instance-related Windows service)
...startup normally (a sketch of this path follows)
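On a Unix host, that sequence is roughly the following. This is a minimal sketch, not the original answer's script: the instance name ORCL, the paths, and the oratab entry are all placeholder assumptions.

    -- on the source, from SQL*Plus as SYSDBA:
    SHUTDOWN IMMEDIATE;   -- cold, clean shutdown

    -- at the OS level (outside SQL*Plus), copy everything to identical paths
    -- on the target: datafiles, tempfiles, undo, redo logs, control files,
    -- plus $ORACLE_HOME/dbs/initORCL.ora, spfileORCL.ora and orapwORCL;
    -- then register the instance in /etc/oratab, e.g.:
    --   ORCL:/u01/app/oracle/product/11.2.0/dbhome_1:N

    -- on the target, from SQL*Plus as SYSDBA:
    STARTUP;              -- every path matches, so a plain startup works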
IF the file structure will change, but the instance name will not THEN
...do a cold clean shutdown of the database
...copy the datafiles, undo, redo logs AND instance-related files (initINST.ora, spfileINST.ora, orapwINST, etc.) in $ORACLE_HOME/dbs
...do not copy the control files
...set up the instance in the oratab file (or, if it is Windows, use ORADIM to create the instance-related Windows service)
...startup nomount
...recreate the control files using the "create controlfile reuse ~ noresetlogs" command
...add the TEMP tablespace back in (a sketch of this path follows)
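The controlfile rebuild for that case looks roughly like this. A minimal sketch, assuming the instance stays ORCL and the files moved to /u02/oradata/ORCL; every path, size, and the character set below is an illustrative placeholder, so generate the real statement with ALTER DATABASE BACKUP CONTROLFILE TO TRACE on the source and edit the paths.

    -- from SQL*Plus as SYSDBA, with the copied init.ora/spfile in place:
    STARTUP NOMOUNT;
    CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
      LOGFILE
        GROUP 1 ('/u02/oradata/ORCL/redo01.log') SIZE 50M,
        GROUP 2 ('/u02/oradata/ORCL/redo02.log') SIZE 50M
      DATAFILE
        '/u02/oradata/ORCL/system01.dbf',
        '/u02/oradata/ORCL/sysaux01.dbf',
        '/u02/oradata/ORCL/undotbs01.dbf',
        '/u02/oradata/ORCL/users01.dbf'
      CHARACTER SET AL32UTF8;
    ALTER DATABASE OPEN;
    -- tempfiles are not carried across a controlfile rebuild, so add TEMP back:
    ALTER TABLESPACE temp ADD TEMPFILE '/u02/oradata/ORCL/temp01.dbf'
      SIZE 1G AUTOEXTEND ON;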
IF the file structure will change AND the instance name will as well THEN
...do a cold clean shutdown of the database
...copy the datafiles, undo AND instance-related files (initINST.ora, spfileINST.ora, orapwINST, etc.) in $ORACLE_HOME/dbs
...do not copy the control files, temp, or redo logs (RESETLOGS creates fresh redo logs)
...set up the instance in the oratab file (or, if it is Windows, use ORADIM to create the instance-related Windows service)
...startup nomount
...recreate the control files using the "create controlfile set ~ resetlogs" command
...add the TEMP tablespace back in (a sketch of this path follows)
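When the name changes as well, the rebuild uses SET ... RESETLOGS instead. Again a hedged sketch: the new name CLONE and all paths are placeholders, and it assumes db_name in the copied init.ora/spfile has already been changed to CLONE.

    STARTUP NOMOUNT;
    CREATE CONTROLFILE SET DATABASE "CLONE" RESETLOGS NOARCHIVELOG
      LOGFILE
        -- these logs don't exist yet; OPEN RESETLOGS creates them
        GROUP 1 ('/u03/oradata/CLONE/redo01.log') SIZE 50M,
        GROUP 2 ('/u03/oradata/CLONE/redo02.log') SIZE 50M
      DATAFILE
        '/u03/oradata/CLONE/system01.dbf',
        '/u03/oradata/CLONE/sysaux01.dbf',
        '/u03/oradata/CLONE/undotbs01.dbf',
        '/u03/oradata/CLONE/users01.dbf'
      CHARACTER SET AL32UTF8;
    ALTER DATABASE OPEN RESETLOGS;
    ALTER TABLESPACE temp ADD TEMPFILE '/u03/oradata/CLONE/temp01.dbf'
      SIZE 1G AUTOEXTEND ON;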
This is the typical, old-style clone technique. Normally I am a fan of RMAN DUPLICATE, but if this is a one-time thing, I probably wouldn't worry about it. If this isn't 11g, you would take a separate backup and transport it for RMAN to use. If it is 11g, DUPLICATE can use the "from active database" clause, so long as the instances can reach each other on the network.
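For reference, the 11g active duplication is essentially a one-liner from the RMAN prompt. A hedged sketch, assuming TNS aliases for both instances, a password file on the auxiliary, and the auxiliary instance already started NOMOUNT (the names are placeholders):

    $ rman TARGET sys@srcdb AUXILIARY sys@clone
    RMAN> DUPLICATE TARGET DATABASE TO clone
            FROM ACTIVE DATABASE
            NOFILENAMECHECK;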
We also found existing tools tricky to work with, so we ended up hacking together a solution using the sqlalchemy library in Python. Our general approach was to mirror the source database, hackily convert data types where necessary, create the target database, and copy across the data in chunks.
For an example, see: https://github.com/MIT-LCP/oracle2postgres/blob/master/migration.ipynb. Using this approach, we migrated a fairly large database (~100 schemas, with >1000 tables, some tables with >200 million rows) from Oracle to Postgres.
For more detail, see: https://github.com/MIT-LCP/oracle2postgres. The package can be installed with pip install oracle2postgres. If you hit problems when reusing the code, feel free to raise an issue and we'll do our best to help.
Best Answer
Learn and use an ETL tool
You'll save yourself a lot of time and pain if you use ETL tools like Pentaho Kettle or Talend Studio to automate the data transfer and merging. They'll happily connect to the Pg instance and the Oracle instance and do the required copying and merging using quite flexible strategies. They can consume CSV dumps too.
Use CSV dumps
If you need to use dumps, try to dump individual tables using COPY ... FORMAT CSV rather than attempting to wrangle pg_dump into doing Oracle-compatible partial dumps. This will save you a great deal of time when importing them into other DBMSes, since pretty much everything understands CSV. Keep the schemas consistent for each instance so you don't need to handle different schemas for different instances.
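Per table, that kind of dump looks like the following. A small sketch with placeholder table and file names:

    -- whole table, straight to CSV with a header row
    COPY my_table TO '/tmp/my_table.csv' WITH (FORMAT csv, HEADER);

    -- or a partial dump from an arbitrary query
    COPY (SELECT * FROM my_table WHERE instance_id = 1)
      TO '/tmp/my_table_inst1.csv' WITH (FORMAT csv, HEADER);

    -- note: COPY TO a file path writes server-side; from psql,
    -- \copy does the same thing client-side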
Use disjoint key ranges or composite keys
Where unique keys are used, make sure you assign disjoint key ranges to different instances (eg the key range is instance_id * 10^7 to ((instance_id + 1) * 10^7) - 1) or use composite keys of (instance_id, generated_key). If you use disjoint ranges, add a CHECK constraint to enforce it; similarly, if using composite keys, add a CHECK constraint to force the instance_id part to a constant. Examples of both follow.
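Both flavours of that CHECK constraint might look like this; the table names, column names, and instance_id = 3 are hypothetical:

    -- disjoint ranges: instance 3 may only hold keys in [3*10^7, 4*10^7 - 1]
    ALTER TABLE orders
      ADD CONSTRAINT orders_key_range_chk
      CHECK (id >= 3 * 10000000 AND id < 4 * 10000000);

    -- composite key (instance_id, generated_key): pin instance_id on this instance
    ALTER TABLE line_items
      ADD CONSTRAINT line_items_instance_chk
      CHECK (instance_id = 3);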
Lowest common denominator data types
Avoid using PostgreSQL arrays, record types, and hstore. They'll all dump as string literals in CSV dumps and you'll have to parse them in order to do something useful with them. For an example, see the sketch after this paragraph. You really, really don't want to have to parse that: it's a pretty extreme case, an anonymous record value containing three different arrays, one an array of integers, one an array of strings including quotes, and one an array of anonymous two-field row tuples. But seriously ... use plain relational modelling and CSV export; the convenience of composite types, hstore, and arrays isn't worth the compatibility pain if you know you'll be doing lots of cross-DB work.
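As a hedged reconstruction of the kind of value being described (the literal values are invented):

    SELECT ROW(
      ARRAY[1, 2, 3],                          -- array of integers
      ARRAY['plain', 'has "quotes" inside'],   -- array of strings with quotes
      ARRAY[ROW(1, 'a'), ROW(2, 'b')]          -- array of two-field row tuples
    );
    -- in a CSV dump this arrives as a single string literal full of nested
    -- braces, parentheses, and doubled/escaped quotes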
It's actually possible to use them in your schema; you'll just need to dump with COPY (SELECT ...) TO, using a set of queries that expand them into appropriate relational structures. Since you're going to have to do that, you might as well use regular relational modelling in the first place and save yourself some pain.
Enums are OK; you can just treat them as strings when loading them. They're just a convenient and performant shortcut for a side table and a FK constraint anyway, allowing you to avoid joins and sorts.
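One way to do that expansion at dump time, as a sketch with invented names (a table t with an integer-array column vals):

    -- unnest the array so each element becomes its own relational row
    COPY (
      SELECT t.id, u.val
      FROM t
      CROSS JOIN LATERAL unnest(t.vals) AS u(val)
    ) TO STDOUT WITH (FORMAT csv);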
In modern PostgreSQL, bytea is output as hex strings. In older versions, it's output as octal escape sequences. Either way, you can load it into a varchar column in Oracle and then convert it if you can't get Oracle to natively understand the representation. I haven't dealt with Oracle binary types, so I can't help much more there.
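The two representations, and how to pin one before dumping (the literal is arbitrary):

    SET bytea_output = 'hex';      -- the default since PostgreSQL 9.0
    SELECT '\xdeadbeef'::bytea;    -- dumps as \xdeadbeef
    SET bytea_output = 'escape';   -- the pre-9.0 style
    SELECT '\xdeadbeef'::bytea;    -- dumps as \336\255\276\357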