PostgreSQL – Moving data between Oracle and PostgreSQL

dump, migration, oracle, postgresql

For a future project I need to use the following setup:

Some small (offline) data boxes (collecting data over time) fill PostgreSQL databases (about 1 TB a year in total).

From time to time I would have the chance to move data out of those boxes (onto a stick or mobile drive) and import that into a huge corporate Oracle database.

So far, this is hopefully not a really bad approach. At the moment I'm looking at what I can prepare in advance to make the transfer easy later.

Has anyone ever done things like that?

  • Experiences, what to do and what to prevent?
  • Any other compatibility issues?
  • Things to do in advance?
  • How to move the data? Special dump-settings?

Best Answer

Learn and use an ETL tool

You'll save yourself a lot of time and pain if you use ETL tools like Pentaho Kettle or Talend Studio to automate the data transfer and merging. They'll happily connect to the Pg instance and the Oracle instance and do the required copying and merging using quite flexible strategies. They can consume CSV dumps too.

Use CSV dumps

If you need to use dumps, dump individual tables using COPY ... WITH (FORMAT csv) rather than attempting to wrangle pg_dump into producing Oracle-compatible partial dumps. This will save you a great deal of time when importing into other DBMSes, since pretty much everything understands CSV. Keep the schema identical on every instance so you don't need to handle different schemas for different instances.
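For example, a per-table dump could look something like this (a sketch only; the table and file names are placeholders, and the server-side COPY needs file access on the database server, while psql's \copy writes the file through the client instead):

COPY sensor_readings TO '/mnt/usb/sensor_readings.csv'
    WITH (FORMAT csv, HEADER true);

-- or, from psql, writing the file on the client side:
\copy sensor_readings TO 'sensor_readings.csv' WITH (FORMAT csv, HEADER true)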

Use disjoint key ranges or composite keys

Where unique keys are used, make sure you assign disjoint key ranges to different instances (e.g. the key range for an instance is instance_id * 10^7 to ((instance_id + 1) * 10^7) - 1), or use composite keys of (instance_id, generated_key). If you use disjoint ranges, add a CHECK constraint to enforce them; similarly, if you use composite keys, add a CHECK constraint forcing the instance_id part to a constant.
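As a sketch (the table, column names, and the hard-coded instance number 3 are all made up for illustration), the two variants could look like this:

-- Disjoint range: instance 3 owns keys 30000000 .. 39999999
CREATE TABLE readings (
    reading_id  bigint PRIMARY KEY,
    recorded_at timestamptz NOT NULL,
    value       double precision,
    CHECK (reading_id BETWEEN 3 * 10000000 AND (3 + 1) * 10000000 - 1)
);

-- Composite key variant: instance_id is pinned to this box's constant
CREATE TABLE readings_composite (
    instance_id integer NOT NULL,
    reading_id  bigint  NOT NULL,
    recorded_at timestamptz NOT NULL,
    value       double precision,
    PRIMARY KEY (instance_id, reading_id),
    CHECK (instance_id = 3)
);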

Lowest common denominator data types

Avoid using PostgreSQL arrays, record types, and hstore. They'll all dump as string literals in CSV dumps and you'll have to parse them in order to do something useful with them. For example:

regress=> SELECT ROW( ARRAY[1,2,3], ARRAY['a','b','withquotehere:''inline', 'doublequotehere:"'], ARRAY[ ROW(1,'a'), ROW(2,'b') ] );
                                            row                                             
--------------------------------------------------------------------------------------------
 ("{1,2,3}","{a,b,withquotehere:'inline,""doublequotehere:\\""""}","{""(1,a)"",""(2,b)""}")
(1 row)

You really, really don't want to have to parse that. It's a pretty extreme case: an anonymous record value containing three different arrays, one of integers, one of strings including quotes, and one of anonymous two-field row tuples. But seriously, use plain relational modelling and CSV export; the convenience of composite types, hstore, and arrays isn't worth the compatibility pain if you know you'll be doing lots of cross-DB work.

It's actually possible to use them in your schema; you'll just need to dump from a set of queries (COPY (SELECT ...) TO ... in CSV format) that expand them into appropriate relational structures. Since you're going to have to do that anyway, you might as well use regular relational modelling in the first place and save yourself some pain.
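For instance, an integer array column can be expanded into one row per element before dumping (the samples table and its measurements column here are hypothetical):

COPY (
    SELECT s.sample_id,
           unnest(s.measurements) AS measurement
    FROM samples s
) TO STDOUT WITH (FORMAT csv, HEADER true);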

Enums are OK; you can just treat them as strings when loading them. They're just a convenient and performant shortcut for a side table and a FK constraint anyway, allowing you to avoid joins and sorts.
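A minimal sketch (the type, table, and labels are invented for illustration) of how an enum comes out in a CSV dump; on the Oracle side a plain string column, optionally with a CHECK constraint, can stand in for it:

CREATE TYPE box_status AS ENUM ('idle', 'collecting', 'error');

CREATE TABLE box_events (
    event_id bigint PRIMARY KEY,
    status   box_status NOT NULL
);

-- The CSV just contains the labels as text, e.g.  42,collecting
COPY box_events TO STDOUT WITH (FORMAT csv);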

In PostgreSQL 9.0 and newer, bytea is output as hex strings by default; older versions output octal escape sequences. Either way, you can load it into a varchar column in Oracle and then convert it if you can't get Oracle to understand the representation natively. I haven't dealt with Oracle binary types, so I can't help much more there.
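A quick way to preview what will land in the dump (assuming the default bytea_output setting on a 9.0+ server; the sample value is arbitrary):

-- Hex output form: \x48454c4c4f for the bytes of 'HELLO'
SELECT 'HELLO'::bytea;

-- encode() yields the hex digits without the \x prefix, which can be easier
-- to post-process before loading elsewhere
SELECT encode('HELLO'::bytea, 'hex');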