What’s the most effective way to back up / save data on Oracle 11g for large data sets

oracle, oracle-11g

We have a database (actually several databases, split by primary access path, modulo on a key) with millions of records (about 400 million). Right now I need to save a specific table, userData (which holds data such as username, address, city, state, zip code, country), remotely (on a different server). I'm wondering which would be the best solution given the following requirements:

  • The data transfer between the host server (database server) and the
    remote server must be as small as possible.

  • The operation should not be CPU / resource intensive. It should be silent (it should not move the needle too much on the monitoring dashboard), because we are using a shared On Demand (as-a-service) provider which seems to place a "hold" on our instance when we make too much noise.

  • The operation must be fast.

    I was wondering whether it would be possible to do some compression on the fly (e.g. using 7z), and what kind of query I should use to keep it "silent" so it does not disturb the network / CPU too much.
    The 11g database runs on Solaris. On the remote server I'm planning to use Red Hat, but that can be different, of course.

Best Answer

You could use Oracle Data Pump Export and unload the data, compressing the resulting dump file(s). Below is an example of unloading the tables EMPLOYEES and DEPARTMENTS owned by the user HR:

expdp hr parfile=hrexp.par

The parameter file hrexp.par contains the following parameters:

content=data_only
compression=all
include=table:"IN ('EMPLOYEES', 'DEPARTMENTS')"
directory=dmpdir
dumpfile=hrdata.dmp
reuse_dumpfiles=yes
nologfile=yes

The dump file is created in the directory specified by the directory parameter. You will probably want this directory to be located on an NFS share so that it is available to both the source database host and the remote host. The Oracle database must have the appropriate operating-system privileges to read from and write to this directory, and the user performing the export must be granted read/write access on the directory object:

SQL> create directory DMPDIR as '/nfs/oraexp';
Directory created.

SQL> grant read, write on directory DMPDIR to hr;
Grant succeeded.

In the example, the content parameter instructs Data Pump Export to unload only the data. The compression parameter specifies that the exported data should be compressed.
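If you want a rough idea of how much data would be unloaded before writing anything (relevant to your requirement that the transfer stays small), Data Pump can also estimate the size of the job without performing the export. A minimal sketch, again using a parameter file (note that the estimate reflects the size of the source table data, not the size of the compressed dump):

expdp hr parfile=hrest.par

with hrest.par containing:

estimate_only=yes
estimate=blocks
include=table:"IN ('EMPLOYEES', 'DEPARTMENTS')"
nologfile=yes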

And now the most important part concerning performance. Oracle Data Pump can use different access methods during load and unload operations, and in your case the direct path method may be preferable because of your CPU resource usage requirement. For Data Pump to be able to use this method, make sure that none of the conditions listed in the section "Situations in Which Direct Path Unload Is Not Used" of the Database Utilities guide apply to your table:

If any of the following conditions exist for a table, then Data Pump uses the external table method to unload data, rather than the direct path method:

  • Fine-grained access control for SELECT is enabled.
  • The table is a queue table.
  • The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
  • The table contains encrypted columns.
  • The table contains a column of an evolved type that needs upgrading.
  • The table contains a column of type LONG or LONG RAW that is not last.
  • The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
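To check in advance whether any of these conditions apply to your table, you can query the data dictionary. A rough sketch, assuming the table is APP.USERDATA (replace the owner and table name with your own; the DBA_* views below can be swapped for their ALL_* equivalents if you lack the privileges):

-- VPD / fine-grained access control policies defined on the table
SELECT policy_name
  FROM dba_policies
 WHERE object_owner = 'APP' AND object_name = 'USERDATA';

-- Encrypted columns
SELECT column_name
  FROM dba_encrypted_columns
 WHERE owner = 'APP' AND table_name = 'USERDATA';

-- LONG / LONG RAW / BFILE columns and their position in the table
SELECT column_name, data_type, column_id
  FROM dba_tab_columns
 WHERE owner = 'APP' AND table_name = 'USERDATA'
   AND data_type IN ('LONG', 'LONG RAW', 'BFILE');

If these queries return no rows (and none of the other conditions above apply), direct path unload should not be ruled out for your table.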