For PostgreSQL, I would recommend using pgloader.

pgloader leaves the rows that it cannot import in a separate text file, so you can take a look at what went wrong, fix the errors, and finish importing the data. COPY simply fails when it encounters a row it doesn't expect, which is likely to happen several times with very big files.
pgloader can also be set up to load data in parallel, but I'm not sure how much help it would be if you're just loading a single table.
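For reference, a pgloader command file for a single CSV load might look something like this (the file path, connection string, table name, and delimiter are all placeholders for your own setup):

```
LOAD CSV
     FROM '/data/big_table.csv'
     INTO postgresql://user@localhost/mydb?tablename=big_table
     WITH skip header = 1,
          fields terminated by ','
;
```

Rows that fail to load are written out to pgloader's reject files, which is what lets you fix them and reload just the failures.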
For MySQL, the standard LOAD DATA LOCAL INFILE is as fast as you're going to get (as far as I know). It spits out a warning for each row that it cannot import, so you can go back and look at that data later.
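A minimal invocation might look like this (again, the path, table name, and delimiters are placeholders; adjust them to match your file):

```
LOAD DATA LOCAL INFILE '/data/big_table.csv'
INTO TABLE big_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;

-- Review the rows MySQL complained about
SHOW WARNINGS;
```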
Other than that, 11.2 GB isn't too bad. Be sure to remove all existing indexes on the table that you're importing, and recreate the indexes afterwards.
I've imported text files of comparable size into both databases without a problem. Loading data and creating indexes might take a few hours, so just leave it overnight.
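To illustrate the drop-and-recreate pattern on the PostgreSQL side (the index and column names here are made up for the example):

```
DROP INDEX IF EXISTS big_table_customer_idx;

COPY big_table FROM '/data/big_table.csv' WITH (FORMAT csv, HEADER true);

CREATE INDEX big_table_customer_idx ON big_table (customer_id);
```

Building the index once at the end is much cheaper than maintaining it row by row during the load.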
A useful little protip: Visio is quite widely available within corporate IT suites, and Visio Professional or higher can be used to reverse engineer and document a database.
Visio Professional has a database reverse engineering tool that will read a database schema and create E-R models from it. Both PostgreSQL and MySQL expose INFORMATION_SCHEMA and have ODBC drivers, so Visio's database modeller will work with them. Start by creating a database diagram, then look for 'Reverse Engineer' under the 'Database' menu.
As a bonus, if the physical DB has missing foreign key constraints you can add them back into the model for documentation purposes. For a more complex schema you can also create several diagrams showing different subsystems. I find myself using this to document systems at client sites on a semi-regular basis and it's often the only tool available.