There are various tricks for getting some performance improvements in the OCI library (for example, chunk size settings). But since you do not have direct access to OCI, you should contact your DAO vendor.
If you want to trace a user's session from the beginning, you have to create a "LOGON TRIGGER ON SCHEMA" and then call something like dbms_support.start_trace(true) in this trigger.
Or you can enable tracing in another session by using dbms_monitor.
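For example, here is a minimal sketch of such a logon trigger, using dbms_monitor rather than dbms_support; the schema name app_user is illustrative, not something from your system:

CREATE OR REPLACE TRIGGER trace_app_user_logon
AFTER LOGON ON app_user.SCHEMA
BEGIN
    -- Enable SQL trace for the newly created session,
    -- including wait events and bind variable values.
    DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
END;
/

Remember to drop or disable the trigger once you have collected the trace files, otherwise every new session keeps writing trace data.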
You should first "prove" that the root cause is BLOB handling. The server-side trace log should give you some numbers. (Just beware - it's not easy to read.)
PS: there is also a way to register your own "interceptor" library in OCI.dll. This will give you detailed tracing on the client side and will tell you how your DAO is actually working. But this really requires some Oracle experience.
Your issue may involve the configuration settings of either the OS or MySQL.
HISTORICAL EXAMPLES
Years ago when I worked at a website firm, I used --skip-extended-insert to load a DB Server that had 2GB RAM. That made a whole world of sense to me because the server I dumped it from had 64GB RAM and the client wanted the data on a smaller machine.
At my current place of employment, an outside vendor dumped data from SQL Server and produced a mysqldump-compatible dump file. The problem with that dump file was that each table had a single INSERT with all of its rows. One table had 3.5 million rows. They could not be loaded into an in-house VM with less than 8GB of RAM configured. Shockingly, the solution to that was to scp the file to my laptop, load it into MySQL 5.5.37 on my laptop, mysqldump from it, send the new mysqldump back to the Linux CentOS VM, and load the mysqldump.
BTW, the 3.5-million-row table loaded as a single INSERT into my Windows 7 laptop in 4.5 hours without changing any MySQL settings. To be honest with you, I did not believe that "shot in the dark" would work, because I did not reconfigure MySQL on the laptop to work with the dump. My guess is that it probably loaded because the undo log in ibdata1 just grew on disk and held everything.
You should not have to live with --skip-extended-insert if you dump and reload on the same machine.
SUGGESTIONS
While I was typing my answer, I saw you inject the following into the question
According to a query I ran, the table clocks in at around 1.7GB, but it's MyISAM and apparently that number is unreliable (but it's a big table, anyway).
SUGGESTION #1
If that one MyISAM table is 1.7GB and holds the 11 million rows, my suggestion would be for you to increase the size of the bulk insert buffer. The bulk insert buffer is used only by MyISAM and supports extended inserts. Set bulk_insert_buffer_size to 256M.
Go to my.cnf or my.ini and add this
bulk_insert_buffer_size = 256M
Then, log in to MySQL and run
mysql> SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 256;
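Keep in mind that the SET GLOBAL only applies to connections opened after the change. As an optional sanity check, you can look at both scopes with

mysql> SELECT @@global.bulk_insert_buffer_size, @@session.bulk_insert_buffer_size;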
SUGGESTION #2
As for your InnoDB, you need to use a bigger log buffer. Set it to 64M by adding this to my.cnf
innodb_log_buffer_size = 64M
You must restart MySQL to use that new value.
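After the restart, one quick way to confirm the value took effect is

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_buffer_size';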
SUGGESTION #3
You may also need bigger InnoDB logs, by increasing innodb_log_file_size. This requires manual intervention as follows:
STEP 01: Run this command as root
mysql> SET GLOBAL innodb_fast_shutdown = 0;
STEP 02: Shut down mysql
- Linux :
service mysql stop
- Windows (as Administrator) :
net stop mysql
STEP 03: Add this option to my.cnf or my.ini
innodb_log_file_size = 1G
STEP 04: In the OS, go to the folder where ib_logfile0 and ib_logfile1 are located and run
Linux
mv ib_logfile0 ib_logfile0.bak
mv ib_logfile1 ib_logfile1.bak
Windows
rename ib_logfile0 ib_logfile0.bak
rename ib_logfile1 ib_logfile1.bak
STEP 05: Start mysql (the log files get recreated so start will take an extra 1-2 min)
- Linux :
service mysql start
- Windows (as Administrator) :
net start mysql
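Once mysqld is back up, ib_logfile0 and ib_logfile1 should have been recreated at 1G each; a quick way to confirm the setting from the client is

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';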
After making one or more of these suggested changes, try dumping and reloading. It should go much more smoothly. Start with just SUGGESTION #1 and try it. Then try the other two to see if it improves things further.
SUMMARY
You need these options for your target MySQL DB Server in your my.cnf or my.ini (max_allowed_packet is included so that very large single INSERT statements and BLOB values still fit in one packet):
bulk_insert_buffer_size = 256M
innodb_log_buffer_size = 64M
innodb_log_file_size = 1G
max_allowed_packet = 1G
GIVE IT A TRY !!!
Best Answer
I recommend avoiding putting more than 16MB (perhaps less) in a column in MySQL. There are many limitations in MySQL, and possibly in the companion tools you are using.
For storing Text (as opposed to blobs) in a column, I would strongly consider compressing it. This is likely to shrink the data by 3x. Do the compression in the Client, not the Server.
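A small sketch of the table side of that advice, with illustrative names: text compressed in the client is binary data, so it belongs in a binary column (e.g. MEDIUMBLOB) rather than TEXT:

CREATE TABLE article (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body MEDIUMBLOB NOT NULL   -- zlib/gzip-compressed text; compressed and decompressed in the client
) ENGINE=InnoDB;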
What is the Text for? How will it eventually be used? An analogy: with images (which must be stored in BLOB/MEDIUMBLOB/LONGBLOB), it can be better to store them as files when the use is HTML's <img src=file...>. That makes the fetching trivial.

I once had the task of storing large data, even multi-GB movies. Obviously, I could not even use LONGBLOB (4GB limit). I chose to break the stream into 50KB (pick a number) chunks and store them in a BLOB column, one chunk per row of a table, together with a sequence number to allow reconstructing the result. This allowed me to "stream" things rather than "store (or fetch) it all at once", so I was not straining the limits of the client software (which was PHP and Apache).

As for inserting a blob (or compressed text), it may be optimal to convert it to hex, then use UNHEX("...") in the INSERT statement. This completely avoids any issue of escaping strings, etc. (but does not avoid SQL injection problems). It does, however, cut in half some of the limits you may encounter.
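A minimal sketch of that chunked layout and of an UNHEX-based insert, assuming a 50KB chunk size; the table and column names are purely illustrative:

CREATE TABLE media_chunk (
    media_id INT UNSIGNED NOT NULL,   -- which stream/movie the chunk belongs to
    seq      INT UNSIGNED NOT NULL,   -- sequence number used to reassemble the stream
    chunk    BLOB NOT NULL,           -- up to ~50KB of raw bytes
    PRIMARY KEY (media_id, seq)
) ENGINE=InnoDB;

-- The client hex-encodes each chunk, so no string escaping is needed.
-- (A real chunk would be ~100,000 hex characters; 'Hello' keeps the example short.)
INSERT INTO media_chunk (media_id, seq, chunk)
VALUES (1, 0, UNHEX('48656C6C6F'));

-- Reassemble (or stream) by reading the chunks back in order.
SELECT chunk FROM media_chunk WHERE media_id = 1 ORDER BY seq;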