MySQL – Derby slows down after 1.2 million records

Tags: derby, insert, java, mysql

I'm using a Derby database. Mass insertions slow down to between 1/4 and 1/6 of their original speed once there are 1.2 million records (about 6 GB database size; one major table, the other tables are tiny). Is that normal? Is there something I can do to tweak it and make it run faster? I have already compressed the main table.

Questions

  • Should I consider other databases?
  • How could I compare Derby's performance with, say, MySQL's in this kind of scenario?
  • Are there such comparisons already?

(Running on a Windows Server 2008 R2 machine, 16 GB RAM, 8 cores.)

Best Answer

DISCLAIMER: Not a Derby expert

DERBY

Derby has tuning properties you can set to increase the page size and the number of data pages it caches.
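As a sketch, those properties can go in `derby.properties` (or be passed as JVM system properties). The values below are illustrative assumptions that would need to be sized against the machine's 16 GB of RAM, not recommendations:

```properties
# derby.properties — page and cache tuning (values are illustrative)
# Larger pages reduce per-row overhead for big tables (32768 is the maximum).
derby.storage.pageSize=32768
# Number of pages kept in the page cache (the default is only 1000).
# 40000 pages x 32 KB/page ~= 1.2 GB of cache.
derby.storage.pageCacheSize=40000
```

Note that `derby.storage.pageSize` only affects tables created after it is set; existing tables keep the page size they were created with.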

For a mass load of table prodtable, you may want to consider

  • create temptable just like prodtable, but with no indexes
  • load data into the temptable
  • rebuild indexes on temptable
  • rename prodtable to zaptable
  • rename temptable to prodtable
  • drop zaptable
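The steps above can be sketched in Derby SQL. The table, column, and index names here are hypothetical placeholders, and the load step would in practice be a bulk import (e.g. `SYSCS_UTIL.SYSCS_IMPORT_TABLE`) rather than a plain `INSERT ... SELECT`:

```sql
-- 1. Create an index-free copy of the production table's structure.
CREATE TABLE temptable AS SELECT * FROM prodtable WITH NO DATA;

-- 2. Bulk-load the data into the index-free table.
INSERT INTO temptable SELECT * FROM staging_source;

-- 3. Build the indexes only after the data is in place.
CREATE INDEX idx_temptable_main ON temptable (main_column);

-- 4. Swap the tables, then drop the old one.
RENAME TABLE prodtable TO zaptable;
RENAME TABLE temptable TO prodtable;
DROP TABLE zaptable;
```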

MySQL

If you switch over to MySQL, you can do wonderful things to tune data-processing performance:

  • Start with using only InnoDB
  • You can configure InnoDB to
    • use multiple read I/O threads
    • use multiple write I/O threads
    • take advantage of multiple CPUs/cores
    • partition the buffer pool into multiple instances
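A `my.cnf` sketch of those settings. The values are illustrative assumptions for an 8-core, 16 GB machine and should be benchmarked, not copied blindly:

```ini
[mysqld]
# Multiple background I/O threads (the default is 4 of each).
innodb_read_io_threads=8
innodb_write_io_threads=8
# Let InnoDB schedule work across all cores (0 = no concurrency limit).
innodb_thread_concurrency=0
# Split the buffer pool into instances to reduce mutex contention.
innodb_buffer_pool_instances=8
# Leave headroom for the OS and other processes on a 16 GB box.
innodb_buffer_pool_size=8G
```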

Here are some of my past posts on configuring InnoDB in MySQL for multicore performance

While I may be a little biased towards MySQL, make sure you have Derby's caching configured to accommodate mass INSERTs. That way, you need not perform a major data migration if Derby can be made to handle large transactions.