MySQL – Speeding up inserts into a table with 47+ million rows per year, spanning more than 10 years


I'm moving my database to MySQL, but there is one table that is pretty big (more than 47 million rows per year) and it takes ages to insert all the data.

I already tried everything I found in the forums: partitioning, indexes, full-text… It's true that things work better, but it's still far from efficient.

To give more background: the database holds hydrological data recorded over the past ten years. There are three tables: one storing the objects, another storing the sensors of each object, and a last one, the one causing the big problem, storing the hourly readings collected for each sensor (this table will hold 47 million rows for each year in the database).

I'm using .NET for the transactions.

Is there any way I can perform the inserts in less than 10 hours?

Thank you very much!

Best Answer

Simple, this one. Create a staging table with the same columns as your main table. No indexes on the staging table, please, as they will slow it down. Insert your rows into the staging table via .NET as normal. Then run:

START TRANSACTION;
INSERT INTO main_table (cols here)
    SELECT cols here FROM staging_table;
COMMIT;
-- TRUNCATE performs an implicit commit in MySQL, so run it after the COMMIT
TRUNCATE TABLE staging_table;

You can still run that SQL inline via .NET, or simply use a stored procedure (see the sketch below). Either way, the insert into the main table is a SINGLE statement, rather than .NET issuing one insert per row (RBAR, "row by agonizing row"). Because this method inserts the data into the main table in one go, the indexes are only updated once. We use this to great effect in our warehousing databases.
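For illustration, here is a minimal sketch of the staging-table approach wrapped in a MySQL stored procedure. The procedure, column, and index names (flush_staging, sensor_id, recorded_at, value, idx_sensor_time) are hypothetical placeholders for your actual schema. Note again that TRUNCATE commits implicitly in MySQL, which is why it sits outside the transaction:

-- Create the staging table once, mirroring main_table's columns.
-- CREATE TABLE ... LIKE also copies indexes, so drop any that get copied, e.g.:
CREATE TABLE staging_table LIKE main_table;
-- ALTER TABLE staging_table DROP INDEX idx_sensor_time;

DELIMITER //
CREATE PROCEDURE flush_staging()
BEGIN
    START TRANSACTION;
    INSERT INTO main_table (sensor_id, recorded_at, value)
        SELECT sensor_id, recorded_at, value
        FROM staging_table;
    COMMIT;
    -- TRUNCATE commits implicitly, hence it runs after the COMMIT above
    TRUNCATE TABLE staging_table;
END //
DELIMITER ;

-- Call after each batch has been loaded into staging_table via .NET:
CALL flush_staging();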


As you're inserting a lot of data into main_table, an alternative is to drop all the indexes on main_table, insert your data, then re-create the indexes, as sketched below. This too is a very efficient (read: fast) approach.
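A minimal sketch of that alternative, again using a hypothetical index name (idx_sensor_time) and columns (sensor_id, recorded_at) standing in for your real ones:

-- Drop each secondary index before the bulk load:
ALTER TABLE main_table DROP INDEX idx_sensor_time;

-- ... perform the bulk insert here ...

-- Re-create the indexes once all rows are in:
ALTER TABLE main_table ADD INDEX idx_sensor_time (sensor_id, recorded_at);

Re-creating an index over the full table is built in one pass, which is generally much faster than maintaining the index row by row during the insert.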