MySQL – How to update 10 million+ rows in a single MySQL table as fast as possible

myisam, mysql, mysql-5.6, performance, update

I am using MySQL 5.6 with the InnoDB storage engine for most of the tables. The InnoDB buffer pool size is 15 GB, and the InnoDB data + indexes are around 10 GB. The server has 32 GB RAM and is running CentOS 7 x64.

I have one big table which contains around 10 million+ records.

I get an updated dump file from a remote server every 24 hours. The file is in CSV format, and I don't have control over that format. The file is ~750 MB. I tried inserting the data into a MyISAM table row by row and it took 35 minutes.

I need to take only 3 of the 10-12 values per line from the file and update them in the database.

What's the best way to achieve something like this?

I need to do this daily.

The current flow is like this:

  1. mysqli_begin_transaction
  2. Read the dump file line by line
  3. Update each record line by line (sketched below)
  4. mysqli_commit
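
In SQL terms, step 3 issues one statement per CSV line, roughly like this (a minimal sketch; the column names are the ones this job updates, shown later in the post, and the placeholders stand for values parsed from the current line):

UPDATE content
SET uploaders = ?, downloaders = ?, verified = ?
WHERE hash = ?;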

The above operations take around 30-40 minutes to complete, and while this is running, other updates are going on, which gives me

Lock wait timeout exceeded; try restarting transaction
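
The timeout being hit is InnoDB's innodb_lock_wait_timeout (50 seconds by default): the other sessions wait on row locks held by the long-running transaction. It can be checked, and raised per session if needed, although that only masks the contention:

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- raising it in the sessions that time out hides the symptom;
-- the real fix is to shorten or split the long transaction
SET SESSION innodb_lock_wait_timeout = 120;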

Update 1

I loaded the data into a new table using LOAD DATA LOCAL INFILE. Loading into MyISAM took 38.93 sec, while into InnoDB it took 7 min 5.21 sec. Then I did:

UPDATE table1 t1, table2 t2
SET 
t1.field1 = t2.field1,
t1.field2 = t2.field2,
t1.field3 = t2.field3
WHERE t1.field10 = t2.field10

Query OK, 434914 rows affected (22 hours 14 min 47.55 sec)
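
A run time like this suggests the join could not use an index on the join column. A quick way to inspect the plan before running such an update is to EXPLAIN the equivalent SELECT (same table and column names as in the query above):

EXPLAIN
SELECT t1.field10
FROM table1 t1
JOIN table2 t2 ON t1.field10 = t2.field10;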

Update 2

The same update, written as a JOIN query:

UPDATE table1 a JOIN table2 b 
ON a.field1 = b.field1 
SET 
a.field2 = b.field2,
a.field3 = b.field3,
a.field4 = b.field4

(14 hours 56 min 46.85 sec)

Clarifications from questions in comments:

  • About 6% of the rows in the table will be updated by the file, but sometimes it can be as much as 25%.
  • There are indexes on the fields being updated. There are 12 indexes on the table, and 8 indexes include the update fields.
  • It is not necessary to do the update in one transaction. It can take time, but not more than 24 hours. I am looking to get it done in 1 hour without locking the whole table, as later I have to update the Sphinx index, which depends on this table. It does not matter if the steps take longer as long as the database is available for other tasks.
  • I could modify the CSV format in a preprocessing step. The only things that matter are a quick update and no locking.
  • Table 2 is MyISAM. It is the new table created from the CSV file using LOAD DATA INFILE. The MYI file size is 452 MB. Table 2 is indexed on the field1 column.
  • The MYD file of the MyISAM table is 663 MB.

Update 3

Here are more details about both tables.

CREATE TABLE `content` (
  `hash` char(40) CHARACTER SET ascii NOT NULL DEFAULT '',
  `title` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `og_name` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `keywords` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `files_count` smallint(5) unsigned NOT NULL DEFAULT '0',
  `more_files` smallint(5) unsigned NOT NULL DEFAULT '0',
  `files` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '0',
  `category` smallint(3) unsigned NOT NULL DEFAULT '600',
  `size` bigint(19) unsigned NOT NULL DEFAULT '0',
  `downloaders` int(11) NOT NULL DEFAULT '0',
  `completed` int(11) NOT NULL DEFAULT '0',
  `uploaders` int(11) NOT NULL DEFAULT '0',
  `creation_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `upload_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `last_updated` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `vote_up` int(11) unsigned NOT NULL DEFAULT '0',
  `vote_down` int(11) unsigned NOT NULL DEFAULT '0',
  `comments_count` int(11) NOT NULL DEFAULT '0',
  `imdb` int(8) unsigned NOT NULL DEFAULT '0',
  `video_sample` tinyint(1) NOT NULL DEFAULT '0',
  `video_quality` tinyint(2) NOT NULL DEFAULT '0',
  `audio_lang` varchar(127) CHARACTER SET ascii NOT NULL DEFAULT '',
  `subtitle_lang` varchar(127) CHARACTER SET ascii NOT NULL DEFAULT '',
  `verified` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `uploader` int(11) unsigned NOT NULL DEFAULT '0',
  `anonymous` tinyint(1) NOT NULL DEFAULT '0',
  `enabled` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `tfile_size` int(11) unsigned NOT NULL DEFAULT '0',
  `scrape_source` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `record_num` int(11) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`record_num`),
  UNIQUE KEY `hash` (`hash`),
  KEY `uploaders` (`uploaders`),
  KEY `tfile_size` (`tfile_size`),
  KEY `enabled_category_upload_date_verified_` (`enabled`,`category`,`upload_date`,`verified`),
  KEY `enabled_upload_date_verified_` (`enabled`,`upload_date`,`verified`),
  KEY `enabled_category_verified_` (`enabled`,`category`,`verified`),
  KEY `enabled_verified_` (`enabled`,`verified`),
  KEY `enabled_uploader_` (`enabled`,`uploader`),
  KEY `anonymous_uploader_` (`anonymous`,`uploader`),
  KEY `enabled_uploaders_upload_date_` (`enabled`,`uploaders`,`upload_date`),
  KEY `enabled_verified_category` (`enabled`,`verified`,`category`),
  KEY `verified_enabled_category` (`verified`,`enabled`,`category`)
) ENGINE=InnoDB AUTO_INCREMENT=7551163 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=FIXED


CREATE TABLE `content_csv_dump_temp` (
  `hash` char(40) CHARACTER SET ascii NOT NULL DEFAULT '',
  `title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `category_id` int(11) unsigned NOT NULL DEFAULT '0',
  `uploaders` int(11) unsigned NOT NULL DEFAULT '0',
  `downloaders` int(11) unsigned NOT NULL DEFAULT '0',
  `verified` tinyint(1) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
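
The daily load into this staging table is done with LOAD DATA INFILE; a minimal sketch, assuming a hypothetical file path and column order, with the CSV fields that are not needed read into throwaway user variables and discarded:

LOAD DATA LOCAL INFILE '/path/to/daily_dump.csv'
INTO TABLE content_csv_dump_temp
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(hash, @skip1, title, category_id, @skip2, uploaders, downloaders, verified);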

And here is the update query which updates the content table using data from content_csv_dump_temp:

UPDATE content a JOIN content_csv_dump_temp b 
ON a.hash = b.hash 
SET 
a.uploaders = b.uploaders,
a.downloaders = b.downloaders,
a.verified = b.verified
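
Note that uploaders and verified appear in several of the secondary indexes shown above, so every changed row also has to maintain those index entries, which is part of what makes this update expensive. The indexes touching the updated columns can be listed from information_schema:

SELECT INDEX_NAME, COLUMN_NAME
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'content'
  AND COLUMN_NAME IN ('uploaders', 'downloaders', 'verified');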

Update 4

All the above testing was done on a test machine, but now I ran the same tests on the production machine, and the queries are very fast.

mysql> UPDATE content_test a JOIN content_csv_dump_temp b
    -> ON a.hash = b.hash
    -> SET
    -> a.uploaders = b.uploaders,
    -> a.downloaders = b.downloaders,
    -> a.verified = b.verified;
Query OK, 2673528 rows affected (7 min 50.42 sec)
Rows matched: 7044818  Changed: 2673528  Warnings: 0

I apologize for my mistake. It's better to use a join instead of updating each record individually. Now I am trying to improve it further using the index suggested by rick_james, and will update once benchmarking is done.

Best Answer

Based on my experience, I would use LOAD DATA INFILE to import your CSV file.

The LOAD DATA INFILE statement reads rows from a text file into a table at a very high speed.

Here is an example I found on the internet: Load Data example. I tested this example on my box and it worked fine.

Example Table

CREATE TABLE example (
  `Id` int(11) NOT NULL AUTO_INCREMENT,
  `Column2` varchar(14) NOT NULL,
  `Column3` varchar(14) NOT NULL,
  `Column4` varchar(14) NOT NULL,
  `Column5` DATE NOT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB

Example CSV File

# more /tmp/example.csv
Column1,Column2,Column3,Column4,Column5
1,A,Foo,sdsdsd,4/13/2013
2,B,Bar,sdsa,4/12/2013
3,C,Foo,wewqe,3/12/2013
4,D,Bar,asdsad,2/1/2013
5,E,FOObar,wewqe,5/1/2013

Import Statement to be run from MySQL console

LOAD DATA LOCAL INFILE '/tmp/example.csv'
    -> INTO TABLE example
    -> FIELDS TERMINATED BY ','
    -> LINES TERMINATED BY '\n'
    -> IGNORE 1 LINES
    -> (id, Column3,Column4, @Column5)
    -> set
    -> Column5 = str_to_date(@Column5, '%m/%d/%Y');

Result

MySQL [testcsv]> select * from example;
+----+---------+---------+---------+------------+
| Id | Column2 | Column3 | Column4 | Column5    |
+----+---------+---------+---------+------------+
|  1 |         | Column2 | Column3 | 0000-00-00 |
|  2 |         | B       | Bar     | 0000-00-00 |
|  3 |         | C       | Foo     | 0000-00-00 |
|  4 |         | D       | Bar     | 0000-00-00 |
|  5 |         | E       | FOObar  | 0000-00-00 |
+----+---------+---------+---------+------------+

IGNORE 1 LINES simply skips the first line, which contains the column headers.

After IGNORE, we specify the columns to import (skipping Column2), which matches one of the criteria in your question.

Here is another example directly from Oracle: LOAD DATA INFILE example

This should be enough to get you started.