If your table has more than 1000 columns, it cannot be converted to InnoDB. In that case, run this query:
SELECT CEILING(SUM(index_length)/POWER(1024,2)) num
FROM information_schema.tables WHERE engine='MyISAM';
This will give you the correct size for key_buffer_size in MB.
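A sketch of applying the result (the 256M figure is illustrative, not from the query above):

```sql
-- Apply at runtime (takes effect immediately, lost on restart)
SET GLOBAL key_buffer_size = 256 * 1024 * 1024;

-- To make it permanent, also set it in my.cnf under [mysqld]:
--   key_buffer_size = 256M
```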
Since you are doing an UPSERT, you should set concurrent_insert to 2 to make INSERTs faster. You may also want to consider changing the table's row format to Fixed. I wrote about why to do both on StackOverflow. In essence, with a Fixed row format all table rows are the same size, so INSERTs and UPDATEs operate on exactly the same length of data and row access is far easier to manage.
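A minimal sketch of both changes (the table name is hypothetical):

```sql
-- 2 allows concurrent INSERTs at the end of the data file
-- even when the table has holes from deleted rows
SET GLOBAL concurrent_insert = 2;

-- Force fixed-length rows; note that VARCHAR columns are stored
-- as fixed-width, so the table will take more disk space
ALTER TABLE mydb.mytable ROW_FORMAT = FIXED;
```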
Since MyISAM only caches indexes (in the key buffer), all data must be read from disk. Anything you can do to get better RAID performance (as asked by @TomTom) would help your cause as well.
When you have a Primary Key with auto_increment, a new ID is generated only if you insert a NULL value. If you set ID=4 in your INSERT, the ID will be 4, so you won't lose your ID during your "move" operation.
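To illustrate the behavior (table and values are hypothetical):

```sql
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(20));

INSERT INTO t VALUES (NULL, 'a');  -- NULL triggers a generated id (1)
INSERT INTO t VALUES (4, 'b');     -- explicit id is kept as-is (4)
INSERT INTO t VALUES (NULL, 'c');  -- generation resumes above the highest id (5)
```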
MySQL doesn't have the "SEQUENCE" notion found in Oracle, so your "global ID" problem is not so easy to solve.
Maybe you can try something like this (but it adds complications for a table of just 4 million rows):
Create a table used to generate your "Global ID", with a single auto_incremented int field:
CREATE TABLE test.sequence_table (next_id int primary key auto_increment);
When you want to insert a new row in your child table:
Solution 1: With a SELECT on information_schema
BEGIN; -- Start a new Transaction to ensure consistency
INSERT INTO test.sequence_table values (NULL); -- Generate a new ID
SELECT @next_ID:=(auto_increment - 1) FROM information_schema.tables WHERE table_schema="test" AND table_name="sequence_table"; -- Here I use a MySQL Variable but you can store it in PHP or whatever
INSERT INTO child_table values (null, @next_ID, "Max", "SQL"); -- Use your variable
COMMIT; -- Wonderful :)
Edit after ypercube's comment:
Solution 2: With LAST_INSERT_ID()
BEGIN; -- Start a new Transaction to ensure consistency
INSERT INTO test.sequence_table values (NULL); -- Generate a new ID
SELECT @next_ID:=LAST_INSERT_ID(); -- Use of the MySQL function LAST_INSERT_ID()
INSERT INTO child_table values (null, @next_ID, "Max", "SQL"); -- Use your variable
COMMIT; -- Wonderful :)
Best Answer
Assuming that the Customer_details table is populated, you can do lookups on the fly with LOAD DATA INFILE by leveraging session variables and a SET clause. Suppose we have a CSV file with the following content:
Let's try to import it:
There you have it.
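A minimal sketch of the technique, assuming an orders table with (customer_id, amount), a Customer_details table keyed by email, and a CSV whose first field is the email address (all file, table, and column names are illustrative):

```sql
-- /tmp/orders.csv contains lines like:  max@example.com,100.00
LOAD DATA INFILE '/tmp/orders.csv'
INTO TABLE orders
FIELDS TERMINATED BY ','
(@email, amount)   -- capture the CSV email field into a session variable
-- Look up the matching customer on the fly via the SET clause
SET customer_id = (SELECT id FROM Customer_details WHERE email = @email);
```

The SET clause accepts expressions, including subqueries against other tables, which is what makes the on-the-fly lookup possible without a staging table.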