I've got a database in which I need to update a column in some rows. I got the data from my logs and wrote a PHP script that processes each log and extracts the data into a file containing an UPDATE for each record that needs it.
I broke it down into one file per month, but running this is taking forever. Is there a better way to do it?
The file is basically a .sql file filled with 205k statements like:
update table set ended_at = '2019-01-31 23:59:30' where uuid = '304ad6ea-b949-42f5-96a0-a7a550e6126d';
Is there a better way?
Best Answer
I would suggest that the schema is poorly designed if you have 205K rows with the identical uuid. Furthermore, if you need all of them to have the same ended_at value, then there should be some other table that has, for example, one row with uuid and ended_at. The JOIN to get the data during a SELECT will be more than compensated for by having removed ended_at from the big table.

Perhaps I am being overly critical. If so, please provide some more insight into what the columns mean, where the uuids come from, etc.
The best way to update "many" rows in a "huge" table is to walk through the table by the PRIMARY KEY and update a thousand rows at a time. More details on chunking: http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks
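One way to avoid 205k single-row UPDATEs, sketched below with assumed names (big_table, a CSV dump of the extracted pairs) and not tested against your schema: load the (uuid, ended_at) pairs once, then apply them with a single multi-table UPDATE.

```sql
-- Stage the extracted (uuid, ended_at) pairs from the PHP script's output
CREATE TEMPORARY TABLE pending (
    uuid     CHAR(36) NOT NULL PRIMARY KEY,
    ended_at DATETIME NOT NULL
);

LOAD DATA LOCAL INFILE '/tmp/january.csv'
INTO TABLE pending
FIELDS TERMINATED BY ','
(uuid, ended_at);

-- One multi-table UPDATE replaces the whole .sql file; for a very large
-- target table, add a PRIMARY KEY range predicate and repeat in chunks
-- of ~1000 rows as described at the link above
UPDATE big_table t
JOIN   pending   p USING (uuid)
SET    t.ended_at = p.ended_at;
```

This does one pass over the data instead of 205k separate statement parses and index lookups, which is usually where the time goes.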