"Nov 11 09:05:14 localhost kernel: [ 5931.834129] Out of memory: Kill process 6761 (mysqld) score 233 or sacrifice child"
You are exhausting system memory, so the kernel's OOM killer terminates the largest consumer of memory; in your case, that is mysqld.
Consider revising MySQL's global and per-thread variables to keep total memory usage under control. You can estimate MySQL's worst-case memory usage from those variables.
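The worst-case estimate can be sketched with simple arithmetic: global buffers are allocated once, while per-thread buffers may be allocated by every connection. This is a rough sketch; the variable names are real MySQL settings, but the values below are illustrative defaults, not your server's actual configuration:

```python
# Rough worst-case MySQL memory estimate (a sketch; actual usage depends on
# workload, and the values here are illustrative, not read from a server).

# Global buffers: allocated once, shared by all connections (bytes).
global_buffers = {
    "innodb_buffer_pool_size": 128 * 1024**2,
    "key_buffer_size":          16 * 1024**2,
    "innodb_log_buffer_size":   16 * 1024**2,
}

# Per-thread buffers: each connection may allocate its own copy (bytes).
per_thread = {
    "sort_buffer_size":     256 * 1024,
    "read_buffer_size":     128 * 1024,
    "read_rnd_buffer_size": 256 * 1024,
    "join_buffer_size":     256 * 1024,
    "thread_stack":         256 * 1024,
    "binlog_cache_size":     32 * 1024,
}

max_connections = 151  # the MySQL default

worst_case = (sum(global_buffers.values())
              + max_connections * sum(per_thread.values()))
print(f"Worst-case memory: ~{worst_case / 1024**2:.0f} MB")
```

If the worst-case figure approaches or exceeds physical RAM, the OOM kill above is exactly what you should expect under load.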
Also, swapping is bad for MySQL; it is better to set vm.swappiness to 1-10, or to 0, depending on your kernel version.
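For example, a sysctl fragment along these lines (the filename is an illustrative choice; apply with `sysctl --system` or a reboot):

```
# /etc/sysctl.d/90-mysql.conf  -- hypothetical filename
# Make the kernel strongly prefer dropping page cache over swapping.
vm.swappiness = 1
```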
WordPress
WordPress was not designed with scaling in mind. (Caveat: what I say here is based on looking at its innards several years ago; things may have changed.) There is no "read-write" separation, so scaling reads via replication would be a major undertaking. Each "user" has his own database with 13+ tables. For large installations, this leads to OS issues in searching the filesystem, and to MySQL issues with things like `table_open_cache`.
On the other hand, your question implies one huge user, not a huge number of users. And, I can't tell how much of the damage comes from WooCommerce instead of WordPress, itself.
Big table inefficiencies
Here are issues that make the search implementation inefficient for big tables:

`LIKE '%tub%'` is inefficient. Because of the leading wildcard, it cannot use any index, so it must scan the entire table. `OR` adds insult to injury. `FULLTEXT` is the best alternative, but it comes with caveats, and it requires changes to both the schema and the `SELECT`.
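For illustration, the `FULLTEXT` approach might look like the following. This is a sketch only; the table and column names are the stock WordPress ones, and you should test carefully before altering a production schema:

```sql
-- Add a full-text index on the searched columns
-- (InnoDB supports FULLTEXT as of MySQL 5.6).
ALTER TABLE wp_posts
    ADD FULLTEXT INDEX ft_title_content (post_title, post_content);

-- Then, instead of
--   post_title LIKE '%tub%' OR post_content LIKE '%tub%'
-- the query can use the index:
SELECT ID, post_title
FROM   wp_posts
WHERE  MATCH(post_title, post_content)
       AGAINST ('tub' IN NATURAL LANGUAGE MODE);
```

Note the caveats mentioned above: FULLTEXT has its own tokenizer, minimum word length, and stopword rules, so results will not be identical to the `LIKE` version.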
"Key-value" store ("EAV") does not scale well; `postmeta` is such a schema. And it leads to kludges like `meta_value+0` and `CAST(mt1.meta_value AS CHAR)`.
The complexity of the `GROUP BY` plus `ORDER BY` makes it impossible to use any index to avoid finding all the rows before delivering only `LIMIT 0,10`. However, that means the extra burden of `SQL_CALC_FOUND_ROWS` is minimal.
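Schematically, the pagination pattern WordPress generates looks something like this (the table and column names are the stock WordPress ones; the `WHERE` clause is an illustrative placeholder, and `SQL_CALC_FOUND_ROWS` is deprecated in MySQL 8.0, though older WordPress versions still emit it):

```sql
SELECT SQL_CALC_FOUND_ROWS p.ID
FROM   wp_posts    AS p
JOIN   wp_postmeta AS m  ON m.post_id = p.ID
WHERE  m.meta_key = '_visibility'        -- illustrative condition
GROUP BY p.ID
ORDER BY p.post_date DESC
LIMIT  0, 10;         -- only 10 rows are delivered ...

SELECT FOUND_ROWS();  -- ... but this returns the count before LIMIT
```

Since the `GROUP BY`/`ORDER BY` already forced the server to materialize all matching rows, counting them costs almost nothing extra.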
Having `LONGTEXT` for `meta_value` virtually eliminates any optimization relating to it.
This would help some: `INDEX(post_id, meta_key)`. (Tacking on `meta_value` would be even better, but it is not possible because of `LONGTEXT`.) This may be the second-best 'quick fix'.
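That suggested composite index is a one-line change (a sketch against the stock `wp_postmeta` table; the index name is an arbitrary choice):

```sql
-- Composite index so lookups by post_id can also filter on meta_key
-- without touching the LONGTEXT meta_value rows.
ALTER TABLE wp_postmeta
    ADD INDEX post_id_meta_key (post_id, meta_key);
```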
Other notes
One MySQL connection will use only one CPU core. So, as long as you do not have concurrent searches, there is no need for more than one core. Even with multiple simultaneous searches, there is some overlap, since part of the effort is I/O.
1GB of RAM? You are probably terribly I/O-bound, and you must have `innodb_buffer_pool_size` set to a terribly small value (otherwise you would be swapping, which is terribly bad for MySQL). 1GB is not "tiny"; it is "minuscule"! Go for at least 4GB. Then set the buffer pool to 1500M. This one change is the biggest "quick fix". Deal with RAM before RAID. Get metrics on CPU and I/O usage, if you can.
Of the 2.5M meta rows, how many have `meta_key = '_visibility'`? That is what the `EXPLAIN` decided to start with. It estimated 70K rows, but that could be seriously off.
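You can check the optimizer's estimate against reality directly (a sketch against the stock table names):

```sql
-- Exact count, to compare against EXPLAIN's 70K estimate:
SELECT COUNT(*)
FROM   wp_postmeta
WHERE  meta_key = '_visibility';

-- If the estimate is badly off, refreshing index statistics may help:
ANALYZE TABLE wp_postmeta;
```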
The schema is about twice as bulky as it needs to be, mostly because of sloppy datatypes. Smaller --> more cacheable --> less I/O --> faster. However, making the change would be tedious, and maybe impossible, because of WordPress and/or WooCommerce.
Prototyping
In programming it is often good to start with a "prototype" to prove the various design concepts and implementation details. Prototyping often involves "off the shelf" tools. When the data size grows and the accesses increase, it is time to throw out the "prototype" and design a custom system based on what you learned. You are past that time.
Best Answer
Good luck. You'll need to do way more work than just a "restore".
Essentially, you'll need to create a custom pipeline to convert the data in the SQL Server database into the table structure used by WordPress.
Even if the table structure were identical, and it isn't, you can't "restore" a .bak file into a MySQL database. The .bak file does not contain SQL statements; it is a binary, page-by-page representation of the SQL Server database.
Presuming there isn't a huge number of articles, it is generally easier to transfer them into WordPress manually through its interface. Welcome to the thrill of vendor lock-in.
If you have access to a SQL Server instance, you might try restoring the .bak file and then exporting the desired data as SQL statements using the "script database" facility. You might then be able to transform those SQL statements to "fit" the design of the WordPress database. I'd be very skeptical that this would be easy; however, at some point it will take less time than manually recreating the stories in WordPress.