The simple answer is No.
In MySQL replication, the Master copies the binary log events to the Slaves, and after that its work is over. The Slave then replays those events and executes them, so there is no performance impact on the Master.
There might be a scenario where you are using fully synchronous replication, in which the Master waits for the Slave to execute the query. Even then it won't impact the Master's performance in terms of memory or CPU; the Master will simply wait for the query to be executed.
Also, for your second question, Phil already answered it: ssh encrypts the data it sends, which uses a lot of CPU, so if you want to avoid that overhead, use the other methods Phil described.
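As a side note, if you want to confirm whether (semi-)synchronous replication is actually in play, you can check from the mysql client; this is a minimal sketch assuming MySQL's semisync plugin naming (with plain asynchronous replication these variables simply won't exist):

```sql
-- On the Master: is the semisynchronous plugin enabled?
SHOW VARIABLES LIKE 'rpl_semi_sync_master_enabled';

-- How long the Master waits for a Slave acknowledgement (in ms)
-- before falling back to asynchronous replication
SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';

-- On a Slave: check replication lag (look at Seconds_Behind_Master)
SHOW SLAVE STATUS\G
```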
Here are some things you should consider:
CONSIDERATION #1
Using DELETE FROM tbl ... LIMIT 5000 is not replication-safe because the 5000 rows being deleted may not be the same set of rows, or deleted in the same order, on a Slave. See the MySQL Documentation (Replication and LIMIT). Should a DELETE on the Master get interrupted, it has to roll back. In that event, it is remotely possible that the number of times DELETE FROM tbl ... LIMIT 5000 gets called will not be the same on every server. Extra calls of the DELETE on a Slave are not critical; too few calls could be bad. This should not happen.
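One way to make such a statement deterministic, and therefore replication-safe, is to add an ORDER BY on a unique key so every server deletes the same 5000 rows; this is a sketch assuming the table has a unique (e.g. auto-increment primary) key column named id, which is an assumption on my part:

```sql
-- Deterministic delete: all servers remove the same rows in the same order,
-- because `id` (assumed unique) fixes the ordering
DELETE FROM tbl
WHERE recordDate BETWEEN '2015-01-01' AND '2015-06-30'
ORDER BY id
LIMIT 5000;
```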
SUGGESTION : Make sure the row count for RTesAll is identical on the Master and all Slaves.
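A quick way to check, run on the Master and on each Slave and compare the results:

```sql
-- Compare this count across the Master and every Slave
SELECT COUNT(*) FROM RTesAll;

-- Optional deeper check: compares actual row contents, not just counts
CHECKSUM TABLE RTesAll;
```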
CONSIDERATION #2
Since you are running a Stored Procedure on the Master, the DELETE command will not begin executing on a Slave until the DELETE has completed on the Master and been posted in the Master's Binary Logs. With one Master and 4 Slaves (5 DB Servers), every call to DELETE is effectively executed 5 times.
SUGGESTION #1 : You should add SET sql_log_bin = 0; to the Stored Procedure
DROP PROCEDURE IF EXISTS PurgeOlderData;
DELIMITER $$
CREATE PROCEDURE `PurgeOlderData`(IN StartDate DATETIME, IN EndDate DATETIME, IN NoOfRecordsToDelete BIGINT, IN TableName CHAR(50))
BEGIN
    SET sql_log_bin = 0;
    SET @delete_counter = 0;
    SET @table_name = TableName;
    SET @number_of_records_to_delete = NoOfRecordsToDelete;
    SET @start_date = StartDate;
    SET @end_date = EndDate;
    WHILE @delete_counter < @number_of_records_to_delete DO
        SET @varSQL = CONCAT('DELETE FROM ', @table_name,' WHERE recordDate BETWEEN \'',@start_date ,'\' AND \'', @end_date ,'\' LIMIT 5000;');
        PREPARE stmt FROM @varSQL;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
        /*SELECT @varSQL;*/
        SET @delete_counter = @delete_counter + 5000;
    END WHILE;
    SET sql_log_bin = 1;
END $$
DELIMITER ;
to prevent all the DELETEs from replicating. If you are deleting 611992998 rows, removing them 5000 at a time requires executing DELETE ... LIMIT 5000 a total of 122399 times (that's 611992998 / 5000 rounded up to the next integer). This will also prevent having to write to the binlogs 122399 times.
SUGGESTION #2 : Run the Stored Procedure on the Master and Slaves at the same time.
- This prevents replication lag
- This gets rid of all the rows at the same time
- This is best done during maintenance cycles or slow read times
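To run it on each server, log in to each of the 5 DB Servers and invoke the procedure directly; the dates below are hypothetical placeholders, so substitute your actual range:

```sql
-- Run this on the Master and on each of the 4 Slaves
-- (the date range here is a made-up example)
CALL PurgeOlderData('2015-01-01 00:00:00',
                    '2015-06-30 23:59:59',
                    611992998,
                    'RTesAll');
```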
CONSIDERATION #3
Make sure autocommit is enabled on the Master and Slaves. Otherwise, all the deletes will pile up in the undo logs inside ibdata1, and will either be committed all at once as one giant transaction or produce a huge rollback.
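You can verify and, if needed, enable it for the session before calling the procedure:

```sql
-- Check the current setting (should be ON)
SHOW VARIABLES LIKE 'autocommit';

-- Enable it for this session before running the purge
SET autocommit = 1;
```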
Give it a Try !!!
Best Answer
This is often handled by the application actively choosing to read from the master based on context.
A common example I've seen mentioned is a database of blog posts. Viewing a blog post would obviously be done from a replica, but previewing an edit to a blog post (which may be the same code, since it's rendering the post) might be something you do by reading from the master.
Depending on the application, you might also read from the replica and then fall back to reading from the master if you don't find what you expect.
There are also solutions, like MaxScale (designed by MariaDB, but it works with MySQL too), that are essentially a proxy between your app and both MySQL servers. Whenever you are in a transaction, you're talking to the master; whenever you're not in a transaction and issue a SELECT, the queries go to the replica. So you can easily force a read from the master just by doing it in a transaction. There are other rules in play as well that send certain statements to both servers, such as USE and SET statements. If the replica lag goes beyond a certain threshold, MaxScale sends everything to the master.