I tend to shy away from the FEDERATED storage engine for three(3) reasons:
- It only supports MyISAM
- DDL performed on the source table requires manually changing the FEDERATED table definition on the external servers.
- Bulk operations against a FEDERATED table can become an instant nightmare!
What to do?
One possibility is to mysqldump the tables and merge them yourself.
EXAMPLE
Assuming:
- All tables have identical structures
- No table uses AUTO_INCREMENT IDs
- All tables are InnoDB
On the server that will combine the three tables, create four(4) import databases (a one-time operation):
CREATE DATABASE ImportDB1;
CREATE DATABASE ImportDB2;
CREATE DATABASE ImportDB3;
CREATE DATABASE ImportCombined;
Import Process Goes Something Like This
mysqldump the three(3) tables in parallel:
mysqldump -hHOSTIP1 -Pportnumber1 -u... -p... dbname tbname > /root/ImportTable1.sql &
mysqldump -hHOSTIP2 -Pportnumber2 -u... -p... dbname tbname > /root/ImportTable2.sql &
mysqldump -hHOSTIP3 -Pportnumber3 -u... -p... dbname tbname > /root/ImportTable3.sql &
wait
Load the mysqldumps into the separate import databases in parallel:
mysql -hHOSTIP4 -u... -p... -A -DImportDB1 < /root/ImportTable1.sql &
mysql -hHOSTIP4 -u... -p... -A -DImportDB2 < /root/ImportTable2.sql &
mysql -hHOSTIP4 -u... -p... -A -DImportDB3 < /root/ImportTable3.sql &
wait
rm -f /root/ImportTable[123].sql
Next, combine the data.
DROP TABLE IF EXISTS ImportCombined.tbname;
CREATE TABLE ImportCombined.tbname LIKE ImportDB1.tbname;
ALTER TABLE ImportCombined.tbname DISABLE KEYS;
INSERT INTO ImportCombined.tbname SELECT * FROM ImportDB1.tbname;
INSERT INTO ImportCombined.tbname SELECT * FROM ImportDB2.tbname;
INSERT INTO ImportCombined.tbname SELECT * FROM ImportDB3.tbname;
ALTER TABLE ImportCombined.tbname ENABLE KEYS;
(Note: DISABLE KEYS / ENABLE KEYS only affects nonunique indexes on MyISAM tables; against InnoDB, as assumed here, both statements are no-ops that just raise a warning, so you can keep or drop them.)
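As a quick sanity check (using this example's names), verify that the combined row count equals the sum of the three imports before going further:

SELECT
    (SELECT COUNT(*) FROM ImportDB1.tbname)
  + (SELECT COUNT(*) FROM ImportDB2.tbname)
  + (SELECT COUNT(*) FROM ImportDB3.tbname) AS expected_rows,
    (SELECT COUNT(*) FROM ImportCombined.tbname) AS actual_rows;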
Now, perform the task you already have in place against the ImportCombined.tbname table.
"You fill in these steps"
Then, when the table is ready to be exported, mysqldump it and copy it twice:
mysqldump -hHOSTIP4 -Pportnumber4 -u... -p... ImportCombined tbname > /root/ExportTable1.sql
cp /root/ExportTable1.sql /root/ExportTable2.sql
cp /root/ExportTable1.sql /root/ExportTable3.sql
mysql -hHOSTIP1 -Pportnumber1 -u... -p... -A -Ddbname < /root/ExportTable1.sql &
mysql -hHOSTIP2 -Pportnumber2 -u... -p... -A -Ddbname < /root/ExportTable2.sql &
mysql -hHOSTIP3 -Pportnumber3 -u... -p... -A -Ddbname < /root/ExportTable3.sql &
wait
rm -f /root/ExportTable[123].sql
As these steps show, all of this should be shell scripted and perhaps crontab'd.
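For example, the whole import/combine/export cycle could live in one script (the script name and schedule below are made up for illustration):

# hypothetical crontab entry: run /root/merge_tables.sh (holding the steps above) nightly at 02:00
0 2 * * * /root/merge_tables.sh >> /root/merge_tables.log 2>&1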
Here is another maneuver: if the tables are MyISAM, you could copy the table files (.frm, .MYD, and .MYI for each table) into other folders (which mysqld sees as databases) instead of doing mysqldumps, and perform similar operations as mentioned before. Make sure no writes to the tables take place during the copy; see the sketch below.
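A sketch of that copy, assuming the datadir is /var/lib/mysql (adjust to yours) and reusing this example's names. The read lock only holds while the session that took it stays open, so keep the mysql session running for the whole copy. In one mysql session on the source server:

FLUSH TABLES WITH READ LOCK;

From a shell, while that session stays open (scp to HOSTIP4 is just one way to move the files):

scp /var/lib/mysql/dbname/tbname.frm root@HOSTIP4:/var/lib/mysql/ImportDB1/
scp /var/lib/mysql/dbname/tbname.MYD root@HOSTIP4:/var/lib/mysql/ImportDB1/
scp /var/lib/mysql/dbname/tbname.MYI root@HOSTIP4:/var/lib/mysql/ImportDB1/

Back in the mysql session:

UNLOCK TABLES;

Then, on HOSTIP4, fix ownership so mysqld can use the files:

chown mysql:mysql /var/lib/mysql/ImportDB1/tbname.*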
I am against writing to both Masters in a dual-master setup. There are too many things that can go wrong, and they can be messy to fix -- AUTO_INCREMENTs, other duplicate keys, etc.
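For what it's worth, the usual partial mitigation for the AUTO_INCREMENT half of that mess is to stagger the counters in each Master's my.cnf (a sketch; two Masters assumed):

# Master 1 generates 1, 3, 5, ...
auto_increment_increment = 2
auto_increment_offset    = 1

# Master 2 generates 2, 4, 6, ...
auto_increment_increment = 2
auto_increment_offset    = 2

That only covers AUTO_INCREMENT collisions, not the other duplicate-key and conflict problems, so writing to a single Master remains the safer pattern.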
Hundreds of "Sleep" connections have virtually no impact on a server, so limiting connections to 40 is not useful. Ten or more active (non-Sleep) connections can be an issue; in that case I would look at the queries. Usually optimizing the queries is the best answer.
Note also, that every write (INSERT, UPDATE, etc) that is done on one Master must be done on all the other Masters and Slaves. So, you can't really "spread" writes around.
If you have processes that do only reads (SELECTs), then they should go to Slaves and/or the backup Master, not the live, writable, Master. This will help.
Be aware of the "critical write" problem. Example: A user posts a blog comment, then looks at his comments, but it is missing. This can happen if the write went to one machine, but the read hit another, and replication is "behind".
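One way to make that concrete: before sending such a read to a Slave, check how far behind it is. A sketch (host and credentials assumed); Seconds_Behind_Master is 0 when the Slave is caught up and NULL when replication is broken:

mysql -hSLAVEIP -u... -p... -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master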
(My comments apply to all versions, and all APIs, not just 5.1 and PHP's mysqli.)
I stay away from mysql_pconnect (and other connection pooling mechanisms). Connection startup/teardown is very fast in MySQL. Pooled connections may have issues with @variables, transaction modes, sql_modes, etc.
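To see why, consider how much per-session state a connection silently carries; a pooled connection hands all of this, unreset, to whoever grabs it next (values here are purely illustrative):

SET @last_id = 42;                     -- user variable, lives until the session ends
SET SESSION sql_mode = 'ANSI_QUOTES'; -- session-level sql_mode override
SET autocommit = 0;                   -- autocommit off: the next write opens a transaction that may never be committed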
Best Answer
The best answer to this question depends on the context of your request. You can do this in three different ways:
If you are doing this from time to time on a dev/test machine (and I certainly hope you're not doing it on an important/production one!), you can afford to make global changes, and you could:
1. Edit the database interface file inside the "libraries" folder as described here and put the line

SET AUTOCOMMIT = 0;

in the postConnect function. This will affect every connection made from that instance of phpMyAdmin - you may or may not want this, depending on whether you also use that machine's phpMyAdmin for running stuff on prod.
2. Put the line

autocommit = 0

in your my.cnf/my.ini to set the variable server-wide. This means that every connection to that server will no longer commit automatically (described here). Again, you may or may not want to do this.
3. Finally, if this is something that you'll want to do only from time to time - i.e. experiment at lunchtime/whatever - you can issue a

BEGIN [WORK]

before you issue your SQL statements (these are aliases of START TRANSACTION - see here, and the short example after this answer). This has the advantage of not affecting the configuration of either phpMyAdmin or the MySQL server, BUT has the disadvantage that the dev (i.e. you) has to remember to put it in each time. NOT advisable on a production machine, but it means that you can use your db tool on both your work systems and your experimental ones without worry!
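For example, a lunchtime experiment under the third option might look like this (table and column names invented for illustration):

BEGIN WORK;
UPDATE staff SET salary = salary * 2 WHERE id = 1;
SELECT salary FROM staff WHERE id = 1; -- inspect the result
ROLLBACK;                              -- nothing stuck; use COMMIT instead if you want to keep it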
The choice is yours!