ASPECT #1 : Replication
I don't think that
replicate-wild-do-table = db_backup.%
replicate-rewrite-db = db->db_backup
belong together.
Other people have wondered about this as well.
The problem stems from the order in which replication rules are processed. According to the MySQL Documentation on Replication Rules:
If any --replicate-rewrite-db options were specified, they are applied before the --replicate-* filtering rules are tested.
Even the MySQL Documentation on replicate-rewrite-db says:
The database name translation is done before the --replicate-* rules are tested.
The replicate-wild-do-table rule is enforced after the rewrite. It would not be surprising if this ordering somehow allowed an INSERT into a table that already has data.
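To make the ordering concrete, here is a sketch of the two options together in my.cnf (the db and db_backup names follow the rules quoted above):

```ini
[mysqld]
# Step 1: the rewrite is applied first, so statements against db
# are redirected to db_backup ...
replicate-rewrite-db = db->db_backup
# Step 2: ... and only then is this filter tested. It now matches,
# because the statements already target db_backup.%
replicate-wild-do-table = db_backup.%
```

Because the rewrite runs first, events you may have expected the filter to exclude can pass the wild-do-table test and be applied to tables that already hold data.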
You are probably asking: how did the data get there?
ASPECT #2 : mysqldump
Doing mysqldump --single-transaction would seem to be the greatest way to get point-in-time dumps of data. Unfortunately, mysqldump --single-transaction has an Achilles' heel: ALTER TABLE. If a table is subject to any ALTER TABLE commands, or to a DROP TABLE and CREATE TABLE, they can break the integrity of the transaction in which mysqldump was trying to do the dump. Truncating a table (which is DDL in the MySQL Universe) and dropping and adding indexes can be just as disruptive.
You can find more information on that in the MySQL Performance Blog's Best kept MySQLDump Secret. I actually addressed this point in a past question describing 12 commands that can break the integrity of a mysqldump's transaction: MySQL backup InnoDB
CAVEAT
EPILOGUE
One or both of these aspects may have contributed to letting a row slip in during the mysqldump that should not have existed, due either to the rewrite rules or to the isolation of the mysqldump being overridden.
SUGGESTIONS
I would do a mysqlbinlog dump of all the relay logs since the start of the mysqldump to see all the INSERTs the Slave will process, and check whether those rows already exist on the Slave. If they do, you could do one of two things:
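As a sketch, assuming the relay logs sit in the data directory under the default relay-log basename (adjust file names to your setup):

```bash
# Decode the relay logs accumulated since the dump started and pull
# out the INSERT statements the Slave SQL thread will apply.
# grep is only a rough filter here; multi-line or row-based events
# need closer inspection of the full mysqlbinlog output.
mysqlbinlog relay-bin.000001 relay-bin.000002 | grep -i '^INSERT' > pending_inserts.sql
```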
1 : Skip all the Duplicate Key errors
Simply add this to my.cnf on the Slave
[mysqld]
slave-skip-errors=1062
skip-slave-start
and restart mysql. Then run START SLAVE; and all the duplicate-key errors will be bypassed. When Seconds_Behind_Master reaches 0, remove those lines and restart mysql.
2 : Download the Percona tools
The tools you need are
Use these to find the differences on the Slave, and then correct them.
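The original list of tools did not survive here; based on the task described (finding and fixing differences on a Slave), the tools in question are most likely pt-table-checksum and pt-table-sync from the Percona Toolkit. A typical invocation looks like this (host names and credentials are placeholders):

```bash
# Checksum all tables via the Master and record results so they
# replicate down and can be compared on the Slave.
pt-table-checksum --replicate=percona.checksums h=master_host,u=dba,p=secret

# Print (or, with --execute instead of --print, apply) the statements
# needed to bring the Slave back in sync with the Master.
pt-table-sync --print --replicate=percona.checksums h=master_host,u=dba,p=secret
```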
the person who set it up left the password the same as the password
for the super user account on our other server
If the other server is the master, this is normal, because having different passwords in the replicated database is not possible.
PostgreSQL replicates everything from the master, including user accounts and their passwords. There's no way to divert that temporarily.
If someone needs to be a superuser on the secondary without knowing this password, a different authentication method should be used on this server, such as certificate authentication or an external method (LDAP, RADIUS...). Being a configuration file, pg_hba.conf can be different on the slave.
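As a sketch, a pg_hba.conf entry on the standby that authenticates a superuser with a client certificate instead of the replicated password might look like this (the user name and network are placeholders):

```
# TYPE    DATABASE  USER    ADDRESS       METHOD
hostssl   all       admin   10.0.0.0/24   cert
```

The cert method requires an SSL connection with a valid client certificate whose CN matches the user name, so the replicated password is never consulted.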
Best Answer
Here are the grants you just mentioned
Based on this, you should not be able to write to the Slave, because SUPER is a system-level privilege (SUPER appears only in mysql.user, as Super_priv) and not a DB-level privilege (SUPER does not appear in mysql.db). The first GRANT lacks the SUPER privilege; the second GRANT has no SUPER privilege context at all.
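To illustrate the distinction (the user names here are hypothetical, not the grants from the question), SUPER can only be granted globally, never on a single database:

```sql
-- DB-level grant: recorded in mysql.db; cannot carry SUPER
GRANT ALL PRIVILEGES ON mydb.* TO 'app'@'localhost';

-- Global grant: recorded in mysql.user as Super_priv = 'Y'
GRANT SUPER ON *.* TO 'dba'@'localhost';

-- This fails (ERROR 1221): SUPER is not a DB-level privilege
-- GRANT SUPER ON mydb.* TO 'app'@'localhost';
```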
Having the SUPER privilege carries a lot of firepower because you can run the following
Not every user needs this. Giving the SUPER privilege to just anyone can actually hamper a DBA from logging into mysql when max_connections is reached, because any user holding SUPER may occupy the one extra connection reserved for administrative access.
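The original list of commands did not survive here; as illustrative examples (not necessarily the author's exact list), statements that require SUPER include:

```sql
SET GLOBAL max_connections = 500;          -- change global server variables
KILL 12345;                                -- kill another user's connection or query
STOP SLAVE;                                -- control the replication threads
PURGE BINARY LOGS TO 'mysql-bin.000010';   -- remove binary logs on the Master
```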