It is a brave soul indeed that still uses an all-MyISAM database.
Why are writes being quietly discarded instead of being queued, as they would be for a regular table lock?
Look at the error message again:
[20-Nov-2014 01:41:56 UTC] Wordpress database error The MySQL server is running
with the --read-only option so it cannot execute this statement for query
INSERT INTO wp_options
...
WordPress is reporting this as the error, but it is not a failure on the part of MySQL. You turned on read_only, and that is what WordPress sees.
And more importantly, how can I fix that so they DO get queued and are eventually run in the order they were requested? :)
Just remove read_only and take a read lock instead:
FLUSH TABLES WITH READ LOCK;
system time cp -pdRu /mysql_data_dir/thedb /backup_dir/thedb
UNLOCK TABLES;
The DB Connections should start to queue up all INSERTs, UPDATEs, and DELETEs.
You said something far more profound in the question:
Both reads AND writes need to be possible during the backup. But the backup time is very short, so it's fine for the writes to be queued until the "unlock tables" command.
SUGGESTION #1
You need to convert all your tables to InnoDB. Reads and writes can continue and queue up just fine without FLUSH TABLES WITH READ LOCK. You would dump the database like this:
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
MYSQLDUMP_OPTIONS="--single-transaction --routines --triggers"
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} thedb > thedb.sql
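When it's time to restore, the dump replays through the ordinary mysql client. Here is a dry-run sketch: the run helper only prints each command, so the sequence can be read without a live server, and the credentials are the same placeholders as above.

```shell
# Dry-run helper: print the command instead of executing it.
run() { echo "+ $*"; }

MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"

# Recreate the database if needed, then replay the dump into it.
run mysql ${MYSQL_CONN} -e "CREATE DATABASE IF NOT EXISTS thedb"
run mysql ${MYSQL_CONN} thedb "<" thedb.sql
```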
SUGGESTION #2
If you want everything to remain MyISAM, you should get another DB server and set up MySQL Replication. All your writes would happen on the Master. On the Slave, you would do the backups like this:
STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
system time cp -pdRu /mysql_data_dir/thedb /backup_dir/thedb
UNLOCK TABLES;
START SLAVE;
You can also split your reads between Master and Slave, or do all reads on the Slave.
EPILOGUE
If you go with SUGGESTION #1, convert everything to InnoDB with this script:
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
MYISAM_TO_INNODB_CONVERSION_SCRIPT=/tmp/ConvertMyISAMToInnoDB.sql
SQL="SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=InnoDB;')"
SQL="${SQL} FROM information_schema.tables WHERE engine='MyISAM' AND"
SQL="${SQL} table_schema NOT IN ('information_schema','mysql','performance_schema')"
SQL="${SQL} ORDER BY (data_length+index_length)"
mysql ${MYSQL_CONN} -ANe"${SQL}" > ${MYISAM_TO_INNODB_CONVERSION_SCRIPT}
mysql ${MYSQL_CONN} < ${MYISAM_TO_INNODB_CONVERSION_SCRIPT}
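To verify the conversion finished, you can count the MyISAM tables left outside the system schemas, using the same filters the conversion script uses. This is a sketch; the commented mysql line shows how to run the query, and the count should come back 0 once everything is InnoDB.

```shell
# Build a verification query using the same schema filters as the conversion script.
SQL="SELECT COUNT(*) FROM information_schema.tables WHERE engine='MyISAM' AND"
SQL="${SQL} table_schema NOT IN ('information_schema','mysql','performance_schema')"
# Run it the same way the conversion script runs its SQL; 0 means no MyISAM remains:
#   mysql ${MYSQL_CONN} -ANe"${SQL}"
echo "${SQL}"
```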
GIVE IT A TRY !!!
UPDATE 2014-11-21 10:52 EST
In your comment, you just asked
Is there any major reason to NOT install a 2nd MySQL instance (the slave) on the same server?
There is a huge reason why you do not want to set up a 2nd MySQL instance on the same box to do replication:
- Everything is MyISAM
- MyISAM only caches indexes, never data
- MyISAM defers caching data to the OS
- Master and Slave caching the same data in two different DB means twice as much data for the OS to cache
- See my other reasons in my old post Is it unwise to run replication on same physical server?
If you convert everything to InnoDB (which caches data and indexes, and has failsafes for crash recovery and transactions), then, yes, you could set up a 2nd MySQL instance for replication on the same server (provided you have sufficient RAM and disk space for the 2nd instance).
Edited to answer the question related to the \ddp command, not the \dp command, as @personne3000 pointed out in the comment below.
You probably want to use:
ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA kpi REVOKE EXECUTE ON FUNCTIONS FROM intranet2;
This is because postgres is the role whose default privileges grant EXECUTE on functions in the schema kpi to intranet2, as shown by the ACL entry intranet2=X/postgres.
I created a small example to illustrate what's going on.
pgsql@[local]:5432:pgsql:=# CREATE ROLE bob;
CREATE ROLE
Time: 0.526 ms
pgsql@[local]:5432:pgsql:=# CREATE SCHEMA we_like_bob;
CREATE SCHEMA
Time: 0.608 ms
pgsql@[local]:5432:pgsql:=# ALTER DEFAULT PRIVILEGES FOR ROLE bob IN SCHEMA we_like_bob GRANT EXECUTE ON FUNCTIONS TO pgsql;
ALTER DEFAULT PRIVILEGES
Time: 1.480 ms
pgsql@[local]:5432:pgsql:=# \ddp
Default access privileges
Owner | Schema | Type | Access privileges
-------+-------------+----------+-------------------
bob | we_like_bob | function | pgsql=X/bob
(1 row)
User pgsql has the execute privilege, X, granted by bob.
Try to revoke the execute privilege for the role pgsql, as pgsql (a superuser).
pgsql@[local]:5432:pgsql:=# ALTER DEFAULT PRIVILEGES FOR ROLE pgsql IN SCHEMA we_like_bob REVOKE EXECUTE ON FUNCTIONS FROM pgsql;
ALTER DEFAULT PRIVILEGES
Time: 0.176 ms
pgsql@[local]:5432:pgsql:=# \ddp
Default access privileges
Owner | Schema | Type | Access privileges
-------+-------------+----------+-------------------
bob | we_like_bob | function | pgsql=X/bob
(1 row)
That didn't work, because the command is not removing the bob role's default grant of EXECUTE to pgsql; it is trying to remove a default privilege granted by pgsql to pgsql, which does not exist.
If we change it to revoke the default EXECUTE privilege granted by the bob role to the role pgsql, then it works.
pgsql@[local]:5432:pgsql:=# ALTER DEFAULT PRIVILEGES FOR ROLE bob IN SCHEMA we_like_bob REVOKE EXECUTE ON FUNCTIONS FROM pgsql;
ALTER DEFAULT PRIVILEGES
Time: 0.644 ms
pgsql@[local]:5432:pgsql:=# \ddp
Default access privileges
Owner | Schema | Type | Access privileges
-------+--------+------+-------------------
(0 rows)
pgsql@[local]:5432:pgsql:=#
Best Answer
I would choose Option 2
Option 2 could also be written as
That way, you only have to run
SET GLOBAL read_only = 0;
to get writes going again. FLUSH TABLES WITH READ LOCK does not halt InnoDB writes to Transaction Logs, Redo Logs, and Undo Logs. That's why
SET GLOBAL read_only = 1;
is your friend.
CAVEAT
In my early days as a DBA, I used to do this:
When I was done, I did this:
Then, I discovered SUPER.
EPILOGUE
Go with Option 2 as is, or change
FLUSH TABLES WITH READ LOCK;
to
FLUSH TABLES;
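For reference, here is the read_only-based backup sequence end to end, as a dry-run sketch: the run helper only prints each command, paths and credentials are placeholders, and (per the caveat above) accounts holding SUPER can still write while read_only = 1.

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

MYSQL_CONN="-uroot -prootpassword"

run mysql ${MYSQL_CONN} -e "SET GLOBAL read_only = 1;"  # block writes from non-SUPER accounts
run time cp -pdRu /mysql_data_dir/thedb /backup_dir/thedb
run mysql ${MYSQL_CONN} -e "SET GLOBAL read_only = 0;"  # writes flow again
```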