I have very bad news for you.
You should not have deleted the ibdata1 file. Here is why:
ibdata1 contains four types of information:
- table metadata
- MVCC data
- data pages (when innodb_file_per_table is disabled)
- index pages (when innodb_file_per_table is disabled)
Each InnoDB table created has a numerical id assigned to it by an internal auto-incrementing counter in the data dictionary. That internal tablespace id (ITSID) is embedded in the .ibd file. That number is checked against the list of ITSIDs maintained, guess where, ... ibdata1.
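On MySQL 5.6 and later you can inspect these ids with plain SQL; the sketch below assumes the INFORMATION_SCHEMA.INNODB_SYS_TABLES view, which did not exist yet on the 5.1/5.5 servers this answer was written against:
-- list each InnoDB table with its internal tablespace id (SPACE);
-- SPACE = 0 is the system tablespace, i.e. ibdata1 itself
SELECT NAME, TABLE_ID, SPACE
FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
ORDER BY SPACE;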
I also have very good news for you along with some bad news.
It is possible to reconstruct ibdata1 to have the correct ITSIDs, but it takes work. While I have not personally performed this procedure alone, I assisted a client at my employer's web hosting company in doing it. We figured it out together, but since the client hosed ibdata1, I let him do most of the work (30 InnoDB tables).
Anyway, here is a past post I made on the DBA StackExchange, answering another question whose root cause was mixed-up ITSIDs.
To cut right to the chase, here is the article explaining what to do with the ITSID and how to massage ibdata1 into acknowledging the ITSID embedded in the .ibd file.
I am sorry, but there is no quick-and-dirty method for recovering the .ibd file other than playing games with ITSIDs.
UPDATE 2011-10-17 06:19 EDT
Here is your original innodb configuration from your question:
innodb_file_per_table=1
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
innodb_data_file_path=ibdata1:10M:autoextend
innodb_buffer_pool_size = 384M
innodb_log_file_size=5M
innodb_lock_wait_timeout = 18000
Please notice that innodb_log_file_size is there twice. Look carefully...
innodb_file_per_table=1
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G <----
innodb_buffer_pool_size=4G
innodb_data_file_path=ibdata1:10M:autoextend
innodb_buffer_pool_size = 384M
innodb_log_file_size=5M <----
innodb_lock_wait_timeout = 18000
The last setting of innodb_log_file_size takes precedence. MySQL expected to start up with 5M log files, but your ib_logfile0 and ib_logfile1 were 1G when you tried to start mysqld. It saw a size conflict and took the path of least resistance, which was to disable InnoDB. That's why InnoDB was missing from SHOW ENGINES. Mystery solved!!!
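After removing the duplicate lines from my.cnf and restarting, a quick sanity check (standard MySQL statements, just a sketch) confirms InnoDB is registered again and shows which value actually won:
-- InnoDB should be listed among the engines again
SHOW ENGINES;
-- the effective size; the last my.cnf setting is what you will see here
SELECT @@global.innodb_log_file_size / 1024 / 1024 AS log_file_size_mb;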
UPDATE 2011-10-17 11:07 EDT
The error message was deceptive because innodb_log_file_size was smaller than the actual log files (ib_logfile0 and ib_logfile1), which were 1G at the time. What's interesting is this: corruption was reported because the files were expected to be 5M but were bigger. If the situation were reversed, with the InnoDB log files smaller than the size declared in my.cnf, you would get something like this in the error log:
110216 9:48:41 InnoDB: Initializing buffer pool, size = 128.0M
110216 9:48:41 InnoDB: Completed initialization of buffer pool
InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
InnoDB: than specified in the .cnf file 0 33554432 bytes!
110216 9:48:41 [ERROR] Plugin 'InnoDB' init function returned error.
110216 9:48:41 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
In this example, the log files already existed at 5M and the innodb_log_file_size setting was bigger (in this case, 32M).
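For completeness, here is a sketch of the commonly used procedure for resizing the InnoDB log files on 5.1/5.5-era servers (file names are the defaults):
-- force a full flush/purge on the next shutdown so the old
-- redo logs contain nothing that still needs to be applied
SET GLOBAL innodb_fast_shutdown = 0;
-- then, outside of SQL:
--   1. stop mysqld
--   2. move ib_logfile0 and ib_logfile1 out of the datadir
--   3. set a single innodb_log_file_size in my.cnf
--   4. start mysqld; InnoDB recreates the logs at the new size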
For this particular question, I blame MySQL (er, Oracle [I still hate saying it]) for the inconsistent error-message protocol.
The provided error, Error 'Duplicate entry '411465' for key 1' on query, means that the slave read and attempted to execute a binary log event inserting a row that already existed, i.e. the same value 411465 for your primary key.
The most likely cause of this is that the insert was executed directly on the slave. To diagnose the query, use mysqlbinlog with the binary log coordinates from SHOW SLAVE STATUS. This will give you the server-id the query originated from, which will match either your main master or your 'passive' master.
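As a sketch of that detective work (the binary log file name and position below are placeholders, not values from your server):
-- on the slave: note Relay_Master_Log_File and Exec_Master_Log_Pos,
-- the master-binlog coordinates of the failing event
SHOW SLAVE STATUS\G
-- then, from the shell on the master, something like:
--   mysqlbinlog --start-position=<Exec_Master_Log_Pos> mysql-bin.000123
-- each event header printed includes "server id", which identifies
-- the server the statement originated from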
Once you determine the query, you can identify the row on the server that is throwing the slave error and decide on next steps. You can choose to:
- skip the entry using SET GLOBAL sql_slave_skip_counter=1 to proceed to the next binary log statement (see the sketch after this list)
- delete the specific row on the slave and start the slave so the statement replays cleanly from replication
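If you choose to skip, the usual sequence on the failing slave looks like this:
STOP SLAVE;
-- skip exactly one event from the relay log
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
-- confirm Slave_SQL_Running: Yes and that the error is gone
SHOW SLAVE STATUS\G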
However, you need to take steps to understand how the mismatch occurred, or you are going to run into this again. This will require some more detective work on your end using mysqlbinlog.
If, as you say, only one master is writeable at a time, you should ensure the following:
- the passive master is read_only=1 and your failover solution is able to modify read_only (see the sketch after this list)
- the user your application runs as (or any other non-trusted user) does not have the SUPER privilege; any user with SUPER can execute writes even on read_only=1 servers
- set up pt-table-checksum to verify the data is in sync on both servers
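A minimal sketch of the first two checks, assuming an application account named 'app'@'%' (the account name is a placeholder):
-- on the passive master
SET GLOBAL read_only = ON;
SELECT @@global.read_only;
-- SUPER (or its absence) will show up in the grants
SHOW GRANTS FOR 'app'@'%';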
It would be routed to the master, because SELECT ... FOR UPDATE is always part of a transaction, and according to the MaxScale 2.1 readwritesplit documentation, "all statements within an open transaction" are routed to the master.
You should be able to verify this with a query like this, which should consistently give you the hostname of the master:
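-- inside an explicit transaction, readwritesplit routes every
-- statement to the master, so this should return the master's hostname
START TRANSACTION;
SELECT @@hostname;
COMMIT;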