MySQL – InnoDB cluster setup error

Tags: mysql, mysql-innodb-cluster

Reframed:

Here are the steps I followed, but they result in an error:

  1. I am connected to my primary node (10.0.0.4) and checked the configuration of all the nodes as follows:

MySQL localhost JS > dba.configureInstance('clusteruser@10.0.0.4')
Please provide the password for 'clusteruser@10.0.0.4': ***********
Save password for 'clusteruser@10.0.0.4'? [Y]es/[N]o/Ne[v]er (default No): N
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster…

This instance reports its own address as 10.0.0.4

The instance '10.0.0.4:3306' is valid for InnoDB cluster usage.
The instance '10.0.0.4:3306' is already ready for InnoDB cluster usage.

MySQL localhost JS > dba.configureInstance('clusteruser@10.0.0.5')
Please provide the password for 'clusteruser@10.0.0.5': ***********
Save password for 'clusteruser@10.0.0.5'? [Y]es/[N]o/Ne[v]er (default No): N
Configuring MySQL instance at 10.0.0.5:3306 for use in an InnoDB cluster…

This instance reports its own address as 10.0.0.5

The instance '10.0.0.5:3306' is valid for InnoDB cluster usage.
The instance '10.0.0.5:3306' is already ready for InnoDB cluster usage.

MySQL localhost JS > dba.configureInstance('clusteruser@10.0.0.6')
Please provide the password for 'clusteruser@10.0.0.6': ***********
Save password for 'clusteruser@10.0.0.6'? [Y]es/[N]o/Ne[v]er (default No): N
Configuring MySQL instance at 10.0.0.6:3306 for use in an InnoDB cluster…

This instance reports its own address as 10.0.0.6

The instance '10.0.0.6:3306' is valid for InnoDB cluster usage.
The instance '10.0.0.6:3306' is already ready for InnoDB cluster usage.

  2. I create the cluster on the primary node using the command below (note that all three nodes are in the same subnet, i.e., 10.0.0.0/24):

    MySQL 10.0.0.4:3306 ssl JS > cluster=dba.createCluster('TestCluster', {ipWhitelist:"10.0.0.0/24", localAddress:"10.0.0.4"})
    A new InnoDB cluster will be created on instance 'clusteruser@10.0.0.4:3306'.

Validating instance at 10.0.0.4:3306…

This instance reports its own address as 10.0.0.4

Instance configuration is suitable.
Creating InnoDB cluster 'TestCluster' on 'clusteruser@10.0.0.4:3306'…
WARNING: On instance '10.0.0.4:3306' membership change cannot be persisted since MySQL version 5.7.25 does not support the SET PERSIST command (MySQL version >= 8.0.11 required). Please use the .configureLocalInstance command locally to persist the changes.
Adding Seed Instance…

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
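
Since these members run MySQL 5.7.25, the warning above applies: the Group Replication settings (including any whitelist) are not persisted automatically. A minimal sketch of persisting them with the command the warning names, dba.configureLocalInstance(), run in MySQL Shell on each member itself (the option-file path here is an assumption; point it at the real my.cnf):

    // Run locally on the member whose settings should be persisted,
    // then repeat on the other members after they are added.
    dba.configureLocalInstance('clusteruser@10.0.0.4:3306', {mycnfPath: "/etc/my.cnf"})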

  3. The cluster.status() output is all good:

MySQL 10.0.0.4:3306 ssl JS > cluster.status()
{
    "clusterName": "TestCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "10.0.0.4:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "10.0.0.4:3306": {
                "address": "10.0.0.4:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "10.0.0.4:3306"
}

  4. Now, on running the addInstance command, it gives the below error:

MySQL 10.0.0.4:3306 ssl JS > cluster.addInstance('clusteruser@10.0.0.5', {ipWhitelist:"10.0.0.0/24",localAddress:"10.0.0.5:33061"})
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster …

Please provide the password for 'clusteruser@10.0.0.5': ***********
Save password for 'clusteruser@10.0.0.5'? [Y]es/[N]o/Ne[v]er (default No): N
Validating instance at 10.0.0.5:3306…

This instance reports its own address as 10.0.0.5

Instance configuration is suitable.
Cluster.addInstance: WARNING: Not running locally on the server and can not access its error log.
ERROR:
Group Replication join failed.
ERROR: Error joining instance to cluster: '10.0.0.5:3306' – Query failed. MySQL Error (3092): ClassicSession.query: The server is not configured properly to be an active member of the group. Please see more details on error log.. Query: START group_replication: MySQL Error (3092): ClassicSession.query: The server is not configured properly to be an active member of the group. Please see more details on error log. (RuntimeError)

  • The error log on the primary node [Updated]:

    2019-03-20T11:39:41.616437Z 0 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=final1-relay-bin' to avoid this problem.
    2019-03-20T11:39:41.629685Z 0 [Note] Failed to start slave threads for channel ''
    2019-03-20T11:39:41.635012Z 0 [Note] Event Scheduler: Loaded 0 events
    2019-03-20T11:39:41.635158Z 0 [Note] /usr/sbin/mysqld: ready for connections.
    Version: '5.7.25-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL)
    2019-03-20T12:06:00.903700Z 11 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
    2019-03-20T12:06:00.903837Z 11 [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 10.0.0.4/32,127.0.0.1/8 to the whitelist'
    2019-03-20T12:06:00.903992Z 11 [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
    2019-03-20T12:06:00.904064Z 11 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
    2019-03-20T12:06:00.904078Z 11 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "fc723a1e-4a69-11e9-a59b-42010a000004"; group_replication_local_address: "10.0.0.4:33061"; group_replication_group_seeds: ""; group_replication_bootstrap_group: true; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
    2019-03-20T12:06:00.904113Z 11 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
    2019-03-20T12:06:00.904117Z 11 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
    2019-03-20T12:06:00.904140Z 11 [Note] Plugin group_replication reported: 'Member configuration: member_id: 100; member_uuid: "0b0e0a29-4a5b-11e9-a286-42010a000004"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
    2019-03-20T12:06:00.904623Z 13 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
    2019-03-20T12:06:00.953566Z 16 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './final1-relay-bin-group_replication_applier.000002' position: 1366
    2019-03-20T12:06:00.953909Z 11 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
    2019-03-20T12:06:00.977254Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
    2019-03-20T12:06:00.977284Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 33061'
    2019-03-20T12:06:01.979636Z 19 [Note] Plugin group_replication reported: 'Only one server alive. Declaring this server as online within the replication group'
    2019-03-20T12:06:01.979758Z 0 [Note] Plugin group_replication reported: 'Group membership changed to 10.0.0.4:3306 on view 15530835619791601:1.'
    2019-03-20T12:06:01.982410Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
    2019-03-20T12:06:01.982482Z 0 [Note] Plugin group_replication reported: 'A new primary with address 10.0.0.4:3306 was elected, enabling conflict detection until the new primary applies all relay logs.'
    2019-03-20T12:06:01.982520Z 21 [Note] Plugin group_replication reported: 'This server is working as primary member.'
    2019-03-20T12:07:38.755098Z 13 [Note] Plugin group_replication reported: 'Primary had applied all relay logs, disabled conflict detection'
    2019-03-20T12:07:39.225967Z 0 [Warning] Plugin group_replication reported: '[GCS] Connection attempt from IP address 10.0.0.5 refused. Address is not in the IP whitelist.'
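
The last line shows the join from 10.0.0.5 being refused, and the earlier GCS lines show group_replication_ip_whitelist: "AUTOMATIC" (only 10.0.0.4/32 and localhost) on the primary, even though ipWhitelist was passed to createCluster() earlier. To double-check the value actually in effect, something like the following can be run from the same MySQL Shell session (a sketch; session.runSql() issues raw SQL over the current connection):

    // Inspect the whitelist and GR local address currently in effect on the primary.
    session.runSql("SELECT @@group_replication_ip_whitelist, @@group_replication_local_address")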

my.cnf:

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

report_host=10.x.x.1
server_id=100
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
group_replication_local_address="10.x.x.1:33061"
transaction_write_set_extraction=XXHASH64

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Best Answer

Is SELinux disabled on the instance you're trying to add to the cluster? And is the error log you're checking from the instance you're trying to add?

Also, please use the dba.checkInstanceConfiguration() command to verify that your instances are ready to be used in an InnoDB cluster.
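
For example, a quick sketch against the hosts from the question (repeat for each instance you intend to add):

    // Reports whether the instance's configuration is valid for InnoDB cluster
    // usage and lists any settings that still need to be changed.
    dba.checkInstanceConfiguration('clusteruser@10.0.0.5:3306')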

Note that you can also use InnoDB cluster without disabling SELinux: https://dev.mysql.com/doc/refman/8.0/en/group-replication-frequently-asked-questions.html

To change the value of group_replication_local_address, as mentioned in the link above, use the following option on dba.createCluster() / addInstance():

- localAddress
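
For instance, a sketch reusing the addresses from the question (33061 being the Group Replication port, not the MySQL port):

    // On the seed member:
    var cluster = dba.createCluster('TestCluster', {localAddress: "10.0.0.4:33061", ipWhitelist: "10.0.0.0/24"})
    // And for each member being added:
    cluster.addInstance('clusteruser@10.0.0.5:3306', {localAddress: "10.0.0.5:33061", ipWhitelist: "10.0.0.0/24"})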

To consult the available options of the AdminAPI commands you can use the online help:

mysql-js> \? createCluster
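
The same help is also available on the objects themselves, e.g. (assuming the cluster variable created above):

    mysql-js> cluster.help('addInstance')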

Cheers,

Miguel