MySQL Cluster – How to Resolve Startup Problems

mysql-cluster

I have installed the necessary MySQL Cluster packages and I am now using the Auto Installer to configure the nodes. When I try to start the cluster in the Deploy Configuration step, all the nodes (1 management node, 4 data nodes) start correctly except the SQL node, and I get the error below.

Command `/usr/local/mysql/bin/mysqld --defaults-file=/home/debian/MySQL_Cluster/49/my.cnf', running on 192.168.120.107 exited with 1:
2016-04-03 16:32:49 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-04-03 16:32:49 0 [Note] /usr/local/mysql/bin/mysqld (mysqld 5.6.28-ndb-7.4.10-cluster-gpl) starting as process 11472 ..

(screenshot: Auto Installer view showing the node statuses)

As you can see, all the nodes are started. I then ran the command it recommended, adding the --explicit_defaults_for_timestamp option, as shown below.

 /usr/local/mysql/bin/mysqld --defaults-file=/home/debian/MySQL_Cluster/49/my.cnf  --explicit_defaults_for_timestamp 
2016-04-03 16:42:06 0 [Note] /usr/local/mysql/bin/mysqld (mysqld 5.6.28-ndb-7.4.10-cluster-gpl) starting as process 11503 ...
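
To avoid passing the option on the command line every time, it should also be possible to set it in the my.cnf the Auto Installer generated for the SQL node. The snippet below is only a sketch: the path and addresses are taken from this setup, and the other [mysqld] settings shown (ndbcluster, ndb-connectstring) are assumptions about what a typical SQL-node configuration contains, not the contents of the generated file.

# /home/debian/MySQL_Cluster/49/my.cnf -- illustrative sketch only
[mysqld]
ndbcluster                              # assumed: enable the NDB storage engine on the SQL node
ndb-connectstring=192.168.120.79:1186   # management node address from the ndb_mgm output below
explicit_defaults_for_timestamp=1       # silences the TIMESTAMP deprecation warning at startup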

As you can see, with the --explicit_defaults_for_timestamp option the service starts correctly, but when I check the status from the management node, the cluster still shows a problem:

ndb_mgm> show 
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1    @192.168.120.111  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0, *)
id=2    @192.168.120.117  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0)
id=3    @192.168.120.118  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 1)
id=4    @192.168.120.76  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=50   @192.168.120.79  (mysql-5.6.28 ndb-7.4.10)

[mysqld(API)]   1 node(s)
id=49 (not connected, accepting connect from 192.168.120.107)

As you can see, the SQL node still has not started correctly:

[mysqld(API)]   1 node(s)
id=49 (not connected, accepting connect from 192.168.120.107)

I don't understand the problem.
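
One way to narrow this down is to check, on the SQL node host, whether another mysqld is already running and whether the server that is up has the NDB engine at all. A minimal sketch, assuming the client binary lives under /usr/local/mysql/bin as in the commands above:

# on the SQL node host (192.168.120.107)
ps -ef | grep [m]ysqld                                      # is a mysqld already running, and with which --defaults-file?
/usr/local/mysql/bin/mysql -u root -p -e "SHOW ENGINES;"    # is ndbcluster listed and enabled?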

Best Answer

Answering my own question: MySQL was already running on the SQL node, so when the MySQL Cluster Auto Installer tried to start the SQL node, it failed. Stopping the MySQL instance on the SQL node and then starting the cluster solved the problem.
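
For reference, the fix boils down to something like the following sketch. It assumes the stray MySQL instance was started through the Debian init script; the service name and paths may differ on other systems.

# on the SQL node host (192.168.120.107)
sudo service mysql stop                    # assumed service name; stop the MySQL instance that was already running

# then redeploy from the Auto Installer, or start the SQL node manually:
/usr/local/mysql/bin/mysqld \
    --defaults-file=/home/debian/MySQL_Cluster/49/my.cnf \
    --explicit_defaults_for_timestamp &

# verify from the management node that node 49 is now connected:
ndb_mgm -e show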