MySQL – Can’t Connect to MySQL NDB Manager Using ndb_desc

Tags: mysql, ndbcluster

I've got a basic MySQL NDB cluster running and it works well: one manager, two SQL nodes, and four data nodes, all OK. Two of the data nodes are from the initial creation; the other two are nodes I'm trying to add.

Environment-wise, this is what I have (applies to all nodes).

OS: CentOS 7
SELinux: disabled
firewalld : not installed
MySQL version: mysql-5.6.28
NDB: ndb-7.4.10

A view of the cluster configuration looks like this:

[root@mysql-ndb-manager ~]# ndb_mgm -e "SHOW"
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=2    @10.133.16.108  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0, *)
id=3    @10.133.16.196  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0)
id=6    @10.133.16.121  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 1)
id=7    @10.133.16.112  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.133.16.179  (mysql-5.6.28 ndb-7.4.10)

[mysqld(API)]   3 node(s)
id=4    @10.133.16.117  (mysql-5.6.28 ndb-7.4.10)
id=5    @10.133.16.180  (mysql-5.6.28 ndb-7.4.10)
id=8 (not connected, accepting connect from any host)

NoOfReplicas is set to 2, so this node count should be OK. I realised pretty quickly that with NoOfReplicas set to 2, adding a single additional data node doesn't work; data nodes have to be added as a complete node group, i.e. in pairs.

As you can see, I have an available node ID (8) that can be allocated for things like ndb_desc, etc.

New nodes I'm trying to add:

IDs: 6 & 7

They are configured properly, as far as I can tell, but have no data on them. They've started properly and are listed as part of the cluster. It's the redistribution process that isn't working for me.

The docs are pretty easy to understand: you use ndb_desc to connect to the manager and check how each table's partitions are distributed, then reorganize the table's partitions so that the data is redistributed to the new nodes.
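For reference, the documented redistribution sequence looks roughly like this (a sketch, assuming the new nodes already form node group 1 as the SHOW output above suggests, and using this cluster's database and table names):

```shell
# Check how the table's partitions are currently distributed.
# -d selects the database, -p prints partition information.
ndb_desc my_table -d appdb -p -c 10.133.16.179:1186

# Redistribute existing data across all node groups, including the new
# one. Run this from one of the SQL nodes. (NDB 7.3+ docs use the
# ALGORITHM=INPLACE form; older docs use ALTER ONLINE TABLE ...)
mysql -u root -p appdb \
  -e "ALTER TABLE my_table ALGORITHM=INPLACE, REORGANIZE PARTITION;"

# Reclaim the space freed on the original node group.
mysql -u root -p appdb -e "OPTIMIZE TABLE my_table;"
```

The ndb_desc step is only a check; it's the ALTER TABLE that actually moves data.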

When I run the required commands, I get this:

[root@mysql-ndb-manager ~]# ndb_desc -c 10.133.16.179:1186 my_table -d appdb -p --ndb-nodeid=8
Unable to connect to management server.

NDBT_ProgramExit: 1 - Failed

I've seen others with similar issues, and it nearly always comes down to the lack of a node ID that can be allocated. As far as I can tell, that isn't the case for me.

I've tried connecting with the manager's IP address, localhost, and 127.0.0.1, both locally and remotely. I've also tried the exact command from the docs, i.e. with their database and table names, but that doesn't work either (as expected, I guess).

The only data I have right now is as follows:

Database: `appdb`
Table: `my_table` (ID auto_increment and `name` char(25))
Data: 14 rows, random names
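For completeness, the table was created along these lines (a sketch from the description above; the exact DDL may differ, but it is an NDB table):

```shell
mysql -u root -p -e "
  CREATE DATABASE IF NOT EXISTS appdb;
  CREATE TABLE appdb.my_table (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name CHAR(25)
  ) ENGINE=NDBCLUSTER;"
```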

Data is replicated between the old/working nodes without issue: I can create a database or run any SQL statement on one of them and the change is immediately reflected on the other node. Offline nodes show up as offline as expected, and show up as online again when I bring them back up.

I've also got a rolling-restart script that works without any issues, so the clustering side of things looks OK.

Lastly, here's my manager's configuration file:

[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster

[mgm]
HostName=10.133.16.179

[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=10.133.16.108

[ndbd]
HostName=10.133.16.196

[mysqld]
HostName=10.133.16.117

[mysqld]
HostName=10.133.16.180

[ndbd]
HostName=10.133.16.121

[ndbd]
HostName=10.133.16.112

[mysqld]

What else could be causing this?

Thanks!

Best Answer

There is nothing in the description that points to the cause of the error; everything looks fine. It would be good if you could post the error messages written to the cluster log when you start ndb_desc.
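On the management node, the cluster log lives under the configured DataDir; a way to watch it while reproducing the failure (assuming the default log name for node ID 1):

```shell
# The management server writes ndb_<nodeid>_cluster.log in its DataDir,
# so with the config.ini above this is /var/lib/mysql-cluster.
tail -f /var/lib/mysql-cluster/ndb_1_cluster.log
```

Any node-allocation or connection refusal from ndb_desc should show up there.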