Using MySQL Shell to create a three-node MySQL InnoDB Cluster

MySQL InnoDB Cluster was introduced in MySQL version 5.7 and consists of three parts – Group Replication, MySQL Shell and MySQL Router. MySQL InnoDB Cluster provides a complete high availability solution for MySQL. In this post, I am going to explain how to set up a three-node cluster using MySQL Shell.

Note: Visit this page to learn more about MySQL InnoDB Cluster.
Other blogs on Group Replication and InnoDB Cluster:
MySQL 8.0 Group Replication – Three-server installation
Adding a replicated MySQL database instance using a Group Replication server as the source
Replicating data between two MySQL Group Replication sets using “regular” asynchronous replication with Global Transaction Identifiers (GTID’s)
MySQL 8.0 InnoDB Cluster – Creating a sandbox and testing MySQL Shell, Router and Group Replication

To begin, I am going to install three instances of the MySQL database and MySQL Shell (both version 8.0.15) on three separate virtual machines with the IP addresses of 192.168.1.161, 192.168.1.162 and 192.168.1.163. I will do most of the work on 192.168.1.161 via the MySQL Shell.

I am not going to change the configuration file (my.cnf or my.ini) except to add a unique value for server_id for each of the three servers. I will use the last three digits of the IP addresses for the server_id. So, for 192.168.1.161, my configuration file only has this in it:

# MySQL Configuration File
[mysqld]
server_id=161   # each server needs to have a unique server number
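
The other two servers get the same file with server_id=162 and server_id=163. As a quick sketch of the last-octet scheme (Python, with a hypothetical helper name):

```python
# Hypothetical helper: derive a unique server_id from a host's IP address
# by taking the last octet (e.g. 192.168.1.161 -> 161), as done in this post.
def server_id_from_ip(ip: str) -> int:
    return int(ip.split(".")[-1])

for ip in ("192.168.1.161", "192.168.1.162", "192.168.1.163"):
    print(f"{ip} -> server_id={server_id_from_ip(ip)}")
```

Any scheme works, as long as every server in the replication topology ends up with a unique server_id.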

Let’s get started

From a terminal window on 192.168.1.161, I will open the MySQL Shell using the command: mysqlsh. MySQL Shell has three modes – SQL, JavaScript and Python. For this post, I will be using the SQL and JavaScript modes. Once you are in the MySQL Shell console, here are the commands to switch between the modes:

SQL mode - \sql
JavaScript Mode - \js
Python mode - \py

Each mode is highlighted with a different color in the prompt, so you can always tell which mode you are in.

When you first open MySQL Shell, you aren’t connected to a database. I will start Shell (via mysqlsh), connect to the local instance, and then switch to SQL mode.


# mysqlsh

MySQL Shell 8.0.15-commercial

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.


Type '\help' or '\?' for help; '\quit' to exit.

 MySQL  JS    \connect root@localhost:3306

Creating a session to 'root@localhost:3306'
Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): N
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 12
Server version: 8.0.15-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use  to set one.

 MySQL  JS    \sql

Switching to SQL mode... Commands end with ;

I am starting with three fresh installations of MySQL. I shouldn’t have any databases or tables other than the default ones. And, I want to double-check to make sure there haven’t been any transactions already executed. I can do this with the SHOW MASTER STATUS\G command:

 MySQL  SQL    show master status\G

*************************** 1. row ***************************
             File: binlog.000001
         Position: 151
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.0003 sec)

Note: I installed these servers a few times, and sometimes when I ran this command, it would show binlog.000002 instead of binlog.000001. This doesn’t matter – as long as the Executed_Gtid_Set column is blank.

From an OS command prompt, you can take a look at the binary log file binlog.000001, which is located inside your MySQL data directory. And the binlog.index file contains a list of the active binary logs, which in this case, is only binlog.000001.


MacVM161:data root# ls -l bin*
-rw-r-----  1 _mysql  _mysql  151 Apr  3 20:48 binlog.000001
-rw-r-----  1 _mysql  _mysql   16 Apr  3 20:48 binlog.index
MacVM161:data root# cat binlog.index
./binlog.000001

Since this is a new installation, I should only see the four default MySQL databases, and the four default MySQL users:

 MySQL  SQL    show databases;

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.0013 sec)

 MySQL  SQL    select user, host from mysql.user;

+------------------+-----------+
| user             | host      |
+------------------+-----------+
| mysql.infoschema | localhost |
| mysql.session    | localhost |
| mysql.sys        | localhost |
| root             | localhost |
+------------------+-----------+
4 rows in set (0.0004 sec)

When you use MySQL Shell to build your InnoDB Cluster, you can use pre-built administration commands to configure and manage the cluster. Before adding an instance to the cluster, I can check to see if the instance is suitable for InnoDB Cluster by running the dba.checkInstanceConfiguration command. To use the commands, I will switch to JavaScript mode:

 MySQL  SQL    \js

Switching to JavaScript mode...

 MySQL  JS    dba.checkInstanceConfiguration('root@localhost:3306')

Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM161.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        }
    ], 
    "status": "error"
}

From the above output (under the header "Some configuration options need to be fixed:"), I would normally need to add these variables and their correct values to the MySQL configuration file and restart MySQL. But with MySQL Shell, I can change these variables dynamically and restart the server using the dba.configureInstance() command.

 MySQL  JS    dba.configureInstance('root@localhost:3306')

Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM161.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

WARNING: User 'root' can only connect from localhost.
If you need to manage this instance while connected from other hosts, new account(s) with the proper source address specification must be created.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: cluster_adm

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y

Cluster admin user 'cluster_adm'@'%' created.
Configuring instance...
The instance 'localhost:3306' was configured for InnoDB cluster usage.
Restarting MySQL...

I will need to start Shell on the other two servers, connect to the database, and execute the dba.configureInstance command – creating the cluster_adm user on each as well. All three instances should also be restarted (if they weren’t restarted automatically).

After the restart of the MySQL instance (mysqld), I still have MySQL Shell open, but I will need to reconnect to the database. Notice that if I have lost my connection and attempt to do anything (like switching to SQL mode), Shell will reconnect to the database for me:

 MySQL  JS    \sql


Switching to SQL mode... Commands end with ;
Error during auto-completion cache update: ClassicSession.runSql: Lost connection to MySQL server during query
The global session got disconnected..
Attempting to reconnect to 'mysql://root@localhost:3306'..
The global session was successfully reconnected.

I can then run dba.checkInstanceConfiguration again on all three servers to see if each instance is ready for InnoDB Cluster. (Remember to switch to JavaScript mode with \js if you aren’t already in it.)

 MySQL  SQL    \js

Switching to JavaScript mode...

 MySQL  JS    dba.checkInstanceConfiguration('root@localhost:3306')

Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): N
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM161.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'localhost:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

The status line of "status": "ok" confirms this instance is ready for InnoDB Cluster.

As I am doing these installs, I like to check the master status as I go along, so I will do that again. First I need to switch to SQL mode.

 MySQL  JS    \sql

Switching to SQL mode... Commands end with ;

 MySQL  SQL    show master status\G

*************************** 1. row ***************************
             File: binlog.000002
         Position: 151
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.0002 sec)

The master status hasn’t changed, so nothing was written to the binary log.

One immediate problem with not using the default root user is that you aren’t given the option to assign a password for the new cluster_adm user. I can verify this by querying the mysql.user table:

 MySQL  SQL    select user, host, authentication_string from mysql.user where user = 'cluster_adm';

+-------------+------+-----------------------+
| user        | host | authentication_string |
+-------------+------+-----------------------+
| cluster_adm | %    |                       |
+-------------+------+-----------------------+

Since there isn’t a password for cluster_adm (bug 94977), I will go ahead and set one. But I don’t want to write this change to the binary log, as it would then be replicated to the other servers once I start the cluster. I will need to modify this user and set a password on the other two servers as well. I can suppress writing to the binary log with SET SQL_LOG_BIN=0; and then turn it back on with SET SQL_LOG_BIN=1;. (Don’t forget to substitute your actual password for new_password.)

SET SQL_LOG_BIN=0;
ALTER USER 'cluster_adm'@'%' IDENTIFIED BY 'new_password';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;


 MySQL  SQL    SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.0001 sec)

 MySQL  SQL    ALTER USER 'cluster_adm'@'%' IDENTIFIED BY 'new_password';
Query OK, 0 rows affected (0.0001 sec)

 MySQL  SQL    FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.0001 sec)

 MySQL  SQL    SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.0001 sec)

I can check again to see if the cluster_adm user now has a password:

 MySQL  SQL    select user, host, authentication_string from mysql.user where user = 'cluster_adm';

+-------------+------+------------------------------------------------------------------------+
| user        | host | authentication_string                                                  |
+-------------+------+------------------------------------------------------------------------+
| cluster_adm | %    | $A$005$',q+B<%=JJ|Mz.WH!XXX/iQ4rvG/3DzX/UharambelivesYp1oODqtNZk25     | 
+-------------+------+------------------------------------------------------------------------+

NOTICE: The cluster_adm user must have the same password on all servers in the cluster!

InnoDB Cluster uses Global Transaction Identifiers (GTIDs). “A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of origin (the master). This identifier is unique not only to the server on which it originated, but is unique across all servers in a given replication topology.” Source

Each server in the three-node cluster will have its own GTIDs, each composed of a Universally Unique Identifier (UUID), a colon (:) and an incrementing transaction number. The UUID for each server is stored in the auto.cnf file in the MySQL data directory:

 # cat auto.cnf
[auto]
server-uuid=ae1a6186-5672-11e9-99b4-80e6004d84ae
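
As a side note, auto.cnf uses INI syntax, so the server-uuid value can be read programmatically – a sketch in Python with configparser (the inline sample stands in for the real file in your data directory):

```python
import configparser

# Sample mirroring the auto.cnf contents shown above; on a real server you
# would call parser.read("/path/to/datadir/auto.cnf") instead.
sample = """[auto]
server-uuid=ae1a6186-5672-11e9-99b4-80e6004d84ae
"""

parser = configparser.ConfigParser()
parser.read_string(sample)
print(parser["auto"]["server-uuid"])
```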

Therefore, each of these three servers will have its own UUID used in its GTIDs. The cluster itself will have a separate UUID used in its own GTIDs, and this UUID is generated when the cluster is created. I have the following UUIDs for the three servers:

IP Address      Server UUID
192.168.1.161   ae1a6186-5672-11e9-99b4-80e6004d84ae
192.168.1.162   cd287ef0-5672-11e9-be9a-b79ce5a797fd
192.168.1.163   d85f6086-5672-11e9-ad64-c08d80ddd285
InnoDB Cluster  (to be determined)

Now I am ready to create the cluster. From the first server (192.168.1.161), I will switch to JavaScript mode, and then I will want to re-connect as the cluster_adm user.

 MySQL  JS    \js

Switching to JavaScript mode...

 MySQL  JS    \connect cluster_adm@192.168.1.161:3306

Creating a session to 'cluster_adm@192.168.1.161:3306'
Please provide the password for 'cluster_adm@192.168.1.161:3306': 
Save password for 'cluster_adm@192.168.1.161:3306'? [Y]es/[N]o/Ne[v]er (default No): N
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 19
Server version: 8.0.15-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use  to set one.

I can create the cluster using the dba.createCluster command:

 MySQL  JS    dba.createCluster('myCluster');

A new InnoDB cluster will be created on instance 'cluster_adm@192.168.1.161:3306'.

Validating instance at 192.168.1.161:3306...

This instance reports its own address as MacVM161.local

Instance configuration is suitable.
Creating InnoDB cluster 'myCluster' on 'cluster_adm@192.168.1.161:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

The InnoDB Cluster was successfully created. Now when I run SHOW MASTER STATUS\G (after switching back to SQL mode), I will see some values in the Executed_Gtid_Set column:

 MySQL  JS    \sql

Switching to SQL mode... Commands end with ;

 MySQL  SQL    show master status\G

*************************** 1. row ***************************
             File: binlog.000001
         Position: 12406
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12,
c446e75c-5674-11e9-bf50-753fdb914192:1-2
1 row in set (0.0003 sec)

Under the Executed_Gtid_Set column, I have two sets of GTIDs. One is for the 192.168.1.161 server (ae1a6186-5672-11e9-99b4-80e6004d84ae), and the other is for the cluster (c446e75c-5674-11e9-bf50-753fdb914192).

Executed_Gtid_Set: ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12,
c446e75c-5674-11e9-bf50-753fdb914192:1-2

The Executed_Gtid_Set column shows which transactions have been applied on this server (192.168.1.161): 12 transactions for the server itself (ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12) and two for the cluster (c446e75c-5674-11e9-bf50-753fdb914192:1-2). The cluster’s GTIDs should be the same on each server as servers are added to the cluster. Also, once you add servers to a single-primary cluster, the secondary servers are set to SUPER_READ_ONLY to prevent write transactions from being applied directly to them. If you want to view the transactions which were executed, you can use the mysqlbinlog utility.
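
As a worked example of reading these numbers, here is a small Python sketch that counts the transactions in the Executed_Gtid_Set shown above. The helper name is hypothetical, and it assumes plain "uuid:start-end" (or "uuid:n") intervals, which matches this output:

```python
# Count the transactions per source UUID in an Executed_Gtid_Set string.
def count_gtids(gtid_set: str) -> dict:
    counts = {}
    for entry in gtid_set.replace("\n", "").split(","):
        uuid, _, intervals = entry.strip().partition(":")
        total = 0
        for interval in intervals.split(":"):   # a UUID can have several intervals
            start, _, end = interval.partition("-")
            total += (int(end) - int(start) + 1) if end else 1
        counts[uuid] = total
    return counts

executed = ("ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12,"
            "c446e75c-5674-11e9-bf50-753fdb914192:1-2")
print(count_gtids(executed))
```

Running this against the output above yields 12 transactions for the server’s UUID and 2 for the cluster’s.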

I can reference my earlier table of the UUIDs for all of the servers, and I can now add the UUID for the cluster (in the last row):

IP Address      Server UUID
192.168.1.161   ae1a6186-5672-11e9-99b4-80e6004d84ae
192.168.1.162   cd287ef0-5672-11e9-be9a-b79ce5a797fd
192.168.1.163   d85f6086-5672-11e9-ad64-c08d80ddd285
InnoDB Cluster  c446e75c-5674-11e9-bf50-753fdb914192

I can check the status of the cluster (after switching back to JavaScript mode):

 MySQL  JS    \js

Switching to JavaScript mode...

 MySQL  JS    var cluster = dba.getCluster()
 MySQL  JS    cluster.status()

{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "192.168.1.161:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "192.168.1.161:3306": {
                "address": "192.168.1.161:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "MacVM161.local:3306"
}

Under the topology section above, we can see server 192.168.1.161 is in the cluster and has the status of ONLINE.

I am now ready to add 192.168.1.162 and 192.168.1.163 to the cluster. But first I need to make sure each server is ready to be added to the cluster using the dba.checkInstanceConfiguration command. I can run these commands from any one of the servers, but I am going to run this from the first server – 192.168.1.161. Prior to this, I went ahead and made the same changes to the MySQL configuration file (my.cnf or my.ini) as I did on 192.168.1.161.

 MySQL  JS    dba.checkInstanceConfiguration('cluster_adm@192.168.1.162:3306')

Please provide the password for 'cluster_adm@192.168.1.162:3306': 
Save password for 'cluster_adm@192.168.1.162:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating MySQL instance at 192.168.1.162:3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM162.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '192.168.1.162:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

 MySQL  JS    dba.checkInstanceConfiguration('cluster_adm@192.168.1.163:3306')

Please provide the password for 'cluster_adm@192.168.1.163:3306': 
Save password for 'cluster_adm@192.168.1.163:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating MySQL instance at 192.168.1.163:3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM163.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '192.168.1.163:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

Both servers are ready to go, as indicated by the “status”: “ok”.

Now I can add the other two servers. (I already set the variable cluster earlier – you only need to run var cluster = dba.getCluster() once per session. If you get an error, you may need to set it again.)

 MySQL  JS    cluster.addInstance('cluster_adm@192.168.1.162:3306')

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Please provide the password for 'cluster_adm@192.168.1.162:3306': 
Save password for 'cluster_adm@192.168.1.162:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating instance at 192.168.1.162:3306...

This instance reports its own address as MacVM162.local

Instance configuration is suitable.
The instance 'cluster_adm@192.168.1.162:3306' was successfully added to the cluster.

 MySQL  JS    cluster.addInstance('cluster_adm@192.168.1.163:3306')

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Please provide the password for 'cluster_adm@192.168.1.163:3306': 
Save password for 'cluster_adm@192.168.1.163:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating instance at 192.168.1.163:3306...

This instance reports its own address as MacVM163.local

Instance configuration is suitable.
The instance 'cluster_adm@192.168.1.163:3306' was successfully added to the cluster.

The output of SHOW MASTER STATUS\G (back in SQL mode) should now be the same on all three servers.

 MySQL  SQL    SHOW MASTER STATUS\G

*************************** 1. row ***************************
             File: binlog.000005
         Position: 17098
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12,
c446e75c-5674-11e9-bf50-753fdb914192:1-14
1 row in set (0.0002 sec)

I can also check the status of the cluster, to see if all three nodes are ONLINE.

 MySQL  JS    cluster.status()

{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "192.168.1.161:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "192.168.1.161:3306": {
                "address": "192.168.1.161:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "192.168.1.162:3306": {
                "address": "192.168.1.162:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "192.168.1.163:3306": {
                "address": "192.168.1.163:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "MacVM161.local:3306"
}

All three nodes have the status of ONLINE, so the cluster is up and ready to use.
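
The fault-tolerance numbers Shell reports ("NOT tolerant to any failures" with one member, "can tolerate up to ONE failure" with three) follow from Group Replication’s majority-quorum rule – a group of n members tolerates floor((n - 1) / 2) simultaneous failures. As a quick sketch:

```python
# A Group Replication group needs a majority of members to keep quorum,
# so n members tolerate floor((n - 1) / 2) simultaneous failures.
def tolerated_failures(members: int) -> int:
    return (members - 1) // 2

for n in (1, 3, 5, 7):
    print(f"{n} member(s) -> tolerates {tolerated_failures(n)} failure(s)")
```

This is why the docs (and Shell’s own message) recommend at least three instances: with only one or two members, losing a single server costs you quorum.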

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots 
Visit http://2044thebook.com for more information.
Tony is the editor/illustrator for NASA Graphics Standards Manual Remastered Edition 
Visit https://amzn.to/2oPFLI0 for more information.