MySQL 8.0 InnoDB Cluster – Creating a sandbox and testing MySQL Shell, Router and Group Replication
May 30, 2018
MySQL’s InnoDB Cluster was released on Apr 12, 2017, with version 5.7 and is also included in MySQL version 8.0.
MySQL InnoDB cluster provides a complete high availability solution for MySQL. MySQL Shell includes AdminAPI, which enables you to easily configure and administer a group of at least three MySQL server instances to function as an InnoDB cluster. Each MySQL server instance runs MySQL Group Replication, which provides the mechanism to replicate data within InnoDB clusters, with built-in failover. AdminAPI removes the need to work directly with Group Replication in InnoDB clusters, but for more information see Chapter 18, Group Replication, which explains the details.

MySQL Router can automatically configure itself based on the cluster you deploy, connecting client applications transparently to the server instances. In the event of an unexpected failure of a server instance, the cluster reconfigures automatically. In the default single-primary mode, an InnoDB cluster has a single read-write server instance – the primary. Multiple secondary server instances are replicas of the primary. If the primary fails, a secondary is automatically promoted to the role of primary. MySQL Router detects this and forwards client applications to the new primary. Advanced users can also configure a cluster to have multiple primaries. (source: Introducing InnoDB Cluster)
The following diagram shows an overview of how these technologies work together:

[Diagram: MySQL Shell (AdminAPI), MySQL Router and a Group Replication-based InnoDB cluster working together]
I am not going to go into any more details about how the InnoDB Cluster works, as there are plenty of articles over on the MySQL Server Team and MySQL High Availability blogs.
This post will walk you through setting up MySQL InnoDB Cluster in a sandbox environment. I used a Mac running macOS 10.13, but these commands should also work on Linux. The only differences will be the location of root's home directory and where MySQL is installed.
To begin, you will need to install the MySQL server (version 8.0), MySQL Router and MySQL Shell. You may download these from the MySQL downloads page. I am not going to walk you through installing them – on the Mac, the installation is very easy. For any other OS, you will need to refer to the documentation for each package.
Okay – so I have installed all three components of the MySQL InnoDB Cluster – the MySQL server, MySQL Router and MySQL Shell (all three were version 8.0.11). For ease of going through this tutorial, I would recommend opening five terminal windows – one for the MySQL Shell, one for regular OS access, and one for each of the three sandboxes that I will create. The sandboxes will all run on one server, and each sandbox will have a different port number for its MySQL instance. In this example, I am using ports 3310, 3320 and 3330.
To make this tutorial easy to follow, simply enter the commands shown on their own lines. I have included the output from each command immediately afterward.
One note before you start: make sure that all of the MySQL directories under /usr/local are owned by the mysql user. You don't have to create the mysql user separately, but you will notice the user name shows up as _mysql.
cd /usr/local
chown -R mysql mysql*
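A quick way to double-check the ownership afterwards (the exact directory names will depend on which packages you installed):

ls -ld /usr/local/mysql*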
Let's begin by logging in as root on my Mac and opening the MySQL Shell:
mysqlsh
~:root # mysqlsh
MySQL Shell 8.0.11

Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
I will be creating three sandbox instances to demo how MySQL InnoDB Cluster works. I will deploy the first sandbox instance using port 3310, and I will need to enter a root password.
dba.deploySandboxInstance(3310);
MySQL JS > dba.deploySandboxInstance(3310);
A new MySQL sandbox instance will be created on this host in
/var/root/mysql-sandboxes/3310

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310'); to connect to the instance.
Next, deploy the second sandbox instance using port 3320 – and all three instances will need to have the same root password.
dba.deploySandboxInstance(3320);
MySQL JS > dba.deploySandboxInstance(3320);
A new MySQL sandbox instance will be created on this host in
/var/root/mysql-sandboxes/3320

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320'); to connect to the instance.
Finally, deploy the third sandbox instance using port 3330.
dba.deploySandboxInstance(3330);
MySQL JS > dba.deploySandboxInstance(3330);
A new MySQL sandbox instance will be created on this host in
/var/root/mysql-sandboxes/3330

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330'); to connect to the instance.
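As an aside, since the Shell is in JavaScript mode, you could also deploy all three sandboxes in a single statement (a minimal sketch – it will still prompt for a root password for each instance):

[3310, 3320, 3330].forEach(function (port) {
  dba.deploySandboxInstance(port);  // creates and starts a sandbox on this port
});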
From within the MySQL Shell, while in JavaScript mode, connect to the first instance on port 3310.
shell.connect("root@localhost:3310");
MySQL JS > shell.connect("root@localhost:3310");
Please provide the password for 'root@localhost:3310':
Creating a session to 'root@localhost:3310'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 20
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use to set one.
I will create the cluster starting with the 3310 sandbox – and the name of the cluster will be simply ‘mycluster’.
cluster = dba.createCluster('mycluster');
MySQL localhost:3310 ssl JS > cluster = dba.createCluster('mycluster');
A new InnoDB cluster will be created on instance 'root@localhost:3310'.

Validating instance at localhost:3310...
Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as MacVM151.local
Instance configuration is suitable.
Creating InnoDB cluster 'mycluster' on 'root@localhost:3310'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
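createCluster() also accepts an options dictionary as a second argument. For example, advanced users can create a multi-primary cluster instead of the default single-primary one. A hedged sketch only – in Shell 8.0.11 the option was named multiMaster and required the force option to acknowledge its caveats (later versions renamed it multiPrimary):

// sketch: multi-primary cluster; option names vary by Shell version
cluster = dba.createCluster('mycluster', {multiMaster: true, force: true});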
Next, add the other two sandboxes to the cluster – the ones on ports 3320 and 3330.
Adding 3320…
cluster.addInstance("root@localhost:3320");
MySQL localhost:3310 ssl JS > cluster.addInstance("root@localhost:3320");
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3320':
Adding instance to the cluster ...

Validating instance at localhost:3320...
Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as MacVM151.local
Instance configuration is suitable.
The instance 'root@localhost:3320' was successfully added to the cluster.
And 3330…
cluster.addInstance("root@localhost:3330");
MySQL localhost:3310 ssl JS > cluster.addInstance("root@localhost:3330")
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3330':
Adding instance to the cluster ...

Validating instance at localhost:3330...
Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as MacVM151.local
Instance configuration is suitable.
The instance 'root@localhost:3330' was successfully added to the cluster.
And now I can check the cluster status – and you can see 3310 is read-write, and the other two are read-only.
cluster.status();
MySQL localhost:3310 ssl JS > cluster.status();
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "localhost:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3320": {
                "address": "localhost:3320",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3330": {
                "address": "localhost:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://root@localhost:3310"
}
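The Cluster object also has a describe() method, which prints just the static topology (the instances and their addresses) without the runtime state – handy for a quick look:

cluster.describe();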
I have opened another terminal window, where I can also check the processes for mysqld, and you can see all three sandbox instances are running.
ps -ef|grep mysqld
~:root # ps -ef|grep mysqld
  501  2124   853   0  9:25PM ttys000    0:00.00 grep mysqld
  501  2078     1   0  8:55PM ttys002    0:07.28 /usr/local/mysql/bin/mysqld --defaults-file=/Users/tonydarnell/mysql-sandboxes/3310/my.cnf
  501  2098     1   0  9:16PM ttys002    0:03.98 /usr/local/mysql/bin/mysqld --defaults-file=/Users/tonydarnell/mysql-sandboxes/3320/my.cnf
  501  2106     1   0  9:16PM ttys002    0:03.67 /usr/local/mysql/bin/mysqld --defaults-file=/Users/tonydarnell/mysql-sandboxes/3330/my.cnf
From the terminal, go to /var/root, where you can see the mysql-sandboxes directory and the contents of each sandbox directory:
cd /var/root
pwd
ls -ld mysql-sandboxes
ls -ld mysql-sandboxes/*
ls -ld mysql-sandboxes/3310/*
~:root # cd /var/root
~:root # pwd
/var/root
~:root # ls -ld mysql-sandboxes
drwxr-xr-x  5 _mysql  wheel  170 Apr 24 11:25 mysql-sandboxes
~:root # ls -ld mysql-sandboxes/*
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 12:07 mysql-sandboxes/3310
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 11:25 mysql-sandboxes/3320
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 11:25 mysql-sandboxes/3330
~:root # ls -ld mysql-sandboxes/3310/*
-rw-r-----   1 root    wheel     5 Apr 24 12:07 mysql-sandboxes/3310/3310.pid
-rw-------   1 _mysql  wheel   746 Apr 24 12:07 mysql-sandboxes/3310/my.cnf
drwxr-xr-x   2 _mysql  wheel    68 Apr 24 11:24 mysql-sandboxes/3310/mysql-files
drwxr-x---  41 _mysql  wheel  1394 Apr 24 12:07 mysql-sandboxes/3310/sandboxdata
-rwx------   1 _mysql  wheel   126 Apr 24 11:24 mysql-sandboxes/3310/start.sh
-rwx------   1 _mysql  wheel   196 Apr 24 11:24 mysql-sandboxes/3310/stop.sh
I want to make sure I change ownership of the sandboxes to the mysql user.
cd /var/root
chown -R mysql mysql-sandboxes/
ls -ld mysql-sandboxes/
~:root # cd /var/root
~:root # chown -R mysql mysql-sandboxes/
~:root # ls -ld mysql-sandboxes/
drwxr-xr-x  5 _mysql  wheel  170 Apr 24 11:25 mysql-sandboxes/
Now, I want to verify that MySQL Router isn't running.
ps -ef|grep router
~:root # ps -ef|grep router
    0  2766  2356   0 11:31AM ttys002    0:00.00 grep router
Before I start mysqlrouter, I will want to include the router bin directory in the $PATH for root. I will edit root's .profile file and add /usr/local/mysql-router/bin to the $PATH.

I can now start mysqlrouter in bootstrap mode. If you run this as root, you will need to specify the user with the --user option.
mysqlrouter --bootstrap localhost:3310 --directory /usr/local/myrouter --user=mysql
/usr/local:root # mysqlrouter --bootstrap localhost:3310 --directory /usr/local/myrouter --user=mysql
Please enter MySQL password for root:

Bootstrapping MySQL Router instance at '/usr/local/myrouter'...
MySQL Router has now been configured for the InnoDB cluster 'mycluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'mycluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'mycluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
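The bootstrap step writes a self-contained configuration file into the target directory. If you are curious, you can inspect it before starting the router (the file name matches what you will see later in the process list):

cat /usr/local/myrouter/mysqlrouter.conf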
Next, I will start the router.

/usr/local/myrouter/start.sh

/usr/local:root # /usr/local/myrouter/start.sh
I want to check the processes to make sure that router is running.
ps -ef|grep router
/usr/local:root # ps -ef|grep router
    0  2145     1   0  9:32PM ttys000    0:00.02 sudo ROUTER_PID=/usr/local/myrouter/mysqlrouter.pid /usr/local/bin/mysqlrouter -c /usr/local/myrouter/mysqlrouter.conf --user=mysql
   74  2146  2145   0  9:32PM ttys000    0:00.22 /usr/local/bin/mysqlrouter -c /usr/local/myrouter/mysqlrouter.conf --user=mysql
    0  2148  2136   0  9:32PM ttys000    0:00.00 grep router
I can now connect to the router and see which of the sandbox instances the router connects me to.
If you already have a mysql shell window open, use this command: shell.connect("root@localhost:6446");

Or, from the command prompt, use: mysqlsh --uri root@localhost:6446
/usr/local/myrouter:root # mysqlsh --uri root@localhost:6446
Creating a session to 'root@localhost:6446'
Enter password:
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 135
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use to set one.
MySQL Shell 8.0.11

Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

MySQL localhost:6446 ssl JS >
Switch to SQL mode and check to see which port is being used.
\sql
select @@port;
MySQL localhost:6446 ssl JS > \sql
Switching to SQL mode... Commands end with ;
MySQL localhost:6446 ssl SQL > select @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.0005 sec)
I can see that I am connected to port 3310 – which is the read/write instance of the InnoDB cluster.
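As a quick check, connecting to the read-only port 6447 should land on one of the secondaries instead. A sketch using the mysql command-line client (you should see 3320 or 3330 – read-only connections are load-balanced, so repeated runs may alternate):

mysql -uroot -p -h127.0.0.1 -P6447 -e "select @@port;"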
If you want to check the status of the cluster, keep in mind that cluster.status() is a MySQL Shell JavaScript method, not an SQL statement – and that in a fresh Shell session, the cluster variable isn't defined until you retrieve it with dba.getCluster().

If you try to check the status of the cluster while in SQL mode, you get this error:
MySQL localhost:6446 ssl SQL > cluster.status();
ERROR: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'cluster.status()' at line 1
If you try while in javascript mode, you get this error:
MySQL localhost:6446 ssl JS > cluster.status();
ReferenceError: cluster is not defined
So, I will want to connect back to the cluster itself – but first I need to switch to JavaScript mode.
\js
shell.connect("root@localhost:3310");
MySQL localhost:6446 ssl SQL > \js
Switching to JavaScript mode...
MySQL JS > shell.connect("root@localhost:3310");
Please provide the password for 'root@localhost:3310':
Creating a session to 'root@localhost:3310'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 193
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use to set one.
Back in JavaScript mode, I can retrieve the cluster object and check the status of the cluster again.
cluster=dba.getCluster();
cluster.status();
MySQL localhost:3310 ssl JS > cluster.status();
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "localhost:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3320": {
                "address": "localhost:3320",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3330": {
                "address": "localhost:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://root@localhost:3310"
}
I am going to create a database, a table and then insert data into the table so I can see how group replication will replicate the changes from the read-write server to the other two servers.
I am opening three terminal windows, each connected to a separate instance – 3310, 3320, and 3330.
mysql -uroot -p -P3310 -h127.0.0.1
/usr/local/myrouter:root # mysql -uroot -p -P3310 -h127.0.0.1
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 219
Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
mysql -uroot -p -P3320 -h127.0.0.1
~:root # mysql -uroot -p -P3320 -h127.0.0.1
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 109
Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
mysql -uroot -p -P3330 -h127.0.0.1
/usr/local:root # mysql -uroot -p -P3330 -h127.0.0.1
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 99
Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
On each of the three instances – 3310, 3320 and 3330 – I will look at what databases I already have on each instance (they should only contain the default mysql databases).
show databases;
mysql> show databases;
+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
+-------------------------------+
5 rows in set (0.00 sec)
On the read-write server, which is on port 3310, I will create a database named test_01.
create database test_01;
mysql> create database test_01;
Query OK, 1 row affected (0.05 sec)
Now, I can check to see if the database was created on 3310, and then check on the other two to see that it has been replicated. I will run this on all three instances.
show databases;
mysql> show databases;
+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
| test_01                       |
+-------------------------------+
6 rows in set (0.00 sec)
I can see the test_01 database, and the new database doesn't have any tables yet, so I will run this on all three instances to show zero tables:
use test_01;show tables;
mysql> use test_01;show tables;
Database changed
Empty set (0.01 sec)
I am going to create a table named "employees" on 3310.
CREATE TABLE `employees` (
`id_emp` int(11) NOT NULL AUTO_INCREMENT,
`name_first` varchar(45) DEFAULT NULL,
`name_middle` varchar(45) DEFAULT NULL,
`name_last` varchar(45) DEFAULT NULL,
`phone_home` varchar(45) DEFAULT NULL,
`phone_cell` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id_emp`)
) ENGINE=InnoDB AUTO_INCREMENT=10000;
mysql> CREATE TABLE `employees` (
    -> `id_emp` int(11) NOT NULL AUTO_INCREMENT,
    -> `name_first` varchar(45) DEFAULT NULL,
    -> `name_middle` varchar(45) DEFAULT NULL,
    -> `name_last` varchar(45) DEFAULT NULL,
    -> `phone_home` varchar(45) DEFAULT NULL,
    -> `phone_cell` varchar(45) DEFAULT NULL,
    -> PRIMARY KEY (`id_emp`)
    -> ) ENGINE=InnoDB AUTO_INCREMENT=10000;
Query OK, 0 rows affected (0.10 sec)
On all three instances – 3310, 3320 and 3330 – I will run this to show that the creation of the employees table was propagated to the other two servers via replication.
use test_01;show tables;
mysql> use test_01;show tables;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
+-------------------+
| Tables_in_test_01 |
+-------------------+
| employees         |
+-------------------+
1 row in set (0.00 sec)
And, on all three instances – 3310, 3320 and 3330 – I will run this to show that the employees table is empty.
select * from employees;
mysql> select * from employees;
Empty set (0.01 sec)
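One thing worth knowing before you create your own tables: Group Replication requires every replicated table to use InnoDB and to have a primary key (or a non-null unique key). A hedged illustration – the table name here is just an example, and the exact error code and text vary by version:

CREATE TABLE no_pk_test (c1 int);   -- the CREATE itself succeeds
INSERT INTO no_pk_test VALUES (1);  -- but the INSERT is rejected, with an error similar to:
-- ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.
DROP TABLE no_pk_test;              -- clean up the example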
Now I can insert a row into the employees table on 3310.
INSERT INTO `test_01`.`employees` (`name_first`, `name_middle`, `name_last`, `phone_home`, `phone_cell`) VALUES ('John', 'H', 'Smith', '404-555-1212', '404-555-2020');
mysql> INSERT INTO `test_01`.`employees` (`name_first`, `name_middle`, `name_last`, `phone_home`, `phone_cell`) VALUES ('John', 'H', 'Smith', '404-555-1212', '404-555-2020');
Query OK, 1 row affected (0.00 sec)
And, on all three instances – 3310, 3320 and 3330 – I will run this to show that the insert statement was propagated to the other two servers.
select * from employees;
mysql> select * from employees;
+--------+------------+-------------+-----------+--------------+--------------+
| id_emp | name_first | name_middle | name_last | phone_home   | phone_cell   |
+--------+------------+-------------+-----------+--------------+--------------+
|  10006 | John       | H           | Smith     | 404-555-1212 | 404-555-2020 |
+--------+------------+-------------+-----------+--------------+--------------+
1 row in set (0.00 sec)
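Notice the first id_emp value is 10006 rather than 10000, even though the table was created with AUTO_INCREMENT=10000. When Group Replication starts, it adjusts auto_increment_increment (to 7 by default, the documented maximum group size) and auto_increment_offset (to the server_id), so that auto-generated keys on different members can never collide. You can confirm the values on any member:

select @@auto_increment_increment, @@auto_increment_offset;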
On another terminal, log in as root and let's take a look at the sandbox files to see what was created.
cd /var/root
pwd
ls -ld mysql-sandboxes/*
~:root # cd /var/root
~:root # pwd
/var/root
~:root # ls -ld mysql-sandboxes/*
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 11:24 mysql-sandboxes/3310
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 11:25 mysql-sandboxes/3320
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 11:25 mysql-sandboxes/3330
Now, from within the shell, connect to the router. You might have to start a new shell (quit the old one if you still have it open); otherwise, you might still be connected directly to 3310.
mysqlsh
~:root # mysqlsh
MySQL Shell 8.0.11

Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
shell.connect("root@localhost:6446");
MySQL JS > shell.connect("root@localhost:6446");
Please provide the password for 'root@localhost:6446':
Creating a session to 'root@localhost:6446'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 146
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use to set one.
I am going to kill the first sandbox instance (using port 3310), which was the read-write instance.
dba.killSandboxInstance(3310);
MySQL localhost:6446 ssl JS > dba.killSandboxInstance(3310);
The MySQL sandbox instance on this host in
/var/root/mysql-sandboxes/3310 will be killed

Killing MySQL instance...

Instance localhost:3310 successfully killed.
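Note that killSandboxInstance() stops the process abruptly, on purpose, to simulate an unexpected failure. For a clean, planned shutdown of a sandbox, the AdminAPI provides a companion method (it prompts for the root password):

dba.stopSandboxInstance(3310);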
# switch to sql mode
\sql
MySQL localhost:6446 ssl JS > \sql
Switching to SQL mode... Commands end with ;
Now, I can check to see which port is now being used by the router.
select @@port;
MySQL localhost:6446 ssl SQL > select @@port;
+--------+
| @@port |
+--------+
|   3320 |
+--------+
1 row in set (0.0004 sec)
I will switch to another terminal window. Despite killing the sandbox, notice that the sandbox files for 3310 weren't removed.
ls -ld /var/root/mysql-sandboxes/*
~:root # ls -ld /var/root/mysql-sandboxes/*
drwxr-xr-x  7 _mysql  wheel  238 Apr 24 11:58 /var/root/mysql-sandboxes/3310
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 11:25 /var/root/mysql-sandboxes/3320
drwxr-xr-x  8 _mysql  wheel  272 Apr 24 11:25 /var/root/mysql-sandboxes/3330
I will switch back to the mysqlsh window and switch to JavaScript mode.
\js
MySQL localhost:6446 ssl SQL > \js
Switching to JavaScript mode...
I can now check the status of the cluster.
cluster=dba.getCluster();
MySQL localhost:6446 ssl JS > cluster=dba.getCluster();
cluster.status();
You can see how the instance for 3310 is now labeled as MISSING and how 3320 is now the read-write instance.
MySQL localhost:6446 ssl JS > cluster.status();
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "localhost:3320",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)"
            },
            "localhost:3320": {
                "address": "localhost:3320",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3330": {
                "address": "localhost:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://root@localhost:6446"
}
Let's bring 3310 back into the cluster by starting the sandbox instance again – and after you do this, if you quickly run another cluster.status(), you will see it is in recovery mode.
dba.startSandboxInstance(3310);
MySQL localhost:6446 ssl JS > dba.startSandboxInstance(3310);
The MySQL sandbox instance on this host in
/var/root/mysql-sandboxes/3310 will be started

Starting MySQL instance...

Instance localhost:3310 successfully started.
# if you do another cluster.status(); very quickly
# you can now see that 3310 is in RECOVERING mode
cluster.status();
MySQL localhost:6446 ssl JS > cluster.status();
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "localhost:3320",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "RECOVERING"
            },
            "localhost:3320": {
                "address": "localhost:3320",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3330": {
                "address": "localhost:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://root@localhost:6446"
}
And then do another cluster.status() – and you can see how 3310 has rejoined the cluster, but it is now a read-only node.
cluster.status();
MySQL localhost:6446 ssl JS > cluster.status();
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "localhost:3320",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3320": {
                "address": "localhost:3320",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3330": {
                "address": "localhost:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://root@localhost:6446"
}
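In this sandbox setup, the restarted instance rejoined the group automatically. If an instance ever remains in (MISSING) state after it is back up, you can ask it to rejoin explicitly:

cluster.rejoinInstance('root@localhost:3310');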
Now that 3310 is back online, if I try to do an insert from 3310, I get an error – because it is now a read-only node.
INSERT INTO `test_01`.`employees` (`name_first`, `name_middle`, `name_last`, `phone_home`, `phone_cell`) VALUES ('John', 'H', 'Smith', '404-555-1212', '404-555-2020');
mysql> INSERT INTO `test_01`.`employees` (`name_first`, `name_middle`, `name_last`, `phone_home`, `phone_cell`) VALUES ('John', 'H', 'Smith', '404-555-1212', '404-555-2020');
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
I can do an insert on 3320 as it is the read-write node, and check to see if it was replicated to the other two servers.
INSERT INTO `test_01`.`employees` (`name_first`, `name_middle`, `name_last`, `phone_home`, `phone_cell`) VALUES ('Susan', 'K', 'James', '912-555-8565', '912-555-9986');
mysql> INSERT INTO `test_01`.`employees` (`name_first`, `name_middle`, `name_last`, `phone_home`, `phone_cell`) VALUES ('Susan', 'K', 'James', '912-555-8565', '912-555-9986');
Query OK, 1 row affected (0.09 sec)
And now I can check the employees table to see both rows of data – on all of the nodes.
use test_01; select * from employees;
mysql> select * from employees;
+--------+------------+-------------+-----------+--------------+--------------+
| id_emp | name_first | name_middle | name_last | phone_home   | phone_cell   |
+--------+------------+-------------+-----------+--------------+--------------+
|  10003 | John       | H           | Smith     | 404-555-1212 | 404-555-2020 |
|  10004 | Susan      | K           | James     | 912-555-8565 | 912-555-9986 |
+--------+------------+-------------+-----------+--------------+--------------+
2 rows in set (0.00 sec)
The InnoDB Cluster (sandbox version) is now up and running!
If you want to start over, kill all three mysqld processes, where pid1, pid2 and pid3 are the process IDs from the ps -ef output.
ps -ef| grep mysqld
kill -9 pid1 pid2 pid3
Double-check to make sure you killed the mysqld processes:
ps -ef| grep mysqld
~:root # ps -ef| grep mysqld
    0  2342     1   0 10:05PM ttys000    2:34.77 /usr/local/mysql/bin/mysqld --defaults-file=/var/root/mysql-sandboxes/3320/my.cnf --user=root
    0  2347     1   0 10:05PM ttys000    2:29.65 /usr/local/mysql/bin/mysqld --defaults-file=/var/root/mysql-sandboxes/3330/my.cnf --user=root
    0  2706     1   0  9:58AM ttys000    0:41.80 /usr/local/mysql/bin/mysqld --defaults-file=/var/root/mysql-sandboxes/3310/my.cnf --user=root
    0  2721  2356   0 11:17AM ttys002    0:00.00 grep mysqld
~:root # kill -9 2342 2347 2706
~:root # ps -ef| grep mysqld
    0  2723  2356   0 11:17AM ttys002    0:00.00 grep mysqld
Remove the sandbox files.
cd /var/root
ls -ld mysql-sandboxes/
rm -r mysql-sandboxes/
ls -ld mysql-sandboxes/
~:root # cd /var/root
~:root # ls -ld mysql-sandboxes/
drwxr-xr-x  5 _mysql  wheel  170 Apr 23 22:05 mysql-sandboxes/
~:root # rm -r mysql-sandboxes/
~:root # ls -ld mysql-sandboxes/
ls: mysql-sandboxes/: No such file or directory
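Alternatively, you can let the AdminAPI do the teardown for you from within MySQL Shell – killing each instance and then deleting its sandbox directory (run the pair once for each port):

dba.killSandboxInstance(3310);
dba.deleteSandboxInstance(3310);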
Next, check for and kill the mysqlrouter processes.

ps -ef|grep router
kill -9 2645 2646 (use whatever the PIDs are for the router processes)
ps -ef|grep router
~:root # ps -ef|grep router
    0  2645     1   0  9:27AM ttys000    0:00.01 sudo ROUTER_PID=/usr/local/myrouter/mysqlrouter.pid /usr/local/bin/mysqlrouter -c /usr/local/myrouter/mysqlrouter.conf --user=mysql
   74  2646  2645   0  9:27AM ttys000    0:39.63 /usr/local/bin/mysqlrouter -c /usr/local/myrouter/mysqlrouter.conf --user=mysql
    0  2764  2356   0 11:31AM ttys002    0:00.00 grep router
~:root # kill -9 2646
Remove the /usr/local/myrouter directory.
cd /usr/local/
ls -ld myrouter
rm -r /usr/local/myrouter
/usr/local:root # cd /usr/local/
/usr/local:root # ls -ld myrouter
drwx------  10 _mysql  _mysql  340 Apr 24 11:40 myrouter
/usr/local:root # rm -r /usr/local/myrouter
/usr/local:root # ls -ld myrouter
ls: myrouter: No such file or directory
You can now start the whole process over again.
Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world's most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Tony is the author of Twenty Forty-Four: The League of Patriots. Visit http://2044thebook.com for more information.