Using mysqldump and the MySQL binary log – a quick guide on how to backup and restore MySQL databases

Be sure to check out my other posts on mysqldump:
- Scripting Backups of MySQL with Perl via mysqldump
- Splitting a MySQL Dump File Into Smaller Files Via Perl
- Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 1 of 2
- Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 2 of 2


I have already written several posts on using mysqldump. This post will be a quick guide to using mysqldump to backup and restore your databases. And we will look at using the MySQL binary log (binlog) along with mysqldump for point-in-time recovery. If your database is extremely large, and for databases in a production environment, you might want to consider using MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition.

For those of you that aren’t familiar with mysqldump:

The mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

The best feature about mysqldump is that it is easy to use. The main problem with using mysqldump occurs when you need to restore a database. When you execute mysqldump, the database backup (output) is an SQL file that contains all of the necessary SQL statements to restore the database – but restoring requires that you execute these SQL statements to essentially rebuild the database. Since you are recreating your database, the tables and all of your data from this file, the restoration procedure can take a long time to execute if you have a very large database.


NOTE: If you are using GTIDs (global transaction identifiers) in your database, you will need to include the --set-gtid-purged=OFF option; otherwise, you will receive this warning:

Warning: A partial dump from a server that has GTIDs will by default include the
GTIDs of all transactions, even those that changed suppressed parts of the database.
If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete 
dump, pass --all-databases --triggers --routines --events. 

For these examples, I will not include the --set-gtid-purged=OFF option.


Dumping and making a copy of a single database

To dump/backup a single database:

mysqldump -uroot -p database_name > db_dump.sql

To load the dump file back into mysql, you will need to create the new database first. If you use the --databases option before the database name, mysqldump will also dump the CREATE DATABASE and USE statements that you need prior to inserting the data from the dump.
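For example, a dump that also contains the CREATE DATABASE and USE statements might be created like this (database_name is just a placeholder):

mysqldump -uroot -p --databases database_name > db_dump.sql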

You can either use mysqladmin to create the new database, or create it from a MySQL prompt:

# mysqladmin create new_database_name

mysql> CREATE DATABASE new_database_name;

Next, you can simply import the dump file into mysql.

# mysql new_database_name < db_dump.sql

You can also use the dump file to move the database to another server. If you did not use the --databases option, then you will need to create the database first.
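For example, assuming the new server is reachable as new_server_host (a hypothetical hostname) and the database has already been created there, you could load the dump over the network like this:

# mysql -h new_server_host -uroot -p new_database_name < db_dump.sql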

Dumping events, routines, triggers

mysqldump can also dump Event Scheduler events, stored procedures, and functions. If you want to include these, use these options:

--routines – dumps stored procedures and functions
--events – dumps Event Scheduler events
--triggers – dumps triggers

When you use mysqldump, triggers are dumped by default. If you want to exclude any of these objects, use the corresponding “skip” options: --skip-events, --skip-routines, or --skip-triggers (see https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html).
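For example, to dump a database along with its routines and events but without its triggers, you might run (database_name is a placeholder):

# mysqldump --routines --events --skip-triggers database_name > db_dump.sql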

Only dump table definitions

If you want to just dump the CREATE TABLE statements without any data, use the --no-data option.

# mysqldump --no-data database_name > db_definitions_dump.sql

You might want to add the --routines and --events options to include stored routines and event definitions.

# mysqldump --no-data --routines --events database_name > db_definitions_dump.sql

Only dump the data

If you want to just dump the data without any table definitions, you can use the --no-create-info option:

# mysqldump --no-create-info database_name > db_data_dump.sql

Using mysqldump to test a new version of MySQL

Let’s say that you need to upgrade to a new version of mysql. In order to test to see if your database objects are handled properly by the new version, you can simply dump the data definitions and import them into the new version of MySQL (preferably on a separate computer).

On the computer with the old version of MySQL:

mysqldump --all-databases --no-data --routines --events > db_definitions.sql

Then, on the upgraded server, you can just import this dump file:

mysql -uroot -p < db_definitions.sql

This will help you spot any potential incompatibilities with the new version of MySQL. If you don’t receive any errors, you can then dump the data and load into the new server. Be sure to run some test queries on the new upgraded server as well.
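If the definitions import cleanly, a data-only dump and load might look like this (a sketch; database_name is a placeholder, and you would add whatever connection options you normally use):

mysqldump --no-create-info database_name > db_data.sql
mysql -uroot -p database_name < db_data.sql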

Point-in-time recovery using mysqldump and the binary logs

The MySQL binary logs (binlogs) contain all of the SQL statements or “events” that could change the database (I say “could” because a delete statement that does not delete any rows would still be entered into the binary log – unless you are using row-based logging).

For more information about the binary log, see: http://dev.mysql.com/doc/refman/5.6/en/binary-log.html.

Since the binlog contains all of the events that happen to the database, you can use the binlog to apply these same changes to a different database. If you started your MySQL instance with the binlogs enabled, and you have never purged or deleted any of the binlogs, then the binlogs contain all of the SQL statements for all of the data that is in your database. The binlog itself is like a backup of your database.

If you want to use the binary logs in addition to mysqldump to restore your database, you need to have the binary logs (binlogs) enabled. There are many options for the binlogs (see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html), but the only two that you really need for this example are:

 --log-bin[=base_name]
 --log-bin-index[=file_name]

One other option is --binlog-format. You can set this value to STATEMENT (the default), ROW, or MIXED. For more information about these options, see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#sysvar_binlog_format.

These variables need to go into your my.cnf or my.ini file under the [mysqld] section, and this will require a restart of mysqld.
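A minimal example of what that section might look like (the base name mysql-bin is just a common convention):

[mysqld]
log-bin=mysql-bin
log-bin-index=mysql-bin.index
binlog-format=MIXED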

Once you have the binary log enabled, you will need to do a few things differently when you use mysqldump. You will need to:

- flush the tables and place a READ lock on the tables
- check to see what binlog is being used
- check the position of the binlog
- dump your data with mysqldump
- release the lock

By placing a read lock on the tables, you are stopping anyone from modifying the data in the database. The binlog file name and position allow you to use the binary logs to restore any statements that happened after the mysqldump. Open two terminal windows – one with a MySQL prompt, and another with a root prompt:

In the MySQL prompt, issue the READ lock and SHOW MASTER STATUS:

mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.47 sec)

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000008
         Position: 191
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Now you are ready to dump the database with whatever options you need:

# mysqldump --all-databases > db_000008_191_dump.sql

Once the dump has finished, you can release the lock:

mysql> UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)

It is extremely important that you write down the binlog file and position from the SHOW MASTER STATUS statement and somehow associate it with the dump file. One way to do this is to insert the binlog file name and position into the dump file name. In my example above, I did this by adding both to the dump file name db_000008_191_dump.sql.
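Another way to capture that information, while the read lock is still held in the other session, is to save the SHOW MASTER STATUS output to a small text file that you keep next to the dump file (a sketch; adjust the credentials and file name to your setup):

# mysql -uroot -p -e "SHOW MASTER STATUS\G" > db_dump_master_status.txt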

When you have to restore the database, you will need to load the dump file first, and then apply the binlog(s) to the database.

Let’s assume that we need to restore the entire database. First, we will import the dump file:

# mysql -uroot -p < db_000008_191_dump.sql

Next, you will need to load the information from the binlog(s). To load information from the binlogs, you need to use the mysqlbinlog utility. You can check your MySQL data directory to see how many binlogs you have used since the one that was in the SHOW MASTER STATUS statement:

$ ls -l mysql-bin*
-rw-rw----  1 mysql  wheel     67110 Apr  4 16:22 mysql-bin.000001
-rw-rw----  1 mysql  wheel   1230893 Apr  4 16:24 mysql-bin.000002
-rw-rw----  1 mysql  wheel  13383907 Apr  4 17:03 mysql-bin.000003
-rw-rw----  1 mysql  wheel  13383907 Apr  4 19:03 mysql-bin.000004
-rw-rw----  1 mysql  wheel  13383907 Apr  4 19:07 mysql-bin.000005
-rw-rw----  1 mysql  wheel  13383907 Apr 18 16:48 mysql-bin.000006
-rw-rw----  1 mysql  wheel  13383907 Apr 21 13:37 mysql-bin.000007
-rw-rw----  1 mysql  wheel  13383907 Apr 21 13:37 mysql-bin.000008
-rw-rw----  1 mysql  wheel    154847 Apr 21 13:37 mysql-bin.000009
-rw-rw----  1 mysql  wheel       171 Apr 21 13:37 mysql-bin.index

You can also just look at the mysql-bin.index file (located in your MySQL data directory), which contains a list of all of the binary files in use:

 # cat mysql-bin.index
./mysql-bin.000001
./mysql-bin.000002
./mysql-bin.000003
./mysql-bin.000004
./mysql-bin.000005
./mysql-bin.000006
./mysql-bin.000007
./mysql-bin.000008
./mysql-bin.000009

In this example, we will need to apply the changes from the binlog file mysql-bin.000008 after position 191, and then all of the mysql-bin.000009 binlog. You will need to add the correct data directory PATH to your mysqlbinlog statement.

mysqlbinlog --start-position=191 $DATA_DIR_PATH/mysql-bin.000008 | mysql -u root -p

After you have inserted all of mysql-bin.000008 after position 191, you can insert the entire mysql-bin.000009 binlog file:

mysqlbinlog $DATA_DIR_PATH/mysql-bin.000009 | mysql -u root -p

Note: During the restore process, you do not want anyone inserting any data into the database.

Your database should now be back to the state when the database crashed or was deleted. It isn’t mandatory, but it is also a good idea to copy the binlogs to a separate location as part of your backup routine. You can use mysqlbinlog to do this as well – see: http://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html.
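For example, mysqlbinlog can copy binary logs from a running server to another machine using its remote-backup options (a sketch; master_host and the credentials are placeholders):

# mysqlbinlog --read-from-remote-server --host=master_host --user=root -p --raw mysql-bin.000008 mysql-bin.000009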

For more information on using the binary logs for point-in-time recovery, see https://dev.mysql.com/doc/refman/5.6/en/point-in-time-recovery.html. There are a lot of other options for using binlogs. The best option for backing up and restoring your MySQL database is to use the MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition subscription, which includes 24×7 MySQL Support and the other Enterprise features.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Installing and testing the MySQL Enterprise Audit plugin

MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.

(from http://www.mysql.com/products/enterprise/)

MySQL Enterprise Audit provides an easy to use, policy-based auditing solution that helps organizations to implement stronger security controls and to satisfy regulatory compliance.

As more sensitive data is collected, stored and used online, database auditing becomes an essential component of any security strategy. To guard against the misuse of information, popular compliance regulations including HIPAA, Sarbanes-Oxley PDF, and the PCI Data Security Standard require organizations to track access to information.


To meet these demands, organizations must be able to produce an audit trail of information to help track who does what to which piece of data. This includes login and logoff attempts, attempts to access a database or a table, changes to database schema and much more.

MySQL Enterprise Audit gives DBAs the tools they need to add auditing compliance to their new and existing applications by enabling them to:

  • Dynamically enable/disable audit stream
  • Implement policies that log all or selected login or query activities
  • Automatically rotate audit log files based on size
  • Integrate XML-based audit log stream with MySQL, Oracle and other third party solutions

(from http://www.mysql.com/products/enterprise/audit.html)

MySQL Enterprise Audit is implemented using a server plugin. MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out of box, easy to use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines.

When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.

(from http://dev.mysql.com/doc/refman/5.6/en/mysql-enterprise-audit.html)

Installing and testing the MySQL Enterprise Audit plugin
Manual: http://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html

Check that Enterprise Edition is installed:

The audit log plugin requires MySQL Enterprise Edition, and is not available in the Community Edition. After you have installed MySQL Enterprise Edition from http://edelivery.oracle.com, you can check from mysql to make sure that you have the correct version:

mysql> SHOW VARIABLES LIKE 'version%';
+-------------------------+---------------------------------------------------------+
| Variable_name           | Value                                                   |
+-------------------------+---------------------------------------------------------+
| version                 | 5.6.14-enterprise-commercial-advanced-log               |
| version_comment         | MySQL Enterprise Server - Advanced Edition (Commercial) |
| version_compile_machine | x86_64                                                  |
| version_compile_os      | osx10.7                                                 |
+-------------------------+---------------------------------------------------------+
4 rows in set (0.00 sec)

Before you attempt to install the plugin, you may verify that the audit log plugin is in the plugin directory:

In the my.cnf or my.ini file, check for the location of the plugin directory (plugin_dir).

[mysqld]
plugin_dir=/path/to/plugin/directory

If the plugin directory value is not located in the my.cnf or my.ini file, check the location from within mysql:

mysql> SHOW VARIABLES like 'plugin_dir';
+---------------+------------------------------+
| Variable_name | Value                        |
+---------------+------------------------------+
| plugin_dir    | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+
1 row in set (0.00 sec)

Check to see if the audit log plugin (audit_log.so) is in the plugin directory:

$ ls -l /usr/local/mysql/lib/plugin/audit_log.so
-rwxr-xr-x+ 1 _mysql  wheel  38828 Sep 10 03:58 /usr/local/mysql/lib/plugin/audit_log.so

Installing the Enterprise Audit plugin:

To load the plugin at server startup, use the --plugin-load option to name the object file that contains the plugin. With this plugin-loading method, the option must be given each time the server starts. You may also put the following lines in your my.cnf file:

[mysqld]
plugin-load=audit_log.so

If object files have a suffix different from .so on your system, substitute the correct suffix (for example, .dll on Windows). Alternatively, to register the plugin at runtime, use this statement (changing the suffix as necessary):

mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';

INSTALL PLUGIN loads the plugin, and also registers it in the mysql.plugin table to cause the plugin to be loaded for each subsequent normal server startup.
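You can confirm the registration by querying that table directly (a quick check; your output may differ):

mysql> SELECT * FROM mysql.plugin WHERE name = 'audit_log';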

If the plugin is loaded with --plugin-load or has been previously registered with INSTALL PLUGIN, you can use the --audit-log option at server startup to control plugin activation. For example, to load the plugin and prevent it from being removed at runtime, use these options:

[mysqld]
plugin-load=audit_log.so
audit-log=FORCE_PLUS_PERMANENT

If you want to prevent the server from running without the audit plugin, use --audit-log with a value of FORCE or FORCE_PLUS_PERMANENT to force server startup to fail if the plugin does not initialize successfully.

Optional audit variables: http://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin-options-variables.html
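For example, you might combine a few of these audit log variables in your my.cnf (the values shown are only illustrative; check the manual page above for what each one does):

[mysqld]
plugin-load=audit_log.so
audit-log=FORCE_PLUS_PERMANENT
audit_log_policy=ALL
audit_log_rotate_on_size=1073741824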

To verify that the audit_log was loaded successfully:

mysql> SHOW VARIABLES LIKE 'audit_log%';
+---------------------------+--------------+
| Variable_name             | Value        |
+---------------------------+--------------+
| audit_log_buffer_size     | 1048576      |
| audit_log_file            | audit.log    |
| audit_log_flush           | OFF          |
| audit_log_format          | OLD          |
| audit_log_policy          | ALL          |
| audit_log_rotate_on_size  | 0            |
| audit_log_strategy        | ASYNCHRONOUS |
+---------------------------+--------------+
7 rows in set (0.00 sec)

or

mysql> SELECT * FROM information_schema.PLUGINS where PLUGIN_NAME = 'audit_log'\G
*************************** 1. row ***************************
           PLUGIN_NAME: audit_log
        PLUGIN_VERSION: 0.1
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: AUDIT
   PLUGIN_TYPE_VERSION: 3.1
        PLUGIN_LIBRARY: audit_log.so
PLUGIN_LIBRARY_VERSION: 1.4
         PLUGIN_AUTHOR: Oracle
    PLUGIN_DESCRIPTION: Auditing events logger
        PLUGIN_LICENSE: PROPRIETARY
           LOAD_OPTION: ON
1 row in set (0.02 sec)

Testing Enterprise Audit

Check to see if the audit log file was created. The default location is in the MySQL data directory. To locate the data directory:

mysql> SHOW VARIABLES WHERE Variable_Name = 'datadir';
+---------------+---------------------------------------------+
| Variable_name | Value                                       |
+---------------+---------------------------------------------+
| datadir       | /volumes/wv_server_raid_01/mysql_data/data/ |
+---------------+---------------------------------------------+
1 row in set (0.00 sec)

Then list that file to see if it was created:

# ls -l /volumes/wv_server_raid_01/mysql_data/data/audit.log
-rw-rw----  1 mysql  mysql  0 Mar  6 20:07 audit.log

Now, let’s test the plugin. You will need to create a user and a test database to use: (you may need to change the permissions to fit your testing scenario)

mysql> CREATE USER 'audit_test_user'@'localhost' IDENTIFIED BY 'audittest123';
Query OK, 0 rows affected (0.49 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'audit_test_user'@'localhost';
Query OK, 0 rows affected (0.02 sec)

mysql> create database audit_test;
Query OK, 1 row affected (0.00 sec)

Once the user has been created, erase the contents of the log file so you can only see the audit trail of this user:

# > /volumes/wv_server_raid_01/mysql_data/data/audit.log

Next, login with the audit_test_user:

# mysql -uaudit_test_user -paudittest123
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 375
Server version: 5.6.14-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

(You may ignore the warning about using the password on the command line)

As the audit test user, create a simple table in the audit_test database, populate the table, and perform a select:

mysql> use audit_test;
Database changed

mysql> CREATE TABLE audit_test_table (firstname VARCHAR(20), lastname VARCHAR(20));
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO audit_test_table values ("John", "Smith");
Query OK, 1 row affected (0.47 sec)

mysql> select * from audit_test_table;
+------------+----------+
| firstname  | lastname |
+------------+----------+
| John       | Smith    |
+------------+----------+
1 row in set (0.00 sec)

You may now inspect the contents of the audit.log file (the default location is the MySQL data directory):
(this output has been manually formatted for easier viewing)


# cat /usr/local/mysql/data/audit.log
<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:49 UTC"
RECORD_ID="13376_2014-03-06T01:18:10"
NAME="Connect"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="connect"
PRIV_USER="audit_test_user"
PROXY_USER=""
DB=""/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:49 UTC"
RECORD_ID="13377_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="select @@version_comment limit 1"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:56 UTC"
RECORD_ID="13378_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="SELECT DATABASE()"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:56 UTC"
RECORD_ID="13379_2014-03-06T01:18:10"
NAME="Init DB"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS=""/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:00 UTC"
RECORD_ID="13380_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="create_table"
SQLTEXT="CREATE TABLE audit_test_table (firstname VARCHAR(20), lastname VARCHAR(20))"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:14 UTC"
RECORD_ID="13382_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="insert"
SQLTEXT="INSERT INTO audit_test_table values (&quot;John&quot;, &quot;Smith&quot;)"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:23 UTC"
RECORD_ID="13383_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="select * from audit_test_table"/>

MySQL Enterprise Audit is now configured and ready to use. To stop Enterprise Audit, issue this command:

mysql> UNINSTALL PLUGIN audit_log;

This statement will fail if the audit-log=FORCE_PLUS_PERMANENT option was used.

Removing the test objects

To remove the audit_test_user user and drop the audit_test database:

DROP USER 'audit_test_user'@'localhost';
FLUSH PRIVILEGES;
DROP DATABASE audit_test;

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

MySQL Enterprise Monitor – send advisor events to your chat client with Perl and Jabber

MySQL Enterprise Monitor (MEM) is part of the MySQL Enterprise Edition, and MEM provides real-time visibility into the performance and availability of all your MySQL databases. MEM and the MySQL Query Analyzer continuously monitor your databases and alert you to potential problems before they impact your system. It’s like having a “Virtual DBA Assistant” at your side to recommend best practices to eliminate security vulnerabilities, improve replication, optimize performance and more. As a result, the productivity of your developers, DBAs and System Administrators is improved significantly.

With MEM, you have a couple of notification options for receiving information when MEM has received an event alert. An event alert is a “significant deviation from the baseline performance trends, and best-practice Advisors recommended changes to configuration and variable settings to improve performance”. From: http://www.mysql.com/products/enterprise/monitor.html

You may choose to receive these alerts via email or SNMP. I am not going to go over how to set up these alerts – but instructions may be found in the MySQL Enterprise Monitor documentation.

I have MEM installed on my home server, and since I have been using it for a while, I have been able to tune my instances and databases so that I rarely receive any alerts. I normally receive these alerts via email, but I don’t always check email for this particular account during the day. Since most of my colleagues are in other parts of the world, we use a Jabber chat client for quick communications during the day. I wanted to figure out a way for me to receive a chat message whenever MEM had an alert. For my chat client, I use Adium – an open-source multi-protocol instant messaging client for Mac OS X that supports MSN, Jabber, Yahoo! and other networks. But this should work with any XMPP-based chat client application.

You will probably want to create a new POP3 email address for these alerts, as this script will delete the email messages from the server. If you want to keep an email copy of these alerts, you may add an additional email address to the alert notification group. If you use a free email service (like gmail), remember that it has a daily limit of the number of messages that you can send. I ran into this problem when testing the scripts, so I created my own email under one of my domains. I found two Perl scripts; one that acts as a POP3 client, and another that sends the Jabber message – and I combined them into one Perl script. You might need to do a little tweaking to get it to work with your service.

This is a sample email message from MEM. The first “Subject” line is the actual email subject. In MEM, you can customize what information you want in your email subject line via Email Notification Groups. I combined several parts of the email message into one message to be sent via chat.


(Email Subject Line)
(macserver01, MacServer01) - MEM WARNING Alert: User Has Rights To Database That Does Not Exist

(Email Body)

Subject: macserver01, MacServer01
Time: 2013-12-30 17:01:44 UTC (2013-12-30 12:01:44 EST)
Category: Security
Event: User Has Rights To Database That Does Not Exist
Problem Description
When a database is dropped, user privileges on the database are not automatically dropped. This has security implications as that user will regain privileges if a database with the same name is created in the future, which may not be the intended result.
Advice
Revoke privileges for all users on any databases that do not exist. The following users have privileges on databases that do not exist.

    ''@'%' on database test_%

Recommended Action
REVOKE ALL PRIVILEGES ON db_name.* FROM 'user_name'@'host_name';
Links and Further Reading
MySQL Manual: DROP DATABASE Syntax
MySQL Manual: GRANT Syntax
MySQL Manual: REVOKE Syntax
MySQL Manual: Privileges Provided by MySQL
MySQL Manual: How the Privilege System Works
Securing Your MySQL Installation
Securing a MySQL Server on Windows
Expression
%user% != THRESHOLD
Evaluated Expression

    ''@'%' on database test_% != ''

Copyright © 2005, 2013, Oracle and/or its affiliates. All rights reserved.

And here is the script. You may run it as a cron job, where it will check your email every few minutes and then send you a chat message when an email with an alert has arrived. You will need to modify the script to match your email and chat settings, and I have placed notes in the script to help guide you:

#!/usr/bin/perl -w

# POP3 client script source:
# http://forums.devshed.com/perl-programming-6/how-to-get-gmail-mail-by-mail-pop3client-in-perl-555889.html
# Author: a user named keath

# Jabber message script source: 
# http://dipinkrishna.com/blog/2010/12/perl-send-chat-message-gmail-buddy-jabber/
# Author: Dipin Krishna

use strict;
# for the email
use Mail::POP3Client;
# for the chat
use Net::Jabber;

# I was having a problem with the Debug module, so I just
# commented line 154 in /Library/Perl/5.16/Net/XMPP/Debug.pm

# this is the email address that you want to use to receive 
# the alert messages from MySQL Enterprise Monitor
my $user = 'MEMalerts@scriptingmysql.com';
my $pass = 'mypassword';

# you will need to use your POP3 server name
my $host = "pop.emailserver.com";

my $pop = new Mail::POP3Client(
	USER     => $user,
	PASSWORD => $pass,
	HOST     => $host,
	PORT     => 995,
	USESSL   => 'true',
);

# I have commented most of the print messages
# - you may uncomment them as you wish
my $count = $pop->Count();
if ($count < 0) {
	print $pop->Message();
} elsif ($count == 0) {
	print "no messages\n";
} else {
	
	#print "$count messsages\n\n";
	
	for my $i (1 .. $count) {
	
	my $subject = "";
	my $message = "";
	
		# if you want to extract data from the head of the email
		# I am extracting data from the body
		#foreach ($pop->Head($i)) {
		foreach ($pop->Body($i)) {

			#print "$_\n" if /^(From|Subject|Date):/i;
			#print "$_\n" if /^(Subject|Date):/i;
			# my $message = "$_\n" if /^(Subject):/i;

			# I am building my message so that it contains the information in this order:
			# Category, Subject, Event

			if ($_ =~ "^Subject")
			{
				#print "$_\n";
				chomp $_;
				$subject = $_;
				$subject =~ s/Subject: //;
 			}

			if ($_ =~ "^Category")
			{
				#print "$_\n";
				chomp $_;
				$message = "$_ || $subject";
				$message =~ s/Category: //;
 			}

			if ($_ =~ "^Event")
			{
				#print "$_\n";
				chomp $_;
				
				$message = "$message || $_";
				$message =~ s/Event: //;
  						
					my $sttime=time;
						
					#print "Message: $_\n";
						
					# this is my Google Talk chat user name and password
					my $username = 'mem.scripting.mysql';
					my $password = 'mypassword';
					my $to = 'my_email_address';
					my $msg = "$message";
					#print "$to: $msg\n";
						 
					my $resource = "dipin";
					my $hostname = 'talk.google.com';
					my $port = 5222;
					my $componentname = 'gmail.com';
					my $Contype = 'tcpip';
					my $tls = 1;
						 
					my $Con = new Net::Jabber::Client();
					$Con->SetCallBacks(presence=>\&presence,
					message=>\&message );
						 
					my $status = $Con->Connect(
					hostname => $hostname, port => $port,
					componentname => $componentname,
					connectiontype => $Contype, tls => $tls);
						 
					if (!(defined($status))) {
						print "ERROR:  XMPP connection failed.\n";
						print "        ($!)\n";
						exit(0);
					} 
						 
					# Change hostname
					my $sid = $Con->{SESSION}->{id};
					$Con->{STREAM}->{SIDS}->{$sid}->{hostname} = $componentname;
						 
					# Authenticate
						
					#my @result = $Con->AuthSend(
					my @result = $Con->AuthIQAuth(
					username => $username, password => $password,
					resource => $resource);
						
					#print "Result:  $result[0] $result[1]\n";
						 
					if ($result[0] ne "ok") {
						print "ERROR: Authorization failed: $result[0] - $result[1]\n";
					}
					else
					{
						#print "Logged in Sucessfull!\n";
						$Con->PresenceSend(show=>"Available");
						#print "Sending Message!\n";
						$Con->MessageSend(to=>"$to",
						subject=>"Test",
						body=>"$msg\n",
						priority=>10);
					}
						
						# END send Jabbar message
 						# # # # # # # # # # # # # # # # # # # # # # # # # # #

				# this deletes the message from the server
				$pop->Delete( $i );
				
				# if you only want to send one email message as a test, 
				# uncomment this line
				exit;
			}
		}
		print "\n";
	}
}

$pop->Close();

exit;
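
If you run the script from cron as mentioned above, the crontab entry might look something like this (the script path and log file name are hypothetical):

*/5 * * * * /usr/local/bin/mem_alerts_to_jabber.pl >> /var/log/mem_alerts_to_jabber.log 2>&1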

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 1 of 2

Be sure to check out my other two posts on mysqldump:
- Scripting Backups of MySQL with Perl via mysqldump
- Splitting a MySQL Dump File Into Smaller Files Via Perl

Part 1 of 2: (part two)
If you have used MySQL for a while, you have probably used mysqldump to backup your database. In part one of this blog, I am going to show you how to create a simple full and partial backup using mysqldump. In part two, I will show you how to use MySQL Enterprise Backup (which is the successor to the InnoDB Hot Backup product). MySQL Enterprise Backup allows you to backup your database while it is online and it keeps the database available to users during backup operations (you don’t have to take the database offline or lock any databases/tables).

This post will deal with mysqldump. For those of you that aren’t familiar with mysqldump:

The mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

The best feature about mysqldump is that it is easy to use. The main problem with using mysqldump occurs when you need to restore a database. When you execute mysqldump, the database backup (output) is an SQL file that contains all of the necessary SQL statements to restore the database – but restoring requires that you execute these SQL statements to essentially rebuild the database. Since you are recreating your database, the tables and all of your data from this file, the restoration procedure can take a long time to execute if you have a very large database.

There are a lot of features and options with mysqldump – (a complete list is here). I won’t review all of the features, but I will explain some of the ones that I use.

If you have InnoDB tables (InnoDB is the default storage engine as of MySQL 5.5 – replacing MyISAM), when you use mysqldump you will want to use the option --single-transaction or issue the command FLUSH TABLES WITH READ LOCK; in a separate terminal window before you use mysqldump. You will need to release the lock after the dump has completed with the UNLOCK TABLES; command. Either option (--single-transaction or FLUSH TABLES WITH READ LOCK;) acquires a global read lock on all tables at the beginning of the dump. As soon as this lock has been acquired, the binary log coordinates are read and the lock is released. If long-updating statements are running when the FLUSH statement is issued, the MySQL server may get stalled until those statements finish. After that, the dump becomes lock-free and does not disturb reads and writes on the tables. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.
(from http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html)
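As a sketch of the lock-based approach, you would hold the lock open in one mysql session while the dump runs from the shell, and release it afterward:

mysql> FLUSH TABLES WITH READ LOCK;

(run mysqldump from another terminal window while the lock is held)

mysql> UNLOCK TABLES;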

Here is the command to use mysqldump to simply backup all of your databases (assuming you have InnoDB tables). This command will create a dump (backup) file named all_databases.sql.

mysqldump --all-databases --single-transaction --user=root --pass > all_databases.sql

After you hit return, you will have to enter your password. You can include the password after the --pass option (example: --pass=my_password), but this is less secure and you will get the following warning:

Warning: Using a password on the command line interface can be insecure.

Here is some information about the options that were used:

--all-databases - this dumps all of the tables in all of the databases
--user - The MySQL user name you want to use for the backup
--pass - The password for this user.  You can leave this blank or include the password value (which is less secure)
--single-transaction - for InnoDB tables

If you are using Global Transaction Identifiers (GTIDs) with InnoDB (GTIDs aren’t available with MyISAM), you will want to use the --set-gtid-purged=OFF option. Then you would issue this command:

mysqldump --all-databases --single-transaction --set-gtid-purged=OFF --user=root --pass > all_databases.sql

Otherwise you will see this error:

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.

You can also execute a partial backup of all of your databases. This example will be a partial backup because I am not going to back up the default databases for MySQL (which are created during installation) – mysql, test, PERFORMANCE_SCHEMA and INFORMATION_SCHEMA.

Note: mysqldump does not dump the INFORMATION_SCHEMA database by default. To dump INFORMATION_SCHEMA, name it explicitly on the command line and also use the --skip-lock-tables option.

mysqldump never dumps the performance_schema database.

mysqldump also does not dump the MySQL Cluster ndbinfo information database.

Before MySQL 5.6.6, mysqldump does not dump the general_log or slow_query_log tables for dumps of the mysql database. As of 5.6.6, the dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

If you encounter problems backing up views due to insufficient privileges, see Section E.5, “Restrictions on Views” for a workaround.
(from: http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html)

To do a partial backup, you will need a list of the databases that you want to backup. You may retrieve a list of all of the databases by simply executing the SHOW DATABASES command from a mysql prompt:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| comicbookdb        |
| coupons            |
| mysql              |
| performance_schema |
| scripts            |
| test               |
| watchdb            |
+--------------------+
8 rows in set (0.00 sec)

In this example, since I don’t want to backup the default mysql databases, I am only going to backup the comicbookdb, coupons, scripts and watchdb databases. I am going to use the following options:

--databases - This allows you to specify the databases that you want to backup.  You can also specify certain tables that you want to backup.  If you want to do a full backup of all of the databases, then leave out this option
--add-drop-database - This will insert a DROP DATABASE statement before each CREATE DATABASE statement.  This is useful if you need to import the data to an existing MySQL instance where you want to overwrite the existing data.  You can also use this to import your backup onto a new MySQL instance, and it will create the databases and tables for you.
--triggers - this will include the triggers for each dumped table
--routines - this will include the stored routines (procedures and functions) from the dumped databases
--events - this will include any events from the dumped databases
--set-gtid-purged=OFF - since I am using replication on this database (it is the master), I like to include this in case I want to create a new slave using the data that I have dumped.  This option enables control over global transaction identifiers (GTID) information written to the dump file, by indicating whether to add a SET @@global.gtid_purged statement to the output.
--user - The MySQL user name you want to use
--pass - Again, you can add the actual value of the password (ex. --pass=mypassword), but it is less secure than typing in the password manually.  This is useful for when you want to put the backup in a script, in cron or in Windows Task Scheduler.
--single-transaction - Since I am using InnoDB tables, I will want to use this option.

Here is the command that I will run from a prompt:

mysqldump --databases comicbookdb coupons scripts watchdb --single-transaction --set-gtid-purged=OFF --add-drop-database --triggers --routines --events --user=root --pass > partial_database_backup.sql

I will need to enter my password on the command line. After the backup has completed, if your backup file isn’t too large, you can open it and see the actual SQL statements that will be used if you decide that you need to recreate the database(s). If you accidentally dump all of the databases into one file, and you want to separate the dump file into smaller files, see my post on using Perl to split the dump file.

For example, here is the section of the dump file (partial_database_backup.sql) for the comicbookdb database (without the table definitions). (I omitted the headers from the dump file.)

--
-- Current Database: `comicbookdb`
--

/*!40000 DROP DATABASE IF EXISTS `comicbookdb`*/;

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `comicbookdb` /*!40100 DEFAULT CHARACTER SET latin1 */;

USE `comicbookdb`;

--
-- Table structure for table `comics`
--

DROP TABLE IF EXISTS `comics`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `comics` (
  `serial_id` int(7) NOT NULL AUTO_INCREMENT,
  `date_time_added` datetime NOT NULL,
  `publisher_id` int(6) NOT NULL,
....

If you are using the dump file to create a slave server, you can use the --master-data option, which includes the CHANGE MASTER information, which looks like this:

--
-- Position to start replication or point-in-time recovery from
--

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000013', MASTER_LOG_POS=79338;

If GTIDs are enabled on the server and you did not use the --set-gtid-purged=OFF option, the dump will also contain the value of the global transaction identifiers (GTIDs):

--
-- GTID state at the beginning of the backup
--

SET @@GLOBAL.GTID_PURGED='82F20158-5A16-11E2-88F9-C4A801092ABB:1-168523';

You may also test your backup without exporting any data by using the --no-data option. This will show you all of the information for creating the databases and tables, but it will not export any data. This is also useful for recreating a blank database on the same or on another server.

When you export your data, mysqldump will create INSERT INTO statements to import the data into the tables. However, the default is for the INSERT INTO statements to contain multiple-row INSERT syntax that includes several VALUES lists. This allows for a quicker import of the data. But, if you think that your data might be corrupt, and you want to be able to isolate a given row of data – or if you simply want to have one INSERT INTO statement per row of data, then you can use the --skip-extended-insert option. If you use the --skip-extended-insert option, importing the data will take much longer to complete, and the backup file size will be larger.
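To illustrate with a hypothetical table t and made-up rows, the default extended-insert output groups several rows into one statement, while --skip-extended-insert writes one statement per row:

-- default (extended insert)
INSERT INTO `t` VALUES (1,'John','Smith'),(2,'Jane','Doe');

-- with --skip-extended-insert
INSERT INTO `t` VALUES (1,'John','Smith');
INSERT INTO `t` VALUES (2,'Jane','Doe');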

Importing and restoring the data is easy. To import the backup file into a new, blank instance of MySQL, you can simply use the mysql command to import the data:

mysql -uroot -p < partial_database_backup.sql

Again, you will need to enter your password or you can include the value after the -p option (less secure).

There are many more options that you can use with mysqldump (see http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html). The main thing to remember is that you should back up your data on a regular basis, and move a copy of the backup file off the MySQL server.

Finally, here is a Perl script that I use in cron to backup my databases. This script allows you to specify which databases you want to backup via the mysql_bak.config file. This config file is simply a list of the databases that you want to backup, with an option to ignore any databases that are commented out with a #. This isn’t a secure script, as you have to embed the MySQL user password in the script.

#!/usr/bin/perl
# Perform a mysqldump on all the databases specified in the dbbackup.config file

use warnings;
use File::Basename;

# set the directory where you will keep the backup files
$backup_folder = '/Users/tonydarnell/mysqlbak';

# the config file is a text file with a list of the databases to backup
# this should be in the same location as this script, but you can modify this
# if you want to put the file somewhere else
my $config_file = dirname($0) . "/mysql_bak.config";

# example config file
# You may use a comment to bypass any database that you don't want to backup
# # Unwanted_DB    (commented - will not be backed up)
# twtr
# cbgc

# retrieve a list of the databases from the config file
my @databases = removeComments(getFileContents($config_file));

# change to the directory of the backup files.
chdir($backup_folder) or die("Cannot go to folder '$backup_folder'");

# grab the local time variables
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
$year += 1900;
$mon++;
#Zero padding
$mday = '0'.$mday if ($mday<10);
$mon = '0'.$mon if ($mon<10);

$hour = "0$hour" if $hour < 10;
$min = "0$min" if $min  $folder/$file.Z`;

	print "Done\n";
}
print "Done\n\n";

# this subroutine simply creates an array of the list of the databases

sub getFileContents {
	my $file = shift;
	open (FILE,$file) || die("Can't open '$file': $!");
	my @lines=<FILE>;
	close(FILE);

	return @lines;
}

# remove any commented tables from the @lines array

sub removeComments {
	my @lines = @_;

	@cleaned = grep(!/^\s*#/, @lines); #Remove Comments
	@cleaned = grep(!/^\s*$/, @cleaned); #Remove Empty lines

	return @cleaned;
}

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

MySQL Replication – Creating a New Master/Slave Topology with or without Virtual Machines

In my last few posts, I wrote about “How to install MySQL replication using GTIDs” (Part One, Part Two). In this post, I will show you how to install MySQL 5.6 and set up replication between two MySQL servers the “old fashioned way” using the binary log and binary log position.

I am going to create some virtual machines instead of using individual servers. But, you can also use these instructions to create a MySQL replication (master/slave) setup with real servers.

Here is how replication works. On the master server, when there are updates (inserts, updates, deletes, alter, etc.) to the database, MySQL will write the appropriate information to the binlog (binary log), depending upon which replication method you choose.

From: http://dev.mysql.com/doc/refman/5.6/en/binary-log.html

The binary log contains “events” that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes (for example, a DELETE which matched no rows), unless row-based logging is used. The binary log also contains information about how long each statement took that updated data. The binary log has two important purposes:

For replication, the binary log on a master replication server provides a record of the data changes to be sent to slave servers. The master server sends the events contained in its binary log to its slaves, which execute those events to make the same data changes that were made on the master. See Section 16.2, “Replication Implementation“.

Certain data recovery operations require use of the binary log. After a backup has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup. See Section 7.5, “Point-in-Time (Incremental) Recovery Using the Binary Log“.

The binary log is not used for statements such as SELECT or SHOW that do not modify data. To log all statements (for example, to identify a problem query), use the general query log. See Section 5.2.3, “The General Query Log“.

Running a server with binary logging enabled makes performance slightly slower. However, the benefits of the binary log in enabling you to set up replication and for restore operations generally outweigh this minor performance decrement.

From: http://dev.mysql.com/doc/refman/5.6/en/replication-formats.html

Replication works because events written to the binary log are read from the master and then processed on the slave. The events are recorded within the binary log in different formats according to the type of event. The different replication formats used correspond to the binary logging format used when the events were recorded in the master’s binary log. The correlation between binary logging formats and the terms used during replication are:

  • Replication capabilities in MySQL originally were based on propagation of SQL statements from master to slave. This is called statement-based replication (often abbreviated as SBR), which corresponds to the standard statement-based binary logging format. In older versions of MySQL (5.1.4 and earlier), binary logging and replication used this format exclusively.
  • Row-based binary logging logs changes in individual table rows. When used with MySQL replication, this is known as row-based replication (often abbreviated as RBR). In row-based replication, the master writes events to the binary log that indicate how individual table rows are changed.
  • The server can change the binary logging format in real time according to the type of event using mixed-format logging. When the mixed format is in effect, statement-based logging is used by default, but automatically switches to row-based logging in particular cases as described later. Replication using the mixed format is often referred to as mixed-based replication or mixed-format replication. For more information, see Section 5.2.4.3, “Mixed Binary Logging Format”.

Once a slave has the binlog and binlog position, the slave will connect to the master and retrieve all of the binlog entries from the specified binlog file and after a specified binlog position. The slave creates a thread (IO thread) that connects to the master server. The master server then creates a binlog dump thread, and sends the data to the slave’s IO thread. The slave will in effect retrieve the data that was written to the specified binary log starting after a specific binlog position and then write it to the slave’s relay log (see IO thread states). The slave will then take the data from the relay log and apply it to the slave’s database via the SQL thread. If this isn’t confusing enough, a simple way to picture the flow is: master binary log → slave IO thread → slave relay log → slave SQL thread → slave database.

For this example, I am starting with a fresh install on one virtual machine running Mac OS X 10.8. MySQL doesn’t have a version specific to 10.8, but I found that the version for 10.7 will work. I am not going to cover how to install MySQL, but I will show you what you need to change in order to make MySQL replication work.

MySQL uses a configuration file (my.cnf or my.ini for Windows) for all of the database variables that will be read by MySQL at startup. After I install MySQL – but before I start MySQL, I will need to choose a copy of the my.cnf file to use, and then modify that file to match my specific use of MySQL. For this example, I am going to just copy the my.cnf file in my MySQL home directory to /etc/my.cnf and then edit the /etc/my.cnf file. You may use the following variables for your my.cnf options file. There are quite a few variables in this file that you will need to change or add. Since the version of the my.cnf file that I copied is mostly blank, here is what I have in my my.cnf file: (And yes, these are not all of the variables that you might need – these are just the basic variables for this example)

[mysqld_safe]
socket = /tmp/mysql.sock

[client]
port=3306

[mysqld]
port = 3306
user = mysql
tmpdir = /tmp
socket = /tmp/mysql.sock
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
log_error = /usr/local/mysql/error.log

Since InnoDB is the default storage engine for MySQL (as of 5.5), I want to provide some InnoDB-specific variables:

innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:25M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data

innodb_log_files_in_group = 2
innodb_log_file_size = 25M
innodb_buffer_pool_size = 16M

And then to enable replication, MySQL will need to write all of the insert, update, delete, etc. statements to the binary log (binlog). I also need to choose the binlog format. I will use “mixed”, which allows MySQL to determine whether to use row or statement-based binlog format for each separate SQL statement. The slave will retrieve these statements from the master and then write them to the slave’s relay log before applying these statements to the slave database.

log-bin=mysql-bin
binlog_format=mixed

I need to give each server a unique ID. I always give my master server the id of one (1), and then for the slaves, I will assign a sequential number starting at two for the server-id values.

server-id = 1

If you want to possibly use the slave for failover (where you promote the slave to be the master), then you will need to log the updates that are on the slave to the slave’s binary log as well. This will allow any other slaves to use this server as a master. Even though this is a master, if it ever becomes a slave I might want to re-promote it to master at a future date.

log-slave-updates

And in order to enable auto crash recovery on the slaves, enable:

relay-log-recovery
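Putting these together, a slave’s [mysqld] section might include something like this (a sketch; the master would use server-id = 1 and would not need relay-log-recovery):

server-id = 2
log-bin = mysql-bin
binlog_format = mixed
log-slave-updates
relay-log-recovery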

You may now run the installer for the version of MySQL that you are installing. After the installation, you will want to execute the mysql_install_db script to initialize the system tables (you can also refer to the post-installation procedures on the MySQL web site). From the MySQL base directory, run the script:

root@macserver01: # ./scripts/mysql_install_db
Installing MySQL system tables...OK

Filling help tables...OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

  ./bin/mysqladmin -u root password 'new-password'
  ./bin/mysqladmin -u root -h macserver01 password 'new-password'

Alternatively you can run:

  ./bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:

  cd . ; ./bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl

  cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the ./bin/mysqlbug script!

The latest information about MySQL is available on the web at

http://www.mysql.com

Support MySQL by buying support/licenses at http://shop.mysql.com

WARNING: Found existing config file ./my.cnf on the system.
Because this file might be in use, it was not replaced,
but was used in bootstrap (unless you used --defaults-file)
and when you later start the server.
The new default config file was created as ./my-new.cnf,
please compare it with your file and take the changes you need.

WARNING: Default config file /etc/my.cnf exists on the system
This file will be read by default by the MySQL server
If you do not want to use this, either remove it, or use the
--defaults-file argument to mysqld_safe when starting the server

If you ran this script as root, you will need to change the ownership of the mysql-bin and mysql-bin.index files in the mysql data directory to the mysql Unix user.
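For example (the path assumes the data directory from the my.cnf above):

# chown mysql /usr/local/mysql/data/mysql-bin.*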

Now you can start the MySQL server (if it isn’t already started). When you executed the mysql_install_db script, it created the grant tables. You are going to want to change the root password and delete any anonymous accounts. See Securing the Initial MySQL Accounts for specific information for your operating system.

An easy way to change the root password is to use mysqladmin from a command prompt:

$ ./bin/mysqladmin -u root password 'new-password'

Right after you change the root password, you will want to test the new root password by logging in with mysql as root at a Unix prompt:

root@macserver01: $ mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2259
Server version: 5.6.9-rc-log MySQL Community Server (GPL)

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Running the mysqladmin program (above) only changed the root password for the localhost account, as we can see by querying the user table in the mysql database:

mysql> select host, user, password from user;
+-----------+------+-------------------------------------------+
| host      | user | password                                  |
+-----------+------+-------------------------------------------+
| localhost | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| 127.0.0.1 | root |                                           |
| ::1       | root |                                           |
| localhost |      |                                           |
+-----------+------+-------------------------------------------+
4 rows in set (0.00 sec)

Now that you have logged in, you can change your password for all of your root accounts:

mysql> UPDATE mysql.user SET Password=PASSWORD('new-password') WHERE User='root';
Query OK, 2 rows affected (0.00 sec)
Rows matched: 3  Changed: 2  Warnings: 0

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> select host, user, password from user;
+-----------+------+-------------------------------------------+
| host      | user | password                                  |
+-----------+------+-------------------------------------------+
| localhost | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| 127.0.0.1 | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| ::1       | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| localhost |      |                                           |
+-----------+------+-------------------------------------------+
4 rows in set (0.00 sec)

To find and delete the anonymous accounts, first list all of the accounts. From a mysql prompt:

mysql> use mysql;
Database changed
mysql> SELECT user, host FROM user;
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | ::1       |
|      | localhost |
| root | localhost |
+------+-----------+
4 rows in set (0.00 sec)

The users that are blank are anonymous users. You can double-check the blank users with this statement:

mysql> select user, host from user where user = '';
+------+-----------+
| user | host      |
+------+-----------+
|      | localhost |
+------+-----------+
1 row in set (0.00 sec)

You may now delete the blank users:

mysql> delete from user where user = '';
Query OK, 1 row affected (0.00 sec)

These are the users that are remaining:

mysql> select user, host from user;
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | ::1       |
| root | localhost |
+------+-----------+
3 rows in set (0.00 sec)

Since we are installing MySQL for the first time on the master, you will need to create a user for replication. See Creating a User for Replication for more details, but here is a sample statement to create a replication user:

mysql> CREATE USER 'replicate'@'%.mydomain.com' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'%.mydomain.com';

mysql> select user, host from user;
+-----------+-----------+
| user      | host      |
+-----------+-----------+
| replicate | %         |
| root      | 127.0.0.1 |
| root      | ::1       |
| root      | localhost |
+-----------+-----------+
4 rows in set (0.00 sec)

You will need to make sure that your replication user matches the domain names or IP addresses, so that it has permissions to access the other server(s).
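
For example, if a slave will connect from 192.168.1.182, a more restrictive version of the replication user (a sketch; substitute your own slave address and a real password) could be created like this:

mysql> CREATE USER 'replicate'@'192.168.1.182' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.182';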

MySQL should now be up and running on your master server. If you aren’t using VM’s, you may now duplicate these same installation steps on your slave server(s) – but you must change the value of server-id in the my.cnf file of each server to a value different from the one used on your master server (and from any other slave).

If you are working with virtual machines, you will need to:

  • Stop the mysqld process
  • Stop the virtual machine (shutdown the instance)
  • Copy/duplicate the virtual machine
  • Change the IP address of the new virtual machine
  • Change the server-id in the my.cnf file of the new virtual machine
  • Change the server name used for file sharing (if file sharing is turned on)
  • Generate a new UUID for the new slave servers and edit the auto.cnf file. (more on this below)

Beginning with MySQL 5.6, the MySQL server generates a unique ID (UUID) in addition to the server-id supplied by the user. This is available as the global, read-only variable server_uuid. If you aren’t using VM’s, a unique UUID is generated automatically when you install MySQL. On the new VM copies, however, we need to generate a new UUID and then edit the auto.cnf file. You can run the select UUID(); command from the master server (or any running MySQL instance):

mysql> select UUID();
+--------------------------------------+
| UUID()                               |
+--------------------------------------+
| 1da4ab9c-7baf-11e2-930f-6a4c3f56f0b5 |
+--------------------------------------+
1 row in set (0.00 sec)

Next, edit the auto.cnf file that is in the MySQL home data directory:

# cat auto.cnf 
[auto]
server-uuid=1da4ab9c-7baf-11e2-930f-6a4c3f56f0b5

Edit the auto.cnf file with a text editor (or use vi) and change the old UUID to the new UUID (example – change 33e3daac-79e5-11e2-9862-ec1bc27a1e29 to 1da4ab9c-7baf-11e2-930f-6a4c3f56f0b5).
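
As an alternative (a sketch, assuming the data directory is /usr/local/mysql/data and the new server is stopped), you can simply remove the auto.cnf file; the server should generate a new UUID and recreate the file the next time it starts:

# cd /usr/local/mysql/data
# rm auto.cnf     # a new auto.cnf with a fresh server_uuid is written at the next startup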

After you have installed MySQL on the new slave or copied the original VM and repeated the steps above, you can check to see if your servers have unique server-id and server_uuid values. Log in to each instance of mysql:

# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.10-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use mysql;
Database changed
mysql> show variables where variable_name = 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
1 row in set (0.00 sec)

mysql> show variables where variable_name = 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | 33e3daac-79e5-11e2-9862-ec1bc27a1e29 |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

Now that we have our servers up and running, we need to tell our slave server(s) about the master server. If we have already started inserting data into the master, we need to put a read lock on the master, show the master status to get the master’s current binlog file and position, and then release the lock. You may do this on one line, with the statements separated by semi-colons, to reduce the amount of time that the lock is in place:

mysql> FLUSH TABLES WITH READ LOCK;SHOW MASTER STATUS;UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000015 |      540 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

It is important that you save the SHOW MASTER STATUS information, as you will need this for each slave.

Now we need to tell the slave where the master is located, which binlog file to use, and at which position to start. Issue this CHANGE MASTER TO command on the slave server(s) (don’t forget to change the values to match your master server):

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='master IP address',
    ->   MASTER_USER='replication user',
    ->   MASTER_PASSWORD='replication user password',
    ->   MASTER_PORT=3306,
    ->   MASTER_LOG_FILE='mysql-bin.000015',
    ->   MASTER_LOG_POS=540,
    ->   MASTER_CONNECT_RETRY=10;
Query OK, 0 rows affected, 2 warnings (0.27 sec)

We have two warnings from the above statement. Let’s look at the warnings:

mysql> show warnings\G
*************************** 1. row ***************************
  Level: Note
   Code: 1759
Message: Sending passwords in plain text without SSL/TLS is extremely insecure.
*************************** 2. row ***************************
  Level: Note
   Code: 1760
Message: Storing MySQL user name or password information in the master.info repository 
is not secure and is therefore not recommended. Please see the MySQL Manual for more 
about this issue and possible alternatives.
2 rows in set (0.00 sec)

The first warning, “Sending passwords in plain text…”, can be ignored. Since we are setting up replication using this method, we have to send the user name and password as plain text. The second warning explains that the information in the CHANGE MASTER TO statement is stored in a non-secure file named master.info in your MySQL data directory:

# pwd
/usr/local/mysql/data
# ls -l master.info
-rw-rw----  1 _mysql  wheel  99 Feb 20 15:26 master.info
# cat master.info
23
mysql-bin.000015
540
192.168.1.2
replicate
replicate999
3306
10
....

There are alternatives to using the master.info file: “MySQL 5.6 extends the replication START SLAVE command to enable DBAs to specify master user and password as part of the replication slave options and to authenticate the account used to connect to the master through an external authentication plugin (user defined or those provided under MySQL Enterprise Edition). With these options the user and password no longer need to be exposed in plain text in the master.info file.” (from https://blogs.oracle.com/MySQL/entry/mysql_5_6_is_a)
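
As a rough sketch of that approach (using the values from this example), the credentials can be left out of CHANGE MASTER TO and supplied when starting the slave instead, so they are not written to master.info; check the START SLAVE manual page for the exact syntax and its caveats:

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='192.168.1.2',
    ->   MASTER_PORT=3306,
    ->   MASTER_LOG_FILE='mysql-bin.000015',
    ->   MASTER_LOG_POS=540;
mysql> START SLAVE USER='replicate' PASSWORD='replicate999';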

For this example, we don’t need to worry about these errors. To begin replication, we need to start the slave:

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

We now can check the status of the slave to see if it is working as a slave, with the SHOW SLAVE STATUS\G command:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.181
                  Master_User: replicate
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: mysql-bin.000015
          Read_Master_Log_Pos: 540
               Relay_Log_File: macos-108-repl02-relay-bin.000002
                Relay_Log_Pos: 1551
        Relay_Master_Log_File: mysql-bin.000015
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 540
              Relay_Log_Space: 1735
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: 33e3daac-79e5-11e2-9862-ec1bc27a1e29
             Master_Info_File: /usr/local/mysql-advanced-5.6.10-osx10.7-x86_64/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
1 row in set (0.00 sec)

Two values in the slave status show us that our CHANGE MASTER TO statement worked:

              Master_Log_File: mysql-bin.000015
          Read_Master_Log_Pos: 540

We can now execute a statement on the master, to see if it propagates to the slave database. Let’s see what databases are on the master:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.01 sec)

When we execute the same command on the slave, we get the same results. Since we already have a test database, let’s create a table in that database. We can check to see if there are any tables in that database already:

mysql> use test;
Database changed
mysql> show tables;
Empty set (0.00 sec)

Currently there aren’t any tables in the test database. We can now create one on the master database:

mysql> CREATE TABLE `address` (
    ->   `serial_number` int(6) NOT NULL AUTO_INCREMENT,
    ->   `last_name` char(40) NOT NULL DEFAULT '',
    ->   `first_name` char(40) NOT NULL DEFAULT '',
    ->   `address_01` char(40) NOT NULL DEFAULT '',
    ->   `city` char(30) NOT NULL DEFAULT '',
    ->   `state` char(20) NOT NULL DEFAULT '',
    ->   `zip` char(11) NOT NULL DEFAULT '',
    ->   `phone` char(15) NOT NULL DEFAULT '',
    ->   PRIMARY KEY (`serial_number`)
    -> ) ENGINE=InnoDB AUTO_INCREMENT=100000 DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.07 sec)

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| address        |
+----------------+
1 row in set (0.00 sec)

And let’s take a look at the new master status:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000015 |     1808 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

So, if we have replication set up correctly, we should see this table on the slave as well. Let’s execute the same show tables statement on the slave:

mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| address        |
+----------------+
1 row in set (0.00 sec)

Replication is now up and running. You may repeat these steps for any additional slaves, whether you install them manually or copy the VM.
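
If you also want to confirm that data changes (and not just schema changes) replicate, you can insert a test row on the master and then query for it on the slave. A quick sketch using the address table created above, with made-up values:

On the master server:

mysql> INSERT INTO test.address (last_name, first_name, address_01, city, state, zip, phone)
    ->   VALUES ('Smith', 'John', '123 Main St', 'Anytown', 'GA', '30000', '555-1212');

And on the slave server:

mysql> SELECT serial_number, last_name, first_name FROM test.address;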

I can also use the MySQL Utility script mysqldbcompare to verify that both tables are the same. I wrote about mysqldbcompare in an earlier post. Just as in that post, I will create a user named “scripts” to execute the mysqldbcompare script. I only need to create the user on the master; since the CREATE USER and GRANT statements are replicated, the user will also be created on the slave.

mysql> CREATE USER 'scripts'@'%' IDENTIFIED BY 'scripts999';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'scripts'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

I can now run the mysqldbcompare script:

$ mysqldbcompare --server1=scripts:scripts999@192.168.1.181 --server2=scripts:scripts999@192.168.1.182 test:test --run-all-tests --changes-for=server2 --difftype=sql
# server1 on 192.168.1.181: ... connected.
# server2 on 192.168.1.182: ... connected.
# Checking databases test on server1 and test on server2
#
#                                                   Defn    Row     Data   
# Type      Object Name                             Diff    Count   Check  
# ------------------------------------------------------------------------- 
# TABLE     address                                 pass    pass    pass   

Databases are consistent.
#
# ...done

The SQL statement that I executed on the master to create the table “address” has been replicated on the slave, so replication is running and confirmed to be working.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Using MySQL Utilities Workbench Script mysqldbcompare To Compare Two Databases In Replication

In my last two posts, I wrote about setting up replication with MySQL 5.6 using Global Transaction Identifiers. Even when I set up replication “the old-fashioned way“, one thought always enters my mind – did all of the data copy over to the slave? And, even after the master/slave has been running for a while, I am always wondering if the data in the slave matches the master. Or did the change that I made to that table make it over to the slave? It is probably more of a case of paranoia on my part, as MySQL replication is very reliable and works really well.

A few months ago, I started writing about the MySQL Utilities. If you haven’t heard about the MySQL Utilities:

“MySQL Utilities is a package of utilities that are used for maintenance and administration of MySQL servers. These utilities encapsulate a set of primitive commands, and bundles them so they can be used to perform macro operations with a single command. MySQL Utilities may be installed via MySQL Workbench, or as a standalone package. The utilities are written in Python, available under the GPLv2 license, and are extendable using the supplied library. They are designed to work with Python 2.x greater than 2.6.” (from the introduction to MySQL Utilities page)

In order to reduce my paranoia (I will never be able to eliminate it), I can simply use the mysqldbcompare utility. The mysqldbcompare utility “compares the objects and data from two databases to find differences. It identifies objects having different definitions in the two databases and presents them in a diff-style format of choice. Differences in the data are shown using a similar diff-style format. Changed or missing rows are shown in a standard format of GRID, CSV, TAB, or VERTICAL.” (from: mysqldbcompare — Compare Two Databases and Identify Differences)

You don’t have to use mysqldbcompare to compare two databases in replication. You may compare any two databases, even if they are on the same server. But for this post, I will be comparing two databases on two separate servers in a master/slave replication topology.

The utility is fairly easy to use. You just identify the two servers and which databases you want to compare. You will need to refer to my earlier post on using the MySQL Utilities for more information on how to execute the scripts.

I will be comparing a database that is on my master server (at 192.168.1.2) and the same database that is on one of the slaves (at 192.168.1.122) connected to that master. Instead of using root to execute the scripts, I create a MySQL user named “scripts” for running them. The syntax for mysqldbcompare is fairly easy, and you can refer to the mysqldbcompare man page for more of the commands and their usage:

I need to specify the servers, user name and passwords:

--server1=scripts:scripts999@192.168.1.2 
--server2=scripts:scripts999@192.168.1.122 

The name of the databases to compare (database_server1:database_server2):

cbgc:cbgc 

Do not stop the script at the first difference that is found. Process all objects.

--run-all-tests 

Specify the server to show transformations to match the other server.

--changes-for=server2 

Specify the difference display format. Permitted format values are unified, context, differ, and sql. The default is unified.

--difftype=sql

Now that I have decided on which options to use, I can run the scripts from within the MySQL Workbench Utilities shell:

$ mysqldbcompare --server1=scripts:scripts999@192.168.1.2 \
  --server2=scripts:scripts999@192.168.1.122 \
  cbgc:cbgc --run-all-tests --changes-for=server2 --difftype=sql
# server1 on 192.168.1.2: ... connected.
# server2 on 192.168.1.122: ... connected.
# Checking databases cbgc on server1 and cbgc on server2
#
#                                                   Defn    Row     Data   
# Type      Object Name                             Diff    Count   Check  
# ------------------------------------------------------------------------- 
# TABLE     Activity_Affiliate                      pass    pass    pass    
# TABLE     FedEx_2nd_Day                           pass    pass    pass    
# TABLE     FedEx_Express_Saver                     pass    pass    pass    
# TABLE     FedEx_Ground                            pass    pass    pass    
# TABLE     FedEx_Home                              pass    pass    pass    
# TABLE     FedEx_Priority_Overnight                pass    pass    pass    
# TABLE     FedEx_Standard_Overnight                pass    pass    pass    
# TABLE     Orders                                  pass    pass    pass    
# TABLE     USPS                                    pass    pass    pass    
# TABLE     activity                                pass    pass    pass    
# TABLE     comics                                  pass    pass    pass    
# TABLE     coupons                                 pass    pass    pass    
# TABLE     customer                                pass    pass    pass    
# TABLE     giftcert                                pass    pass    pass   

Databases are consistent.
#
# ...done

The output shows that my databases are consistent. I have a rather small database, and I re-executed the script with the “time” command; here are the results:

real	0m4.519s
user	0m0.429s
sys	0m0.068s

It took about 4.5 seconds to execute on my database, which is about 25 megabytes in size. Obviously, the time will increase with the size and complexity of your database, and with the number of differences that the script finds.
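
For reference, the timing above came from simply prefixing the same command with time, along these lines:

$ time mysqldbcompare --server1=scripts:scripts999@192.168.1.2 \
    --server2=scripts:scripts999@192.168.1.122 \
    cbgc:cbgc --run-all-tests --changes-for=server2 --difftype=sql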

In order to show you what happens when the databases are not in sync, I will now stop the slave database (at 192.168.1.122), then insert a row into one of the tables in the master database (at 192.168.1.2), and re-run the script (while the slave is still stopped).

On the slave server:

mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

On the master server:

mysql> insert into `cbgc`.`activity` (`Customer_Serial_Number`, `Customer_Activity_Action`)
    ->   values ('1201201', 'Test Visit');
Query OK, 1 row affected (0.96 sec)

Now I can run the mysqldbcompare script again. The slave is still turned off:

$ mysqldbcompare --server1=scripts:scripts999@192.168.1.2 \
  --server2=scripts:scripts999@192.168.1.122 cbgc:cbgc \ 
  --run-all-tests --changes-for=server2 --difftype=sql
# server1 on 192.168.1.2: ... connected.
# server2 on 192.168.1.122: ... connected.
# Checking databases cbgc on server1 and cbgc on server2
#
#                                                   Defn    Row     Data   
# Type      Object Name                             Diff    Count   Check  
# ------------------------------------------------------------------------- 
# TABLE     Activity_Affiliate                      pass    pass    pass    
# TABLE     FedEx_2nd_Day                           pass    pass    pass    
# TABLE     FedEx_Express_Saver                     pass    pass    pass    
# TABLE     FedEx_Ground                            pass    pass    pass    
# TABLE     FedEx_Home                              pass    pass    pass    
# TABLE     FedEx_Priority_Overnight                pass    pass    pass    
# TABLE     FedEx_Standard_Overnight                pass    pass    pass    
# TABLE     Orders                                  pass    pass    pass    
# TABLE     USPS                                    pass    pass    pass    
# TABLE     activity                                FAIL    FAIL    FAIL    
#
# Transformation for --changes-for=server2:
#

ALTER TABLE cbgc.activity 
  DROP PRIMARY KEY, 
  ADD PRIMARY KEY(serial_id), 
AUTO_INCREMENT=7542;

# Row counts are not the same among cbgc.activity and cbgc.activity.
#
# Transformation for --changes-for=server2:
#

INSERT INTO cbgc.activity (serial_id, Customer_Serial_Number, Customer_Activity_Action, 
Customer_Activity_Date_Time, Customer_Activity_Info, Notes, HTTP_REFERER) 
VALUES('7541', '1201201', 'Test Visit', NULL, NULL, NULL, NULL);


# TABLE     comics                                  pass    pass    pass    
# TABLE     coupons                                 pass    pass    pass    
# TABLE     customer                                pass    pass    pass    
# TABLE     giftcert                                pass    pass    pass   

# Database consistency check failed.
#
# ...done

The script alerted me to two issues. It tells me that I have one row of data missing (from the insert statement), and it also notices that the auto-increment value on the activity table needs to be updated as well. If I simply run the insert statement, my auto-increment value will be incorrect. So, I need to run the auto-increment change first, and then I can execute the insert statement.

There are other choices of output that you can use besides sql – unified, context and differ. You can try each one and see which one will fit your needs. I prefer the sql output.

Since I stopped the slave, I can just start it again and it will catch up with the master. But, if you are working with an active master, you might want to put a read lock on the database, run the script again, and then make your changes. This is especially true if the changes involve something dynamic like a primary key or auto-increment column, as those values could change while you are trying to run the statements.

Let’s see what happens when someone else makes changes that we aren’t aware of. But first, I will start the slave so it can catch up with the master (and apply the earlier missing statements). We will then stop the slave and let someone else make a few changes. And then we can run the mysqldbcompare utility again:

$ mysqldbcompare --server1=scripts:scripts999@192.168.1.2 \
  --server2=scripts:scripts999@192.168.1.122 cbgc:cbgc \
  --run-all-tests --changes-for=server2 --difftype=sql
# server1 on 192.168.1.2: ... connected.
# server2 on 192.168.1.122: ... connected.
# Checking databases cbgc on server1 and cbgc on server2
#
#                                                   Defn    Row     Data   
# Type      Object Name                             Diff    Count   Check  
# ------------------------------------------------------------------------- 
# TABLE     Activity_Affiliate                      pass    pass    pass    
# TABLE     FedEx_2nd_Day                           pass    pass    pass    
# TABLE     FedEx_Express_Saver                     pass    pass    pass    
# TABLE     FedEx_Ground                            pass    pass    pass    
# TABLE     FedEx_Home                              pass    pass    pass    
# TABLE     FedEx_Priority_Overnight                pass    pass    pass    
# TABLE     FedEx_Standard_Overnight                pass    pass    pass    
# TABLE     Orders                                  pass    pass    pass    
# TABLE     USPS                                    pass    pass    pass    
# TABLE     activity                                FAIL    pass    pass    
#
# Transformation for --changes-for=server2:
#

ALTER TABLE cbgc.activity 
  DROP PRIMARY KEY, 
  DROP COLUMN Dummy_Field, 
  ADD PRIMARY KEY(serial_id);


# TABLE     comics                                  pass    pass    pass    
# TABLE     coupons                                 FAIL    pass    pass    
#
# Transformation for --changes-for=server2:
#

ALTER TABLE cbgc.coupons 
  CHANGE COLUMN Coupon_Notes Coupon_Notes varchar(100) NULL;


# TABLE     customer                                pass    pass    pass    
# TABLE     giftcert                                pass    pass    pass   

Databases are consistent.
#
# ...done

This time, the script took about 20 seconds to run:

real	0m20.058s
user	0m0.452s
sys	0m0.224s

We can see from the output that a column named Dummy_Field was dropped. We can also see that the Coupon_Notes column has changed (or is different on the master). I can now take these changes and implement them on the slave (again assuming that these changes aren’t waiting to be sent to the slave). In my case, once I start the slave, the changes will propagate over to the slave, but there may be cases where that transaction was lost or skipped on the slave (for example, if you had to do a SET GLOBAL sql_slave_skip_counter = N on the slave). You now have the ability to easily make the slave the same as the master by executing these differences. If the changes are too complicated, then you might want to look at re-creating your slave.
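
For reference, skipping a transaction on a traditionally replicating slave is typically done as in the sketch below. Use it with care, since skipping events is exactly what can leave a slave out of sync in the first place; also note that with GTID-based replication, sql_slave_skip_counter is not supported and a different approach (such as injecting an empty transaction) is required.

mysql> STOP SLAVE;
mysql> SET GLOBAL sql_slave_skip_counter = 1;
mysql> START SLAVE;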

There are many different options and output possibilities, so I would encourage you to just try mysqldbcompare and see how it works for you. And as always, feel free to post your feedback in the comments section.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

MySQL Replication with Global Transaction Identifiers – Step-by-Step Install and Addition of Slaves – Part Two

This post is part two of MySQL Replication with Global Transaction Identifiers – Step-by-Step Install and Addition of Slaves. In this post, I will be showing you how to use the MySQL Utility Script mysqlreplicate to create a new replication slave off a master database.

This is also the fourth in a series that I will be doing on MySQL Workbench Utilities – Administer MySQL with Python Scripts. You may want to read the first half of this post to understand how MySQL Workbench Utilities work and how you access the scripts. These scripts were written by Chuck Bell (a MySQL employee) and are available as stand-alone scripts (see Chuck’s blog for more information) or as part of the MySQL Workbench utility.

The mysqlreplicate “utility permits an administrator to start replication from one server (the master) to another (the slave). The user provides login information for the slave and connection information for connecting to the master.” (From: http://dev.mysql.com/doc/workbench/en/mysqlreplicate.html)

In the first post, I showed you how to create a slave off a master. The mysqlreplicate utility takes care of the manual steps for you, and makes it easy to create a slave database. I am not going to rewrite a lot of the details from the first post, so you may refer to that post for more information.

The first step is to install the MySQL database on a new server that is going to be your slave server. In this example, I am installing MySQL version 5.6.9 (Community Edition) on a Macintosh with OS 10.6.8. This tutorial should work on any platform, with just a few tweaks for Windows users.

After the install, I will need to change the permissions on my mysql directory, since I installed the database as root:

# ls -l
total 8
lrwxr-xr-x   1 root  wheel   29 Jan 25 11:50 mysql -> mysql-5.6.9-rc-osx10.7-x86_64
drwxr-xr-x  18 root  wheel  612 Jan 25 11:51 mysql-5.6.9-rc-osx10.7-x86_64
# chown -R _mysql mysql*
# ls -l
total 8
lrwxr-xr-x   1 _mysql  wheel   29 Jan 25 11:50 mysql -> mysql-5.6.9-rc-osx10.7-x86_64
drwxr-xr-x  18 _mysql  wheel  612 Jan 25 11:51 mysql-5.6.9-rc-osx10.7-x86_64

Next, I will run the mysql_install_db script from my mysql home directory (/usr/local/mysql):

root# cd /usr/local/mysql
root# ls -l scripts
total 72
-rwxr-xr-x  1 _mysql  wheel  33018 Nov 22 10:28 mysql_install_db
root# ./scripts/mysql_install_db
Installing MySQL system tables...OK

Filling help tables...OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

  ./bin/mysqladmin -u root password 'new-password'
  ./bin/mysqladmin -u root -h VM-Mac-1081-122.local password 'new-password'

Alternatively you can run:

  ./bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:

  cd . ; ./bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl

  cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the ./bin/mysqlbug script!

The latest information about MySQL is available on the web at

http://www.mysql.com

Support MySQL by buying support/licenses at http://shop.mysql.com

WARNING: Found existing config file ./my.cnf on the system.
Because this file might be in use, it was not replaced,
but was used in bootstrap (unless you used --defaults-file)
and when you later start the server.
The new default config file was created as ./my-new.cnf,
please compare it with your file and take the changes you need.

WARNING: Default config file /etc/my.cnf exists on the system
This file will be read by default by the MySQL server
If you do not want to use this, either remove it, or use the
--defaults-file argument to mysqld_safe when starting the server

Again, since I ran this as root, I will need to change the ownership of a few files in the mysql directory to the mysql user. There are some files in the data directory that are owned by root, but I can simply change the ownership of the entire mysql directory.

I will now need to start mysql, and then change the root password. I can change the password with the mysqladmin client application:

./bin/mysqladmin -u root password 'newpassword'

I then check to make sure that the password change worked by logging into mysql as root, and I will also list the users in the database:

root# mysql -uroot -p

mysql> use mysql;
Database changed
mysql> select user, host, password from users;
ERROR 1146 (42S02): Table 'mysql.users' doesn't exist
mysql> select user, host, password from user;
+---------+-----------------------+-------------------------------------------+
| user    | host                  | password                                  |
+---------+-----------------------+-------------------------------------------+
| root    | localhost             | *8FF7274XXF360A5BB33835F544D6617707C23968 |
| root    | VM-Mac-1081-122.local |                                           |
| root    | 127.0.0.1             |                                           |
| root    | ::1                   |                                           |
|         | localhost             |                                           |
|         | VM-Mac-1081-122.local |                                           |
+---------+-----------------------+-------------------------------------------+
6 rows in set (0.00 sec)

You can see that mysqladmin only changed the root password for the localhost (the rest of the password fields were blank). I can change the rest of the root passwords once I am in mysql:

mysql> UPDATE mysql.user SET Password=PASSWORD('newpassword') WHERE User='root';
Query OK, 3 rows affected (0.00 sec)
Rows matched: 4  Changed: 3  Warnings: 0
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

I will also want to delete the anonymous users.

mysql> delete from user where user = '';
Query OK, 2 rows affected (0.00 sec)

Prior to starting this server as a slave, we can see that it contains only the four default databases:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

And since we haven’t started it as a slave, the SHOW SLAVE STATUS is empty:

mysql> show slave status\G
Empty set (0.00 sec)

I prefer to run all of my scripts with a user named “scripts” versus using the root mysql user, so I will create the scripts user:

CREATE USER 'scripts'@'192.168.1.122' IDENTIFIED BY 'scripts123';
GRANT ALL PRIVILEGES ON *.* TO 'scripts'@'192.168.1.122' WITH GRANT OPTION;

You can just use the root user if you want. But the mysqlreplicate script does not allow for special characters in the password field, so you will want to use a password without any special characters.

We will need to create a replication user on the master (at 192.168.1.2) for the slave (at 192.168.1.122).

CREATE USER 'replicate'@'192.168.1.122' IDENTIFIED BY 'replicate123';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.122';

Now we are ready to run the mysqlreplicate script. You will want to review the mysqlreplicate man page for the variables and their use:

$ mysqlreplicate --master=scripts:scripts123@192.168.1.2:3306 \
   --slave=scripts:scripts123@192.168.1.122:3306 \ 
   --rpl-user=replicate:replicate123 --start-from-beginning -vvv
# master on 192.168.1.2: ... connected.
# slave on 192.168.1.122: ... connected.
# master id = 1
#  slave id = 5
# Checking InnoDB statistics for type and version conflicts.
# Checking storage engines...
# Checking for binary logging on master...
# Setting up replication...
# Connecting slave to master...
# CHANGE MASTER TO MASTER_HOST = '192.168.1.2', MASTER_USER = 'replicate', MASTER_PASSWORD = 'replicate123', MASTER_PORT = 3306
# Starting slave from the beginning...
# status: Queueing master event to the relay log
# Waiting for slave to synchronize with master
# status: Queueing master event to the relay log
# Waiting for slave to synchronize with master
# status: Waiting for master to send event
# ...done.

That’s it! The mysqlreplicate utility has done all of the steps necessary to start the slave. After a few minutes, we can now look at the slave status:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.2
                  Master_User: replicate
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 137301
               Relay_Log_File: WEB_SERVER_01-relay-bin.000012
                Relay_Log_Pos: 354
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 137301
              Relay_Log_Space: 769
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: 82f20158-5a16-11e2-88f9-c4a801092abb
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162868
            Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162868,
                               B7DB3608-6729-11E2-9E3C-BDE2366761B1:1-4
1 row in set (0.00 sec)

mysql> 

If your slave hasn’t caught up with the master yet, the values for Retrieved_Gtid_Set and Executed_Gtid_Set will not match. It only took my slave a few minutes to catch up to my master, but your timing may be different. We can now compare the slave’s Executed_Gtid_Set to the master’s, and it matches:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000007
         Position: 137301
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162868
1 row in set (0.00 sec)

If your master database was active with write transactions, then the master’s Executed_Gtid_Set value may be higher and not match the slave’s. But your slave is now ready for use.
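
If you would rather not compare the GTID sets by eye, the GTID functions added in MySQL 5.6 can do the check for you. A sketch, run on the slave, where the first argument is the master’s Executed_Gtid_Set from above:

mysql> SELECT GTID_SUBSET('82F20158-5A16-11E2-88F9-C4A801092ABB:1-162868',
    ->                   @@GLOBAL.gtid_executed) AS slave_has_all_master_gtids;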

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

MySQL Replication with Global Transaction Identifiers – Step-by-Step Install and Addition of Slaves – Part One

One of my favorite features of MySQL is replication. Replication provides you with the ability to have MySQL automatically copy data from one MySQL instance to another. There are many benefits to using replication, but I just like having an extra copy of my data on another server in case the main server crashes. But if the master crashes, I can then use the MySQL mysqlfailover script to automatically failover from the master to the slave. (see my earlier post – Using the MySQL Script mysqlfailover for Automatic Failover with MySQL 5.6 GTID Replication).

MySQL Replication automatically copies the data from the main database (master) to another database (slave). You can have multiple slaves pulling data from a single master, and you can have slaves replicating data off other slaves. If you are new to replication, check out the MySQL Replication FAQ page.

In this post, I will explain how to install or upgrade MySQL (by exporting and importing the data for a “fresh” upgrade), create a master and slave server, and start replication. If you have never installed MySQL or if you have never used replication, then this blog should be able to help you do this without too many headaches. One warning – this is a long post with a lot of details. And this is not the only way to set up replication – it is just one way to do it; I will show you a different way in part two of this post. In order to keep this post as short as possible, I will not explain each command or feature, but instead I will post as many links as possible. This install was performed on a Macintosh with OS version 10.6.8 (for the master) and 10.8.1 (for the slave). This post should apply to most Unix installs, and it should work with Windows as well, with a few modifications.

I currently have one master database with three slaves attached. I will be upgrading from MySQL 5.6.8 to 5.6.9 and all of the servers are GTID-enabled. This post will be relevant if you are able to stop both the master and the slave during the upgrade process – or if you are installing a new master and a slave (without an existing database on either system).

When I upgrade my master server, I like to export the data and do a fresh install, and then re-import my data back into MySQL. Yes, there are ways to upgrade without having to export your data. But since I have a relatively small database (<100MB), I like starting with a new server and re-importing the data. This is just my preference.

The first thing that I will need to do is to export my data with mysqldump. I like to export my databases one at a time, in case I have problems with the import, then I can narrow the problem down to a specific database. I need a list of my databases, so from a MySQL prompt, I execute the show databases command.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| addressbook        |
| comicbooks         |
| genealogy          |
| information_schema |
| inventory          |
| mysql              |
| performance_schema |
| scripting          |
| test               |
| twtr               |
| website            |
+--------------------+
11 rows in set (0.92 sec)

I don’t want to export the four MySQL databases – information_schema, mysql, performance_schema and test – as these will be created in the new install. You are going to want to make sure that you don’t have any activity on your database before you export your data. From a mysql prompt (on the server whose data you want to export), you can use the FLUSH TABLES WITH READ LOCK command to prevent any additional writes to the database and to allow all current transactions to be completed.

mysql> flush tables with read lock;
Query OK, 0 rows affected (0.00 sec)

I then export each database separately (my preference – you could do all of your databases at one time as well with the –all-databases option). In the command below, you will need to change DATABASE_NAME to the actual name of each of your databases:

/usr/local/mysql/bin/mysqldump --user=root --password --quick --skip-opt --create-options \
   --add-drop-database DATABASE_NAME > $HOME/mysql_backups/DATABASE_NAME.sql
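
If you have several databases to dump, a simple shell loop (a sketch; the database list and backup directory match this example, and --password will prompt once for each database unless you supply the password) saves some typing:

for db in addressbook comicbooks genealogy inventory scripting twtr website
do
   /usr/local/mysql/bin/mysqldump --user=root --password --quick --skip-opt --create-options \
      --add-drop-database "$db" > $HOME/mysql_backups/"$db".sql
done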

Here is some information from the mysqldump page that explains each of the options that I used:

--quick, -q
This option is useful for dumping large tables. It forces mysqldump to retrieve rows for a table 
from the server a row at a time rather than retrieving the entire row set and buffering it in 
memory before writing it out.  I use this option in case I have problems importing the data; 
it makes it easy to edit the dump file and remove the bad data.

--skip-opt
The --opt option is enabled by default, and --opt is shorthand for the combination of 
--add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables 
--quick --set-charset. It gives a fast dump operation and produces a dump file that can be 
reloaded into a MySQL server quickly.  Because the --opt option is enabled by default, you only 
specify its converse, the --skip-opt to turn off several default settings. See the discussion 
of mysqldump option groups for information about selectively enabling or disabling a subset 
of the options affected by --opt.

--create-options
Include all MySQL-specific table options in the CREATE TABLE statements.

--add-drop-database
Add a DROP DATABASE statement before each CREATE DATABASE statement. This option is typically used 
in conjunction with the --all-databases or --databases option because no CREATE DATABASE statements 
are written unless one of those options is specified.

You might want to read the mysqldump page to see which options you will want to use.

Once I have exported my data, I look at the export files to make sure that they were created. You might even want to open one of the smaller dump files (if the file isn’t too large) in a text-editor and just take a look to make sure everything looks good. If you used the same commands for each dump, then the smaller file will show you if the export is in the format you want.

If you have an existing MySQL database, you can also export the user and grant information so you can import this back into the new database. See my last blog post “Retrieving List of MySQL Users and Grants with Perl” to find out how to export your users and grants. You don’t have to use Perl to do this – you can manually use the commands explained in the post.

Now you can shutdown your MySQL instance.

Once the database has shut down, you can install the new version of MySQL. In this case, I am installing version 5.6.9. If you are on Unix or Mac, and you are installing as root, you will need to change the ownership of the files in your MySQL home directory to the mysql user after the install process.

After you install the new database, you will want to execute the mysql_install_db script to initialize the system tables. You can also refer to the post-installation procedures on the MySQL web site. From the MySQL base directory, run the script:

root@macserver01: # ./scripts/mysql_install_db
Installing MySQL system tables...OK

Filling help tables...OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

  ./bin/mysqladmin -u root password 'new-password'
  ./bin/mysqladmin -u root -h macserver01 password 'new-password'

Alternatively you can run:

  ./bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:

  cd . ; ./bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl

  cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the ./bin/mysqlbug script!

The latest information about MySQL is available on the web at

http://www.mysql.com

Support MySQL by buying support/licenses at http://shop.mysql.com

WARNING: Found existing config file ./my.cnf on the system.
Because this file might be in use, it was not replaced,
but was used in bootstrap (unless you used --defaults-file)
and when you later start the server.
The new default config file was created as ./my-new.cnf,
please compare it with your file and take the changes you need.

WARNING: Default config file /etc/my.cnf exists on the system
This file will be read by default by the MySQL server
If you do not want to use this, either remove it, or use the
--defaults-file argument to mysqld_safe when starting the server

If you ran this script as root, you will need to change the ownership of the mysql-bin and mysql-bin.index files in the mysql data directory to the mysql Unix user.

If you intend to use this server as a master server, you will need to edit the my.cnf (my.ini on Windows) file to make it ready to be a master server. The minimum you must change is to add these lines under the [mysqld] section of your my.cnf option file.

log-bin=mysql-bin
server-id=1

The “server-id” must be unique to each server. I usually set my master server-id = 1. Check out the Setting the Replication Master Configuration page on the MySQL web site to make sure you have the correct settings for your server.

We will be using global transaction identifiers (GTID) for replication. GTID’s are a new replication feature as of MySQL 5.6.5. To enable GTID, you will need to add these lines under the [mysqld] section of your my.cnf option file.

gtid_mode=ON
enforce-gtid-consistency
log-bin
log-slave-updates
binlog_format=mixed

Now you can start the MySQL server (if it isn’t already started). When you executed the mysql_install_db script, it created the grant tables. You are going to want to change the root password and delete any anonymous accounts. See Securing the Initial MySQL Accounts for specific information for your operating system.

An easy way to change the root password is to use mysqladmin from a command prompt:

$ ./bin/mysqladmin -u root password 'new-password'

Right after you change the root password, you will want to test the new root password by logging in with mysql as root at a Unix prompt:

root@macserver01: $ mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2259
Server version: 5.6.9-rc-log MySQL Community Server (GPL)

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Before we perform any more transactions, we need to make sure that we have GTID enabled. To see if GTID has been enabled, we can execute this statement from the mysql prompt:

mysql> show global variables like '%GTID%';
+--------------------------+-----------------------------------------------+
| Variable_name            | Value                                         |
+--------------------------+-----------------------------------------------+
| enforce_gtid_consistency | ON                                            |
| gtid_executed            | 82F20158-5A16-11E2-88F9-C4A801092ABB:1-26     |
| gtid_mode                | ON                                            |
| gtid_owned               |                                               |
| gtid_purged              |                                               |
+--------------------------+-----------------------------------------------+
5 rows in set (0.00 sec)

The variables enforce_gtid_consistency and gtid_mode should have the value of “ON”. The variable value for gtid_executed shows the UUID of the server, and the 1-26 shows that transactions one through 26 were executed. See GTID Concepts for more information about this value. Now that we know GTID has been enabled and our root password has been changed and confirmed, we can continue with deleting the anonymous accounts, creating our users and importing our data.

To find and delete the anonymous accounts, from a mysql prompt:

mysql> use mysql;
Database changed
mysql> SELECT user, host FROM user;
+------+-----------------------+
| user | host                  |
+------+-----------------------+
| root | 127.0.0.1             |
| root | ::1                   |
|      | macserver01.local     |
| root | macserver01.local     |
|      | localhost             |
| root | localhost             |
+------+-----------------------+
6 rows in set (0.00 sec)

The users that are blank are anonymous users. You can double-check the blank users with this statement:

mysql> select user, host from user where user = '';
+------+-----------------------+
| user | host                  |
+------+-----------------------+
|      | VM-Mac-1081-128.local |
|      | localhost             |
+------+-----------------------+
2 rows in set (0.00 sec)

You may now delete the blank users:

mysql> delete from user where user = '';
Query OK, 2 rows affected (0.00 sec)

These are the users that are remaining:

mysql> select user, host from user;
+------+-----------------------+
| user | host                  |
+------+-----------------------+
| root | 127.0.0.1             |
| root | ::1                   |
| root | VM-Mac-1081-128.local |
| root | localhost             |
+------+-----------------------+
4 rows in set (0.00 sec)

You can now create the users that you exported from your previous instance, or if this is a new install, you may create the users that you think you will need for this instance. If you exported the users, then you will want to remove the “CREATE USER” statement for the root users that match the user and host values above. If you had grants for these users that were different than the default grants, you can still execute the grant statements.

If you are installing MySQL for the first time on your master, or if you did not have a replication user in your previous instance, you will need to create a replication user. See Creating a User for Replication for more details, but here is a sample replication user creation statement:

mysql> CREATE USER 'replicate'@'%.mydomain.com' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'%.mydomain.com';
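
You can verify the new account and its privilege with SHOW GRANTS:

mysql> SHOW GRANTS FOR 'replicate'@'%.mydomain.com';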

Now that our users have been created, we can import the data from our earlier export. If this is a new install, then you may skip this step. Before we import the data, let’s look at the master status. We will use this information later, so save the output to a text file.

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000006
         Position: 71046480
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-26
1 row in set (0.00 sec)

We can use the mysql program to import the data from the backups that we created earlier. You will need to execute this command for each database backup file:

mysql -uroot -p DATABASE_NAME < $HOME/mysql_backups/DATABASE_NAME.sql

Once you have imported the data, you can check the master status to see how many transactions were executed. Since I exported my data with one insert statement per line, the total number of insert statements that I had in my import should be close to the number of transactions that were executed.

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000006
         Position: 71046480
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162551
1 row in set (0.00 sec)

The value of Executed_Gtid_Set contains the same type of information as the gtid_executed variable that we looked at previously. The value 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162551 contains the UUID of the server and shows that transactions one through 162551 have been executed on this new instance. Since we exported the original data with each data row as an individual INSERT statement, we can now figure out how many rows of data we imported. Before the import, the value of Executed_Gtid_Set was 82F20158-5A16-11E2-88F9-C4A801092ABB:1-26 – so 26 transactions had been executed before the data import. If we take the total number of executed transactions and subtract those 26 earlier transactions plus the other statements in the dump files (such as CREATE TABLE), we should be left with the number of rows of data we imported. This step isn’t really necessary, but I like to do it just to make sure that I didn’t lose any data. And yes, it might be overkill.

We can count the lines in the data files that we imported earlier that contain an “INSERT” statement. This will give us the total number of inserts from our import.

$ ls -l $HOME/mysql_backups/*sql
total 61832
-rw-r--r--  1 root  staff   1151006 Jan  9 00:12 addressbook.sql
-rw-r--r--  1 root  staff    492652 Jan  8 23:11 comicbooks.sql
-rw-r--r--  1 root  staff  27485322 Jan  8 23:11 genealogy.sql
-rw-r--r--  1 root  staff    603943 Jan  8 23:11 inventory.sql
-rw-r--r--  1 root  staff    779634 Jan  8 23:11 scripting.sql
-rw-r--r--  1 root  staff   1077248 Jan  8 23:11 twtr.sql
-rw-r--r--  1 root  staff     50643 Jan  8 23:11 website.sql
$ grep INSERT *sql | wc -l
  162444

Now we can get the total number of CREATE statements:

root@macserver01: $ grep CREATE *sql | wc -l
      81

The total number of transactions executed so far is 162551. If we subtract the number of INSERT lines from the import (162444) and the number of CREATE statements (81), we are left with 26 – the number of transactions that had taken place before the data import. This confirms that all of our data was imported.
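
Just to spell out the arithmetic, you can run the subtraction from any mysql prompt:

mysql> SELECT 162551 - 162444 - 81;
+----------------------+
| 162551 - 162444 - 81 |
+----------------------+
|                   26 |
+----------------------+
1 row in set (0.00 sec)

We can now install MySQL on the slave and start replication.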

For the slave, we will follow the same installation steps as we did on the master, but we will not import any data, and we will not create our additional users (with the exception of the replication user). Once we start the slave, the users we created on the master and the data we imported will be replicated to the slave. Here are the steps:

  • Install MySQL version 5.6.9 (change ownership of the files in the mysql directory to mysql if you installed as root)
  • Run the post-install script mysql_install_db (change ownership of the mysql-bin and mysql-bin.index files in the data directory if you installed as root)
  • Change the root password and test it.
  • You don’t have to remove the anonymous accounts, as the SQL statements that we performed on the master will also be executed on the slave.
  • Create the replication user.
  • Shutdown the mysql server.
  • Edit the my.cnf options file (my.ini on Windows) and insert the GTID variables as shown earlier. But, the server-id value must be something other than the value from the master server. You can set this value to 2.
  • Start the server
  • Test to make sure GTID is enabled.
  • Stop the slave by logging into mysql as root and executing “stop slave;”
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

We can now provide the slave with the information on which master to use. Before GTID, you had to tell MySQL which binary log you wanted to use and the position within that binary log. With GTID, you only need to set MASTER_AUTO_POSITION = 1:

mysql> CHANGE MASTER TO 
    -> MASTER_HOST = '',
    -> MASTER_PORT = 3306,
    -> MASTER_USER = 'replication_user_name',
    -> MASTER_PASSWORD = 'replication_user_password',
    -> MASTER_AUTO_POSITION = 1;
Query OK, 0 rows affected, 2 warnings (0.20 sec)

You will need to change the replication_user_name and replication_user_password to match the values you used when you created the replication user.
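
For comparison, here is a sketch of what the same statement would have looked like before GTID, pointing the slave at an explicit binary log file and position. The host is the master’s IP address shown in the slave status output below, and the file and position values are the ones reported by SHOW MASTER STATUS (in a non-GTID setup you would record these at the time you took the backup):

mysql> CHANGE MASTER TO 
    -> MASTER_HOST = '192.168.1.2',
    -> MASTER_PORT = 3306,
    -> MASTER_USER = 'replication_user_name',
    -> MASTER_PASSWORD = 'replication_user_password',
    -> MASTER_LOG_FILE = 'mysql-bin.000006',
    -> MASTER_LOG_POS = 71046480;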

Before we turn on the slave, you may check the status of the slave:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: 
                  Master_Host: 192.168.1.2
                  Master_User: replicate
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: 
          Read_Master_Log_Pos: 4
               Relay_Log_File: WEB_SERVER_01-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: 
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 151
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 0
                  Master_UUID: 
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: 
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: C242A198-5AAA-11E2-8CC0-387DCB822A4B:1-2
1 row in set (0.00 sec)

We can check the values for Master_Host and Master_User to make sure they match our master server. Since the slave hasn’t been started yet, the value for Retrieved_Gtid_Set is blank – as we haven’t retrieved any data from the master. Once we start the slave, the value for Retrieved_Gtid_Set will show us how many transactions have been retrieved from the master. The value for Executed_Gtid_Set shows that we have executed two transactions on this new slave instance – changing the root password and creating the replication user. Now we can start the slave:

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

After a few moments, you can check on the slave status again.

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.2
                  Master_User: replicate
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 1914
               Relay_Log_File: WEB_SERVER_01-relay-bin.000007
                Relay_Log_Pos: 1024
        Relay_Master_Log_File: mysql-bin.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 1396
                   Last_Error: Error 'Operation CREATE USER failed for 'replicate'@'%'' on query. 
                               Default database: ''. Query: 'CREATE USER 'replicate'@'%' 
                               IDENTIFIED BY PASSWORD '*BE1BDEC0AA74B4XCB07X943E70X28096CXA985F8''
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 814
              Relay_Log_Space: 71051295
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 1396
               Last_SQL_Error: Error 'Operation CREATE USER failed for 'replicate'@'%'' on query. 
                               Default database: ''. Query: 'CREATE USER 'replicate'@'%' 
                               IDENTIFIED BY PASSWORD '*BE1BDEC0AA74B4DCB079943E70528096CCA985F8''
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: 82f20158-5a16-11e2-88f9-c4a801092abb
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: 
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 130109 21:50:45
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162562
            Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-3,
C242A198-5AAA-11E2-8CC0-387DCB822A4B:1-2

We can see that the value of Retrieved_Gtid_Set is 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162562, which shows that we have already retrieved all of the transactions from the master. The Executed_Gtid_Set value of 82F20158-5A16-11E2-88F9-C4A801092ABB:1-3 shows that we have so far processed only the first three transactions from the master, and the value of C242A198-5AAA-11E2-8CC0-387DCB822A4B:1-2 shows that we have executed two transactions on the slave itself.

The Last_Error value shows that we have an error: the CREATE USER statement for the replication user replicated from the master fails on the slave, as we have already created a replication user there. I purposely created the same user to show you what happens when you have an error on the slave. In the list of installation steps for the slave above, you can therefore skip “Create the replication user”.

To skip this error and continue with replication, you can set SQL_SLAVE_SKIP_COUNTER to 1, which tells the slave to skip one transaction. You must stop the slave, set SQL_SLAVE_SKIP_COUNTER = 1, and then restart the slave.

mysql> stop slave;SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;start slave;
Query OK, 0 rows affected (0.16 sec)
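
Note that on some MySQL 5.6 releases, SQL_SLAVE_SKIP_COUNTER cannot be set while GTID mode is enabled. In that case, the usual way to skip the failing statement is to commit an empty transaction with the offending GTID – a sketch, using the master’s UUID and the skipped transaction number (4) from this example:

mysql> STOP SLAVE;
mysql> SET GTID_NEXT = '82F20158-5A16-11E2-88F9-C4A801092ABB:4';
mysql> BEGIN; COMMIT;
mysql> SET GTID_NEXT = 'AUTOMATIC';
mysql> START SLAVE;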

Let’s look at the slave status again (the output is truncated, as we only need the values for Retrieved_Gtid_Set and Executed_Gtid_Set):

mysql> show slave status\G
....
           Retrieved_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162562
            Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-3:5-15401,
C242A198-5AAA-11E2-8CC0-387DCB822A4B:1-12
....

You can see that the value of Executed_Gtid_Set has changed to 82F20158-5A16-11E2-88F9-C4A801092ABB:1-3:5-15401, which means that the slave is now processing the transactions from the master and is currently on transaction 15401. You can also see that transaction number four – the transaction that created the replication user – was skipped when we executed the SET GLOBAL SQL_SLAVE_SKIP_COUNTER command.

You can check the slave status until you see that all of the transactions that were retrieved from the master have been completed on the slave:

mysql> show slave status\G
....
           Retrieved_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162562
            Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-3:5-162562,
C242A198-5AAA-11E2-8CC0-387DCB822A4B:1-12
....

You can go back to the master and see the master’s status:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000007
         Position: 1914
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 82F20158-5A16-11E2-88F9-C4A801092ABB:1-162562
1 row in set (0.00 sec)

We can now see that the value for Executed_Gtid_Set on the master is the same as the value on the slave (not counting the transactions executed on the slave itself). The slave now has the same data as the master and is not lagging behind. If you have a busy master server, your slave might lag behind while it applies the master’s updates.

We can also check to make sure that all of the users that we created on the master are also now on the slave:

SLAVE:

mysql> select user, host from user order by user, host;
+-------------+-----------------------+
| user        | host                  |
+-------------+-----------------------+
| WebSite     | 192.168.1.2           |
| WebSite     | localhost             |
| replicate   | 192.168.1.121         |
| replicate   | 192.168.1.2           |
| replicate   | 192.168.1.4           |
| replicate   | localhost             |
| root        | 127.0.0.1             |
| root        | 192.168.1.2           |
| root        | VM-Mac-1081-128.local |
| root        | localhost             |
| scripting   | 192.168.1.121         |
| scripting   | 192.168.1.122         |
| scripting   | 192.168.1.2           |
+-------------+-----------------------+
13 rows in set (0.01 sec)

MASTER:

mysql> select user, host from user order by user, host;
+-------------+---------------+
| user        | host          |
+-------------+---------------+
| WebSite     | 192.168.1.2   |
| WebSite     | localhost     |
| replicate   | 192.168.1.121 |
| replicate   | 192.168.1.2   |
| replicate   | 192.168.1.4   |
| replicate   | localhost     |
| root        | 127.0.0.1     |
| root        | 192.168.1.2   |
| root        | localhost     |
| root        | macserver01   |
| scripting   | 192.168.1.121 |
| scripting   | 192.168.1.122 |
| scripting   | 192.168.1.2   |
+-------------+---------------+
13 rows in set (0.00 sec)

The only difference in the users on the master and slave is the root user for each machine:

Slave - root, VM-Mac-1081-128.local
Master - root, macserver01

The master and slave are now ready for use. If you have an Enterprise subscription to MySQL, you can use MySQL Enterprise Monitor to look at the status of the master and the slave.

Otherwise, you may just do a periodic “show master status\G” and “show slave status\G” to see if the slave is lagging behind the master.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Retrieving List of MySQL Users and Grants with Perl

Before I upgrade MySQL to the latest and greatest version, one of the first things that I do is export the user and grant information. In the past, I would keep all of my user information (user name, password, grants) in a text file, with the SQL for each user/grant ready to be executed on the upgraded server. I did use my own form of “mental encryption” for my passwords, so the passwords weren’t in plain English. But then I would have to decode my passwords each time before I executed the SQL statements.

When I upgrade, I usually like to dump all of the data and import it into the new version, so I have a fresh copy of the database. The MySQL server that I have is for my personal use and the data size is relatively small, so for my case it doesn’t take long to import the data.

But there were times when I would add a user in the MySQL database and forget to add it to my text file. Then, when it came time to upgrade and I would refer to my text file, these new users would not get recreated. For me it wasn’t that big of a deal, as I am only dealing with my own home-office server, and not a production server. I would eventually figure out that these users weren’t available, and I would simply recreate them. But I often add temporary users for testing purposes. Every once in a while I would want to keep some of the temporary users, and some users would be deleted after the testing was completed. So my text file was rarely up to date.

I am in the process of upgrading my servers to MySQL 5.6.9, and I decided to write a quick Perl script to export all of the users, passwords and grants. Since I will be doing a fresh install of MySQL, I can then just use the output from this script to recreate my users. Of course, you need to run this script before the upgrade.

There are two SQL statements that we will execute with this script. The first is simply retrieving a list of the user and host names from the mysql.user table:

mysql> SELECT user, host FROM user order by user, host;
+-------------+---------------+
| user        | host          |
+-------------+---------------+
| replicate   | 192.168.1.121 |
| replicate   | 192.168.1.2   |
| replicate   | 192.168.1.4   |
| replicate   | localhost     |
| root        | 127.0.0.1     |
| root        | 192.168.1.2   |
| root        | localhost     |
| root        | macserver01   |
| scripting   | 192.168.1.121 |
| scripting   | 192.168.1.122 |
| scripting   | 192.168.1.2   |
+-------------+---------------+
11 rows in set (0.00 sec)

Then, we will loop through each of the user and host names to retrieve their grants and passwords. Here is a sample of retrieving this data for just one user and one host name:

mysql> SHOW GRANTS FOR 'replicate'@'192.168.1.121';
+-------------------------------------------------------------------------------------------------------------------+
| Grants for replicate@192.168.1.121                                                                                |
+-------------------------------------------------------------------------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.121' IDENTIFIED BY PASSWORD '*BF6F715A6EBFE63005BEB705C' |
+-------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

The script will create the necessary SQL statements to recreate the users and grants, with the output displayed in the terminal window. You will just need to copy and save the output from the terminal window into a text file. Or you can change the print statement to output the information directly to a file. Here are the changes to output to a file:

Add this to the top of the script after the “my $Database” line.

open(OUTFILE, ">mysql_usernames.txt") || die "Can't open mysql_usernames.txt: $!";

Add OUTFILE after each of your print commands:

print OUTFILE "CREATE user \'$user\'\@'$host\'\ identified by '';\n";
print OUTFILE "$privileges;\n\n";

And close the OUTFILE after the last right curly bracket “}” and before the subroutine:

close OUTFILE;

For the script, you will need to have the Perl DBI module installed (along with the DBD::mysql driver). You will also need to change the values of the $host, $userid and $passwd variables in the ConnectToMySql subroutine to match your system. Here is the Perl script.

#!/usr/bin/perl

use strict;
use warnings;
use DBI;

my $Database = "mysql";

# connect to the mysql system database
my $dbh = ConnectToMySql($Database);

# retrieve a list of users and host names
my $query = "SELECT user, host FROM user ORDER BY user, host";
my $sth = $dbh->prepare($query);
$sth->execute();

while (my @data = $sth->fetchrow_array()) {
    my $user = $data[0];
    my $host = $data[1];

    # print a CREATE USER statement (fill in the password by hand)
    print "CREATE user '$user'\@'$host' identified by '';\n";

    # retrieve the grants for each user and host combination
    my $dbh2 = ConnectToMySql($Database);
    my $query2 = "SHOW GRANTS FOR '$user'\@'$host'";
    my $sth2 = $dbh2->prepare($query2);
    $sth2->execute();

    while (my @data2 = $sth2->fetchrow_array()) {
        my $privileges = $data2[0];
        print "$privileges;\n\n";
    }

# end first while statement
}

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   # change these values to match your system
   my $host   = "";
   my $userid = "";
   my $passwd = "";
   my $connectionInfo = "dbi:mysql:$db" . ($host ? ";host=$host" : "");

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo, $userid, $passwd)
       or die "Could not connect to MySQL: $DBI::errstr";
   return $l_dbh;

}

And here is the output from running the script.

root@macserver01: $ perl get_mysql_users.pl
CREATE user 'replicate'@'192.168.1.121' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.121' IDENTIFIED BY PASSWORD '*BF6F715A6EBF367E76X705C';

CREATE user 'replicate'@'192.168.1.2' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.2' IDENTIFIED BY PASSWORD '*BF6F715A6EBF367E76X705C';

CREATE user 'replicate'@'192.168.1.4' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.4' IDENTIFIED BY PASSWORD '*2A9C19E10B309BF1BE40E4A9C';

CREATE user 'replicate'@'localhost' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'localhost' IDENTIFIED BY PASSWORD '*2A9C19E10B309BF1BE40E4A9C';

CREATE user 'root'@'127.0.0.1' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1' IDENTIFIED BY PASSWORD '*BF6F715A6EBFKK367E76X705C' WITH GRANT OPTION;

CREATE user 'root'@'192.168.1.2' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.1.2' IDENTIFIED BY PASSWORD '*BF6F715A6EBFKK367E76X705C' WITH GRANT OPTION;

CREATE user 'root'@'localhost' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*BF6F715A6EBFKK367E76X705C' WITH GRANT OPTION;

CREATE user 'root'@'macserver01' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'macserver01' WITH GRANT OPTION;

CREATE user 'scripting'@'192.168.1.121' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'scripting'@'192.168.1.121' IDENTIFIED BY PASSWORD '*DEE6483B0XX23K3AD402E34F7' WITH GRANT OPTION;

CREATE user 'scripting'@'192.168.1.122' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'scripting'@'192.168.1.122' IDENTIFIED BY PASSWORD '*DEE6483B0XX23K3AD402E34F7' WITH GRANT OPTION;

CREATE user 'scripting'@'192.168.1.2' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'scripting'@'192.168.1.2' IDENTIFIED BY PASSWORD '*DEE6483B0XX23K3AD402E34F7' WITH GRANT OPTION;

Now when I upgrade my server, I can simply run this script prior to my upgrade and save the information. I don’t have to worry about missing a user or keeping my user information in a text file.

 



MySQL Replication – Multi-Threaded Slaves (Parallel Event Execution)

If you aren’t familiar with MySQL replication, “Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves). Replication is asynchronous by default – slaves need not be connected permanently to receive updates from the master. This means that updates can occur over long-distance connections and even over temporary or intermittent connections such as a dial-up service. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database.” (From: http://dev.mysql.com/doc/refman/5.5/en/replication.html).

I use MySQL replication on my home office server. I don’t really have much data to store, but it is nice to have several replicated slaves for backup purposes and also for testing new replication features of MySQL. I also use my setup to demo MySQL Enterprise Monitor.

Prior to MySQL 5.6.3, replication slaves were single-threaded. There are three threads involved with replication, but only one of those threads writes the replicated data from the master to the slave database (more info about these threads may be found here). If you had a busy master server, with a high number of writes, these writes could get bottlenecked at the slave, as the slave could only apply each event one at a time, in the same order that the events were executed on the master. If you were using the slaves for your reads, then your data could be stale depending on how quickly (or slowly) the slaves could apply the writes from the master.

I was giving a presentation a couple of years ago about MySQL replication, trying to demonstrate that adding slaves doesn’t necessarily divide the workload equally among the number of servers that you have. In my example, I stated that we had a master server with a maximum load (at 100%) of 10,000 events (reads and writes) in a given time frame. For these 10,000 events, we assumed that the master was performing 6,000 reads and 4,000 writes during this period. If we added three slaves, we couldn’t assume that the master would now be at 25% capacity (100% capacity divided by the new total of four servers). Each slave also has to perform the same number of writes as the master – we were really only scaling the number of reads across four servers.

In order to try and get my point across, I then presented a formula that looked like this:

Master server alone:
Max. Load = (6,000 reads + 4,000 writes) / 10,000 = 100%

Master server plus three slaves:
Load = (6,000 reads + (4 x 4,000 writes)) / (4 servers x 10,000) = 22,000 / 40,000 = 55%

Someone in the audience then pointed out that this formula wouldn’t hold exactly, as the slaves were single-threaded. That was a fair point, but what I was trying to show is that scaling out with slaves isn’t exactly linear. Later, I tried to come up with a simple formula that would account for the slaves being single-threaded, but there were just too many factors involved. For future presentations, I just made sure to add the caveat that slaves are single-threaded.

So, what are multi-threaded slaves? Multi-threaded slaves allow the slave to apply replication events from the master in parallel, as long as those events touch different databases. For best results, you should partition your data by database – in other words, instead of having one database with many tables, you would have a database for each table. If you have some tables that are read-only or rarely written to, you could include them together in their own database. By splitting the data into multiple databases, MySQL replication can update each database in parallel while still applying the updates within each database in the same order as they occurred on the master. The system variable slave_parallel_workers should be set to the number of databases that you have. One caveat of multi-threaded replication is that transactions against different databases may be committed on the slave in a different order than on the master. Click here for more information on MySQL multi-threaded replication (Parallel Event Execution).
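
As an example, here is a sketch of enabling parallel workers on a slave whose data is split across ten schemas, as in the benchmark mentioned below. The variable is dynamic, but the new value only takes effect when the slave SQL thread is restarted:

mysql> STOP SLAVE SQL_THREAD;
mysql> SET GLOBAL slave_parallel_workers = 10;
mysql> START SLAVE SQL_THREAD;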

Back in April, 2012, Mat Keep wrote about Benchmarking MySQL Replication with Multi-Threaded Slaves in which he demonstrated that multi-threaded slaves could improve slave performance by almost five times. Mat’s example involved inserting 10,000 rows into 10 different schemas on a single slave. Mat stated that there are three key variables that you need to set to achieve maximum performance with multi-threaded slaves:

binlog-format=STATEMENT 
relay-log-info-repository=TABLE 
master-info-repository=TABLE 

(From http://dev.mysql.com/doc/refman/5.6/en/slave-logs-status.html: use --master-info-repository to have the master info log written to the mysql.slave_master_info table, and use --relay-log-info-repository to have the relay log info log written to the mysql.slave_relay_log_info table)
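
Once the slave is running with these settings, you can confirm at runtime that both repositories are stored in tables – the output should look something like this:

mysql> SHOW GLOBAL VARIABLES LIKE '%_info_repository';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| master_info_repository    | TABLE |
| relay_log_info_repository | TABLE |
+---------------------------+-------+
2 rows in set (0.00 sec)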

Mat also suggests that the variable slave_parallel_workers should be set to equal the number of schemas that you have. If you look at the data he provided, increasing this variable beyond the total number of schemas did not improve performance. Even though the slave is multi-threaded, each schema is still applied by a single thread, so adding worker threads beyond the number of schemas didn’t make a difference.

While I don’t have the need at this point to switch my replication setup to use multi-threaded slaves, it is still a great new feature of 5.6. You may download a copy of MySQL at http://dev.mysql.com/downloads/mysql. As of this writing, 5.6 is under the Development Releases tab.

 

