Upgrade MySQL to a new version with a fresh installation & use shell scripts and mysqldump to reload your data

There are several ways to upgrade MySQL. In this post, we will use a combination of shell scripts and the mysqldump application to export our MySQL data, and then re-import it back into the upgraded version of MySQL.

In this example, we will be doing a minor version upgrade. We will be going from 5.6.17 to 5.6.19. This method may not work if you are upgrading from one major release to another – from 5.1 to 5.5, or 5.5 to 5.6. You will want to check each version and review the new features/functions and also what features/functions have been deprecated. We are also assuming that no one will be using the database during the time it takes for us to do the upgrade.

If you want to upgrade from a version that is more than one major release apart from your current version, then you will want to upgrade to each successive version. For example, if you want to upgrade from 5.0 to 5.6, you will want to upgrade from 5.0 to 5.1, then 5.1 to 5.5, and then 5.5 to 5.6.

You don’t have to export all of your data when you upgrade MySQL. There are ways of upgrading without doing anything to your data. But in this post, I will be exporting the data and re-importing it, for a fresh installation. I don’t have that much data, so I don’t mind doing the export and import. If you have a lot of data, you might want to consider other options.

To get an idea of the size of your database(s), here is a quick script that you can use:

SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" 
FROM information_schema.TABLES GROUP BY table_schema ; 

When I perform an export/import, I like to export each database as a separate mysqldump file, and then also export all of the databases together in one large file. By exporting/importing the individual databases, if you have an error importing one of the database dump files, you can isolate the error to a single database. It is much easier to fix the error in one smaller data dump file than with a larger all-inclusive dump file.

I am also going to create some simple shell scripts to help me create the commands that I need to make this task much easier. First, you will want to create a directory to store all of the scripts and dump files. Do all of your work inside that directory.

Next, I want to get a list of all of my databases. I will log into mysql and issue the show databases; command (which returns the same list as: select schema_name from information_schema.schemata;):

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| 12thmedia          |
| cbgc               |
| comicbookdb        |
| coupons            |
| healthcheck        |
| innodb_memcache    |
| landwatch          |
| laurelsprings      |
| ls_directory       |
| mem                |
| mysql              |
| performance_schema |
| protech            |
| scripts            |
| stacy              |
| storelist          |
| test               |
| testcert           |
| tony               |
| twtr               |
| watchdb            |
+--------------------+
22 rows in set (1.08 sec)

I can then just highlight and copy the list of databases, and put that list into a text file named “list.txt”. I do not want to include these databases in my export:

information_schema
mysql
performance_schema
test

However, I will export the mysql database separately, later in this post. You will need to check the MySQL manual to make sure that there haven’t been any changes to the mysql system tables from one version to the next.

I will need to manually remove those databases from my list.txt file. I then want to remove all of the spaces and pipe symbols from the text file. These scripts assume that you do not have any spaces in your database names; instead of using spaces in a database name, I prefer to use an underscore character “_”.

If you know how to use the vi editor, you can do a substitution for the pipes and spaces with these commands:

:%s/ //g
:%s/|//g

Otherwise, you will want to use another text editor and manually edit the list to remove the spaces and pipe symbols. Your finished list.txt file should look like this:

12thmedia
cbgc
comicbookdb
coupons
healthcheck
innodb_memcache
landwatch
laurelsprings
ls_directory
mem
protech
scripts
stacy
storelist
testcert
tony
twtr
watchdb
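The hand edits above can also be scripted. Here is a sketch that runs on a small built-in sample of the show databases output; point the sed pipeline at your own pasted file instead of raw_list_sample.txt (both file names here are made up for the example):

```shell
# build a small sample of the raw "show databases" output
cat > raw_list_sample.txt << 'EOF'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| 12thmedia          |
| mysql              |
| watchdb            |
+--------------------+
EOF

# strip the pipes and spaces, then drop the border lines, the header,
# and the databases we are not exporting
sed 's/[| ]//g' raw_list_sample.txt \
  | grep -Ev '^(information_schema|mysql|performance_schema|test|Database|[+-]*)$' \
  > list_sample.txt

cat list_sample.txt    # only 12thmedia and watchdb remain
```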

You can then create a simple shell script to help create your mysqldump commands – one command for each database. You will want to create this script and the other scripts in the directory you created earlier. Name the script export.sh. You can also change the mysqldump options to meet your needs. I am using GTIDs for replication, so I want to use the option --set-gtid-purged=OFF. You will also want to replace my sample password my_pass with your MySQL root password. Or, you can skip putting the password in the script by using the -p option instead, and just enter the password each time you run a mysqldump command.

# export.sh
# script to create the database export commands
k=""
for i in `cat list.txt`
do

echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $i > ${i}_backup.sql"

k="$k $i"

done

# Optional - export all of the databases into a single file
# use the file extension of .txt so that your import script won't pick it up later
echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $k > all_db_backup.txt"

For the individual databases, I am using the file extension .sql. For the dump file that contains all of the databases, I am using the extension .txt – as I use a wildcard search later to get a list of the dump files, and I don’t want to import the one dump file that contains all of the databases.
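Since the choice of extensions is doing real work here, a quick throwaway demonstration (the file names are made up) shows that a *.sql wildcard never picks up the .txt dumps:

```shell
# demonstrate that a *.sql wildcard skips the .txt dump files
mkdir -p glob_demo
cd glob_demo
touch cbgc_backup.sql watchdb_backup.sql all_db_backup.txt
ls *.sql    # lists only cbgc_backup.sql and watchdb_backup.sql
cd ..
```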

Now you can run the export.sh script to create a list of your mysqldump commands, and you are going to direct the output into another shell script named export_list.sh.

# sh export.sh > export_list.sh

We can now take a look at what is in the export_list.sh file:

# cat export_list.sh
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases cbgc > cbgc_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases comicbookdb > comicbookdb_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases coupons > coupons_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases healthcheck > healthcheck_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases innodb_memcache > innodb_memcache_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases landwatch > landwatch_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases laurelsprings > laurelsprings_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases ls_directory > ls_directory_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases mem > mem_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases protech > protech_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases scripts > scripts_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases stacy > stacy_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases storelist > storelist_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases testcert > testcert_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases tony > tony_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases twtr > twtr_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases watchdb > watchdb_backup.sql

mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia cbgc comicbookdb coupons healthcheck innodb_memcache landwatch laurelsprings ls_directory mem protech scripts stacy storelist testcert tony twtr watchdb > all_db_backup.txt

Now you have created a list of mysqldump commands that you can execute to dump all of your databases. You can now go ahead and execute your mysqldump commands by running the export_list.sh script:

# sh export_list.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

The message “Warning: Using a password on the command line interface can be insecure.” is shown because you included the value for “--password”. If you don’t want to put your password on the command line, just change that option to “-p”, and you will be prompted for your MySQL root user’s password for each mysqldump command.

Here is a list of the dump files that were produced:

# ls -l
total 21424
-rw-r--r--  1 root  staff    26690 Aug  1 16:25 12thmedia_backup.sql
-rw-r--r--  1 root  staff  5455275 Aug  1 16:26 all_db_backup.txt
-rw-r--r--  1 root  staff  1746820 Aug  1 16:25 cbgc_backup.sql
-rw-r--r--  1 root  staff   492943 Aug  1 16:25 comicbookdb_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 coupons_backup.sql
-rw-r--r--  1 root  staff     3366 Aug  1 16:25 export_list.sh
-rw-r--r--  1 root  staff     1077 Aug  1 16:25 healthcheck_backup.sql
-rw-r--r--  1 root  staff     3429 Aug  1 16:25 innodb_memcache_backup.sql
-rw-r--r--  1 root  staff  1815839 Aug  1 16:25 landwatch_backup.sql
-rw-r--r--  1 root  staff   642965 Aug  1 16:25 laurelsprings_backup.sql
-rw-r--r--  1 root  staff   660254 Aug  1 16:25 ls_directory_backup.sql
-rw-r--r--  1 root  staff     1037 Aug  1 16:25 mem_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 protech_backup.sql
-rw-r--r--  1 root  staff     2889 Aug  1 16:25 scripts_backup.sql
-rw-r--r--  1 root  staff    11107 Aug  1 16:25 stacy_backup.sql
-rw-r--r--  1 root  staff     4002 Aug  1 16:25 storelist_backup.sql
-rw-r--r--  1 root  staff     1062 Aug  1 16:25 testcert_backup.sql
-rw-r--r--  1 root  staff     4467 Aug  1 16:25 tony_backup.sql
-rw-r--r--  1 root  staff     1042 Aug  1 16:25 twtr_backup.sql
-rw-r--r--  1 root  staff    52209 Aug  1 16:25 watchdb_backup.sql
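As an optional sanity check (a sketch; it relies on the fact that, by default, mysqldump writes a "-- Dump completed" comment as the last line of a successful dump), you can flag any dump file that looks truncated:

```shell
# flag any dump file that does not end with mysqldump's
# "-- Dump completed" footer (written by default)
for f in *_backup.sql
do
  [ -f "$f" ] || continue
  tail -1 "$f" | grep -q 'Dump completed' || echo "WARNING: $f may be incomplete"
done
```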

You will now want to dump the mysql database, so that you don’t have to recreate all of the MySQL user information (the users, passwords and privileges) after the new install.

mysqldump -uroot --password=my_pass --set-gtid-purged=OFF mysql > mysql_user_backup.txt

I am once again using the .txt extension for this file, so that the import script will skip it.

After you execute the above command, make sure that the dump file was created:

# ls -l mysql_user_backup.txt
-rw-r--r--  1 root  staff  9672 Aug  1 16:32 mysql_user_backup.txt

We have now finished exporting all of our data, including the mysql database. You will need to shut down MySQL. You may use mysqladmin, as shown below, or shut down MySQL in whatever way you normally do.

# mysqladmin -uroot --password=my_pass shutdown
Warning: Using a password on the command line interface can be insecure.

Before continuing, you might want to check to make sure that the mysqld process isn’t still active.

# ps -ef|grep mysqld
    0 18380 17762   0   0:00.00 ttys002    0:00.00 grep mysqld

You are now going to want to change the name of your mysql directory. This will give you access to the old directory in case the upgrade fails. For my OS (Mac OS 10.9), my MySQL home directory is a symbolic link to another directory that contains the actual MySQL data. All I have to do is to remove the symbolic link. A new symbolic link will be created with the new install. Otherwise, just use the mv command to rename your old MySQL directory.

# cd /usr/local/
# ls -ld mysql* 
lrwxr-xr-x   1 root  wheel   36 Aug  9  2013 mysql -> mysql-advanced-5.6.17-osx10.6-x86_64
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

All I have to do is to remove the link, and the MySQL directory will still be there:

# rm mysql
# ls -ld mysql* 
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

Now I am ready to install the new version of MySQL. I won’t cover the installation process here; see the MySQL installation documentation for your platform.

Tip: After you have installed MySQL, don’t forget to run this script from your MySQL home directory. This will install your mysql database tables. Otherwise, you will get an error when you try to start the mysqld process.

# ./scripts/mysql_install_db

Now you can start the mysqld process. See the MySQL manual if you don’t know how to start MySQL.

You can test to see if the new installation of MySQL is running by either checking the process table, or logging into mysql. With a fresh install of 5.6, you should not have to include a user name or password.

Note: Future versions of MySQL may automatically create a random root password and put it in your data directory. You will then need to use that password to login to MySQL for the first time. Check the user’s manual for any MySQL versions beyond 5.6.

# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

Now that MySQL is up and running, leave the mysql terminal window open, and open another terminal window so you can import the mysql database information from the dump file we created earlier:

# mysql < mysql_user_backup.txt

You won't be able to login with your old user names and passwords until you execute the flush privileges command. So, in your other terminal window with the mysql prompt:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Open another terminal window and see if you can login with your old mysql user name and password:

# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

You can then look at the user names and passwords in the mysql.user table:

mysql> select user, host, password from mysql.user order by user, host;
+----------------+---------------+-------------------------------------------+
| user           | host          | password                                  |
+----------------+---------------+-------------------------------------------+
| root           | 127.0.0.1     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.2   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.5   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.50  | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | localhost     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
+----------------+---------------+-------------------------------------------+
5 rows in set (0.00 sec)


OPTIONAL:
Since I am using GTIDs for replication, I can check to see how many transactions have been completed, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000005
         Position: 644455
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: e1eb3f38-18da-11e4-aa44-0a1a64a61679:1-124
1 row in set (0.00 sec)

We are now ready to import the database dump files. We can use this script to create the import commands. Copy this into a text file named import.sh:

# import.sh
# script to build the import commands for the individual dump files
# run this script in the same directory as the exported dump files
#
# start with an empty import_files.sh
> import_files.sh

# the *.sql wildcard skips the .txt files (the all-databases dump
# and the mysql database dump)
for file in `ls *.sql`
do

echo "mysql -uroot -pmy_pass < $file"
echo "mysql -uroot -pmy_pass < $file" >> import_files.sh

done

Then run the import.sh script. The script will print the import commands to the terminal window, and also write them into a new script file named import_files.sh.

# sh import.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

Look at the contents of the new script file – import_files.sh – to make sure that it contains all of the database files. You will use this file to help you import your dump files.

# cat import_files.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

WARNING: Be sure that this script file does not contain the main dump file or the mysql user’s file that we created.


I was exporting and importing eighteen (18) database files, so I can also check the line count of the import_files.sh script to make sure it matches:

# wc -l import_files.sh
      18 import_files.sh
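You can also cross-check the two lists against each other (a small sketch, assuming list.txt is still in the working directory):

```shell
# one import command per database in list.txt
if [ "$(wc -l < import_files.sh)" -eq "$(wc -l < list.txt)" ]
then
  echo "import command count matches database count"
fi
```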

I am now ready to import my files.


Optional: add the -v for verbose mode – sh -v import_files.sh


# sh import_files.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

Your databases should now be imported into your new instance of MySQL. You can re-run the database-size query from the beginning of this post to make sure that the databases are the same size.
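If you saved the output of the size query before the upgrade, a diff of the two reports is a quick way to compare them (the file names here are just examples; adjust them to wherever you saved the output):

```shell
# compare the size reports taken before and after the upgrade
if diff db_sizes_before.txt db_sizes_after.txt
then
  echo "database sizes match"
fi
```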


OPTIONAL:
Since I am using GTIDs for replication, I can check to see how many transactions have been completed after importing the dump files, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000003
         Position: 16884001
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: cc68d008-18f3-11e4-aae6-470d6cf89709:1-43160
1 row in set (0.00 sec)

Your new and fresh installation of MySQL should be ready to use.

NOTE: A thank-you to Daniel Van Eeden for pointing out a mistake that I had made.

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Use MySQL to store NoSQL and SQL data in the same database using memcached and InnoDB

MySQL is a great relational database, but at some point someone (management) in your company is probably going to say that they need to use NoSQL to store their data. After all, NoSQL is one of the latest buzzwords, so it must be good (correct?). Basically, NoSQL allows you to store data without all of the characteristics of a relational database. A very simple explanation is that you store an entire data set under a single primary key, and that key is also how you retrieve the data. While NoSQL may be good in some cases, it is hard to beat “old-fashioned” SQL relational databases – especially if that is what you know. But, with MySQL and InnoDB, you can have the best of both worlds.

With MySQL version 5.6 (and above), you have the ability to store and retrieve NoSQL data, using NoSQL commands, while keeping the data inside a MySQL InnoDB database. So, you can use NoSQL and SQL at the same time, on the same data, stored in the same database. And the beauty is that it takes just a few minutes to set up. This post will provide you with a quick lesson on how to set up NoSQL on a MySQL InnoDB database.

I would suggest that you read this MySQL web page to get started – Getting Started with InnoDB Memcached Plugin. You should be able to follow this guide and have your NoSQL database up and running in no time at all.

NoSQL on MySQL uses memcached – a distributed memory object caching system. Currently, the memcached daemon plugin is only supported on Linux, Solaris, and Mac OS X platforms.

There are a few prerequisites. You must have the libevent 1.4.3 (or greater) libraries installed (they are not installed for you by the MySQL installer). Depending upon your operating system, you can use apt-get, yum, or port install. For example, on Ubuntu Linux:

sudo apt-get install libevent-dev

On the Mac, it takes a few more steps (I tested this with Mac OS 10.9 – Mavericks). To install libevent, you will need to install Xcode, the Xcode command line tools, and then Homebrew. You may install Xcode via the Apple App Store (and this is the most time-consuming part). Once Xcode is installed, from a command line type:

# xcode-select --install

This will prompt you to install the command-line tools. Then, you can easily install Homebrew via this command:

# ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Then install libevent via:

# brew install libevent

The libraries for memcached and the InnoDB plugin for memcached are put into the correct place by the MySQL installer. For a typical operation, the files lib/plugin/libmemcached.so and lib/plugin/innodb_engine.so are used.

You may check to make sure you have the libraries: (substitute $MYSQL_HOME for your mysql home directory)

# ls -l $MYSQL_HOME/lib/plugin/libmemcached.so
-rwxr-xr-x  1 mysql  wheel  195664 Mar 14 15:23 lib/plugin/libmemcached.so

# ls -l $MYSQL_HOME/lib/plugin/innodb_engine.so
-rwxr-xr-x  1 mysql  wheel  109056 Mar 14 15:23 lib/plugin/innodb_engine.so

To be able to use memcached so that it can interact with InnoDB tables, you will need to run a configuration script to install the tables necessary for use with memcached. This script is named innodb_memcached_config.sql, and it should be in your $MYSQL_HOME/share directory.

# cd $MYSQL_HOME
# ls -l share/innodb_memcached_config.sql
-rwxr-xr-x  1 mysql  wheel  3963 Mar 14 15:02 share/innodb_memcached_config.sql

To install the script, from the command line run:

# mysql -uroot -p < $MYSQL_HOME/share/innodb_memcached_config.sql

This script will install the three tables (cache_policies, config_options and containers) that it needs for the InnoDB/memcached mapping relationship, along with a sample table named demo_test, which is installed in the test database.

mysql> use innodb_memcache
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_innodb_memcache |
+---------------------------+
| cache_policies            |
| config_options            |
| containers                |
+---------------------------+
3 rows in set (0.01 sec)

mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| demo_test      |
+----------------+
1 row in set (0.00 sec)

The table that we need to use for the InnoDB mapping is the containers table. Here is the DESCRIBE statement for the containers table:

mysql> DESCRIBE innodb_memcache.containers;
+------------------------+--------------+------+-----+---------+-------+
| Field                  | Type         | Null | Key | Default | Extra |
+------------------------+--------------+------+-----+---------+-------+
| name                   | varchar(50)  | NO   | PRI | NULL    |       |
| db_schema              | varchar(250) | NO   |     | NULL    |       |
| db_table               | varchar(250) | NO   |     | NULL    |       |
| key_columns            | varchar(250) | NO   |     | NULL    |       |
| value_columns          | varchar(250) | YES  |     | NULL    |       |
| flags                  | varchar(250) | NO   |     | 0       |       |
| cas_column             | varchar(250) | YES  |     | NULL    |       |
| expire_time_column     | varchar(250) | YES  |     | NULL    |       |
| unique_idx_name_on_key | varchar(250) | NO   |     | NULL    |       |
+------------------------+--------------+------+-----+---------+-------+
9 rows in set (0.00 sec)

Here is some information about the container columns:

  • name: This is the name that identifies the mapping – think of it as the primary key for the memcached data collection. If you have an entry with the name default, that entry will be used by default; otherwise the first entry in the containers table is used. You can also specify this name value in a NoSQL statement.
  • db_schema: The InnoDB database name that you will use to store the data.
  • db_table: The InnoDB database table name that you will use to store the data.
  • key_columns: The column that you will use for the key value lookup. This field only contains one column (despite the plural name of key_columns).
  • value_columns: Data will be pulled from and/or stored to these columns. Use a separator (such as a pipe "|" symbol) to list multiple columns. For example, first|last would pull data from both a first and a last column.
  • flags: This column stores memcache flags, which is an integer that is used to mark rows for memcache operations.
  • cas_column and expire_time_column: These two columns are used for storing memcache compare-and-swap and expiration values. You can ignore these for now.
  • unique_idx_name_on_key: This is the name of the unique index that you will use for the key column, and you should use the primary key for the table. You should not use an auto-incrementing index, as you can’t insert into an auto-incrementing key.
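To illustrate the separator convention in value_columns (plain shell, purely for illustration – the memcached plugin does this splitting internally), a pipe-delimited value maps to the columns like this:

```shell
# split a pipe-delimited memcached value into its two mapped columns
value='James|Johnson'
first=${value%%|*}    # text before the first pipe
last=${value#*|}      # text after the first pipe
echo "first=$first last=$last"    # first=James last=Johnson
```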

NOTE: If you make any changes to the innodb_memcache.containers table, you will need to restart the plugin or restart mysqld.

The innodb_memcached_config.sql script inserted one line of data for us in the innodb_memcache.containers table:

mysql> select * from innodb_memcache.containers;
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| name | db_schema | db_table  | key_columns | value_columns | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| aaa  | test      | demo_test | c1          | c2            | c3    | c4         | c5                 | PRIMARY                |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
1 row in set (0.00 sec)

You can see the name aaa is mapped to the db_table (InnoDB table) demo_test. And the innodb_memcached_config.sql script also created this demo_test table for us to use, and it inserted one row of data. Here is the DESCRIBE statement for the demo_test table:

mysql> DESCRIBE test.demo_test;
+-------+---------------------+------+-----+---------+-------+
| Field | Type                | Null | Key | Default | Extra |
+-------+---------------------+------+-----+---------+-------+
| c1    | varchar(32)         | NO   | PRI |         |       |
| c2    | varchar(1024)       | YES  |     | NULL    |       |
| c3    | int(11)             | YES  |     | NULL    |       |
| c4    | bigint(20) unsigned | YES  |     | NULL    |       |
| c5    | int(11)             | YES  |     | NULL    |       |
+-------+---------------------+------+-----+---------+-------+
5 rows in set (0.00 sec)

And here is the row that was inserted:

mysql> select * from demo_test;
+----+--------------+------+------+------+
| c1 | c2           | c3   | c4   | c5   |
+----+--------------+------+------+------+
| AA | HELLO, HELLO |    8 |    0 |    0 |
+----+--------------+------+------+------+
1 row in set (0.00 sec)

Next you will need to install the memcached plugin. From a mysql prompt:

mysql> install plugin daemon_memcached soname "libmemcached.so";

Once the plugin is installed this way, it is automatically activated each time the MySQL server is booted or restarted.

To turn off the plugin, use this statement:

mysql> uninstall plugin daemon_memcached;

NOTE: You might need to restart mysqld before continuing.

Now we can start testing memcached with InnoDB. MySQL is now listening on the memcached port, 11211. We could try writing some code, but the easiest way is to just telnet to port 11211 and issue some NoSQL commands. We only have one row in the innodb_memcache.containers table, and only one row in the test.demo_test table. For this example, we are only going to use two NoSQL commands – get and set. The memcached protocol documentation contains the full list of commands.

Let’s telnet and use NoSQL to retrieve the data from the InnoDB demo_test table, which contained one row of data. The key for this one row is AA. We simply need to type get AA to retrieve the data:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get AA
VALUE AA 8 12
HELLO, HELLO
END

We can now insert a line of data. This is done with the set command. The syntax for inserting data is:

  • set – this is the command to store a value
  • XX – this is the value key
  • # – this is a reference to the flags that we will use
  • # – this is the expiration TTL (time to live)
  • # – the length of the string that we will insert/store
  • ABCDEF – the value to insert/store
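The length field must be the exact byte count of the value you are about to send. If you don’t want to count characters by hand, printf piped to wc will do it (printf is used so that no trailing newline is counted):

```shell
# byte count of the value to store - must match the length
# given in the set command
printf '%s' 'Goodbye, Goodbye' | wc -c    # prints 16
```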

In your telnet session, type this:

set BB 0 0 16
Goodbye, Goodbye
STORED

Now, let’s take a look at the InnoDB table, test.demo_test – and we can see our data:

mysql> select * from test.demo_test;
+----+------------------+------+------+------+
| c1 | c2               | c3   | c4   | c5   |
+----+------------------+------+------+------+
| AA | HELLO, HELLO     |    8 |    0 |    0 |
| BB | Goodbye, Goodbye |    0 |    1 |    0 |
+----+------------------+------+------+------+
2 rows in set (0.00 sec)

If we go back to the telnet session, we can also get the value for BB, which we inserted earlier via NoSQL:

get BB
VALUE BB 0 16
Goodbye, Goodbye
END
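Telnet is handy for poking around, but any language that can open a TCP socket speaks the same protocol. Below is a minimal sketch (the host, port, and helper names are assumptions for illustration) that sends a get and parses the VALUE reply; the parser can be exercised on its own without a running server:

```python
import socket

def parse_get_response(raw):
    """Parse 'VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n'.
    Returns (key, flags, value), or None on a cache miss."""
    if raw.startswith(b"END"):
        return None
    header, rest = raw.split(b"\r\n", 1)
    _, key, flags, nbytes = header.split()
    return key.decode(), int(flags), rest[:int(nbytes)].decode()

def memcached_get(key, host="localhost", port=11211):
    # Assumes the daemon_memcached plugin is listening on this host/port.
    with socket.create_connection((host, port)) as s:
        s.sendall(f"get {key}\r\n".encode())
        return parse_get_response(s.recv(4096))
```
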

Let’s create a new table, add the matching row to the innodb_memcache.containers table, and then insert (set) and read (get) some new data. Here is the CREATE TABLE statement for our new users table:

use test;
CREATE TABLE `users` (
 `user_id` varchar(32) NOT NULL DEFAULT '',
 `first` varchar(100) DEFAULT NULL,
 `last` varchar(100) DEFAULT NULL,
 `flags` int(11) DEFAULT '0',
 `cas` int(11) DEFAULT '0',
 `expiry` int(11) DEFAULT '0',
 PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And here is the SQL INSERT for the innodb_memcache.containers table, to establish the relationship between memcached and our new InnoDB table:

INSERT INTO `innodb_memcache`.`containers` 
(`name`, `db_schema`, `db_table`, `key_columns`, `value_columns`, `flags`, `cas_column`, `expire_time_column`, `unique_idx_name_on_key`)
VALUES ('default', 'test', 'users', 'user_id', 'first|last', 'flags','cas','expiry','PRIMARY');

NOTE: Since we changed the innodb_memcache.containers table, we will need to either restart mysqld or disable/enable the plugin (as shown above).

Now that we have restarted mysqld or the plugin, let’s insert some data into our new InnoDB table via NoSQL.

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
set jj1 0 0 13
James|Johnson
STORED
set jj2 0 0 12
John|Jackson
STORED
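Notice that each stored value packs both columns into one string. The plugin splits the value on the separator configured in innodb_memcache.config_options (| by default) and writes one piece per column named in value_columns. The mapping can be sketched like this (a simplified illustration, not the plugin’s actual code):

```python
def value_to_columns(value, value_columns, sep="|"):
    """Split a memcached value into the InnoDB columns named in
    value_columns ('first|last' in our containers row)."""
    cols = value_columns.split("|")
    return dict(zip(cols, value.split(sep)))

print(value_to_columns("James|Johnson", "first|last"))
```
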

We can now use SQL to query the test.users table to see the InnoDB data we just stored via NoSQL:

mysql> select * from users;
+---------+-------+-----------+-------+------+--------+
| user_id | first | last      | flags | cas  | expiry |
+---------+-------+-----------+-------+------+--------+
| jj1     | James | Johnson   |     0 |    1 |      0 |
| jj2     | John  | Jackson   |     0 |    2 |      0 |
+---------+-------+-----------+-------+------+--------+
2 rows in set (0.00 sec)

Let’s insert some data into the test.users table via the mysql client, and then retrieve the data via NoSQL.

mysql> INSERT INTO test.users (user_id, first, last, flags, cas, expiry) VALUES ('bb1', 'Beth', 'Brown', '0', '3', '0');
Query OK, 1 row affected (0.00 sec)

Retrieve the data via NoSQL:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get bb1
VALUE bb1 0 10
Beth|Brown
END

We now have a way to use NoSQL via memcached and at the same time use good old-fashioned SQL statements on the same data via InnoDB and MySQL. It is the best of both worlds!

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Using mysqldump and the MySQL binary log – a quick guide on how to backup and restore MySQL databases

Be sure to check out my other posts on mysqldump:
Scripting Backups of MySQL with Perl via mysqldump
Splitting a MySQL Dump File Into Smaller Files Via Perl
Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 1 of 2
Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 2 of 2


I have already written several posts on using mysqldump. This post will be a quick guide to using mysqldump to backup and restore your databases. And we will look at using the MySQL binary log (binlog) along with mysqldump for point-in-time recovery. If your database is extremely large, and for databases in a production environment, you might want to consider using MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition.

For those of you that aren’t familiar with mysqldump:

The mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

The best feature of mysqldump is that it is easy to use. The main problem arises when you need to restore a database. When you execute mysqldump, the database backup (output) is an SQL file that contains all of the statements necessary to restore the database – restoring means executing those SQL statements to essentially rebuild the database. Since you are recreating your database, the tables, and all of your data from this file, the restoration procedure can take a long time if you have a very large database.


NOTE: If you are using GTIDs (global transaction identifiers) in your database, you will need to include the --set-gtid-purged=OFF option, otherwise you will receive this warning:

Warning: A partial dump from a server that has GTIDs will by default include the
GTIDs of all transactions, even those that changed suppressed parts of the database.
If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete 
dump, pass --all-databases --triggers --routines --events. 

For these examples, I will not include the --set-gtid-purged=OFF option.


Dumping and making a copy of a single database

To dump/backup a single database:

mysqldump -uroot -p database_name > db_dump.sql

To load the dump file back into mysql, you will need to create the new database first. If you use the --databases option before the database name, mysqldump will also dump the CREATE DATABASE and USE statements that you need prior to inserting the data from the dump.

You can either use mysqladmin to create the new database, or create it from a MySQL prompt:

# mysqladmin create new_database_name

mysql> CREATE DATABASE new_database_name;

Next, you can simply import the dump file into mysql.

# mysql new_database_name < db_dump.sql

You can also use the dump file to move the database to another server. If you did not use the --databases option, then you will need to create the database first.
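The three steps – dump, create, load – are easy to script. Here is a sketch that just assembles the command strings (the database names and user are placeholders; you would run these through your shell or a scheduler):

```python
import shlex

def dump_and_restore_commands(db, new_db, user="root"):
    """Return the shell commands to dump one database and load it
    into a freshly created database."""
    dump_file = f"{db}_dump.sql"
    return [
        f"mysqldump -u{user} -p {shlex.quote(db)} > {dump_file}",
        f"mysqladmin -u{user} -p create {shlex.quote(new_db)}",
        f"mysql -u{user} -p {shlex.quote(new_db)} < {dump_file}",
    ]

for cmd in dump_and_restore_commands("sales", "sales_copy"):
    print(cmd)
```
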

Dumping events, routines, triggers

mysqldump can also dump Event Scheduler events, stored procedures, and functions. To include them, use these options:

--routines – dumps stored procedures and functions
--events – dumps Event Scheduler events
--triggers – dumps triggers

When you use mysqldump, triggers are dumped by default. If you want to disable any of these, use the corresponding “skip” option: --skip-events, --skip-routines, or --skip-triggers.

Only dump table definitions

If you want to just dump the CREATE TABLE statements without any data, use the --no-data option.

# mysqldump --no-data database_name > db_definitions_dump.sql

You might want to add the --routines and --events options to include stored routines and event definitions.

# mysqldump --no-data --routines --events database_name > db_definitions_dump.sql

Only dump the data

If you want to just dump the data without any table definitions, you can use the --no-create-info option:

# mysqldump --no-create-info database_name > db_data_dump.sql

Using mysqldump to test a new version of MySQL

Let’s say that you need to upgrade to a new version of MySQL. To test whether your database objects are handled properly by the new version, you can simply dump the data definitions and import them into the new version of MySQL (preferably on a separate computer).

On the computer with the old version of MySQL:

mysqldump --all-databases --no-data --routines --events > db_definitions.sql

Then, on the upgraded server, you can just import this dump file:

mysql -uroot -p < db_definitions.sql

This will help you spot any potential incompatibilities with the new version of MySQL. If you don’t receive any errors, you can then dump the data and load into the new server. Be sure to run some test queries on the new upgraded server as well.

Point-in-time recovery using mysqldump and the binary logs

The MySQL binary logs (binlogs) contain all of the SQL statements or “events” that could change the database (I say “could” because a delete statement that does not delete any rows would still be entered into the binary log – unless you are using row-based logging).

For more information about the binary log, see: http://dev.mysql.com/doc/refman/5.6/en/binary-log.html.

Since the binlog contains all of the events that happen to the database, you can use the binlog to apply these same changes to a different database. If you started your MySQL instance with the binlogs enabled, and you have never flushed the binlogs, then the binlogs contain all of the SQL statements for all of the data that is in your database. The binlog itself is like a backup of your database.

If you want to use the binary logs in addition to mysqldump to restore your database, you need to have the binary logs (binlogs) enabled. There are many options for the binlogs (see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html), but the only two that you really need for this example are:

 --log-bin[=base_name]
 --log-bin-index[=file_name]

One other option to consider is --binlog-format. You can set this value to STATEMENT (the default), ROW, or MIXED. For more information about these options, see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#sysvar_binlog_format.

These variables need to go into your my.cnf or my.ini file under the [mysqld] section, and this will require a restart of mysqld.

Once you have the binary log enabled, you will need to do a few things differently when you use mysqldump. You will need to:

- flush the tables and place a READ lock on the tables
- check which binlog file is currently being written
- check the current position within that binlog
- dump your data with mysqldump
- release the lock

By placing a read lock on the tables, you stop anyone from modifying the data in the database. The binlog file name and position allow you to use the binary logs to restore any statements that were executed after the mysqldump. Open two terminal windows – one with a MySQL prompt, and another with a root shell prompt:

In the MySQL prompt, issue the READ lock and SHOW MASTER STATUS:

mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.47 sec)

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000008
         Position: 191
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Now you are ready to dump the database with whatever options you need:

# mysqldump --all-databases > db_000008_191_dump.sql

Once the dump has finished, you can release the lock:

mysql> UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)

It is extremely important that you write down the binlog file and position from the SHOW MASTER STATUS statement and somehow associate it with the dump file. One way to do this is to insert the binlog file name and position into the dump file name. In my example above, I did this by adding both to the dump file name db_000008_191_dump.sql.
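That naming convention is easy to automate. A tiny helper (a hypothetical sketch that matches the file name used above):

```python
def dump_file_name(binlog_file, position, prefix="db"):
    """Embed the binlog sequence number and position in the dump file
    name, e.g. mysql-bin.000008 at position 191 -> db_000008_191_dump.sql."""
    seq = binlog_file.rsplit(".", 1)[1]
    return f"{prefix}_{seq}_{position}_dump.sql"

print(dump_file_name("mysql-bin.000008", 191))
```
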

When you have to restore the database, you will need to load the dump file first, and then apply the binlog(s) to the database.

Let’s assume that we need to restore the entire database. First, we will import the dump file:

# mysql -uroot -p < db_000008_191_dump.sql

Next, you will need to load the information from the binlog(s) using the mysqlbinlog utility. You can check your MySQL data directory to see how many binlogs have been created since the one shown in the SHOW MASTER STATUS output:

$ ls -l mysql-bin*
-rw-rw----  1 mysql  wheel     67110 Apr  4 16:22 mysql-bin.000001
-rw-rw----  1 mysql  wheel   1230893 Apr  4 16:24 mysql-bin.000002
-rw-rw----  1 mysql  wheel  13383907 Apr  4 17:03 mysql-bin.000003
-rw-rw----  1 mysql  wheel  13383907 Apr  4 19:03 mysql-bin.000004
-rw-rw----  1 mysql  wheel  13383907 Apr  4 19:07 mysql-bin.000005
-rw-rw----  1 mysql  wheel  13383907 Apr 18 16:48 mysql-bin.000006
-rw-rw----  1 mysql  wheel  13383907 Apr 21 13:37 mysql-bin.000007
-rw-rw----  1 mysql  wheel  13383907 Apr 21 13:37 mysql-bin.000008
-rw-rw----  1 mysql  wheel    154847 Apr 21 13:37 mysql-bin.000009
-rw-rw----  1 mysql  wheel       171 Apr 21 13:37 mysql-bin.index

You can also just look at the mysql-bin.index file (located in your MySQL data directory), which contains a list of all of the binary files in use:

 # cat mysql-bin.index
./mysql-bin.000001
./mysql-bin.000002
./mysql-bin.000003
./mysql-bin.000004
./mysql-bin.000005
./mysql-bin.000006
./mysql-bin.000007
./mysql-bin.000008
./mysql-bin.000009

In this example, we will need to apply the changes from the binlog file mysql-bin.000008 after position 191, and then all of the mysql-bin.000009 binlog. You will need to add the correct data directory PATH to your mysqlbinlog statement.

mysqlbinlog --start-position=191 $DATA_DIR_PATH/mysql-bin.000008 | mysql -u root -p

After you have inserted all of mysql-bin.000008 after position 191, you can insert the entire mysql-bin.000009 binlog file:

mysqlbinlog $DATA_DIR_PATH/mysql-bin.000009 | mysql -u root -p

Note: During the restore process, you do not want anyone inserting any data into the database.
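The replay logic – start at the recorded position in the first binlog, then apply every later binlog in full – can be sketched as a small command generator (paths and names are illustrative; verify the list against mysql-bin.index before running anything):

```python
def replay_commands(binlogs, start_file, start_position, data_dir="."):
    """Build the mysqlbinlog commands that replay everything recorded
    after the dump's binlog coordinates."""
    idx = binlogs.index(start_file)
    cmds = [f"mysqlbinlog --start-position={start_position} "
            f"{data_dir}/{start_file} | mysql -u root -p"]
    for f in binlogs[idx + 1:]:
        cmds.append(f"mysqlbinlog {data_dir}/{f} | mysql -u root -p")
    return cmds

for cmd in replay_commands(["mysql-bin.000008", "mysql-bin.000009"],
                           "mysql-bin.000008", 191):
    print(cmd)
```
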

Your database should now be back to the state when the database crashed or was deleted. It isn’t mandatory, but it is also a good idea to copy the binlogs to a separate location as part of your backup routine. You can use mysqlbinlog to do this as well – see: http://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html.

For more information on using the binary logs for point-in-time recovery, see https://dev.mysql.com/doc/refman/5.6/en/point-in-time-recovery.html. There are a lot of other options for using binlogs. The best option for backing up and restoring your MySQL database is to use the MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition subscription, which includes 24×7 MySQL Support and the other Enterprise features.

 



Installing and testing the MySQL Enterprise Audit plugin

MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.

(from http://www.mysql.com/products/enterprise/)

MySQL Enterprise Audit provides an easy to use, policy-based auditing solution that helps organizations to implement stronger security controls and to satisfy regulatory compliance.

As more sensitive data is collected, stored and used online, database auditing becomes an essential component of any security strategy. To guard against the misuse of information, popular compliance regulations including HIPAA, Sarbanes-Oxley, and the PCI Data Security Standard require organizations to track access to information.

Download the White Paper: MySQL Enterprise Edition Product Guide

To meet these demands, organizations must be able to produce an audit trail of information to help track who does what to which piece of data. This includes login and logoff attempts, attempts to access a database or a table, changes to database schema and much more.

MySQL Enterprise Audit gives DBAs the tools they need to add auditing compliance to their new and existing applications by enabling them to:

  • Dynamically enable/disable audit stream
  • Implement policies that log all or selected login or query activities
  • Automatically rotate audit log files based on size
  • Integrate XML-based audit log stream with MySQL, Oracle and other third party solutions

(from http://www.mysql.com/products/enterprise/audit.html)

MySQL Enterprise Audit is implemented using a server plugin. MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out of box, easy to use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines.

When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.

(from http://dev.mysql.com/doc/refman/5.6/en/mysql-enterprise-audit.html)

Installing and testing the MySQL Enterprise Audit plugin
Manual: http://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html

Check that Enterprise Edition is installed:

MySQL Enterprise Audit requires MySQL Enterprise Edition, and is not available in the Community Edition. After you have installed MySQL Enterprise Edition from http://edelivery.oracle.com, you can check from mysql to make sure that you have the correct version:

mysql> SHOW VARIABLES LIKE 'version%';
+-------------------------+---------------------------------------------------------+
| Variable_name           | Value                                                   |
+-------------------------+---------------------------------------------------------+
| version                 | 5.6.14-enterprise-commercial-advanced-log               |
| version_comment         | MySQL Enterprise Server - Advanced Edition (Commercial) |
| version_compile_machine | x86_64                                                  |
| version_compile_os      | osx10.7                                                 |
+-------------------------+---------------------------------------------------------+
4 rows in set (0.00 sec)

Before you attempt to install the plugin, you may verify that the audit log plugin is in the plugin directory:

In the my.cnf or my.ini file, check for the location of the plugin directory (plugin_dir).

[mysqld]
plugin_dir=/path/to/plugin/directory

If the plugin directory value is not located in the my.cnf or my.ini file, check the location from within mysql:

mysql> SHOW VARIABLES LIKE 'plugin_dir';
+---------------+------------------------------+
| Variable_name | Value                        |
+---------------+------------------------------+
| plugin_dir    | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+
1 row in set (0.00 sec)

Check to see if the audit log plugin (audit_log.so) is in the plugin directory:

$ ls -l /usr/local/mysql/lib/plugin/audit_log.so
-rwxr-xr-x+ 1 _mysql  wheel  38828 Sep 10 03:58 /usr/local/mysql/lib/plugin/audit_log.so

Installing the Enterprise Audit plugin:

To load the plugin at server startup, use the --plugin-load option to name the object file that contains the plugin. With this plugin-loading method, the option must be given each time the server starts. You may also put the following lines in your my.cnf file:

[mysqld]
plugin-load=audit_log.so

If object files have a suffix different from .so on your system, substitute the correct suffix (for example, .dll on Windows). Alternatively, to register the plugin at runtime, use this statement (changing the suffix as necessary):

mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';

INSTALL PLUGIN loads the plugin, and also registers it in the mysql.plugin table so that the plugin is loaded on each subsequent normal server startup.

If the plugin is loaded with --plugin-load or has been previously registered with INSTALL PLUGIN, you can use the --audit-log option at server startup to control plugin activation. For example, to load the plugin and prevent it from being removed at runtime, use these options:

[mysqld]
plugin-load=audit_log.so
audit-log=FORCE_PLUS_PERMANENT

To prevent the server from running without the audit plugin, use --audit-log with a value of FORCE or FORCE_PLUS_PERMANENT to force server startup to fail if the plugin does not initialize successfully.

Optional audit variables: http://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin-options-variables.html

To verify that the audit_log was loaded successfully:

mysql> SHOW VARIABLES LIKE 'audit_log%';
+---------------------------+--------------+
| Variable_name             | Value        |
+---------------------------+--------------+
| audit_log_buffer_size     | 1048576      |
| audit_log_file            | audit.log    |
| audit_log_flush           | OFF          |
| audit_log_format          | OLD          |
| audit_log_policy          | ALL          |
| audit_log_rotate_on_size  | 0            |
| audit_log_strategy        | ASYNCHRONOUS |
+---------------------------+--------------+
7 rows in set (0.00 sec)

or

mysql> SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'audit_log'\G
*************************** 1. row ***************************
           PLUGIN_NAME: audit_log
        PLUGIN_VERSION: 0.1
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: AUDIT
   PLUGIN_TYPE_VERSION: 3.1
        PLUGIN_LIBRARY: audit_log.so
PLUGIN_LIBRARY_VERSION: 1.4
         PLUGIN_AUTHOR: Oracle
    PLUGIN_DESCRIPTION: Auditing events logger
        PLUGIN_LICENSE: PROPRIETARY
           LOAD_OPTION: ON
1 row in set (0.02 sec)

Testing Enterprise Audit

Check to see if the audit log file was created. The default location is in the MySQL data directory. To locate the data directory:

mysql> SHOW VARIABLES WHERE Variable_Name = 'datadir';
+---------------+---------------------------------------------+
| Variable_name | Value                                       |
+---------------+---------------------------------------------+
| datadir       | /volumes/wv_server_raid_01/mysql_data/data/ |
+---------------+---------------------------------------------+
1 row in set (0.00 sec)

Then list that file to see if it was created:

# ls -l /volumes/wv_server_raid_01/mysql_data/data/audit.log
-rw-rw----  1 mysql  mysql  0 Mar  6 20:07 audit.log

Now, let’s test the plugin. You will need to create a user and a test database to use: (you may need to change the permissions to fit your testing scenario)

mysql> CREATE USER 'audit_test_user'@'localhost' IDENTIFIED BY 'audittest123';
Query OK, 0 rows affected (0.49 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'audit_test_user'@'localhost';
Query OK, 0 rows affected (0.02 sec)

mysql> create database audit_test;
Query OK, 1 row affected (0.00 sec)

Once the user has been created, erase the contents of the log file so you can only see the audit trail of this user:

# > /volumes/wv_server_raid_01/mysql_data/data/audit.log

Next, login with the audit_test_user:

# mysql -uaudit_test_user -paudittest123
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 375
Server version: 5.6.14-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

(You may ignore the warning about using the password on the command line)

As the audit test user, create a simple table in the audit_test database, populate the table, and perform a select:

mysql> use audit_test;
Database changed

mysql> CREATE TABLE audit_test_table (firstname VARCHAR(20), lastname VARCHAR(20));
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO audit_test_table values ("John", "Smith");
Query OK, 1 row affected (0.47 sec)

mysql> select * from audit_test_table;
+------------+----------+
| firstname  | lastname |
+------------+----------+
| John       | Smith    |
+------------+----------+
1 row in set (0.00 sec)

You may now inspect the contents of the /usr/local/mysql/data/audit.log file:
(this output has been manually formatted for easier viewing)


# cat /usr/local/mysql/data/audit.log
<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:49 UTC"
RECORD_ID="13376_2014-03-06T01:18:10"
NAME="Connect"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="connect"
PRIV_USER="audit_test_user"
PROXY_USER=""
DB=""/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:49 UTC"
RECORD_ID="13377_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="select @@version_comment limit 1"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:56 UTC"
RECORD_ID="13378_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="SELECT DATABASE()"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:56 UTC"
RECORD_ID="13379_2014-03-06T01:18:10"
NAME="Init DB"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS=""/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:00 UTC"
RECORD_ID="13380_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="create_table"
SQLTEXT="CREATE TABLE audit_test_table (firstname VARCHAR(20), lastname VARCHAR(20))"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:14 UTC"
RECORD_ID="13382_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="insert"
SQLTEXT="INSERT INTO audit_test_table values (&quot;John&quot;, &quot;Smith&quot;)"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:23 UTC"
RECORD_ID="13383_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="select * from audit_test_table"/>
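Once the log starts growing, you will probably want to summarize it rather than read raw records. Here is a rough sketch (regex-based; a production tool should use a real XML parser on the OLD-format log) that pulls the command name and SQL text out of each record:

```python
import re

def summarize_audit_records(log_text):
    """Extract (NAME, SQLTEXT) pairs from audit.log records so you can
    scan what a session actually executed."""
    out = []
    for record in re.findall(r"<AUDIT_RECORD(.*?)/>", log_text, re.S):
        name = re.search(r'NAME="([^"]*)"', record)
        sql = re.search(r'SQLTEXT="([^"]*)"', record)
        out.append((name.group(1) if name else None,
                    sql.group(1) if sql else None))
    return out
```
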

MySQL Enterprise Audit is now configured and ready to use. To stop Enterprise Audit, issue this command:

mysql> UNINSTALL PLUGIN audit_log;

This command will fail if the plugin was started with audit-log=FORCE_PLUS_PERMANENT.

Removing the test objects

To remove the audit_test_user user and drop the audit_test database:

DROP USER 'audit_test_user'@'localhost';
FLUSH PRIVILEGES;
DROP DATABASE audit_test;

 



MySQL Enterprise Monitor – send advisor events to your chat client with Perl and Jabber

MySQL Enterprise Monitor (MEM) is part of the MySQL Enterprise Edition, and MEM provides real-time visibility into the performance and availability of all your MySQL databases. MEM and the MySQL Query Analyzer continuously monitor your databases and alert you to potential problems before they impact your system. It’s like having a “Virtual DBA Assistant” at your side to recommend best practices to eliminate security vulnerabilities, improve replication, optimize performance and more. As a result, the productivity of your developers, DBAs and System Administrators is improved significantly.

With MEM, you have a couple of notification options for receiving information when MEM has received an event alert. An event alert is a “significant deviation from the baseline performance trends, and best-practice Advisors recommended changes to configuration and variable settings to improve performance”. From: http://www.mysql.com/products/enterprise/monitor.html

You may choose to receive these alerts via email or SNMP. I am not going to go over how to set up these alerts – instructions may be found in the MEM documentation.

I have MEM installed on my home server, and since I have been using it for a while, I have been able to tune my instances and databases so that I rarely receive any alerts. I normally receive these alerts via email, but I don’t always check email for this particular account during the day. Since most of my colleagues are in other parts of the world, we use a Jabber chat client for quick communications during the day. I wanted to figure out a way for me to receive a chat message whenever MEM had an alert. For my chat client, I use adium – which is an open-source multi-protocol instant messaging client for Mac OS X – it supports MSN, Jabber, Yahoo! and other networks. But this should work with any XMPP-based chat client application.

You will probably want to create a new POP3 email address for these alerts, as this script will delete the email messages from the server. If you want to keep an email copy of these alerts, you may add an additional email address to the alert notification group. If you use a free email service (like gmail), remember that it has a daily limit of the number of messages that you can send. I ran into this problem when testing the scripts, so I created my own email under one of my domains. I found two Perl scripts; one that acts as a POP3 client, and another that sends the Jabber message – and I combined them into one Perl script. You might need to do a little tweaking to get it to work with your service.

This is a sample email message from MEM. The first “Subject” line is the actual email subject. In MEM, you can customize what information you want in your email subject line via Email Notification Groups. I combined several parts of the email message into one message to be sent via chat.


(Email Subject Line)
(macserver01, MacServer01) - MEM WARNING Alert: User Has Rights To Database That Does Not Exist

(Email Body)

Subject: macserver01, MacServer01
Time: 2013-12-30 17:01:44 UTC (2013-12-30 12:01:44 EST)
Category: Security
Event: User Has Rights To Database That Does Not Exist
Problem Description
When a database is dropped, user privileges on the database are not automatically dropped. This has security implications as that user will regain privileges if a database with the same name is created in the future, which may not be the intended result.
Advice
Revoke privileges for all users on any databases that do not exist. The following users have privileges on databases that do not exist.

    ''@'%' on database test_%

Recommended Action
REVOKE ALL PRIVILEGES ON db_name.* FROM 'user_name'@'host_name';
Links and Further Reading
MySQL Manual: DROP DATABASE Syntax
MySQL Manual: GRANT Syntax
MySQL Manual: REVOKE Syntax
MySQL Manual: Privileges Provided by MySQL
MySQL Manual: How the Privilege System Works
Securing Your MySQL Installation
Securing a MySQL Server on Windows
Expression
%user% != THRESHOLD
Evaluated Expression

    ''@'%' on database test_% != ''

Copyright © 2005, 2013, Oracle and/or its affiliates. All rights reserved.

And here is the script. You may run it as a cron job, where it will check your email every few minutes and then send you a chat message when an email with an alert has arrived. You will need to modify the script to match your email and chat settings, and I have placed notes in the script to help guide you:

#!/usr/bin/perl -w

# POP3 client script source:
# http://forums.devshed.com/perl-programming-6/how-to-get-gmail-mail-by-mail-pop3client-in-perl-555889.html
# Author: a user named keath

# Jabber message script source: 
# http://dipinkrishna.com/blog/2010/12/perl-send-chat-message-gmail-buddy-jabber/
# Author: Dipin Krishna

use strict;
# for the email
use Mail::POP3Client;
# for the chat
use Net::Jabber;

# I was having a problem with the Debug module, so I just
# commented line 154 in /Library/Perl/5.16/Net/XMPP/Debug.pm

# this is the email address that you want to use to receive 
# the alert messages from MySQL Enterprise Monitor
my $user = 'MEMalerts@scriptingmysql.com';
my $pass = 'mypassword';

# you will need to use your POP3 server name
my $host = "pop.emailserver.com";

my $pop = new Mail::POP3Client(
	USER     => $user,
	PASSWORD => $pass,
	HOST     => $host,
	PORT     => 995,
	USESSL   => 'true',
);

# I have commented most of the print messages
# - you may uncomment them as you wish
my $count = $pop->Count();
if ($count < 0) {
	# a negative count means the connection or login failed
	print $pop->Message();
} elsif ($count == 0) {
	print "no messages\n";
} else {
	
	#print "$count messsages\n\n";
	
	for my $i (1 .. $count) {
	
	my $subject = "";
	my $message = "";
	
		# if you want to extract data from the head of the email
		# I am extracting data from the body
		#foreach ($pop->Head($i)) {
		foreach ($pop->Body($i)) {

			#print "$_\n" if /^(From|Subject|Date):/i;
			#print "$_\n" if /^(Subject|Date):/i;
			# my $message = "$_\n" if /^(Subject):/i;

			# I am building my message so that it contains the information in this order:
			# Category, Subject, Event

			if ($_ =~ "^Subject")
			{
				#print "$_\n";
				chomp $_;
				$subject = $_;
				$subject =~ s/Subject: //;
 			}

			if ($_ =~ "^Category")
			{
				#print "$_\n";
				chomp $_;
				$message = "$_ || $subject";
				$message =~ s/Category: //;
 			}

			if ($_ =~ "^Event")
			{
				#print "$_\n";
				chomp $_;
				
				$message = "$message || $_";
				$message =~ s/Event: //;
  						
					my $sttime=time;
						
					#print "Message: $_\n";
						
					# this is my Google Talk chat user name and password
					my $username = 'mem.scripting.mysql';
					my $password = 'mypassword';
					my $to = 'my_email_address';
					my $msg = "$message";
					#print "$to: $msg\n";
						 
					my $resource = "dipin";
					my $hostname = 'talk.google.com';
					my $port = 5222;
					my $componentname = 'gmail.com';
					my $Contype = 'tcpip';
					my $tls = 1;
						 
					my $Con = new Net::Jabber::Client();
					$Con->SetCallBacks(presence=>\&presence,
					message=>\&message );
						 
					my $status = $Con->Connect(
					hostname => $hostname, port => $port,
					componentname => $componentname,
					connectiontype => $Contype, tls => $tls);
						 
					if (!(defined($status))) {
						print "ERROR:  XMPP connection failed.\n";
						print "        ($!)\n";
						exit(0);
					} 
						 
					# Change hostname
					my $sid = $Con->{SESSION}->{id};
					$Con->{STREAM}->{SIDS}->{$sid}->{hostname} = $componentname;
						 
					# Authenticate
						
					#my @result = $Con->AuthSend(
					my @result = $Con->AuthIQAuth(
					username => $username, password => $password,
					resource => $resource);
						
					#print "Result:  $result[0] $result[1]\n";
						 
					if ($result[0] ne "ok") {
						print "ERROR: Authorization failed: $result[0] - $result[1]\n";
					}
					else
					{
						#print "Logged in successfully!\n";
						$Con->PresenceSend(show=>"Available");
						#print "Sending Message!\n";
						$Con->MessageSend(to=>"$to",
						subject=>"Test",
						body=>"$msg\n",
						priority=>10);
					}
						
						# END send Jabber message
 						# # # # # # # # # # # # # # # # # # # # # # # # # # #

				# this deletes the message from the server
				$pop->Delete( $i );
				
				# if you only want to send one email message as a test, 
				# uncomment this line
				exit;
			}
		}
		print "\n";
	}
}

$pop->Close();

exit;

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 1 of 2

Be sure to check out my other posts on mysqldump:
Scripting Backups of MySQL with Perl via mysqldump
Splitting a MySQL Dump File Into Smaller Files Via Perl

Part 1 of 2: (part two)
If you have used MySQL for a while, you have probably used mysqldump to backup your database. In part one of this blog, I am going to show you how to create a simple full and partial backup using mysqldump. In part two, I will show you how to use MySQL Enterprise Backup (which is the successor to the InnoDB Hot Backup product). MySQL Enterprise Backup allows you to backup your database while it is online and it keeps the database available to users during backup operations (you don’t have to take the database offline or lock any databases/tables).

This post will deal with mysqldump. For those of you that aren’t familiar with mysqldump:

The mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

The best feature of mysqldump is that it is easy to use. The main problem with mysqldump occurs when you need to restore a database. When you execute mysqldump, the database backup (output) is an SQL file that contains all of the statements necessary to restore the database – but restoring requires that you execute these SQL statements to essentially rebuild the database. Since you are recreating the databases, tables and all of your data from this file, the restoration procedure can take a long time to execute if you have a very large database.

There are a lot of features and options with mysqldump – (a complete list is here). I won’t review all of the features, but I will explain some of the ones that I use.

If you have InnoDB tables (InnoDB is the default storage engine as of MySQL 5.5 – replacing MyISAM), when you use mysqldump you will want to use the --single-transaction option, or issue the command FLUSH TABLES WITH READ LOCK; in a separate terminal window before you use mysqldump and release the lock after the dump has completed with the UNLOCK TABLES; command. The --single-transaction option dumps all tables within a single consistent snapshot, so it does not need to hold a lock for the duration of the dump; when combined with --master-data, a global read lock is acquired only at the beginning of the dump, and as soon as the binary log coordinates have been read, the lock is released. If long-updating statements are running when the FLUSH statement is issued, the MySQL server may stall until those statements finish. After that, the dump becomes lock-free and does not disturb reads and writes on the tables. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.
(from http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html)

Here is the command to use mysqldump to simply backup all of your databases (assuming you have InnoDB tables). This command will create a dump (backup) file named all_databases.sql.

mysqldump --all-databases --single-transaction --user=root --pass > all_databases.sql

After you hit return, you will have to enter your password. You can include the password after the --pass option (example: --pass=my_password), but this is less secure and you will get the following warning:

Warning: Using a password on the command line interface can be insecure.

Here is some information about the options that were used:

--all-databases - this dumps all of the tables in all of the databases
--user - The MySQL user name you want to use for the backup
--pass - The password for this user.  You can leave this blank or include the password value (which is less secure)
--single-transaction - for InnoDB tables

If you are using Global Transaction Identifiers (GTIDs) with InnoDB (GTIDs aren’t available with MyISAM), you will want to use the --set-gtid-purged=OFF option. Then you would issue this command:

mysqldump --all-databases --single-transaction --set-gtid-purged=OFF --user=root --pass > all_databases.sql

Otherwise you will see this warning:

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.

You can also execute a partial backup of all of your databases. This example will be a partial backup because I am not going to back up the default MySQL databases (which are created during installation): mysql, test, performance_schema and information_schema.

Note: mysqldump does not dump the INFORMATION_SCHEMA database by default. To dump INFORMATION_SCHEMA, name it explicitly on the command line and also use the --skip-lock-tables option.

mysqldump never dumps the performance_schema database.

mysqldump also does not dump the MySQL Cluster ndbinfo information database.

Before MySQL 5.6.6, mysqldump does not dump the general_log or slow_query_log tables for dumps of the mysql database. As of 5.6.6, the dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

If you encounter problems backing up views due to insufficient privileges, see Section E.5, “Restrictions on Views” for a workaround.
(from: http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html)

To do a partial backup, you will need a list of the databases that you want to backup. You may retrieve a list of all of the databases by simply executing the SHOW DATABASES command from a mysql prompt:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| comicbookdb        |
| coupons            |
| mysql              |
| performance_schema |
| scripts            |
| test               |
| watchdb            |
+--------------------+
8 rows in set (0.00 sec)
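If you prefer to build this list from the shell, the key step is just filtering out the default schemas. Here is a minimal sketch; to keep it self-contained, the SHOW DATABASES output from above is simulated with printf, but in practice you would pipe the output of `mysql -uroot -p -N -e "SHOW DATABASES"` through the same grep (the -N flag suppresses the column header):

```shell
# simulated output of SHOW DATABASES (one database name per line);
# the grep filter drops the default system schemas, leaving the
# databases to include in the partial backup
databases=$(printf 'information_schema\ncomicbookdb\ncoupons\nmysql\nperformance_schema\nscripts\ntest\nwatchdb\n')
echo "$databases" | grep -Ev '^(information_schema|performance_schema|mysql|test)$'
# prints:
#   comicbookdb
#   coupons
#   scripts
#   watchdb
```

You could capture that output into a variable and pass it straight to mysqldump after the --databases option.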

In this example, since I don’t want to backup the default mysql databases, I am only going to backup the comicbookdb, coupons, scripts and watchdb databases. I am going to use the following options:

--databases - This allows you to specify the databases that you want to backup.  You can also specify certain tables that you want to backup.  If you want to do a full backup of all of the databases, then leave out this option
--add-drop-database - This will insert a DROP DATABASE statement before each CREATE DATABASE statement.  This is useful if you need to import the data to an existing MySQL instance where you want to overwrite the existing data.  You can also use this to import your backup onto a new MySQL instance, and it will create the databases and tables for you.
--triggers - this will include the triggers for each dumped table
--routines - this will include the stored routines (procedures and functions) from the dumped databases
--events - this will include any events from the dumped databases
--set-gtid-purged=OFF - since I am using replication on this database (it is the master), I like to include this in case I want to create a new slave using the data that I have dumped.  This option enables control over global transaction identifiers (GTID) information written to the dump file, by indicating whether to add a SET @@global.gtid_purged statement to the output.
--user - The MySQL user name you want to use
--pass - Again, you can add the actual value of the password (ex. --pass=mypassword), but it is less secure than typing in the password manually.  This is useful for when you want to put the backup in a script, in cron or in Windows Task Scheduler.
--single-transaction - Since I am using InnoDB tables, I will want to use this option.

Here is the command that I will run from a prompt:

mysqldump --databases comicbookdb coupons scripts watchdb --single-transaction --set-gtid-purged=OFF --add-drop-database --triggers --routines --events --user=root --pass > partial_database_backup.sql

I will need to enter my password on the command line. After the backup has completed, if your backup file isn’t too large, you can open it and see the actual SQL statements that will be used if you decide that you need to recreate the database(s). If you accidentally dump all of the databases into one file, and you want to separate the dump file into smaller files, see my post on using Perl to split the dump file.
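If you don’t want to reach for Perl, an awk one-liner can also split an all-databases dump into one file per database, keyed on the “-- Current Database:” marker lines that mysqldump writes. This is only a sketch; to keep it self-contained, a tiny two-database sample dump is created first, and in practice you would point the awk command at your real dump file:

```shell
# build a small stand-in for an all-databases dump file
printf '%s\n' \
  '-- dump header' \
  '-- Current Database: `comicbookdb`' \
  'CREATE DATABASE comicbookdb;' \
  '-- Current Database: `coupons`' \
  'CREATE DATABASE coupons;' > all_databases.sql

# every time a "-- Current Database:" marker appears, switch the output
# file; lines before the first marker (the dump header) are skipped
awk '/^-- Current Database: `/ {
         gsub(/`/, "", $4)        # strip the backticks around the database name
         out = $4 ".sql"          # e.g. comicbookdb.sql
     }
     out != "" { print > out }' all_databases.sql
```

After it runs, each database’s section of the dump (including its CREATE DATABASE statement) sits in its own .sql file.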

For example, here is the section of the dump file (partial_database_backup.sql) for the comicbookdb database (without the table definitions). (I omitted the headers from the dump file.)

--
-- Current Database: `comicbookdb`
--

/*!40000 DROP DATABASE IF EXISTS `comicbookdb`*/;

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `comicbookdb` /*!40100 DEFAULT CHARACTER SET latin1 */;

USE `comicbookdb`;

--
-- Table structure for table `comics`
--

DROP TABLE IF EXISTS `comics`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `comics` (
  `serial_id` int(7) NOT NULL AUTO_INCREMENT,
  `date_time_added` datetime NOT NULL,
  `publisher_id` int(6) NOT NULL,
....

If you are using the dump file to create a slave server, you can use the --master-data option, which includes the CHANGE MASTER information, which looks like this:

--
-- Position to start replication or point-in-time recovery from
--

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000013', MASTER_LOG_POS=79338;

If you had not used the --set-gtid-purged=OFF option, you would also see the value of the Global Transaction Identifiers (GTIDs) in the dump:

--
-- GTID state at the beginning of the backup
--

SET @@GLOBAL.GTID_PURGED='82F20158-5A16-11E2-88F9-C4A801092ABB:1-168523';

You may also test your backup without exporting any data by using the --no-data option. This will show you all of the information for creating the databases and tables, but it will not export any data. This is also useful for recreating a blank database on the same or on another server.

When you export your data, mysqldump will create INSERT INTO statements to import the data into the tables. However, the default is for the INSERT INTO statements to contain multiple-row INSERT syntax that includes several VALUES lists. This allows for a quicker import of the data. But, if you think that your data might be corrupt, and you want to be able to isolate a given row of data – or if you simply want to have one INSERT INTO statement per row of data – then you can use the --skip-extended-insert option. If you use the --skip-extended-insert option, importing the data will take much longer to complete, and the backup file size will be larger.
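A quick way to check which INSERT style an existing dump used is to count the INSERT statements and compare that number against your row counts: far fewer INSERT lines than rows means multiple-row syntax. A minimal sketch, using a one-line sample file as a stand-in for a real dump:

```shell
# a single multiple-row INSERT, standing in for a real dump file
printf '%s\n' "INSERT INTO comics VALUES (1,'a'),(2,'b'),(3,'c');" > sample_dump.sql

# count the INSERT statements in the dump
grep -c '^INSERT INTO' sample_dump.sql
# prints 1 (one INSERT carrying three rows)
```

With --skip-extended-insert, the same three rows would produce a count of three.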

Importing and restoring the data is easy. To import the backup file into a new, blank instance of MySQL, you can simply use the mysql command to import the data:

mysql -uroot -p < partial_database_backup.sql

Again, you will need to enter your password or you can include the value after the -p option (less secure).

There are many more options that you can use with mysqldump. The main thing to remember is that you should back up your data on a regular basis, and move a copy of the backup file off the MySQL server.
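Moving the copy off the server can be sketched like this. The host name (backuphost), user (backupuser), and paths are placeholders, and here a local cp stands in for the network copy so the sketch is self-contained; in practice you would use scp or rsync (e.g. scp partial_database_backup.sql backupuser@backuphost:/backups/mysql/) and run cksum on the remote side over ssh:

```shell
# stand-in for the dump file produced by mysqldump
printf 'sample dump contents\n' > partial_database_backup.sql

# "transfer" the file (replace cp with scp/rsync to a backup host)
cp partial_database_backup.sql partial_database_backup.sql.offsite

# verify the copy's checksum before trusting it
src_sum=$(cksum partial_database_backup.sql | awk '{print $1}')
dst_sum=$(cksum partial_database_backup.sql.offsite | awk '{print $1}')
[ "$src_sum" = "$dst_sum" ] && echo "offsite copy verified"
```

Verifying the checksum before rotating out older backups guards against a truncated transfer.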

Finally, here is a Perl script that I use in cron to backup my databases. This script allows you to specify which databases you want to backup via the mysql_bak.config file. This config file is simply a list of the databases that you want to backup, with an option to ignore any databases that are commented out with a #. This isn’t a secure script, as you have to embed the MySQL user password in the script.

#!/usr/bin/perl
# Perform a mysqldump on all the databases specified in the mysql_bak.config file

use warnings;
use File::Basename;

# set the directory where you will keep the backup files
$backup_folder = '/Users/tonydarnell/mysqlbak';

# the config file is a text file with a list of the databases to backup
# this should be in the same location as this script, but you can modify this
# if you want to put the file somewhere else
my $config_file = dirname($0) . "/mysql_bak.config";

# example config file
# You may use a comment to bypass any database that you don't want to backup
# # Unwanted_DB    (commented - will not be backed up)
# twtr
# cbgc

# retrieve a list of the databases from the config file
my @databases = removeComments(getFileContents($config_file));

# change to the directory of the backup files.
chdir($backup_folder) or die("Cannot go to folder '$backup_folder'");

# grab the local time variables
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
$year += 1900;
$mon++;
#Zero padding
$mday = '0'.$mday if ($mday<10);
$mon = '0'.$mon if ($mon<10);

$hour = "0$hour" if $hour < 10;
$min = "0$min" if $min < 10;

# loop through the list of databases, dump each one to its own
# time-stamped file, and compress the dump
# NOTE: change the mysqldump path, user and password to match your installation
my $folder = $backup_folder;
foreach my $database (@databases) {
	chomp $database;
	my $file = $database . "_" . $year . $mon . $mday . "_" . $hour . $min . ".sql";
	print "Backing up $database to $file... ";
	`/usr/local/mysql/bin/mysqldump --user=root --password=mypassword --single-transaction $database > $folder/$file`;
	# compress(1) appends a .Z suffix, producing $folder/$file.Z
	`compress $folder/$file`;

	print "Done\n";
}
print "Done\n\n";

# this subroutine simply creates an array of the list of the databases

sub getFileContents {
	my $file = shift;
	open (FILE,$file) || die("Can't open '$file': $!");
	my @lines=<FILE>;
	close(FILE);

	return @lines;
}

# remove any commented tables from the @lines array

sub removeComments {
	my @lines = @_;

	@cleaned = grep(!/^\s*#/, @lines); #Remove Comments
	@cleaned = grep(!/^\s*$/, @cleaned); #Remove Empty lines

	return @cleaned;
}
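To run the script from cron as mentioned above, an entry along these lines would work. The schedule and both paths are examples; point them at wherever you keep the script and want the log:

```
# hypothetical crontab entry: run the backup script nightly at 2:30 AM
# and append any output to a log file
30 2 * * * /Users/tonydarnell/scripts/mysql_bak.pl >> /Users/tonydarnell/mysqlbak/backup.log 2>&1
```

Redirecting stderr (2>&1) into the log captures any mysqldump errors as well.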

 



MySQL Replication – Creating a New Master/Slave Topology with or without Virtual Machines

In my last few posts, I wrote about “How to install MySQL replication using GTID’s” (Part One, Part Two). In this post, I will show you how to install MySQL 5.6 and set up replication between two MySQL servers the “old fashioned way” using the binary log and binary log position.

I am going to create some virtual machines instead of using individual servers. But, you can also use these instructions to create a MySQL replication (master/slave) setup with real servers.

Here is how replication works. On the master server, when there are updates (inserts, updates, deletes, alter, etc.) to the database, MySQL will write the appropriate information to the binlog (binary log), depending upon which replication method you choose.

From: http://dev.mysql.com/doc/refman/5.6/en/binary-log.html

The binary log contains “events” that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes (for example, a DELETE which matched no rows), unless row-based logging is used. The binary log also contains information about how long each statement took that updated data. The binary log has two important purposes:

For replication, the binary log on a master replication server provides a record of the data changes to be sent to slave servers. The master server sends the events contained in its binary log to its slaves, which execute those events to make the same data changes that were made on the master. See Section 16.2, “Replication Implementation“.

Certain data recovery operations require use of the binary log. After a backup has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup. See Section 7.5, “Point-in-Time (Incremental) Recovery Using the Binary Log“.

The binary log is not used for statements such as SELECT or SHOW that do not modify data. To log all statements (for example, to identify a problem query), use the general query log. See Section 5.2.3, “The General Query Log“.

Running a server with binary logging enabled makes performance slightly slower. However, the benefits of the binary log in enabling you to set up replication and for restore operations generally outweigh this minor performance decrement.

From: http://dev.mysql.com/doc/refman/5.6/en/replication-formats.html

Replication works because events written to the binary log are read from the master and then processed on the slave. The events are recorded within the binary log in different formats according to the type of event. The different replication formats used correspond to the binary logging format used when the events were recorded in the master’s binary log. The correlation between binary logging formats and the terms used during replication are:

  • Replication capabilities in MySQL originally were based on propagation of SQL statements from master to slave. This is called statement-based replication (often abbreviated as SBR), which corresponds to the standard statement-based binary logging format. In older versions of MySQL (5.1.4 and earlier), binary logging and replication used this format exclusively.
  • Row-based binary logging logs changes in individual table rows. When used with MySQL replication, this is known as row-based replication (often abbreviated as RBR). In row-based replication, the master writes events to the binary log that indicate how individual table rows are changed.
  • The server can change the binary logging format in real time according to the type of event using mixed-format logging. When the mixed format is in effect, statement-based logging is used by default, but automatically switches to row-based logging in particular cases as described later. Replication using the mixed format is often referred to as mixed-based replication or mixed-format replication. For more information, see Section 5.2.4.3, “Mixed Binary Logging Format”.

Once a slave has the binlog and binlog position, the slave will connect to the master and retrieve all of the binlog entries from the specified binlog file and after a specified binlog position. The slave creates a thread (IO thread) that connects to the master server. The master server then creates a binlog dump thread, and sends the data to the slave’s IO thread. The slave will in effect retrieve the data that was written to the specified binary log starting after a specific binlog position and then write it to the slave’s relay log (see IO thread states). The slave will then take the data from the relay log and apply it to the slave’s database via the SQL thread. If this isn’t confusing enough, maybe this poorly drawn diagram will help you understand the steps in replication:

For this example, I am starting with a fresh install on one virtual machine running Mac OS X 10.8. MySQL doesn’t have a version specific to 10.8, but I found that the version for 10.7 will work. I am not going to cover how to install MySQL, but I will show you what you need to change in order to make MySQL replication work.

MySQL uses a configuration file (my.cnf, or my.ini on Windows) for all of the database variables that will be read by MySQL at startup. After I install MySQL, but before I start it, I need to choose a copy of the my.cnf file to use, and then modify that file to match my specific use of MySQL. For this example, I am going to copy the my.cnf file in my MySQL home directory to /etc/my.cnf and then edit the /etc/my.cnf file. There are quite a few variables in this file that you will need to change or add. Since the version of the my.cnf file that I copied is mostly blank, here is what I have in my my.cnf file (these are not all of the variables that you might need – just the basic ones for this example):

[mysqld_safe]
socket = /tmp/mysql.sock

[client]
port=3306

[mysqld]
port = 3306
user = mysql
tmpdir = /tmp
socket = /tmp/mysql.sock
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
log_error = /usr/local/mysql/error.log

Since InnoDB is the default storage engine for MySQL (as of 5.5), I want to provide some InnoDB-specific variables:

innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:25M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data

innodb_log_files_in_group = 2
innodb_log_file_size = 25M
innodb_buffer_pool_size = 16M

And then to enable replication, MySQL will need to write all of the insert, update, delete, etc. statements to the binary log (binlog); SELECT statements, which do not modify data, are not written to it. I also need to choose the binlog format. I will use “mixed”, which allows MySQL to determine whether to use row-based or statement-based binlog format for each individual SQL statement. The slave will retrieve these events from the master and then write them to the slave’s relay log before applying them to the slave database.

log-bin=mysql-bin
binlog_format=mixed

I need to give each server a unique ID. I always give my master server the id of one (1), and then for the slaves, I assign sequential server-id values starting at two.

server-id = 1

If you want to possibly use the slave for failover (where you promote the slave to be the master), then you will need to log the updates that are on the slave to the slave’s binary log as well. This will allow any other slaves to use this server as a master. Even though this is a master, if it ever becomes a slave I might want to re-promote it to master at a future date.

log-slave-updates

And in order to enable auto crash recovery on the slaves, enable:

relay-log-recovery

You may now run the installation package for the version of MySQL that you are installing. After the installation, you will want to execute the mysql_install_db script to initialize the data directory and the grant tables. You can also refer to the post-installation procedures on the MySQL web site. From the MySQL base directory, run the script:

root@macserver01: # ./scripts/mysql_install_db
Installing MySQL system tables...OK

Filling help tables...OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

  ./bin/mysqladmin -u root password 'new-password'
  ./bin/mysqladmin -u root -h macserver01 password 'new-password'

Alternatively you can run:

  ./bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:

  cd . ; ./bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl

  cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the ./bin/mysqlbug script!

The latest information about MySQL is available on the web at

http://www.mysql.com

Support MySQL by buying support/licenses at http://shop.mysql.com

WARNING: Found existing config file ./my.cnf on the system.
Because this file might be in use, it was not replaced,
but was used in bootstrap (unless you used --defaults-file)
and when you later start the server.
The new default config file was created as ./my-new.cnf,
please compare it with your file and take the changes you need.

WARNING: Default config file /etc/my.cnf exists on the system
This file will be read by default by the MySQL server
If you do not want to use this, either remove it, or use the
--defaults-file argument to mysqld_safe when starting the server

If you ran this script as root, you will need to change the ownership of the mysql-bin and mysql-bin.index files in the mysql data directory to the mysql Unix user.

Now you can start the MySQL server (if it isn’t already started). When you executed the mysql_install_db script, it created the grant tables. You are going to want to change the root password and delete any anonymous accounts. See Securing the Initial MySQL Accounts for specific information for your operating system.

An easy way to change the root password is to use mysqladmin from a command prompt:

$ ./bin/mysqladmin -u root password 'new-password'

Right after you change the root password, you will want to test the new root password by logging in with mysql as root at a Unix prompt:

root@macserver01: $ mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2259
Server version: 5.6.9-rc-log MySQL Community Server (GPL)

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Running the mysqladmin program (above) only changed the root password for the localhost account:

mysql> select host, user, password from user;
+-----------+------+-------------------------------------------+
| host      | user | password                                  |
+-----------+------+-------------------------------------------+
| localhost | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| 127.0.0.1 | root |                                           |
| ::1       | root |                                           |
| localhost |      |                                           |
+-----------+------+-------------------------------------------+
4 rows in set (0.00 sec)

Now that you have logged in, you can change your password for all of your root accounts:

mysql> UPDATE mysql.user SET Password=PASSWORD('new-password') WHERE User='root';
Query OK, 2 rows affected (0.00 sec)
Rows matched: 3  Changed: 2  Warnings: 0

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> select host, user, password from user;
+-----------+------+-------------------------------------------+
| host      | user | password                                  |
+-----------+------+-------------------------------------------+
| localhost | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| 127.0.0.1 | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| ::1       | root | *8B7D321C58724D1990BB8DE02FBD22FE19DB0D0A |
| localhost |      |                                           |
+-----------+------+-------------------------------------------+
4 rows in set (0.00 sec)

To find and delete the anonymous accounts, you can find a list of all of the accounts. From a mysql prompt:

mysql> use mysql;
Database changed
mysql> SELECT user, host FROM user;
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | ::1       |
|      | localhost |
| root | localhost |
+------+-----------+
4 rows in set (0.00 sec)

The users that are blank are anonymous users. You can double-check the blank users with this statement:

mysql> select user, host from user where user = '';
+------+-----------+
| user | host      |
+------+-----------+
|      | localhost |
+------+-----------+
1 row in set (0.00 sec)

You may now delete the blank users:

mysql> delete from user where user = '';
Query OK, 1 row affected (0.00 sec)

These are the users that are remaining:

mysql> select user, host from user;
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | ::1       |
| root | localhost |
+------+-----------+
3 rows in set (0.00 sec)

Since this is the first installation of MySQL on your master, you will need to create a user for replication. See Creating a User for Replication for more details, but here is a sample statement for creating a replication user:

mysql> CREATE USER 'replicate'@'%.mydomain.com' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'%.mydomain.com';

mysql> select user, host from user;
+-----------+-----------+
| user      | host      |
+-----------+-----------+
| replicate | %         |
| root      | 127.0.0.1 |
| root      | ::1       |
| root      | localhost |
+-----------+-----------+
4 rows in set (0.00 sec)

You will need to make sure that the host part of your replication user matches the domain names or IP addresses of your slave server(s), so that they have permission to connect to the master.

MySQL should now be up and running on your master server. If you aren’t using VMs, you may now repeat these same installation steps on your slave server(s) – but you must change the value of server-id in the my.cnf file of each server to something other than the value on your master server.
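As an illustration, the replication-related lines of a slave’s my.cnf might look like the fragment below. The values shown are examples, not requirements – the important point is that server-id must be different on every server:

```
[mysqld]
log-bin   = mysql-bin   # binary logging - required on the master
server-id = 2           # example value; the master might use 1, this slave 2
```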

If you are working with virtual machines, you will need to:

  • Stop the mysqld process
  • Stop the virtual machine (shutdown the instance)
  • Copy/duplicate the virtual machine
  • Change the IP address of the new virtual machine
  • Change the server-id in the my.cnf file of the new virtual machine
  • Change the server name used for file sharing (if file sharing is turned on)
  • Generate a new UUID for each new slave server and edit its auto.cnf file (more on this below)

Beginning with MySQL 5.6, the MySQL server generates a unique ID (UUID) in addition to the server-id supplied by the user. This is available as the global, read-only variable server_uuid. If you aren’t using VMs, a new UUID is generated automatically when you install MySQL. On the new VM copies, we will need to generate a new UUID and then edit the auto.cnf file. You can run the select UUID(); statement on the master server to generate one:

mysql> select UUID();
+--------------------------------------+
| UUID()                               |
+--------------------------------------+
| 1da4ab9c-7baf-11e2-930f-6a4c3f56f0b5 |
+--------------------------------------+
1 row in set (0.00 sec)

Next, edit the auto.cnf file that is in the MySQL home data directory:

# cat auto.cnf 
[auto]
server-uuid=1da4ab9c-7baf-11e2-930f-6a4c3f56f0b5

Edit the auto.cnf file with a text editor (or use vi) and change the old UUID to the new UUID (example – change 33e3daac-79e5-11e2-9862-ec1bc27a1e29 to 1da4ab9c-7baf-11e2-930f-6a4c3f56f0b5).
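If you prefer to script this step, the substitution can also be done with sed. This is a minimal sketch – the data directory path and both UUIDs below are the example values from this post; substitute your own:

```shell
#!/bin/sh
# rotate_uuid DATADIR OLD_UUID NEW_UUID
# Replaces the server-uuid line in DATADIR/auto.cnf in place,
# keeping a .bak copy of the original file.
rotate_uuid() {
  sed -i.bak "s/server-uuid=$2/server-uuid=$3/" "$1/auto.cnf"
}

# Example invocation (values are illustrative - use your own):
# rotate_uuid /usr/local/mysql/data \
#   33e3daac-79e5-11e2-9862-ec1bc27a1e29 \
#   1da4ab9c-7baf-11e2-930f-6a4c3f56f0b5
```

Remember that mysqld must be stopped when you edit auto.cnf, or it will overwrite your change on shutdown.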

After you have installed MySQL on the new slave or copied the original VM and repeated the steps above, you can check that your servers have unique server-id and server_uuid values. Login to each instance of mysql:

# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.10-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use mysql;
Database changed
mysql> show variables where variable_name = 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
1 row in set (0.00 sec)

mysql> show variables where variable_name = 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | 33e3daac-79e5-11e2-9862-ec1bc27a1e29 |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

Now that we have our servers up and running, we need to tell our slave server(s) about the master server. If we have already started inserting data into the master, we need to put a read lock on the master, show the master status to get the master’s binlog file and position, and then release the lock. You may put all three statements on one line, separated by semicolons, to reduce the amount of time that the lock is in place:

mysql> FLUSH TABLES WITH READ LOCK;SHOW MASTER STATUS;UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000015 |      540 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

It is important that you save the SHOW MASTER STATUS information, as you will need this for each slave.
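If you are scripting the setup, one way to capture those two values is to run the statements non-interactively with mysql -e and pull the File and Position columns out of the tab-separated output with awk. A sketch, with placeholder credentials:

```shell
#!/bin/sh
# mysql -e prints a header row followed by the data row;
# row 2, fields 1 and 2 are the binlog file and position.
parse_master_status() {
  awk 'NR == 2 { print $1, $2 }'
}

# Example (credentials are placeholders):
# mysql -uroot -p -e 'FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS; UNLOCK TABLES;' \
#   | parse_master_status
```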

Now we need to tell the slave where the master is located, which binlog file to use, and at which position to start. Issue this CHANGE MASTER TO command on the slave server(s) (don’t forget to change the values to match your master server):

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='master IP address',
    ->   MASTER_USER='replication user',
    ->   MASTER_PASSWORD='replication user password',
    ->   MASTER_PORT=3306,
    ->   MASTER_LOG_FILE='mysql-bin.000015',
    ->   MASTER_LOG_POS=540,
    ->   MASTER_CONNECT_RETRY=10;
Query OK, 0 rows affected, 2 warnings (0.27 sec)

We have two warnings from the above statement. Let’s look at the warnings:

mysql> show warnings\G
*************************** 1. row ***************************
  Level: Note
   Code: 1759
Message: Sending passwords in plain text without SSL/TLS is extremely insecure.
*************************** 2. row ***************************
  Level: Note
   Code: 1760
Message: Storing MySQL user name or password information in the master.info repository 
is not secure and is therefore not recommended. Please see the MySQL Manual for more 
about this issue and possible alternatives.
2 rows in set (0.00 sec)

The first warning – “Sending passwords in plain text…” – can be ignored. Since we are setting up replication using this method, we have to send the user name and password as plain text. The second warning explains that the information in the CHANGE MASTER TO statement is stored in a non-secure file named master.info in your MySQL data directory:

# pwd
/usr/local/mysql/data
# ls -l master.info
-rw-rw----  1 _mysql  wheel  99 Feb 20 15:26 master.info
# cat master.info
23
mysql-bin.000015
540
192.168.1.2
replicate
replicate999
3306
10
....

There are options to not using the master.info file: “MySQL 5.6 extends the replication START SLAVE command to enable DBAs to specify master user and password as part of the replication slave options and to authenticate the account used to connect to the master through an external authentication plugin (user defined or those provided under MySQL Enterprise Edition). With these options the user and password no longer need to be exposed in plain text in the master.info file.” (from https://blogs.oracle.com/MySQL/entry/mysql_5_6_is_a)
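For example, with MySQL 5.6 you can omit MASTER_USER and MASTER_PASSWORD from the CHANGE MASTER TO statement and instead supply the credentials when starting the slave, so they are never written to master.info. A sketch – the user name and password are placeholders:

mysql> START SLAVE USER='replicate' PASSWORD='password';

The trade-off is that you must supply the credentials again each time the slave is restarted.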

For this example, we don’t need to worry about these warnings. To begin replication, we need to start the slave:

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

We now can check the status of the slave to see if it is working as a slave, with the SHOW SLAVE STATUS\G command:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.181
                  Master_User: replicate
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: mysql-bin.000015
          Read_Master_Log_Pos: 540
               Relay_Log_File: macos-108-repl02-relay-bin.000002
                Relay_Log_Pos: 1551
        Relay_Master_Log_File: mysql-bin.000015
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 540
              Relay_Log_Space: 1735
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: 33e3daac-79e5-11e2-9862-ec1bc27a1e29
             Master_Info_File: /usr/local/mysql-advanced-5.6.10-osx10.7-x86_64/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
1 row in set (0.00 sec)

Two values in the slave status show us that our CHANGE MASTER TO statement worked:

              Master_Log_File: mysql-bin.000015
          Read_Master_Log_Pos: 540

We can now execute a statement on the master, to see if it propagates to the slave database. Let’s see what databases are on the master:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.01 sec)

When we execute the same command on the slave, we get the same results. Since we already have a test database, let’s create a table in that database. We can check to see if there are any tables in that database already:

mysql> use test;
Database changed
mysql> show tables;
Empty set (0.00 sec)

Currently there aren’t any tables in the test database. We can now create one on the master database:

mysql> CREATE TABLE `address` (
    ->   `serial_number` int(6) NOT NULL AUTO_INCREMENT,
    ->   `last_name` char(40) NOT NULL DEFAULT '',
    ->   `first_name` char(40) NOT NULL DEFAULT '',
    ->   `address_01` char(40) NOT NULL DEFAULT '',
    ->   `city` char(30) NOT NULL DEFAULT '',
    ->   `state` char(20) NOT NULL DEFAULT '',
    ->   `zip` char(11) NOT NULL DEFAULT '',
    ->   `phone` char(15) NOT NULL DEFAULT '',
    ->   PRIMARY KEY (`serial_number`)
    -> ) ENGINE=InnoDB AUTO_INCREMENT=100000 DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.07 sec)

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| address        |
+----------------+
1 row in set (0.00 sec)

And let’s take a look at the new master status:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000015 |     1808 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

So, if we have set up replication correctly, we should see this table on the slave as well. Let’s execute the same SHOW TABLES statement on the slave:

mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| address        |
+----------------+
1 row in set (0.00 sec)

Replication is now up and running. You may continue these steps for additional slaves that you install manually or slaves where you copy the VM.
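For the additional slaves, a small script can confirm that both replication threads are running by checking the \G output of SHOW SLAVE STATUS. A sketch, with placeholder credentials:

```shell
#!/bin/sh
# Succeeds (returns 0) only when both the IO thread and the SQL
# thread report "Yes" in SHOW SLAVE STATUS\G output read from stdin.
check_slave_threads() {
  status=$(cat)
  printf '%s\n' "$status" | grep -q 'Slave_IO_Running: Yes' || return 1
  printf '%s\n' "$status" | grep -q 'Slave_SQL_Running: Yes' || return 1
}

# Example (credentials are placeholders):
# if mysql -uroot -p -e 'SHOW SLAVE STATUS\G' | check_slave_threads; then
#   echo "replication threads are running"
# fi
```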

I can also use the MySQL Utility script mysqldbcompare to see if both tables are the same. (I wrote about mysqldbcompare in an earlier post.) On the master database, I will create a user named “scripts” to execute the mysqldbcompare script. I don’t need to create the user on the slave: when I execute this statement on the master, it will be replicated over to the slave.

mysql> CREATE USER 'scripts'@'%' IDENTIFIED BY 'scripts999';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'scripts'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

I can now run the mysqldbcompare script:

$ mysqldbcompare --server1=scripts:scripts999@192.168.1.181 --server2=scripts:scripts999@192.168.1.182 test:test --run-all-tests --changes-for=server2 --difftype=sql
# server1 on 192.168.1.181: ... connected.
# server2 on 192.168.1.182: ... connected.
# Checking databases test on server1 and test on server2
#
#                                                   Defn    Row     Data   
# Type      Object Name                             Diff    Count   Check  
# ------------------------------------------------------------------------- 
# TABLE     address                                 pass    pass    pass   

Databases are consistent.
#
# ...done

The SQL statement that I executed on the master to create the table “address” has been replicated on the slave, so replication is running and confirmed to be working.


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.