Upgrade MySQL to a new version with a fresh installation & use shell scripts and mysqldump to reload your data

There are several ways to upgrade MySQL. In this post, we will use a combination of shell scripts and the mysqldump application to export our MySQL data, and then re-import it back into the upgraded version of MySQL.

In this example, we will be doing a minor version upgrade. We will be going from 5.6.17 to 5.6.19. This method may not work if you are upgrading from one major release to another – from 5.1 to 5.5, or 5.5 to 5.6. You will want to check each version and review the new features/functions and also what features/functions have been deprecated. We are also assuming that no one will be using the database during the time it takes for us to do the upgrade.

If you want to upgrade from a version that is more than one major release apart from your current version, then you will want to upgrade to each successive version. For example, if you want to upgrade from 5.0 to 5.6, you will want to upgrade from 5.0 to 5.1, then 5.1 to 5.5, and then 5.5 to 5.6.

You don’t have to export all of your data when you upgrade MySQL. There are ways of upgrading without doing anything to your data. But in this post, I will be exporting the data and re-importing it, for a fresh installation. I don’t have that much data, so I don’t mind doing the export and import. If you have a lot of data, you might want to consider other options.

To get an idea of the size of your database(s), here is a quick query that you can use:

SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" 
FROM information_schema.TABLES GROUP BY table_schema ; 

When I perform an export/import, I like to export each database as a separate mysqldump file, and then also export all of the databases together in one large file. By exporting/importing the individual databases, if you have an error importing one of the database dump files, you can isolate the error to a single database. It is much easier to fix the error in one smaller data dump file than with a larger all-inclusive dump file.

I am also going to create some simple shell scripts to help me create the commands that I need to make this task much easier. First, you will want to create a directory to store all of the scripts and dump files. Do all of your work inside that directory.

Next, I want to get a list of all of my databases. I will log into mysql, and then issue the show databases; command (which is the same as: select schema_name from information_schema.schemata;):

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| 12thmedia          |
| cbgc               |
| comicbookdb        |
| coupons            |
| healthcheck        |
| innodb_memcache    |
| landwatch          |
| laurelsprings      |
| ls_directory       |
| mem                |
| mysql              |
| performance_schema |
| protech            |
| scripts            |
| stacy              |
| storelist          |
| test               |
| testcert           |
| tony               |
| twtr               |
| watchdb            |
+--------------------+
22 rows in set (1.08 sec)

I can then just highlight and copy the list of databases, and put that list into a text file named list.txt. I do not want to include these databases in my export:

information_schema
mysql
performance_schema
test

However, I will export the mysql database later. You will need to check the MySQL manual to make sure that there haven’t been any changes to the mysql database’s tables from one version to the next.

I will need to manually remove those databases from my list.txt file. I then want to remove all of the spaces and pipe symbols from the text file – assuming that you do not have any spaces in your database names. Instead of using spaces in a database name, I prefer to use an underscore character “_”. These scripts assume that you don’t have any spaces in your database names.

If you know how to use the vi editor, you can do a substitution for the pipes and spaces with these commands:

:%s/ //g
:%s/|//g
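If you prefer to stay in the shell, sed can do the same cleanup. This is just a sketch: the printf line creates a small two-row sample of the pasted output (your real list.txt will have all of your databases), and the sed command strips the pipe symbols and spaces in place:

```shell
# create a scratch list.txt with two sample rows of the pasted output
printf '| 12thmedia          |\n| cbgc               |\n' > list.txt

# strip the pipe symbols and the spaces in place (a .bak backup copy is kept)
sed -i.bak -e 's/|//g' -e 's/ //g' list.txt

cat list.txt
```

After this runs, list.txt contains one bare database name per line.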

Otherwise, you will want to use another text editor and manually edit the list to remove the spaces and pipe symbols. Your finished list.txt file should look like this:

12thmedia
cbgc
comicbookdb
coupons
healthcheck
innodb_memcache
landwatch
laurelsprings
ls_directory
mem
protech
scripts
stacy
storelist
testcert
tony
twtr
watchdb
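You can also skip the copy-and-paste step entirely and have mysql build list.txt for you. A sketch, assuming the mysql client is on your path and my_pass is your root password: -N suppresses the column header, -B prints one bare database name per line, and grep drops the system schemas we don’t want to export:

```shell
mysql -uroot --password=my_pass -N -B -e 'SHOW DATABASES' \
  | grep -Ev '^(information_schema|mysql|performance_schema|test)$' \
  > list.txt
```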

You can then create a simple shell script to help create your mysqldump commands – one command for each database. You will want to create this script and the other scripts in the directory you created earlier. Name the script export.sh. You can also change the mysqldump options to meet your needs. I am using GTIDs for replication, so I want to use the option --set-gtid-purged=OFF. You will also want to change the value of my password my_pass to your mysql password. You can also skip putting the password in the script by using the -p option instead, and just enter the password each time you run the mysqldump command.

# export.sh
# script to create the database export commands
k=""
for i in `cat list.txt`
do

echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $i > ${i}_backup.sql"

k="$k $i"

done

# Optional - export all of the databases together in one file
# use the file extension of .txt so that your import script won't pick it up later
echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $k > all_db_backup.txt"

For the individual databases, I am using the file extension of .sql. For the dump file that contains all of the databases, I am using the extension of .txt – as I use a wildcard search later to get a list of the dump files, and I don’t want to import the one dump file that contains all of the databases.

Now you can run the export.sh script to create a list of your mysqldump commands, and you are going to direct the output into another shell script named export_list.sh.

# sh export.sh > export_list.sh

We can now take a look at what is in the export_list.sh file:

# cat export_list.sh
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases cbgc > cbgc_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases comicbookdb > comicbookdb_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases coupons > coupons_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases healthcheck > healthcheck_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases innodb_memcache > innodb_memcache_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases landwatch > landwatch_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases laurelsprings > laurelsprings_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases ls_directory > ls_directory_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases mem > mem_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases protech > protech_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases scripts > scripts_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases stacy > stacy_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases storelist > storelist_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases testcert > testcert_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases tony > tony_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases twtr > twtr_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases watchdb > watchdb_backup.sql

mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases  12thmedia cbgc comicbookdb coupons healthcheck innodb_memcache landwatch laurelsprings ls_directory mem protech scripts stacy storelist testcert tony twtr watchdb > all_db_backup.txt

Now you have created a list of mysqldump commands that you can execute to dump all of your databases. You can now go ahead and execute your mysqldump commands by running the export_list.sh script:

# sh export_list.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

The message “Warning: Using a password on the command line interface can be insecure.” is shown because you included the value for --password. If you don’t want to put your password on the command line, just change that option to -p, and you will have to manually enter your MySQL root user’s password after each mysqldump command.
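Another way to avoid both the warning and the repeated password prompts is to put the credentials in a small options file and point the client tools at it. A sketch (the file name my_upgrade.cnf is just an example I made up); note that --defaults-extra-file must be the first option on the command line:

```shell
# write a client options file holding the credentials, readable only by you
cat > my_upgrade.cnf <<'EOF'
[client]
user=root
password=my_pass
EOF
chmod 600 my_upgrade.cnf

# mysqldump (and mysql) will then read the credentials from the file, e.g.:
# mysqldump --defaults-extra-file=my_upgrade.cnf --set-gtid-purged=OFF \
#           --triggers --quick --skip-opt --add-drop-database --create-options \
#           --databases cbgc > cbgc_backup.sql
```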

Here is a list of the dump files that were produced:

# ls -l
total 21424
-rw-r--r--  1 root  staff    26690 Aug  1 16:25 12thmedia_backup.sql
-rw-r--r--  1 root  staff  5455275 Aug  1 16:26 all_db_backup.txt
-rw-r--r--  1 root  staff  1746820 Aug  1 16:25 cbgc_backup.sql
-rw-r--r--  1 root  staff   492943 Aug  1 16:25 comicbookdb_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 coupons_backup.sql
-rw-r--r--  1 root  staff     3366 Aug  1 16:25 export_list.sh
-rw-r--r--  1 root  staff     1077 Aug  1 16:25 healthcheck_backup.sql
-rw-r--r--  1 root  staff     3429 Aug  1 16:25 innodb_memcache_backup.sql
-rw-r--r--  1 root  staff  1815839 Aug  1 16:25 landwatch_backup.sql
-rw-r--r--  1 root  staff   642965 Aug  1 16:25 laurelsprings_backup.sql
-rw-r--r--  1 root  staff   660254 Aug  1 16:25 ls_directory_backup.sql
-rw-r--r--  1 root  staff     1037 Aug  1 16:25 mem_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 protech_backup.sql
-rw-r--r--  1 root  staff     2889 Aug  1 16:25 scripts_backup.sql
-rw-r--r--  1 root  staff    11107 Aug  1 16:25 stacy_backup.sql
-rw-r--r--  1 root  staff     4002 Aug  1 16:25 storelist_backup.sql
-rw-r--r--  1 root  staff     1062 Aug  1 16:25 testcert_backup.sql
-rw-r--r--  1 root  staff     4467 Aug  1 16:25 tony_backup.sql
-rw-r--r--  1 root  staff     1042 Aug  1 16:25 twtr_backup.sql
-rw-r--r--  1 root  staff    52209 Aug  1 16:25 watchdb_backup.sql
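Before going any further, it is worth checking that each dump actually finished. With mysqldump’s default --comments setting, a dump that completed cleanly ends with a “-- Dump completed” line, so a quick loop can flag any file that is missing it (a sketch; check_dump is just a helper name I made up):

```shell
# a dump that finished cleanly ends with "-- Dump completed on <date>"
check_dump() {
  if tail -1 "$1" | grep -q 'Dump completed'
  then
    echo "$1: OK"
  else
    echo "$1: may be incomplete"
  fi
}

for f in *_backup.sql
do
  [ -e "$f" ] && check_dump "$f"
done
```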

You will now want to dump your mysql database, so you don’t have to recreate all of the MySQL user information, including the users, passwords and privileges, after the new install.

mysqldump -uroot --password=my_pass --set-gtid-purged=OFF mysql > mysql_user_backup.txt

I am once again using the .txt extension for this file, so that the import script will skip it later.

After you execute the above command, make sure that the dump file was created:

# ls -l mysql_user_backup.txt
-rw-r--r--  1 root  staff  9672 Aug  1 16:32 mysql_user_backup.txt

We have now finished exporting all of our data, including our mysql database data. You will need to shut down MySQL. You may use mysqladmin to shut down your database, or see the MySQL manual for other ways to shut down MySQL.

# mysqladmin -uroot --password=my_pass shutdown
Warning: Using a password on the command line interface can be insecure.

Before continuing, you might want to check to make sure that the mysqld process isn’t still active.

# ps -ef|grep mysqld
    0 18380 17762   0   0:00.00 ttys002    0:00.00 grep mysqld

You are now going to want to change the name of your mysql directory. This will give you access to the old directory in case the upgrade fails. For my OS (Mac OS 10.9), my MySQL home directory is a symbolic link to another directory that contains the actual MySQL data. All I have to do is to remove the symbolic link. A new symbolic link will be created with the new install. Otherwise, just use the mv command to rename your old MySQL directory.

# cd /usr/local/
# ls -ld mysql* 
lrwxr-xr-x   1 root  wheel   36 Aug  9  2013 mysql -> mysql-advanced-5.6.17-osx10.6-x86_64
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

All I have to do is to remove the link, and the MySQL directory will still be there:

# rm mysql
# ls -ld mysql* 
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

Now I am ready to install the new version of MySQL. I won’t cover the installation process here; see the MySQL manual’s installation instructions for your platform.

Tip: After you have installed MySQL, don’t forget to run this script from your MySQL home directory. This will install your mysql database tables. Otherwise, you will get an error when you try to start the mysqld process.

# ./scripts/mysql_install_db

Now you can start the mysqld process. See the MySQL manual if you don’t know how to start MySQL.

You can test to see if the new installation of MySQL is running by either checking the process table, or logging into mysql. With a fresh install of 5.6, you should not have to include a user name or password.

Note: Future versions of MySQL may automatically create a random root password and put it in your data directory. You will then need to use that password to log in to MySQL for the first time. Check the user’s manual for any MySQL versions beyond 5.6.

# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

Now that MySQL is up and running, leave the mysql terminal window open, and open another terminal window so you can import your mysql database information from the dump file you created earlier:

# mysql < mysql_user_backup.txt

You won't be able to log in with your old user names and passwords until you execute the flush privileges command. So, in your other terminal window with the mysql prompt:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Open another terminal window and see if you can log in with your old mysql user name and password:

# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

You can then look at the user names and passwords in the mysql.user table:

mysql> select user, host, password from mysql.user order by user, host;
+----------------+---------------+-------------------------------------------+
| user           | host          | password                                  |
+----------------+---------------+-------------------------------------------+
| root           | 127.0.0.1     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.2   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.5   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.50  | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | localhost     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
+----------------+---------------+-------------------------------------------+
5 rows in set (0.00 sec)


OPTIONAL:
Since I am using GTIDs for replication, I can check to see how many transactions have been completed, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000005
         Position: 644455
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: e1eb3f38-18da-11e4-aa44-0a1a64a61679:1-124
1 row in set (0.00 sec)

We are now ready to import the database dump files. We can use this script to create the import commands. Copy this into a text file named import.sh:

# import.sh
# script to create the database import commands
# run this script in the same directory as the exported dump files
#
> import_files.sh
for file in `ls *_backup.sql`
do

echo "mysql -uroot -pmy_pass < $file" | tee -a import_files.sh

done

Then run the import.sh script. The script will print the mysql commands to the terminal window, and also write them into a new script file named import_files.sh.

# sh import.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

Look at the contents of the new script file – import_files.sh – to make sure that it contains all of the database files. You will use this file to help you import your dump files.

# cat import_files.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

WARNING: Be sure that this script file does not contain the all-databases dump file or the mysql database dump file that we created earlier.


I was exporting and importing eighteen (18) database files, so I can also check the line count of the import_files.sh script to make sure it matches:

# wc -l import_files.sh
      18 import_files.sh

I am now ready to import my files.


Optional: add the -v for verbose mode – sh -v import_files.sh


# sh import_files.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

Your databases should now be imported into your new instance of MySQL. You can always re-run the database-size query from the beginning of this post to make sure that the databases are the same size.


OPTIONAL:
Since I am using GTIDs for replication, I can check to see how many transactions have been completed after importing the dump files, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000003
         Position: 16884001
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: cc68d008-18f3-11e4-aae6-470d6cf89709:1-43160
1 row in set (0.00 sec)

Your new and fresh installation of MySQL should be ready to use.

NOTE: A thank-you to Daniel Van Eeden for pointing out a mistake that I had made.

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Use MySQL to store NoSQL and SQL data in the same database using memcached and InnoDB

MySQL is a great relational database, but at some point someone (management) in your company is probably going to say that they need to use NoSQL to store their data. After all, NoSQL is one of the latest buzzwords, so it must be good (correct?). Basically, NoSQL allows you to store data without all of the characteristics of a relational database. A very simple explanation is that you are storing all of a data set with just one primary key, and the primary key is how you also retrieve the data. While NoSQL may be good in some cases, it is hard to beat “old-fashioned” SQL relational databases – especially if that is what you know. But, with MySQL and InnoDB, you can have the best of both worlds.

With MySQL version 5.6 (and above), you have the ability to store and retrieve NoSQL data, using NoSQL commands, while keeping the data inside a MySQL InnoDB database. So, you can use NoSQL and SQL at the same time, on the same data, stored in the same database. And the beauty is that it takes just a few minutes to set up. This post will provide you with a quick lesson on how to set up NoSQL on a MySQL InnoDB database.

I would suggest that you read this MySQL web page to get started – Getting Started with InnoDB Memcached Plugin. You should be able to follow this guide and have your NoSQL database up and running in no time at all.

NoSQL on MySQL uses memcached – a distributed memory object caching system. Currently, the memcached daemon plugin is only supported on Linux, Solaris, and Mac OS X platforms.

There are a few prerequisites. You must have the libevent 1.4.3 (or greater) libraries installed (they are not installed by the MySQL installer). Depending upon your operating system, you can use apt-get, yum, or port to install them. For example, on Ubuntu Linux:

sudo apt-get install libevent-dev

On the Mac, it takes a few more steps (I tested this with Mac OS 10.9 – Mavericks). To install libevent, you will need to install Xcode, the Xcode command line tools, and then Homebrew. You may install Xcode via the Apple App Store (and this is the most time-consuming part). Once Xcode is installed, from a command line type:

# xcode-select --install

This will prompt you to install the command-line tools. Then, you can easily install Homebrew via this command:

# ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Then install libevent via:

# brew install libevent

The libraries for memcached and the InnoDB plugin for memcached are put into the correct place by the MySQL installer. For a typical operation, the files lib/plugin/libmemcached.so and lib/plugin/innodb_engine.so are used.

You may check to make sure you have the libraries: (substitute $MYSQL_HOME for your mysql home directory)

# ls -l $MYSQL_HOME/lib/plugin/libmemcached.so
-rwxr-xr-x  1 mysql  wheel  195664 Mar 14 15:23 lib/plugin/libmemcached.so

# ls -l $MYSQL_HOME/lib/plugin/innodb_engine.so
-rwxr-xr-x  1 mysql  wheel  109056 Mar 14 15:23 lib/plugin/innodb_engine.so

To be able to use memcached so that it can interact with InnoDB tables, you will need to run a configuration script to install the tables necessary for use with memcached. This script is named innodb_memcached_config.sql, and it should be in your $MYSQL_HOME/share directory.

# cd $MYSQL_HOME
# ls -l share/innodb_memcached_config.sql
-rwxr-xr-x  1 mysql  wheel  3963 Mar 14 15:02 share/innodb_memcached_config.sql

To install the script, from the command line run:

# mysql -uroot -p < $MYSQL_HOME/share/innodb_memcached_config.sql

This script will install the three tables (cache_policies, config_options and containers) that it needs for the InnoDB/memcached mapping relationship, along with a sample table named demo_test, which is installed in the test database.

mysql> use innodb_memcache
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_innodb_memcache |
+---------------------------+
| cache_policies            |
| config_options            |
| containers                |
+---------------------------+
3 rows in set (0.01 sec)

mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| demo_test      |
+----------------+
1 row in set (0.00 sec)

The table that we need to use for the InnoDB mapping is the containers table. Here is the DESCRIBE statement for the containers table:

mysql> DESCRIBE innodb_memcache.containers;
+------------------------+--------------+------+-----+---------+-------+
| Field                  | Type         | Null | Key | Default | Extra |
+------------------------+--------------+------+-----+---------+-------+
| name                   | varchar(50)  | NO   | PRI | NULL    |       |
| db_schema              | varchar(250) | NO   |     | NULL    |       |
| db_table               | varchar(250) | NO   |     | NULL    |       |
| key_columns            | varchar(250) | NO   |     | NULL    |       |
| value_columns          | varchar(250) | YES  |     | NULL    |       |
| flags                  | varchar(250) | NO   |     | 0       |       |
| cas_column             | varchar(250) | YES  |     | NULL    |       |
| expire_time_column     | varchar(250) | YES  |     | NULL    |       |
| unique_idx_name_on_key | varchar(250) | NO   |     | NULL    |       |
+------------------------+--------------+------+-----+---------+-------+
9 rows in set (0.00 sec)

Here is some information about the container columns:

  • name: The name used to identify this collection of memcached data. If you have an entry named default, it will be the entry that is used; otherwise, the first entry in the containers table is used. You can also specify this name value in the NoSQL statement.
  • db_schema: The InnoDB database name that you will use to store the data.
  • db_table: The InnoDB database table name that you will use to store the data.
  • key_columns: The column that you will use for the key value lookup. This field only contains one column (despite the plural name of key_columns).
  • value_columns: Data will be pulled from and/or stored to these column/columns of data. You use a separator value (such as a pipe "|" symbol) to separate the columns. In the example database that is installed, you can store first name | last name which would pull data from both a firstname and lastname column.
  • flags: This column stores memcache flags, which is an integer that is used to mark rows for memcache operations.
  • cas_column and expire_time_column: These two columns are used for storing memcache compare-and-swap and expiration values. You can ignore these for now.
  • unique_idx_name_on_key: This is the name of the unique index that you will use for the key column, and you should use the primary key for the table. You should not use an auto-incrementing index, as you can’t insert into an auto-incrementing key.

NOTE: If you make any changes to the innodb_memcache.containers table, you will need to restart the plugin or restart mysqld.

The innodb_memcached_config.sql script inserted one line of data for us in the innodb_memcache.containers table:

mysql> select * from innodb_memcache.containers;
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| name | db_schema | db_table  | key_columns | value_columns | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| aaa  | test      | demo_test | c1          | c2            | c3    | c4         | c5                 | PRIMARY                |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
1 row in set (0.00 sec)

You can see the name aaa is mapped to the db_table (InnoDB table) demo_test. And the innodb_memcached_config.sql script also created this demo_test table for us to use, and it inserted one row of data. Here is the DESCRIBE statement for the demo_test table:

mysql> DESCRIBE test.demo_test;
+-------+---------------------+------+-----+---------+-------+
| Field | Type                | Null | Key | Default | Extra |
+-------+---------------------+------+-----+---------+-------+
| c1    | varchar(32)         | NO   | PRI |         |       |
| c2    | varchar(1024)       | YES  |     | NULL    |       |
| c3    | int(11)             | YES  |     | NULL    |       |
| c4    | bigint(20) unsigned | YES  |     | NULL    |       |
| c5    | int(11)             | YES  |     | NULL    |       |
+-------+---------------------+------+-----+---------+-------+
5 rows in set (0.00 sec)

And here is the row that was inserted:

mysql> select * from demo_test;
+----+--------------+------+------+------+
| c1 | c2           | c3   | c4   | c5   |
+----+--------------+------+------+------+
| AA | HELLO, HELLO |    8 |    0 |    0 |
+----+--------------+------+------+------+
1 row in set (0.00 sec)

Next you will need to install the memcached plugin. From a mysql prompt:

mysql> install plugin daemon_memcached soname "libmemcached.so";

Once the plugin is installed this way, it is automatically activated each time the MySQL server is booted or restarted.

To turn off the plugin, use this statement:

mysql> uninstall plugin daemon_memcached;

NOTE: You might need to restart mysqld before continuing.

Now we can start testing memcached with InnoDB. MySQL is now listening on the memcached port 11211. We could try writing some code, but the easiest way is to just telnet to port 11211 and issue some NoSQL commands. We only have one row in the innodb_memcache.containers table, and only one row in the test.demo_test table. For this example, we are only going to use two NoSQL commands – get and set. The MySQL manual has the full list of memcached commands.

Let’s telnet and use NoSQL to retrieve the data from the InnoDB demo_test table, which contained one row of data. The key for this one row is AA. We simply need to type get AA to retrieve the data:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get AA
VALUE AA 8 12
HELLO, HELLO
END
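If you would rather script these checks than type them into an interactive telnet session, you can pipe the commands through nc instead. A sketch, assuming nc (netcat) is installed and the plugin is listening on localhost; the memcached text protocol terminates each command line with \r\n:

```shell
# send a single "get" and then quit, without an interactive session
printf 'get AA\r\nquit\r\n' | nc localhost 11211
```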

We can now insert a line of data. This is done with the set command. The syntax for inserting data is:

  • set – this is the command to store a value
  • XX – this is the key for the value
  • # – this is the memcache flags value
  • # – this is the expiration TTL (time to live)
  • # – the length, in bytes, of the string that we will insert/store
  • ABCDEF – the value to insert/store
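The trickiest field is the length: it must be the exact byte count of the value you are about to send (not counting the trailing newline). If you don’t want to count characters by hand, the shell can do it for you; printf %s is used instead of echo so that no trailing newline is added to the count:

```shell
# compute the byte count to use in the "set" command
VALUE='Goodbye, Goodbye'
printf %s "$VALUE" | wc -c    # byte count of the value: 16
```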

In your telnet session, type this:

set BB 0 0 16
Goodbye, Goodbye
STORED

Now, let’s take a look at the InnoDB table, test.demo_test – and we can see our data:

mysql> select * from test.demo_test;
+----+------------------+------+------+------+
| c1 | c2               | c3   | c4   | c5   |
+----+------------------+------+------+------+
| AA | HELLO, HELLO     |    8 |    0 |    0 |
| BB | Goodbye, Goodbye |    0 |    1 |    0 |
+----+------------------+------+------+------+
2 rows in set (0.00 sec)

If we go back to the telnet session, we can also get the value for BB, which we inserted earlier via NoSQL:

get BB
VALUE BB 0 16
Goodbye, Goodbye
END

Let’s create a new table, define the relationship in the innodb_memcache.containers table, and insert (set) and read (get) some new data. Here is the CREATE TABLE statement for our new users table:

use test;
CREATE TABLE `users` (
 `user_id` varchar(32) NOT NULL DEFAULT '',
 `first` varchar(100) DEFAULT NULL,
 `last` varchar(100) DEFAULT NULL,
 `flags` int(11) DEFAULT '0',
 `cas` int(11) DEFAULT '0',
 `expiry` int(11) DEFAULT '0',
 PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And here is the SQL INSERT for innodb_memcache.containers table, to establish our relationship with memcached and InnoDB:

INSERT INTO `innodb_memcache`.`containers` 
(`name`, `db_schema`, `db_table`, `key_columns`, `value_columns`, `flags`, `cas_column`, `expire_time_column`, `unique_idx_name_on_key`)
VALUES ('default', 'test', 'users', 'user_id', 'first|last', 'flags','cas','expiry','PRIMARY');

NOTE: Since we changed the innodb_memcache.containers table, we will need to either restart mysqld or disable/enable the plugin (as shown above).

Now that we have restarted mysqld or the plugin, let’s insert some data into our new InnoDB table via NoSQL.

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
set jj1 0 0 13
James|Johnson
STORED
set jj2 0 0 12
John|Jackson
STORED

We can now use SQL to query the test.users table to see the InnoDB data we just stored via NoSQL:

mysql> select * from users;
+---------+-------+-----------+-------+------+--------+
| user_id | first | last      | flags | cas  | expiry |
+---------+-------+-----------+-------+------+--------+
| jj1     | James | Johnson   |     0 |    1 |      0 |
| jj2     | John  | Jackson   |     0 |    2 |      0 |
+---------+-------+-----------+-------+------+--------+
2 rows in set (0.00 sec)

Let’s insert some data into the test.users table via mysql, and then retrieve the data via NoSQL.

mysql> INSERT INTO test.users (user_id, first, last, flags, cas, expiry) VALUES ('bb1', 'Beth', 'Brown', '0', '3', '0');
Query OK, 1 row affected (0.00 sec)

Retrieve the data via NoSQL:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get bb1
VALUE bb1 0 10
Beth|Brown
END

We now have a way to use NoSQL via memcached and at the same time use good old-fashioned SQL statements on the same data via InnoDB and MySQL. It is the best of both worlds!

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

Using mysqldump and the MySQL binary log – a quick guide on how to backup and restore MySQL databases

Be sure to check out my other posts on mysqldump:
Scripting Backups of MySQL with Perl via mysqldump
Splitting a MySQL Dump File Into Smaller Files Via Perl
Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 1 of 2
Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 2 of 2


I have already written several posts on using mysqldump. This post will be a quick guide to using mysqldump to backup and restore your databases. And we will look at using the MySQL binary log (binlog) along with mysqldump for point-in-time recovery. If your database is extremely large, and for databases in a production environment, you might want to consider using MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition.

For those of you that aren’t familiar with mysqldump:

The mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

The best feature of mysqldump is that it is easy to use. The main drawback appears when you need to restore a database. The backup (output) that mysqldump produces is an SQL file containing all of the statements needed to restore the database, so restoring means executing those statements to rebuild the database. Because you are re-creating the database, its tables, and all of your data from this file, the restore can take a long time to execute if you have a very large database.


NOTE: If you are using GTIDs (global transaction identifiers) in your database, you will need to include the --set-gtid-purged=OFF option, otherwise you will receive this warning:

Warning: A partial dump from a server that has GTIDs will by default include the
GTIDs of all transactions, even those that changed suppressed parts of the database.
If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete 
dump, pass --all-databases --triggers --routines --events. 

For these examples, I will not include the --set-gtid-purged=OFF option.


Dumping and making a copy of a single database

To dump/backup a single database:

mysqldump -uroot -p database_name > db_dump.sql

To load the dump file back into mysql, you will need to create the new database first. If you use the --databases option before the database name, mysqldump will also dump the CREATE DATABASE and USE statements that you need prior to inserting the data from the dump.

You can either use mysqladmin to create the new database, or create it from a MySQL prompt:

# mysqladmin create new_database_name

mysql> CREATE DATABASE new_database_name;

Next, you can simply import the dump file into mysql.

# mysql new_database_name < db_dump.sql

You can also use the dump file to move the database to another server. If you did not use the --databases option, then you will need to create the database first.
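If you have several databases, you may want to dump each one to its own file, so that any import error can be isolated to a single database. Here is a dry-run sketch that only prints the commands it would run; db1 and db2 are hypothetical database names, and in practice you would build the list from SHOW DATABASES:

```shell
# Print one mysqldump command per database (dry run; nothing is executed).
for db in db1 db2; do
  echo "mysqldump -uroot -p --databases $db > ${db}_dump.sql"
done
```

Removing the echo (and supplying real database names) turns the sketch into an actual per-database backup loop.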

Dumping events, routines, triggers

Here are the mysqldump options for also dumping Event Scheduler events, stored procedures, and functions. If you want to include these, use these options:

--routines – dumps stored procedures and functions
--events – dumps Event Scheduler events
--triggers – dumps triggers

When you use mysqldump, triggers are dumped by default. If you want to disable any of these, use the “skip” versions: --skip-events, --skip-routines, or --skip-triggers (see https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html).

Only dump table definitions

If you want to just dump the CREATE TABLE statements without any data, use the --no-data option.

# mysqldump --no-data database_name > db_definitions_dump.sql

You might want to add the --routines and --events options to include stored routines and event definitions.

# mysqldump --no-data --routines --events database_name > db_definitions_dump.sql

Only dump the data

If you want to just dump the data without any table definitions, you can use the --no-create-info option:

# mysqldump --no-create-info database_name > db_data_dump.sql

Using mysqldump to test a new version of MySQL

Let’s say that you need to upgrade to a new version of mysql. In order to test to see if your database objects are handled properly by the new version, you can simply dump the data definitions and import them into the new version of MySQL (preferably on a separate computer).

On the computer with the old version of MySQL:

mysqldump --all-databases --no-data --routines --events > db_definitions.sql

Then, on the upgraded server, you can just import this dump file:

mysql -uroot -p < db_definitions.sql

This will help you spot any potential incompatibilities with the new version of MySQL. If you don’t receive any errors, you can then dump the data and load into the new server. Be sure to run some test queries on the new upgraded server as well.

Point-in-time recovery using mysqldump and the binary logs

The MySQL binary logs (binlogs) contain all of the SQL statements or “events” that could change the database (I say “could” because a DELETE statement that does not delete any rows would still be entered into the binary log, unless you are using row-based logging).

For more information about the binary log, see: http://dev.mysql.com/doc/refman/5.6/en/binary-log.html.

Since the binlog contains all of the events that happen to the database, you can use the binlog to apply these same changes to a different database. If you started your MySQL instance with the binlogs enabled, and you have never flushed the binlogs, then the binlogs contain all of the SQL statements for all of the data that is in your database. The binlog itself is like a backup of your database.

If you want to use the binary logs in addition to mysqldump to restore your database, you need to have the binary logs (binlogs) enabled. There are many options for the binlogs (see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html), but the only two that you really need for this example are:

 --log-bin[=base_name]
 --log-bin-index[=file_name]

One other option to consider is --binlog-format. You can set this value to STATEMENT (the default), ROW, or MIXED. For more information about these options, see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#sysvar_binlog_format.

These variables need to go into your my.cnf or my.ini file under the [mysqld] section, and this will require a restart of mysqld.

Once you have the binary log enabled, you will need to do a few things differently when you use mysqldump. You will need to:

– flush the tables and place a READ lock on the tables
– check to see which binlog is being used
– check the position of the binlog
– dump your data with mysqldump
– release the lock

By placing a read lock on the tables, you stop anyone from modifying the data in the database. Recording the binlog file and binlog position will allow you to use the binary logs to restore any statements that happened after the mysqldump. Open two terminal windows – one with a MySQL prompt, and another with a root shell prompt:

In the MySQL prompt, issue the READ lock and SHOW MASTER STATUS:

mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.47 sec)

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000008
         Position: 191
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Now you are ready to dump the database with whatever options you need:

# mysqldump --all-databases > db_000008_191_dump.sql

Once the dump has finished, you can release the lock:

mysql> UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)

It is extremely important that you write down the binlog file and position from the SHOW MASTER STATUS statement and somehow associate it with the dump file. One way to do this is to insert the binlog file name and position into the dump file name. In my example above, I did this by adding both to the dump file name db_000008_191_dump.sql.
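That naming convention can be automated in the shell. A sketch that builds the dump file name from the File and Position values reported by SHOW MASTER STATUS (using the values from the example above):

```shell
# Embed the binlog file number and position into the dump file name.
binlog_file='mysql-bin.000008'    # File: from SHOW MASTER STATUS
binlog_pos=191                    # Position: from SHOW MASTER STATUS
suffix=${binlog_file#mysql-bin.}  # strip the prefix -> 000008
dump_name="db_${suffix}_${binlog_pos}_dump.sql"
echo "$dump_name"
# prints: db_000008_191_dump.sql
```

You could then run mysqldump --all-databases > "$dump_name" so that the file name always records where the binlog left off.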

When you have to restore the database, you will need to load the dump file first, and then apply the binlog(s) to the database.

Let’s assume that we need to restore the entire database. First, we will import the dump file:

# mysql -uroot -p < db_000008_191_dump.sql

Next, you will need to load the information from the binlog(s). To load information from the binlogs, you use the mysqlbinlog utility. You can check your MySQL data directory to see how many binlogs have been used since the one shown in the SHOW MASTER STATUS output:

$ ls -l mysql-bin*
-rw-rw----  1 mysql  wheel     67110 Apr  4 16:22 mysql-bin.000001
-rw-rw----  1 mysql  wheel   1230893 Apr  4 16:24 mysql-bin.000002
-rw-rw----  1 mysql  wheel  13383907 Apr  4 17:03 mysql-bin.000003
-rw-rw----  1 mysql  wheel  13383907 Apr  4 19:03 mysql-bin.000004
-rw-rw----  1 mysql  wheel  13383907 Apr  4 19:07 mysql-bin.000005
-rw-rw----  1 mysql  wheel  13383907 Apr 18 16:48 mysql-bin.000006
-rw-rw----  1 mysql  wheel  13383907 Apr 21 13:37 mysql-bin.000007
-rw-rw----  1 mysql  wheel  13383907 Apr 21 13:37 mysql-bin.000008
-rw-rw----  1 mysql  wheel    154847 Apr 21 13:37 mysql-bin.000009
-rw-rw----  1 mysql  wheel       171 Apr 21 13:37 mysql-bin.index

You can also just look at the mysql-bin.index file (located in your MySQL data directory), which contains a list of all of the binary files in use:

 # cat mysql-bin.index
./mysql-bin.000001
./mysql-bin.000002
./mysql-bin.000003
./mysql-bin.000004
./mysql-bin.000005
./mysql-bin.000006
./mysql-bin.000007
./mysql-bin.000008
./mysql-bin.000009

In this example, we will need to apply the changes from the binlog file mysql-bin.000008 after position 191, and then all of the mysql-bin.000009 binlog. You will need to add the correct data directory PATH to your mysqlbinlog statement.

mysqlbinlog --start-position=191 $DATA_DIR_PATH/mysql-bin.000008 | mysql -u root -p

After you have inserted all of mysql-bin.000008 after position 191, you can insert the entire mysql-bin.000009 binlog file:

mysqlbinlog $DATA_DIR_PATH/mysql-bin.000009 | mysql -u root -p
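These two steps can be scripted. Here is a dry-run sketch that just prints the mysqlbinlog commands instead of piping them into mysql; the data directory path is an assumption, so adjust it for your server:

```shell
# Dry run: print the replay commands. The first binlog is applied from
# the recorded position; any later binlogs are applied in full.
DATA_DIR_PATH=/var/lib/mysql      # assumed data directory; adjust as needed
START_POS=191                     # position recorded with the dump
first=mysql-bin.000008
echo "mysqlbinlog --start-position=$START_POS $DATA_DIR_PATH/$first | mysql -u root -p"
for f in mysql-bin.000009; do
  echo "mysqlbinlog $DATA_DIR_PATH/$f | mysql -u root -p"
done
```

Dropping the echo commands (and listing every binlog after the first) would perform the actual replay.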

Note: During the restore process, you do not want anyone inserting any data into the database.

Your database should now be back to the state when the database crashed or was deleted. It isn’t mandatory, but it is also a good idea to copy the binlogs to a separate location as part of your backup routine. You can use mysqlbinlog to do this as well – see: http://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html.

For more information on using the binary logs for point-in-time recovery, see https://dev.mysql.com/doc/refman/5.6/en/point-in-time-recovery.html. There are a lot of other options for using binlogs. The best option for backing up and restoring your MySQL database is to use the MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition subscription, which includes 24×7 MySQL Support and the other Enterprise features.

 



MySQL Partitioning – A Quick Look at Partitioning – Separate Your Data for Faster Searches

In MySQL, partitioning is a way to separate the data in one table into smaller “sub-tables” for better query performance and data management.

For example, let’s say that you have a database containing numerous accounting transactions. You could just store all of these transactions in one table, but you only need to keep seven years’ worth of data for tax purposes. Instead of placing all of the data in one table and then deleting the old data from that table, you could split the table into partitions, with each partition representing one year’s worth of data.

Then, after seven years, you could delete/drop the old partition. Partitions are flexible, as you can add, drop, redefine, merge, or split existing partitions (there are other options on what you could do with this data as well). Also, if you have a table that is going to contain a lot of rows, partitioning your data would allow your searches to be much faster, as the search can then be limited to a single partition. As of MySQL 5.6, you can split a table into as many as 8192 partitions.

Here is the MySQL website’s explanation about partitions:

The SQL standard does not provide much in the way of guidance regarding the physical aspects of data storage. The SQL language itself is intended to work independently of any data structures or media underlying the schemas, tables, rows, or columns with which it works. Nonetheless, most advanced database management systems have evolved some means of determining the physical location to be used for storing specific pieces of data in terms of the file system, hardware or even both. In MySQL, the InnoDB storage engine has long supported the notion of a tablespace, and the MySQL Server, even prior to the introduction of partitioning, could be configured to employ different physical directories for storing different databases (see Section 8.11.3.1, “Using Symbolic Links“, for an explanation of how this is done).

Partitioning takes this notion a step further, by enabling you to distribute portions of individual tables across a file system according to rules which you can set largely as needed. In effect, different portions of a table are stored as separate tables in different locations. The user-selected rule by which the division of data is accomplished is known as a partitioning function, which in MySQL can be the modulus, simple matching against a set of ranges or value lists, an internal hashing function, or a linear hashing function. The function is selected according to the partitioning type specified by the user, and takes as its parameter the value of a user-supplied expression. This expression can be a column value, a function acting on one or more column values, or a set of one or more column values, depending on the type of partitioning that is used.

(From: https://dev.mysql.com/doc/refman/5.6/en/partitioning-overview.html)


There are four main types of partitioning for your data:

RANGE – This type of partitioning assigns rows to partitions based on column values falling within a given range.

LIST – Similar to partitioning by RANGE, except that the partition is selected based on columns matching one of a set of discrete values.

HASH – With this type of partitioning, a partition is selected based on the value returned by a user-defined expression that operates on column values in rows to be inserted into the table. The function may consist of any expression valid in MySQL that yields a nonnegative integer value. An extension to this type, LINEAR HASH, is also available.

KEY – This type of partitioning is similar to partitioning by HASH, except that only one or more columns to be evaluated are supplied, and the MySQL server provides its own hashing function. These columns can contain other than integer values, since the hashing function supplied by MySQL guarantees an integer result regardless of the column data type. An extension to this type, LINEAR KEY, is also available.

(From: https://dev.mysql.com/doc/refman/5.6/en/partitioning-types.html)


This post will just give you one example of how to partition your data, and then how to verify that your query is searching only the correct partition. It doesn’t do you any good if you partition your data but then write queries that perform a table scan to get your results. In this example, I am going to be separating the table data by the year.

We are going to create a simple membership table, and partition it by RANGE. We will separate the partitions by the year that the person joined, and we will add one member to each year. Our members table will be very simple, with an ID, the date the person joined, and their first and last name. We will create the partitions using just the YEAR that they joined, while keeping the full date in the joined column. We are also assigning the columns id and joined together as the primary key. Here is the CREATE TABLE statement:

CREATE TABLE `members` (
  `id` int(5) NOT NULL AUTO_INCREMENT,
  `joined` date NOT NULL,
  `lastname` varchar(25) NOT NULL,
  `firstname` varchar(25) NOT NULL,
  PRIMARY KEY (`id`,`joined`)
) ENGINE=InnoDB AUTO_INCREMENT=10000 DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE ( YEAR(joined))
(PARTITION p0 VALUES LESS THAN (2011) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (2012) ENGINE = InnoDB,
 PARTITION p2 VALUES LESS THAN (2013) ENGINE = InnoDB,
 PARTITION p3 VALUES LESS THAN (2014) ENGINE = InnoDB,
 PARTITION p4 VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */;

Our partitions will contain rows whose joined dates are earlier than the dates shown in the PARTITION BY statement. In other words, partition p0 will contain dates earlier than 2011-01-01 (i.e. dates in 2010 or earlier). Partition p1 will contain dates earlier than 2012-01-01 but later than 2010-12-31 (i.e. dates in 2011). Likewise, p2 will contain dates for 2012, p3 will contain dates for 2013, and p4 will contain dates for 2014 and later. Before the year 2015 arrives, you will need to add an additional partition for 2015. Of course, you could go ahead and add partitions for the next several years.
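To double-check which partition a given join year lands in, here is a small sketch mirroring the VALUES LESS THAN ranges from the CREATE TABLE statement above:

```shell
# Mirror of the VALUES LESS THAN ranges in the partitioned members table.
partition_for_year() {
  y=$1
  if   [ "$y" -lt 2011 ]; then echo p0
  elif [ "$y" -lt 2012 ]; then echo p1
  elif [ "$y" -lt 2013 ]; then echo p2
  elif [ "$y" -lt 2014 ]; then echo p3
  else echo p4                       # MAXVALUE catch-all
  fi
}
partition_for_year 2010    # prints: p0
partition_for_year 2014    # prints: p4
```

This is the same "first range the value is less than" rule that MySQL applies when it routes an inserted row to a RANGE partition.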

If you want a partition to contain exactly the dates in 2011 (instead of everything LESS THAN 2011), you can use LIST partitioning with a VALUES IN (2011) clause rather than RANGE with VALUES LESS THAN (2011). But then 2011 would be the earliest year that your partitions could contain, and any rows with earlier years could not be inserted into the table.

Now, let’s insert some data. We will insert one row into each partition, and then do a:

select id, joined, lastname, firstname from members;

to see what our data looks like:

mysql> insert into members (firstname, lastname, joined) values ("Mary", "Davis", "2010-01-14");
Query OK, 1 row affected (0.64 sec)

mysql> insert into members (firstname, lastname, joined) values ("John", "Hill", "2011-02-12");
Query OK, 1 row affected (0.01 sec)

mysql> insert into members (firstname, lastname, joined) values ("Steve", "Johnson", "2012-03-18");
Query OK, 1 row affected (0.01 sec)

mysql> insert into members (firstname, lastname, joined) values ("Beth", "Daniels", "2013-04-22");
Query OK, 1 row affected (0.03 sec)

mysql> insert into members (firstname, lastname, joined) values ("Bob", "Smith", "2014-05-29");
Query OK, 1 row affected (0.01 sec)

mysql> select id, joined, lastname, firstname from members;
+-------+------------+----------+-----------+
| id    | joined     | lastname | firstname |
+-------+------------+----------+-----------+
| 10000 | 2010-01-14 | Davis    | Mary      |
| 10001 | 2011-02-12 | Hill     | John      |
| 10002 | 2012-03-18 | Johnson  | Steve     |
| 10003 | 2013-04-22 | Daniels  | Beth      |
| 10004 | 2014-05-29 | Smith    | Bob       |
+-------+------------+----------+-----------+
5 rows in set (0.00 sec)

When you start building your queries, you want to make sure that each query actually uses the partitions. You can check this by including the EXPLAIN PARTITIONS statement before your select statement. See the MySQL manual section on Obtaining Information About Partitions if you want to learn more.

Since we made the id column a primary key, let’s look at what happens when we do a search by primary key. We will use the EXPLAIN PARTITIONS statement to see what partitions are being used in the search. Let’s look for Mary’s information. She has the ID of 10000.

mysql> EXPLAIN PARTITIONS select id, firstname, lastname, joined from members where id = '10000';
+----+-------------+---------+----------------+------+---------------+---------+---------+-------+------+-------+
| id | select_type | table   | partitions     | type | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+---------+----------------+------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | members | p0,p1,p2,p3,p4 | ref  | PRIMARY       | PRIMARY | 4       | const |    5 | NULL  |
+----+-------------+---------+----------------+------+---------------+---------+---------+-------+------+-------+
1 row in set (0.05 sec)

As you can see under the partitions column, all five partitions (p0,p1,p2,p3,p4) were searched for this information because the partitions were separated by the year, and not the id. So this query would not take advantage of our partitions.

Look at what happens when we also include Mary’s joined date along with the id column:

mysql> EXPLAIN PARTITIONS select id, firstname, lastname, joined from members where id = '10000' and joined = '2010-01-14';
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
| id | select_type | table   | partitions | type  | possible_keys | key     | key_len | ref         | rows | Extra |
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
|  1 | SIMPLE      | members | p0         | const | PRIMARY       | PRIMARY | 7       | const,const |    1 | NULL  |
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
1 row in set (0.00 sec)

As you can see, MySQL only had to search in partition p0. Since the joined column was included in the query, MySQL can go to that partition and use the PRIMARY key of id and quickly find the record it needs.

Let’s see what we would need to do if we wanted to find all of the members who joined in the year 2010 (like Mary). You might think that you could just use the YEAR function on the joined column. But you can’t: if you use a function to convert the joined date to a year, MySQL has to apply it to every value in the joined column first, and then it isn’t able to use the partitions:

mysql> EXPLAIN PARTITIONS select id, firstname, lastname, joined from members where YEAR(joined) = '2010';
+----+-------------+---------+----------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table   | partitions     | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+---------+----------------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | members | p0,p1,p2,p3,p4 | ALL  | NULL          | NULL | NULL    | NULL |    5 | Using where |
+----+-------------+---------+----------------+------+---------------+------+---------+------+------+-------------+
1 row in set (0.03 sec)

In this case, you still have to go through all of the partitions because of the YEAR function. It is better to use a date range in the WHERE clause to find the members from 2010:

mysql> EXPLAIN PARTITIONS select id, firstname, lastname, joined from members where joined < '2011-01-01' and joined > '2009-12-31';
+----+-------------+---------+------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table   | partitions | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+---------+------------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | members | p0         | ALL  | NULL          | NULL | NULL    | NULL |    2 | Using where |
+----+-------------+---------+------------+------+---------------+------+---------+------+------+-------------+
1 row in set (0.00 sec)

But what happens when you need to change the partitioned value of the joined date? What if Mary’s date was incorrect, and she really joined in 2011? What happens to the data? When you change the value of the partitioned column, MySQL will move that data to the appropriate partition. Let’s look at Mary’s information again, and also look at the EXPLAIN PARTITIONS statement for the same query.

mysql> select id, firstname, lastname, joined from members where id = '10000' and joined = '2010-01-14';
+-------+-----------+----------+------------+
| id    | firstname | lastname | joined     |
+-------+-----------+----------+------------+
| 10000 | Mary      | Davis    | 2010-01-14 |
+-------+-----------+----------+------------+
1 row in set (0.00 sec)

mysql> EXPLAIN PARTITIONS select id, firstname, lastname, joined from members where id = '10000' and joined = '2010-01-14';
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
| id | select_type | table   | partitions | type  | possible_keys | key     | key_len | ref         | rows | Extra |
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
|  1 | SIMPLE      | members | p0         | const | PRIMARY       | PRIMARY | 7       | const,const |    1 | NULL  |
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
1 row in set (0.00 sec)

We can see that Mary’s data is in partition p0. Now let’s change Mary’s joined date from 2010-01-14 to 2011-05-30, and then run both of the above statements again (but in the query we need to change Mary’s joined date to reflect the new date):

mysql> update members set joined = '2011-05-30' where id = '10000';
Query OK, 1 row affected (0.06 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select id, firstname, lastname, joined from members where id = '10000' and joined = '2011-05-30';
+-------+-----------+----------+------------+
| id    | firstname | lastname | joined     |
+-------+-----------+----------+------------+
| 10000 | Mary      | Davis    | 2011-05-30 |
+-------+-----------+----------+------------+
1 row in set (0.00 sec)

mysql> EXPLAIN PARTITIONS select id, firstname, lastname, joined from members where id = '10000' and joined = '2011-05-30';
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
| id | select_type | table   | partitions | type  | possible_keys | key     | key_len | ref         | rows | Extra |
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
|  1 | SIMPLE      | members | p1         | const | PRIMARY       | PRIMARY | 7       | const,const |    1 | NULL  |
+----+-------------+---------+------------+-------+---------------+---------+---------+-------------+------+-------+
1 row in set (0.00 sec)

We can see that Mary’s data is now in partition p1.

Partitioning data can really improve the performance of your queries, but only if you know how to write queries that take advantage of the partitioning. Using the EXPLAIN PARTITIONS statement can help you figure out whether your queries are working with the partitions properly. You can also store separate partitions on separate storage devices (by using innodb_file_per_table), and in MySQL 5.7.4 (or greater) you can even move partitioned tables to another server.

 



Installing and testing the MySQL Enterprise Audit plugin

MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.

(from http://www.mysql.com/products/enterprise/)

MySQL Enterprise Audit provides an easy to use, policy-based auditing solution that helps organizations to implement stronger security controls and to satisfy regulatory compliance.

As more sensitive data is collected, stored and used online, database auditing becomes an essential component of any security strategy. To guard against the misuse of information, popular compliance regulations including HIPAA, Sarbanes-Oxley, and the PCI Data Security Standard require organizations to track access to information.

Download the White Paper: MySQL Enterprise Edition Product Guide

To meet these demands, organizations must be able to produce an audit trail of information to help track who does what to which piece of data. This includes login and logoff attempts, attempts to access a database or a table, changes to database schema and much more.

MySQL Enterprise Audit gives DBAs the tools they need to add auditing compliance to their new and existing applications by enabling them to:

  • Dynamically enable/disable audit stream
  • Implement policies that log all or selected login or query activities
  • Automatically rotate audit log files based on size
  • Integrate XML-based audit log stream with MySQL, Oracle and other third party solutions

(from http://www.mysql.com/products/enterprise/audit.html)

MySQL Enterprise Audit is implemented using a server plugin. MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out of box, easy to use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines.

When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.

(from http://dev.mysql.com/doc/refman/5.6/en/mysql-enterprise-audit.html)

Installing and testing the MySQL Enterprise Audit plugin
Manual: http://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html

Check that Enterprise Edition is installed:

The audit log plugin requires MySQL Enterprise Edition, and is not available in the Community Edition. After you have installed MySQL Enterprise Edition from http://edelivery.oracle.com, you can check from within mysql to make sure that you have the correct version:

mysql> SHOW VARIABLES LIKE 'version%';
+-------------------------+---------------------------------------------------------+
| Variable_name           | Value                                                   |
+-------------------------+---------------------------------------------------------+
| version                 | 5.6.14-enterprise-commercial-advanced-log               |
| version_comment         | MySQL Enterprise Server - Advanced Edition (Commercial) |
| version_compile_machine | x86_64                                                  |
| version_compile_os      | osx10.7                                                 |
+-------------------------+---------------------------------------------------------+
4 rows in set (0.00 sec)

Before you attempt to install the plugin, you may verify that the audit log plugin is in the plugin directory:

In the my.cnf or my.ini file, check for the location of the plugin directory (plugin_dir).

[mysqld]
plugin_dir=/path/to/plugin/directory

If the plugin directory value is not located in the my.cnf or my.ini file, check the location from within mysql:

mysql> SHOW VARIABLES like 'plugin_dir';
+---------------+------------------------------+
| Variable_name | Value                        |
+---------------+------------------------------+
| plugin_dir    | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+
1 row in set (0.00 sec)

Check to see if the audit log plugin (audit_log.so) is in the plugin directory:

$ ls -l /usr/local/mysql/lib/plugin/audit_log.so
-rwxr-xr-x+ 1 _mysql  wheel  38828 Sep 10 03:58 /usr/local/mysql/lib/plugin/audit_log.so

Installing the Enterprise Audit plugin:

To load the plugin at server startup, use the --plugin-load option to name the object file that contains the plugin. With this plugin-loading method, the option must be given each time the server starts. You may also put the following lines in your my.cnf file:

[mysqld]
plugin-load=audit_log.so

If object files have a suffix different from .so on your system, substitute the correct suffix (for example, .dll on Windows). Alternatively, to register the plugin at runtime, use this statement (changing the suffix as necessary):

mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';

INSTALL PLUGIN loads the plugin, and also registers it in the mysql.plugin table so that the plugin is loaded on each subsequent normal server startup.

If the plugin is loaded with --plugin-load or has been previously registered with INSTALL PLUGIN, you can use the --audit-log option at server startup to control plugin activation. For example, to load the plugin and prevent it from being removed at runtime, use these options:

[mysqld]
plugin-load=audit_log.so
audit-log=FORCE_PLUS_PERMANENT

To prevent the server from running without the audit plugin, use --audit-log with a value of FORCE or FORCE_PLUS_PERMANENT; server startup then fails if the plugin does not initialize successfully.

Optional audit variables: http://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin-options-variables.html

To verify that the audit_log was loaded successfully:

mysql> SHOW VARIABLES LIKE 'audit_log%';
+---------------------------+--------------+
| Variable_name             | Value        |
+---------------------------+--------------+
| audit_log_buffer_size     | 1048576      |
| audit_log_file            | audit.log    |
| audit_log_flush           | OFF          |
| audit_log_format          | OLD          |
| audit_log_policy          | ALL          |
| audit_log_rotate_on_size  | 0            |
| audit_log_strategy        | ASYNCHRONOUS |
+---------------------------+--------------+
7 rows in set (0.00 sec)

or

mysql> SELECT * FROM information_schema.PLUGINS where PLUGIN_NAME = 'audit_log'\G
*************************** 1. row ***************************
           PLUGIN_NAME: audit_log
        PLUGIN_VERSION: 0.1
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: AUDIT
   PLUGIN_TYPE_VERSION: 3.1
        PLUGIN_LIBRARY: audit_log.so
PLUGIN_LIBRARY_VERSION: 1.4
         PLUGIN_AUTHOR: Oracle
    PLUGIN_DESCRIPTION: Auditing events logger
        PLUGIN_LICENSE: PROPRIETARY
           LOAD_OPTION: ON
1 row in set (0.02 sec)
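Before moving on to testing, note that these defaults can be changed at server startup. For example, here is a hypothetical my.cnf fragment that rotates the log when it reaches 1GB and logs query activity only; the values are illustrative, so see the options link above for the full list:

```
[mysqld]
plugin-load=audit_log.so
audit-log=FORCE_PLUS_PERMANENT
audit_log_rotate_on_size=1073741824
audit_log_policy=QUERIES
```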

Testing Enterprise Audit

Check to see if the audit log file was created. The default location is in the MySQL data directory. To locate the data directory:

mysql> SHOW VARIABLES WHERE Variable_Name = 'datadir';
+---------------+---------------------------------------------+
| Variable_name | Value                                       |
+---------------+---------------------------------------------+
| datadir       | /volumes/wv_server_raid_01/mysql_data/data/ |
+---------------+---------------------------------------------+
1 row in set (0.00 sec)

Then list that file to see if it was created:

# ls -l /volumes/wv_server_raid_01/mysql_data/data/audit.log
-rw-rw----  1 mysql  mysql  0 Mar  6 20:07 audit.log

Now, let’s test the plugin. You will need to create a test user and a test database (you may need to adjust the permissions to fit your testing scenario):

mysql> CREATE USER 'audit_test_user'@'localhost' IDENTIFIED BY 'audittest123';
Query OK, 0 rows affected (0.49 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'audit_test_user'@'localhost';
Query OK, 0 rows affected (0.02 sec)

mysql> create database audit_test;
Query OK, 1 row affected (0.00 sec)

Once the user has been created, erase the contents of the log file so you can only see the audit trail of this user:

# > /volumes/wv_server_raid_01/mysql_data/data/audit.log

Next, login with the audit_test_user:

# mysql -uaudit_test_user -paudittest123
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 375
Server version: 5.6.14-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

(You may ignore the warning about using the password on the command line)

As the audit test user, create a simple table in the audit_test database, populate the table, and perform a select:

mysql> use audit_test;
Database changed

mysql> CREATE TABLE audit_test_table (firstname VARCHAR(20), lastname VARCHAR(20));
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO audit_test_table values ("John", "Smith");
Query OK, 1 row affected (0.47 sec)

mysql> select * from audit_test_table;
+------------+----------+
| firstname  | lastname |
+------------+----------+
| John       | Smith    |
+------------+----------+
1 row in set (0.00 sec)

You may now inspect the contents of the audit.log file in the MySQL data directory:
(this output has been manually formatted for easier viewing)


# cat /volumes/wv_server_raid_01/mysql_data/data/audit.log
<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:49 UTC"
RECORD_ID="13376_2014-03-06T01:18:10"
NAME="Connect"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="connect"
PRIV_USER="audit_test_user"
PROXY_USER=""
DB=""/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:49 UTC"
RECORD_ID="13377_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="select @@version_comment limit 1"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:56 UTC"
RECORD_ID="13378_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="SELECT DATABASE()"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:25:56 UTC"
RECORD_ID="13379_2014-03-06T01:18:10"
NAME="Init DB"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS=""/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:00 UTC"
RECORD_ID="13380_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="create_table"
SQLTEXT="CREATE TABLE audit_test_table (firstname VARCHAR(20), lastname VARCHAR(20))"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:14 UTC"
RECORD_ID="13382_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="insert"
SQLTEXT="INSERT INTO audit_test_table values (&quot;John&quot;, &quot;Smith&quot;)"/>

<AUDIT_RECORD TIMESTAMP="2014-03-07T01:26:23 UTC"
RECORD_ID="13383_2014-03-06T01:18:10"
NAME="Query"
CONNECTION_ID="377"
STATUS="0"
STATUS_CODE="0"
USER="audit_test_user[audit_test_user] @ localhost []"
OS_LOGIN=""
HOST="localhost"
IP=""
COMMAND_CLASS="select"
SQLTEXT="select * from audit_test_table"/>
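Since each audit record carries a COMMAND_CLASS attribute, standard shell tools can give you a quick summary of what was logged. This is a minimal sketch; it assumes plain double-quoted attributes and that you run it from the data directory (or substitute the full path to audit.log):

```shell
# Summarize the audit trail: count events by COMMAND_CLASS,
# most frequent first.
grep -o 'COMMAND_CLASS="[^"]*"' audit.log | sort | uniq -c | sort -rn
```

For the test session above, you would expect one connect record plus a handful of select, create_table and insert entries.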

MySQL Enterprise Audit is now configured and ready to use. To stop Enterprise Audit, issue this command:

mysql> UNINSTALL PLUGIN audit_log;

This command will fail if the plugin was loaded with the audit-log=FORCE_PLUS_PERMANENT option.

Removing the test objects

To remove the audit_test_user user and drop the audit_test database:

DROP USER 'audit_test_user'@'localhost';
FLUSH PRIVILEGES;
DROP DATABASE audit_test;

 



MySQL Enterprise Monitor – send advisor events to your chat client with Perl and Jabber

MySQL Enterprise Monitor (MEM) is part of the MySQL Enterprise Edition, and MEM provides real-time visibility into the performance and availability of all your MySQL databases. MEM and the MySQL Query Analyzer continuously monitor your databases and alert you to potential problems before they impact your system. It’s like having a “Virtual DBA Assistant” at your side to recommend best practices to eliminate security vulnerabilities, improve replication, optimize performance and more. As a result, the productivity of your developers, DBAs and System Administrators is improved significantly.

With MEM, you have a couple of notification options for receiving information when MEM has received an event alert. An event alert is a “significant deviation from the baseline performance trends, and best-practice Advisors recommended changes to configuration and variable settings to improve performance”. From: http://www.mysql.com/products/enterprise/monitor.html

You may choose to receive these alerts via email or SNMP. I am not going to go over how to set up these alerts, but instructions may be found in the MySQL Enterprise Monitor documentation.

I have MEM installed on my home server, and since I have been using it for a while, I have been able to tune my instances and databases so that I rarely receive any alerts. I normally receive these alerts via email, but I don’t always check email for this particular account during the day. Since most of my colleagues are in other parts of the world, we use a Jabber chat client for quick communications during the day. I wanted to figure out a way to receive a chat message whenever MEM had an alert. For my chat client, I use Adium – an open-source multi-protocol instant messaging client for Mac OS X – which supports MSN, Jabber, Yahoo! and other networks. But this should work with any XMPP-based chat client application.

You will probably want to create a new POP3 email address for these alerts, as this script will delete the email messages from the server. If you want to keep an email copy of these alerts, you may add an additional email address to the alert notification group. If you use a free email service (like gmail), remember that it has a daily limit on the number of messages that you can send. I ran into this problem when testing the scripts, so I created my own email address under one of my domains. I found two Perl scripts – one that acts as a POP3 client, and another that sends the Jabber message – and I combined them into one Perl script. You might need to do a little tweaking to get it to work with your service.

This is a sample email message from MEM. The first “Subject” line is the actual email subject. In MEM, you can customize what information you want in your email subject line via Email Notification Groups. I combined several parts of the email message into one message to be sent via chat.


(Email Subject Line)
(macserver01, MacServer01) - MEM WARNING Alert: User Has Rights To Database That Does Not Exist

(Email Body)

Subject: macserver01, MacServer01
Time: 2013-12-30 17:01:44 UTC (2013-12-30 12:01:44 EST)
Category: Security
Event: User Has Rights To Database That Does Not Exist
Problem Description
When a database is dropped, user privileges on the database are not automatically dropped. This has security implications as that user will regain privileges if a database with the same name is created in the future, which may not be the intended result.
Advice
Revoke privileges for all users on any databases that do not exist. The following users have privileges on databases that do not exist.

    ''@'%' on database test_%

Recommended Action
REVOKE ALL PRIVILEGES ON db_name.* FROM 'user_name'@'host_name';
Links and Further Reading
MySQL Manual: DROP DATABASE Syntax
MySQL Manual: GRANT Syntax
MySQL Manual: REVOKE Syntax
MySQL Manual: Privileges Provided by MySQL
MySQL Manual: How the Privilege System Works
Securing Your MySQL Installation
Securing a MySQL Server on Windows
Expression
%user% != THRESHOLD
Evaluated Expression

    ''@'%' on database test_% != ''

Copyright © 2005, 2013, Oracle and/or its affiliates. All rights reserved.

And here is the script. You may run it as a cron job, where it will check your email every few minutes and then send you a chat message when an email with an alert has arrived. You will need to modify the script to match your email and chat settings, and I have placed notes in the script to help guide you:

#!/usr/bin/perl -w

# POP3 client script source:
# http://forums.devshed.com/perl-programming-6/how-to-get-gmail-mail-by-mail-pop3client-in-perl-555889.html
# Author: a user named keath

# Jabber message script source: 
# http://dipinkrishna.com/blog/2010/12/perl-send-chat-message-gmail-buddy-jabber/
# Author: Dipin Krishna

use strict;
# for the email
use Mail::POP3Client;
# for the chat
use Net::Jabber;

# I was having a problem with the Debug module, so I just
# commented line 154 in /Library/Perl/5.16/Net/XMPP/Debug.pm

# this is the email address that you want to use to receive 
# the alert messages from MySQL Enterprise Monitor
my $user = 'MEMalerts@scriptingmysql.com';
my $pass = 'mypassword';

# you will need to use your POP3 server name
my $host = "pop.emailserver.com";

my $pop = new Mail::POP3Client(
	USER     => $user,
	PASSWORD => $pass,
	HOST     => $host,
	PORT     => 995,
	USESSL   => 'true',
);

# I have commented most of the print messages
# - you may uncomment them as you wish
my $count = $pop->Count();
if ($count < 0) {
	print "Error: " . $pop->Message() . "\n";
} elsif ($count == 0) {
	print "no messages\n";
} else {
	
	#print "$count messsages\n\n";
	
	for my $i (1 .. $count) {
	
	my $subject = "";
	my $message = "";
	
		# if you want to extract data from the head of the email
		# I am extracting data from the body
		#foreach ($pop->Head($i)) {
		foreach ($pop->Body($i)) {

			#print "$_\n" if /^(From|Subject|Date):/i;
			#print "$_\n" if /^(Subject|Date):/i;
			# my $message = "$_\n" if /^(Subject):/i;

			# I am building my message so that it contains the information in this order:
			# Category, Subject, Event

			if ($_ =~ "^Subject")
			{
				#print "$_\n";
				chomp $_;
				$subject = $_;
				$subject =~ s/Subject: //;
 			}

			if ($_ =~ "^Category")
			{
				#print "$_\n";
				chomp $_;
				$message = "$_ || $subject";
				$message =~ s/Category: //;
 			}

			if ($_ =~ "^Event")
			{
				#print "$_\n";
				chomp $_;
				
				$message = "$message || $_";
				$message =~ s/Event: //;
  						
					my $sttime=time;
						
					#print "Message: $_\n";
						
					# this is my Google Talk chat user name and password
					my $username = 'mem.scripting.mysql';
					my $password = 'mypassword';
					my $to = 'my_email_address';
					my $msg = "$message";
					#print "$to: $msg\n";
						 
					my $resource = "dipin";
					my $hostname = 'talk.google.com';
					my $port = 5222;
					my $componentname = 'gmail.com';
					my $Contype = 'tcpip';
					my $tls = 1;
						 
					my $Con = new Net::Jabber::Client();
					$Con->SetCallBacks(presence=>\&presence,
					message=>\&message );
						 
					my $status = $Con->Connect(
					hostname => $hostname, port => $port,
					componentname => $componentname,
					connectiontype => $Contype, tls => $tls);
						 
					if (!(defined($status))) {
						print "ERROR:  XMPP connection failed.\n";
						print "        ($!)\n";
						exit(0);
					} 
						 
					# Change hostname
					my $sid = $Con->{SESSION}->{id};
					$Con->{STREAM}->{SIDS}->{$sid}->{hostname} = $componentname;
						 
					# Authenticate
						
					#my @result = $Con->AuthSend(
					my @result = $Con->AuthIQAuth(
					username => $username, password => $password,
					resource => $resource);
						
					#print "Result:  $result[0] $result[1]\n";
						 
					if ($result[0] ne "ok") {
						print "ERROR: Authorization failed: $result[0] - $result[1]\n";
					}
					else
					{
						#print "Logged in successfully!\n";
						$Con->PresenceSend(show=>"Available");
						#print "Sending Message!\n";
						$Con->MessageSend(to=>"$to",
						subject=>"Test",
						body=>"$msg\n",
						priority=>10);
					}
						
						# END send Jabber message
 						# # # # # # # # # # # # # # # # # # # # # # # # # # #

				# this deletes the message from the server
				$pop->Delete( $i );
				
				# if you only want to send one email message as a test, 
				# uncomment this line
				exit;
			}
		}
		print "\n";
	}
}

$pop->Close();

exit;
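For the cron job mentioned above, a crontab entry along these lines would work; the script and log file paths here are hypothetical, so adjust them for your system:

```
# check the MEM alert mailbox every five minutes
*/5 * * * * /usr/bin/perl /path/to/mem_alert_chat.pl >> /var/log/mem_alert_chat.log 2>&1
```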

 



Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 2 of 2

In part one of this post, I gave you a couple examples of how to backup your MySQL databases using mysqldump. In part two, I will show you how to use the MySQL Enterprise Backup (MEB) to create a full and partial backup.


MySQL Enterprise Backup provides enterprise-grade backup and recovery for MySQL. It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris. To learn more, you may download a whitepaper on MEB.

MySQL Enterprise Backup delivers:

  • NEW! Continuous monitoring – Monitor the progress and disk space usage
  • “Hot” Online Backups – Backups take place entirely online, without interrupting MySQL transactions
  • High Performance – Save time with faster backup and recovery
  • Incremental Backup – Backup only data that has changed since the last backup
  • Partial Backup – Target particular tables or tablespaces
  • Compression – Cut costs by reducing storage requirements up to 90%
  • Backup to Tape – Stream backup to tape or other media management solutions
  • Fast Recovery – Get servers back online and create replicated servers
  • Point-in-Time Recovery (PITR) – Recover to a specific transaction
  • Partial restore – Recover targeted tables or tablespaces
  • Restore to a separate location – Rapidly create clones for fast replication setup
  • Reduce Failures – Use a proven high quality solution from the developers of MySQL
  • Multi-platform – Backup and Restore on Linux, Windows, Mac & Solaris

(from http://www.mysql.com/products/enterprise/backup.html)

While mysqldump is free to use, MEB is part of MySQL’s Enterprise Edition (EE) – so you need a license to use it. But if you are using MySQL in a production environment, you might want to look at EE, as:

MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.
(from: http://www.mysql.com/products/enterprise/)

Before using MEB and backing up your database for the first time, you will need some information:

Information to gather – Where to Find It – How It Is Used

  • Path to MySQL configuration file – Default system locations, hardcoded application default locations, or from the --defaults-file option in the mysqld startup script. – This is the preferred way to convey database configuration information to the mysqlbackup command, using the --defaults-file option. When connection and data layout information is available from the configuration file, you can skip most of the other choices listed below.
  • MySQL port – MySQL configuration file or mysqld startup script. – Used to connect to the database instance during backup operations. Specified via the --port option of mysqlbackup. --port is not needed if available from the MySQL configuration file. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions.
  • Path to MySQL data directory – MySQL configuration file or mysqld startup script. – Used to retrieve files from the database instance during backup operations, and to copy files back to the database instance during restore operations. Automatically retrieved from the database connection for hot and warm backups. Taken from the MySQL configuration file for cold backups.
  • ID and password of privileged MySQL user – You record this during installation of your own databases, or get it from the DBA when backing up databases you do not own. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions. For cold backups, you log in as an administrative user. – Specified via the --password option of mysqlbackup. Prompted from the terminal if the --password option is present without the password argument.
  • Path under which to store backup data – You choose this. See Section 3.1.3, "Designate a Location for Backup Data" for details. – By default, this directory must be empty for mysqlbackup to write data into it, to avoid overwriting old backups or mixing up data from different backups. Use the --with-timestamp option to automatically create a subdirectory with a unique name, when storing multiple sets of backup data under the same main directory.
  • Owner and permission information for backed-up files (for Linux, Unix, and OS X systems) – In the MySQL data directory. – If you do the backup using a different OS user ID or a different umask setting than applies to the original files, you might need to run commands such as chown and chmod on the backup data. See Section A.1, "Limitations of mysqlbackup Command" for details.
  • Size of InnoDB redo log files – Calculated from the values of the innodb_log_file_size and innodb_log_files_in_group configuration variables. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation for redo data dictate how often you must perform incremental backups.
  • Rate at which redo data is generated – Calculated from the values of the InnoDB logical sequence number at different points in time. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation for redo data dictate how often you must perform incremental backups.

(from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/backup-prep-gather.html)

For most backup operations, the mysqlbackup command connects to the MySQL server through the --user and --password options. If you aren’t going to use the root user, then you will need to create a separate user. Follow these instructions for setting the proper permissions.

All backup-related operations either create new files or reference existing files underneath a specified directory that holds backup data. Choose this directory in advance, on a file system with sufficient storage. (It could even be remotely mounted from a different server.) You specify the path to this directory with the --backup-dir option for many invocations of the mysqlbackup command.

Once you establish a regular backup schedule with automated jobs, it is preferable to keep each backup within a timestamped subdirectory underneath the main backup directory. To make the mysqlbackup command create these subdirectories automatically, specify the --with-timestamp option each time you run mysqlbackup.

For one-time backup operations, for example when cloning a database to set up a replication slave, you might specify a new directory each time, or specify the --force option of mysqlbackup to overwrite older backup files.
(from http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/backup-prep-storage.html)

If you haven’t downloaded and installed mysqlbackup, you may download it from edelivery.oracle.com (registration is required). Install the MySQL Enterprise Backup product on each database server whose contents you intend to back up. You perform all backup and restore operations locally, by running the mysqlbackup command on the same server as the MySQL instance. Information on installation may be found here.

Now that we have gathered all of the required information and installed mysqlbackup, let’s run a simple and easy backup of the entire database. I installed MEB in my /usr/local directory, so I am including the full path of mysqlbackup. I am using the backup-and-apply-log option, which combines the --backup and the --apply-log options into one. The --backup option performs the initial phase of a backup. The second phase is performed later by running mysqlbackup again with the --apply-log option, which brings the InnoDB tables in the backup up-to-date, including any changes made to the data while the backup was running.
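For reference, the same backup could be taken as two separate invocations, something like the following sketch; check the MEB manual for your version before relying on the exact syntax:

```
$ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup
$ /usr/local/meb/bin/mysqlbackup --backup-dir=/Users/tonydarnell/hotbackups apply-log
```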

      $ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log
      MySQL Enterprise Backup version 3.8.2 [2013/06/18] 
      Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
      
       mysqlbackup: INFO: Starting with following command line ...
       /usr/local/meb/bin/mysqlbackup --user=root --password 
              --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log 
      
      Enter password: 
       mysqlbackup: INFO: MySQL server version is '5.6.9-rc-log'.
       mysqlbackup: INFO: Got some server configuration information from running server.
      
      IMPORTANT: Please check that mysqlbackup run completes successfully.
                 At the end of a successful 'backup-and-apply-log' run mysqlbackup
                 prints "mysqlbackup completed OK!".
      
      --------------------------------------------------------------------
                             Server Repository Options:
      --------------------------------------------------------------------
        datadir = /usr/local/mysql/data/
        innodb_data_home_dir = /usr/local/mysql/data
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /usr/local/mysql/data
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5242880
        innodb_page_size = 16384
        innodb_checksum_algorithm = innodb
        innodb_undo_directory = /usr/local/mysql/data/
        innodb_undo_tablespaces = 0
        innodb_undo_logs = 128
      
      --------------------------------------------------------------------
                             Backup Config Options:
      --------------------------------------------------------------------
        datadir = /Users/tonydarnell/hotbackups/datadir
        innodb_data_home_dir = /Users/tonydarnell/hotbackups/datadir
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/datadir
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5242880
        innodb_page_size = 16384
        innodb_checksum_algorithm = innodb
        innodb_undo_directory = /Users/tonydarnell/hotbackups/datadir
        innodb_undo_tablespaces = 0
        innodb_undo_logs = 128
      
       mysqlbackup: INFO: Unique generated backup id for this is 13742482113579320
      
       mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
      130719 11:36:53 mysqlbackup: INFO: Full Backup operation starts with following threads
      		1 read-threads    6 process-threads    1 write-threads
      130719 11:36:53 mysqlbackup: INFO: System tablespace file format is Antelope.
      130719 11:36:53 mysqlbackup: INFO: Starting to copy all innodb files...
      130719 11:36:53 mysqlbackup: INFO: Copying /usr/local/mysql/data/ibdata1 (Antelope file format).
      130719 11:36:53 mysqlbackup: INFO: Found checkpoint at lsn 135380756.
      130719 11:36:53 mysqlbackup: INFO: Starting log scan from lsn 135380480.
      130719 11:36:53 mysqlbackup: INFO: Copying log...
      130719 11:36:54 mysqlbackup: INFO: Log copied, lsn 135380756.
      
      (I have truncated some of the database and table output to save space)
      .....
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_index_stats.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_table_stats.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_master_info.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_relay_log_info.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_worker_info.ibd (Antelope file format).
      .....
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t1.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t3.ibd (Antelope file format).
      .....
      130719 11:36:57 mysqlbackup: INFO: Copying /usr/local/mysql/data/watchdb/watches.ibd (Antelope file format).
      .....
      130719 11:36:57 mysqlbackup: INFO: Completing the copy of innodb files.
      130719 11:36:58 mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
      130719 11:36:58 mysqlbackup: INFO: Starting to lock all the tables...
      130719 11:36:58 mysqlbackup: INFO: All tables are locked and flushed to disk
      130719 11:36:58 mysqlbackup: INFO: Opening backup source directory '/usr/local/mysql/data/'
      130719 11:36:58 mysqlbackup: INFO: Starting to backup all non-innodb files in 
      	subdirectories of '/usr/local/mysql/data/'
      .....
      130719 11:36:58 mysqlbackup: INFO: Copying the database directory 'comicbookdb'
      .....
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'mysql'
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'performance_schema'
      .....
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'test'
      .....
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'watchdb'
      130719 11:36:59 mysqlbackup: INFO: Completing the copy of all non-innodb files.
      130719 11:37:00 mysqlbackup: INFO: A copied database page was modified at 135380756.
                (This is the highest lsn found on page)
                Scanned log up to lsn 135384397.
                Was able to parse the log up to lsn 135384397.
                Maximum page number for a log record 375
      130719 11:37:00 mysqlbackup: INFO: All tables unlocked
      130719 11:37:00 mysqlbackup: INFO: All MySQL tables were locked for 1.589 seconds.
      130719 11:37:00 mysqlbackup: INFO: Full Backup operation completed successfully.
      130719 11:37:00 mysqlbackup: INFO: Backup created in directory '/Users/tonydarnell/hotbackups'
      130719 11:37:00 mysqlbackup: INFO: MySQL binlog position: filename mysql-bin.000013, position 85573
      
      -------------------------------------------------------------
         Parameters Summary         
      -------------------------------------------------------------
         Start LSN                  : 135380480
         End LSN                    : 135384397
      -------------------------------------------------------------
      
       mysqlbackup: INFO: Creating 14 buffers each of size 65536.
      130719 11:37:00 mysqlbackup: INFO: Apply-log operation starts with following threads
      		1 read-threads    1 process-threads
      130719 11:37:00 mysqlbackup: INFO: ibbackup_logfile's creation parameters:
                start lsn 135380480, end lsn 135384397,
                start checkpoint 135380756.
       mysqlbackup: INFO: InnoDB: Starting an apply batch of log records to the database...
      InnoDB: Progress in percent: 0 1 .... 99 Setting log file size to 5242880
      Setting log file size to 5242880
      130719 11:37:00 mysqlbackup: INFO: We were able to parse ibbackup_logfile up to
                lsn 135384397.
       mysqlbackup: INFO: Last MySQL binlog file position 0 85573, file name mysql-bin.000013
      130719 11:37:00 mysqlbackup: INFO: The first data file is '/Users/tonydarnell/hotbackups/datadir/ibdata1'
                and the new created log files are at '/Users/tonydarnell/hotbackups/datadir'
      130719 11:37:01 mysqlbackup: INFO: Apply-log operation completed successfully.
      130719 11:37:01 mysqlbackup: INFO: Full backup prepared for recovery successfully.
      
      mysqlbackup completed OK!
      

      Now, I can take a look at the backup file that was created:

      root@macserver01: $ pwd
      /Users/tonydarnell/hotbackups
      root@macserver01: $ ls -l
      total 8
      -rw-r--r--   1 root  staff  351 Jul 19 11:36 backup-my.cnf
      drwx------  21 root  staff  714 Jul 19 11:37 datadir
      drwx------   6 root  staff  204 Jul 19 11:37 meta
      $ ls -l datadir
      total 102416
      drwx------   5 root  staff       170 Jul 19 11:36 comicbookdb
      -rw-r-----   1 root  staff   5242880 Jul 19 11:37 ib_logfile0
      -rw-r-----   1 root  staff   5242880 Jul 19 11:37 ib_logfile1
      -rw-r--r--   1 root  staff      4608 Jul 19 11:37 ibbackup_logfile
      -rw-r--r--   1 root  staff  41943040 Jul 19 11:37 ibdata1
      drwx------  88 root  staff      2992 Jul 19 11:36 mysql
      drwx------  55 root  staff      1870 Jul 19 11:36 performance_schema
      drwx------   3 root  staff       102 Jul 19 11:36 test
      drwx------  30 root  staff      1020 Jul 19 11:36 testcert
      drwx------  19 root  staff       646 Jul 19 11:36 watchdb
      
      root@macserver01: $ ls -l meta
      total 216
      -rw-r--r--  1 root  staff  90786 Jul 19 11:37 backup_content.xml
      -rw-r--r--  1 root  staff   5746 Jul 19 11:36 backup_create.xml
      -rw-r--r--  1 root  staff    265 Jul 19 11:37 backup_gtid_executed.sql
      -rw-r--r--  1 root  staff    321 Jul 19 11:37 backup_variables.txt
      

      As you can see, the backup was created in /Users/tonydarnell/hotbackups. If I want a unique folder for each backup, I can use the --with-timestamp option.

      The --with-timestamp option places the backup in a subdirectory created under the directory you specified above. The name of the backup subdirectory is formed from the date and the clock time of the backup run.
      (from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/mysqlbackup.html)

      I will run the same backup command again, but with the --with-timestamp option:

      (I won’t duplicate the entire output here – I will only show the lines where it creates the sub-directory under /Users/tonydarnell/hotbackups)

      $ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log --with-timestamp
      ......
      130719 11:49:54 mysqlbackup: INFO: The first data file is '/Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/ibdata1'
                and the new created log files are at '/Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir'
      130719 11:49:54 mysqlbackup: INFO: Apply-log operation completed successfully.
      130719 11:49:54 mysqlbackup: INFO: Full backup prepared for recovery successfully.
      
      mysqlbackup completed OK!
      

      So, I ran the backup again to get a unique directory. Instead of the backup files/directories being placed in /Users/tonydarnell/hotbackups, it created a sub-directory with a timestamp for the directory name:

      $ pwd
      /Users/tonydarnell/hotbackups
      root@macserver01: $ ls -l
      total 0
      drwx------  5 root  staff  170 Jul 19 11:49 2013-07-19_11-49-48
      $ ls -l 2013-07-19_11-49-48
      total 8
      -rw-r--r--   1 root  staff  371 Jul 19 11:49 backup-my.cnf
      drwx------  21 root  staff  714 Jul 19 11:49 datadir
      drwx------   6 root  staff  204 Jul 19 11:49 meta
      

      Note: If you don’t use the backup-and-apply-log option, you will need to read this: Immediately after the backup job completes, the backup files might not be in a consistent state, because data could be inserted, updated, or deleted while the backup is running. These initial backup files are known as the raw backup.

      You must update the backup files so that they reflect the state of the database corresponding to a specific InnoDB log sequence number. (The same kind of operation as crash recovery.) When this step is complete, these final files are known as the prepared backup.

      During the backup, mysqlbackup copies the accumulated InnoDB log to a file called ibbackup_logfile. This log file is used to “roll forward” the backed-up data files, so that every page in the data files corresponds to the same log sequence number of the InnoDB log. This phase also creates new ib_logfiles that correspond to the data files.

      The mysqlbackup option for turning a raw backup into a prepared backup is apply-log. You can run this step on the same database server where you did the backup, or transfer the raw backup files to a different system first, to limit the CPU and storage overhead on the database server.

      Note: Since the apply-log operation does not modify any of the original files in the backup, nothing is lost if the operation fails for some reason (for example, insufficient disk space). After fixing the problem, you can safely retry the apply-log operation by specifying the --force option, which allows the data and log files created by the failed apply-log operation to be overwritten.

      For simple backups (without compression or incremental backup), you can combine the initial backup and the apply-log step using the option backup-and-apply-log.
      (from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/backup-apply-log.html)
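      The two-step flow described above (raw backup first, apply-log afterwards) can be sketched as a pair of commands. In this sketch the commands are only printed, not executed, and the MEB_BIN and BACKUP_DIR paths are assumptions taken from the examples in this post – adjust them for your installation:

```shell
# Sketch of the two-step backup: raw backup, then a separate apply-log.
# MEB_BIN and BACKUP_DIR are assumptions -- adjust for your installation.
MEB_BIN=/usr/local/meb/bin/mysqlbackup
BACKUP_DIR=/Users/tonydarnell/hotbackups

# Step 1: raw backup only -- the copied files are not yet consistent.
BACKUP_CMD="$MEB_BIN --user=root --password --backup-dir=$BACKUP_DIR backup"

# Step 2: roll the backup forward with the accumulated InnoDB log
# (ibbackup_logfile) to produce the prepared backup. This step can run on
# a different machine to offload the database server.
APPLY_CMD="$MEB_BIN --backup-dir=$BACKUP_DIR apply-log"

# Print the commands instead of running them; remove the echoes to execute.
echo "$BACKUP_CMD"
echo "$APPLY_CMD"
```

      The combined backup-and-apply-log option used earlier in this post collapses these two steps into one run, at the cost of doing the apply-log work on the database server itself.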

      One file that was not copied was the my.cnf file. You will want a separate script to copy it at regular intervals. If you put the mysqlbackup command in a cron or Windows Task Scheduler job, you can have that job copy the my.cnf file as well.
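      A cron wrapper along those lines might look like the sketch below. The paths are assumptions based on this post's examples, and the dated my.cnf copy simply mirrors the naming style that --with-timestamp uses for backup sub-directories:

```shell
# Sketch of a backup wrapper for cron that also keeps a dated copy of
# my.cnf, which mysqlbackup itself does not copy. Paths are assumptions.
MY_CNF=/etc/my.cnf
BACKUP_DIR=/Users/tonydarnell/hotbackups
STAMP=$(date +%Y-%m-%d_%H-%M-%S)

# Keep the config copy next to the backups, named after the run time.
CNF_COPY="$BACKUP_DIR/my.cnf.$STAMP"
echo "would copy $MY_CNF to $CNF_COPY"
# cp "$MY_CNF" "$CNF_COPY"
# /usr/local/meb/bin/mysqlbackup --user=root --password \
#     --backup-dir="$BACKUP_DIR" backup-and-apply-log --with-timestamp
```

      A crontab entry such as `0 2 * * * /path/to/backup_wrapper.sh` (a hypothetical script name) would then run the backup and the config copy together every night.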

      Now that we have a completed backup, we are going to copy the backup files and the my.cnf file over to a different server to restore the databases. We will be using a server that was set up as a slave of the server where the backup occurred. If you need to restore the backup to the same server, you will need to refer to this section of the mysqlbackup manual. I copied the backup files as well as the my.cnf file to the new server:

      # pwd
      /Users/tonydarnell/hotbackups
      # ls -l
      total 16
      drwxrwxrwx  5 tonydarnell  staff   170 Jul 19 15:38 2013-07-19_11-49-48
      
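      The transfer itself can be done with any tool you like. A hypothetical scp sketch is below – the hostname and user are assumptions (rsync or a shared mount would work just as well):

```shell
# Sketch of transferring the prepared backup and my.cnf to the restore
# server. NEW_SERVER and the use of scp are assumptions.
NEW_SERVER="tonydarnell@192.168.1.123"
SRC="/Users/tonydarnell/hotbackups/2013-07-19_11-49-48"

echo "would copy $SRC and /etc/my.cnf to $NEW_SERVER"
# scp -r "$SRC" "$NEW_SERVER:/Users/tonydarnell/hotbackups/"
# scp /etc/my.cnf "$NEW_SERVER:/tmp/my.cnf.from-master"
```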

      On the new server (where I will restore the data), I shut down the mysqld process (mysqladmin -uroot -p shutdown) and copied the my.cnf file to the proper directory. Now I can restore the database to the new server, using the copy-back option. The copy-back option requires the database server to already be shut down; it then copies the data files, logs, and other backed-up files from the backup directory back to their original locations, and performs any required postprocessing on them.
      (from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/restore.restore.html)

      # /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/Users/tonydarnell/hotbackups/2013-07-19_11-49-48 copy-back
      MySQL Enterprise Backup version 3.8.2 [2013/06/18] 
      Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
      
       mysqlbackup: INFO: Starting with following command line ...
       /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf 
              --backup-dir=/Users/tonydarnell/hotbackups/2013-07-19_11-49-48 
              copy-back 
      
      IMPORTANT: Please check that mysqlbackup run completes successfully.
                 At the end of a successful 'copy-back' run mysqlbackup
                 prints "mysqlbackup completed OK!".
      
      --------------------------------------------------------------------
                             Server Repository Options:
      --------------------------------------------------------------------
        datadir = /usr/local/mysql/data
        innodb_data_home_dir = /usr/local/mysql/data
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /usr/local/mysql/data
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5M
        innodb_page_size = Null
        innodb_checksum_algorithm = innodb
      
      --------------------------------------------------------------------
                             Backup Config Options:
      --------------------------------------------------------------------
        datadir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
        innodb_data_home_dir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5242880
        innodb_page_size = 16384
        innodb_checksum_algorithm = innodb
        innodb_undo_directory = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
        innodb_undo_tablespaces = 0
        innodb_undo_logs = 128
      
       mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
      130719 15:54:41 mysqlbackup: INFO: Copy-back operation starts with following threads
      		1 read-threads    1 write-threads
      130719 15:54:41 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/ibdata1.
      .....
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/comicbookdb/comics.ibd.
      .....
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/innodb_index_stats.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/innodb_table_stats.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_master_info.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_relay_log_info.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_worker_info.ibd.
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/watchdb/watches.ibd.
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'comicbookdb'
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'mysql'
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'performance_schema'
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'test'
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'watchdb'
      130719 15:54:43 mysqlbackup: INFO: Completing the copy of all non-innodb files.
      130719 15:54:43 mysqlbackup: INFO: Copying the log file 'ib_logfile0'
      130719 15:54:43 mysqlbackup: INFO: Copying the log file 'ib_logfile1'
      130719 15:54:44 mysqlbackup: INFO: Copy-back operation completed successfully.
      130719 15:54:44 mysqlbackup: INFO: Finished copying backup files to '/usr/local/mysql/data'
      
      mysqlbackup completed OK!
      

      I can now restart MySQL. I have a very small database (less than 50 megabytes), and it took less than a minute to restore. If I had to rebuild my database using mysqldump, it would take a lot longer. If you have a very large database, the difference between using mysqlbackup and mysqldump could be measured in hours. For example, a 32-gigabyte database with 33 tables takes about eight minutes to restore with mysqlbackup. Restoring the same database with a mysqldump file takes over two hours.

      An easy way to check whether the databases match (assuming that I haven’t added any new records in any of the original databases – which I haven’t) is to use one of the MySQL Utilities, mysqldbcompare. I wrote about how to use it in an earlier blog post about testing two replicated databases, and it works here as well – see Using MySQL Utilities Workbench Script mysqldbcompare To Compare Two Databases In Replication.

      The mysqldbcompare utility “compares the objects and data from two databases to find differences. It identifies objects having different definitions in the two databases and presents them in a diff-style format of choice. Differences in the data are shown using a similar diff-style format. Changed or missing rows are shown in a standard format of GRID, CSV, TAB, or VERTICAL.” (from: mysqldbcompare — Compare Two Databases and Identify Differences)

      Some of the mysqldbcompare syntax may have changed since I wrote that post, so you will need to check the utility's help output. You will need to run the comparison for each of your databases.

      $ mysqldbcompare --server1=scripts:scripts999@192.168.1.2   --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context comicbookdb:comicbookdb
      # server1 on 192.168.1.2: ... connected.
      # server2 on 192.168.1.123: ... connected.
      # Checking databases comicbookdb on server1 and comicbookdb on server2
      
                                                          Defn    Row     Data   
      Type      Object Name                               Diff    Count   Check  
      --------------------------------------------------------------------------- 
      TABLE     comics                                    pass    pass    pass   
      
      Databases are consistent.
      
      # ...done
      
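      Since mysqldbcompare checks one database pair per run, the per-database comparisons can be scripted with a simple loop. The servers and credentials below are the ones from the example above; the DBS list is an assumption – in practice you could build it from SHOW DATABASES:

```shell
# Sketch: run mysqldbcompare once per database. DBS is an assumed list of
# the user databases from this post -- build it dynamically in practice.
SERVER1="scripts:scripts999@192.168.1.2"
SERVER2="scripts:scripts999@192.168.1.123"
DBS="comicbookdb testcert watchdb"

for db in $DBS; do
    CMD="mysqldbcompare --server1=$SERVER1 --server2=$SERVER2 --run-all-tests --difftype=context $db:$db"
    echo "$CMD"
    # eval "$CMD"    # uncomment to actually run each comparison
done
```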

      You can try to run this for the mysql database, but you may get a few errors regarding the mysql.backup_history and mysql.backup_progress tables:

      $ mysqldbcompare --server1=scripts:scripts999@192.168.1.2   --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context mysql:mysql
      # server1 on 192.168.1.2: ... connected.
      # server2 on 192.168.1.123: ... connected.
      # Checking databases mysql on server1 and mysql on server2
      
                                                          Defn    Row     Data   
      Type      Object Name                               Diff    Count   Check  
      --------------------------------------------------------------------------- 
      TABLE     backup_history                            pass    FAIL    SKIP    
      
      Row counts are not the same among mysql.backup_history and mysql.backup_history.
      
      No primary key found.
      
      TABLE     backup_progress                           pass    FAIL    SKIP    
      
      Row counts are not the same among mysql.backup_progress and mysql.backup_progress.
      
      No primary key found.
      
      TABLE     columns_priv                              pass    pass    pass    
      TABLE     db                                        pass    pass    pass    
      TABLE     event                                     pass    pass    pass    
      TABLE     func                                      pass    pass    pass    
      TABLE     general_log                               pass    pass    SKIP    
      
      No primary key found.
      
      TABLE     help_category                             pass    pass    pass    
      TABLE     help_keyword                              pass    pass    pass    
      TABLE     help_relation                             pass    pass    pass    
      TABLE     help_topic                                pass    pass    pass    
      TABLE     innodb_index_stats                        pass    pass    pass    
      TABLE     innodb_table_stats                        pass    pass    pass    
      TABLE     inventory                                 pass    pass    pass    
      TABLE     ndb_binlog_index                          pass    pass    pass    
      TABLE     plugin                                    pass    pass    pass    
      TABLE     proc                                      pass    pass    pass    
      TABLE     procs_priv                                pass    pass    pass    
      TABLE     proxies_priv                              pass    pass    pass    
      TABLE     servers                                   pass    pass    pass    
      TABLE     slave_master_info                         pass    pass    pass    
      TABLE     slave_relay_log_info                      pass    pass    pass    
      TABLE     slave_worker_info                         pass    pass    pass    
      TABLE     slow_log                                  pass    pass    SKIP    
      
      No primary key found.
      
      TABLE     tables_priv                               pass    pass    pass    
      TABLE     time_zone                                 pass    pass    pass    
      TABLE     time_zone_leap_second                     pass    pass    pass    
      TABLE     time_zone_name                            pass    pass    pass    
      TABLE     time_zone_transition                      pass    pass    pass    
      TABLE     time_zone_transition_type                 pass    pass    pass    
      TABLE     user                                      pass    pass    pass   
      
      Database consistency check failed.
      
      # ...done
      

      For example, when you compare the mysql.backup_history tables, the original database has two entries, since I ran mysqlbackup twice. But the second backup entry isn’t written until after the backup has finished, so it isn’t reflected in the backup files.

      Original Server

      mysql> select count(*) from mysql.backup_history;
      +----------+
      | count(*) |
      +----------+
      |        2 |
      +----------+
      1 row in set (0.00 sec)
      

      Restored Server

      mysql> select count(*) from mysql.backup_history;
      +----------+
      | count(*) |
      +----------+
      |        1 |
      +----------+
      1 row in set (0.00 sec)
      

      For the mysql.backup_progress tables, the original database has ten rows, while the restored database has seven.

      There are many options for using mysqlbackup, including (but not limited to) incremental backups, partial backups, compression, backup to tape, point-in-time recovery (PITR), and partial restores. If you are running MySQL in a production environment, you should look at MySQL Enterprise Edition, which includes MySQL Enterprise Backup. Of course, you should always have a backup and recovery plan in place. Finally, whenever possible, practice restoring your backup on a regular basis, to make sure that if your server crashes, you can restore your database quickly.

       


      Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.