MySQL Enterprise Transparent Data Encryption (TDE) – provides at-rest encryption for physical InnoDB tablespace data files

With MySQL version 5.7.11 and up, Oracle continues to improve MySQL’s security features by adding MySQL Enterprise Transparent Data Encryption (TDE) for InnoDB tables stored in innodb_file_per_table tablespaces. This feature provides at-rest encryption for physical tablespace data files.

MySQL Enterprise TDE uses a two-tier encryption key architecture, consisting of a master encryption key and tablespace keys. When an InnoDB table is encrypted, its tablespace key is encrypted with the master key, and the encrypted value of the tablespace key is stored in the tablespace header. When accessing tablespace data, InnoDB transparently uses the master encryption key to decrypt the tablespace key and then uses the tablespace key to encrypt the data on write and decrypt it on read.

The master encryption key is stored in memory in the MySQL keyring. Using a key vault, this keyring is both persisted and protected. Neither the master key nor any of the tablespace keys is ever written to disk in plain text. When the user first enables TDE, when the server starts up, and when the master key is rotated, the master key is requested from (and updated in) the key vault. Databases and their tablespaces can get large, and had we chosen a single-tiered method, key rotation would require re-encrypting entire tablespaces. That process is slow, costly, and risky. With a two-tiered scheme, the decrypted version of a tablespace key never changes, but the master encryption key may be changed as required. This action is referred to as master-key rotation, and it re-encrypts all of the tablespace keys in one atomic operation very rapidly (in milliseconds).

There are various key vaults on the market, and there is also a common industry protocol supported by most of them: KMIP (Key Management Interoperability Protocol), an OASIS standard. Losing keys (whether by accident, mismanagement, or getting hacked) means you lose your data. To ensure your keys are available, secure, auditable, etc., you will want to use key-vault management software such as Oracle Key Vault. Oracle Key Vault enables you to quickly deploy encryption and other security solutions by centrally managing encryption keys not only for MySQL, but across many things, including (but not limited to) Oracle Wallets, Java Keystores, and credential files.

For a solution to use the TDE label, it must meet these requirements:
1 – Applications aren’t aware of encryption – it is provided seamlessly and doesn’t require any changes
2 – The data is encrypted at rest – so operating-system users, or anyone else with access to the files on the media or the file system, can't read the data (without a key)
3 – The keys are secured, available and protected – you can’t leave the key in the door (or under the mat)
4 – The keys are never written to the filesystem (in plain text)

MySQL does include tablespace encryption in the Community edition, but that is not TDE, as it doesn't meet requirements #3 and #4 – the keys aren't necessarily secured.

MySQL Enterprise TDE supports the Advanced Encryption Standard (AES) block-based encryption algorithm. It uses Electronic Codebook (ECB) block encryption mode for tablespace key encryption and Cipher Block Chaining (CBC) block encryption mode for data encryption. (from https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html)

In this post, I will show you how easy it is to use this new feature to encrypt your InnoDB tables (requires a licensed version of the MySQL Enterprise Edition MySQL 5.7.11 or higher).

Setting up MySQL TDE

Get Oracle Key Vault Credentials

First, you will need to set up an account in Oracle Key Vault and create an endpoint (the key storage that the keyring_okv plugin will talk to). Once you are done with that, you will have to create a configuration folder for keyring_okv. This folder will contain the credentials used to log into Oracle Key Vault and, of course, needs to be stored securely. Go to this link to find information on how to accomplish this:
https://dev.mysql.com/doc/mysql-security-excerpt/5.7/en/keyring-okv-plugin.html
As a result, you should have a configured endpoint in Oracle Key Vault and a configuration folder that looks like this:

# ls -l
total 8
-rw-rw-r-- 1 rob rob 427 Jul 7 18:49 okvclient.ora
drwxrwxr-x 2 rob rob 4096 Jul 7 18:49 ssl
# ls -l ssl
total 16
-rw-rw-r-- 1 rob rob 1200 Jul 7 18:49 CA.pem
-rw-rw-r-- 1 rob rob 1209 Jul 7 18:49 cert.pem
-rw-rw-r-- 1 rob rob 1027 Jul 7 18:49 cert_req.pem
-rw-rw-r-- 1 rob rob 1675 Jul 7 18:49 key.pem

From now on we will call this folder okv_enrollment.

Note: If you want to trial Oracle Key Vault it can be downloaded from the Oracle Software Delivery Cloud at https://edelivery.oracle.com/.

Install MySQL Server

Next, you will need to install MySQL 5.7.11 (or higher) and follow the post-installation steps, which include initializing the data directory.

Load keyring_okv plugin

To use the tablespace encryption feature, you will also need to load the keyring plugin (keyring_okv) by adding the following to your configuration file (under the [mysqld] section, let us assume that your path to the Oracle Key Vault folder is /home/user/okv_enrollment):

[mysqld]
early-plugin-load=keyring_okv.so
keyring_okv_conf_dir=/home/user/okv_enrollment

Before you attempt to create an encrypted table, you can check and make sure the plugin is active via this command:

mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS
-> WHERE plugin_name='keyring_okv';
    
+--------------+---------------+
| PLUGIN_NAME  | PLUGIN_STATUS |
+--------------+---------------+
| keyring_okv  | ACTIVE        |
+--------------+---------------+

Creating encrypted tables

To create a table with encryption, you only need to add ENCRYPTION = ‘Y’ to the CREATE TABLE statement. This example will create a new database and an encrypted table:

CREATE DATABASE scripting;
USE scripting;
CREATE TABLE `test_encryption` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `server_name` varchar(15) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1 ENCRYPTION = 'Y';

Technical side note (can be skipped :)
As this is the first encrypted table we are creating, InnoDB will first ask keyring_okv to generate a master key in Oracle Key Vault. From now on, this key will be used to encrypt tablespace keys. Next, InnoDB will ask Oracle Key Vault to generate a random key for encrypting the table test_encryption; keyring_okv uses Oracle Key Vault to ensure that this key is generated with high entropy. This random key is then encrypted with the master key and stored alongside the encrypted test_encryption table. The next time an encrypted table is created, only a new tablespace key is generated and the master key is reused (to encrypt the new tablespace key).
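
Reads and writes against the encrypted table work exactly as they do against an unencrypted one, which is what requirement #1 (application transparency) means in practice. For example (the inserted value is arbitrary):

INSERT INTO test_encryption (server_name) VALUES ('webserver01');
SELECT * FROM test_encryption;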

This encryption feature only works with InnoDB tables. If you try this with a MyISAM table, you will get an error:

CREATE TABLE `test_encryption2` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `server_name` varchar(15) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MYISAM AUTO_INCREMENT=1 DEFAULT CHARSET=latin1 ENCRYPTION = 'Y';
ERROR 1031 (HY000): Table storage engine for 'test_encryption2' doesn't have this option

You may view a list of all of the encrypted tables via this command:

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES
-> WHERE CREATE_OPTIONS like 'ENCRYPTION="Y"';
+--------------+-----------------+----------------+
| TABLE_SCHEMA | TABLE_NAME      | CREATE_OPTIONS |
+--------------+-----------------+----------------+
| scripting    | test_encryption | ENCRYPTION="Y" |
+--------------+-----------------+----------------+
1 row in set (0.20 sec)

Encrypting tables with different endpoints

It is possible to use different endpoints with one instance of MySQL. You can start the server with (say) endpoint1 and encrypt table test_encryption with it. Then you can restart the server with some other endpoint and use it to encrypt another table (for instance, test_more_encryption). However, test_encryption will no longer be accessible. You will see the following error:

mysql> select * from test_encryption;
ERROR 3185 (HY000): Can't find master key from keyring, please check keyring plugin is loaded.

Also, please note that although keyring_okv_conf_dir is settable at runtime, it should be changed with care. Please see Appendix 1, at the bottom of this article, for more information.
Overall, using multiple endpoints with one server is not encouraged and should be done with extra care.

Encrypting existing tables

To enable encryption for an existing InnoDB table, specify the ENCRYPTION option in an ALTER TABLE statement.

mysql> ALTER TABLE t1 ENCRYPTION='Y';

To disable encryption for an InnoDB table, set ENCRYPTION=’N’ using ALTER TABLE.

mysql> ALTER TABLE t1 ENCRYPTION='N';

One warning – “Plan appropriately when altering an existing table with the ENCRYPTION option. ALTER TABLE … ENCRYPTION operations rebuild the table using ALGORITHM=COPY. ALGORITHM=INPLACE is not supported.” (from https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html)

In other words, it may take some time to encrypt (or decrypt) an already-existing InnoDB table.

Rotating InnoDB master key

For security reasons, the InnoDB master key should be rotated periodically. This operation creates a new master key in Oracle Key Vault, re-encrypts all of the tablespace keys with the new master key, and deactivates the old master key. If the rotation is interrupted by a server crash or restart, it will continue on the next server startup. To start InnoDB master key rotation, use this command:

mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;

For more information, see this link (section “InnoDB Tablespace Encryption and Master Key Rotation”):
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html

Moving encrypted tables between servers

You can also move a copy of an encrypted table to another server, following these steps. There are some limitations you will need to review.

“When exporting a tablespace that is encrypted using the InnoDB tablespace encryption feature, InnoDB generates a .cfp file in addition to a .cfg metadata file. The .cfp file must be copied to the destination server together with the .cfg file and tablespace file before performing the ALTER TABLE … IMPORT TABLESPACE operation on the destination server. The .cfp file contains a transfer key and an encrypted tablespace key. On import, InnoDB uses the transfer key to decrypt the tablespace key. For related information, see Section 14.4.10, InnoDB Tablespace Encryption.” (from: https://dev.mysql.com/doc/refman/5.7/en/tablespace-copying.html#innodb-transportable-tablespace-limitations)
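
As a rough sketch of those steps (the table name is just a placeholder; see the manual pages above for the full procedure):

-- On the destination server: create the table with the same definition (including ENCRYPTION='Y'), then:
ALTER TABLE test_encryption DISCARD TABLESPACE;

-- On the source server: quiesce the table and generate the .cfg and .cfp files:
FLUSH TABLES test_encryption FOR EXPORT;
-- copy test_encryption.ibd, test_encryption.cfg and test_encryption.cfp to the destination datadir, then:
UNLOCK TABLES;

-- Back on the destination server:
ALTER TABLE test_encryption IMPORT TABLESPACE;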

You now know how to use encryption for your InnoDB tables.

Troubleshooting

  • In case keyring_okv cannot be accessed after restarting mysqld, you will see an error like this in the error log:
[ERROR] Plugin keyring_okv reported: 'Could not connect to the OKV server'
[ERROR] Plugin keyring_okv reported: 'keyring_okv initialization failure. Please check that the keyring_okv_conf_dir points to a readable directory and that the directory contains Oracle Key Vault configuration file and ssl materials. Please also check that Oracle Key Vault is up and running.

As the error indicates, the Oracle Key Vault server is either down or there is something wrong with the credentials directory. Make sure the MySQL server has privileges to access the configuration directory and that the keyring_okv_conf_dir variable is set correctly (you can change it at runtime). Also check that the structure of the configuration directory is correct (see the section “Get Oracle Key Vault Credentials” above).

  • In case you forget to set the keyring_okv_conf_dir variable during server startup, or you install the plugin at runtime, you will see the following warning in the log file:
[Warning] Plugin keyring_okv reported: 'For keyring_okv to be initialized, please point keyring_okv_conf_dir variable to a directory with Oracle Key Vault configuration file and ssl materials

As you can see, you have to set the keyring_okv_conf_dir variable.
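
A minimal sketch of fixing this at runtime looks like this (using the example configuration folder from earlier; if the plugin was already loaded at startup, only the SET GLOBAL statement is needed):

-- loading the keyring via early-plugin-load at startup is the recommended approach
INSTALL PLUGIN keyring_okv SONAME 'keyring_okv.so';
-- point the plugin at the Oracle Key Vault configuration folder
SET GLOBAL keyring_okv_conf_dir = '/home/user/okv_enrollment';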


Appendix 1. Things to consider when changing keyring_okv_conf_dir in runtime

When the server is started with keyring_okv, InnoDB fetches the master key from Oracle Key Vault and uses it to decrypt each tablespace key; it then caches the decrypted tablespace keys. Thus, the following works:

CREATE TABLE t1(c1 INT, c2 char(20)) ENCRYPTION="Y" ENGINE = InnoDB; 
INSERT INTO t1 VALUES(0, "aaaaa"); 
SELECT * FROM t1; 
# Changing the endpoint 
SET @@global.keyring_okv_conf_dir = '/home/user/okv_enrollment2';
# Table should still be accessible - its tablespace key is cached in memory. 
SELECT * FROM t1;

Although we have changed the endpoint, table t1 is still accessible because its decrypted tablespace key was cached. This may seem convenient, but you have to be aware that changing the endpoint at runtime does not implicitly block access to tables encrypted with previously loaded endpoints. Also, we can get into trouble when the endpoint we are switching to does not contain the master key created by the server instance we are using. Then the following is not possible:

mysql> create table `test_encryption_4` (`id` int(10) unsigned) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1 ENCRYPTION = 'Y';
ERROR 3185 (HY000): Can't find master key from keyring, please check keyring plugin is loaded.

This is because InnoDB tries to fetch the master key from the endpoint and fails. This is different from starting the server with keyring_okv – in that case, when InnoDB cannot find the master key in the keyring, it asks the keyring to generate a new one.


Thanks to Robert Golebiowski for help with this blog.


Here are various links regarding MySQL Enterprise TDE:

https://www.mysql.com/products/enterprise/tde.html
https://dev.mysql.com/doc/refman/5.7/en/faqs-tablespace-encryption.html
https://dev.mysql.com/doc/refman/5.7/en/keyring-okv-plugin.html
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html
https://dev.mysql.com/doc/refman/5.7/en/keyring-installation.html
https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_keyring_okv_conf_dir
http://www.mysql.com/news-and-events/web-seminars/mysql-security-transparent-data-encryption/

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

Visit http://2044thebook.com for more information.

Upgrade MySQL to a new version with a fresh installation & use shell scripts and mysqldump to reload your data

There are several ways to upgrade MySQL. In this post, we will use a combination of shell scripts and the mysqldump application to export our MySQL data, and then re-import it back into the upgraded version of MySQL.

In this example, we will be doing a minor version upgrade. We will be going from 5.6.17 to 5.6.19. This method may not work if you are upgrading from one major release to another – from 5.1 to 5.5, or 5.5 to 5.6. You will want to check each version and review the new features/functions and also what features/functions have been deprecated. We are also assuming that no one will be using the database during the time it takes for us to do the upgrade.

If you want to upgrade from a version that is more than one major release apart from your current version, then you will want to upgrade to each successive version. For example, if you want to upgrade from 5.0 to 5.6, you will want to upgrade from 5.0 to 5.1, then 5.1 to 5.5, and then 5.5 to 5.6.

You don’t have to export all of your data when you upgrade MySQL. There are ways of upgrading without doing anything to your data. But in this post, I will be exporting the data and re-importing it, for a fresh installation. I don’t have that much data, so I don’t mind doing the export and import. If you have a lot of data, you might want to consider other options.

To get an idea of the size of your database(s), here is a quick script that you can use:

SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" 
FROM information_schema.TABLES GROUP BY table_schema ; 
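
If you would also like to see which individual tables are the largest, a similar query (illustrative only) lists the top ten:

SELECT table_schema, table_name,
       ROUND(( data_length + index_length ) / 1024 / 1024, 2) "Table Size in MB"
FROM information_schema.TABLES
ORDER BY ( data_length + index_length ) DESC
LIMIT 10;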

When I perform an export/import, I like to export each database as a separate mysqldump file, and then also export all of the databases together in one large file. By exporting/importing the individual databases, if you have an error importing one of the database dump files, you can isolate the error to a single database. It is much easier to fix the error in one smaller data dump file than with a larger all-inclusive dump file.

I am also going to create some simple shell scripts to help me create the commands that I need to make this task much easier. First, you will want to create a directory to store all of the scripts and dump files. Do all of your work inside that directory.

Next, I want to get a list of all of my databases. I will log into mysql, and then issue the show databases; command: (which is the same command as: select schema_name from information_schema.schemata;)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| 12thmedia          |
| cbgc               |
| comicbookdb        |
| coupons            |
| healthcheck        |
| innodb_memcache    |
| landwatch          |
| laurelsprings      |
| ls_directory       |
| mem                |
| mysql              |
| performance_schema |
| protech            |
| scripts            |
| stacy              |
| storelist          |
| test               |
| testcert           |
| tony               |
| twtr               |
| watchdb            |
+--------------------+
22 rows in set (1.08 sec)

I can then just highlight and copy the list of databases, and put that list into a text file named “list.txt“. I do not want to include these databases in my export:

information_schema
mysql
performance_schema
test

However, I will export the mysql database later. You will need to check the MySQL manual to make sure that there haven't been any changes to the mysql system tables from one version to the next.

I will need to manually remove those databases from my list.txt file. I then want to remove all of the spaces and pipe symbols from the text file. Instead of using spaces in a database name, I prefer to use an underscore character “_”; these scripts assume that your database names do not contain any spaces.

If you know how to use the vi editor, you can do a substitution for the pipes and spaces with these commands:

:%s/ //g
:%s/|//g
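
If you prefer not to use vi, a single sed command does the same cleanup (a minimal equivalent of the vi substitutions above; this assumes GNU or BSD sed, and -i.bak edits list.txt in place while keeping a backup copy):

# remove all pipe symbols and spaces from list.txt
sed -i.bak -e 's/|//g' -e 's/ //g' list.txt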

Otherwise, you will want to use another text editor and manually edit the list to remove the spaces and pipe symbols. Your finished list.txt file should look like this:

12thmedia
cbgc
comicbookdb
coupons
healthcheck
innodb_memcache
landwatch
laurelsprings
ls_directory
mem
protech
scripts
stacy
storelist
testcert
tony
twtr
watchdb

You can then create a simple shell script to help create your mysqldump commands – one command for each database. You will want to create this script and the other scripts in the directory you created earlier. Name the script export.sh. You can also change the mysqldump options to meet your needs. I am using GTIDs for replication, so I want to use the option --set-gtid-purged=OFF. You will also want to change the value of my password my_pass to your mysql password. You can also skip including the password by using the -p option, and just enter the password each time you run the mysqldump command.

# export.sh
# script to create the database export commands
k=""
for i in `cat list.txt`
do

echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $i > "$i"_backup.sql"

k="$k $i"

done

# Optional - export the entire database
# use the file extension of .txt so that your script won't import it later
echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $k > all_db_backup.txt"

For the individual databases, I am using the suffix of .sql. For the dump file that contains all of the databases, I am using the suffix of .txt – as I use a wildcard search later to get a list of the dump files, and I don't want to import the one dump file that contains all of the databases.

Now you can run the export.sh script to create a list of your mysqldump commands, and you are going to direct the output into another shell script named export_list.sh.

# sh export.sh > export_list.sh

We can now take a look at what is in the export_list.sh file:

# cat export_list.sh
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases cbgc > cbgc_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases comicbookdb > comicbookdb_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases coupons > coupons_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases healthcheck > healthcheck_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases innodb_memcache > innodb_memcache_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases landwatch > landwatch_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases laurelsprings > laurelsprings_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases ls_directory > ls_directory_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases mem > mem_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases protech > protech_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases scripts > scripts_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases stacy > stacy_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases storelist > storelist_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases testcert > testcert_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases tony > tony_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases twtr > twtr_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases watchdb > watchdb_backup.sql

mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases  12thmedia cbgc comicbookdb coupons healthcheck innodb_memcache landwatch laurelsprings ls_directory mem protech scripts stacy storelist testcert tony twtr watchdb > all_db_backup.txt

Now you have created a list of mysqldump commands that you can execute to dump all of your databases. You can now go ahead and execute your mysqldump commands by running the export_list.sh script:

# sh export_list.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

The message “Warning: Using a password on the command line interface can be insecure.” is shown because you included the value for “–password“. If you don’t want to put your password on the command line, just change that option to “-p“, and you will have to manually enter your MySQL root user’s password after each mysqldump command.
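
Another way to keep the password off of the command line entirely is the mysql_config_editor utility (included with MySQL 5.6 and later), which stores the credentials in an encrypted login-path file. The login-path name below ("local") is just an example:

# store the credentials once (you will be prompted for the password)
mysql_config_editor set --login-path=local --host=localhost --user=root --password
# then replace "-uroot --password=my_pass" in the scripts with --login-path=local, for example:
mysqldump --login-path=local --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql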

Here is a list of the dump files that were produced:

# ls -l
total 21424
-rw-r--r--  1 root  staff    26690 Aug  1 16:25 12thmedia_backup.sql
-rw-r--r--  1 root  staff  5455275 Aug  1 16:26 all_db_backup.txt
-rw-r--r--  1 root  staff  1746820 Aug  1 16:25 cbgc_backup.sql
-rw-r--r--  1 root  staff   492943 Aug  1 16:25 comicbookdb_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 coupons_backup.sql
-rw-r--r--  1 root  staff     3366 Aug  1 16:25 export_list.sh
-rw-r--r--  1 root  staff     1077 Aug  1 16:25 healthcheck_backup.sql
-rw-r--r--  1 root  staff     3429 Aug  1 16:25 innodb_memcache_backup.sql
-rw-r--r--  1 root  staff  1815839 Aug  1 16:25 landwatch_backup.sql
-rw-r--r--  1 root  staff   642965 Aug  1 16:25 laurelsprings_backup.sql
-rw-r--r--  1 root  staff   660254 Aug  1 16:25 ls_directory_backup.sql
-rw-r--r--  1 root  staff     1037 Aug  1 16:25 mem_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 protech_backup.sql
-rw-r--r--  1 root  staff     2889 Aug  1 16:25 scripts_backup.sql
-rw-r--r--  1 root  staff    11107 Aug  1 16:25 stacy_backup.sql
-rw-r--r--  1 root  staff     4002 Aug  1 16:25 storelist_backup.sql
-rw-r--r--  1 root  staff     1062 Aug  1 16:25 testcert_backup.sql
-rw-r--r--  1 root  staff     4467 Aug  1 16:25 tony_backup.sql
-rw-r--r--  1 root  staff     1042 Aug  1 16:25 twtr_backup.sql
-rw-r--r--  1 root  staff    52209 Aug  1 16:25 watchdb_backup.sql

You will now want to dump your mysql database, so you don't have to recreate all of the MySQL system information (including the users, passwords, and privileges) after the new install.

mysqldump -uroot --password=my_pass --set-gtid-purged=OFF mysql > mysql_user_backup.txt

I am once again using the .txt suffix for this file.

After you execute the above command, make sure that the dump file was created:

# ls -l mysql_user_backup.txt
-rw-r--r--  1 root  staff  9672 Aug  1 16:32 mysql_user_backup.txt

We have now finished exporting all of our data, including the mysql database data. You will need to shut down MySQL. You may use mysqladmin to shut down your database, or here is a link on ways to shut down MySQL.

# mysqladmin -uroot --password=my_pass shutdown
Warning: Using a password on the command line interface can be insecure.

Before continuing, you might want to check to make sure that the mysqld process isn’t still active.

# ps -ef|grep mysqld
    0 18380 17762   0   0:00.00 ttys002    0:00.00 grep mysqld

You are now going to want to change the name of your mysql directory. This will give you access to the old directory in case the upgrade fails. For my OS (Mac OS 10.9), my MySQL home directory is a symbolic link to another directory that contains the actual MySQL data. All I have to do is to remove the symbolic link. A new symbolic link will be created with the new install. Otherwise, just use the mv command to rename your old MySQL directory.

# cd /usr/local/
# ls -ld mysql* 
lrwxr-xr-x   1 root  wheel   36 Aug  9  2013 mysql -> mysql-advanced-5.6.17-osx10.6-x86_64
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

All I have to do is to remove the link, and the MySQL directory will still be there:

# rm mysql
# ls -ld mysql* 
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64
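
If your MySQL home directory is not a symbolic link, you can simply rename the directory instead (the path below is just an example):

# mv /usr/local/mysql /usr/local/mysql-old-5.6.17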

Now I am ready to install the new version of MySQL. I won’t cover the installation process, but here is the link to the installation page.

Tip: After you have installed MySQL, don’t forget to run this script from your MySQL home directory. This will install your mysql database tables. Otherwise, you will get an error when you try to start the mysqld process.

# ./scripts/mysql_install_db

Now you can start the mysqld process. See this page if you don’t know how to start MySQL.

You can test to see if the new installation of MySQL is running by either checking the process table, or logging into mysql. With a fresh install of 5.6, you should not have to include a user name or password.

Note: (Future versions of MySQL may automatically create a random root password and put it in your data directory. You will then need to use that password to login to MySQL for the first time. Check the user’s manual for any MySQL versions beyond 5.6.)

# mysql
Welcome to the mysql monitor.  Commands end with ; or \g.
Your mysql connection id is 3
....

mysql>

Now that MySQL is up and running, leave the mysql terminal window open, and open another terminal window so you can import your mysql table information from your dump file:

# mysql < /users/tonydarnell/mysql_2014_0731/mysql_user_backup.txt

You won't be able to login with your old user names and passwords until you execute the flush privileges command. So, in your other terminal window with the mysql prompt:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Open another terminal window and see if you can login with your old mysql user name and password:

# mysql -uroot -p
Enter password: 
Welcome to the mysql monitor.  Commands end with ; or \g.
Your mysql connection id is 3
....

mysql>

You can then look at the user names and passwords in the mysql.user table:

mysql> select user, host, password from mysql.user order by user, host;
+----------------+---------------+-------------------------------------------+
| user           | host          | password                                  |
+----------------+---------------+-------------------------------------------+
| root           | 127.0.0.1     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.2   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.5   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.50  | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | localhost     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
+----------------+---------------+-------------------------------------------+
5 rows in set (0.00 sec)


OPTIONAL:
Since I am using GTID’s for replication, I can check to see how many transactions have been completed, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000005
         Position: 644455
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: e1eb3f38-18da-11e4-aa44-0a1a64a61679:1-124
1 row in set (0.00 sec)

We are now ready to import the database dump files. We can use this script to create the import commands. Copy this into a text file named import.sh:

# import.sh
# script to import all of the export files
# run this script in the same directory as the exported dump files
#
> import_files.sh
directory=`pwd`
for file in `ls *sql`
do

# skip any dump file whose name contains .txt (the all-databases dump and the mysql schema dump)
if [[ $(echo $file | grep -c '.txt') != 0 ]];then

echo "# found a .txt dump file - do nothing"

else

echo "mysql -uroot -p"my_pass"  < $directory/$file"
echo "mysql -uroot -p"my_pass"  < $directory/$file" >> import_files.sh

fi

done

Then run the import.sh script. The script will print the output to the terminal window as well as into a new script file named import_files.sh.

# sh import.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

Look at the contents of the new script file – import_files.sh – to make sure that it contains all of the database files. You will use this file to help you import your dump files.

# cat import_files.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

WARNING: Be sure that this script file does not contain the main dump file or the mysql user’s file that we created.
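
A quick sanity check (illustrative) is to confirm that none of the .txt dump files made it into the script:

# this should return nothing
grep txt import_files.sh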


I was exporting and importing eighteen (18) database files, so I can also check the line count of the import_files.sh script to make sure it matches:

# wc -l import_files.sh
      18 import_files.sh

I am now ready to import my files.


Optional: add the -v for verbose mode – sh -v import_files.sh


# sh import_files.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

Your databases should now be imported into your new instance of MySQL. You can always re-run the database-size query from earlier to make sure that the databases are the same size.


OPTIONAL:
Since I am using GTID’s for replication, I can check to see how many transactions have been completed after importing the dump files, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000003
         Position: 16884001
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: cc68d008-18f3-11e4-aae6-470d6cf89709:1-43160
1 row in set (0.00 sec)

Your new and fresh installation of MySQL should be ready to use.

NOTE: A thank-you to Daniel Van Eeden for pointing out a mistake that I had made.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

 

Visit http://2044thebook.com for more information.

Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 2 of 2

In part one of this post, I gave you a couple examples of how to backup your MySQL databases using mysqldump. In part two, I will show you how to use the MySQL Enterprise Backup (MEB) to create a full and partial backup.


MySQL Enterprise Backup provides enterprise-grade backup and recovery for MySQL. It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris. To learn more, you may download a whitepaper on MEB.

MySQL Enterprise Backup delivers:

  • NEW! Continuous monitoring – Monitor the progress and disk space usage
  • “Hot” Online Backups – Backups take place entirely online, without interrupting MySQL transactions
  • High Performance – Save time with faster backup and recovery
  • Incremental Backup – Backup only data that has changed since the last backup
  • Partial Backup – Target particular tables or tablespaces
  • Compression – Cut costs by reducing storage requirements up to 90%
  • Backup to Tape – Stream backup to tape or other media management solutions
  • Fast Recovery – Get servers back online and create replicated servers
  • Point-in-Time Recovery (PITR) – Recover to a specific transaction
  • Partial restore – Recover targeted tables or tablespaces
  • Restore to a separate location – Rapidly create clones for fast replication setup
  • Reduce Failures – Use a proven high quality solution from the developers of MySQL
  • Multi-platform – Backup and Restore on Linux, Windows, Mac & Solaris

(from http://www.mysql.com/products/enterprise/backup.html)

    While mysqldump is free to use, MEB is part of MySQL’s Enterprise Edition (EE) – so you need a license to use it. But if you are using MySQL in a production environment, you might want to look at EE, as:

    MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.
    (from: http://www.mysql.com/products/enterprise/)

    Before using MEB and backing up your database for the first time, you will need some information:

    Information to gather – Where to Find It – How It Is Used

    • Path to MySQL configuration file – Default system locations, hardcoded application default locations, or from the --defaults-file option in the mysqld startup script. – This is the preferred way to convey database configuration information to the mysqlbackup command, using the --defaults-file option. When connection and data layout information is available from the configuration file, you can skip most of the other choices listed below.
    • MySQL port – MySQL configuration file or mysqld startup script. – Used to connect to the database instance during backup operations. Specified via the --port option of mysqlbackup. --port is not needed if available from the MySQL configuration file. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions.
    • Path to MySQL data directory – MySQL configuration file or mysqld startup script. – Used to retrieve files from the database instance during backup operations, and to copy files back to the database instance during restore operations. Automatically retrieved from the database connection for hot and warm backups. Taken from the MySQL configuration file for cold backups.
    • ID and password of privileged MySQL user – You record this during installation of your own databases, or get it from the DBA when backing up databases you do not own. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions. For cold backups, you log in as an administrative user. – Specified via the --password option of mysqlbackup. Prompted from the terminal if the --password option is present without the password argument.
    • Path under which to store backup data – You choose this. See Section 3.1.3 of the MySQL online manual, “Designate a Location for Backup Data”, for details. – By default, this directory must be empty for mysqlbackup to write data into it, to avoid overwriting old backups or mixing up data from different backups. Use the --with-timestamp option to automatically create a subdirectory with a unique name when storing multiple sets of backup data under the same main directory.
    • Owner and permission information for backed-up files (for Linux, Unix, and OS X systems) – In the MySQL data directory. – If you do the backup using a different OS user ID or a different umask setting than applies to the original files, you might need to run commands such as chown and chmod on the backup data. See Section A.1 of the MySQL online manual, “Limitations of mysqlbackup Command”, for details.
    • Size of InnoDB redo log files – Calculated from the values of the innodb_log_file_size and innodb_log_files_in_group configuration variables. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation for redo data dictate how often you must perform incremental backups.
    • Rate at which redo data is generated – Calculated from the values of the InnoDB logical sequence number at different points in time. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation for redo data dictate how often you must perform incremental backups.

      For most backup operations, the mysqlbackup command connects to the MySQL server through the --user and --password options. If you aren't going to use the root user, then you will need to create a separate user. Follow these instructions for setting the proper permissions (an illustrative example is shown below).
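
      Here is an illustrative sketch of creating such a backup user. The account name, password, and exact grants are examples only; check the MySQL Enterprise Backup manual for the precise privilege list required by your MEB version:

      -- example only: adjust the account name, password, and grants for your MEB version
      CREATE USER 'mysqlbackup'@'localhost' IDENTIFIED BY 'backup_password';
      GRANT RELOAD, PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mysqlbackup'@'localhost';
      GRANT CREATE, INSERT, DROP, UPDATE ON mysql.backup_progress TO 'mysqlbackup'@'localhost';
      GRANT CREATE, INSERT, SELECT, DROP, UPDATE ON mysql.backup_history TO 'mysqlbackup'@'localhost';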

      All backup-related operations either create new files or reference existing files underneath a specified directory that holds backup data. Choose this directory in advance, on a file system with sufficient storage. (It could even be remotely mounted from a different server.) You specify the path to this directory with the --backup-dir option for many invocations of the mysqlbackup command.

      Once you establish a regular backup schedule with automated jobs, it is preferable to keep each backup within a timestamped subdirectory underneath the main backup directory. To make the mysqlbackup command create these subdirectories automatically, specify the --with-timestamp option each time you run mysqlbackup.

      For one-time backup operations, for example when cloning a database to set up a replication slave, you might specify a new directory each time, or specify the --force option of mysqlbackup to overwrite older backup files.

      If you haven’t downloaded and installed mysqlbackup, you may download it from edelivery.oracle.com (registration is required). Install the MySQL Enterprise Backup product on each database server whose contents you intend to back up. You perform all backup and restore operations locally, by running the mysqlbackup command on the same server as the MySQL instance.

      Now that we have gathered all of the required information and installed mysqlbackup, let's run a simple and easy backup of the entire database. I installed MEB in my /usr/local directory, so I am including the full path of mysqlbackup. I am using the backup-and-apply-log option, which combines the --backup and the --apply-log options into one. The --backup option performs the initial phase of a backup. The second phase is performed later by running mysqlbackup again with the --apply-log option, which brings the InnoDB tables in the backup up-to-date, including any changes made to the data while the backup was running.

      $ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log
      MySQL Enterprise Backup version 3.12.0 [2014/11/12] 
      Copyright (c) 2003, 2014, Oracle and/or its affiliates. All Rights Reserved.
      
       mysqlbackup: INFO: Starting with following command line ...
       /usr/local/meb/bin/mysqlbackup --user=root --password 
              --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log 
      
      Enter password: 
       mysqlbackup: INFO: MySQL server version is '5.6.9-rc-log'.
       mysqlbackup: INFO: Got some server configuration information from running server.
      
      IMPORTANT: Please check that mysqlbackup run completes successfully.
                 At the end of a successful 'backup-and-apply-log' run mysqlbackup
                 prints "mysqlbackup completed OK!".
      
      --------------------------------------------------------------------
                             Server Repository Options:
      --------------------------------------------------------------------
        datadir = /usr/local/mysql/data/
        innodb_data_home_dir = /usr/local/mysql/data
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /usr/local/mysql/data
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5242880
        innodb_page_size = 16384
        innodb_checksum_algorithm = innodb
        innodb_undo_directory = /usr/local/mysql/data/
        innodb_undo_tablespaces = 0
        innodb_undo_logs = 128
      
      --------------------------------------------------------------------
                             Backup Config Options:
      --------------------------------------------------------------------
        datadir = /Users/tonydarnell/hotbackups/datadir
        innodb_data_home_dir = /Users/tonydarnell/hotbackups/datadir
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/datadir
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5242880
        innodb_page_size = 16384
        innodb_checksum_algorithm = innodb
        innodb_undo_directory = /Users/tonydarnell/hotbackups/datadir
        innodb_undo_tablespaces = 0
        innodb_undo_logs = 128
      
       mysqlbackup: INFO: Unique generated backup id for this is 13742482113579320
      
       mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
      130719 11:36:53 mysqlbackup: INFO: Full Backup operation starts with following threads
      		1 read-threads    6 process-threads    1 write-threads
      130719 11:36:53 mysqlbackup: INFO: System tablespace file format is Antelope.
      130719 11:36:53 mysqlbackup: INFO: Starting to copy all innodb files...
      130719 11:36:53 mysqlbackup: INFO: Copying /usr/local/mysql/data/ibdata1 (Antelope file format).
      130719 11:36:53 mysqlbackup: INFO: Found checkpoint at lsn 135380756.
      130719 11:36:53 mysqlbackup: INFO: Starting log scan from lsn 135380480.
      130719 11:36:53 mysqlbackup: INFO: Copying log...
      130719 11:36:54 mysqlbackup: INFO: Log copied, lsn 135380756.
      
      (I have truncated some of the database and table output to save space)
      .....
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_index_stats.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_table_stats.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_master_info.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_relay_log_info.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_worker_info.ibd (Antelope file format).
      .....
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t1.ibd (Antelope file format).
      130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t3.ibd (Antelope file format).
      .....
      130719 11:36:57 mysqlbackup: INFO: Copying /usr/local/mysql/data/watchdb/watches.ibd (Antelope file format).
      .....
      130719 11:36:57 mysqlbackup: INFO: Completing the copy of innodb files.
      130719 11:36:58 mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
      130719 11:36:58 mysqlbackup: INFO: Starting to lock all the tables...
      130719 11:36:58 mysqlbackup: INFO: All tables are locked and flushed to disk
      130719 11:36:58 mysqlbackup: INFO: Opening backup source directory '/usr/local/mysql/data/'
      130719 11:36:58 mysqlbackup: INFO: Starting to backup all non-innodb files in 
      	subdirectories of '/usr/local/mysql/data/'
      .....
      130719 11:36:58 mysqlbackup: INFO: Copying the database directory 'comicbookdb'
      .....
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'mysql'
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'performance_schema'
      .....
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'test'
      .....
      130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'watchdb'
      130719 11:36:59 mysqlbackup: INFO: Completing the copy of all non-innodb files.
      130719 11:37:00 mysqlbackup: INFO: A copied database page was modified at 135380756.
                (This is the highest lsn found on page)
                Scanned log up to lsn 135384397.
                Was able to parse the log up to lsn 135384397.
                Maximum page number for a log record 375
      130719 11:37:00 mysqlbackup: INFO: All tables unlocked
      130719 11:37:00 mysqlbackup: INFO: All MySQL tables were locked for 1.589 seconds.
      130719 11:37:00 mysqlbackup: INFO: Full Backup operation completed successfully.
      130719 11:37:00 mysqlbackup: INFO: Backup created in directory '/Users/tonydarnell/hotbackups'
      130719 11:37:00 mysqlbackup: INFO: MySQL binlog position: filename mysql-bin.000013, position 85573
      
      -------------------------------------------------------------
         Parameters Summary         
      -------------------------------------------------------------
         Start LSN                  : 135380480
         End LSN                    : 135384397
      -------------------------------------------------------------
      
       mysqlbackup: INFO: Creating 14 buffers each of size 65536.
      130719 11:37:00 mysqlbackup: INFO: Apply-log operation starts with following threads
      		1 read-threads    1 process-threads
      130719 11:37:00 mysqlbackup: INFO: ibbackup_logfile's creation parameters:
                start lsn 135380480, end lsn 135384397,
                start checkpoint 135380756.
       mysqlbackup: INFO: InnoDB: Starting an apply batch of log records to the database...
      InnoDB: Progress in percent: 0 1 .... 99 Setting log file size to 5242880
      Setting log file size to 5242880
      130719 11:37:00 mysqlbackup: INFO: We were able to parse ibbackup_logfile up to
                lsn 135384397.
       mysqlbackup: INFO: Last MySQL binlog file position 0 85573, file name mysql-bin.000013
      130719 11:37:00 mysqlbackup: INFO: The first data file is '/Users/tonydarnell/hotbackups/datadir/ibdata1'
                and the new created log files are at '/Users/tonydarnell/hotbackups/datadir'
      130719 11:37:01 mysqlbackup: INFO: Apply-log operation completed successfully.
      130719 11:37:01 mysqlbackup: INFO: Full backup prepared for recovery successfully.
      
      mysqlbackup completed OK!
      

      Now, I can take a look at the backup file that was created:

      root@macserver01: $ pwd
      /Users/tonydarnell/hotbackups
      root@macserver01: $ ls -l
      total 8
      -rw-r--r--   1 root  staff  351 Jul 19 11:36 backup-my.cnf
      drwx------  21 root  staff  714 Jul 19 11:37 datadir
      drwx------   6 root  staff  204 Jul 19 11:37 meta
      $ ls -l datadir
      total 102416
      drwx------   5 root  staff       170 Jul 19 11:36 comicbookdb
      -rw-r-----   1 root  staff   5242880 Jul 19 11:37 ib_logfile0
      -rw-r-----   1 root  staff   5242880 Jul 19 11:37 ib_logfile1
      -rw-r--r--   1 root  staff      4608 Jul 19 11:37 ibbackup_logfile
      -rw-r--r--   1 root  staff  41943040 Jul 19 11:37 ibdata1
      drwx------  88 root  staff      2992 Jul 19 11:36 mysql
      drwx------  55 root  staff      1870 Jul 19 11:36 performance_schema
      drwx------   3 root  staff       102 Jul 19 11:36 test
      drwx------  30 root  staff      1020 Jul 19 11:36 testcert
      drwx------  19 root  staff       646 Jul 19 11:36 watchdb
      
      root@macserver01: $ ls -l meta
      total 216
      -rw-r--r--  1 root  staff  90786 Jul 19 11:37 backup_content.xml
      -rw-r--r--  1 root  staff   5746 Jul 19 11:36 backup_create.xml
      -rw-r--r--  1 root  staff    265 Jul 19 11:37 backup_gtid_executed.sql
      -rw-r--r--  1 root  staff    321 Jul 19 11:37 backup_variables.txt
      

      As you can see, the backup was created in /Users/tonydarnell/hotbackups. If I wanted to have a unique folder for this backup, I can use the --with-timestamp option.

      The --with-timestamp option places the backup in a subdirectory created under the directory you specified above. The name of the backup subdirectory is formed from the date and the clock time of the backup run.

      I will run the same backup command again, but with the --with-timestamp option:

      (I am not going to duplicate the entire output – but I will only show you the output where it creates the sub-directory under /Users/tonydarnell/hotbackups)

      $ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log --with-timestamp
      ......
      130719 11:49:54 mysqlbackup: INFO: The first data file is '/Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/ibdata1'
                and the new created log files are at '/Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir'
      130719 11:49:54 mysqlbackup: INFO: Apply-log operation completed successfully.
      130719 11:49:54 mysqlbackup: INFO: Full backup prepared for recovery successfully.
      
      mysqlbackup completed OK!
      

      So, I ran the backup again to get a unique directory. Instead of the backup files/directories being placed in /Users/tonydarnell/hotbackups, it created a sub-directory with a timestamp for the directory name:

      $ pwd
      /Users/tonydarnell/hotbackups
      root@macserver01: $ ls -l
      total 0
      drwx------  5 root  staff  170 Jul 19 11:49 2015-05-19_11-49-48
      $ ls -l 2015-05-19_11-49-48
      total 8
      -rw-r--r--   1 root  staff  371 Jul 19 11:49 backup-my.cnf
      drwx------  21 root  staff  714 Jul 19 11:49 datadir
      drwx------   6 root  staff  204 Jul 19 11:49 meta
      

      Note: If you don’t use the backup-and-apply-log option you will need to read this: Immediately after the backup job completes, the backup files might not be in a consistent state, because data could be inserted, updated, or deleted while the backup is running. These initial backup files are known as the raw backup.

      You must update the backup files so that they reflect the state of the database corresponding to a specific InnoDB log sequence number. (The same kind of operation as crash recovery.) When this step is complete, these final files are known as the prepared backup.

      During the backup, mysqlbackup copies the accumulated InnoDB log to a file called ibbackup_logfile. This log file is used to “roll forward” the backed-up data files, so that every page in the data files corresponds to the same log sequence number of the InnoDB log. This phase also creates new ib_logfiles that correspond to the data files.

      The mysqlbackup option for turning a raw backup into a prepared backup is apply-log. You can run this step on the same database server where you did the backup, or transfer the raw backup files to a different system first, to limit the CPU and storage overhead on the database server.

      Note: Since the apply-log operation does not modify any of the original files in the backup, nothing is lost if the operation fails for some reason (for example, insufficient disk space). After fixing the problem, you can safely retry the apply-log operation by specifying the --force option, which allows the data and log files created by the failed apply-log operation to be overwritten.

      For simple backups (without compression or incremental backup), you can combine the initial backup and the apply-log step using the option backup-and-apply-log.
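
      If you prefer to run the two phases separately (for example, to run apply-log on a different machine), the commands look roughly like this, using the same backup directory as above (illustrative):

      $ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup
      $ /usr/local/meb/bin/mysqlbackup --backup-dir=/Users/tonydarnell/hotbackups apply-log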

      MEB 3.9 and later creates two .cnf files based on the output of SHOW GLOBAL VARIABLES: server-my.cnf (non-default values) and server-all.cnf (all values).

Now that we have a completed backup, we are going to copy the backup files and the my.cnf file over to a different server to restore the databases. We will be using a server that was set up as a slave of the server where the backup was taken. If you need to restore the backup to the same server, you will need to refer to the mysqlbackup manual. I copied the backup files as well as the my.cnf file to the new server (a sketch of the copy commands appears after the listing below):

      # pwd
      /Users/tonydarnell/hotbackups
      # ls -l
      total 16
      drwxrwxrwx  5 tonydarnell  staff   170 Jul 19 15:38 2015-05-19_11-49-48
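
For reference, the copy itself can be done with scp (or rsync). This is only a sketch: "newserver" is a made-up hostname, and the paths are the ones used in this post.

# from the original server: copy the timestamped backup directory and the my.cnf file
$ scp -rp /Users/tonydarnell/hotbackups/2015-05-19_11-49-48 tonydarnell@newserver:/Users/tonydarnell/hotbackups/
$ scp -p /etc/my.cnf tonydarnell@newserver:/tmp/my.cnf

# on the new server: put the configuration file in place before restoring
# (save a copy of the existing my.cnf first if there is one)
$ sudo cp /tmp/my.cnf /etc/my.cnf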
      

On the new server (where I will restore the data), I shut down the mysqld process (mysqladmin -uroot -p shutdown) and copied the my.cnf file to the proper directory. Now I can restore the database to the new server, using the copy-back option. The copy-back option requires the database server to already be shut down; it then copies the data files, logs, and other backed-up files from the backup directory back to their original locations and performs any required postprocessing on them.

      # /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/Users/tonydarnell/hotbackups/2015-05-19_11-49-48 copy-back
      MySQL Enterprise Backup version 3.12.0 [2014/11/12] 
      Copyright (c) 2003, 2014, Oracle and/or its affiliates. All Rights Reserved.
      
       mysqlbackup: INFO: Starting with following command line ...
       /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf 
              --backup-dir=/Users/tonydarnell/hotbackups/2015-05-19_11-49-48 
              copy-back 
      
      IMPORTANT: Please check that mysqlbackup run completes successfully.
                 At the end of a successful 'copy-back' run mysqlbackup
                 prints "mysqlbackup completed OK!".
      
      --------------------------------------------------------------------
                             Server Repository Options:
      --------------------------------------------------------------------
        datadir = /usr/local/mysql/data
        innodb_data_home_dir = /usr/local/mysql/data
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /usr/local/mysql/data
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5M
        innodb_page_size = Null
        innodb_checksum_algorithm = innodb
      
      --------------------------------------------------------------------
                             Backup Config Options:
      --------------------------------------------------------------------
        datadir = /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir
        innodb_data_home_dir = /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir
        innodb_data_file_path = ibdata1:40M:autoextend
        innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir
        innodb_log_files_in_group = 2
        innodb_log_file_size = 5242880
        innodb_page_size = 16384
        innodb_checksum_algorithm = innodb
        innodb_undo_directory = /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir
        innodb_undo_tablespaces = 0
        innodb_undo_logs = 128
      
       mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
      130719 15:54:41 mysqlbackup: INFO: Copy-back operation starts with following threads
      		1 read-threads    1 write-threads
      130719 15:54:41 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/ibdata1.
      .....
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/comicbookdb/comics.ibd.
      .....
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/mysql/innodb_index_stats.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/mysql/innodb_table_stats.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/mysql/slave_master_info.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/mysql/slave_relay_log_info.ibd.
      130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/mysql/slave_worker_info.ibd.
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2015-05-19_11-49-48/datadir/watchdb/watches.ibd.
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'comicbookdb'
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'mysql'
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'performance_schema'
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'test'
      .....
      130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'watchdb'
      130719 15:54:43 mysqlbackup: INFO: Completing the copy of all non-innodb files.
      130719 15:54:43 mysqlbackup: INFO: Copying the log file 'ib_logfile0'
      130719 15:54:43 mysqlbackup: INFO: Copying the log file 'ib_logfile1'
      130719 15:54:44 mysqlbackup: INFO: Copy-back operation completed successfully.
      130719 15:54:44 mysqlbackup: INFO: Finished copying backup files to '/usr/local/mysql/data'
      
      mysqlbackup completed OK!
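
One thing to check before restarting the server: since I ran copy-back as root, the restored files under /usr/local/mysql/data are owned by root. Depending on which account your mysqld process runs as, you may need to change the ownership first. A sketch follows; the mysql user and group and the mysqld_safe path are assumptions that will vary by platform and install method.

# give the restored data directory back to the account that runs mysqld
# (commonly mysql:mysql; on macOS package installs it may be _mysql)
$ sudo chown -R mysql:mysql /usr/local/mysql/data

# start the server (your platform may use a service manager instead)
$ sudo /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf &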
      

I can now restart MySQL. I have a very small database (less than 50 megabytes), and it took less than a minute to restore it. If I had to rebuild my database using mysqldump, it would take a lot longer. If you have a very large database, the difference between using mysqlbackup and mysqldump could be measured in hours. For example, a 32-gigabyte database with 33 tables takes about eight minutes to restore with mysqlbackup. Restoring the same database from a mysqldump file takes over two hours.

An easy way to check whether the databases match (assuming that I haven't added any new records to the original databases, which I haven't) is to use one of the MySQL Utilities, mysqldbcompare. I wrote about how to do this in an earlier blog about using it to test two replicated databases, and it works here as well; see Using MySQL Utilities Workbench Script mysqldbcompare To Compare Two Databases In Replication.

      The mysqldbcompare utility “compares the objects and data from two databases to find differences. It identifies objects having different definitions in the two databases and presents them in a diff-style format of choice. Differences in the data are shown using a similar diff-style format. Changed or missing rows are shown in a standard format of GRID, CSV, TAB, or VERTICAL.” (from: mysqldbcompare — Compare Two Databases and Identify Differences)

Some of the syntax may have changed for mysqldbcompare since I wrote that blog, so you may need to check the mysqldbcompare help output. You will need to run this comparison for each of your databases.

      $ mysqldbcompare --server1=scripts:scripts999@192.168.1.2   --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context comicbookdb:comicbookdb
      # server1 on 192.168.1.2: ... connected.
      # server2 on 192.168.1.123: ... connected.
      # Checking databases comicbookdb on server1 and comicbookdb on server2
      
                                                          Defn    Row     Data   
      Type      Object Name                               Diff    Count   Check  
      --------------------------------------------------------------------------- 
      TABLE     comics                                    pass    pass    pass   
      
      Databases are consistent.
      
      # ...done
      

You can run this for the mysql database as well, but you may get a few failures for the mysql.backup_history and mysql.backup_progress tables:

      $ mysqldbcompare --server1=scripts:scripts999@192.168.1.2   --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context mysql:mysql
      # server1 on 192.168.1.2: ... connected.
      # server2 on 192.168.1.123: ... connected.
      # Checking databases mysql on server1 and mysql on server2
      
                                                          Defn    Row     Data   
      Type      Object Name                               Diff    Count   Check  
      --------------------------------------------------------------------------- 
      TABLE     backup_history                            pass    FAIL    SKIP    
      
      Row counts are not the same among mysql.backup_history and mysql.backup_history.
      
      No primary key found.
      
      TABLE     backup_progress                           pass    FAIL    SKIP    
      
      Row counts are not the same among mysql.backup_progress and mysql.backup_progress.
      
      No primary key found.
      
      TABLE     columns_priv                              pass    pass    pass    
      TABLE     db                                        pass    pass    pass    
      TABLE     event                                     pass    pass    pass    
      TABLE     func                                      pass    pass    pass    
      TABLE     general_log                               pass    pass    SKIP    
      
      No primary key found.
      
      TABLE     help_category                             pass    pass    pass    
      TABLE     help_keyword                              pass    pass    pass    
      TABLE     help_relation                             pass    pass    pass    
      TABLE     help_topic                                pass    pass    pass    
      TABLE     innodb_index_stats                        pass    pass    pass    
      TABLE     innodb_table_stats                        pass    pass    pass    
      TABLE     inventory                                 pass    pass    pass    
      TABLE     ndb_binlog_index                          pass    pass    pass    
      TABLE     plugin                                    pass    pass    pass    
      TABLE     proc                                      pass    pass    pass    
      TABLE     procs_priv                                pass    pass    pass    
      TABLE     proxies_priv                              pass    pass    pass    
      TABLE     servers                                   pass    pass    pass    
      TABLE     slave_master_info                         pass    pass    pass    
      TABLE     slave_relay_log_info                      pass    pass    pass    
      TABLE     slave_worker_info                         pass    pass    pass    
      TABLE     slow_log                                  pass    pass    SKIP    
      
      No primary key found.
      
      TABLE     tables_priv                               pass    pass    pass    
      TABLE     time_zone                                 pass    pass    pass    
      TABLE     time_zone_leap_second                     pass    pass    pass    
      TABLE     time_zone_name                            pass    pass    pass    
      TABLE     time_zone_transition                      pass    pass    pass    
      TABLE     time_zone_transition_type                 pass    pass    pass    
      TABLE     user                                      pass    pass    pass   
      
      Database consistency check failed.
      
      # ...done
      

For example, when you compare the mysql.backup_history tables, the original database has two entries because I ran mysqlbackup twice. The second backup entry isn't written until after the backup has finished, so it isn't reflected in the backup files.

      Original Server

      mysql> select count(*) from mysql.backup_history;
      +----------+
      | count(*) |
      +----------+
      |        2 |
      +----------+
      1 row in set (0.00 sec)
      

      Restored Server

      mysql> select count(*) from mysql.backup_history;
      +----------+
      | count(*) |
      +----------+
      |        1 |
      +----------+
      1 row in set (0.00 sec)
      

      For the mysql.backup_progress tables, the original database has ten rows, while the restored database has seven.

There are many options for using mysqlbackup, including incremental backups, partial backups, compression, backup to tape, point-in-time recovery (PITR), and partial restores. If you are running MySQL in a production environment, you should look at MySQL Enterprise Edition, which includes MySQL Enterprise Backup. Of course, you should always have a backup and recovery plan in place. Finally, whenever possible, practice restoring your backups on a regular basis, so that if your server crashes, you know you can restore your database quickly.
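
To give you a taste of a couple of those options, here is a hedged sketch of a compressed full backup followed by an incremental backup. The option names are taken from the MEB documentation for the 3.x releases, but check the manual for your version before relying on them:

# compressed full backup (InnoDB data files are stored as compressed .ibz files)
$ /usr/local/meb/bin/mysqlbackup --user=root --password --compress \
    --backup-dir=/Users/tonydarnell/hotbackups --with-timestamp backup

# incremental backup containing only the changes made since the last backup
# recorded in the mysql.backup_history table
$ /usr/local/meb/bin/mysqlbackup --user=root --password --incremental \
    --incremental-base=history:last_backup \
    --incremental-backup-dir=/Users/tonydarnell/hotbackups/incr --with-timestamp backup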

       


      Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
      Tony is the author of Twenty Forty-Four: The League of Patriots 

      Visit http://2044thebook.com for more information.