MySQL Enterprise Audit – parsing audit information from log files, inserting into MySQL table

The MySQL Enterprise Audit plugin is part of MySQL Enterprise Edition (available through a paid license). Enterprise Audit tracks everything that happens on your MySQL server, and can be used to protect against and detect the misuse of information, as well as to meet common compliance regulations, including HIPAA, Sarbanes-Oxley, and the PCI Data Security Standard.

MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out-of-the-box, easy-to-use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines.

When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.
(from https://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html)

When you enable MySQL Enterprise Audit, log files are generated in your MySQL data directory. You can use tools like MySQL Workbench (Enterprise Edition) or Oracle Audit Vault to import the log data, view the information, and generate reports.

I was talking with a client who wanted to know if the audit data could be stored in a table. Currently (as of MySQL 5.6.25), the audit information is stored as XML in the audit log files. So, I decided to write a quick Perl script that parses the XML log files and inserts the information into a MySQL database. You will need to set the size of your audit log files in your my.cnf or my.ini configuration file via the audit_log_rotate_on_size variable, and you might need to adjust that size based upon database activity and how well the script parses the log files. If your log files are very large, the Perl script may have trouble processing them, in which case you should decrease the size of your log files and run the script more frequently.
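For reference, here is a minimal sketch of the relevant audit settings in my.cnf (the plugin library name assumes Linux – on Windows it is audit_log.dll – and the rotation size is just an example value to adjust for your activity):

[mysqld]
plugin-load=audit_log.so
# rotate the log automatically once it reaches roughly 100MB
audit_log_rotate_on_size=104857600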


CAVEAT
Enterprise Audit does require a license from MySQL. If you are interested in an Enterprise subscription, contact me via the comment section below. If you are an Enterprise customer, you will need to configure Enterprise Audit first. See the Enterprise Audit online documentation page for more information, or contact MySQL Support.


For the data fields, I used the audit log file format information found at The Audit Log File page on MySQL.com.

My MySQL server doesn’t have a lot of activity, so I sized the data fields as best I could to accommodate the data likely to appear in each field. There may be instances where you will have to increase the size of these fields or change their data types. The largest field is SQL_TEXT, which contains your SQL statements, and every table has a maximum row size of 65,535 bytes. So, the largest possible size of the SQL_TEXT field in this example is around 63,200 bytes (65,535 bytes minus the sum of the sizes of all of the other fields, minus the 1-byte or 2-byte length prefix used for each varchar field). In this example, the SQL_TEXT field is set to 8,096 bytes, so you may need to increase or decrease this value.
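If you later need a larger (or smaller) SQL_TEXT column, a simple ALTER TABLE will resize it. A sketch, using an arbitrary 16,384-byte example size that still fits under the 65,535-byte row limit discussed above:

ALTER TABLE audit_information.audit_parsed
  MODIFY SQL_TEXT varchar(16384) DEFAULT NULL;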

I used varchar data types for each field, except for the primary key field named ID. I did not spend a lot of time on the database schema, so you might want to modify it a little. I am sure that some of the fields are integers, but I did not have enough data in my log files to positively determine all of the possible values for each field. The online manual states that CONNECTION_ID, SERVER_ID, STATUS, STATUS_CODE and VERSION are unsigned integers – but I left them as varchar.


NOTICE
This script requires the use of the new format for the audit log files, which is available in MySQL versions 5.6.20 or later.


I created a database along with two tables: one to store the log file information, and a history table to keep track of which files had already been parsed and inserted into MySQL, as well as the number of log file entries. The CREATE DATABASE and CREATE TABLE syntax is as follows:

CREATE DATABASE `audit_information` /*!40100 DEFAULT CHARACTER SET latin1 */

CREATE TABLE `audit_parsed` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `COMMAND_CLASS` varchar(64) DEFAULT NULL,
  `CONNECTIONID` varchar(32) DEFAULT NULL,
  `DB_NAME` varchar(64) DEFAULT NULL,
  `HOST_NAME` varchar(256) DEFAULT NULL,
  `IP_ADDRESS` varchar(16) DEFAULT NULL,
  `MYSQL_VERSION` varchar(64) DEFAULT NULL,
  `COMMAND_NAME` varchar(64) DEFAULT NULL,
  `OS_LOGIN` varchar(64) DEFAULT NULL,
  `OS_VERSION` varchar(256) DEFAULT NULL,
  `PRIV_USER` varchar(16) DEFAULT NULL,
  `PROXY_USER` varchar(16) DEFAULT NULL,
  `RECORD_ID` varchar(64) DEFAULT NULL,
  `SERVER_ID` varchar(32) DEFAULT NULL,
  `SQL_TEXT` varchar(8096) DEFAULT NULL,
  `STARTUP_OPTIONS` varchar(1024) DEFAULT NULL,
  `COMMAND_STATUS` varchar(64) DEFAULT NULL,
  `STATUS_CODE` varchar(11) DEFAULT NULL,
  `DATE_TIMESTAMP` varchar(24) DEFAULT NULL,
  `USER_NAME` varchar(128) DEFAULT NULL,
  `LOG_VERSION` varchar(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

CREATE TABLE `audit_history` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `AUDIT_LOG_NAME` varchar(64) DEFAULT NULL,
  `PARSED_DATE_TIME` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `LOG_ENTRIES` int(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

The Perl script finds the non-active log files (which end in .xml – example: audit.log.14357895017796690.xml), parses the data, creates an SQL file with INSERT statements, imports the data via the mysql command-line program, and then moves the log file(s) and SQL file(s) to a directory. The history table records what files have been processed, so you don’t accidentally process the same file twice.

In the beginning of the Perl script, there are several values you need to replace to match your system. The values are under the section titled “values needed”. Here is the Perl script (named audit.pl):

#!/usr/bin/perl
# audit.pl

use DBI;
use CGI;
use XML::Simple;

#----------------------------------------------------------
# values needed
$Database = "audit_information";
$MYSQL_DATA_DIR = "/usr/local/mysql/data";
$MySQL_Host_IP_Name = "192.168.1.2";
$mysql_user = "root";
$mysql_password = "password_needed";

# directory to store old audit files after parsing
$audit_directory = "$MYSQL_DATA_DIR/audit_files";

# make an audit_files directory if one does not exist
mkdir($audit_directory) unless(-d $audit_directory);
#----------------------------------------------------------


#----------------------------------------------------------
#for each file do this
# gather the inactive audit log files (audit.log.*.xml) in the data directory
@files = glob("$MYSQL_DATA_DIR/audit.log.*.xml");
foreach $file_name_to_parse (@files) {

	#----------------------------------------------------------
	# check to see if file has already been parsed
	$dbh1 = ConnectToMySql($Database);
	$query1 = "select AUDIT_LOG_NAME from audit_history where AUDIT_LOG_NAME = '$file_name_to_parse'";
	$sth1 = $dbh1->prepare($query1);
	$sth1->execute();

	while (@data = $sth1->fetchrow_array()) {
		$audit_log_name = $data[0];
	}

	# if length of audit_log_name is less than 1, process file
	if (length($audit_log_name) < 1) {

		print "Parsing - $file_name_to_parse\n";

		# name the output SQL file after the audit log file
		$PARSED_FILE = $file_name_to_parse;
		$PARSED_FILE =~ s/\.xml$/_parsed.sql/;

		open(LOGOUT, "> $PARSED_FILE") or die "Couldn't open log_file: $!";
		
		$count = 0;
		
		# XML::Simple variable - SuppressEmpty => 1   ignore empty values
		$xml = XML::Simple->new(SuppressEmpty => 1);
		$data = $xml->XMLin("$file_name_to_parse");
		
		foreach $info (@{$data->{AUDIT_RECORD}})
		{
			# replace tick marks ' with \' in the SQL TEXT
			$info->{"SQLTEXT"} =~ s/'/\\'/g;
		
			print LOGOUT "INSERT INTO audit_information.AUDIT_PARSED (COMMAND_CLASS, CONNECTIONID, DB_NAME, HOST_NAME, IP_ADDRESS, MYSQL_VERSION, COMMAND_NAME, OS_LOGIN, OS_VERSION, PRIV_USER, PROXY_USER, RECORD_ID, SERVER_ID, SQL_TEXT, STARTUP_OPTIONS, COMMAND_STATUS, STATUS_CODE, DATE_TIMESTAMP, USER_NAME, LOG_VERSION) values ('" . $info->{"COMMAND_CLASS"} . "', '" . $info->{"CONNECTION_ID"} . "', '" . $info->{"DB"} . "', '" . $info->{"HOST"} . "', '" . $info->{"IP"} . "', '" . $info->{"MYSQL_VERSION"} . "', '" . $info->{"NAME"} . "', '" . $info->{"OS_LOGIN"} . "', '" . $info->{"OS_VERSION"} . "', '" . $info->{"PRIV_USER"} . "', '" . $info->{"PROXY_USER"} . "', '" . $info->{"RECORD_ID"} . "', '" . $info->{"SERVER_ID"} . "', '" . $info->{"SQLTEXT"} . "', '" . $info->{"STARTUP_OPTIONS"} . "', '" . $info->{"STATUS"} . "', '" . $info->{"STATUS_CODE"} . "', '" . $info->{"TIMESTAMP"} . "', '" . $info->{"USER"} . "', '" . $info->{"VERSION"} . "');\n";
			$count++;
		
		# end foreach $info (@{$data->{AUDIT_RECORD}})
		}
		
		# load parsed file into MySQL - hide warnings
		system("mysql -u$mysql_user -p$mysql_password < $PARSED_FILE > /dev/null 2>&1");
										
		$dbh2 = ConnectToMySql($Database);
		$query2 = "insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('$file_name_to_parse', '$count')";
		
		# optional print output - uncomment if desired
		# print "$query2\n";
																													
		$sth2 = $dbh2->prepare($query2);
		$sth2->execute();

		# close the parsed SQL output file
		close(LOGOUT);

		# optional print output - uncomment if desired
		# print "Moving audit log ($file_name_to_parse) and log file ($PARSED_FILE) to $audit_directory.\n";
		
		# strip directories off $file_name_to_parse
		@file_name_to_move_array = split("\/",$file_name_to_parse);
		$directory_count = $#file_name_to_move_array;
		$file_name_to_move = $file_name_to_move_array[$directory_count];
		
		
		# optional print output - uncomment if desired
		# print "mv $file_name_to_move $file_name_to_parse\n";
		# print "mv $PARSED_FILE $audit_directory\n";

		# move audit log files and parsed log files to $audit_directory
		system("mv $file_name_to_parse $audit_directory");
		system("mv $PARSED_FILE $audit_directory");


	# end - if (length($audit_log_name) < 1)
	}

	# reset the variable before checking the next file
	$audit_log_name = "";

# end - foreach $file_name_to_parse (@files)
}

exit;

#----------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------

   my ($db) = @_;

   my $connectionInfo="dbi:mysql:$db;$MySQL_Host_IP_Name:3306";

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$mysql_user,$mysql_password);
   return $l_dbh;

}

It should not matter where you execute audit.pl, as long as you have correctly entered the required values in the script. You might get errors if you try to run this script on a log file that has not been rotated yet – that is, the current log file in your MySQL data directory, which is named audit.log.

# pwd
/usr/local/mysql/data
# ls -l audit.log
-rw-rw----  1 mysql  _mysql  9955118 Jul  2 15:25 audit.log
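
If you don't want to wait for the current audit.log to reach the audit_log_rotate_on_size limit, the audit-log documentation describes a manual rotation (used when audit_log_rotate_on_size=0): rename the current file – the plugin keeps writing to it – then flush the log so the server closes the renamed file and opens a new audit.log. A sketch:

# mv audit.log audit.log.manual.xml
mysql> SET GLOBAL audit_log_flush = ON;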

The script will only work on files ending in .xml. For testing, I used four small (and identical) audit log files:

# pwd
/usr/local/mysql/data
# ls -l *xml
-rw-rw----  1 mysql  wheel   15508 Jul  2 12:20 audit.log.14357895017796690.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796691.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796692.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796693.xml

I have commented out the print statements in the Perl script, but if you uncomment them, running the script gives you this output for each log file:

# perl audit.pl
Parsing - /usr/local/mysql/data/audit.log.14357895017796690.xml
insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('/usr/local/mysql/data/audit.log.14357895017796690.xml', '34')
Moving audit log (/usr/local/mysql/data/audit.log.14357895017796690.xml) and log file (/usr/local/mysql/data/audit.log.14357895017796690_parsed.sql) to /usr/local/mysql/data/audit_files.
mv audit.log.14357895017796690.xml /usr/local/mysql/data/audit.log.14357895017796690.xml
mv /usr/local/mysql/data/audit.log.14357895017796690_parsed.sql /usr/local/mysql/data/audit_files
....

After running my test script, this is the data in the audit_history table:

mysql> use audit_information
Database changed
mysql> select * from audit_history;
+----+-------------------------------------------------------+---------------------+-------------+
| ID | AUDIT_LOG_NAME                                        | PARSED_DATE_TIME    | LOG_ENTRIES |
+----+-------------------------------------------------------+---------------------+-------------+
|  1 | /usr/local/mysql/data/audit.log.14357895017796690.xml | 2015-07-02 15:25:07 | 34          |
|  2 | /usr/local/mysql/data/audit.log.14357895017796691.xml | 2015-07-02 15:25:08 | 34          |
|  3 | /usr/local/mysql/data/audit.log.14357895017796692.xml | 2015-07-02 15:25:08 | 34          |
|  4 | /usr/local/mysql/data/audit.log.14357895017796693.xml | 2015-07-02 15:25:09 | 34          |
+----+-------------------------------------------------------+---------------------+-------------+
4 rows in set (0.00 sec)

And here is an example of one line from the audit_parsed table.

mysql> select * from audit_parsed limit 1 \G
*************************** 1. row ***************************
             ID: 1
  COMMAND_CLASS: select
   CONNECTIONID: 10093
        DB_NAME: 
      HOST_NAME: localhost
     IP_ADDRESS: 127.0.0.1
  MYSQL_VERSION: 
   COMMAND_NAME: Query
       OS_LOGIN: 
     OS_VERSION: 
      PRIV_USER: 
     PROXY_USER: 
      RECORD_ID: 1614933_2015-07-01T22:08:58
      SERVER_ID: 
       SQL_TEXT: SELECT (UNIX_TIMESTAMP(now()) - CAST(variable_value AS SIGNED))*1000 as serverStartMillis 
  FROM information_schema.global_status 
 WHERE variable_name='uptime'
STARTUP_OPTIONS: 
 COMMAND_STATUS: 0
    STATUS_CODE: 0
 DATE_TIMESTAMP: 2015-07-01T22:08:58 UTC
      USER_NAME: root[root] @ localhost [127.0.0.1]
    LOG_VERSION: 
1 row in set (0.00 sec)

After parsing the log files, you can then write your own queries for searching through your audit data. You can even include this script in cron, so it runs and parses your files automatically. But as always, test this script and use it with caution before putting it in a production environment. You could also modify the Perl script to filter out values you did not want or need to store.
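
For example, here are a couple of starting-point queries against the parsed data (a sketch – adjust the WHERE values to match your own users and data):

-- the most recent statements for a given user
SELECT DATE_TIMESTAMP, COMMAND_NAME, SQL_TEXT
  FROM audit_information.audit_parsed
 WHERE USER_NAME LIKE 'root%'
 ORDER BY DATE_TIMESTAMP DESC LIMIT 10;

-- statements that did not return a success status
SELECT DATE_TIMESTAMP, USER_NAME, SQL_TEXT
  FROM audit_information.audit_parsed
 WHERE COMMAND_STATUS <> '0';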

If you do use this script or if you have any suggestions or other questions, please leave a comment below.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

 

Visit http://2044thebook.com for more information.

Using Perl and MySQL to automatically respond to retweets on twitter

In an earlier post titled Using Perl to send tweets stored in a MySQL database to twitter, I showed you a way to use MySQL to store tweets, and then use Perl to automatically send your tweets to twitter.

In this post, we will look at automatically sending a “thank you” to people who retweet your tweets – and we will be using Perl and MySQL again.

Just like in the first post, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

One caveat: twitter has a rate limit on how often you may connect with your application – depending upon what you are trying to do. See Rate Limiting and Rate Limits for more information. So, if you are going to put this into a cron job, I wouldn’t run it more than once every 15 minutes.
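
For example, a hypothetical crontab entry that runs the script every 15 minutes (the script path and name are placeholders – substitute your own):

*/15 * * * * /usr/bin/perl /home/user/scripts/retweet_thanks.pl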

We will also be using the same tables we created in the first post – tweets and history – as well as a new table named retweets. The retweets table will contain the user names and tweet IDs for the retweets we have already discovered and responded to with a thank-you tweet.

The Perl script will connect to your tweet history table and retrieve a set of your tweet IDs, with the most recent tweet first. The script will then connect to twitter and check to see if there are any retweets for each ID. If a retweet is found, the script will check your retweets table to see if you have already thanked the tweeter for the retweet. If this is a new retweet, the script will connect to twitter and send a "thank-you" message to that user, and then insert the user name and tweet ID into the retweets table. This ensures that you do not send a thank-you response more than once.

Here is a flow chart that will attempt to explain what the script does:

We will be using the API call/method retweets(id) to see if a tweet ID was retweeted, and then we will send the thank-you tweet via the update call. More information about the Perl twitter API may be found at Net::Twitter::Lite::WithAPIv1_1.

First we will need to create the retweets table, where we will store the information about our tweets that were retweeted. Here is the CREATE TABLE statement for the retweets table:

CREATE TABLE `retweets` (
  `id` int(8) NOT NULL AUTO_INCREMENT,
  `tweet_id` bigint(24) DEFAULT NULL,
  `user_name` varchar(24) DEFAULT NULL,
  `retweet_update` varchar(36) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

All you need to do is edit this script with your own consumer_key, consumer_secret, access_token and access_token_secret for your application, and edit the accessTweets file used by the subroutine ConnectToMySql. You may also comment out the "print" commands.

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

my $Database = "tweets";

# Credentials for your twitter application
# you will need to substitute your own application information for these four variables
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

# Grab the last X number of tweets to check for retweets
# - determined by the number after "limit"

$dbh = ConnectToMySql($Database);
$query = "select tweet_id, tweet_update FROM history order by tweet_update desc, id limit 10";	
$sth = $dbh->prepare($query);
$sth->execute();

# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {

	$tweet_id = $data[0];
	$tweet_update = $data[1];

	print "----------------------------------------------------------------------\n";
	print "Checking:  $tweet_id $tweet_update\n";
	print "----------------------------------------------------------------------\n";

		# Connect to twitter and see if anyone retweeted this tweet
		my $results = eval { $nt->retweets($tweet_id)};

		for my $status ( @$results ) {
       
			$user_name = "$status->{user}{screen_name}";
			$retweet_update = "$status->{created_at}";

			# see if this person has retweeted this before, and we already
			# have a record of the retweet in our database
					
			$dbh2 = ConnectToMySql($Database);
			$query2 = "select tweet_id, user_name FROM retweets where tweet_id = '$tweet_id' and user_name = '$user_name' limit 1";	
			$sth2 = $dbh2->prepare($query2);
			$sth2->execute();
    
			@data2 = $sth2->fetchrow_array();
    
			# Uncomment if you want to see it in action
			# print "Query: $query\n";
		
			# Check to see if we had any results, and if not, then insert
			# tweet into database and send them a "thank you" tweet
			if (length($data2[0]) < 1) {

				# record the retweet so we don't thank this user twice
				$dbh3 = ConnectToMySql($Database);
				$query3 = "insert into retweets (tweet_id, user_name, retweet_update) values ('$tweet_id', '$user_name', '$retweet_update')";
				$sth3 = $dbh3->prepare($query3);
				$sth3->execute();

				# Uncomment if you want to see it in action
				# print "Query2: $query2\n";


				# ----------------------------------------------------------------------------
				# send tweet
				# ----------------------------------------------------------------------------

				# This pause is just to slow down the action - you can remove this line if you want
				sleep 5;
				
				my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
					traits              => [qw/API::RESTv1_1/],
					consumer_key        => "$consumer_key",
					consumer_secret     => "$consumer_secret",
					access_token        => "$access_token",
					access_token_secret => "$access_token_secret",
					ssl                 => 1
					);
					
					# Here is the message you want to send - 
					# the thank-you to the user who sent the retweet
					$tweet = "\@$user_name thanks for the retweet!";

					# send thank-you tweet
					my $results = eval { $nt->update("$tweet") };

						undef @data2;
						undef @data3;
					}
		
					else {
  						# we have already thanked this user - as their name and this tweet-id was found in the database
    					print "----- Found tweet: $tweet_id\n";
    
						while (@data2) {

							print "----------------------------------------------------------------------\n";
							print "Checking retweet by $user_name for $tweet_id\n";
							print "Found retweet:  $tweet_id $user_name $retweet_update \n";

							$tweet_id = $data2[0];
							$user_name = $data2[1];
					
							print "***** Retweet by $user_name already in database \n";
							print "----------------------------------------------------------------------\n";
					
							#exit;
							undef @data2;
							undef @data3;

							# This pause is just to slow down the action - you can remove this line if you want
							sleep 5;

							# end while
							}
						
					# end else
					}

		# end for my $status ( @$results ) {
		}

# end while
}

exit;

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory below where my Perl script is located. This file – named accessTweets – contains this information:

database_name
hostname or IP
MySQL user name
password

I tested this on two twitter accounts, and everything worked for me – but let me know if you have problems. I am not the best Perl programmer, nor am I an expert at the twitter API, so there is probably a better/easier way to do this.

 



MySQL Enterprise Edition Database Firewall – Control and Monitor SQL Statement Executions

As of MySQL 5.6.24, MySQL Enterprise Edition includes MySQL Enterprise Firewall, an application-level firewall (it runs within the mysql database process) that enables database administrators to permit or deny SQL statement execution based on matching against whitelists of accepted statement patterns. This helps harden MySQL Server against attacks such as SQL injection or attempts to exploit applications by using them outside of their legitimate query workload characteristics.

Each MySQL account registered with the firewall has its own whitelist of statement patterns (a tokenized representation of a SQL statement), enabling protection to be tailored per account. For a given account, the firewall can operate in recording or protecting mode, for training in the accepted statement patterns or protection against unacceptable statements. The diagram illustrates how the firewall processes incoming statements in each mode.

MySQL Enterprise Firewall Operation

(from https://dev.mysql.com/doc/refman/5.6/en/firewall.html)

If you do not have a MySQL Enterprise Edition license, you may download a trial version of the software via Oracle eDelivery. The MySQL Firewall is included in the MySQL Product Pack, specifically for MySQL Database 5.6.24 or higher.

MySQL Enterprise Firewall has these components:

  • A server-side plugin named MYSQL_FIREWALL that examines SQL statements before they execute and, based on its in-memory cache, renders a decision whether to execute or reject each statement.
  • Server-side plugins named MYSQL_FIREWALL_USERS and MYSQL_FIREWALL_WHITELIST implement INFORMATION_SCHEMA tables that provide views into the firewall data cache (see the query sketch after this list).
  • System tables named firewall_users and firewall_whitelist in the mysql database provide persistent storage of firewall data.
  • A stored procedure named sp_set_firewall_mode() registers MySQL accounts with the firewall, establishes their operational mode, and manages transfer of firewall data between the cache and the underlying system tables.
  • A set of user-defined functions provides an SQL-level API for synchronizing the cache with the underlying system tables.
  • System variables enable firewall configuration and status variables provide runtime operational information.

(from https://dev.mysql.com/doc/refman/5.6/en/firewall-components.html)
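
For example, once the plugin is loaded, you can query those INFORMATION_SCHEMA views directly to see the in-memory firewall state. A quick sketch (the view and column names come from the component list above):

mysql> SELECT USERHOST, MODE FROM INFORMATION_SCHEMA.MYSQL_FIREWALL_USERS;
mysql> SELECT USERHOST, RULE FROM INFORMATION_SCHEMA.MYSQL_FIREWALL_WHITELIST;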

Installing the Firewall

Installing the firewall is fairly easy. After you install MySQL version 5.6.24 or greater, you simply execute an SQL script that is located in the $MYSQL_HOME/share directory. There are two versions of the script, one for Linux and one for Windows (the firewall isn’t supported on the Mac yet).

The scripts are named win_install_firewall.sql for Windows and linux_install_firewall.sql for Linux. You may execute this script from the command line or via MySQL Workbench. For the command line, be sure you are in the directory where the script is located.

shell> mysql -u root -p mysql < win_install_firewall.sql
Enter password: (enter root password here)

The script creates the firewall tables, functions, and stored procedures, and installs the necessary plugins. The script contains the following:

# Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
# Install firewall tables
USE mysql;
CREATE TABLE IF NOT EXISTS mysql.firewall_whitelist( USERHOST VARCHAR(80) NOT NULL, RULE text NOT NULL) engine= MyISAM;
CREATE TABLE IF NOT EXISTS mysql.firewall_users( USERHOST VARCHAR(80) PRIMARY KEY, MODE ENUM ('OFF', 'RECORDING', 'PROTECTING', 'RESET') DEFAULT 'OFF') engine= MyISAM;

INSTALL PLUGIN mysql_firewall SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_whitelist SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_users SONAME 'firewall.dll';

CREATE FUNCTION set_firewall_mode RETURNS STRING SONAME 'firewall.dll';
CREATE FUNCTION normalize_statement RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_whitelist RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_users RETURNS STRING SONAME 'firewall.dll';
delimiter //
CREATE PROCEDURE sp_set_firewall_mode (IN arg_userhost VARCHAR(80), IN arg_mode varchar(12))
BEGIN
IF arg_mode = "RECORDING" THEN
  SELECT read_firewall_whitelist(arg_userhost,FW.rule) FROM mysql.firewall_whitelist FW WHERE FW.userhost=arg_userhost;
END IF;
SELECT set_firewall_mode(arg_userhost, arg_mode);
if arg_mode = "RESET" THEN
  SET arg_mode = "OFF";
END IF;
INSERT IGNORE INTO mysql.firewall_users VALUES (arg_userhost, arg_mode);
UPDATE mysql.firewall_users SET mode=arg_mode WHERE userhost = arg_userhost;

IF arg_mode = "PROTECTING" OR arg_mode = "OFF" THEN
  DELETE FROM mysql.firewall_whitelist WHERE USERHOST = arg_userhost;
  INSERT INTO mysql.firewall_whitelist SELECT USERHOST,RULE FROM INFORMATION_SCHEMA.mysql_firewall_whitelist WHERE USERHOST=arg_userhost;
END IF;
END //
delimiter ;

After you run the script, the firewall should be enabled. You may verify it by running this statement:

mysql> SHOW GLOBAL VARIABLES LIKE 'mysql_firewall%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| mysql_firewall_max_query_size | 4096  |
| mysql_firewall_mode           | ON    |
| mysql_firewall_trace          | OFF   |
+-------------------------------+-------+

Testing the Firewall

To test the firewall, you may use a current mysql user, but we are going to create a test user for this example – webuser@localhost. (The user probably doesn’t need all privileges, but for this example we will grant everything to this user)

CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'Yobuddy!';
GRANT ALL PRIVILEGES ON *.* TO 'webuser'@'localhost' WITH GRANT OPTION;

OPTIONAL: For our test, we will be using the sakila schema provided by MySQL. You may download the sakila database schema (requires MySQL 5.0 or later) at http://dev.mysql.com/doc/index-other.html. If you don’t want to use the sakila database, you may use your own existing database or create a new database.

After downloading the sakila schema, you will have two files, named sakila-schema.sql and sakila-data.sql. Execute the sakila-schema.sql first, and then sakila-data.sql to populate the database with data. If you are using the command line, simply do the following: (substitute UserName for a mysql user name)

# mysql -uUserName -p < sakila-schema.sql
# mysql -uUserName -p < sakila-data.sql

After creating the sakila schema and importing the data, we now set the firewall to record those queries which we want to allow:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","RECORDING");
+-----------------------------------------------+
| read_firewall_whitelist(arg_userhost,FW.rule) |
+-----------------------------------------------+
| Imported users: 0  Imported rules: 0          |
+-----------------------------------------------+
1 row in set (0.14 sec)

+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.22 sec)
Query OK, 5 rows affected (0.28 sec)

We can check to see the firewall mode via this statement, to be sure we are in the recording mode:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+------------+
| USERHOST          | MODE       |
+-------------------+------------+
| webuser@localhost | RECORDING  |
+-------------------+------------+
1 row in set (0.02 sec)

Now that we have recording turned on, let’s run a few queries:

mysql> use sakila
Database changed
mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.00 sec)

mysql> select * from actor limit 2;
+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|        1 | PENELOPE   | GUINESS   | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG  | 2006-02-15 04:34:33 |
+----------+------------+-----------+---------------------+
2 rows in set (0.13 sec)

mysql> select first_name, last_name from actor where first_name like 'T%';
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| TIM        | HACKMAN   |
| TOM        | MCKELLEN  |
| TOM        | MIRANDA   |
| THORA      | TEMPLE    |
+------------+-----------+
4 rows in set (0.00 sec)

We turn off the recording by turning on the protection mode:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","PROTECTING");
+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.00 sec)

We can check to see the firewall mode via this statement:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+------------+
| USERHOST          | MODE       |
+-------------------+------------+
| webuser@localhost | PROTECTING |
+-------------------+------------+
1 row in set (0.02 sec)

And we can look at our whitelist of statements:

mysql>  SELECT * FROM MYSQL.FIREWALL_WHITELIST;
+-------------------+-------------------------------------------------------------------+
| USERHOST          | RULE                                                              |
+-------------------+-------------------------------------------------------------------+
| webuser@localhost | SELECT * FROM actor LIMIT ?                                       |
| webuser@localhost | SELECT SCHEMA ( )                                                 |
| webuser@localhost | SELECT first_name , last_name FROM actor WHERE first_name LIKE ?  |
| webuser@localhost | SHOW TABLES                                                       |
+-------------------+-------------------------------------------------------------------+
4 rows in set (0.00 sec)

The firewall is now protecting against non-whitelisted queries. We can execute a couple of the queries we previously ran, which should be allowed by the firewall.

mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.01 sec)

Now we run two new queries, which should be blocked by the firewall.

mysql> select * from rental;
ERROR 1045 (42000): Firewall prevents statement

mysql> select * from staff;
ERROR 1045 (42000): Firewall prevents statement

The server will write an error message to the log for each statement that is rejected. Example:

2015-03-21T22:59:05.371772Z 14 [Note] Plugin MYSQL_FIREWALL reported:
'ACCESS DENIED for webuser@localhost. Reason: No match in whitelist.
Statement: select * from rental '

You can use these log messages in your efforts to identify the source of attacks.

To see how much firewall activity you have, you may look at the status variables:

mysql> SHOW GLOBAL STATUS LIKE 'Firewall%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Firewall_access_denied  | 42    |
| Firewall_access_granted | 55    |
| Firewall_cached_entries | 78    |
+-------------------------+-------+

The variables indicate the number of statements rejected, accepted, and added to the cache, respectively.

The MySQL Enterprise Firewall Reference is found at https://dev.mysql.com/doc/refman/5.6/en/firewall-reference.html.

 



MySQL 5.7 labs and the HTTP Plugin – inserting, updating and deleting records in MySQL via HTTP

In the MySQL Labs release of MySQL 5.7, there is a new HTTP plugin. The HTTP plugin documentation from the labs site provides this information (from MySQL Labs):

The HTTP Plugin for MySQL adds HTTP(S) interfaces to MySQL. Clients can use the HTTP respectively HTTPS (SSL) protocol to query data stored in MySQL. The query language is SQL but other, simpler interfaces exist. All data is serialized as JSON. This version of MySQL Server HTTP Plugin is a Labs release, which means it’s at an early development stage. It contains several known bugs and limitations, and is meant primarily to give you a rough idea how this plugin will look some day. Likewise, the user API is anything but finalized. Be aware it will change in many respects.

In other words, with a simple HTTP URL, you can access and modify your data stored in MySQL. Here is an overview from the documentation:


The HTTP Plugin for MySQL is a proof of concept of an HTTP(S) interface for MySQL 5.7.

The plugin adds a new protocol to the list of protocols understood by the server. It adds the HTTP respectively HTTPS (SSL) protocol to the list of protocols that can be used to issue SQL commands. Clients can now connect to MySQL either using the MySQL Client Server protocol and programming language-dependent drivers, the MySQL Connectors, or using an arbitrary HTTP client.
Results for SQL commands are returned using the JSON format.

The server plugin is most useful in environments where protocols other than HTTP are blocked:
• JavaScript code run in a browser
• an application server behind a firewall and restricted to HTTP access
• a web services oriented environment

In such environments the plugin can be used instead of a self developed proxy which translates HTTP requests into MySQL requests. Compared to a user-developed proxy, the plugin means less latency, lower complexity and the benefit of using a MySQL product. Please note, for very large deployments an architecture using a proxy not integrated into MySQL may be a better solution to clearly separate software layers and physical hardware used for the different layers.

The HTTP plugin implements multiple HTTP interfaces, for example:
• plain SQL access including meta data
• a CRUD (Create-Read-Update-Delete) interface to relational tables
• an interface for storing JSON documents in relational tables

Some of the interfaces follow Representational State Transfer (REST) ideas, some don’t. See below for a description of the various interfaces.

The plugin maps all HTTP accesses to SQL statements internally. Using SQL greatly simplifies the development of the public HTTP interface. Please note, at this early stage of development performance is not a primary goal. For example, it is possible to develop a similar plugin that uses lower level APIs of the MySQL server to overcome SQL parsing and query planning overhead.


In this post, I will show you how to install the plugin and use HTTP commands to retrieve data. The documentation also provides other examples. I am not going to explain everything about the plugin here, so you will want to download the documentation that comes with it.

First, you will need to download the MySQL Labs 5.7 version which includes the plugin. This download is available from the MySQL Labs web site.

After MySQL 5.7 is installed, you will want to add these lines to your my.cnf/my.ini file under the [mysqld] section:

#
# Default database, if no database given in URL
#
myhttp_default_db = httptest
#
# Non-SSL default MySQL SQL user
#
myhttp_default_mysql_user_name = http_sql_user
myhttp_default_mysql_user_passwd = sql_secret
myhttp_default_mysql_user_host = 127.0.0.1

There are other options for the plugin, but we will skip them for this post.

# Change only, if need be to run the examples!
#
# General settings
# 
# myhttp_http_enabled = 1
# myhttp_http_port = 8080
# myhttp_crud_url_prefix = /crud/
# myhttp_document_url_prefix = /doc/
# myhttp_sql_url_prefix = /sql/
# 
# 
# 
# Non-SSL HTTP basic authentication
# 
# myhttp_basic_auth_user_name = basic_auth_user
# myhttp_basic_auth_user_passwd = basic_auth_passwd

# 
# SSL
# 
# myhttp_https_enabled = 1
# myhttp_https_port = 8081
# myhttp_https_ssl_key = /path/to/mysql/lib/plugin/myhttp_sslkey.pem

After modifying the my.cnf/my.ini file, restart mysql and then install the plugin from a mysql prompt. Before proceeding, check to make sure the plugin is installed:

mysql> INSTALL PLUGIN myhttp SONAME 'libmyhttp.so';
Query OK, 0 rows affected (0.09 sec)


mysql> SELECT * FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME='myhttp'\G
*************************** 1. row ***************************
           PLUGIN_NAME: myhttp
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: DAEMON
   PLUGIN_TYPE_VERSION: 50705.0
        PLUGIN_LIBRARY: libmyhttp.so
PLUGIN_LIBRARY_VERSION: 1.5
         PLUGIN_AUTHOR: Andrey Hristov, Ulf Wendel
    PLUGIN_DESCRIPTION: HTTP Plugin for MySQL
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: ON
1 row in set (0.03 sec)

We will need to create the user for accessing our database, and grant permissions:

mysql> CREATE USER 'http_sql_user'@'127.0.0.1' IDENTIFIED WITH mysql_native_password;
Query OK, 0 rows affected (1.89 sec)

mysql> SET old_passwords = 0;
Query OK, 0 rows affected (0.05 sec)

mysql> SET PASSWORD FOR 'http_sql_user'@'127.0.0.1' = PASSWORD('sql_secret');
Query OK, 0 rows affected (0.05 sec)

mysql> GRANT ALL ON myhttp.* TO
    -> 'http_sql_user'@'127.0.0.1';
Query OK, 0 rows affected (0.12 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.58 sec)

We will need to create a table for our example. The table will be a very simple table with three fields – ID, first and last names:

mysql> CREATE TABLE `names` (
    ->   `id` int(11) NOT NULL AUTO_INCREMENT,
    ->   `name_first` varchar(40) DEFAULT NULL,
    ->   `name_last` varchar(40) DEFAULT NULL,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.04 sec)

We need to insert some data into the table:

INSERT INTO `names` (name_first, name_last) VALUES ('Clark','Kent');
INSERT INTO `names` (name_first, name_last) VALUES ('Bruce','Wayne');
INSERT INTO `names` (name_first, name_last) VALUES ('Hal','Jordan');
INSERT INTO `names` (name_first, name_last) VALUES ('Barry','Allen');
INSERT INTO `names` (name_first, name_last) VALUES ('Diana','Prince');
INSERT INTO `names` (name_first, name_last) VALUES ('Arthur','Curry');
INSERT INTO `names` (name_first, name_last) VALUES ('Oliver','Queen');
INSERT INTO `names` (name_first, name_last) VALUES ('Ray','Palmer');
INSERT INTO `names` (name_first, name_last) VALUES ('Carter','Hall');
Query OK, 9 rows affected (0.01 sec)

Now that we have our table and table data, we can test a select statement with an HTTP URL. You may use a browser for this, but since I like to work with command line tools, I am going to use curl, a command line tool for doing all sorts of URL manipulations and transfers. Here is a simple select statement via curl. Use the plus sign (+) for spaces.

Select all of the names in the table:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names"
[
{
"meta":[
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_first","org_column":"name_first","charset":33,"length":120,"flags":0,"decimals":0},
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_last","org_column":"name_last","charset":33,"length":120,"flags":0,"decimals":0}
],
"data":[ 
	["Clark","Kent"],
	["Bruce","Wayne"],
	["Hal","Jordan"],
	["Barry","Allen"],
	["Diana","Prince"],
	["Arthur","Curry"],
	["Oliver","Queen"],
	["Ray","Palmer"],
	["Carter","Hall"]
],
"status":[{"server_status":34,"warning_count":0}]
}
]

If you want to use a browser, you might have to authenticate the connection (enter the user name and password):

And here is the output from submitting the URL in a browser:

URL:  http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names

Selecting a single name:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names+where+name_first+=+'Clark'"
[
{
"meta":[
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_first","org_column":"name_first","charset":33,"length":120,"flags":0,"decimals":0},
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_last","org_column":"name_last","charset":33,"length":120,"flags":0,"decimals":0}
],
"data":[ 
	["Clark","Kent"]
],
"status":[{"server_status":34,"warning_count":0}]
}
]

Deleting a row:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/delete+from+names+where+name_first+=+'Hal'"
{"server_status":34,"warning_count":0,"affected_rows":1,"last_insert_id":0}

Inserting a row:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/INSERT+INTO+names+(name_first,+name_last)+VALUES+('Hal','Jordan');"
{"server_status":2,"warning_count":0,"affected_rows":1,"last_insert_id":1018}

In a future post, I will show you how to use Perl to connect via HTTP and then parse the results.
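
In the meantime, here is a rough sketch of the idea. The choice of LWP::UserAgent and the JSON module is mine, not part of the plugin – any HTTP client will do. It fetches the SELECT URL from the example above and prints the rows from the JSON "data" array:

#!/usr/bin/perl
# sketch - fetch a query result from the HTTP plugin and print the rows
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;
use JSON;

my $url = "http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names";

my $ua  = LWP::UserAgent->new;
my $req = HTTP::Request->new(GET => $url);

# same basic-auth credentials used in the curl examples above
$req->authorization_basic('basic_auth_user', 'basic_auth_passwd');

my $resp = $ua->request($req);
die "HTTP error: " . $resp->status_line . "\n" unless $resp->is_success;

# the plugin returns a JSON array whose first element holds meta/data/status
my $result = decode_json($resp->decoded_content);
for my $row (@{ $result->[0]{data} }) {
    print "$row->[0] $row->[1]\n";
}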

 



Using Perl to send tweets stored in a MySQL database to twitter

Twitter is not my favorite social media site. Using twitter is like driving downtown, screaming what you want to say out the window, and hoping someone hears you. There might be tens of thousands of people downtown, but your message will only be heard by a few. Your best bet is to repeat your message as often as possible.

However, twitter is free and if you want to reach as many people (theoretically) as possible, you might as well use it. But sending tweets on a scheduled basis can be a pain. There are client programs available which allow you to schedule your tweets (Hootsuite is one I have used in the past). You can load your tweets in the morning, and have the application tweet for you all day long. But you still have to load the application with your tweets – one by one.

A friend of mine asked me if there was a way to send the same 200 tweets over and over again, spaced out every 20 minutes or so. He has a consulting business, and just wants to build up a list of twitter followers by tweeting inspirational quotes. If he tweets for twenty hours a day, and sends three quotes an hour, it will take him a little more than three days to burn through his 200 quotes. And he can always add more quotes or space out the tweets as necessary. I decided to write a Perl script to do this for him.

To start, we will need a MySQL database to store the tweets. I use MySQL’s Workbench product as my client application for connecting to MySQL. From within Workbench, I can create my tweet database:

CREATE DATABASE `tweets` /*!40100 DEFAULT CHARACTER SET latin1 */

I will then need a table inside my database to store my tweets.

CREATE TABLE `tweets` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `tweet` char(140) DEFAULT NULL,
  `last_tweet_date` datetime NOT NULL DEFAULT '2015-01-01 00:00:00',
  `tweet_count` int(5) DEFAULT NULL,
  `tweet_length` int(3) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=100001 DEFAULT CHARSET=latin1

The tweet messages will be stored in the tweet column, and the date the tweet was last sent will have a time stamp in the last_tweet_date column. When we perform a search to find the next tweet, we will simply sort by last_tweet_date and then id, and limit our output to one tweet. After we send the tweet, we will update the last_tweet_date column, which sends that tweet to the end of the line. The script will also increment the tweet_count value (the number of times the tweet has been sent) and record the length of the tweet in tweet_length. I do not do any error checking in the script to make sure the tweet was sent, but errors are printed.
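
In SQL terms, the select-and-recycle cycle looks like this (a sketch – the script below builds the timestamp in Perl rather than using NOW(), and the id value is whatever row the SELECT returned):

-- pick the tweet that has waited the longest
SELECT id, tweet, last_tweet_date, tweet_count FROM tweets ORDER BY last_tweet_date, id LIMIT 1;

-- after a successful send, push that tweet to the back of the line
UPDATE tweets SET last_tweet_date = NOW(), tweet_count = tweet_count + 1 WHERE id = 100001;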

We now need to insert some tweets into our table. Since my friend is going to be sending inspirational quotes, I found a few I can import. In order to make it easier for importing, all single quote marks (') will be replaced by the caret symbol (^). I can then swap these symbols inside the Perl script. You could use the backslash (\) before the single quote, but I prefer a single-character substitution so I know how long the tweet will be.

I will also use the tilde (~) as a way to designate a carriage return in my tweet. The Perl script will replace the tilde with a carriage return (\n). Two tildes give me two carriage returns and a blank line.

insert into tweets (tweet) VALUES('I^m not afraid. -Luke~~You will be. -Yoda~~http://SomeWebSiteHere.com');
insert into tweets (tweet) VALUES('Do or do not.  There is no try.~~-Yoda~~http://SomeWebSiteHere.com');
insert into tweets (tweet) VALUES('No, I am your father.~~-Darth~~http://SomeWebSiteHere.com');

I also create a history table to store the tweet identification numbers. Each tweet is assigned a unique number by twitter, and this is how you can access this tweet. I save this information so I can delete the tweets later using this number. I have included a short script for deleting tweets near the end of this post.

CREATE TABLE `history` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `tweet` char(140) DEFAULT NULL,
  `tweet_id` varchar(30) DEFAULT NULL,
  `tweet_update` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1000032 DEFAULT CHARSET=latin1

You will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

You will also need to register your mobile phone in order to link your twitter account to your application. I have not figured out how to use this script with someone else’s account, as the instructions for scripting Perl for use with twitter are not very thorough. I will try to add this at a later date.

Now that you have your application information and all of your tables created with data inserted, here is the Perl script for sending tweets. You will need to install the necessary Perl modules that are used.


#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Scalar::Util qw(blessed);   # used below to check the error object from Net::Twitter::Lite

my $Database = "tweets";

# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# ----------------------------------------------------------------------------------
# retrieve tweet from database
# ----------------------------------------------------------------------------------

     $dbh = ConnectToMySql($Database);
     $query = "select id, tweet, last_tweet_date, tweet_count FROM tweets order by last_tweet_date, id limit 1";	
     $sth = $dbh->prepare($query);
     $sth->execute();
    
          while (@data = $sth->fetchrow_array()) {
            
		 $id = $data[0];
		 $tweet = $data[1];
		 $last_tweet_date = $data[2];
		 $tweet_count = $data[3];
	}

$tweet_original = $tweet;

# ----------------------------------------------------------------------------
# find tildes ~ and substitute for carriage return
# find carets and substitute for single quote
# ----------------------------------------------------------------------------

$tweet =~ s/~/\n/g;
$tweet =~ s/\^/\'/g;

# ----------------------------------------------------------------------------------
# check length of tweet
# ----------------------------------------------------------------------------------

$tweet_length = length($tweet);

if (length($tweet) > 140)

{
	print "Error - tweet is longer than 140 characters\n";
	exit;
}

# add to the tweet count
$tweet_count++;

# ----------------------------------------------------------------------------
# send tweet
# ----------------------------------------------------------------------------

my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

my $results = eval { $nt->update("$tweet") };
 
  if ( my $err = $@ ) {
      die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

      warn "HTTP Response Code: ", $err->code, "\n",
           "HTTP Message......: ", $err->message, "\n",
           "Twitter error.....: ", $err->error, "\n";
  } 

# ----------------------------------------------------------------------------
# update mysql with new date for last_tweet date/time
# ----------------------------------------------------------------------------

$dbh = ConnectToMySql($Database);
$query = "UPDATE tweets SET last_tweet_date = '$DateTime' , tweet_count = '$tweet_count' , tweet_length = '$tweet_length' where id = '$id'";
$sth = $dbh->prepare($query);
$sth->execute();

# ----------------------------------------------------------------------------
# get the status id of the last tweet
# ----------------------------------------------------------------------------

# retrieve the id of the tweet we just sent - substitute your own account name for user
my $statuses = $nt->user_timeline({ user => "2044_The_Book", count=> 1 });

for my $status ( @$statuses ) {
	$tweet_id = "$status->{id}";
}

if ( my $err = $@ ) {
      die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

      warn "HTTP Response Code: ", $err->code, "\n",
           "HTTP Message......: ", $err->message, "\n",
           "Twitter error.....: ", $err->error, "\n";
} 

# ----------------------------------------------------------------------------
# restore the special characters (reverse the earlier substitutions)
# ----------------------------------------------------------------------------

$tweet =~ s/\n/~/g;
$tweet =~ s/\'/^/g;

# update mysql with new date for last_tweet date/time

$dbh = ConnectToMySql($Database);	
$query = "insert into history (tweet,tweet_id,tweet_update) values ('$tweet_original','$tweet_id','$DateTime')";
$sth = $dbh->prepare($query);
$sth->execute();

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}



In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory below where my Perl script is located. This file contains this information:

database_name
hostname or IP
MySQL user name
password

You can include your information inside the file if you prefer.

If you want to delete your tweets, you can create a script to access the tweets in your history table, and then delete them one at a time. Here is an example without the database connections:


#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

# ----------------------------------------------------------------------------
# delete tweet
# ----------------------------------------------------------------------------

# replace the values for $consumer_key $consumer_secret $access_token $access_token_secret
# with your values for your application

my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1,
);

my $tweet_id = "replace_with_the_id_of_the_tweet_to_delete";

my $statuses = $nt->destroy_status({ id => "$tweet_id" });

exit;

Be sure to replace the value of $tweet_id with the value from the tweet you want to delete.
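
To automate finding those IDs, here is a minimal sketch (it assumes the history table and the credentials-file layout described earlier) that pulls each stored tweet_id from the history table; you could then feed each id to the destroy_status() call above:

#!/usr/bin/perl

use DBI;
use DBD::mysql;

# read the MySQL login credentials from the same file used above -
# database name, host, user name and password, one per line
open(PW, "<../accessTweets") || die "Can't access login credentials";
my $db = <PW>;
my $host = <PW>;
my $userid = <PW>;
my $passwd = <PW>;
chomp($db, $host, $userid, $passwd);
close(PW);

my $dbh = DBI->connect("dbi:mysql:database=$db;host=$host;port=3306", $userid, $passwd);

# grab every tweet_id that the posting script stored in the history table
my $sth = $dbh->prepare("SELECT tweet_id FROM history");
$sth->execute();

# print each id - you would pass these, one at a time, to destroy_status()
while (my ($tweet_id) = $sth->fetchrow_array()) {
	print "$tweet_id\n";
}

$sth->finish();
$dbh->disconnect();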

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots


Visit http://2044thebook.com for more information.

Upgrade MySQL to a new version with a fresh installation & use shell scripts and mysqldump to reload your data

There are several ways to upgrade MySQL. In this post, we will use a combination of shell scripts and the mysqldump application to export our MySQL data, and then re-import it back into the upgraded version of MySQL.

In this example, we will be doing a minor version upgrade. We will be going from 5.6.17 to 5.6.19. This method may not work if you are upgrading from one major release to another – from 5.1 to 5.5, or 5.5 to 5.6. You will want to check each version and review the new features/functions and also what features/functions have been deprecated. We are also assuming that no one will be using the database during the time it takes for us to do the upgrade.

If you want to upgrade from a version that is more than one major release apart from your current version, then you will want to upgrade to each successive version. For example, if you want to upgrade from 5.0 to 5.6, you will want to upgrade from 5.0 to 5.1, then 5.1 to 5.5, and then 5.5 to 5.6.

You don’t have to export all of your data when you upgrade MySQL. There are ways of upgrading without doing anything to your data. But in this post, I will be exporting the data and re-importing it, for a fresh installation. I don’t have that much data, so I don’t mind doing the export and import. If you have a lot of data, you might want to consider other options.

To get an idea of the size of your database(s), here is a quick query that you can use:

SELECT table_schema AS "Database Name",
       SUM( data_length + index_length ) / 1024 / 1024 AS "Database Size in MB"
FROM information_schema.TABLES
GROUP BY table_schema;

When I perform an export/import, I like to export each database as a separate mysqldump file, and then also export all of the databases together in one large file. By exporting/importing the individual databases, if you have an error importing one of the database dump files, you can isolate the error to a single database. It is much easier to fix the error in one smaller data dump file than with a larger all-inclusive dump file.

I am also going to create some simple shell scripts to help me create the commands that I need to make this task much easier. First, you will want to create a directory to store all of the scripts and dump files. Do all of your work inside that directory.

Next, I want to get a list of all of my databases. I will log into mysql and issue the show databases; command (which returns the same list as: select schema_name from information_schema.schemata;):

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| 12thmedia          |
| cbgc               |
| comicbookdb        |
| coupons            |
| healthcheck        |
| innodb_memcache    |
| landwatch          |
| laurelsprings      |
| ls_directory       |
| mem                |
| mysql              |
| performance_schema |
| protech            |
| scripts            |
| stacy              |
| storelist          |
| test               |
| testcert           |
| tony               |
| twtr               |
| watchdb            |
+--------------------+
22 rows in set (1.08 sec)

I can then just highlight and copy the list of databases, and put that list into a text file named “list.txt“. I do not want to include these databases in my export:

information_schema
mysql
performance_schema
test

However, I will export the mysql database later. You will need to check the MySQL manual to make sure that there haven’t been any changes to the mysql system tables from one version to the next.

I will need to manually remove those databases from my list.txt file. I then want to remove all of the spaces and pipe symbols from the text file. These scripts assume that you do not have any spaces in your database names – instead of a space in a database name, I prefer to use an underscore character “_“.

If you know how to use the vi editor, you can do a substitution for the pipes and spaces with these commands:

:%s/ //g
:%s/|//g
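
Or, from the shell, a quick sed one-liner should do the same thing (this assumes your copied list is saved in list.txt; the .bak suffix keeps a backup of the original file):

sed -i.bak 's/[| ]//g' list.txt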

Otherwise, you will want to use another text editor and manually edit the list to remove the spaces and pipe symbols. Your finished list.txt file should look like this:

12thmedia
cbgc
comicbookdb
coupons
healthcheck
innodb_memcache
landwatch
laurelsprings
ls_directory
mem
protech
scripts
stacy
storelist
testcert
tony
twtr
watchdb

You can then create a simple shell script to help create your mysqldump commands – one command for each database. You will want to create this script and the other scripts in the directory you created earlier. Name the script export.sh. You can also change the mysqldump options to meet your needs. I am using GTIDs for replication, so I want to use the --set-gtid-purged=OFF option. You will also want to change the value of my password my_pass to your mysql password. Or, you can leave the password out of the script by using the -p option instead, and enter the password each time you run a mysqldump command.

# export.sh
# script to create the database export commands
k=""
for i in `cat list.txt`
do

echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $i > "$i"_backup.sql"

k="$k $i"

done

# Optional - export all of the databases into one dump file
# use the file extension of .txt so that your import script won't pick it up later
echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $k > all_db_backup.txt"

For the individual databases, I am using the suffix of .sql. For the dump file that contains all of the databases, I am using the suffix of .txt – as I use a wildcard search later to get a list of the dump files, and I don’t want to import the one dump file that contains all of the databases.

Now you can run the export.sh script to create a list of your mysqldump commands, and you are going to direct the output into another shell script named export_list.sh.

# sh export.sh > export_list.sh

We can now take a look at what is in the export_list.sh file

# cat export_list.sh
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases cbgc > cbgc_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases comicbookdb > comicbookdb_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases coupons > coupons_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases healthcheck > healthcheck_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases innodb_memcache > innodb_memcache_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases landwatch > landwatch_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases laurelsprings > laurelsprings_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases ls_directory > ls_directory_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases mem > mem_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases protech > protech_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases scripts > scripts_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases stacy > stacy_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases storelist > storelist_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases testcert > testcert_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases tony > tony_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases twtr > twtr_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases watchdb > watchdb_backup.sql

mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases  12thmedia cbgc comicbookdb coupons healthcheck innodb_memcache landwatch laurelsprings ls_directory mem protech scripts stacy storelist testcert tony twtr watchdb > all_db_backup.txt

Now you have created a list of mysqldump commands that you can execute to dump all of your databases. You can now go ahead and execute your mysqldump commands by running the export_list.sh script:

# sh export_list.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

The message “Warning: Using a password on the command line interface can be insecure.” is shown because you included the value for “--password“. If you don’t want to put your password on the command line, just change that option to “-p“, and you will have to manually enter your MySQL root user’s password after each mysqldump command.
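
For example, with -p, each mysqldump command will pause and prompt you for the password:

# mysqldump -uroot -p --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql
Enter password: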

Here is a list of the dump files that were produced:

# ls -l
total 21424
-rw-r--r--  1 root  staff    26690 Aug  1 16:25 12thmedia_backup.sql
-rw-r--r--  1 root  staff  5455275 Aug  1 16:26 all_db_backup.txt
-rw-r--r--  1 root  staff  1746820 Aug  1 16:25 cbgc_backup.sql
-rw-r--r--  1 root  staff   492943 Aug  1 16:25 comicbookdb_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 coupons_backup.sql
-rw-r--r--  1 root  staff     3366 Aug  1 16:25 export_list.sh
-rw-r--r--  1 root  staff     1077 Aug  1 16:25 healthcheck_backup.sql
-rw-r--r--  1 root  staff     3429 Aug  1 16:25 innodb_memcache_backup.sql
-rw-r--r--  1 root  staff  1815839 Aug  1 16:25 landwatch_backup.sql
-rw-r--r--  1 root  staff   642965 Aug  1 16:25 laurelsprings_backup.sql
-rw-r--r--  1 root  staff   660254 Aug  1 16:25 ls_directory_backup.sql
-rw-r--r--  1 root  staff     1037 Aug  1 16:25 mem_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 protech_backup.sql
-rw-r--r--  1 root  staff     2889 Aug  1 16:25 scripts_backup.sql
-rw-r--r--  1 root  staff    11107 Aug  1 16:25 stacy_backup.sql
-rw-r--r--  1 root  staff     4002 Aug  1 16:25 storelist_backup.sql
-rw-r--r--  1 root  staff     1062 Aug  1 16:25 testcert_backup.sql
-rw-r--r--  1 root  staff     4467 Aug  1 16:25 tony_backup.sql
-rw-r--r--  1 root  staff     1042 Aug  1 16:25 twtr_backup.sql
-rw-r--r--  1 root  staff    52209 Aug  1 16:25 watchdb_backup.sql

You will now want to dump your mysql database (the system schema), so you don’t have to recreate all of the users, passwords and privileges after the new install.

mysqldump -uroot --password=my_pass --set-gtid-purged=OFF mysql > mysql_user_backup.txt

I am once again using the .txt suffix for this file.

After you execute the above command, make sure that the dump file was created:

# ls -l mysql_user_backup.txt
-rw-r--r--  1 root  staff  9672 Aug  1 16:32 mysql_user_backup.txt

We have now finished exporting all of our data, including the mysql system database. You will need to shut down MySQL. You may use mysqladmin to shut down your database, or here is a link on ways to shutdown MySQL.

# mysqladmin -uroot --password=my_pass shutdown
Warning: Using a password on the command line interface can be insecure.

Before continuing, you might want to check to make sure that the mysqld process isn’t still active.

# ps -ef|grep mysqld
    0 18380 17762   0   0:00.00 ttys002    0:00.00 grep mysqld

You are now going to want to change the name of your mysql directory. This will give you access to the old directory in case the upgrade fails. For my OS (Mac OS 10.9), my MySQL home directory is a symbolic link to another directory that contains the actual MySQL data. All I have to do is to remove the symbolic link. A new symbolic link will be created with the new install. Otherwise, just use the mv command to rename your old MySQL directory.

# cd /usr/local/
# ls -ld mysql* 
lrwxr-xr-x   1 root  wheel   36 Aug  9  2013 mysql -> mysql-advanced-5.6.17-osx10.6-x86_64
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

All I have to do is to remove the link, and the MySQL directory will still be there:

# rm mysql
# ls -ld mysql* 
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64
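
If your MySQL home directory is a real directory instead of a symbolic link, a simple rename does the same job (the new name here is just an example):

# mv mysql mysql-old-5.6.17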

Now I am ready to install the new version of MySQL. I won’t cover the installation process, but here is the link to the installation page.

Tip: After you have installed MySQL, don’t forget to run this script from your MySQL home directory. This will install your mysql database tables. Otherwise, you will get an error when you try to start the mysqld process.

# ./scripts/mysql_install_db

Now you can start the mysqld process. See this page if you don’t know how to start MySQL.

You can test to see if the new installation of MySQL is running by either checking the process table, or logging into mysql. With a fresh install of 5.6, you should not have to include a user name or password.

Note: (Future versions of MySQL may automatically create a random root password and put it in your data directory. You will then need to use that password to login to MySQL for the first time. Check the user’s manual for any MySQL versions beyond 5.6.)

# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

Now that MySQL is up and running, leave the mysql terminal window open, and open another terminal window so you can import your mysql table information from your dump file:

# mysql < mysql_user_backup.txt

You won't be able to log in with your old user names and passwords until you execute the flush privileges command. So, in your other terminal window with the mysql prompt:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Open another terminal window and see if you can login with your old mysql user name and password:

# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

You can then look at the user names and passwords in the mysql.user table:

mysql> select user, host, password from mysql.user order by user, host;
+----------------+---------------+-------------------------------------------+
| user           | host          | password                                  |
+----------------+---------------+-------------------------------------------+
| root           | 127.0.0.1     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.2   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.5   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.50  | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | localhost     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
+----------------+---------------+-------------------------------------------+
5 rows in set (0.00 sec)


OPTIONAL:
Since I am using GTIDs for replication, I can check to see how many transactions have been completed by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000005
         Position: 644455
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: e1eb3f38-18da-11e4-aa44-0a1a64a61679:1-124
1 row in set (0.00 sec)

We are now ready to import the database dump files. We can use this script to create the import commands. Copy this into a text file named import.sh:

# import.sh
# script to create the database import commands
# run this script in the same directory as the exported dump files
#
# the *sql wildcard only matches the individual .sql dump files, so the
# all-databases dump file and the mysql dump file (both .txt) are skipped
> import_files.sh
for file in `ls *sql`
do

echo "mysql -uroot -pmy_pass < $file"
echo "mysql -uroot -pmy_pass < $file" >> import_files.sh

done

Then run the import.sh script. The script will print the import commands to the terminal window, and will also write them into a new script file named import_files.sh.

# sh import.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

Look at the contents of the new script file – import_files.sh – to make sure that it contains all of the database files. You will use this file to help you import your dump files.

# cat import_files.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

WARNING: Be sure that this script file does not contain the main dump file or the mysql user’s file that we created.
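
A quick sanity check is to grep the script for .txt – this should return nothing:

# grep '\.txt' import_files.sh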


I was exporting and importing eighteen (18) database files, so I can also check the line count of the import_files.sh script to make sure it matches:

# wc -l import_files.sh
      18 import_files.sh

I am now ready to import my files.


Optional: add the -v for verbose mode – sh -v import_files.sh


# sh import_files.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

Your databases should now be imported into your new instance of MySQL. You can re-run the database-size query from earlier to make sure that the databases are the same size as before.


OPTIONAL:
Since I am using GTIDs for replication, I can check to see how many transactions have been completed after importing the dump files by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000003
         Position: 16884001
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: cc68d008-18f3-11e4-aae6-470d6cf89709:1-43160
1 row in set (0.00 sec)

Your new and fresh installation of MySQL should be ready to use.

NOTE: A thank-you to Daniel Van Eeden for pointing out a mistake that I had made.

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots


Visit http://2044thebook.com for more information.

Use MySQL to store NoSQL and SQL data in the same database using memcached and InnoDB

MySQL is a great relational database, but at some point someone (management) in your company is probably going to say that they need to use NoSQL to store their data. After all, NoSQL is one of the latest buzzwords, so it must be good (correct?). Basically, NoSQL allows you to store data without all of the characteristics of a relational database. A very simple explanation is that you are storing all of a data set with just one primary key, and the primary key is how you also retrieve the data. While NoSQL may be good in some cases, it is hard to beat “old-fashioned” SQL relational databases – especially if that is what you know. But, with MySQL and InnoDB, you can have the best of both worlds.

With MySQL version 5.6 (and above), you have the ability to store and retrieve NoSQL data, using NoSQL commands, while keeping the data inside a MySQL InnoDB database. So, you can use NoSQL and SQL at the same time, on the same data, stored in the same database. And the beauty is that it takes just a few minutes to set up. This post will provide you with a quick lesson on how to set up NoSQL on a MySQL InnoDB database.

I would suggest that you read this MySQL web page to get started – Getting Started with InnoDB Memcached Plugin. You should be able to follow this guide and have your NoSQL database up and running in no time at all.

NoSQL on MySQL uses memcached – a distributed memory object caching system. Currently, the memcached daemon plugin is only supported on Linux, Solaris, and Mac OS X platforms.

There are a few prerequisites. You must have the libevent 1.4.3 (or greater) libraries installed (they are not installed by the MySQL installer). Depending upon your operating system, you can use apt-get, yum, or port install. For example, on Ubuntu Linux:

sudo apt-get install libevent-dev

On the Mac, it takes a few more steps (I tested this with Mac OS 10.9 – Mavericks). To install libevent, you will need to install Xcode, the Xcode command line tools, and then Homebrew. You may install Xcode via the Apple App Store (and this is the most time-consuming part). Once Xcode is installed, from a command line type:

# xcode-select --install

This will prompt you to install the command-line tools. Then, you can easily install Homebrew via this command:

# ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Then install libevent via:

# brew install libevent


The libraries for memcached and the InnoDB plugin for memcached are put into the correct place by the MySQL installer. For a typical operation, the files lib/plugin/libmemcached.so and lib/plugin/innodb_engine.so are used.

You can check to make sure you have the libraries (replace $MYSQL_HOME with your MySQL home directory):

# ls -l $MYSQL_HOME/lib/plugin/libmemcached.so
-rwxr-xr-x  1 mysql  wheel  195664 Mar 14 15:23 lib/plugin/libmemcached.so

# ls -l $MYSQL_HOME/lib/plugin/innodb_engine.so
-rwxr-xr-x  1 mysql  wheel  109056 Mar 14 15:23 lib/plugin/innodb_engine.so

To be able to use memcached so that it can interact with InnoDB tables, you will need to run a configuration script to install the tables necessary for use with memcached. This script is named innodb_memcached_config.sql, and it should be in your $MYSQL_HOME/share directory.

# cd $MYSQL_HOME
# ls -l share/innodb_memcached_config.sql
-rwxr-xr-x  1 mysql  wheel  3963 Mar 14 15:02 share/innodb_memcached_config.sql

To install the script, from the command line run:

# mysql -uroot -p < $MYSQL_HOME/share/innodb_memcached_config.sql

This script will install the three tables (cache_policies, config_options and containers) that it needs for the InnoDB/memcached mapping relationship, along with a sample table named demo_test, which is installed in the test database.

mysql> use innodb_memcache
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_innodb_memcache |
+---------------------------+
| cache_policies            |
| config_options            |
| containers                |
+---------------------------+
3 rows in set (0.01 sec)

mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| demo_test      |
+----------------+
1 row in set (0.00 sec)

The table that we need to use for the InnoDB mapping is the containers table. Here is the DESCRIBE statement for the containers table:

mysql> DESCRIBE innodb_memcache.containers;
+------------------------+--------------+------+-----+---------+-------+
| Field                  | Type         | Null | Key | Default | Extra |
+------------------------+--------------+------+-----+---------+-------+
| name                   | varchar(50)  | NO   | PRI | NULL    |       |
| db_schema              | varchar(250) | NO   |     | NULL    |       |
| db_table               | varchar(250) | NO   |     | NULL    |       |
| key_columns            | varchar(250) | NO   |     | NULL    |       |
| value_columns          | varchar(250) | YES  |     | NULL    |       |
| flags                  | varchar(250) | NO   |     | 0       |       |
| cas_column             | varchar(250) | YES  |     | NULL    |       |
| expire_time_column     | varchar(250) | YES  |     | NULL    |       |
| unique_idx_name_on_key | varchar(250) | NO   |     | NULL    |       |
+------------------------+--------------+------+-----+---------+-------+
9 rows in set (0.00 sec)

Here is some information about the container columns:

  • name: This is the name that is like the primary key for the memcache data collection. If you have a value of default for name, then this will be the default entry that is used. Otherwise it uses the first entry in the container table. You can also specify this name value in the NoSQL statement.
  • db_schema: The InnoDB database name that you will use to store the data.
  • db_table: The InnoDB database table name that you will use to store the data.
  • key_columns: The column that you will use for the key value lookup. This field only contains one column (despite the plural name of key_columns).
  • value_columns: Data will be pulled from and/or stored to these column/columns of data. You use a separator value (such as a pipe "|" symbol) to separate the columns. In the example database that is installed, you can store first name | last name which would pull data from both a firstname and lastname column.
  • flags: This column stores memcache flags, which is an integer that is used to mark rows for memcache operations.
  • cas_column and expire_time_column: These two columns are used for storing memcache compare-and-swap and expiration values. You can ignore these for now.
  • unique_idx_name_on_key: This is the name of the unique index that you will use for the key column, and you should use the primary key for the table. You should not use an auto-incrementing index, as you can’t insert into an auto-incrementing key.

NOTE: If you make any changes to the innodb_memcache.containers table, you will need to restart the plugin or restart mysqld.

The innodb_memcached_config.sql script inserted one line of data for us in the innodb_memcache.containers table:

mysql> select * from innodb_memcache.containers;
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| name | db_schema | db_table  | key_columns | value_columns | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| aaa  | test      | demo_test | c1          | c2            | c3    | c4         | c5                 | PRIMARY                |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
1 row in set (0.00 sec)

You can see the name aaa is mapped to the db_table (InnoDB table) demo_test. And the innodb_memcached_config.sql script also created this demo_test table for us to use, and it inserted one row of data. Here is the DESCRIBE statement for the demo_test table:

mysql> DESCRIBE test.demo_test;
+-------+---------------------+------+-----+---------+-------+
| Field | Type                | Null | Key | Default | Extra |
+-------+---------------------+------+-----+---------+-------+
| c1    | varchar(32)         | NO   | PRI |         |       |
| c2    | varchar(1024)       | YES  |     | NULL    |       |
| c3    | int(11)             | YES  |     | NULL    |       |
| c4    | bigint(20) unsigned | YES  |     | NULL    |       |
| c5    | int(11)             | YES  |     | NULL    |       |
+-------+---------------------+------+-----+---------+-------+
5 rows in set (0.00 sec)

And here is the row that was inserted:

mysql> select * from demo_test;
+----+--------------+------+------+------+
| c1 | c2           | c3   | c4   | c5   |
+----+--------------+------+------+------+
| AA | HELLO, HELLO |    8 |    0 |    0 |
+----+--------------+------+------+------+
1 row in set (0.00 sec)

Next you will need to install the memcached plugin. From a mysql prompt:

mysql> install plugin daemon_memcached soname "libmemcached.so";

Once the plugin is installed this way, it is automatically activated each time the MySQL server is booted or restarted.
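
To confirm that the plugin is running, you can query the information_schema.plugins table – the output should look something like this:

mysql> select plugin_name, plugin_status from information_schema.plugins where plugin_name = 'daemon_memcached';
+------------------+---------------+
| plugin_name      | plugin_status |
+------------------+---------------+
| daemon_memcached | ACTIVE        |
+------------------+---------------+
1 row in set (0.00 sec)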

To turn off the plugin, use this statement:

mysql> uninstall plugin daemon_memcached;

NOTE: You might need to restart mysqld before continuing.

Now we can start testing memcached with InnoDB. MySQL is now listening on the memcached port 11211. We could try writing some code, but the easiest way is to just telnet to port 11211 and issue some NoSQL commands. We only have one row in the innodb_memcache.containers table, and only one row in the test.demo_test table. For this example, we are only going to use two NoSQL commands – get and set. Here is a link to the full list of commands.

Let’s telnet and use NoSQL to retrieve the data from the InnoDB demo_test table, which contained one row of data. The key for this one row is AA. We simply need to type get AA to retrieve the data:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get AA
VALUE AA 8 12
HELLO, HELLO
END

We can now insert a line of data. This is done with the set command. The syntax for inserting data is:

  • set – this is the command to store a value
  • XX – this is the value key
  • # – this is a reference to the flags that we will use
  • # – this is the expiration TTL (time to live)
  • # – the length of the string that we will insert/store
  • ABCDEF – the value to insert/store

In your telnet session, type this:

set BB 0 0 16
Goodbye, Goodbye
STORED

Now, let’s take a look at the InnoDB table, test.demo_test – and we can see our data:

mysql> select * from test.demo_test;
+----+------------------+------+------+------+
| c1 | c2               | c3   | c4   | c5   |
+----+------------------+------+------+------+
| AA | HELLO, HELLO     |    8 |    0 |    0 |
| BB | Goodbye, Goodbye |    0 |    1 |    0 |
+----+------------------+------+------+------+
2 rows in set (0.00 sec)

If we go back to the telnet session, we can also get the value for BB, which we inserted earlier via NoSQL:

get BB
VALUE BB 0 16
Goodbye, Goodbye
END

Let’s create a new table, create the relationship in the innodb_memcache.containers and insert (set) and read (get) some new data. Here is the CREATE TABLE statement for our new users table:

use test;
CREATE TABLE `users` (
 `user_id` varchar(32) NOT NULL DEFAULT '',
 `first` varchar(100) DEFAULT NULL,
 `last` varchar(100) DEFAULT NULL,
 `flags` int(11) DEFAULT '0',
 `cas` int(11) DEFAULT '0',
 `expiry` int(11) DEFAULT '0',
 PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And here is the SQL INSERT for innodb_memcache.containers table, to establish our relationship with memcached and InnoDB:

INSERT INTO `innodb_memcache`.`containers` 
(`name`, `db_schema`, `db_table`, `key_columns`, `value_columns`, `flags`, `cas_column`, `expire_time_column`, `unique_idx_name_on_key`)
VALUES ('default', 'test', 'users', 'user_id', 'first|last', 'flags','cas','expiry','PRIMARY');

NOTE: Since we changed the innodb_memcache.containers table, we will need to either restart mysqld or disable/enable the plugin (as shown above).
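
For example, to cycle just the plugin from the mysql prompt:

mysql> uninstall plugin daemon_memcached;
mysql> install plugin daemon_memcached soname "libmemcached.so";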

Now that we have restarted mysqld or the plugin, let’s insert some data into our new InnoDB table via NoSQL.

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
set jj1 0 0 13
James|Johnson
STORED
set jj2 0 0 12
John|Jackson
STORED

We can now use SQL to query the test.users table to see the InnoDB data we just stored via NoSQL:

mysql> select * from users;
+---------+-------+-----------+-------+------+--------+
| user_id | first | last      | flags | cas  | expiry |
+---------+-------+-----------+-------+------+--------+
| jj1     | James | Johnson   |     0 |    1 |      0 |
| jj2     | John  | Jackson   |     0 |    2 |      0 |
+---------+-------+-----------+-------+------+--------+
2 rows in set (0.00 sec)

Let’s insert some data into the test.users table via mysql, and then retrieve the data via NoSQL.

mysql> INSERT INTO test.users (user_id, first, last, flags, cas, expiry) VALUES ('bb1', 'Beth', 'Brown', '0', '3', '0');
Query OK, 1 row affected (0.00 sec)

Retrieve the data via NoSQL:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get bb1
VALUE bb1 0 10
Beth|Brown
END

We now have a way to use NoSQL via memcached and at the same time use good old-fashioned SQL statements on the same data via InnoDB and MySQL. It is the best of both worlds!

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots


Visit http://2044thebook.com for more information.
