MySQL Enterprise Audit – parsing audit information from log files, inserting into MySQL table

The MySQL Enterprise Audit plugin is part of the MySQL Enterprise Edition (available through a paid license). Enterprise Audit tracks everything that happens on your MySQL server, and can be used to protect against and detect the misuse of information, as well as to meet common compliance regulations, including HIPAA, Sarbanes-Oxley, and the PCI Data Security Standard.

MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out-of-the-box, easy-to-use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines.

When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.
(from https://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html)

When you enable MySQL Enterprise Audit, log files are generated in your MySQL data directory. You can use tools like MySQL Workbench (Enterprise Edition) or Oracle Audit Vault to import the log data, to view the information and to generate reports.

I was talking with a client, and he wanted to know if the audit data could be stored in a table. Currently (as of MySQL 5.6.25), the audit information is stored as XML in the audit log files. So, I decided to write a quick Perl script that would parse the XML log files and insert the information into a MySQL database. You will need to set the size of your audit log files in your my.cnf or my.ini configuration file via the audit_log_rotate_on_size variable, and you might need to adjust that size based upon database activity and how well the script parses the log files. If your log files are very large, the Perl script might have trouble processing them, in which case you can decrease the size of your log files and run the script more frequently.
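For example, here is a minimal my.cnf snippet to enable the plugin and set the rotation size (the values are illustrative – the plugin file is audit_log.so on Linux/Unix and audit_log.dll on Windows, and the rotation size is in bytes):

[mysqld]
plugin-load=audit_log.so
audit_log_rotate_on_size=1048576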


CAVEAT
Enterprise Audit does require a license from MySQL. If you are interested in an Enterprise subscription, contact me via the comment section below. If you are an Enterprise customer, you will need to configure Enterprise Audit first. See the Enterprise Audit online documentation page for more information, or contact MySQL Support.


For the data fields, I used the audit log file format information found at The Audit Log File page on MySQL.com.
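For reference, a record in the new-style audit log looks roughly like this (a sketch built from the attributes this script parses, using values from the parsed example shown later in this post):

<AUDIT_RECORD
 NAME="Query"
 RECORD_ID="1614933_2015-07-01T22:08:58"
 TIMESTAMP="2015-07-01T22:08:58 UTC"
 COMMAND_CLASS="select"
 CONNECTION_ID="10093"
 STATUS="0"
 STATUS_CODE="0"
 USER="root[root] @ localhost [127.0.0.1]"
 OS_LOGIN=""
 HOST="localhost"
 IP="127.0.0.1"
 DB=""
 SQLTEXT="SELECT ..."/>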

My MySQL server doesn’t have a lot of activity, so I sized the data fields as best I could to accommodate the likely size of the data in each field. There may be instances where you will have to increase the size of these fields or change their data types. The largest field is the SQL_TEXT field, which will contain your SQL statements. MySQL tables have a maximum row size of 65,535 bytes, so for this example the largest possible size of the SQL_TEXT field is around 63,200 bytes (65,535 bytes minus the combined size of all of the other fields, and minus the 1-byte or 2-byte length prefix used for each varchar field). In this example, the SQL_TEXT field is set to 8,096 bytes, so you may need to increase or decrease this value (see the ALTER TABLE example after the table definitions below).

I used varchar data types for each field, excluding the primary key field named ID. I did not spend a lot of time on the database schema, so you might want to modify it a little. I am sure that some of the fields are integers, but I did not have enough data in my log files to positively determine all of the possible values for each field. I did read the online manual, and it stated that CONNECTION_ID, SERVER_ID, STATUS, STATUS_CODE and VERSION were unsigned integers – but I left them as varchar.


NOTICE
This script requires the new format for the audit log files, which is available in MySQL version 5.6.20 and later.


I created a database along with two tables: one to store the log file information, and a history table to keep track of which files have already been parsed and inserted into MySQL, along with the number of log file entries. The CREATE DATABASE and CREATE TABLE syntax is as follows:

CREATE DATABASE `audit_information` /*!40100 DEFAULT CHARACTER SET latin1 */

CREATE TABLE `audit_parsed` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `COMMAND_CLASS` varchar(64) DEFAULT NULL,
  `CONNECTIONID` varchar(32) DEFAULT NULL,
  `DB_NAME` varchar(64) DEFAULT NULL,
  `HOST_NAME` varchar(256) DEFAULT NULL,
  `IP_ADDRESS` varchar(16) DEFAULT NULL,
  `MYSQL_VERSION` varchar(64) DEFAULT NULL,
  `COMMAND_NAME` varchar(64) DEFAULT NULL,
  `OS_LOGIN` varchar(64) DEFAULT NULL,
  `OS_VERSION` varchar(256) DEFAULT NULL,
  `PRIV_USER` varchar(16) DEFAULT NULL,
  `PROXY_USER` varchar(16) DEFAULT NULL,
  `RECORD_ID` varchar(64) DEFAULT NULL,
  `SERVER_ID` varchar(32) DEFAULT NULL,
  `SQL_TEXT` varchar(8096) DEFAULT NULL,
  `STARTUP_OPTIONS` varchar(1024) DEFAULT NULL,
  `COMMAND_STATUS` varchar(64) DEFAULT NULL,
  `STATUS_CODE` varchar(11) DEFAULT NULL,
  `DATE_TIMESTAMP` varchar(24) DEFAULT NULL,
  `USER_NAME` varchar(128) DEFAULT NULL,
  `LOG_VERSION` varchar(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

CREATE TABLE `audit_history` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `AUDIT_LOG_NAME` varchar(64) DEFAULT NULL,
  `PARSED_DATE_TIME` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `LOG_ENTRIES` int(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1
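If you later find that statements are being truncated, you can enlarge the SQL_TEXT column – for example (pick a size that stays within the row-size limit discussed above):

ALTER TABLE audit_information.audit_parsed MODIFY `SQL_TEXT` varchar(16384) DEFAULT NULL;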

The Perl script finds the non-active log files (which end in .xml – example: audit.log.14357895017796690.xml), parses the data, creates an SQL file with INSERT statements, imports the data via the mysql command-line program, and then moves the log file(s) and SQL file(s) to a directory. The history table records what files have been processed, so you don’t accidentally process the same file twice.

In the beginning of the Perl script, there are several values you need to replace to match your system. The values are under the section titled “values needed”. Here is the Perl script (named audit.pl):

#!/usr/bin/perl
# audit.pl

use DBI;
use XML::Simple;

#----------------------------------------------------------
# values needed
$Database = "audit_information";
$MYSQL_DATA_DIR = "/usr/local/mysql/data";
$MySQL_Host_IP_Name = "192.168.1.2";
$mysql_user = "root";
$mysql_password = "password_needed";

# directory to store old audit files after parsing
$audit_directory = "$MYSQL_DATA_DIR/audit_files";

# make an audit_files directory if one does not exist
mkdir($audit_directory) unless(-d $audit_directory);
#----------------------------------------------------------


#----------------------------------------------------------
#for each file do this
# get a list of the rotated (non-active) audit log files
@files = glob("$MYSQL_DATA_DIR/audit.log.*.xml");
foreach $file_name_to_parse (@files) {

	#----------------------------------------------------------
	# check to see if file has already been parsed
	# (reset $audit_log_name so a match from a previous loop iteration doesn't linger)
	$audit_log_name = "";
	$dbh1 = ConnectToMySql($Database);
	$query1 = "select AUDIT_LOG_NAME from audit_history where AUDIT_LOG_NAME = '$file_name_to_parse'";
	$sth1 = $dbh1->prepare($query1);
	$sth1->execute();

          while (@data = $sth1->fetchrow_array()) {
            
         	   $audit_log_name = $data[0];

			}

	# if length of audit_log_name is less than 1, process file
	if (length($audit_log_name) < 1) {

		# optional print output - uncomment if desired
		# print "Parsing - $file_name_to_parse\n";

		# name of the SQL file that will hold the INSERT statements
		# (audit.log.NNN.xml becomes audit.log.NNN_parsed.sql)
		$PARSED_FILE = $file_name_to_parse;
		$PARSED_FILE =~ s/\.xml$/_parsed.sql/;

		open(LOGOUT, "> $PARSED_FILE") or die "Couldn't open $PARSED_FILE: $!";
		
		$count = 0;
		
		# XML::Simple variable - SuppressEmpty => 1   ignore empty values
		$xml = XML::Simple->new(SuppressEmpty => 1);
		$data = $xml->XMLin("$file_name_to_parse");
		
		foreach $info (@{$data->{AUDIT_RECORD}})
		{
			# replace tick marks ' with \' in the SQL TEXT
			$info->{"SQLTEXT"} =~ s/'/\\'/g;
		
			print LOGOUT "INSERT INTO audit_information.AUDIT_PARSED (COMMAND_CLASS, CONNECTIONID, DB_NAME, HOST_NAME, IP_ADDRESS, MYSQL_VERSION, COMMAND_NAME, OS_LOGIN, OS_VERSION, PRIV_USER, PROXY_USER, RECORD_ID, SERVER_ID, SQL_TEXT, STARTUP_OPTIONS, COMMAND_STATUS, STATUS_CODE, DATE_TIMESTAMP, USER_NAME, LOG_VERSION) values ('" . $info->{"COMMAND_CLASS"} . "', '" . $info->{"CONNECTION_ID"} . "', '" . $info->{"DB"} . "', '" . $info->{"HOST"} . "', '" . $info->{"IP"} . "', '" . $info->{"MYSQL_VERSION"} . "', '" . $info->{"NAME"} . "', '" . $info->{"OS_LOGIN"} . "', '" . $info->{"OS_VERSION"} . "', '" . $info->{"PRIV_USER"} . "', '" . $info->{"PROXY_USER"} . "', '" . $info->{"RECORD_ID"} . "', '" . $info->{"SERVER_ID"} . "', '" . $info->{"SQLTEXT"} . "', '" . $info->{"STARTUP_OPTIONS"} . "', '" . $info->{"STATUS"} . "', '" . $info->{"STATUS_CODE"} . "', '" . $info->{"TIMESTAMP"} . "', '" . $info->{"USER"} . "', '" . $info->{"VERSION"} . "');\n";
			$count++;
		
		# end foreach $info (@{$data->{AUDIT_RECORD}})
		}
		
		# load parsed file into MySQL - hide warnings
		system("mysql -u$mysql_user -p$mysql_password  /dev/null 2>&1");
										
		$dbh2 = ConnectToMySql($Database);
		$query2 = "insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('$file_name_to_parse', '$count')";
		
		# optional print output - uncomment if desired
		# print "$query2\n";
																													
		$sth2 = $dbh2->prepare($query2);
		$sth2->execute();

		# close the parsed SQL output file
		close(LOGOUT);

		# optional print output - uncomment if desired
		# print "Moving audit log ($file_name_to_parse) and log file ($PARSED_FILE) to $audit_directory.\n";
		
		# strip directories off $file_name_to_parse
		@file_name_to_move_array = split("\/",$file_name_to_parse);
		$directory_count = $#file_name_to_move_array;
		$file_name_to_move = $file_name_to_move_array[$directory_count];
		
		
		# optional print output - uncomment if desired
		# print "mv $file_name_to_move $file_name_to_parse\n";
		# print "mv $PARSED_FILE $audit_directory\n";

		# move audit log files and parsed log files to $audit_directory
		system("mv $file_name_to_parse $audit_directory");
		system("mv $PARSED_FILE $audit_directory");


	# end - if (length($audit_log_name) < 1)
	}

# end - foreach $file_name_to_parse (@files)
}

exit;

#----------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------

   my ($db) = @_;

   my $connectionInfo="dbi:mysql:$db;$MySQL_Host_IP_Name:3306";

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$mysql_user,$mysql_password);
   return $l_dbh;

}

It should not matter where you execute audit.pl, as long as you have correctly entered the required values in the script. You might get errors if you try to run it against the current log file – the one that has not yet been rotated – which sits in your MySQL data directory and is named audit.log.

# pwd
/usr/local/mysql/data
# ls -l audit.log
-rw-rw----  1 mysql  _mysql  9955118 Jul  2 15:25 audit.log

The script will only work on files ending in .xml. For testing, I used four small (and identical) audit log files:

# pwd
/usr/local/mysql/data
# ls -l *xml
-rw-rw----  1 mysql  wheel   15508 Jul  2 12:20 audit.log.14357895017796690.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796691.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796692.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796693.xml

I have commented out the print statements in the Perl script, but if you uncomment them, running the script gives you this output for each log file:

# perl audit.pl
Parsing - /usr/local/mysql/data/audit.log.14357895017796690.xml
insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('/usr/local/mysql/data/audit.log.14357895017796690.xml', '34')
Moving audit log (/usr/local/mysql/data/audit.log.14357895017796690.xml) and log file (/usr/local/mysql/data/audit.log.14357895017796690_parsed.sql) to /usr/local/mysql/data/audit_files.
mv audit.log.14357895017796690.xml /usr/local/mysql/data/audit.log.14357895017796690.xml
mv /usr/local/mysql/data/audit.log.14357895017796690_parsed.sql /usr/local/mysql/data/audit_files
....

After running my test script, here is what is in the audit_history table:

mysql> use audit_information
Database changed
mysql> select * from audit_history;
+----+-------------------------------------------------------+---------------------+-------------+
| ID | AUDIT_LOG_NAME                                        | PARSED_DATE_TIME    | LOG_ENTRIES |
+----+-------------------------------------------------------+---------------------+-------------+
|  1 | /usr/local/mysql/data/audit.log.14357895017796690.xml | 2015-07-02 15:25:07 | 34          |
|  2 | /usr/local/mysql/data/audit.log.14357895017796691.xml | 2015-07-02 15:25:08 | 34          |
|  3 | /usr/local/mysql/data/audit.log.14357895017796692.xml | 2015-07-02 15:25:08 | 34          |
|  4 | /usr/local/mysql/data/audit.log.14357895017796693.xml | 2015-07-02 15:25:09 | 34          |
+----+-------------------------------------------------------+---------------------+-------------+
4 rows in set (0.00 sec)

And here is an example of one line from the audit_parsed table.

mysql> select * from audit_parsed limit 1 \G
*************************** 1. row ***************************
             ID: 1
  COMMAND_CLASS: select
   CONNECTIONID: 10093
        DB_NAME: 
      HOST_NAME: localhost
     IP_ADDRESS: 127.0.0.1
  MYSQL_VERSION: 
   COMMAND_NAME: Query
       OS_LOGIN: 
     OS_VERSION: 
      PRIV_USER: 
     PROXY_USER: 
      RECORD_ID: 1614933_2015-07-01T22:08:58
      SERVER_ID: 
       SQL_TEXT: SELECT (UNIX_TIMESTAMP(now()) - CAST(variable_value AS SIGNED))*1000 as serverStartMillis 
  FROM information_schema.global_status 
 WHERE variable_name='uptime'
STARTUP_OPTIONS: 
 COMMAND_STATUS: 0
    STATUS_CODE: 0
 DATE_TIMESTAMP: 2015-07-01T22:08:58 UTC
      USER_NAME: root[root] @ localhost [127.0.0.1]
    LOG_VERSION: 
1 row in set (0.00 sec)

After parsing the log files, you can then write your own queries for searching through your audit data. You can even include this script in cron, so it runs and parses your files automatically. But as always, test this script and use it with caution before putting it in a production environment. You could also modify the Perl script to filter out values you did not want or need to store.
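For example, a query like this (illustrative – adjust the WHERE clause to whatever you are auditing) finds the most recent statements executed by a given user:

SELECT DATE_TIMESTAMP, COMMAND_CLASS, SQL_TEXT
  FROM audit_information.audit_parsed
 WHERE USER_NAME LIKE 'root%'
 ORDER BY DATE_TIMESTAMP DESC
 LIMIT 10;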

If you do use this script or if you have any suggestions or other questions, please leave a comment below.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

 

Visit http://2044thebook.com for more information.

Using Perl and MySQL to automatically respond to retweets on twitter

In an earlier post titled Using Perl to send tweets stored in a MySQL database to twitter, I showed you a way to use MySQL to store tweets, and then use Perl to automatically send your tweets to twitter.

In this post, we will look at automatically sending a “thank you” to people who retweet your tweets – and we will be using Perl and MySQL again.

Just like in the first post, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

One caveat: twitter has a rate limit on how often you may connect with your application – depending upon what you are trying to do. See Rate Limiting and Rate Limits for more information. So, if you are going to put this into a cron job, I wouldn’t run it more than once every 15 minutes.
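If you do put it in cron, the crontab entry might look like this (a sketch – the script name and paths are placeholders for wherever you keep yours):

# check for new retweets every 15 minutes
*/15 * * * * /usr/bin/perl /path/to/retweet_thanks.pl >> /path/to/retweet_thanks.log 2>&1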

We will also be using the same tables we created in the first post – tweets and history – as well as a new table named retweets. The retweets table will contain the user names and tweet IDs for the retweets we have discovered and for which we have already sent a thank-you tweet.

The Perl script will connect to your tweet history table and retrieve a set of your tweet IDs, with the most recent tweet first. The script will then connect to twitter and check to see if there are any retweets for each ID. If a retweet is found, the script will check your retweets table to see if you have already thanked the tweeter for the retweet. If this is a new retweet, the script will connect to twitter, send a "thank-you" message to that user, and then insert the user name and tweet ID into the retweets table. This ensures that you do not send a thank-you response more than once.


We will be using the API call/method retweets(id) to see if a tweet ID was retweeted, and then we will send the thank-you tweet via the update call. More information about the Perl twitter API may be found at Net::Twitter::Lite::WithAPIv1_1.
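At their core, those two calls look like this (a minimal sketch, assuming $nt is an authenticated Net::Twitter::Lite::WithAPIv1_1 object like the one created in the full script below):

# who has retweeted this tweet?
my $results = eval { $nt->retweets($tweet_id) };

# send the thank-you reply
my $reply = eval { $nt->update("\@$user_name thanks for the retweet!") };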

First we will need to create the retweets table, where we will store the information about our tweets that were retweeted. Here is the CREATE TABLE statement for the retweets table:

CREATE TABLE `retweets` (
  `id` int(8) NOT NULL AUTO_INCREMENT,
  `tweet_id` bigint(24) DEFAULT NULL,
  `user_name` varchar(24) DEFAULT NULL,
  `retweet_update` varchar(36) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

All you need to do is edit this script with your own consumer_key, consumer_secret, access_token and access_token_secret for your application, and edit the accessTweets file used by the subroutine ConnectToMySql. You may also comment out the “print” commands.

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

my $Database = "tweets";

# Credentials for your twitter application
# you will need to substitute your own application information for these four variables
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

# Grab the last X number of tweets to check for retweets
# - determined by the number after "limit"

$dbh = ConnectToMySql($Database);
$query = "select tweet_id, tweet_update FROM history order by tweet_update desc, id limit 10";	
$sth = $dbh->prepare($query);
$sth->execute();

# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {

	$tweet_id = $data[0];
	$tweet_update = $data[1];

	print "----------------------------------------------------------------------\n";
	print "Checking:  $tweet_id $tweet_update\n";
	print "----------------------------------------------------------------------\n";

		# Connect to twitter and see if anyone retweeted this tweet
		my $results = eval { $nt->retweets($tweet_id)};

		for my $status ( @$results ) {
       
			$user_name = "$status->{user}{screen_name}";
			$retweet_update = "$status->{created_at}";

			# see if this person has retweeted this before, and we already
			# have a record of the retweet in our database
					
			$dbh2 = ConnectToMySql($Database);
			$query2 = "select tweet_id, user_name FROM retweets where tweet_id = '$tweet_id' and user_name = '$user_name' limit 1";	
			$sth2 = $dbh2->prepare($query2);
			$sth2->execute();
    
			@data2 = $sth2->fetchrow_array();
    
			# Uncomment if you want to see it in action
			# print "Query: $query\n";
		
			# Check to see if we had any results, and if not, then insert
			# tweet into database and send them a "thank you" tweet
			if (length($data2[0]) < 1) {

				# record this retweet so we don't thank the same user twice
				$dbh3 = ConnectToMySql($Database);
				$query3 = "insert into retweets (tweet_id, user_name, retweet_update) values ('$tweet_id', '$user_name', '$retweet_update')";
				$sth3 = $dbh3->prepare($query3);
				$sth3->execute();

				# Uncomment if you want to see it in action
				# print "Query2: $query2\n";


				# ----------------------------------------------------------------------------
				# send tweet
				# ----------------------------------------------------------------------------

				# This pause is just to slow down the action - you can remove this line if you want
				sleep 5;
				
				my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
					traits              => [qw/API::RESTv1_1/],
					consumer_key        => "$consumer_key",
					consumer_secret     => "$consumer_secret",
					access_token        => "$access_token",
					access_token_secret => "$access_token_secret",
					ssl                 => 1
					);
					
					# Here is the message you want to send - 
					# the thank-you to the user who sent the retweet
					$tweet = "\@$user_name thanks for the retweet!";

					# send thank-you tweet
					my $results = eval { $nt->update("$tweet") };

						undef @data2;
						undef @data3;
					}
		
					else
				
					{
  						# we have already thanked this user - as their name and this tweet-id was found in the database
    					print "----- Found tweet: $tweet_id\n";
    
						while (@data2) {

							print "----------------------------------------------------------------------\n";
							print "Checking retweet by $user_name for $tweet_id\n";
							print "Found retweet:  $tweet_id $user_name $retweet_update \n";

							$tweet_id = $data2[0];
							$user_name = $data2[1];
					
							print "***** Retweet by $user_name already in database \n";
							print "----------------------------------------------------------------------\n";
					
							#exit;
							undef @data2;
							undef @data3;

							# This pause is just to slow down the action - you can remove this line if you want
							sleep 5;

							# end while
							}
						
					# end else
					}

		# end for my $status ( @$results ) {
		}

# end while
}

exit;

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory below where my Perl script is located. This file – named accessTweets – contains this information:

database_name
hostname or IP
MySQL user name
password
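For example, the accessTweets file might look like this (all four values are placeholders):

tweets
192.168.1.2
tweet_user
my_password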

I tested this on two twitter accounts, and everything worked for me – but let me know if you have problems. I am not the best Perl programmer, nor am I an expert at the twitter API, so there is probably a better/easier way to do this.


Using Perl to send tweets stored in a MySQL database to twitter

Twitter is not my favorite social media site. Using twitter is like driving downtown, screaming what you want to say out the window, and hoping someone hears you. There might be tens of thousands of people downtown, but your message will only be heard by a few. Your best bet is to repeat your message as often as possible.

However, twitter is free and if you want to reach as many people (theoretically) as possible, you might as well use it. But sending tweets on a scheduled basis can be a pain. There are client programs available which allow you to schedule your tweets (Hootsuite is one I have used in the past). You can load your tweets in the morning, and have the application tweet for you all day long. But you still have to load the application with your tweets – one by one.

A friend of mine asked me if there was a way to send the same 200 tweets over and over again, spaced out every 20 minutes or so. He has a consulting business, and just wants to build up a list of twitter followers by tweeting inspirational quotes. If he tweets for twenty hours a day, and sends three quotes an hour, it will take him a little more than three days to burn through his 200 quotes. And he can always add more quotes or space out the tweets as necessary. I decided to write a Perl script to do this for him.

To start, we will need a MySQL database to store the tweets. I use MySQL’s Workbench product as my client application for connecting to MySQL. From within Workbench, I can create my tweet database:

CREATE DATABASE `tweets` /*!40100 DEFAULT CHARACTER SET latin1 */

I will then need a table inside my database to store my tweets.

CREATE TABLE `tweets` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `tweet` char(140) DEFAULT NULL,
  `last_tweet_date` datetime NOT NULL DEFAULT '2015-01-01 00:00:00',
  `tweet_count` int(5) DEFAULT NULL,
  `tweet_length` int(3) DEFAULT NULL,
  PRIMARY KEY (`id`)

The tweet messages will be stored in the tweet column, and the last date the tweet was sent will have a time stamp in the last_tweet_date column. When we perform a search to find the next tweet, we will simply sort our search by the last_tweet_date and then id, and limit our output to one tweet. After we send the tweet, we will update the last_tweet_date column and send that tweet to the end of the line. The script will also incrementally change the tweet_count value (number of times the tweet has been sent), and record the length of the tweet in tweet_length. I do not do any error checking in the script to make sure the tweet was sent, but errors are printed.

We now need to insert some tweets into our table. Since my friend is going to be sending inspirational quotes, I found a few I can import. In order to make importing easier, all single quote marks (') will be replaced by the caret symbol (^). I can then swap these symbols back inside the Perl script. You could use a backslash (\) before each single quote, but I prefer a single-character substitution so I know how long the tweet will be.

I will also use the tilde (~) to designate a line break in my tweet. The Perl script will replace each tilde with a newline character (\n). Two tildes give me two newlines and a blank line.
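Here is the decoding step in isolation (a small standalone sketch – the full script below applies the same two substitutions):

my $tweet = 'I^m not afraid. -Luke~~You will be. -Yoda';
$tweet =~ s/~/\n/g;    # each tilde becomes a newline
$tweet =~ s/\^/'/g;    # each caret becomes a single quote
print "$tweet\n";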

insert into tweets (tweet) VALUES('I^m not afraid. -Luke~~You will be. -Yoda~~http://SomeWebSiteHere.com');
insert into tweets (tweet) VALUES('Do or do not.  There is no try.~~-Yoda~~http://SomeWebSiteHere.com');
insert into tweets (tweet) VALUES('No, I am your father.~~-Darth~~http://SomeWebSiteHere.com');

I also create a history table to store the tweet identification numbers. Each tweet is assigned a unique number by twitter, and this is how you can access this tweet. I save this information so I can delete the tweets later using this number. I have included a short script for deleting tweets near the end of this post.

CREATE TABLE `history` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `tweet` char(140) DEFAULT NULL,
  `tweet_id` varchar(30) DEFAULT NULL,
  `tweet_update` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1000032 DEFAULT CHARSET=latin1

You will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

You will also need to register your mobile phone in order to link your twitter account to your application. I have not figured out how to use this script with someone else’s account, as the instructions for scripting Perl for use with twitter are not very thorough. I will try to add this at a later date.

Now that you have your application information and all of your tables created with data inserted, here is the Perl script for sending tweets. You will need to install the necessary Perl modules that are used.


#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Scalar::Util qw(blessed);   # blessed() is used in the error handling below

my $Database = "tweets";

# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# ----------------------------------------------------------------------------------
# retrieve tweet from database
# ----------------------------------------------------------------------------------

     $dbh = ConnectToMySql($Database);
     $query = "select id, tweet, last_tweet_date, tweet_count FROM tweets order by last_tweet_date, id limit 1";	
     $sth = $dbh->prepare($query);
     $sth->execute();
    
          while (@data = $sth->fetchrow_array()) {
            
		 $id = $data[0];
		 $tweet = $data[1];
		 $last_tweet_date = $data[2];
		 $tweet_count = $data[3];
	}

$tweet_original = $tweet;

# ----------------------------------------------------------------------------
# find tildes ~ and substitute newlines
# find carets ^ and substitute single quotes
# ----------------------------------------------------------------------------

$tweet =~ s/~/\n/g;
$tweet =~ s/\^/\'/g;

# ----------------------------------------------------------------------------------
# check length of tweet
# ----------------------------------------------------------------------------------

$tweet_length = length($tweet);

if (length($tweet) > 140)

{
	print "Error - tweet is longer than 140 characters\n";
	exit;
}

# add to the tweet count
$tweet_count++;

# ----------------------------------------------------------------------------
# send tweet
# ----------------------------------------------------------------------------

my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

my $results = eval { $nt->update("$tweet") };
 
  if ( my $err = $@ ) {
      die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

      warn "HTTP Response Code: ", $err->code, "\n",
           "HTTP Message......: ", $err->message, "\n",
           "Twitter error.....: ", $err->error, "\n";
  } 

# ----------------------------------------------------------------------------
# update mysql with new date for last_tweet date/time
# ----------------------------------------------------------------------------

$dbh = ConnectToMySql($Database);
$query = "UPDATE tweets SET last_tweet_date = '$DateTime' , tweet_count = '$tweet_count' , tweet_length = '$tweet_length' where id = '$id'";
$sth = $dbh->prepare($query);
$sth->execute();

# ----------------------------------------------------------------------------
# get the status id of the last tweet
# ----------------------------------------------------------------------------

# retrieve the ID of the tweet we just sent - replace "2044_The_Book" with your own twitter user name
my $statuses = $nt->user_timeline({ user => "2044_The_Book", count=> 1 });

for my $status ( @$statuses ) {
	$tweet_id = "$status->{id}";
}

if ( my $err = $@ ) {
      die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

      warn "HTTP Response Code: ", $err->code, "\n",
           "HTTP Message......: ", $err->message, "\n",
           "Twitter error.....: ", $err->error, "\n";
} 

# ----------------------------------------------------------------------------
# replace special characters
# ----------------------------------------------------------------------------

$tweet =~ s/\n/~/g;
$tweet =~ s/\'/^/g;

# update mysql with new date for last_tweet date/time

$dbh = ConnectToMySql($Database);	
$query = "insert into history (tweet,tweet_id,tweet_update) values ('$tweet_original','$tweet_id','$DateTime')";
$sth = $dbh->prepare($query);
$sth->execute();

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}



In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory below where my Perl script is located. This file contains this information:

database_name
hostname or IP
MySQL user name
password

You can include your information inside the file if you prefer.

If you want to delete your tweets, you can create a script to access the tweets in your history table, and then delete them one at a time. Here is an example without the database connections:


#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

# ----------------------------------------------------------------------------
# delete tweet
# ----------------------------------------------------------------------------

# replace the values for $consumer_key $consumer_secret $access_token $access_token_secret
# with your values for your application

my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1,
	 );

my $statuses = $nt->destroy_status({ id => "$tweet_id" });

exit;

Be sure to replace the value of $tweet_id with the value from the tweet you want to delete.


MySQL Enterprise Monitor – send advisor events to your chat client with Perl and Jabber

MySQL Enterprise Monitor (MEM) is part of the MySQL Enterprise Edition, and MEM provides real-time visibility into the performance and availability of all your MySQL databases. MEM and the MySQL Query Analyzer continuously monitor your databases and alert you to potential problems before they impact your system. It’s like having a “Virtual DBA Assistant” at your side to recommend best practices to eliminate security vulnerabilities, improve replication, optimize performance and more. As a result, the productivity of your developers, DBAs and System Administrators is improved significantly.

With MEM, you have a couple of notification options for receiving information when MEM has received an event alert. An event alert is a “significant deviation from the baseline performance trends, and best-practice Advisors recommended changes to configuration and variable settings to improve performance”. From: http://www.mysql.com/products/enterprise/monitor.html

You may choose to receive these alerts via email or SNMP. I am not going to go over how to set up these alerts – but instructions may be found here.

I have MEM installed on my home server, and since I have been using it for a while, I have been able to tune my instances and databases so that I rarely receive any alerts. I normally receive these alerts via email, but I don’t always check email for this particular account during the day. Since most of my colleagues are in other parts of the world, we use a Jabber chat client for quick communications during the day. I wanted to figure out a way to receive a chat message whenever MEM had an alert. For my chat client, I use Adium – an open-source multi-protocol instant messaging client for Mac OS X that supports MSN, Jabber, Yahoo! and other networks. But this should work with any XMPP-based chat client application.

You will probably want to create a new POP3 email address for these alerts, as this script will delete the email messages from the server. If you want to keep an email copy of these alerts, you may add an additional email address to the alert notification group. If you use a free email service (like gmail), remember that it has a daily limit on the number of messages that you can send. I ran into this problem when testing the scripts, so I created my own email address under one of my domains. I found two Perl scripts – one that acts as a POP3 client, and another that sends the Jabber message – and I combined them into one Perl script. You might need to do a little tweaking to get it to work with your service.

This is a sample email message from MEM. The first “Subject” line is the actual email subject. In MEM, you can customize what information you want in your email subject line via Email Notification Groups. I combined several parts of the email message into one message to be sent via chat.


(Email Subject Line)
(macserver01, MacServer01) - MEM WARNING Alert: User Has Rights To Database That Does Not Exist

(Email Body)

Subject: macserver01, MacServer01
Time: 2013-12-30 17:01:44 UTC (2013-12-30 12:01:44 EST)
Category: Security
Event: User Has Rights To Database That Does Not Exist
Problem Description
When a database is dropped, user privileges on the database are not automatically dropped. This has security implications as that user will regain privileges if a database with the same name is created in the future, which may not be the intended result.
Advice
Revoke privileges for all users on any databases that do not exist. The following users have privileges on databases that do not exist.

    ''@'%' on database test_%

Recommended Action
REVOKE ALL PRIVILEGES ON db_name.* FROM 'user_name'@'host_name';
Links and Further Reading
MySQL Manual: DROP DATABASE Syntax
MySQL Manual: GRANT Syntax
MySQL Manual: REVOKE Syntax
MySQL Manual: Privileges Provided by MySQL
MySQL Manual: How the Privilege System Works
Securing Your MySQL Installation
Securing a MySQL Server on Windows
Expression
%user% != THRESHOLD
Evaluated Expression

    ''@'%' on database test_% != ''

Copyright © 2005, 2013, Oracle and/or its affiliates. All rights reserved.

And here is the script. You may run it as a cron job, where it will check your email every few minutes and then send you a chat message when an email with an alert has arrived. You will need to modify the script to match your email and chat settings, and I have placed notes in the script to help guide you:

#!/usr/bin/perl -w

# POP3 client script source:
# http://forums.devshed.com/perl-programming-6/how-to-get-gmail-mail-by-mail-pop3client-in-perl-555889.html
# Author: a user named keath

# Jabber message script source: 
# http://dipinkrishna.com/blog/2010/12/perl-send-chat-message-gmail-buddy-jabber/
# Author: Dipin Krishna

use strict;
# for the email
use Mail::POP3Client;
# for the chat
use Net::Jabber;

# I was having a problem with the Debug module, so I just
# commented line 154 in /Library/Perl/5.16/Net/XMPP/Debug.pm

# this is the email address that you want to use to receive 
# the alert messages from MySQL Enterprise Monitor
my $user = 'MEMalerts@scriptingmysql.com';
my $pass = 'mypassword';

# you will need to use your POP3 server name
my $host = "pop.emailserver.com";

my $pop = new Mail::POP3Client(
	USER     => $user,
	PASSWORD => $pass,
	HOST     => $host,
	PORT     => 995,
	USESSL   => 'true',
);

# I have commented most of the print messages
# - you may uncomment them as you wish
my $count = $pop->Count();
if ($count < 0) {
	print $pop->Message();
} elsif ($count == 0) {
	print "no messages\n";
} else {
	
	#print "$count messsages\n\n";
	
	for my $i (1 .. $count) {
	
	my $subject = "";
	my $message = "";
	
		# if you want to extract data from the head of the email
		# I am extracting data from the body
		#foreach ($pop->Head($i)) {
		foreach ($pop->Body($i)) {

			#print "$_\n" if /^(From|Subject|Date):/i;
			#print "$_\n" if /^(Subject|Date):/i;
			# my $message = "$_\n" if /^(Subject):/i;

			# I am building my message so that it contains the information in this order:
			# Category, Subject, Event

			if ($_ =~ "^Subject")
			{
				#print "$_\n";
				chomp $_;
				$subject = $_;
				$subject =~ s/Subject: //;
 			}

			if ($_ =~ "^Category")
			{
				#print "$_\n";
				chomp $_;
				$message = "$_ || $subject";
				$message =~ s/Category: //;
 			}

			if ($_ =~ "^Event")
			{
				#print "$_\n";
				chomp $_;
				
				$message = "$message || $_";
				$message =~ s/Event: //;
  						
					my $sttime=time;
						
					#print "Message: $_\n";
						
					# this is my Google Talk chat user name and password
					my $username = 'mem.scripting.mysql';
					my $password = 'mypassword';
					my $to = 'my_email_address';
					my $msg = "$message";
					#print "$to: $msg\n";
						 
					my $resource = "dipin";
					my $hostname = 'talk.google.com';
					my $port = 5222;
					my $componentname = 'gmail.com';
					my $Contype = 'tcpip';
					my $tls = 1;
						 
					my $Con = new Net::Jabber::Client();
					$Con->SetCallBacks(presence=>\&presence,
					message=>\&message );
						 
					my $status = $Con->Connect(
					hostname => $hostname, port => $port,
					componentname => $componentname,
					connectiontype => $Contype, tls => $tls);
						 
					if (!(defined($status))) {
						print "ERROR:  XMPP connection failed.\n";
						print "        ($!)\n";
						exit(0);
					} 
						 
					# Change hostname
					my $sid = $Con->{SESSION}->{id};
					$Con->{STREAM}->{SIDS}->{$sid}->{hostname} = $componentname;
						 
					# Authenticate
						
					#my @result = $Con->AuthSend(
					my @result = $Con->AuthIQAuth(
					username => $username, password => $password,
					resource => $resource);
						
					#print "Result:  $result[0] $result[1]\n";
						 
					if ($result[0] ne "ok") {
						print "ERROR: Authorization failed: $result[0] - $result[1]\n";
					}
					else
					{
						#print "Logged in Sucessfull!\n";
						$Con->PresenceSend(show=>"Available");
						#print "Sending Message!\n";
						$Con->MessageSend(to=>"$to",
						subject=>"Test",
						body=>"$msg\n",
						priority=>10);
					}
						
						# END send Jabber message
 						# # # # # # # # # # # # # # # # # # # # # # # # # # #

				# this deletes the message from the server
				$pop->Delete( $i );
				
				# if you only want to send one email message as a test, 
				# uncomment this line
				exit;
			}
		}
		print "\n";
	}
}

$pop->Close();

exit;


Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 1 of 2

Be sure to check out my other posts on mysqldump:
Scripting Backups of MySQL with Perl via mysqldump
Splitting a MySQL Dump File Into Smaller Files Via Perl

Part 1 of 2: (part two)
If you have used MySQL for a while, you have probably used mysqldump to backup your database. In part one of this blog, I am going to show you how to create a simple full and partial backup using mysqldump. In part two, I will show you how to use MySQL Enterprise Backup (which is the successor to the InnoDB Hot Backup product). MySQL Enterprise Backup allows you to backup your database while it is online and it keeps the database available to users during backup operations (you don’t have to take the database offline or lock any databases/tables – but to do this, you need to use the --no-locking option).

This post will deal with mysqldump. For those of you that aren’t familiar with mysqldump:

The mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

The best feature of mysqldump is that it is easy to use. The main problem with mysqldump occurs when you need to restore a database. When you execute mysqldump, the database backup (output) is an SQL file that contains all of the statements necessary to rebuild the database – and restoring requires executing every one of those statements. Since you are recreating your databases, tables and all of your data from this file, the restoration procedure can take a long time if you have a very large database.

There are a lot of features and options with mysqldump – (a complete list is here). I won’t review all of the features, but I will explain some of the ones that I use.

Here is the command to use mysqldump to simply backup all of your databases (assuming you have InnoDB tables). This command will create a dump (backup) file named all_databases.sql.

mysqldump --all-databases --single-transaction --user=root --password > all_databases.sql

After you hit return, you will have to enter your password. You can include the password after the --password option (example: --password=my_password), but this is less secure and you will get the following error:

Warning: Using a password on the command line interface can be insecure.

Here is some information about the options that were used:

--all-databases - this dumps all of the tables in all of the databases
--user - The MySQL user name you want to use for the backup
--password - The password for this user.  You can leave this blank or include the password value (which is less secure - see the option-file note after this list)
--single-transaction - for InnoDB tables
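One way to keep the password off the command line entirely is a client option file, which the MySQL client programs (including mysqldump) read automatically – a minimal sketch, assuming a ~/.my.cnf file restricted with chmod 600:

[client]
user=root
password=my_password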

If you are using Global Transaction Identifiers (GTIDs) with InnoDB (GTIDs aren’t available with MyISAM), you will want to use the --set-gtid-purged=OFF option. Then you would issue this command:

mysqldump --all-databases --single-transaction --set-gtid-purged=OFF --user=root --password > all_databases.sql

Otherwise you will see this error:

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.

You can also execute a partial backup of all of your databases. This example will be a partial backup because I am not going to backup the default databases for MySQL (which are created during installation) – mysql, test, PERFORMANCE_SCHEMA and INFORMATION_SCHEMA.

Note: mysqldump does not dump the INFORMATION_SCHEMA database by default. To dump INFORMATION_SCHEMA, name it explicitly on the command line and also use the --skip-lock-tables option.

mysqldump never dumps the performance_schema database.

mysqldump also does not dump the MySQL Cluster ndbinfo information database.

Before MySQL 5.6.6, mysqldump does not dump the general_log or slow_query_log tables for dumps of the mysql database. As of 5.6.6, the dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

If you encounter problems backing up views due to insufficient privileges, see Section E.5, “Restrictions on Views” for a workaround.
(from: http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html)

To do a partial backup, you will need a list of the databases that you want to backup. You may retrieve a list of all of the databases by simply executing the SHOW DATABASES command from a mysql prompt:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| comicbookdb        |
| coupons            |
| mysql              |
| performance_schema |
| scripts            |
| test               |
| watchdb            |
+--------------------+
8 rows in set (0.00 sec)

In this example, since I don’t want to backup the default mysql databases, I am only going to backup the comicbookdb, coupons, scripts and watchdb databases. I am going to use the following options:

--databases - This allows you to specify the databases that you want to backup.  You can also specify certain tables that you want to backup.  If you want to do a full backup of all of the databases, then leave out this option
--add-drop-database - This will insert a DROP DATABASE statement before each CREATE DATABASE statement.  This is useful if you need to import the data to an existing MySQL instance where you want to overwrite the existing data.  You can also use this to import your backup onto a new MySQL instance, and it will create the databases and tables for you.
--triggers - this will include the triggers for each dumped table
--routines - this will include the stored routines (procedures and functions) from the dumped databases
--events - this will include any events from the dumped databases
--set-gtid-purged=OFF - since I am using replication on this database (it is the master), I like to include this in case I want to create a new slave using the data that I have dumped.  This option enables control over global transaction identifiers (GTID) information written to the dump file, by indicating whether to add a SET @@global.gtid_purged statement to the output.
--user - The MySQL user name you want to use
--password - Again, you can add the actual value of the password (ex. --password=mypassword), but it is less secure than typing in the password manually.  This is useful for when you want to put the backup in a script, in cron or in Windows Task Scheduler.
--single-transaction - Since I am using InnoDB tables, I will want to use this option.

Here is the command that I will run from a prompt:

mysqldump --databases comicbookdb coupons scripts watchdb --single-transaction --set-gtid-purged=OFF --add-drop-database --triggers --routines --events --user=root --password > partial_database_backup.sql

I will need to enter my password on the command line. After the backup has completed, if your backup file isn’t too large, you can open it and see the actual SQL statements that will be used if you decide that you need to recreate the database(s). If you accidentally dump all of the databases into one file, and you want to separate the dump file into smaller files, see my post on using Perl to split the dump file.

For example, here is the section of the dump file (partial_database_backup.sql) for the comicbookdb database (without the table definitions). (I omitted the headers from the dump file.)

--
-- Current Database: `comicbookdb`
--

/*!40000 DROP DATABASE IF EXISTS `comicbookdb`*/;

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `comicbookdb` /*!40100 DEFAULT CHARACTER SET latin1 */;

USE `comicbookdb`;

--
-- Table structure for table `comics`
--

DROP TABLE IF EXISTS `comics`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `comics` (
  `serial_id` int(7) NOT NULL AUTO_INCREMENT,
  `date_time_added` datetime NOT NULL,
  `publisher_id` int(6) NOT NULL,
....

If you are using the dump file to create a slave server, you can use the --master-data option, which includes the CHANGE MASTER information, which looks like this:

--
-- Position to start replication or point-in-time recovery from
--

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000013', MASTER_LOG_POS=79338;

If you used the --set-gtid-purged=OFF option, you would see the value of the Global Transaction Identifiers (GTIDs):

--
-- GTID state at the beginning of the backup
--

SET @@GLOBAL.GTID_PURGED='82F20158-5A16-11E2-88F9-C4A801092ABB:1-168523';

You may also test your backup without exporting any data by using the --no-data option. This will show you all of the information for creating the databases and tables, but it will not export any data. This is also useful for recreating a blank database on the same or on another server.
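For example (using one of the databases from the partial backup above):

mysqldump --no-data --databases comicbookdb --user=root --password > comicbookdb_schema.sql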

When you export your data, mysqldump will create INSERT INTO statements to import the data into the tables. However, the default is for the INSERT INTO statements to contain multiple-row INSERT syntax that includes several VALUES lists. This allows for a quicker import of the data. But, if you think that your data might be corrupt, and you want to be able to isolate a given row of data – or if you simply want to have one INSERT INTO statement per row of data – then you can use the --skip-extended-insert option. If you use the --skip-extended-insert option, importing the data will take much longer to complete, and the backup file size will be larger.

Importing and restoring the data is easy. To import the backup file into a new, blank instance of MySQL, you can simply use the mysql command to import the data:

mysql -uroot -p < partial_database_backup.sql

Again, you will need to enter your password or you can include the value after the -p option (less secure).

There are many more options that you can use with mysqldump (a complete list is at http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html). The main thing to remember is that you should backup your data on a regular basis, and move a copy of the backup file off the MySQL server.

Finally, here is a Perl script that I use in cron to backup my databases. This script allows you to specify which databases you want to backup via the mysql_bak.config file. This config file is simply a list of the databases that you want to backup, with an option to ignore any databases that are commented out with a #. This isn’t a secure script, as you have to embed the MySQL user password in the script.

#!/usr/bin/perl
# Perform a mysqldump on all the databases specified in the mysql_bak.config file

use warnings;
use File::Basename;

# set the directory where you will keep the backup files
$backup_folder = '/Users/tonydarnell/mysqlbak';

# the config file is a text file with a list of the databases to backup
# this should be in the same location as this script, but you can modify this
# if you want to put the file somewhere else
my $config_file = dirname($0) . "/mysql_bak.config";

# example config file
# You may use a comment to bypass any database that you don't want to backup
# # Unwanted_DB    (commented - will not be backed up)
# twtr
# cbgc

# retrieve a list of the databases from the config file
my @databases = removeComments(getFileContents($config_file));

# change to the directory of the backup files.
chdir($backup_folder) or die("Cannot go to folder '$backup_folder'");

# grab the local time variables
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
$year += 1900;
$mon++;
#Zero padding
$mday = '0'.$mday if ($mday<10);
$mon = '0'.$mon if ($mon<10);

$hour = "0$hour" if $hour < 10;
$min = "0$min" if $min  $folder/$file.Z`;

	print "Done\n";
}
print "Done\n\n";

# this subroutine simply creates an array of the list of the databases

sub getFileContents {
	my $file = shift;
	open (FILE,$file) || die("Can't open '$file': $!");
	my @lines=<FILE>;
	close(FILE);

	return @lines;
}

# remove any commented tables from the @lines array

sub removeComments {
	my @lines = @_;

	@cleaned = grep(!/^\s*#/, @lines); #Remove Comments
	@cleaned = grep(!/^\s*$/, @cleaned); #Remove Empty lines

	return @cleaned;
}


Retrieving List of MySQL Users and Grants with Perl

Before I upgrade MySQL to the latest and greatest version, one of the first things that I do is export the user and grant information. In the past, I would keep all of my user information (user name, password, grants) in a text file, with the SQL for each user/grant ready to be executed on the upgraded server. I did use my own form of “mental encryption” for my passwords, so the passwords weren’t in plain English. But then I would have to decode my passwords each time before I executed the SQL statements.

When I upgrade, I usually like to dump all of the data and import it into the new version, so I have a fresh copy of the database. The MySQL server that I have is for my personal use and the data size is relatively small, so for my case it doesn’t take long to import the data.

But there were times when I would add a user in the MySQL database and forget to add it to my text file. Then, when it came time to upgrade and I would refer to my text file, these new users would not get recreated. For me it wasn’t that big of a deal, as I am only dealing with my own home-office server, and not a production server. I would eventually figure out that these users weren’t available, and I would simply recreate them. But I often add temporary users for testing purposes. Every once in a while I would want to keep some of the temporary users, and some users would be deleted after the testing was completed. So my text file was rarely up to date.

I am in the process of upgrading my servers to MySQL 5.6.9, and I decided to write a quick Perl script to export all of the users, passwords and grants. Since I will be doing a fresh install of MySQL, I can then just use the output from this script to recreate my users. Of course, you need to run this script before the upgrade.

There are two SQL statements that we will execute with this script. The first is simply retrieving a list of the user and host names from the mysql.user table:

mysql> SELECT user, host FROM user order by user, host;
+-------------+---------------+
| user        | host          |
+-------------+---------------+
| replicate   | 192.168.1.121 |
| replicate   | 192.168.1.2   |
| replicate   | 192.168.1.4   |
| replicate   | localhost     |
| root        | 127.0.0.1     |
| root        | 192.168.1.2   |
| root        | localhost     |
| root        | macserver01   |
| scripting   | 192.168.1.121 |
| scripting   | 192.168.1.122 |
| scripting   | 192.168.1.2   |
+-------------+---------------+
11 rows in set (0.00 sec)

Then, we will loop through each of the user and host names to retrieve their grants and passwords. Here is a sample of retrieving this data for just one user and one host name:

mysql> SHOW GRANTS FOR 'replicate'@'192.168.1.121';
+-------------------------------------------------------------------------------------------------------------------+
| Grants for replicate@192.168.1.121                                                                                |
+-------------------------------------------------------------------------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.121' IDENTIFIED BY PASSWORD '*BF6F715A6EBFE63005BEB705C' |
+-------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

The script will create the necessary SQL statements to recreate the users and grants, with the output displayed in the terminal window. You can just copy the output from the terminal window and save it in a text file. Or, you can change the print statements to write the information directly to a file. Here are the changes needed to output to a file:

Add this to the top of the script after the “my $Database” line.

open(OUTFILE, ">mysql_usernames.txt") || die "Can't redirect stdout";

Add OUTFILE after each of your print commands:

print OUTFILE "CREATE user \'$user\'\@'$host\'\ identified by '';\n";
print OUTFILE "$privileges;\n\n";

And close the OUTFILE after the last right curly bracket “}” and before the subroutine:

close OUTFILE;

For the script, you will need to install the Perl DBI module, as well as the MySQL driver, DBD::mysql. You will also need to change the values for the $host, $userid and $passwd variables in the subroutine ConnectToMySql to match your system.
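
If you don’t already have these modules, one way to install them is through CPAN:

perl -MCPAN -e 'install DBI'
perl -MCPAN -e 'install DBD::mysql'

Here is the Perl script.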

#!/usr/bin/perl

use DBI;

my $Database = "mysql";

$dbh = ConnectToMySql($Database);

# retrieve a list of users and host names
$query = "SELECT user, host FROM user order by user, host";
$sth = $dbh->prepare($query);
$sth->execute();

while (@data = $sth->fetchrow_array()) {
	my $user = $data[0];
	my $host = $data[1];

	print "CREATE user '$user'\@'$host' identified by '';\n";

	$dbh2 = ConnectToMySql($Database);

	# retrieve the grants for each user and host combination
	$query2 = "SHOW GRANTS FOR '$user'\@'$host'";
	$sth2 = $dbh2->prepare($query2);
	$sth2->execute();

	while (@data2 = $sth2->fetchrow_array()) {
		my $privileges = $data2[0];
		print "$privileges;\n\n";
	}

# end first while statement
}

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   my $host ="";
   my $userid = "";
   my $passwd = "";
   my $connectionInfo = "dbi:mysql:$db;$host";

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

And here is the output from running the script.

root@macserver01: $ perl get_mysql_users.pl
CREATE user 'replicate'@'192.168.1.121' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.121' IDENTIFIED BY PASSWORD '*BF6F715A6EBF367E76X705C';

CREATE user 'replicate'@'192.168.1.2' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.2' IDENTIFIED BY PASSWORD '*BF6F715A6EBF367E76X705C';

CREATE user 'replicate'@'192.168.1.4' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.4' IDENTIFIED BY PASSWORD '*2A9C19E10B309BF1BE40E4A9C';

CREATE user 'replicate'@'localhost' identified by '';
GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'localhost' IDENTIFIED BY PASSWORD '*2A9C19E10B309BF1BE40E4A9C';

CREATE user 'root'@'127.0.0.1' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1' IDENTIFIED BY PASSWORD '*BF6F715A6EBFKK367E76X705C' WITH GRANT OPTION;

CREATE user 'root'@'192.168.1.2' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.1.2' IDENTIFIED BY PASSWORD '*BF6F715A6EBFKK367E76X705C' WITH GRANT OPTION;

CREATE user 'root'@'localhost' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*BF6F715A6EBFKK367E76X705C' WITH GRANT OPTION;

CREATE user 'root'@'macserver01' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'macserver01' WITH GRANT OPTION;

CREATE user 'scripting'@'192.168.1.121' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'scripting'@'192.168.1.121' IDENTIFIED BY PASSWORD '*DEE6483B0XX23K3AD402E34F7' WITH GRANT OPTION;

CREATE user 'scripting'@'192.168.1.122' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'scripting'@'192.168.1.122' IDENTIFIED BY PASSWORD '*DEE6483B0XX23K3AD402E34F7' WITH GRANT OPTION;

CREATE user 'scripting'@'192.168.1.2' identified by '';
GRANT ALL PRIVILEGES ON *.* TO 'scripting'@'192.168.1.2' IDENTIFIED BY PASSWORD '*DEE6483B0XX23K3AD402E34F7' WITH GRANT OPTION;

Now when I upgrade my server, I can simply run this script prior to my upgrade and save the information. I don’t have to worry about missing a user or keeping my user information in a text file.
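
After the upgrade is finished, replaying the saved statements is just another redirect into the mysql client (this assumes you sent the script’s output to the mysql_usernames.txt file, as shown above):

mysql -uroot -p < mysql_usernames.txt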

 



How To Sort Columns of MySQL Data on a Web Page With Perl

A friend of mine was building a web site so his customers could view his current inventory of transportation containers, and he asked me for help on how to sort the rows of information that appeared on his site. So, in this post, I will give you a quick example on how to sort columns on a web page.

First, let’s start with an inventory database that we will build in MySQL:

CREATE TABLE `inventory` (
`id` int(6) NOT NULL AUTO_INCREMENT,
`item_name` varchar(30) NOT NULL,
`item_SKU` varchar(20) NOT NULL,
`item_cost` decimal(4,2) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=100000 DEFAULT CHARSET=latin1;
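
You can confirm the table structure with a quick DESCRIBE (this is what the definition above should produce):

mysql> DESCRIBE inventory;
+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int(6)       | NO   | PRI | NULL    | auto_increment |
| item_name | varchar(30)  | NO   |     | NULL    |                |
| item_SKU  | varchar(20)  | NO   |     | NULL    |                |
| item_cost | decimal(4,2) | NO   |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)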

Next, here are some SQL statements to populate the MySQL database with some sample data.

use inventory;
insert into inventory (item_name, item_SKU, item_cost) values ('Apple', '10001', '1.04');
insert into inventory (item_name, item_SKU, item_cost) values ('Peach', '10004', '1.28');
insert into inventory (item_name, item_SKU, item_cost) values ('Plum', '10301', '1.17');
insert into inventory (item_name, item_SKU, item_cost) values ('Apricot', '13033', '1.92');
insert into inventory (item_name, item_SKU, item_cost) values ('Grapes', '20422', '1.34');
insert into inventory (item_name, item_SKU, item_cost) values ('Kiwi', '98561', '2.78');
insert into inventory (item_name, item_SKU, item_cost) values ('Mango', '56231', '0.99');
insert into inventory (item_name, item_SKU, item_cost) values ('Strawberry', '24689', '1.52');
insert into inventory (item_name, item_SKU, item_cost) values ('Banana', '65213', '0.39');
insert into inventory (item_name, item_SKU, item_cost) values ('Tangerine', '47112', '1.22');
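
If you want to verify the sample data, a quick query should return something like this:

mysql> SELECT item_name, item_SKU, item_cost FROM inventory ORDER BY item_name LIMIT 3;
+-----------+----------+-----------+
| item_name | item_SKU | item_cost |
+-----------+----------+-----------+
| Apple     | 10001    |      1.04 |
| Apricot   | 13033    |      1.92 |
| Banana    | 65213    |      0.39 |
+-----------+----------+-----------+
3 rows in set (0.00 sec)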

For this example, I am not going to show you how to create a new record, edit or delete your information in a MySQL database, as I did that in an earlier post.

Now that we have our data, we are going to need a Perl script that will retrieve all of the data and create our web page at the same time. I have named the Perl script inventory_view.pl. Don’t forget to change all of the variables to match your system. I will explain the variables that we use right after the script:

#!/usr/bin/perl
#--------------------------------------------------------------------------
# inventory_view.pl
#--------------------------------------------------------------------------

use DBI;
use DBD::mysql;
use CGI qw(:standard);

my $Database = "scripting_mysql";

# get the sort parameters from the URL
my $query = new CGI;
# $sort is the order by which to sort - ascending or descending
# you only need to set a value if it is descending (desc)
$sort = $query->param("sort");
# $sortby is the field name to be used for sorting
$sortby = $query->param("sortby");

# check for sort order - ascending or descending
# (see the note at the end of this post about validating these URL parameters)
if ($sort =~ "asc")
{
	$order_by = "order by $sortby";
}
else
{
	$order_by = "order by $sortby $sort";
}

# if the sort isn't set to a value, set it to a default sort of item_name
if (length($sort) < 1)
{
	$order_by = "order by item_name";
}

# print HTML header
print header;

# connect to the database and pull every record
$dbh = ConnectToMySql($Database);
$query = "select item_name, item_SKU, item_cost from inventory $order_by";
$sth = $dbh->prepare($query);
$sth->execute();

# print the table header
print <<HTML;
<table border=0>
<tr>
<td style="text-align:center;border:1px solid gray;padding: 5px 10px 5px 10px;"><a href=http://192.168.1.2/cgi-bin/scripting_mysql/inventory_view.pl?sort=asc&sortby=item_name><img alt="sort ascending" title="sort ascending" height=15 src=http://tonydarnell.com/mysql_blog//arrow_blue_up.png></a>  <a href=http://192.168.1.2/cgi-bin/scripting_mysql/inventory_view.pl?sort=desc&sortby=item_name><img alt="sort descending" title="sort descending" height=15 src=http://tonydarnell.com/mysql_blog//arrow_blue_down.png></a></td>
<td style="text-align:center;border:1px solid gray;padding: 5px 10px 5px 10px;"><a href=http://192.168.1.2/cgi-bin/scripting_mysql/inventory_view.pl?sort=asc&sortby=item_SKU><img alt="sort ascending" title="sort ascending" height=15 src=http://tonydarnell.com/mysql_blog//arrow_blue_up.png></a>  <a href=http://192.168.1.2/cgi-bin/scripting_mysql/inventory_view.pl?sort=desc&sortby=item_SKU><img alt="sort descending" title="sort descending" height=15 src=http://tonydarnell.com/mysql_blog//arrow_blue_down.png></a></td>
<td style="text-align:center;border:1px solid gray;padding: 5px 10px 5px 10px;"><a href=http://192.168.1.2/cgi-bin/scripting_mysql/inventory_view.pl?sort=asc&sortby=item_cost><img alt="sort ascending" title="sort ascending" height=15 src=http://tonydarnell.com/mysql_blog//arrow_blue_up.png></a>  <a href=http://192.168.1.2/cgi-bin/scripting_mysql/inventory_view.pl?sort=desc&sortby=item_cost><img alt="sort descending" title="sort descending" height=15 src=http://tonydarnell.com/mysql_blog//arrow_blue_down.png></a></td>
</tr>

<td style="text-align:center;border:1px solid gray;padding: 5px 10px 5px 10px;font-size:12px;">Item Name</td>
<td style="text-align:center;border:1px solid gray;padding: 5px 10px 5px 10px;">Item SKU</td>
<td style="text-align:center;border:1px solid gray;padding: 5px 10px 5px 10px;">Item Price</td>

HTML

# set the second line background color
$background_color = "#FFFFFF";

# loop through the data fetched from the query
while (@data = $sth->fetchrow_array()) {

$item_name = $data[0];
$item_SKU = $data[1];
$item_cost = $data[2];

# alternate the background colors
if ($background_color =~ "#FFFFFF")
{
	$background_color = "#FFFFCC";
}
else
{
	$background_color = "#FFFFFF";
}

# print the table rows, one for each item from the database
print <<HTML;

<tr bgcolor="$background_color">
<td style="text-align:right;border:1px solid gray;padding: 5px 10px 5px 10px;font-size:12px;"</td>$item_name </td>
<td style="text-align:right;border:1px solid gray;padding: 5px 10px 5px 10px;font-size:12px;"</td>$item_SKU </td>
<td style="text-align:right;border:1px solid gray;padding: 5px 10px 5px 10px;font-size:12px;"</td>$item_cost </td>
</tr>
HTML
}

# close the table
print "</table>";

exit;

# more on what I am doing with the accessSM file may be found at:
# https://scriptingmysql.wordpress.com/2011/07/27/connecting-to-mysql-with-perl/
#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

my ($db) = @_;

# the accessSM file holds the database name, host, userid and password,
# one value per line (the database name read here replaces the $db
# that was passed in above)
open(PW, "<../accessSM") || die "Can't access login credentials";
my $db = <PW>;
my $host = <PW>;
my $userid = <PW>;
my $passwd = <PW>;

chomp($db);
chomp($host);
chomp($userid);
chomp($passwd);

my $connectionInfo="dbi:mysql:$db;$host";
close(PW);

# make connection to database
my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
return $l_dbh;

}
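
For reference, the accessSM file that ConnectToMySql reads is just a four-line text file – the database name, the host, the user id and the password, one per line. Assuming the layout the script above expects, it would look something like this (the values are placeholders; note that the host line uses the DBI host= syntax so it can be appended directly to the connection string):

scripting_mysql
host=localhost
your_userid
your_password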

When you run the script for the first time in a browser, you should see the full table of inventory items sorted by item name, with a pair of blue up and down sort arrows at the top of each column.

In our Perl script, we are using the $query variable to select the three columns from our MySQL database – item_name, item_SKU and item_cost. We decide which column to sort by with the variable $sortby, and we assign ascending or descending order to the variable $sort. This gives us a total of six sorting options:

$sortby      $sort
item_name    ascending order
item_name    descending order
item_SKU     ascending order
item_SKU     descending order
item_cost    ascending order
item_cost    descending order

To build the URL for each, we simply use the script name – inventory_view.pl – and add the variables $sortby and $sort along with their values, using an ampersand for the delimiter.

inventory_view.pl?sortby=[column_name]&sort=[asc or desc]

Example:

inventory_view.pl?sortby=item_name&sort=asc

We will then use an image of an arrow pointing upwards (for ascending order) and downwards (for descending order). Here is an example of the HTML for “sort by item_name and ascending order”: (don’t forget to add the full path names for both the location of the Perl script and the location of your image file)

<a href="cgi-bin/inventory_view.pl?sortby=item_name&sort=asc"><img src="images/arrow_blue_up.png"></a>

You simply have to build a link for each of the six sorting options (two per database column) to correspond to each of the blue up/down arrows. All six links would look like this:

<a href="cgi-bin/inventory_view.pl?sort=asc&sortby=item_name"><img alt="sort ascending" title="sort ascending" height="15" src="images/arrow_blue_up.png"></a>
<a href="cgi-bin/inventory_view.pl?sort=desc&sortby=item_name"><img alt="sort descending" title="sort descending" height="15" src="images/arrow_blue_down.png"></a>

<a href="cgi-bin/inventory_view.pl?sort=asc&sortby=item_SKU"><img alt="sort ascending" title="sort ascending" height="15" src="images/arrow_blue_up.png"></a>
<a href="cgi-bin/inventory_view.pl?sort=desc&sortby=item_SKU"><img alt="sort descending" title="sort descending" height="15" src="images/arrow_blue_down.png"></a>

<a href="cgi-bin/inventory_view.pl?sort=asc&sortby=item_cost"><img alt="sort ascending" title="sort ascending" height="15" src="images/arrow_blue_up.png"></a>
<a href="cgi-bin/inventory_view.pl?sort=desc&sortby=item_cost"><img alt="sort descending" title="sort descending" height="15" src="images/arrow_blue_down.png"></a>

When the inventory_view.pl script is executed the first time, we have not set the $sortby column or the $sort order, so it will default to sorting by item_name in ascending order (ascending doesn’t need an explicit value, as it is the default sort order). Each time a sort request is made, a new connection is made to the database, so that is something to consider when you decide which columns you want to make sortable. Also, we aren’t putting a limit on the number of rows that are retrieved, so all of the items will appear on a single page; I will try to cover pagination in a future post.

While there are other solutions that don’t require you to hit the database each time, this should give you a quick (and dirty) way to sort columns of information in a table on a web page.
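
One final caution: the $sortby and $sort values come straight from the URL and are interpolated into the SQL statement as-is, which leaves the script open to SQL injection. A minimal sketch of one way to guard against that (using the same three column names as above) is to whitelist both parameters before building the order by clause:

# only accept known column names; fall back to item_name otherwise
my %valid_columns = map { $_ => 1 } qw(item_name item_SKU item_cost);
$sortby = "item_name" unless (defined $sortby && $valid_columns{$sortby});

# only accept "desc" for descending; anything else means ascending
$sort = (defined $sort && $sort eq "desc") ? "desc" : "";

$order_by = "order by $sortby $sort";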

 


