MySQL 5.7 multi-source replication – automatically combining data from multiple databases into one

MySQL’s multi-source replication allows you to replicate data from multiple masters into a single database, in parallel. This post will explain and show you how to set up multi-source replication. (WARNING: This is a very long and detailed post. You might want to grab a sandwich and a drink.)

In most replication environments, you have one master database and one or more slave databases. This topology is used for high-availability scenarios, where the reads and writes are split between multiple servers. Your application sends the writes to the master, and reads data from the slaves. This is one way to scale MySQL horizontally for reads, as you can have more than one slave. Multi-source replication allows you to write to multiple MySQL instances, and then combine the data into one server.

Here is a quick overview of MySQL multi-source replication:

MySQL Multi-Source Replication enables a replication slave to receive transactions from multiple sources simultaneously. Multi-source replication can be used to back up multiple servers to a single server, to merge table shards, and consolidate data from multiple servers to a single server. Multi-source replication does not implement any conflict detection or resolution when applying the transactions, and those tasks are left to the application if required. In a multi-source replication topology, a slave creates a replication channel for each master that it should receive transactions from. (from https://dev.mysql.com/doc/refman/5.7/en/replication-multi-source-overview.html)

In this post, I will demonstrate how to set up multi-source replication with two masters and one slave. This will involve a new installation of MySQL 5.7.10 on each server.

I am not going to explain how to install MySQL, but you do need to follow the post-installation instructions for your operating system. If you don’t run the mysqld --initialize post-installation process for each install, you will run into a lot of problems (I will explain this later). I will start with what you need to do post-installation, after the server is up and running. In this example, I have turned off GTIDs, and I will enable them later in the process. I have written several posts on replication, so I am going to assume you have some knowledge of how to set up replication, what GTIDs are, and how replication works. I will also show you some errors you may encounter.
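
For reference, here is what the relevant lines of my configuration file (my.cnf or my.ini) look like at this stage. In MySQL 5.7, gtid_mode defaults to OFF, so this simply makes the default explicit:

[mysqld]
gtid-mode = OFF
enforce-gtid-consistency = 0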

You may visit these posts to learn more about replication:

MySQL Replication with Global Transaction Identifiers – Step-by-Step Install and Addition of Slaves – Part One
MySQL Replication with Global Transaction Identifiers – Step-by-Step Install and Addition of Slaves – Part Two
MySQL Replication – Creating a New Master/Slave Topology with or without Virtual Machines
MySQL Replication – Multi-Threaded Slaves (Parallel Event Execution)
MySQL 5.6 Delayed Replication – Making a Slave Deliberately Lag Behind a Master

Prior to setting up replication, you will need to make sure the replication metadata repositories on the slave are stored in tables rather than files. I have this enabled on all three servers, but it is only required for the slave. If you don’t have this enabled, you can enable it via your configuration (my.cnf or my.ini) file:

[mysqld]
master-info-repository=TABLE
relay-log-info-repository=TABLE

If you did not enable this earlier, you will want to modify your configuration file and restart MySQL. You can check to see if this is enabled via this command:

mysql> SHOW VARIABLES LIKE '%repository%';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| master_info_repository    | TABLE |
| relay_log_info_repository | TABLE |
+---------------------------+-------+
2 rows in set (0.00 sec)
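
If the slave is already running and you would rather not restart it, both variables can also be changed at runtime in MySQL 5.7 - note that the slave threads must be stopped first. A quick sketch:

mysql> STOP SLAVE;
mysql> SET GLOBAL master_info_repository = 'TABLE';
mysql> SET GLOBAL relay_log_info_repository = 'TABLE';

Remember to still add the settings to your configuration file, so they survive a restart.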

You will also need to modify your configuration files (my.cnf or my.ini) to make sure each server has a unique server_id. I use the last three digits of the IP address for each server as my server_id, as in this example:

[mysqld]
server-id=141

To view the server_id for a given server, execute this command:

mysql> SHOW VARIABLES WHERE VARIABLE_NAME = 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 141   |
+---------------+-------+
1 row in set (0.00 sec)

I will be using three servers, and each one has MySQL 5.7.10 installed:

Server #1 – Slave – IP 192.168.1.141
Server #2 – Master #1 – IP 192.168.1.142
Server #3 – Master #2 – IP 192.168.1.143

I will refer to each of these servers as either Slave, Master #1 or Master #2.


NOTE: With MySQL 5.7, if you use a GUI-installation, a password is generated for root during the install process, and it should appear in the installation GUI. If you don’t see the password, it will be in your error log file. Also, when you run the post-installation process, a new root password may be generated again, and this password will also be located in your error log file.

# grep root error.log
2015-12-09T05:34:01.639797Z 1 [Note] A temporary password is generated for root@localhost: T<e-hd0cgI!d

You will need this password to continue using MySQL, and you will need to change it before issuing any other commands. Here is the command to change the password for root:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';

The key to making multi-source replication work is to ensure you don’t have the same primary keys on your two masters. This is especially true if you are using AUTO_INCREMENT columns. If both masters generate the same primary key for two different records, the data could be corrupted once it reaches the slave. I will show you one way to set up alternating key values using AUTO_INCREMENT. Of course, there are other ways to do this, including having your application generate the value for the keys.

If you don’t turn off GTIDs (via your configuration file) prior to running the post-installation steps, you will encounter a problem: GTIDs will be created for the transactions executed during the mysqld --initialize process, and these transactions will be replicated to the slave. Let’s assume you enabled GTIDs from the start, before you ran the post-installation steps, and then you attempted to start replication on the slave. When you run the SHOW MASTER STATUS command, you will see something like this, showing the 138 transactions which were executed on the master and would now be replicated to the slave:

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000002
         Position: 1286
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 73fdfd2a-9e36-11e5-8592-00a64151d129:1-138
1 row in set (0.00 sec)

On the slave, you would see an error in the SHOW SLAVE STATUS:

Last_Error: Error 'Can't create database 'mysql'; database exists' on query. Default database: 'mysql'. Query: 'CREATE DATABASE mysql;

And the Retrieved_Gtid_Set would look like this, showing the 138 transactions which have already been copied to the slave:

mysql> SHOW SLAVE STATUS\G
...
Retrieved_Gtid_Set: 73fdfd2a-9e36-11e5-8592-00a64151d129:1-138
...

You can attempt to skip these transactions, but it is much easier to wait and enable GTIDs later.
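
If you do decide to skip them, one approach is to tell the slave that those transactions have already been applied by setting gtid_purged, using the UUID and transaction range from the output above. This is only a sketch - test it carefully before relying on it:

mysql> STOP SLAVE;
mysql> RESET MASTER;
mysql> SET GLOBAL gtid_purged = '73fdfd2a-9e36-11e5-8592-00a64151d129:1-138';
mysql> START SLAVE;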

After the post-installation steps, you will get the same results on all three servers for a SHOW MASTER STATUS command:

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000002
         Position: 398
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Next, you will need to create the replication user on each of the master servers (where 192.168.1.141 is the IP address of your slave):

mysql> CREATE USER 'replicate'@'192.168.1.141' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.05 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'192.168.1.141';
Query OK, 0 rows affected (0.01 sec)

Afterwards, you can see the additional changes (from creating the user and granting permissions) reflected in the binary log position via the SHOW MASTER STATUS command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000002
         Position: 873
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set:
1 row in set (0.00 sec)

Now we are ready to create our schemas on the slave and the master servers. For this example, I have created a small table to be used for storing information about a comic book collection. Here are the CREATE DATABASE and CREATE TABLE commands:

Slave

CREATE DATABASE `comicbookdb`;
use comicbookdb;
CREATE TABLE `comics` (
  `comic_id` int(9) NOT NULL AUTO_INCREMENT,
  `comic_title` varchar(60) NOT NULL,
  `issue_number` decimal(9,0) NOT NULL,
  `pub_year` varchar(60) NOT NULL,
  `pub_month` varchar(60) NOT NULL,
  PRIMARY KEY (`comic_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1;

You can use the same SQL to create tables on the slave as you do on the masters. Since we will be using AUTO_INCREMENT values on the masters, you might think you would not want to use AUTO_INCREMENT in the CREATE TABLE statement on the slave. But, since we will not be doing any writes to the slave, you can use the same CREATE TABLE statement as you use on a master. You will only need to modify the CREATE TABLE statements for the masters to create alternating primary key values. (More on this later.)

When the data replicates to the slave from the master, replication will handle the AUTO_INCREMENT columns.

Here is what happens when you create the comics table on the slave without specifying the AUTO_INCREMENT for the comic_id column, and then you start replication. From the SHOW SLAVE STATUS\G command:

mysql> SHOW SLAVE STATUS\G
...
Last_SQL_Error: Error 'Field 'comic_id' doesn't have a default value' on query. Default database: 'comicbookdb'. Query: 'INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','5','2014','03')'
...

We now need to find a way to create different and alternating values for our primary key column – comic_id. You could have your application do this, but an easy way is to use the auto_increment_increment variable. In your configuration file (my.cnf or my.ini), you will want to add this for both master databases:

[mysqld]
auto_increment_increment = 2

Adding this variable will require a restart of MySQL. But, you can set it during the mysql session if you don’t want to restart right away. Just make sure to add it to your configuration file (my.cnf or my.ini), or the change will be lost when the session ends.

mysql> SET @@auto_increment_increment=2;
Query OK, 0 rows affected (0.00 sec)
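
The statement above only changes the value for your current session. To have other sessions - such as your application’s connections - pick up the new value without a restart, you can also set it globally:

mysql> SET GLOBAL auto_increment_increment = 2;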

You can verify to see if this variable is enabled with this command:

mysql> SHOW VARIABLES WHERE VARIABLE_NAME = 'auto_increment_increment';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| auto_increment_increment | 2     |
+--------------------------+-------+
1 row in set (0.00 sec)

The auto_increment_increment variable will increment the AUTO_INCREMENT value by two (2) for each new primary key value. We will also need to use different initial primary key values for each master. You can’t simply use 0 (zero) and 1 (one) for the AUTO_INCREMENT value, as when you use the value of 0 (zero), it defaults back to a value of 1 (one). It is easier to set the AUTO_INCREMENT values to a higher number, with the last digits being 0 (zero) and 1 (one) for each master. Here are the CREATE DATABASE and CREATE TABLE commands for each master:
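
As an aside, MySQL also provides the auto_increment_offset variable, which is the more common way to interleave values between masters: instead of seeding each table with a different starting value, you give each master a different offset. Here is a sketch of that alternative (not what I am using in this post):

# Master #1 configuration file (my.cnf or my.ini)
[mysqld]
auto_increment_increment = 2
auto_increment_offset = 1

# Master #2 configuration file (my.cnf or my.ini)
[mysqld]
auto_increment_increment = 2
auto_increment_offset = 2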

Master #1

CREATE DATABASE `comicbookdb`;
use comicbookdb;
CREATE TABLE `comics` (
  `comic_id` int(9) NOT NULL AUTO_INCREMENT,
  `comic_title` varchar(60) NOT NULL,
  `issue_number` decimal(9,0) NOT NULL,
  `pub_year` varchar(60) NOT NULL,
  `pub_month` varchar(60) NOT NULL,
  PRIMARY KEY (`comic_id`)
) ENGINE=InnoDB AUTO_INCREMENT=100000;

Master #2

CREATE DATABASE `comicbookdb`;
use comicbookdb;
CREATE TABLE `comics` (
  `comic_id` int(9) NOT NULL AUTO_INCREMENT,
  `comic_title` varchar(60) NOT NULL,
  `issue_number` decimal(9,0) NOT NULL,
  `pub_year` varchar(60) NOT NULL,
  `pub_month` varchar(60) NOT NULL,
  PRIMARY KEY (`comic_id`)
) ENGINE=InnoDB AUTO_INCREMENT=100001;

Now that we have all of our tables and users created, we can implement GTIDs on the master servers. I also implemented GTIDs on the slave, in case I ever want to add another slave to this slave. To enable GTIDs, I put the following in the configuration file (my.cnf or my.ini) and restarted MySQL. I added these variables below the auto_increment_increment variable.

[mysqld]
auto_increment_increment = 2
gtid-mode = on
enforce-gtid-consistency = 1

After you have restarted each server, you can take a look at the MASTER STATUS for each server, and the status should be the same:

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000005
         Position: 154
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

You don’t have to do this, but I like to reset the master status on both masters and the slave. Resetting the master deletes all binary log files listed in the index file, resets the binary log index file to be empty, and creates a new binary log file. On each server (both masters and slave servers), I ran this:

mysql> RESET MASTER;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 154
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

You can see the new binary log (mysql-bin.000001), and the beginning position in the binary log (154). Let’s insert some data into one of the master databases, and then check the master’s status again. (And yes, we haven’t turned on replication yet).

Master #1

mysql> INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','1','2014','01');
Query OK, 1 row affected (0.02 sec)

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 574
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 63a7971c-b48c-11e5-87cf-f7b6a723ba3d:1
1 row in set (0.00 sec)

You can see the GTID created for the INSERT statement – 63a7971c-b48c-11e5-87cf-f7b6a723ba3d:1. The first part of the GTID (63a7971c-b48c-11e5-87cf-f7b6a723ba3d) is the UUID of the master. The UUID information can be found in the auto.cnf file, located in the data directory.

Master #1

# cat auto.cnf
[auto]
server-uuid=63a7971c-b48c-11e5-87cf-f7b6a723ba3d

Let’s insert another row of data, check the master status, and then look at the entries of the comics table:

Master #1

mysql> INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','2','2014','02');
Query OK, 1 row affected (0.05 sec)

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 994
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 63a7971c-b48c-11e5-87cf-f7b6a723ba3d:1-2
1 row in set (0.00 sec)

mysql> select * from comics;
+----------+-------------+--------------+----------+-----------+
| comic_id | comic_title | issue_number | pub_year | pub_month |
+----------+-------------+--------------+----------+-----------+
|   100001 | Fly Man     |            1 | 2014     | 01        |
|   100003 | Fly Man     |            2 | 2014     | 02        |
+----------+-------------+--------------+----------+-----------+
2 rows in set (0.00 sec)

You can see how the values in the comic_id column are now incremented by two (2). Now we can insert two rows of data into the second master, look at the master’s status, and look at the entries in the comics table:

Master #2

mysql> INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','3','2014','03');
mysql> INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','4','2014','04');

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000005
         Position: 974
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 75e2e1dc-b48e-11e5-83bb-1438deb0d51e:1-2
1 row in set (0.00 sec)

mysql> select * from comics;
+----------+-------------+--------------+----------+-----------+
| comic_id | comic_title | issue_number | pub_year | pub_month |
+----------+-------------+--------------+----------+-----------+
|   100002 | Fly Man     |            3 | 2014     | 03        |
|   100004 | Fly Man     |            4 | 2014     | 04        |
+----------+-------------+--------------+----------+-----------+
2 rows in set (0.00 sec)

The second master has a different UUID than the first master, and that is how we can tell which GTIDs belong to which master. We now have two sets of GTIDs to replicate over to the slave. Of course, the slave will have its own UUID as well.

Master #1 and Master #2 GTID sets:

63a7971c-b48c-11e5-87cf-f7b6a723ba3d:1-2
75e2e1dc-b48e-11e5-83bb-1438deb0d51e:1-2
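
You can confirm which UUID belongs to which server by querying it directly on each machine:

mysql> SELECT @@server_uuid;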

I always check to make sure the slave isn’t running before I do anything:

Slave

mysql> show slave status\G
Empty set (0.00 sec)

Unlike regular replication, in multi-source replication you have to create a CHANNEL for each master. You will also need to name each channel; I simply named the channels “master-142” and “master-143” to match their server_id values (as well as the last octet of their IP addresses). Here is how you configure the channel for Master #1 (server_id=142):

Slave

mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.142', MASTER_USER='replicate', MASTER_PASSWORD='password', MASTER_AUTO_POSITION = 1 FOR CHANNEL 'master-142';
Query OK, 0 rows affected, 2 warnings (0.23 sec)

This statement produced two warnings, but they can be ignored. I am following the same instructions on the MySQL Manual Page.

Slave

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Note
   Code: 1759
Message: Sending passwords in plain text without SSL/TLS is extremely insecure.
*************************** 2. row ***************************
  Level: Note
   Code: 1760
Message: Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2 rows in set (0.00 sec)

Now we can start the slave for channel ‘master-142‘:

Slave

mysql> START SLAVE FOR CHANNEL 'master-142';
Query OK, 0 rows affected (0.03 sec)

This command is the same as starting the SQL_THREAD and the IO_THREAD at the same time. There may be times when you will want to start and stop either of these threads individually, so here is the syntax – as you have to specify which channel you want to modify:

START SLAVE SQL_THREAD FOR CHANNEL 'master-142';
START SLAVE IO_THREAD FOR CHANNEL 'master-142';
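
The same syntax applies to stopping replication - you can stop a single channel, or leave off the channel clause to stop all of them:

STOP SLAVE SQL_THREAD FOR CHANNEL 'master-142';
STOP SLAVE IO_THREAD FOR CHANNEL 'master-142';
STOP SLAVE;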

You can also issue a simple START SLAVE command, and it will start both threads for all currently configured replication channels. The slave has been started, and we should see the GTIDs from Master #1 already retrieved and applied to the database. (I am not going to display the entire SHOW SLAVE STATUS output, as it is very long.)

Slave

mysql> SHOW SLAVE STATUS FOR CHANNEL 'master-142'\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.142
...
                  Master_UUID: 63a7971c-b48c-11e5-87cf-f7b6a723ba3d
...
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
...
           Retrieved_Gtid_Set: 63a7971c-b48c-11e5-87cf-f7b6a723ba3d:1-2
            Executed_Gtid_Set: 63a7971c-b48c-11e5-87cf-f7b6a723ba3d:1-2
                Auto_Position: 1
...
                 Channel_Name: master-142

We can take a look at the comics table, and see the two entries from the Master #1 database (channel master-142):

Slave

mysql> select * from comics;
+----------+-------------+--------------+----------+-----------+
| comic_id | comic_title | issue_number | pub_year | pub_month |
+----------+-------------+--------------+----------+-----------+
|   100001 | Fly Man     |            1 | 2014     | 01        |
|   100003 | Fly Man     |            2 | 2014     | 02        |
+----------+-------------+--------------+----------+-----------+
2 rows in set (0.00 sec)

Since we have the first master up and running with replication, let’s start replication for the second master:

CHANGE MASTER TO MASTER_HOST='192.168.1.143', MASTER_USER='replicate', MASTER_PASSWORD='password', MASTER_AUTO_POSITION = 1 FOR CHANNEL 'master-143';
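
Don’t forget to start this new channel as well, just as we did for the first one:

mysql> START SLAVE FOR CHANNEL 'master-143';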

And we can check the SLAVE STATUS for this master: (Again, not all of the results are displayed below)

Slave

mysql> SHOW SLAVE STATUS FOR CHANNEL 'master-143'\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.143
...
                  Master_UUID: 75e2e1dc-b48e-11e5-83bb-1438deb0d51e
...
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
...
           Retrieved_Gtid_Set: 75e2e1dc-b48e-11e5-83bb-1438deb0d51e:1-2
            Executed_Gtid_Set: 63a7971c-b48c-11e5-87cf-f7b6a723ba3d:1-2,
75e2e1dc-b48e-11e5-83bb-1438deb0d51e:1-2,
                Auto_Position: 1
...
                 Channel_Name: master-143

We can see the slave has retrieved the two GTIDs (75e2e1dc-b48e-11e5-83bb-1438deb0d51e:1-2) and executed them as well. Looking at the comics table, we can see all four comics have been transferred from the two different masters:

Slave

mysql> select * from comics;
+----------+-------------+--------------+----------+-----------+
| comic_id | comic_title | issue_number | pub_year | pub_month |
+----------+-------------+--------------+----------+-----------+
|   100001 | Fly Man     |            1 | 2014     | 01        |
|   100002 | Fly Man     |            3 | 2014     | 03        |
|   100003 | Fly Man     |            2 | 2014     | 02        |
|   100004 | Fly Man     |            4 | 2014     | 04        |
+----------+-------------+--------------+----------+-----------+
4 rows in set (0.01 sec)
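
With both channels running, you can also monitor them side by side through the replication tables in performance_schema, which are new in MySQL 5.7. For example:

mysql> SELECT CHANNEL_NAME, SERVICE_STATE FROM performance_schema.replication_connection_status;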

Replication took care of the AUTO_INCREMENT values. However, if you were able to see the SQL statements which were being replicated, you would have seen the original INSERT statements:

INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','1','2014','01');
INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','2','2014','02');
INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','3','2014','03');
INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','4','2014','04');

Replication handles the different AUTO_INCREMENT values by writing the value generated for the comic_id column (which uses AUTO_INCREMENT) into the binary log, so the slave applies exactly the same value. We can take a look at the binary log on the master to see the SET INSERT_ID=100001 statement, which carries the value for the comic_id column, being transmitted to the slave along with the original SQL statement:

Master #1

# mysqlbinlog mysql-bin.000001
...
# at 349
#160106 21:08:01 server id 142  end_log_pos 349 CRC32 0x48fb16a2 	Intvar
SET INSERT_ID=100001/*!*/;
#160106 21:08:01 server id 142  end_log_pos 543 CRC32 0xbaf55210 	Query	thread_id=1	exec_time=0	error_code=0
use `comicbookdb`/*!*/;
SET TIMESTAMP=1452132481/*!*/;
INSERT INTO COMICS (comic_title, issue_number, pub_year, pub_month) VALUES('Fly Man','1','2014','01')
/*!*/;
...

You now have two master MySQL databases replicating data to a single MySQL slave database. Let me know if you have any problems following this tutorial. And follow me on Twitter at ScriptingMySQL and TonyDarnell.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

 

Visit http://2044thebook.com for more information.

Use MySQL to store data from Amazon’s API via Perl scripts

I really like Amazon.com and I have been a Prime member for several years. Along with millions of different items for sale, Amazon has an affiliate program, where you can earn money advertising products on your web site. When a visitor to your site clicks on a link and orders a product from Amazon, you earn a small commission on the total sale. As an affiliate, you have access to Amazon’s Product Advertising API for obtaining product information. But, you can use this information for many other purposes.

The Amazon API is like most other APIs, and their API web site provides you with code examples and explains how it all works. I am going to show you a Perl program which you can use to access the API data and store it in a MySQL database. This program was modified from one of the examples on the Amazon API web site.

I wrote a book in 2014, and I wanted to be able to track the book’s ranking on Amazon. I have a couple of friends who wrote books as well, so I tracked their ranking at the same time. By using the API, I can get a lot of information about any product – including my book’s ranking. And, I can keep a historical record of the ranks in a MySQL database. You could also use this to track the price of a product, and you could have the script notify you if the price changed or went below a certain threshold. Example – if you want to know when the price of that 55″ 4K television drops below $599 – you can use the API to check the price once an hour – and send you an alert when the price drops. Most prices and ranks only change (at the most) once an hour, so running the script every five minutes isn’t necessary.

To access Amazon’s API, you will need to register as an Amazon affiliate, and obtain your own Access Keys (Access Key ID and Secret Access Key). In the Perl script, this information goes here:

use constant myAWSId	    => 'Access Key ID';
use constant myAWSSecret    => 'Secret Access Key';
use constant myEndPoint	    => 'ecs.amazonaws.com';

The following Perl script is an example of getting the rank for a particular item. To use the script and to access Amazon’s API, you have to supply the product’s ASIN (Amazon Standard Identification Number), or for a book, you can supply the ISBN (International Standard Book Number). The ASIN is usually found in the URL for a product – as in http://www.amazon.com/gp/product/B00R0ZM5W4 – where B00R0ZM5W4 is the ASIN. I use a MySQL table to store the ranking information (you can modify the table to also include other information). Here is the CREATE TABLE statement for this table:

CREATE TABLE `ranks` (
  `idranks` int(11) NOT NULL AUTO_INCREMENT,
  `product_id` varchar(15) DEFAULT NULL,
  `product_title` varchar(100) DEFAULT NULL,
  `product_rank` varchar(15) DEFAULT NULL,
  `rank_datetime` datetime DEFAULT NULL,
  PRIMARY KEY (`idranks`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

The script also has a subroutine (named ConnectToMySql) which uses connection information stored in a text file in the parent directory of the one where the script is executed. In this example, the file is named accessAMAZON, and it contains this connection information (in this order):

database_name
IP_address
mysql_user_name
mysql_password

You can hardcode the connection information directly into the script if you prefer.
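
If you do hardcode it, the ConnectToMySql subroutine from the script shrinks to something like this sketch - the database name matches the one used in the script, while the host, user, and password are placeholders you would fill in:

sub ConnectToMySql {
   # hardcoded connection information - substitute your own values
   my $connectionInfo = "dbi:mysql:amazonrank;192.168.1.2:3306";
   my $l_dbh = DBI->connect($connectionInfo, "your_mysql_user", "your_mysql_password");
   return $l_dbh;
}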

The script will also output all of the potential variables and values for each ASIN/ISBN. This information is stored in a text file with a name equal to the ASIN/ISBN and a suffix of “.txt” – for example: B00R0ZM5W4.txt. There are over a hundred different variables.

The script is named amazonrank.pl and may be found on github at: https://github.com/ScriptingMySQL/PerlFiles.

Here is the script: (remember to add your own Amazon access key information into the script)

#!/usr/bin/perl -w

##############################################################################################
# Copyright 2009,2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file 
# except in compliance with the License. A copy of the License is located at
#
#	   http://aws.amazon.com/apache2.0/
#
# or in the "LICENSE.txt" file accompanying this file. This file is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under the License. 
#
#############################################################################################
#
#  Amazon Product Advertising API
#  Signed Requests Sample Code
#
#  API Version: 2009-03-31/2013-08-01
#
#############################################################################################

#use strict;
#use warnings;

#no warnings 'uninitialized';

use Data::Dumper;
use DBD::mysql;
use DBI;

use RequestSignatureHelper;
use LWP::UserAgent;
use XML::Simple;

use constant myAWSId		=> 'Access Key ID';
use constant myAWSSecret	=> 'Secret Access Key';
use constant myEndPoint		=> 'ecs.amazonaws.com';

my $Database = "amazonrank";

# see if user provided ItemId on command-line
# my $itemId = shift @ARGV || '0545010225';

# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

# date/time stamp used for the rank_datetime column
$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# the ASIN/ISBN values to look up - substitute your own
# NOTE: this block was garbled in the original post and has been reconstructed
@itemIds = ("B00R0ZM5W4");

foreach $itemId (@itemIds) {

# the full parsed API output is written to a text file named after the ASIN/ISBN
$fileout = "$itemId.txt";
open(OUT, "> $fileout") or die print "Couldn't open log_file: $!";

print "Retrieving rank for $itemId - ";

# Set up the helper
my $helper = new RequestSignatureHelper (
	+RequestSignatureHelper::kAWSAccessKeyId => myAWSId,
	+RequestSignatureHelper::kAWSSecretKey => myAWSSecret,
	+RequestSignatureHelper::kEndPoint => myEndPoint,
);

# A simple ItemLookup request
my $request = {
	Service => 'AWSECommerceService',
	Operation => 'ItemLookup',
	Version => '2013-08-01',
	ItemId => $itemId,
	AssociateTag => 'scmy-20',
	ResponseGroup => 'Large',
};

# Sign the request
my $signedRequest = $helper->sign($request);

# We can use the helper's canonicalize() function to construct the query string too.
my $queryString = $helper->canonicalize($signedRequest);
my $url = "http://" . myEndPoint . "/onca/xml?" . $queryString;
#print "Sending request to URL: $url \n";

my $ua = new LWP::UserAgent();
my $response = $ua->get($url);
my $content = $response->content();
#print "Received Response: $content \n";

my $xmlParser = new XML::Simple();
my $xml = $xmlParser->XMLin($content);

# This prints out all of the item information into a text file
print OUT "Parsed XML is: " . Dumper($xml) . "\n";

if ($response->is_success()) {

	# Here is where you extract the information for each item
	my $title = $xml->{Items}->{Item}->{ItemAttributes}->{Title};
	my $rank = $xml->{Items}->{Item}->{SalesRank};
	my $price = $xml->{Items}->{Item}->{Offers}->{Offer}->{OfferListing}->{Price}->{FormattedPrice};

	# option to print to screen - uncomment this next line
	#	print "Item $itemId is titled \"$title\" and ranked $rank\n";

# make sure we actually received a rank (even a single-digit one)
if (defined($rank) && length($rank) > 0)

{
	# replace single quote marks in the title so they do not break the INSERT statement below
	$title =~ s/\'/\~/g;

	$dbh = ConnectToMySql($Database);	
	$query = "insert into ranks (product_id, product_rank, rank_datetime, product_title) values ('$itemId', '$rank', '$DateTime','$title')";
	$sth = $dbh->prepare($query);
	$sth->execute();
	$dbh->disconnect;

	print "$rank || $title || $itemId || \$$price\n\n";
}

else

{
	print "Rank for: $title is unavailable.\n\n";
}

# optional sleep 
# sleep 5;


} else {
	my $error = findError($xml);
	if (defined $error) {
	print "Error: " . $error->{Code} . ": " . $error->{Message} . "\n";
	} else {
	print "Unknown Error!\n";
	}
}

sub findError {
	my $xml = shift;
	
	return undef unless ref($xml) eq 'HASH';

	if (exists $xml->{Error}) { return $xml->{Error}; };

	for (keys %$xml) {
	my $error = findError($xml->{$_});
	return $error if defined $error;
	}

	return undef;
}


# end - foreach
}

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

  # my ($db) = @_;

   open(PW, "<..\/accessAMAZON") || die "Can't access login credentials";
   my $db = <PW>;
   my $host = <PW>;
   my $userid = <PW>;
   my $passwd = <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

I am not the best Perl programmer, nor am I an expert at the Amazon API, so there may be a better/easier way to do this. Good luck with the scripts and let me know how they work for you. And follow me on Twitter at ScriptingMySQL and TonyDarnell.

Add RSS feeds to your Twitter stream using MySQL and Perl

Adding good content to Twitter can be a pain. I can’t do it during working hours, and I don’t have much time at night. But, the more content you have, the more followers you can gain, and the more your original tweets can be seen (hopefully). I have written several posts about using the latest Perl-Twitter API – Net::Twitter::Lite::WithAPIv1_1, so you might want to check these out as well.

Use MySQL and Perl to automatically find, follow and unfollow twitter users

Using Perl to retrieve direct messages from Twitter, insert messages into a MySQL database and then delete the messages

Using Perl and MySQL to automatically respond to retweets on twitter

Using Perl to send tweets stored in a MySQL database to twitter

However, finding good supplemental content is easy. There are plenty of companies and web sites which offer RSS (Rich Site Summary or Really Simple Syndication) feeds that you can use on Twitter. In this post, I will show you how to capture the RSS feed from a web site and tweet the links with your Twitter account.

One warning – Twitter has strict guidelines for using APIs on their site. The rate limits may be found at https://dev.twitter.com/rest/public/rate-limiting. In my previous post, I included a Perl script (rate_limit.pl) that prints out the API rate limits for your application. You can use this script to monitor your usage. The key with using the Twitter API is to not be too aggressive, or your app will be banned by Twitter. For example, Twitter does not allow bulk follows and unfollows – so having patience is important. And if you use these scripts to send SPAM, your app and Twitter account will be banned.
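
If you would rather check your limits programmatically than run a separate script, the same Net::Twitter::Lite::WithAPIv1_1 handle used in the scripts below can query the rate-limit endpoint - a minimal sketch, assuming $nt has already been created as shown later in this post:

# returns a hash reference describing the remaining calls per API resource
my $limits = $nt->rate_limit_status();
print "user_timeline calls remaining: $limits->{resources}{statuses}{'/statuses/user_timeline'}{remaining}\n";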

For this post, I created two Perl scripts. The first grabs the RSS information and inserts it into a MySQL database. The second script pulls the information from the MySQL database and sends a tweet one record at a time. You can set up a cron job to run both of these scripts, but remember the RSS script needs to run first.

The key when parsing RSS feeds is to find a unique number/identifier which you can use for each feed item. Some RSS feeds will include a <guid> tag, but often you have to parse the URL link to get a unique number/identifier. The script uses this unique identifier to check and see if this RSS feed item is already in the database, so you don’t have duplicate RSS items. I tested these scripts on a dozen different RSS feeds, and it works really well.
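
As a hypothetical example of that parsing, if a feed’s item links end in a numeric post ID, a single regular expression is usually enough (the URL pattern below is illustrative, not from a real feed):

	# e.g. http://www.example.com/blog/?p=12345  becomes  12345
	my ($post_id) = $item->link() =~ /(\d+)\s*$/;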

Before you attempt to use these scripts, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

We will need to create two tables: one to hold the RSS feed information, and another to hold the tweet history. I use the tweet history to delete older tweets with another Perl script I wrote. Here is the SQL to create the tables:

CREATE TABLE `rss_oracle` (
  `id` int(9) NOT NULL AUTO_INCREMENT,
  `id_post` bigint(14) NOT NULL DEFAULT '0',
  `post_title` varchar(256) DEFAULT NULL,
  `post_url` varchar(256) DEFAULT NULL,
  `post_author` varchar(48) DEFAULT NULL,
  `post_date` datetime DEFAULT NULL,
  `tweet_sent` varchar(3) DEFAULT NULL,
  `tweet_sent_date` datetime DEFAULT NULL,
  PRIMARY KEY (`id`,`id_post`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

CREATE TABLE `history` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `tweet` char(140) DEFAULT NULL,
  `tweet_id` varchar(30) DEFAULT NULL,
  `tweet_update` datetime DEFAULT NULL,
  `error` char(3) DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

For our RSS feed, we will be grabbing the RSS from Oracle‘s web site. You will need to change the $source variable to whatever RSS feed you want to use. As you grab the data, you will need to test your script to see what non-ASCII characters are included in the title. This example script should translate most characters to ASCII ones, but you will want to check the output before actually tweeting the posts. When you find strange characters, you can simply truncate the RSS database table and re-run the script, or you can uncomment the print statements to see what is being returned.
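
Truncating the table is a single statement, run from the mysql client:

mysql> TRUNCATE TABLE rss_oracle;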

In the ConnectToMySql subroutine used in the Perl scripts, I store the MySQL login credentials in a text file in the parent directory of the one where my Perl script is located. This file – named accessTweets – contains this information (in this order):

database_name
hostname or IP
MySQL user name
password

The scripts are also available on GitHub – https://github.com/ScriptingMySQL/PerlFiles. (For some reason, WordPress likes to change the code.)

rss_oracle.pl

#!/usr/bin/perl -w

use LWP::Simple 'get';
use Encode;
use utf8;
use Text::Unidecode;
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use XML::FeedPP;

    my $source = 'https://www.oracle.com/corporate/press/rss/rss-pr.xml';
    my $feed = XML::FeedPP->new( $source );

foreach my $item ( $feed->get_item() ) {

	$post_id = $item->guid();
	$title = $item->title();

	# this is where you have to replace non-ASCII characters
	# each RSS feed will use different non-ASCII characters
	$title = decode_utf8( $title );
	$title =~ s/’/\^/g;
	$title =~ s/\&\#8217;/^/g;
	$title =~ s/\&\#8216;/^/g;
	$title =~ s/\&\#8212;/-/g;
	$title =~ s/\&\#8230;/-/g;
	$title =~ s/'/\^/g;
	$title =~ s/‘/\^/g;
	$title =~ s/’/^/g;
	$title =~ s/…/.../g;
	$title =~ s/—/-/g;
	$title =~ s/-/-/g;
	$title =~ s/–/-/g;
	$title =~ s/ 8212 /-/g;
	$title =~ s/ 8230 /-/g;
	$title =~ s/<em>//g;
	$title =~ s/<\/em>//g;
	$title =~ s/[^a-zA-Z0-9 ~,._*:?\$^-]//g;

	$link = $item->link();

	# uncomment this line to test
	#print "$post_id | $title | $link\n";
	
	# see if we already have this post in the RSS database	
	$dbh = ConnectToMySql($Database);
	$query = "select id_post FROM rss_oracle where id_post = '$post_id' limit 1";	
	$sth = $dbh->prepare($query);
	$sth->execute();

        #print "\n$query\n\n";

	# loop through our results - one user at a time
	while (@data = $sth->fetchrow_array()) {
		$id_post_found = $data[0];
	# end - while
	}

		#print "id_post_found $id_post_found \n";

		if (length($id_post_found) > 1)
			
		{
			#print "Found $id_post_found...\n";
			$id_post_found = "";
		}
			
		else
			
		{
			$dbh2 = ConnectToMySql($Database);
			$query2 = "insert into rss_oracle (id_post, post_title, post_url) values ('$post_id', '$title', '$link')";	
			$sth2 = $dbh2->prepare($query2);
			# during testing, comment this next line to prevent the data from being inserted into the database
			$sth2->execute();
			#print "$query2\n";

			$title = "";
			$link = "";
			$id_post_found = "";
			
		}

	# foreach my $item
    }

exit;

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db = <PW>;
   my $host = <PW>;
   my $userid = <PW>;
   my $passwd = <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

And here is the script to do the tweeting. You will need to add your Twitter Name to the $My_Twitter_User variable.

tweet-oracle.pl

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

$My_Twitter_User = "YourTwitterNameHere";

# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

# date/time stamp used for the tweet_sent_date and tweet_update columns
$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# grab the oldest RSS entry which has not yet been tweeted
# NOTE: this block was garbled in the original post; the query is a reconstruction
# based on the columns used below and the UPDATE statement later in the script
    $dbh = ConnectToMySql($Database);
    $query = "select id_post, post_title, post_url FROM rss_oracle where tweet_sent is NULL order by id limit 1";
    $sth = $dbh->prepare($query);
    $sth->execute();
    
    #print "$query \n";
    
			while (@data = $sth->fetchrow_array()) {
            
					$id_post = $data[0];
					$post_title = $data[1];
					$post_url = $data[2];

				$post_title =~s/  / /g;
				$post_title =~s/ ampamp / and /g;

			# while (@data = $sth->fetchrow_array()) {
			}
			
if (length($post_title) > 100)

	{
		$title_trimmed = substr($post_title,0,105);
		$title_trimmed = "$title_trimmed...";
		#print "TRIM $title_trimmed\n";

			if ($title_trimmed =~ " Oracle ")

			{
				$title_trimmed =~ s/ Oracle / \#Oracle /;
			}

			else

			{
				$add_Hashtag = " \#Oracle ";
			}		


	}
	
	else
	
	{
		$title_trimmed = "$post_title";
		$title_trimmed =~ s/ Oracle / \#Oracle /;
		#print "x $title_trimmed\n";
	}


$tweet = "$title_trimmed $add_Hashtag \n\n$post_url";

$tweet =~ s/  / /g;
$tweet =~ s/  / /g;

$add_Hashtag = "";

#print "$tweet \n";

#exit;

# ----------------------------------------------------------------------------
# find carets and substitute them with single quotes
# ----------------------------------------------------------------------------

$tweet =~ s/\^/\'/g;

# ----------------------------------------------------------------------------
# send tweet
# ----------------------------------------------------------------------------


# Credentials for your twitter application
# You will need to substitute your own values for these variables
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      apiurl => 'http://api.twitter.com/1.1',
      ssl                 => 1
);
	# send the tweet
	my $results = eval { $nt->update("$tweet") };

#		    print "---DUMPER START---\n";
#			print Dumper $results;
#		    print "---DUMPER END---\n\n";


# ----------------------------------------------------------------------------
# update mysql with new date for last_tweet date/time
# ----------------------------------------------------------------------------

$dbh = ConnectToMySql($Database);
$query = "UPDATE rss_oracle SET tweet_sent_date = '$DateTime' , tweet_sent = 'yes' where id_post = '$id_post'";
$sth = $dbh->prepare($query);
$sth->execute();

# ----------------------------------------------------------------------------
# get the status id of the last tweet
# ----------------------------------------------------------------------------

my $statuses = $nt->user_timeline({ user => "$My_Twitter_User", count=> 1 });

for my $status ( @$statuses ) {
	$tweet_id = "$status->{id}";
#	print "Tweet ID $tweet_id\n";
}


# ----------------------------------------------------------------------------
# replace special characters
# ----------------------------------------------------------------------------

$tweet =~ s/\\\n/~/g;
$tweet =~ s/\n/~/g;
$tweet =~ s/\'/^/g;

# update mysql with new date for last_tweet date/time

$dbh = ConnectToMySql($Database);	
$query = "insert into history (tweet,tweet_id,tweet_update) values ('$tweet','$tweet_id','$DateTime')";
$sth = $dbh->prepare($query);
$sth->execute();

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db = <PW>;
   my $host = <PW>;
   my $userid = <PW>;
   my $passwd = <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

I am not the best Perl programmer, nor am I an expert at the Twitter API, so there may be a better/easier way to do this. Good luck with the scripts and let me know how they work for you. And follow me on Twitter at ScriptingMySQL and TonyDarnell.

Using Perl to retrieve direct messages from Twitter, insert messages into a MySQL database and then delete the messages

In two earlier posts, I gave some examples on how to use Perl to send tweets stored in a MySQL database to Twitter, and then how to automatically reply to your retweets with a “thanks”. In this post, I will show you how to automatically download your direct messages from Twitter, store the messages in a MySQL database, and then delete them.

I don’t like the way Twitter makes me read my direct messages. Granted, the majority of them are not real messages. The message is usually thanking me for following the sender, and then there is a personal website link or a link to a product they are selling. But if I want to delete a direct message, I have to click on the message, click the trash can, and then confirm I want to delete the message. This process is too cumbersome.

I wrote a Perl script that connects to Twitter, downloads your direct messages, inserts them into a MySQL database, and then deletes the direct message. I had a year of direct messages in my Inbox, and in a few minutes, they were gone. But I still had a copy in my MySQL database, in case I wanted to go back and read them or respond.

Just like in the first post, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

One caveat: twitter has a rate limit on how often you may connect with your application – depending upon what you are trying to do. See Rate Limiting and Rate Limits for more information. So, check your limits before you try downloading a large number of direct messages.

Just like the other two apps, we will be using the Net::Twitter module, as well as a few more modules which are listed in the beginning of the script.

There are two kinds of direct messages – the messages you sent and the messages you have received. So, we will be looking at two different Perl scripts to retrieve each kind of message. In the Net::Twitter module, there are a lot of different variables you can capture. In these examples, I only grabbed what I thought I needed. If there are other data variables you want, you will have to modify the tables and scripts.

First, you will need to create two tables to store your direct messages – one for the sent messages and one for the received messages. Here are the CREATE TABLE statements for both tables:

CREATE TABLE `MESSAGES_SENT` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `creation_datetime` datetime DEFAULT NULL,
  `message_id` bigint(20) DEFAULT NULL,
  `sender_screen_name` varchar(16) DEFAULT NULL,
  `recipient_screen_name` varchar(16) DEFAULT NULL,
  `message_text` varchar(140) DEFAULT NULL,
  `sender_friends_count` int(10) DEFAULT NULL,
  `sender_time_zone` varchar(64) DEFAULT NULL,
  `sender_description` varchar(160) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1


CREATE TABLE `MESSAGES_RECEIVED` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `creation_datetime` datetime DEFAULT NULL,
  `message_id` bigint(20) DEFAULT NULL,
  `sender_screen_name` varchar(16) DEFAULT NULL,
  `recipient_screen_name` varchar(16) DEFAULT NULL,
  `message_text` varchar(140) DEFAULT NULL,
  `sender_friends_count` int(10) DEFAULT NULL,
  `sender_time_zone` varchar(64) DEFAULT NULL,
  `sender_description` varchar(160) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1

I am only storing a few of the variables. In each script, if you want to see all of the variables which are available, you will need to uncomment this line:

#          print Dumper $statuses;

When using the Dumper command, be sure that you are only downloading a single message. Otherwise, the script will pull all of the variables and their values for all of the messages. You only need one message to see all of the variables. To only retrieve one message, be sure that the value $number_of_messages is equal to 1:

$number_of_messages = 1;

Afterwards, you can change this value to whatever you want – just be sure to watch your Twitter limits.

The print Dumper $statuses; line will display all of the possible variables for a single message. For the “sent” script, there are about 140 variables. For the “received” script, there are about 67 variables.

Here is the “Get direct messages I sent to other people” script, which uses the MESSAGES_SENT table:

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
    
# ----------------------------------------------------------------------------
# get twitter direct messages sent
# ----------------------------------------------------------------------------

# you will need to fill in this information about your application and your twitter account
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

$number_of_messages = 1;

  eval {
      my $statuses = $nt->sent_direct_messages({ count => $number_of_messages });

# uncomment this line and set the count above to 1 (one) to see all of the variables that are available
#          print Dumper $statuses;

	for my $status ( @$statuses ) {
		
		$creation_date = $status->{created_at};

		# convert $creation_date to MySQL datetime format
		#  0   1   2  3         4     5
		# Fri Sep 04 07:32:05 +0000 2015
		
		@creation_date_array = split(" ",$creation_date);
		
		$creation_date_month = $creation_date_array[1];
		
		if ($creation_date_month =~ "Jan") { $creation_date_month = "01"}
		if ($creation_date_month =~ "Feb") { $creation_date_month = "02"}
		if ($creation_date_month =~ "Mar") { $creation_date_month = "03"}
		if ($creation_date_month =~ "Apr") { $creation_date_month = "04"}
		if ($creation_date_month =~ "May") { $creation_date_month = "05"}
		if ($creation_date_month =~ "Jun") { $creation_date_month = "06"}
		if ($creation_date_month =~ "Jul") { $creation_date_month = "07"}
		if ($creation_date_month =~ "Aug") { $creation_date_month = "08"}
		if ($creation_date_month =~ "Sep") { $creation_date_month = "09"}
		if ($creation_date_month =~ "Oct") { $creation_date_month = "10"}
		if ($creation_date_month =~ "Nov") { $creation_date_month = "11"}
		if ($creation_date_month =~ "Dec") { $creation_date_month = "12"}
		
		$creation_date_day_of_month = $creation_date_array[2];
		$creation_date_year = $creation_date_array[5];
		$creation_date_time = $creation_date_array[3];
		$creation_datetime = "$creation_date_year-$creation_date_month-$creation_date_day_of_month $creation_date_time";

		$message_id = $status->{id};
		
		$sender_screen_name = $status->{sender}{screen_name};
		$sender_screen_name =~ s/\'/\~/g;

		$recipient_screen_name = $status->{recipient_screen_name};
		$recipient_screen_name =~ s/\'/\~/g;
				
		$message_text = $status->{text};
		# remove carriage returns and single tick marks
		$message_text =~ s/\n/ /g;
		$message_text =~ s/\'/\~/g;

		$sender_friends_count = $status->{recipient}{followers_count};

		$sender_time_zone = $status->{sender}{time_zone};
		
		$sender_description = $status->{sender}{description};
		$sender_description =~ s/\n/ /g;
		$sender_description =~ s/\'/\~/g;

		# uncomment this line if you want to print
		# print "$creation_date_day_of_month - $creation_datetime - $message_id - $sender_screen_name - $recipient_screen_name - $message_text - $sender_friends_count - $sender_time_zone - $sender_description\n";


$dbh = ConnectToMySql($Database);	
$query = "insert into messages_sent (creation_datetime,message_id,sender_screen_name,recipient_screen_name,message_text,sender_friends_count,sender_time_zone,sender_description) values ('$creation_datetime','$message_id','$sender_screen_name','$recipient_screen_name','$message_text','$sender_friends_count','$sender_time_zone','$sender_description')";
#print "\nquery $query\n";
$sth = $dbh->prepare($query);
$sth->execute();


# stop the program if we have an error with the database
if ( $sth->err )
{
	die "ERROR! return code:" . $sth->err . " error msg: " . $sth->errstr . "\n";
}

else

{
	 my $destroy_id = eval { $nt->destroy_direct_message("$message_id") };
}



# you can change this so you don't go over your twitter connection limits
sleep 5;


	# end for my $status
	}

# end eval
  };

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db = <PW>;
   my $host = <PW>;
   my $userid = <PW>;
   my $passwd = <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

Here is the “Get direct messages I received from other people” script, which uses the MESSAGES_RECEIVED table:

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
       
# ----------------------------------------------------------------------------
# get twitter direct messages received
# ----------------------------------------------------------------------------

# you will need to fill in this information about your application and your twitter account
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

$number_of_messages = 1;

  eval {
      my $statuses = $nt->direct_messages({ count => $number_of_messages });

			# uncomment this line and set the $number_of_messages variable above to 1 (one) to see all of the variables that are available
#			print Dumper $statuses;

	for my $status ( @$statuses ) {
		
		$creation_date = $status->{created_at};

		# convert $creation_date to MySQL datetime format
		#  0   1   2  3         4     5
		# Fri Sep 04 07:32:05 +0000 2015
		
		@creation_date_array = split(" ",$creation_date);
		
		$creation_date_month = $creation_date_array[1];
		
		if ($creation_date_month =~ "Jan") { $creation_date_month = "01"}
		if ($creation_date_month =~ "Feb") { $creation_date_month = "02"}
		if ($creation_date_month =~ "Mar") { $creation_date_month = "03"}
		if ($creation_date_month =~ "Apr") { $creation_date_month = "04"}
		if ($creation_date_month =~ "May") { $creation_date_month = "05"}
		if ($creation_date_month =~ "Jun") { $creation_date_month = "06"}
		if ($creation_date_month =~ "Jul") { $creation_date_month = "07"}
		if ($creation_date_month =~ "Aug") { $creation_date_month = "08"}
		if ($creation_date_month =~ "Sep") { $creation_date_month = "09"}
		if ($creation_date_month =~ "Oct") { $creation_date_month = "10"}
		if ($creation_date_month =~ "Nov") { $creation_date_month = "11"}
		if ($creation_date_month =~ "Dec") { $creation_date_month = "12"}
		
		$creation_date_day_of_month = $creation_date_array[2];
		$creation_date_year = $creation_date_array[5];
		$creation_date_time = $creation_date_array[3];
		$creation_datetime = "$creation_date_year-$creation_date_month-$creation_date_day_of_month $creation_date_time";

		$message_id = $status->{id};
		
		$sender_screen_name = $status->{sender}{screen_name};
		$sender_screen_name =~ s/\'/\~/g;
		
		$message_text = $status->{text};
		# remove carriage returns and single tick marks
		$message_text =~ s/\n/ /g;
		$message_text =~ s/\'/\~/g;

		$sender_friends_count = $status->{sender}{friends_count};

		$sender_time_zone = $status->{sender}{time_zone};
		
		$sender_description = $status->{sender}{description};
		$sender_description =~ s/\n/ /g;
		$sender_description =~ s/\'/\~/g;

		$recipient_screen_name = $status->{recipient_screen_name};
		$recipient_screen_name =~ s/\'/\~/g;
		
		# uncomment this line if you want to see the output
		# print "$creation_datetime - $message_id - $sender_screen_name - $recipient_screen_name - $message_text - $sender_friends_count - $sender_time_zone - $sender_description\n";

$dbh = ConnectToMySql($Database);	
$query = "insert into messages_received (creation_datetime,message_id,sender_screen_name,recipient_screen_name,message_text,sender_friends_count,sender_time_zone,sender_description) values ('$creation_datetime','$message_id','$sender_screen_name','$recipient_screen_name','$message_text','$sender_friends_count','$sender_time_zone','$sender_description')";
#print "\nquery $query\n";
$sth = $dbh->prepare($query);
$sth->execute();

# stop the program if we have an error with the database
if ( $sth->err )
{
	die "ERROR! return code:" . $sth->err . " error msg: " . $sth->errstr . "\n";
}

else

{
	 my $destroy_id = eval { $nt->destroy_direct_message("$message_id") };
}

# you can change this so you don't go over your twitter connection limits
sleep 5;

	# end for my $status
	}

# end eval
  };

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db= ;
   my $host= ;
   my $userid= ;
   my $passwd= ;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory above where my Perl script is located (hence the ../accessTweets path in the open statement). This file – named accessTweets – contains this information:

database_name
hostname or IP
MySQL user name
password

You may hardcode these values into the script if you don’t want to mess with storing them in a file.

I tested these scripts on two twitter accounts, and everything worked for me – but I ran out of messages quickly. Let me know if you have problems. I am not the best Perl programmer, nor am I an expert at the Twitter API, so there may be a better/easier way to do this.
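I will mention one possible improvement: instead of replacing single quotes with tilde characters before building the INSERT statement, you could let DBI quote the values for you with placeholders. Here is a minimal sketch, using the messages_received insert as an example:

$dbh = ConnectToMySql($Database);

# the ? placeholders are filled in - and safely quoted - by execute()
$sth = $dbh->prepare("insert into messages_received (creation_datetime, message_id, sender_screen_name, recipient_screen_name, message_text, sender_friends_count, sender_time_zone, sender_description) values (?, ?, ?, ?, ?, ?, ?, ?)");
$sth->execute($creation_datetime, $message_id, $sender_screen_name, $recipient_screen_name, $message_text, $sender_friends_count, $sender_time_zone, $sender_description);

With placeholders, the s/\'/\~/g substitutions become unnecessary, and the original quote characters are preserved in the database.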

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

 

Visit http://2044thebook.com for more information.

MySQL Enterprise Audit – parsing audit information from log files, inserting into MySQL table

The MySQL Enterprise Audit plug-in is part of the MySQL Enterprise Edition (available through a paid license). Basically, Enterprise Audit tracks everything that is happening on your MySQL server, and can be used to protect/detect the misuse of information, and to meet popular compliance regulations including HIPAA, Sarbanes-Oxley, and the PCI Data Security Standard.

MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out of box, easy to use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines.

When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.
(from https://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html)

When you enable MySQL Enterprise Audit, log files are generated in your MySQL data directory. You can use tools like MySQL Workbench (Enterprise Edition) or Oracle Audit Vault to import the log data, to view the information and to generate reports.

I was talking with a client, and he wanted to know if the audit data could be stored in a table. Currently (as of MySQL 5.6.25), the audit information is stored as XML in the audit log files. So, I decided to write a quick Perl script that would parse the XML log files and insert the information into a MySQL database. You will need to set the size of your audit log files in your my.cnf or my.ini configuration file via the audit_log_rotate_on_size variable. You might need to adjust the size of your log files based upon database activity and how well the script parses the log files. If your log files are very large, the Perl script might have trouble processing them, in which case you should decrease the size of your log files and run the script more frequently.
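For example, this setting rotates the audit log whenever it grows past roughly one megabyte (the value is in bytes, and is only a starting point – adjust it for your activity level):

[mysqld]
audit_log_rotate_on_size=1048576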


CAVEAT
Enterprise Audit does require a license from MySQL. If you are interested in an Enterprise subscription, contact me via the comment section below. If you are an Enterprise customer, you will need to configure Enterprise Audit first. See the Enterprise Audit online documentation page for more information, or contact MySQL Support.


For the data fields, I used the audit log file format information found at The Audit Log File page on MySQL.com.

My MySQL server doesn’t have a lot of activity, so I tried to size the data fields to accommodate the data each field is likely to hold. There may be instances where you will have to increase the size of these fields or change their data types. The largest field is the SQL_TEXT field, which will contain your SQL statements. Every table has a maximum row size of 65,535 bytes, so for this example the largest possible size of the SQL_TEXT field is around 63,200 bytes (65,535 bytes minus the sum of the sizes of all of the other fields, and minus the 1-byte or 2-byte length prefix used for each varchar field). In this example, the SQL_TEXT field is set to 8,096 bytes, so you may need to increase or decrease this value.

I used varchar data types for each field, excluding the primary key field named ID. I did not spend a lot of time on the database schema, so you might want to modify it a little. I am sure that some of the fields are integers, but I did not have enough data in my log files to positively determine all of the possible values for each field. I did read the online manual, and it stated that CONNECTION_ID, SERVER_ID, STATUS, STATUS_CODE and VERSION were unsigned integers – but I left them as varchar.
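If you do need to resize a column later, an ALTER TABLE such as this sketch will do it – just keep the new size within the 65,535-byte row limit discussed above:

ALTER TABLE audit_information.audit_parsed MODIFY SQL_TEXT varchar(16384) DEFAULT NULL;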


NOTICE
This script requires the use of the new format for the audit log files, which is available in MySQL versions 5.6.20 or later.
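Depending on which 5.6 release you are running, the server may still default to the old format, so you can select the new format explicitly in your configuration file:

[mysqld]
audit_log_format=NEW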


I created a database along with two tables; one to store the log file information, and a history table to keep track of what files had already been parsed and inserted into MySQL, as well as the number of log file entries. The CREATE DATABASE and CREATE TABLE syntax is as follows:

CREATE DATABASE `audit_information` /*!40100 DEFAULT CHARACTER SET latin1 */

CREATE TABLE `audit_parsed` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `COMMAND_CLASS` varchar(64) DEFAULT NULL,
  `CONNECTIONID` varchar(32) DEFAULT NULL,
  `DB_NAME` varchar(64) DEFAULT NULL,
  `HOST_NAME` varchar(256) DEFAULT NULL,
  `IP_ADDRESS` varchar(16) DEFAULT NULL,
  `MYSQL_VERSION` varchar(64) DEFAULT NULL,
  `COMMAND_NAME` varchar(64) DEFAULT NULL,
  `OS_LOGIN` varchar(64) DEFAULT NULL,
  `OS_VERSION` varchar(256) DEFAULT NULL,
  `PRIV_USER` varchar(16) DEFAULT NULL,
  `PROXY_USER` varchar(16) DEFAULT NULL,
  `RECORD_ID` varchar(64) DEFAULT NULL,
  `SERVER_ID` varchar(32) DEFAULT NULL,
  `SQL_TEXT` varchar(8096) DEFAULT NULL,
  `STARTUP_OPTIONS` varchar(1024) DEFAULT NULL,
  `COMMAND_STATUS` varchar(64) DEFAULT NULL,
  `STATUS_CODE` varchar(11) DEFAULT NULL,
  `DATE_TIMESTAMP` varchar(24) DEFAULT NULL,
  `USER_NAME` varchar(128) DEFAULT NULL,
  `LOG_VERSION` varchar(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

CREATE TABLE `audit_history` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `AUDIT_LOG_NAME` varchar(64) DEFAULT NULL,
  `PARSED_DATE_TIME` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `LOG_ENTRIES` int(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

The Perl script finds the non-active log files (which end in .xml – example: audit.log.14357895017796690.xml), parses the data, creates an SQL file with INSERT statements, imports the data via the mysql command-line program, and then moves the log file(s) and SQL file(s) to a directory. The history table records what files have been processed, so you don’t accidentally process the same file twice.
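The history table also gives you an easy way to re-process a file: delete its row first (a sketch, using one of the file names from my test run below), and remember to move the .xml file back out of the audit_files directory:

DELETE FROM audit_information.audit_history WHERE AUDIT_LOG_NAME = '/usr/local/mysql/data/audit.log.14357895017796690.xml';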

In the beginning of the Perl script, there are several values you need to replace to match your system. The values are under the section titled “values needed”. Here is the Perl script (named audit.pl):

#!/usr/bin/perl
# audit.pl

use DBI;
use CGI;
use XML::Simple;

#----------------------------------------------------------
# values needed
$Database = "audit_information";
$MYSQL_DATA_DIR = "/usr/local/mysql/data";
$MySQL_Host_IP_Name = "192.168.1.2";
$mysql_user = "root";
$mysql_password = "password_needed";

# directory to store old audit files after parsing
$audit_directory = "$MYSQL_DATA_DIR/audit_files";

# make an audit_files directory if one does not exist
mkdir($audit_directory) unless(-d $audit_directory);
#----------------------------------------------------------


#----------------------------------------------------------
# for each rotated audit log file (file names ending in .xml), do this
@files = glob("$MYSQL_DATA_DIR/audit.log.*.xml");
foreach $file_name_to_parse (@files) {

	#----------------------------------------------------------
	# check to see if file has already been parsed
	$dbh1 = ConnectToMySql($Database);
	$query1 = "select AUDIT_LOG_NAME from audit_history where AUDIT_LOG_NAME = '$file_name_to_parse'";
	$sth1 = $dbh1->prepare($query1);
	$sth1->execute();

          while (@data = $sth1->fetchrow_array()) {
            
         	   $audit_log_name = $data[0];

			}

	# if length of audit_log_name is less than 1, process file
	if (length($audit_log_name) < 1) {

		print "Parsing - $file_name_to_parse\n";

		# build the name of the SQL output file from the audit log file name
		$PARSED_FILE = $file_name_to_parse;
		$PARSED_FILE =~ s/\.xml$/_parsed.sql/;

		open(LOGOUT, "> $PARSED_FILE") or die "Couldn't open log_file: $!";
		
		$count = 0;
		
		# XML::Simple variable - SuppressEmpty => 1   ignore empty values
		$xml = XML::Simple->new(SuppressEmpty => 1);
		$data = $xml->XMLin("$file_name_to_parse");
		
		foreach $info (@{$data->{AUDIT_RECORD}})
		{
			# replace tick marks ' with \' in the SQL TEXT
			$info->{"SQLTEXT"} =~ s/'/\\'/g;
		
			print LOGOUT "INSERT INTO audit_information.AUDIT_PARSED (COMMAND_CLASS, CONNECTIONID, DB_NAME, HOST_NAME, IP_ADDRESS, MYSQL_VERSION, COMMAND_NAME, OS_LOGIN, OS_VERSION, PRIV_USER, PROXY_USER, RECORD_ID, SERVER_ID, SQL_TEXT, STARTUP_OPTIONS, COMMAND_STATUS, STATUS_CODE, DATE_TIMESTAMP, USER_NAME, LOG_VERSION) values ('" . $info->{"COMMAND_CLASS"} . "', '" . $info->{"CONNECTION_ID"} . "', '" . $info->{"DB"} . "', '" . $info->{"HOST"} . "', '" . $info->{"IP"} . "', '" . $info->{"MYSQL_VERSION"} . "', '" . $info->{"NAME"} . "', '" . $info->{"OS_LOGIN"} . "', '" . $info->{"OS_VERSION"} . "', '" . $info->{"PRIV_USER"} . "', '" . $info->{"PROXY_USER"} . "', '" . $info->{"RECORD_ID"} . "', '" . $info->{"SERVER_ID"} . "', '" . $info->{"SQLTEXT"} . "', '" . $info->{"STARTUP_OPTIONS"} . "', '" . $info->{"STATUS"} . "', '" . $info->{"STATUS_CODE"} . "', '" . $info->{"TIMESTAMP"} . "', '" . $info->{"USER"} . "', '" . $info->{"VERSION"} . "');\n";
			$count++;
		
		# end foreach $info (@{$data->{AUDIT_RECORD}})
		}
		
		# load parsed file into MySQL - hide warnings
		system("mysql -u$mysql_user -p$mysql_password  /dev/null 2>&1");
										
		$dbh2 = ConnectToMySql($Database);
		$query2 = "insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('$file_name_to_parse', '$count')";
		
		# optional print output - uncomment if desired
		# print "$query2\n";
																													
		$sth2 = $dbh2->prepare($query2);
		$sth2->execute();

		# close audit log file
		close(INFILE);

		# optional print output - uncomment if desired
		# print "Moving audit log ($file_name_to_parse) and log file ($PARSED_FILE) to $audit_directory.\n";
		
		# strip directories off $file_name_to_parse
		@file_name_to_move_array = split("\/",$file_name_to_parse);
		$directory_count = $#file_name_to_move_array;
		$file_name_to_move = $file_name_to_move_array[$directory_count];
		
		
		# optional print output - uncomment if desired
		# print "mv $file_name_to_move $file_name_to_parse\n";
		# print "mv $PARSED_FILE $audit_directory\n";

		# move audit log files and parsed log files to $audit_directory
		system("mv $file_name_to_parse $audit_directory");
		system("mv $PARSED_FILE $audit_directory");

	# end - if (length($audit_log_name) < 1)
	}

	else

	{
		# optional print output - uncomment if desired
		# print "$audit_log_name already processed\n";
		system("mv $file_name_to_parse $audit_directory");
	}

# end - foreach $file_name_to_parse (@files) 
}

sub ConnectToMySql {

   $connectionInfo="dbi:mysql:$Database;$MySQL_Host_IP_Name:3306";

   # make connection to database
   $l_dbh = DBI->connect($connectionInfo,$mysql_user,$mysql_password);
   return $l_dbh;

}

It should not matter where you execute audit.pl, as long as you have correctly entered the required values in the script. You might get errors if you try to run this script on a log file that has not been rotated, which is the current log file in your MySQL data directory. The current log file is named audit.log.

# pwd
/usr/local/mysql/data
# ls -l audit.log
-rw-rw----  1 mysql  _mysql  9955118 Jul  2 15:25 audit.log
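If you have automatic rotation turned off (audit_log_rotate_on_size=0), the documented way to close out the current file is to rename it at the operating-system level and then tell the plugin to reopen its log – a sketch (the renamed file must end in .xml for the script to pick it up):

# mv /usr/local/mysql/data/audit.log /usr/local/mysql/data/audit.log.manual.xml

mysql> SET GLOBAL audit_log_flush = ON;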

The script will only work on files ending in .xml. For testing, I used four small (and identical) audit log files:

# pwd
/usr/local/mysql/data
# ls -l *xml
-rw-rw----  1 mysql  wheel   15508 Jul  2 12:20 audit.log.14357895017796690.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796691.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796692.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796693.xml

I have commented-out the print statements in the Perl script, but if you uncomment them, running the script gives you this output for each log file:

# perl audit.pl
Parsing - /usr/local/mysql/data/audit.log.14357895017796690.xml
insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('/usr/local/mysql/data/audit.log.14357895017796690.xml', '34')
Moving audit log (/usr/local/mysql/data/audit.log.14357895017796690.xml) and log file (/usr/local/mysql/data/audit.log.14357895017796690_parsed.sql) to /usr/local/mysql/data/audit_files.
mv audit.log.14357895017796690.xml /usr/local/mysql/data/audit.log.14357895017796690.xml
mv /usr/local/mysql/data/audit.log.14357895017796690_parsed.sql /usr/local/mysql/data/audit_files
....

After running my test script, the following data is what is in the audit_history table:

mysql> use audit_information
Database changed
mysql> select * from audit_history;
+----+-------------------------------------------------------+---------------------+-------------+
| ID | AUDIT_LOG_NAME                                        | PARSED_DATE_TIME    | LOG_ENTRIES |
+----+-------------------------------------------------------+---------------------+-------------+
|  1 | /usr/local/mysql/data/audit.log.14357895017796690.xml | 2015-07-02 15:25:07 | 34          |
|  2 | /usr/local/mysql/data/audit.log.14357895017796691.xml | 2015-07-02 15:25:08 | 34          |
|  3 | /usr/local/mysql/data/audit.log.14357895017796692.xml | 2015-07-02 15:25:08 | 34          |
|  4 | /usr/local/mysql/data/audit.log.14357895017796693.xml | 2015-07-02 15:25:09 | 34          |
+----+-------------------------------------------------------+---------------------+-------------+
4 rows in set (0.00 sec)

And here is an example of one line from the audit_parsed table.

mysql> select * from audit_parsed limit 1 \G
*************************** 1. row ***************************
             ID: 1
  COMMAND_CLASS: select
   CONNECTIONID: 10093
        DB_NAME: 
      HOST_NAME: localhost
     IP_ADDRESS: 127.0.0.1
  MYSQL_VERSION: 
   COMMAND_NAME: Query
       OS_LOGIN: 
     OS_VERSION: 
      PRIV_USER: 
     PROXY_USER: 
      RECORD_ID: 1614933_2015-07-01T22:08:58
      SERVER_ID: 
       SQL_TEXT: SELECT (UNIX_TIMESTAMP(now()) - CAST(variable_value AS SIGNED))*1000 as serverStartMillis 
  FROM information_schema.global_status 
 WHERE variable_name='uptime'
STARTUP_OPTIONS: 
 COMMAND_STATUS: 0
    STATUS_CODE: 0
 DATE_TIMESTAMP: 2015-07-01T22:08:58 UTC
      USER_NAME: root[root] @ localhost [127.0.0.1]
    LOG_VERSION: 
1 row in set (0.00 sec)

After parsing the log files, you can then write your own queries for searching through your audit data. You can even include this script in cron, so it runs and parses your files automatically. But as always, test this script and use it with caution before putting it in a production environment. You could also modify the Perl script to filter out values you did not want or need to store.
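For example, a query along these lines (a sketch – it assumes a non-zero COMMAND_STATUS indicates a statement that returned an error) shows you which statements failed and who ran them:

SELECT DATE_TIMESTAMP, USER_NAME, SQL_TEXT
  FROM audit_information.audit_parsed
 WHERE COMMAND_NAME = 'Query'
   AND COMMAND_STATUS <> '0';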

If you do use this script or if you have any suggestions or other questions, please leave a comment below.

 



Using Perl and MySQL to automatically respond to retweets on twitter

In an earlier post titled Using Perl to send tweets stored in a MySQL database to twitter, I showed you a way to use MySQL to store tweets, and then use Perl to automatically send your tweets to twitter.

In this post, we will look at automatically sending a “thank you” to people who retweet your tweets – and we will be using Perl and MySQL again.

Just like in the first post, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

One caveat: twitter has a rate limit on how often you may connect with your application – depending upon what you are trying to do. See Rate Limiting and Rate Limits for more information. So, if you are going to put this into a cron job, I wouldn’t run it more than once every 15 minutes.
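For example, a crontab entry like this one (the script name and paths here are placeholders for your own) stays at one run per 15 minutes:

*/15 * * * * /usr/bin/perl /path/to/retweet_thanks.pl >> /tmp/retweet_thanks.log 2>&1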

We will also be using the same tables we created in the first post – tweets and history – as well as a new table, named retweets. The retweets table will contain all of the user names and tweet ID’s for those retweets we have discovered and already sent a thank-you tweet response.

The Perl script will connect to your tweet history table, and retrieve a set of your tweet ID’s, with the most recent tweet first. The script will then connect to twitter and check to see if there are any retweets for each ID. If a retweet is found, the script will check your retweets table to see if you have already thanked the tweeter for the retweet. If this is a new retweet, the script will connect to twitter and send a “thank-you” message to that user, and then insert the user name and tweet ID into the retweets table. This will ensure that you do not send a thank-you response more than one time.

Here is a flow chart that will attempt to explain what the script does:

We will be using the API call/method retweets(id) to see if a tweet ID was retweeted, and then we will send the thank-you tweet via the update call. More information about the Perl twitter API may be found at Net::Twitter::Lite::WithAPIv1_1.

First we will need to create the retweets table, where we will store the information about our tweets that were retweeted. Here is the CREATE TABLE statement for the retweets table:

CREATE TABLE `retweets` (
  `id` int(8) NOT NULL AUTO_INCREMENT,
  `tweet_id` bigint(24) DEFAULT NULL,
  `user_name` varchar(24) DEFAULT NULL,
  `retweet_update` varchar(36) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

All you need to do is edit this script with your own consumer_key, consumer_secret, access_token and access_token_secret for your application, and edit the accessTweets file used by the subroutine ConnectToMySql. You may also comment-out the “print” commands.

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

my $Database = "tweets";

# Credentials for your twitter application
# you will need to substitute your own application information for these four variables
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

# Grab the last X number of tweets to check for retweets
# - determined by the number after "limit"

$dbh = ConnectToMySql($Database);
$query = "select tweet_id, tweet_update FROM history order by tweet_update desc, id limit 10";	
$sth = $dbh->prepare($query);
$sth->execute();

# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {

	$tweet_id = $data[0];
	$tweet_update = $data[1];

	print "----------------------------------------------------------------------\n";
	print "Checking:  $tweet_id $tweet_update\n";
	print "----------------------------------------------------------------------\n";

		# Connect to twitter and see if anyone retweeted this tweet
		my $results = eval { $nt->retweets($tweet_id)};

		for my $status ( @$results ) {
       
			$user_name = "$status->{user}{screen_name}";
			$retweet_update = "$status->{created_at}";

			# see if this person has retweeted this before, and we already
			# have a record of the retweet in our database
					
			$dbh2 = ConnectToMySql($Database);
			$query2 = "select tweet_id, user_name FROM retweets where tweet_id = '$tweet_id' and user_name = '$user_name' limit 1";	
			$sth2 = $dbh2->prepare($query2);
			$sth2->execute();
    
			@data2 = $sth2->fetchrow_array();
    
			# Uncomment if you want to see it in action
			# print "Query: $query\n";
		
			# Check to see if we had any results, and if not, then insert
			# tweet into database and send them a "thank you" tweet
			if (length($data2[0]) < 1) {

				# new retweet - record it so we only thank this user once
				$dbh3 = ConnectToMySql($Database);
				$query3 = "insert into retweets (tweet_id, user_name, retweet_update) values ('$tweet_id', '$user_name', '$retweet_update')";
				$sth3 = $dbh3->prepare($query3);
				$sth3->execute();

				# Uncomment if you want to see it in action
				# print "Query2: $query2\n";


				# ----------------------------------------------------------------------------
				# send tweet
				# ----------------------------------------------------------------------------

				# This pause is just to slow down the action - you can remove this line if you want
				sleep 5;
				
				my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
					traits              => [qw/API::RESTv1_1/],
					consumer_key        => "$consumer_key",
					consumer_secret     => "$consumer_secret",
					access_token        => "$access_token",
					access_token_secret => "$access_token_secret",
					ssl                 => 1
					);
					
					# Here is the message you want to send - 
					# the thank-you to the user who sent the retweet
					$tweet = "\@$user_name thanks for the retweet!";

					# send thank-you tweet
					my $results = eval { $nt->update("$tweet") };

						undef @data2;
						undef @data3;
					}
		
					else
				
					{
  						# we have already thanked this user - as their name and this tweet-id was found in the database
    					print "----- Found tweet: $tweet_id\n";
    
						while (@data2) {

							print "----------------------------------------------------------------------\n";
							print "Checking retweet by $user_name for $tweet_id\n";
							print "Found retweet:  $tweet_id $user_name $retweet_update \n";

							$tweet_id = $data2[0];
							$user_name = $data2[1];
					
							print "***** Retweet by $user_name already in database \n";
							print "----------------------------------------------------------------------\n";
					
							#exit;
							undef @data2;
							undef @data3;

							# This pause is just to slow down the action - you can remove this line if you want
							sleep 5;

							# end while
							}
						
					# end else
					}

		# end for my $status ( @$results ) {
		}

# end while
}

exit;

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db= ;
   my $host= ;
   my $userid= ;
   my $passwd= ;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory above where my Perl script is located (hence the ../accessTweets path in the open statement). This file – named accessTweets – contains this information:

database_name
hostname or IP
MySQL user name
password

I tested this on two twitter accounts, and everything worked for me – but let me know if you have problems. I am not the best Perl programmer, nor am I an expert at the twitter API, so there is probably a better/easier way to do this.

 



MySQL Enterprise Edition Database Firewall – Control and Monitor SQL Statement Executions

As of MySQL 5.6.24, MySQL Enterprise Edition includes MySQL Enterprise Firewall, an application-level firewall (it runs within the mysql database process) that enables database administrators to permit or deny SQL statement execution based on matching against whitelists of accepted statement patterns. This helps harden MySQL Server against attacks such as SQL injection or attempts to exploit applications by using them outside of their legitimate query workload characteristics.

Each MySQL account registered with the firewall has its own whitelist of statement patterns (a tokenized representation of a SQL statement), enabling protection to be tailored per account. For a given account, the firewall can operate in recording or protecting mode, for training in the accepted statement patterns or protection against unacceptable statements. The diagram illustrates how the firewall processes incoming statements in each mode.

MySQL Enterprise Firewall Operation

(from https://dev.mysql.com/doc/refman/5.6/en/firewall.html)

If you do not have a MySQL Enterprise Edition license, you may download a trial version of the software via Oracle eDelivery. The MySQL Firewall is included in the MySQL Product Pack, specifically for MySQL Database 5.6.24 or higher.

MySQL Enterprise Firewall has these components:

  • A server-side plugin named MYSQL_FIREWALL that examines SQL statements before they execute and, based on its in-memory cache, renders a decision whether to execute or reject each statement.
  • Server-side plugins named MYSQL_FIREWALL_USERS and MYSQL_FIREWALL_WHITELIST implement INFORMATION_SCHEMA tables that provide views into the firewall data cache.
  • System tables named firewall_users and firewall_whitelist in the mysql database provide persistent storage of firewall data.
  • A stored procedure named sp_set_firewall_mode() registers MySQL accounts with the firewall, establishes their operational mode, and manages transfer of firewall data between the cache and the underlying system tables.
  • A set of user-defined functions provides an SQL-level API for synchronizing the cache with the underlying system tables.
  • System variables enable firewall configuration and status variables provide runtime operational information.

(from https://dev.mysql.com/doc/refman/5.6/en/firewall-components.html)

Installing the Firewall

Installing the firewall is fairly easy. After you install MySQL version 5.6.24 or greater, you simply execute an SQL script that is located in the $MYSQL_HOME/share directory. There are two versions of the script, one for Linux and one for Windows (the firewall isn’t supported on the Mac yet).

The scripts are named win_install_firewall.sql for Windows and linux_install_firewall.sql for linux. You may execute this script from the command line or via MySQL Workbench. For the command line, be sure you are in the directory where the script is located.

shell> mysql -u root -p mysql < win_install_firewall.sql
Enter password: (enter root password here)

The script creates the firewall tables, functions and stored procedures, and installs the necessary plugins. The script contains the following:

# Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
# Install firewall tables
USE mysql;
CREATE TABLE IF NOT EXISTS mysql.firewall_whitelist( USERHOST VARCHAR(80) NOT NULL, RULE text NOT NULL) engine= MyISAM;
CREATE TABLE IF NOT EXISTS mysql.firewall_users( USERHOST VARCHAR(80) PRIMARY KEY, MODE ENUM ('OFF', 'RECORDING', 'PROTECTING', 'RESET') DEFAULT 'OFF') engine= MyISAM;

INSTALL PLUGIN mysql_firewall SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_whitelist SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_users SONAME 'firewall.dll';

CREATE FUNCTION set_firewall_mode RETURNS STRING SONAME 'firewall.dll';
CREATE FUNCTION normalize_statement RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_whitelist RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_users RETURNS STRING SONAME 'firewall.dll';
delimiter //
CREATE PROCEDURE sp_set_firewall_mode (IN arg_userhost VARCHAR(80), IN arg_mode varchar(12))
BEGIN
IF arg_mode = "RECORDING" THEN
  SELECT read_firewall_whitelist(arg_userhost,FW.rule) FROM mysql.firewall_whitelist FW WHERE FW.userhost=arg_userhost;
END IF;
SELECT set_firewall_mode(arg_userhost, arg_mode);
if arg_mode = "RESET" THEN
  SET arg_mode = "OFF";
END IF;
INSERT IGNORE INTO mysql.firewall_users VALUES (arg_userhost, arg_mode);
UPDATE mysql.firewall_users SET mode=arg_mode WHERE userhost = arg_userhost;

IF arg_mode = "PROTECTING" OR arg_mode = "OFF" THEN
  DELETE FROM mysql.firewall_whitelist WHERE USERHOST = arg_userhost;
  INSERT INTO mysql.firewall_whitelist SELECT USERHOST,RULE FROM INFORMATION_SCHEMA.mysql_firewall_whitelist WHERE USERHOST=arg_userhost;
END IF;
END //
delimiter ;

After you run the script, the firewall should be enabled. You may verify it by running this statement:

mysql> SHOW GLOBAL VARIABLES LIKE 'mysql_firewall%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| mysql_firewall_max_query_size | 4096  |
| mysql_firewall_mode           | ON    |
| mysql_firewall_trace          | OFF   |
+-------------------------------+-------+

Testing the Firewall

To test the firewall, you may use a current mysql user, but we are going to create a test user for this example – webuser@localhost. (The user probably doesn’t need all privileges, but for this example we will grant everything to this user)

CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'Yobuddy!';
GRANT ALL PRIVILEGES ON *.* TO 'webuser'@'localhost' WITH GRANT OPTION;
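If you would rather not grant everything, a narrower grant is enough for the recording session below (a sketch, assuming you use the sakila schema from the next step):

GRANT SELECT ON sakila.* TO 'webuser'@'localhost';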

OPTIONAL: For our test, we will be using the sakila schema provided by MySQL. You may download the sakila database schema (requires MySQL 5.0 or later) at http://dev.mysql.com/doc/index-other.html. If you don’t want to use the sakila database, you may use your own existing database or create a new database.

After downloading the sakila schema, you will have two files, named sakila-schema.sql and sakila-data.sql. Execute the sakila-schema.sql first, and then sakila-data.sql to populate the database with data. If you are using the command line, simply do the following: (substitute UserName for a mysql user name)

# mysql -uUserName -p < sakila-schema.sql
# mysql -uUserName -p < sakila-data.sql

After creating the sakila schema and importing the data, we now set the firewall to record those queries which we want to allow:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","RECORDING");
+-----------------------------------------------+
| read_firewall_whitelist(arg_userhost,FW.rule) |
+-----------------------------------------------+
| Imported users: 0  Imported rules: 0          |
+-----------------------------------------------+
1 row in set (0.14 sec)

+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.22 sec)
Query OK, 5 rows affected (0.28 sec)

We can check to see the firewall mode via this statement, to be sure we are in the recording mode:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+------------+
| USERHOST          | MODE       |
+-------------------+------------+
| webuser@localhost | RECORDING  |
+-------------------+------------+
1 row in set (0.02 sec)

Now that we have recording turned on, let’s run a few queries:

mysql> use sakila
Database changed
mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.00 sec)

mysql> select * from actor limit 2;
+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|        1 | PENELOPE   | GUINESS   | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG  | 2006-02-15 04:34:33 |
+----------+------------+-----------+---------------------+
2 rows in set (0.13 sec)

mysql> select first_name, last_name from actor where first_name like 'T%';
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| TIM        | HACKMAN   |
| TOM        | MCKELLEN  |
| TOM        | MIRANDA   |
| THORA      | TEMPLE    |
+------------+-----------+
4 rows in set (0.00 sec)

We turn off the recording by turning on the protection mode:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","PROTECTING");
+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.00 sec)

We can check to see the firewall mode via this statement:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+------------+
| USERHOST          | MODE       |
+-------------------+------------+
| webuser@localhost | PROTECTING |
+-------------------+------------+
1 row in set (0.02 sec)

And we can look at our whitelist of statements:

mysql>  SELECT * FROM MYSQL.FIREWALL_WHITELIST;
+-------------------+-------------------------------------------------------------------+
| USERHOST          | RULE                                                              |
+-------------------+-------------------------------------------------------------------+
| webuser@localhost | SELECT * FROM actor LIMIT ?                                       |
| webuser@localhost | SELECT SCHEMA ( )                                                 |
| webuser@localhost | SELECT first_name , last_name FROM actor WHERE first_name LIKE ?  |
| webuser@localhost | SHOW TABLES                                                       |
+-------------------+-------------------------------------------------------------------+
4 rows in set (0.00 sec)
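Notice that the literal values in the recorded statements have been replaced with ? placeholders – the whitelist stores tokenized patterns, not raw SQL. You can see how a statement will be tokenized by calling the normalize_statement function that the install script created; the result should match the first rule above:

mysql> SELECT normalize_statement('select * from actor limit 2');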

The firewall is now protecting against non-whitelisted queries. We can execute a couple of the queries we previously ran, which should be allowed by the firewall.

mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.01 sec)

Now we run two new queries, which should be blocked by the firewall.

mysql> select * from rental;
ERROR 1045 (42000): Firewall prevents statement

mysql> select * from staff;
ERROR 1045 (42000): Firewall prevents statement

The server will write an error message to the log for each statement that is rejected. Example:

2015-03-21T22:59:05.371772Z 14 [Note] Plugin MYSQL_FIREWALL reported:
'ACCESS DENIED for webuser@localhost. Reason: No match in whitelist.
Statement: select * from rental '

You can use these log messages in your efforts to identify the source of attacks.

To see how much firewall activity you have, you may look at the status variables:

mysql> SHOW GLOBAL STATUS LIKE 'Firewall%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Firewall_access_denied  | 42    |
| Firewall_access_granted | 55    |
| Firewall_cached_entries | 78    |
+-------------------------+-------+

The variables indicate the number of statements rejected, accepted, and added to the cache, respectively.

The MySQL Enterprise Firewall Reference is found at https://dev.mysql.com/doc/refman/5.6/en/firewall-reference.html.

 


