Use MySQL and Perl to automatically find, follow and unfollow twitter users

A friend of mine asked me how he could automatically follow and unfollow people on Twitter – but he didn’t want to follow just anyone and everyone. He had a Twitter account which he used for recruiting in a very narrow niche of the construction industry. He wanted to find people in the same industry and follow them – hoping they would follow him back and learn about his open jobs. When I joined Twitter back in 2008, I wrote a similar program to automatically follow/unfollow users, but the Twitter API has changed quite a bit since then. So I decided to re-write the program with the latest Perl Twitter API – Net::Twitter::Lite::WithAPIv1_1.

Before you attempt to use these scripts, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret
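
The scripts below reference these values through Perl variables, so you will need to define them near the top of each script. A minimal sketch (the values shown here are just placeholders – use your own keys and tokens from apps.twitter.com):

# your application credentials - placeholder values only
my $consumer_key        = "YOUR_CONSUMER_KEY";
my $consumer_secret     = "YOUR_CONSUMER_SECRET";
my $access_token        = "YOUR_ACCESS_TOKEN";
my $access_token_secret = "YOUR_ACCESS_TOKEN_SECRET";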

When running these types of scripts on Twitter, you have to be careful to not surpass the rate limits that Twitter has established for using the API.  I have included a script named rate_limit.pl – which can give you the limits you have for each API-call, and how many calls you have remaining before the limits are reset by Twitter.  I have added pauses in the scripts to help prevent you from going over the rate limits (but you will need to check to ensure you don’t surpass them).  You are free to modify these scripts (but I won’t be able to help you figure out how to do this.)  The rate limits may be found at https://dev.twitter.com/rest/public/rate-limiting.  The key with using the Twitter API is to not be too aggressive, or your app will be banned by Twitter. For example, Twitter does not allow bulk follows and unfollows – so having patience is important.
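
If you would rather have a script check its own limits instead of eyeballing the rate_limit.pl output, here is a minimal sketch of the idea (it assumes the $nt Twitter handle used in the scripts below and the same rate_limit_status call as rate_limit.pl; the /followers/ids resource is just an example):

# check how many /followers/ids calls remain in the current 15-minute window
my $limits    = $nt->rate_limit_status;
my $remaining = $limits->{resources}{followers}{'/followers/ids'}{remaining};
my $reset     = $limits->{resources}{followers}{'/followers/ids'}{reset};

if ($remaining < 1) {
	# wait until Twitter resets the window (plus a small cushion)
	sleep (($reset - time) + 5);
}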

There are several programs involved, and all of them use MySQL tables to store the information.  A summary of the scripts is as follows:

followers_find.pl – To use this script, you choose a Twitter user that has an audience similar to yours, and then grab their followers.  For example, if you want to follow cat lovers, you could grab the followers of the Twitter user named @Cat.  This script will pull the last 5,000 followers of @Cat and place those user IDs into a table named follows_other_users.  The script also saves the cursor information, so you can run it multiple times to obtain a large pool of users while avoiding duplicates.  The script stores the account whose followers you grabbed (@Cat) and the user ID of each follower.

friend_lookup.pl – This script takes 100 of the user IDs from the follows_other_users table, pulls each user’s details from Twitter, and inserts the information into the twitter_users table.  The user information includes the name, user ID, number of tweets, follower count, following count, time zone and description.  You can modify the table and the script to include or omit other pieces of information.

follow_user.pl – This script follows users from the twitter_users table, based upon the followers-to-following ratio and the number of tweets (you can change the search criteria).  For example, I didn’t want to follow someone who was following 2,000 people but only had 100 followers.  I wanted the followers-to-following ratio to be a little more even.  I also wanted to follow people who had posted at least 30 tweets.  Even though the followers_find.pl script downloads the information for 5,000 users (at a time), you might only follow a couple hundred of these users who fit your criteria.
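
The percent_follow value used in the search criteria is stored by friend_lookup.pl, and it is simply the followers-to-following ratio expressed as a percentage – 100 means the two counts are equal. A rough sketch of the idea (assuming the ratio is calculated as followers divided by following):

# followers-to-following ratio as a percentage - 100 means an even ratio
# for example, 100 followers / 2,000 following works out to 5
$percent_follow = int(($followers_count / $following_count) * 100);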

friends_follow_check.pl – This script will check to see whether a user you followed has followed you back, and if not, the script will unfollow that user. You will need to specify how many days to give someone to follow you back before you unfollow them.  For example: you follow a group of users on 10/05/2015 (the table stores the date you followed someone). You decide to wait five days to see if anyone from this group follows you back. On 10/10/2015, you can run this script with the $date_to_delete variable set to 2015-10-05, and the script will unfollow anyone you followed on 2015-10-05 (or earlier) who is not following you back.

This diagram shows you the steps for each script and to what database they connect:

More details on each script:

followers_find.pl – Use this to grab the followers of a related Twitter user by providing a value for $user_to_find_followers. This value should be the Twitter user’s screen name without the “@” symbol – not their display name. The script will insert 5,000 followers at a time into follows_other_users and insert the cursor information into user_cursors.

This script uses the followers_ids API call, which has a limit of 15 every 15 minutes. When you run the rate_limit.pl script, you can see how many calls are remaining:

'followers' => {
'/followers/ids' => {
'limit' => 15,
'remaining' => 14,
'reset' => 1445026087,
},

friend_lookup.pl – Selects users from follows_other_users, gets each user’s details from Twitter, and then inserts the information into twitter_users. The script can get information on 100 users at a time. Some users may produce an error, and the script will stop. If this happens, just re-run the script. If the script still produces an error, delete that user from the table.

This script uses the lookup_users API call, which has a limit of 180 every 15 minutes. When you run the rate_limit.pl script, you can see how many calls are remaining:

'/users/lookup' => {
'limit' => 180,
'remaining' => 179,
'reset' => 1445028838,
},

For some reason, when I ran this script, I would get an error on a particular user. I spent a lot of time trying to figure out why the error occurred, but there were too many unknowns. So, when the script finds a bad user, it updates the follows_other_users table and marks that user with an error (err). You may have to delete a user from follows_other_users manually if the script doesn’t mark the user with an error and instead exits immediately after you run it.
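
For example, once a user has been marked with an error, you can remove those rows (or set looked_up_info back to NULL if you want to retry them) with a simple SQL statement like this one:

delete FROM follows_other_users where looked_up_info = 'err';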

follow_user.pl – Selects users from the twitter_users table and follows them if they meet certain criteria. The script also updates the twitter_users table with whether or not they were followed, and the date/time they were followed.

This script uses the create_friend API call. The web site does not specify the limit, and the limit does not appear when you run the rate_limit.pl script. I only follow 10-20 new friends an hour – to avoid Twitter’s ban on automatic bulk follow/unfollow.

Before you run this script, you want to be sure that you have enough users in the twitter_users table who fit your search criteria. Use this SQL query to find the number of users available for you to follow (feel free to modify the criteria):

select user_id FROM twitter_users where sent_follow_request IS NULL and percent_follow > 90 and percent_follow < 140 and statuses_count > 30;

The number of users to follow is set with the $limit variable, and I have it set to 250 – meaning the script will follow 250 users before quitting. Between follows, the script sleeps for a random length of time between six minutes ($minimum = 360;) and twelve minutes ($maximum = 720;) so Twitter doesn’t think you are a robot. You may adjust these values as well.

friends_follow_check.pl – Selects the users you followed from the twitter_users table and unfollows them if they haven’t followed you back. It updates twitter_users with the unfollow information.

After you run the follow_user.pl script, you will need to wait a few days to give people time to follow you back. You then will need to change the variable $date_to_delete to be a few days prior to the current date. I usually give people five days to follow me.
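
If you would rather calculate that date automatically instead of editing the script by hand, here is a minimal sketch using the core POSIX module (assuming a five-day wait):

use POSIX qw(strftime);

# five days ago, in the YYYY-MM-DD format that $date_to_delete expects
my $days_to_wait = 5;
my $date_to_delete = strftime("%Y-%m-%d", localtime(time - ($days_to_wait * 86400)));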

You can always make your search criteria less restrictive in order to find more people to follow, but I have found that strict search criteria remove most of the spammers.

This script uses the lookup_friendships API call, which has a limit of 15 every 15 minutes. When you run the rate_limit.pl script, you can see how many calls are remaining:

'/friendships/lookup' => {
'limit' => 15,
'remaining' => 14,
'reset' => 1445031488,
},

In each of the scripts, you have an option to print out the results from the Twitter API call. You will need to uncomment (remove the #) from these lines:

#    print "---DUMPER START---\n";
#    print Dumper $followers_list;
#    print "---DUMPER END---\n\n";

There are other print statements that have been commented out as well; uncomment them if you want to see their output.


Here are the CREATE TABLE statements for each table. Some fields are longer than you might expect, but I did this to leave room for multi-byte special characters (I had to use decode_utf8 on the names and descriptions):

CREATE TABLE `follows_other_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_to_find_followers` varchar(16) DEFAULT NULL,
  `follower_id` varchar(32) DEFAULT NULL,
  `looked_up_info` char(3) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

CREATE TABLE `user_cursors` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(16) DEFAULT NULL,
  `next_cursor` varchar(48) DEFAULT NULL,
  `previous_cursor` varchar(48) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

CREATE TABLE `twitter_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(56) DEFAULT NULL,
  `screen_name` varchar(32) DEFAULT NULL,
  `user_id` varchar(16) DEFAULT NULL,
  `sent_follow_request` varchar(3) DEFAULT NULL,
  `sent_request_datetime` datetime DEFAULT NULL,
  `followed_me` varchar(3) DEFAULT NULL,
  `unfollowed_them` varchar(3) DEFAULT NULL,
  `statuses_count` int(11) DEFAULT NULL,
  `following_count` int(11) DEFAULT NULL,
  `followers_count` int(11) DEFAULT NULL,
  `percent_follow` int(11) DEFAULT NULL,
  `time_zone` varchar(256) DEFAULT NULL,
  `description` varchar(4096) DEFAULT NULL,
  `creation_datetime` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;
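
The cleanup I do on the names and descriptions in friend_lookup.pl looks roughly like this (a sketch – it strips non-ASCII characters, decodes what is left, and replaces single quotes so the INSERT statements don’t break):

use Encode;

$name = $status->{name};
$name =~ s/[^[:ascii:]]//g;    # remove non-ASCII characters
$name = decode_utf8( $name );  # decode the remaining text
$name =~ s/\'/\^/g;            # replace single quotes so the SQL insert doesn't break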

And here are the scripts. Remember you need to create your own keys and tokens and insert them into the script for $consumer_key, $consumer_secret, $access_token and $access_token_secret.

In the subroutine ConnectToMySql used in the Perl scripts, I store the MySQL login credentials in a text file in the directory above the one where my Perl script is located (the script opens it as ../accessTweets). This file – named accessTweets – contains this information:

database_name
hostname or IP
MySQL user name
password

The scripts are also available on GitHub – https://github.com/ScriptingMySQL/PerlFiles.

followers_find.pl

#!/usr/bin/perl
# Updated 2015-10-25

use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
use Scalar::Util 'blessed';
# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# ----------------------------------------------------------------------------------

# Credentials for your twitter application
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      apiurl => 'https://api.twitter.com/1.1',
      ssl                 => 1
);


$count=1;

# twitter user name without the @
$user_to_find_followers = "Cat";

$dbh = ConnectToMySql($Database);
$query = "select user_id, next_cursor FROM user_cursorswhere user_id = '$user_to_find_followers' order by id desc limit 1";	
$sth = $dbh->prepare($query);
$sth->execute();

print "\n$query\n";

$count_users = 0;

#			print "name | friends_count | followers_count | statuses_count | percent_follow\% |$time_zone | description | creation_datetime\n";
# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {
	$cursor = $data[1];
}

# if we don't have a stored cursor (or it is 0), start at the first page
if (!$cursor)
{
	$cursor = "-1";
}

print "Starting at cursor: $cursor\n";
# Use the optional cursor parameter to retrieve IDs in pages of 5000. When the cursor parameter is used, 
# the return value is a reference to a hash with keys previous_cursor, next_cursor, and ids. 
# The value of ids is a reference to an array of IDS of the user's followers. 
# Set the optional cursor parameter to -1 to get the first page of IDs. 
# Set it to the prior return's value of previous_cursor or next_cursor to page forward or backwards. 
# When there are no prior pages, the value of previous_cursor will be 0. 
# When there are no subsequent pages, the value of next_cursor will be 0.

  eval {

		my $followers_list = $nt->followers_ids({
        screen_name => "$user_to_find_followers",
        cursor      => "$cursor",
        });

#		count => 1
#    print "---DUMPER START---\n";
#	print Dumper $followers_list;
#    print "---DUMPER END---\n\n";

			$next_cursor = $followers_list->{next_cursor_str};
			$previous_cursor = $followers_list->{previous_cursor_str};
			
			print "next_cursor $next_cursor - previous_cursor $previous_cursor \n";

		for my $status2 ( @{$followers_list->{ids}} ) {
        # print "$count $status $next_cursor\n";

			$follower_id = $status2;
		
			# uncomment to watch as it prints each user
			#print "$count $user_to_find_followers $follower_id\n";

			$dbh = ConnectToMySql($Database);	
			$query = "insert into follows_other_users(user_to_find_followers, follower_id) values ('$user_to_find_followers','$follower_id')";
			#print "\n $query\n";
			$sth = $dbh->prepare($query);
			$sth->execute();

		#sleep 1;

		$count++;

	# end for my $status
    }

# end eval    
};
			if ( my $err = $@ ) {
				die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

				warn "HTTP Response Code: ", $err->code, "\n",
          	 "HTTP Message......: ", $err->message, "\n",
         	  "Twitter error.....: ", $err->error, "\n";
			}
		# put this into a database in case you want to search for more of their followers
		print "\n$user_to_find_followers $next_cursor $previous_cursor\n";

		$user_id = $user_to_find_followers;

		$dbh = ConnectToMySql($Database);	
		$query = "insert into user_cursors(user_id, next_cursor, previous_cursor) values ('$user_id','$next_cursor','$previous_cursor')";
		print "\n $query\n";
		$sth = $dbh->prepare($query);
		$sth->execute();

# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# ----------------------------------------------------------------------------------


print "Finished importing - $DateTime....\n";
print "\n----------------------------------------------------------------------------\n";

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<../accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;host=$host;port=3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

exit;

friend_lookup.pl

#!/usr/bin/perl
# Updated 2015-10-25

use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
use Scalar::Util 'blessed';
use Encode;
       
# ----------------------------------------------------------------------------
# get the relationship between my user name and another user name to see
# if they are following me or if I am following them
# ----------------------------------------------------------------------------

$number = 1;

# you are allowed 180 of these lookups every 15 minutes
# with a 15 second pause at the end of each one, you won't 
# go over the limit
while ($number <= 180) {

# Credentials for your twitter application - blog
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      apiurl => 'https://api.twitter.com/1.1',
      ssl                 => 1
);

open(OUT, ">dumper_out.txt") || die "Can't redirect stdout";
$dbh = ConnectToMySql($Database);
$query = "select follower_id, user_to_find_followers FROM follows_other_users where looked_up_info IS NULL limit 100";	
# run to see if you can debug why some users get an error
#$query = "select follower_id, user_to_find_followers FROM follows_other_users where looked_up_info = 'err' limit 100";	
$sth = $dbh->prepare($query);
$sth->execute();

print "\n$query\n";

$count_users = 0;

#			print "name | friends_count | followers_count | statuses_count | percent_follow\% |$time_zone | description | creation_datetime\n";
# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {

	print "$data[0] ";

	$user_id = $data[0];
	$user_to_find_followers = $data[1];

	# skip empty rows
	if (length($data[0]) < 1) { next; }

	if ($count_users > 0)

	{

		$users_to_get = "$users_to_get, $user_id";

		push(@data2, "$user_id");

	}

	else

	{

		$users_to_get = "$user_id";
		push(@data2, "$user_id");

	}

	$count_users++;

# end - while (@data = $sth->fetchrow_array()) {
}

#print "$users_to_get\n";

$count = 1;

#while (@data2) {

			print "--------------------------------------------------------------------------------------------------\n";
	eval {

    			my $user_info = $nt->lookup_users({ 
    				user_id => [ "$users_to_get" ] 
    			});

	print OUT "---DUMPER START---\n";
	print OUT Dumper $user_info;
	print OUT "---DUMPER END---\n\n";

			if ( my $err = $@ ) {
				die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

				warn "\n - HTTP Response Code: ", $err->code, "\n",
				"\n - HTTP Message......: ", $err->message, "\n",
				"\n - Twitter error.....: ", $err->error, "\n";
			}
		for my $status ( @$user_info ) {

			# skip this user if the name is missing
			if (length($status->{name}) < 1) { next; }

			$name = $status->{name};
			$name =~ s/[^[:ascii:]]//g;
			$name =~ s/[^!-~\s]//g;
			$name = decode_utf8( $name );
			$name =~ s/\'/\^/g;
			print "Working on $name - ";

			$user_id = $status->{id};
			$following_count = $status->{friends_count};
			$followers_count = $status->{followers_count};
			$statuses_count = $status->{statuses_count};
			$time_zone = $status->{time_zone};

			$screen_name = $status->{screen_name};
			$screen_name =~ s/[^[:ascii:]]//g;
			$screen_name = decode_utf8( $screen_name );
			$screen_name =~ s/[^a-zA-Z0-9 _^-]//g;
			$screen_name =~ s/[^!-~\s]//g;

			$description = $status->{description};
			
			# clean up the description the same way as the name
			if (length($description) > 0)
			{
				$description =~ s/[^[:ascii:]]//g;
				$description = decode_utf8( $description );
				$description =~ s/\'/\^/g;
			}

			# 'created_at' => 'Wed Nov 09 19:38:46 +0000 2011',

			$created_at = $status->{created_at};
			@creation_date_array = split(" ",$created_at);
		
			$creation_date_month = $creation_date_array[1];
			
			if ($creation_date_month =~ "Jan") { $creation_date_month = "01"}
			if ($creation_date_month =~ "Feb") { $creation_date_month = "02"}
			if ($creation_date_month =~ "Mar") { $creation_date_month = "03"}
			if ($creation_date_month =~ "Apr") { $creation_date_month = "04"}
			if ($creation_date_month =~ "May") { $creation_date_month = "05"}
			if ($creation_date_month =~ "Jun") { $creation_date_month = "06"}
			if ($creation_date_month =~ "Jul") { $creation_date_month = "07"}
			if ($creation_date_month =~ "Aug") { $creation_date_month = "08"}
			if ($creation_date_month =~ "Sep") { $creation_date_month = "09"}
			if ($creation_date_month =~ "Oct") { $creation_date_month = "10"}
			if ($creation_date_month =~ "Nov") { $creation_date_month = "11"}
			if ($creation_date_month =~ "Dec") { $creation_date_month = "12"}
		
			$creation_date_day_of_month = $creation_date_array[2];
			$creation_date_year = $creation_date_array[5];
			$creation_date_time = $creation_date_array[3];
			$creation_datetime = "$creation_date_year-$creation_date_month-$creation_date_day_of_month $creation_date_time";

			# had to add this as the percentage formula below would fail
			if ($following_count < 1)
			
			{
			
				$following_count = 1;
			
			}
			if ($followers_count < 1)
			
			{
			
				$followers_count = 1;
			
			}

			# followers-to-following ratio as a percentage
			$percent_follow = int(($followers_count / $following_count) * 100);

			$dbh = ConnectToMySql($Database);	
			$query = "insert into twitter_users (name, screen_name, user_id, statuses_count, following_count, followers_count, percent_follow, time_zone, description, creation_datetime) values ('$name','$screen_name','$user_id','$statuses_count','$following_count','$followers_count','$percent_follow','$time_zone','$description','$creation_datetime')";
			#print "\n $query\n";
			$sth = $dbh->prepare($query);
			$sth->execute();
			$dbh2 = ConnectToMySql($Database);	
			$query2 = "update follows_other_users set looked_up_info = 'yes' where follower_id = '$user_id'";
			#print " $count ----  $query2\n";
			print "--------------------------------------------------------------------------------------------------\n";
			$sth2 = $dbh2->prepare($query2);
			$sth2->execute();

#sleep 1;

$count++;
		}
# end - eval
};
#599
$number++;

	# if we didn't grab all 100 users, change the last user's status to error
	#print "Count $count\n";
	if ($count < 100)
	{
		# mark the last user we tried with an error so it can be skipped or removed
		$dbh3 = ConnectToMySql($Database);
		$query3 = "update follows_other_users set looked_up_info = 'err' where follower_id = '$user_id'";
		print "\n $query3\n";
		$sth3 = $dbh3->prepare($query3);
		$sth3->execute();
		exit;
	}

# if there aren't any more users, quit
if (length($data[0]) < 1) { exit; }

# pause so you stay under the 180-lookups-per-15-minutes limit
sleep 15;

# end - while ($number ...)
}
#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<../accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;host=$host;port=3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}
close(OUT);
exit;

follow_user.pl

#!/usr/bin/perl
# Updated 2015-10-25

use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
use Scalar::Util 'blessed';

# ----------------------------------------------------------------------------
# follow users from database
# ----------------------------------------------------------------------------
# Credentials for your twitter application - blog
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "access_token_secret",
      apiurl => 'http://api.twitter.com/1.1',
      ssl                 => 1
);

# find the stats and info for the following users

$limit = 250;
$percent_follow_minimum = 80;
$percent_follow_maximum = 140;
$statuses_count_minimum = 30;

$dbh = ConnectToMySql($Database);
$query = "select user_id FROM twitter_users where sent_follow_request IS NULL and percent_follow > $percent_follow_minimum and percent_follow  $statuses_count_minimum limit $limit";	
$sth = $dbh->prepare($query);
$sth->execute();

print "\n$query\n\n";

$count_users = 1;

# 107 following 114 followers

# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {

# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# ----------------------------------------------------------------------------------

	$user_id = $data[0];

	print "#$count_users of $limit - $DateTime - sending follow request to user_id $user_id\n";

	eval {

		my $friend = $nt->create_friend({ user_id => "$user_id" });
    
#		    print "---DUMPER START---\n";
#			print Dumper $friend;
#		    print "---DUMPER END---\n\n";

			if ( my $err = $@ ) {
				die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

				warn "\n - HTTP Response Code: ", $err->code, "\n",
				"\n - HTTP Message......: ", $err->message, "\n",
				"\n - Twitter error.....: ", $err->error, "\n";
			
			# end if
			};
			
			
	# end - eval
	};

			$dbh2 = ConnectToMySql($Database);	
			$query2 = "update twitter_users SET sent_follow_request = 'yes', sent_request_datetime  = '$DateTime' where user_id = '$user_id'";
			#print " $query2\n";
			#print " database updated.\n"
			$sth2 = $dbh2->prepare($query2);
			$sth2->execute();

# pause for a random time so twitter doesn't think you are a robot
# minimum and maximum time in seconds to sleep
$minimum = 360; 
$maximum = 720;
$random_sleep = int($minimum + rand($maximum - $minimum));

print " - sleeping for $random_sleep seconds\n";
sleep $random_sleep;

$count_users++;

# end - while (@data = $sth->fetchrow_array())
};

exit;

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<../accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;host=$host;port=3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

exit;

friends_follow_check.pl

#!/usr/bin/perl
# Updated 2015-10-25

use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
use Scalar::Util 'blessed';

# ----------------------------------------------------------------------------
# see if a user follows me and/or if i follow them
# ----------------------------------------------------------------------------

# Credentials for your twitter application - blog
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "access_token_secret",
      apiurl => 'http://api.twitter.com/1.1',
      ssl                 => 1
);

# limit must be 100 or less
$limit = 100;

# how many days do you want to wait until you unfriend someone?

$date_to_delete = "2015-10-21";

# find the stats and info for the following users

$dbh = ConnectToMySql($Database);
$query = "select user_id, sent_request_datetime FROM twitter_users where sent_follow_request = 'yes' and sent_request_datetime prepare($query);
$sth->execute();

print "\n$query\n\n\n";

$count_users = 0;
$count = 0;

# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {

	$user_id = $data[0];
	$sent_request_datetime{$user_id} = $data[1];

	if ($count_users > 0)

	{
		$users_to_get = "$users_to_get, $user_id";
		push(@data2, "$user_id");
	}

	else

	{
		$users_to_get = "$user_id";
		push(@data2, "$user_id");
	}

$count_users++;

# end - while
}

print "$users_to_get\n\n";

#exit;

eval {

	my $friend = $nt->lookup_friendships({ user_id => "$users_to_get" });
    
#    print "---DUMPER START---\n";
#	print Dumper $friend;
#    print "---DUMPER END---\n\n";

			if ( my $err = $@ ) {
				die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

				warn "\n - HTTP Response Code: ", $err->code, "\n",
				"\n - HTTP Message......: ", $err->message, "\n",
				"\n - Twitter error.....: ", $err->error, "\n";
			
			# end if
			};

	for $data_list ( @$friend ) {

	$count++;
   
		for $role ( keys %$data_list ) {
        	

        	if ($role =~ "id_str")
        	
        	{
        			# since the connection info is an array, find the values
        			$user_id_str = $data_list->{$role};
 
 					$user_id = $user_id_str;
 
 			# get user_id
 			
 			#print "$user_id_str - ";
 
 
 
 			# once you have the status of the connection and the user_id
 			# you can check to see if they are following you or not
 
				if ($status_friend =~ "followed_by")
        	
				{
        	
					print "*************\nThis person $user_id_str follows you. - $sent_request_datetime";
					print "Status: $status_connection1 $status_connection2\n*************\n";

					$dbh2 = ConnectToMySql($Database);
					$query2 = "update twitter_users set followed_me = 'yes' where user_id = '$user_id_str'";	
					$sth2 = $dbh2->prepare($query2);
					$sth2->execute();

					print "\n$query2\n";
					print "########\n $count of $limit sleeping....\n########\n";
					sleep 155;

				}
    		
				else
    		
				{
					print "This person $user_id_str DOES NOT follow you. - $sent_request_datetime{$user_id}\n";
					print "Status: $status_connection1 $status_connection2\n";

		eval {

		my $friend = $nt->destroy_friend({ user_id => "$user_id" });
    
#		    print "---DUMPER START---\n";
#			print Dumper $friend;
#		    print "---DUMPER END---\n\n";

			if ( my $err = $@ ) {
				die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

				warn "\n - HTTP Response Code: ", $err->code, "\n",
				"\n - HTTP Message......: ", $err->message, "\n",
				"\n - Twitter error.....: ", $err->error, "\n";
			
			# end if
			};
			
			
	# end - eval
	};

					$dbh3 = ConnectToMySql($Database);
					$query3 = "update twitter_users set followed_me = 'no' where user_id = '$user_id_str'";	
					$sth3 = $dbh3->prepare($query3);
					$sth3->execute();

					print "\n$query3\n";

					$dbh4 = ConnectToMySql($Database);
					$query4 = "update twitter_users set unfollowed_them = 'yes' where user_id = '$user_id_str'";	
					$sth4 = $dbh4->prepare($query4);
					$sth4->execute();

					print "\n$query4\n";
					print "########\n $count of $limit sleeping ";
					# pause for a random time so twitter doesn't think you are a robot
					# minimum and maximum time in seconds to sleep
					$minimum = 60; 
					$maximum = 120;
					$random_sleep = int($minimum + rand($maximum - $minimum));

					print " for $random_sleep seconds\n";
					sleep $random_sleep;

				# unfollow this user
    		    		
				}

 			# end - if ($role =~ "id_str")
 			}

        	# check the status of the connection        	
        	if ($role =~ "connections")
        	
        	{
        			# since the connection info is an array, find the values
        			$status_connection1 = $data_list->{$role}[0];
        			$status_connection2 = $data_list->{$role}[1];
        			$status_connection3 = $data_list->{$role}[2];
        	
        	$status_friend = "$status_connection1 $status_connection2 $status_connection3";

    		# if ($role =~ "connections")
    		}
    		
    		# for $role ( keys %$data_list ) {
    		}
    		
    		# end - for $data_list ( @$friend ) {
    		}

# end - eval
};

# end - while
#}

print "\n\n";

exit;

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<../accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;host=$host;port=3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

rate_limit.pl

#!/usr/bin/perl
# Updated 2015-10-25
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
use Scalar::Util 'blessed';
#use Encode;
use JSON;
  
# ----------------------------------------------------------------------------
# get the relationship between my user name and another user name to see
# if they are following me or if I am following them
# ----------------------------------------------------------------------------

# Credentials for your twitter application - blog
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "access_token_secret",
      apiurl => 'http://api.twitter.com/1.1',
      ssl                 => 1
);

	eval {

    			$user_info = $nt->rate_limit_status;

#	print "---DUMPER START---\n";
print Dumper $user_info;
#	print "---DUMPER END---\n\n";

			if ( my $err = $@ ) {
				die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

				warn "\n - HTTP Response Code: ", $err->code, "\n",
				"\n - HTTP Message......: ", $err->message, "\n",
				"\n - Twitter error.....: ", $err->error, "\n";
			
			# end if
			};


# end - eval
};

	# drill down into the results to see how many /friendships/show calls remain
	$friendships_show_remaining = $user_info->{resources}{friendships}{'/friendships/show'}{remaining};
	print "friendships_show_remaining $friendships_show_remaining\n";

exit;


I am not the best Perl programmer, nor am I an expert at the Twitter API, so there may be a better/easier way to do this. Good luck with the scripts and let me know how they work for you. And follow me on Twitter at ScriptingMySQL and TonyDarnell.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

 

Visit http://2044thebook.com for more information.


Using Perl to retrieve direct messages from Twitter, insert messages into a MySQL database and then delete the messages

In two earlier posts, I gave some examples on how to use Perl to send tweets stored in a MySQL database to Twitter, and then how to automatically reply to your retweets with a “thanks”. In this post, I will show you how to automatically download your direct messages from Twitter, store the messages in a MySQL database, and then delete them.

I don’t like the way Twitter makes me read my direct messages. Granted, the majority of them are not real messages. The message is usually thanking me for following the sender, and then there is a personal website link or a link to a product they are selling. But if I want to delete a direct message, I have to click on the message, click the trash can, and then confirm I want to delete the message. This process is too cumbersome.

I wrote a Perl script that connects to Twitter, downloads your direct messages, inserts them into a MySQL database, and then deletes the direct message. I had a year of direct messages in my Inbox, and in a few minutes, they were gone. But I still had a copy in my MySQL database, in case I wanted to go back and read them or respond.

Just like in the first post, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

One caveat: twitter has a rate limit on how often you may connect with your application – depending upon what you are trying to do. See Rate Limiting and Rate Limits for more information. So, check your limits before you try downloading a large number of direct messages.

Just like the other two apps, we will be using the Net::Twitter module, as well as a few more modules which are listed in the beginning of the script.

There are two kinds of direct messages – the messages you sent and the messages you have received. So, we will be looking at two different Perl scripts to retrieve each kind of message. In the Net::Twitter module, there are a lot of different variables you can capture. In these examples, I only grabbed what I thought I needed. If there are other data variables you want, you will have to modify the tables and scripts.

First, you will need to create two tables to store your direct messages – one for the sent messages and one for the received messages. Here are the CREATE TABLE statements for both tables:

CREATE TABLE `messages_sent` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `creation_datetime` datetime DEFAULT NULL,
  `message_id` bigint(20) DEFAULT NULL,
  `sender_screen_name` varchar(16) DEFAULT NULL,
  `recipient_screen_name` varchar(16) DEFAULT NULL,
  `message_text` varchar(140) DEFAULT NULL,
  `sender_friends_count` int(10) DEFAULT NULL,
  `sender_time_zone` varchar(64) DEFAULT NULL,
  `sender_description` varchar(160) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;


CREATE TABLE `messages_received` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `creation_datetime` datetime DEFAULT NULL,
  `message_id` bigint(20) DEFAULT NULL,
  `sender_screen_name` varchar(16) DEFAULT NULL,
  `recipient_screen_name` varchar(16) DEFAULT NULL,
  `message_text` varchar(140) DEFAULT NULL,
  `sender_friends_count` int(10) DEFAULT NULL,
  `sender_time_zone` varchar(64) DEFAULT NULL,
  `sender_description` varchar(160) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

I am only storing a few of the variables. In each script, if you want to see all of the variables which are available, you will need to uncomment this line:

#          print Dumper $statuses;

When using the Dumper command, be sure that you are only downloading a single message. Otherwise, the script will pull all of the variables and their values for all of the messages. You only need one message to see all of the variables. To only retrieve one message, be sure that the value $number_of_messages is equal to 1:

$number_of_messages = 1;

Afterwards, you can change this value to whatever you want – just be sure to watch your Twitter limits.

The print Dumper $statuses; line will display all of the possible variables for a single message. For the “sent” script, there are about 140 variables. For the “received” script, there are about 67 variables.

Here is the “Get direct messages I sent to other people” script, which uses the MESSAGES_SENT database:

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
    
# ----------------------------------------------------------------------------
# get twitter direct messages sent
# ----------------------------------------------------------------------------

# you will need to fill in this information about your application and your twitter account
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

$number_of_messages = 1;

  eval {
      my $statuses = $nt->sent_direct_messages({ count => $number_of_messages });

# uncomment this line and set the count above to 1 (one) to see all of the variables that are available
#          print Dumper $statuses;

	for my $status ( @$statuses ) {
		
		$creation_date = $status->{created_at};

		# convert $creation_date to MySQL datetime format
		#  0   1   2  3         4     5
		# Fri Sep 04 07:32:05 +0000 2015
		
		@creation_date_array = split(" ",$creation_date);
		
		$creation_date_month = $creation_date_array[1];
		
		if ($creation_date_month =~ "Jan") { $creation_date_month = "01"}
		if ($creation_date_month =~ "Feb") { $creation_date_month = "02"}
		if ($creation_date_month =~ "Mar") { $creation_date_month = "03"}
		if ($creation_date_month =~ "Apr") { $creation_date_month = "04"}
		if ($creation_date_month =~ "May") { $creation_date_month = "05"}
		if ($creation_date_month =~ "Jun") { $creation_date_month = "06"}
		if ($creation_date_month =~ "Jul") { $creation_date_month = "07"}
		if ($creation_date_month =~ "Aug") { $creation_date_month = "08"}
		if ($creation_date_month =~ "Sep") { $creation_date_month = "09"}
		if ($creation_date_month =~ "Oct") { $creation_date_month = "10"}
		if ($creation_date_month =~ "Nov") { $creation_date_month = "11"}
		if ($creation_date_month =~ "Dec") { $creation_date_month = "12"}
		
		$creation_date_day_of_month = $creation_date_array[2];
		$creation_date_year = $creation_date_array[5];
		$creation_date_time = $creation_date_array[3];
		$creation_datetime = "$creation_date_year-$creation_date_month-$creation_date_day_of_month $creation_date_time";

		$message_id = $status->{id};
		
		$sender_screen_name = $status->{sender}{screen_name};
		$sender_screen_name =~ s/\'/\~/g;

		$recipient_screen_name = $status->{recipient_screen_name};
		$recipient_screen_name =~ s/\'/\~/g;
				
		$message_text = $status->{text};
		# remove carriage returns and single tick marks
		$message_text =~ s/\n/ /g;
		$message_text =~ s/\'/\~/g;

		$sender_friends_count = $status->{sender}{friends_count};

		$sender_time_zone = $status->{sender}{time_zone};
		
		$sender_description = $status->{sender}{description};
		$sender_description =~ s/\n/ /g;
		$sender_description =~ s/\'/\~/g;

		# uncomment this line if you want to print
		# print "$creation_date_day_of_month - $creation_datetime - $message_id - $sender_screen_name - $recipient_screen_name - $message_text - $sender_friends_count - $sender_time_zone - $sender_description\n";


$dbh = ConnectToMySql($Database);	
$query = "insert into messages_sent (creation_datetime,message_id,sender_screen_name,recipient_screen_name,message_text,sender_friends_count,sender_time_zone,sender_description) values ('$creation_datetime','$message_id','$sender_screen_name','$recipient_screen_name','$message_text','$sender_friends_count','$sender_time_zone','$sender_description')";
#print "\nquery $query\n";
$sth = $dbh->prepare($query);
$sth->execute();


# stop the program if we have an error with the database
if ( $sth->err )
{
	die "ERROR! return code:" . $sth->err . " error msg: " . $sth->errstr . "\n";
}

else

{
	 my $destroy_id = eval { $nt->destroy_direct_message("$message_id") };
}



# you can change this so you don't go over your twitter connection limits
sleep 5;


	# end for my $status
	}

# end eval
  };

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<../accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;host=$host;port=3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

Here is the “Get direct messages I received from other people” script, which uses the MESSAGES_RECEIVED database:

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;
use Data::Dumper;
       
# ----------------------------------------------------------------------------
# get twitter direct messages received
# ----------------------------------------------------------------------------

# you will need to fill in this information about your application and your twitter account
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

$number_of_messages = 1;

  eval {
      my $statuses = $nt->direct_messages({ count => $number_of_messages });

			# uncomment this line and set the $number_of_messages variable above to 1 (one) to see all of the variables that are available
#			print Dumper $statuses;

	for my $status ( @$statuses ) {
		
		$creation_date = $status->{created_at};

		# convert $creation_date to MySQL datetime format
		#  0   1   2  3         4     5
		# Fri Sep 04 07:32:05 +0000 2015
		
		@creation_date_array = split(" ",$creation_date);
		
		$creation_date_month = $creation_date_array[1];
		
		if ($creation_date_month =~ "Jan") { $creation_date_month = "01"}
		if ($creation_date_month =~ "Feb") { $creation_date_month = "02"}
		if ($creation_date_month =~ "Mar") { $creation_date_month = "03"}
		if ($creation_date_month =~ "Apr") { $creation_date_month = "04"}
		if ($creation_date_month =~ "May") { $creation_date_month = "05"}
		if ($creation_date_month =~ "Jun") { $creation_date_month = "06"}
		if ($creation_date_month =~ "Jul") { $creation_date_month = "07"}
		if ($creation_date_month =~ "Aug") { $creation_date_month = "08"}
		if ($creation_date_month =~ "Sep") { $creation_date_month = "09"}
		if ($creation_date_month =~ "Oct") { $creation_date_month = "10"}
		if ($creation_date_month =~ "Nov") { $creation_date_month = "11"}
		if ($creation_date_month =~ "Dec") { $creation_date_month = "12"}
		
		$creation_date_day_of_month = $creation_date_array[2];
		$creation_date_year = $creation_date_array[5];
		$creation_date_time = $creation_date_array[3];
		$creation_datetime = "$creation_date_year-$creation_date_month-$creation_date_day_of_month $creation_date_time";

		$message_id = $status->{id};
		
		$sender_screen_name = $status->{sender}{screen_name};
		$sender_screen_name =~ s/\'/\~/g;
		
		$message_text = $status->{text};
		# remove carriage returns and single tick marks
		$message_text =~ s/\n/ /g;
		$message_text =~ s/\'/\~/g;

		$sender_friends_count = $status->{sender}{friends_count};

		$sender_time_zone = $status->{sender}{time_zone};
		
		$sender_description = $status->{sender}{description};
		$sender_description =~ s/\n/ /g;
		$sender_description =~ s/\'/\~/g;

		$recipient_screen_name = $status->{recipient_screen_name};
		$recipient_screen_name =~ s/\'/\~/g;
		
		# uncomment this line if you want to see the output
		# print "$creation_datetime - $message_id - $sender_screen_name - $recipient_screen_name - $message_text - $sender_friends_count - $sender_time_zone - $sender_description\n";

$dbh = ConnectToMySql($Database);	
$query = "insert into messages_received (creation_datetime,message_id,sender_screen_name,recipient_screen_name,message_text,sender_friends_count,sender_time_zone,sender_description) values ('$creation_datetime','$message_id','$sender_screen_name','$recipient_screen_name','$message_text','$sender_friends_count','$sender_time_zone','$sender_description')";
#print "\nquery $query\n";
$sth = $dbh->prepare($query);
$sth->execute();

# stop the program if we have an error with the database
if ( $sth->err )
{
	die "ERROR! return code:" . $sth->err . " error msg: " . $sth->errstr . "\n";
}

else

{
	 my $destroy_id = eval { $nt->destroy_direct_message("$message_id") };
}

# you can change this so you don't go over your twitter connection limits
sleep 5;

	# end for my $status
	}

# end eval
  };

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<../accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;host=$host;port=3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file in the directory above the one where my Perl script is located (the script opens it as ../accessTweets). This file – named accessTweets – contains this information:

database_name
hostname or IP
MySQL user name
password

You may hardcode these values into the script if you don’t want to mess with storing them in a file.

I tested these scripts on two twitter accounts, and everything worked for me – but I ran out of messages quickly. Let me know if you have problems. I am not the best Perl programmer, nor am I an expert at the Twitter API, so there may be a better/easier way to do this.

 



Using Perl and MySQL to automatically respond to retweets on twitter

In an earlier post titled Using Perl to send tweets stored in a MySQL database to twitter, I showed you a way to use MySQL to store tweets, and then use Perl to automatically send your tweets to twitter.

In this post, we will look at automatically sending a “thank you” to people who retweet your tweets – and we will be using Perl and MySQL again.

Just like in the first post, you will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

One caveat: twitter has a rate limit on how often you may connect with your application – depending upon what you are trying to do. See Rate Limiting and Rate Limits for more information. So, if you are going to put this into a cron job, I wouldn’t run it more than once every 15 minutes.
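
If you do put it in a cron job, a crontab entry along these lines would run it every 15 minutes (the script path and log file are just placeholders):

*/15 * * * * /usr/bin/perl /path/to/retweet_thanks.pl >> /path/to/retweet_thanks.log 2>&1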

We will also be using the same tables we created in the first post – tweets and history – as well as a new table, named retweets. The retweets table will contain all of the user names and tweet ID’s for those retweets we have discovered and already sent a thank-you tweet response.

The Perl script will connect to your tweet history table, and retrieve a set of your tweet ID’s, with the most recent tweet first. The script will then connect to twitter and check to see if there are any retweets for each ID. If a retweet is found, the script will check your retweets table to see if you have already thanked the tweeter for the retweet. If this is a new retweet, the script will connect to twitter and send a “thank-you” message to that user, and then insert the user name and tweet ID into the retweets table. This will ensure that you do not send a thank-you response more than one time.

Here is a flow chart that will attempt to explain what the script does:

We will be using the API call/method retweets(id) to see if a tweet ID was retweeted, and then we will send the thank-you tweet via the update call. More information about the Perl twitter API may be found at Net::Twitter::Lite::WithAPIv1_1.

First we will need to create the retweets table, where we will store the information about our tweets that were retweeted. Here is the CREATE TABLE statement for the retweets table:

CREATE TABLE `retweets` (
  `id` int(8) NOT NULL AUTO_INCREMENT,
  `tweet_id` bigint(24) DEFAULT NULL,
  `user_name` varchar(24) DEFAULT NULL,
  `retweet_update` varchar(36) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

All you need to do is edit this script with your own consumer_key, consumer_secret, access_token and access_token_secret for your application, and edit the accessTweets file used by the subroutine ConnectToMySql. You may also comment out the “print” commands.

#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

my $Database = "tweets";

# Credentials for your twitter application
# you will need to subsitute your own application information for these four variables
my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

# Grab the last X number of tweets to check for retweets
# - determined by the number after "limit"

$dbh = ConnectToMySql($Database);
$query = "select tweet_id, tweet_update FROM history order by tweet_update desc, id limit 10";	
$sth = $dbh->prepare($query);
$sth->execute();

# loop through our results - one tweet at a time
while (@data = $sth->fetchrow_array()) {

	$tweet_id = $data[0];
	$tweet_update = $data[1];

	print "----------------------------------------------------------------------\n";
	print "Checking:  $tweet_id $tweet_update\n";
	print "----------------------------------------------------------------------\n";

		# Connect to twitter and see if anyone retweeted this tweet
		my $results = eval { $nt->retweets($tweet_id)};

		for my $status ( @$results ) {
       
			$user_name = "$status->{user}{screen_name}";
			$retweet_update = "$status->{created_at}";

			# see if this person has retweeted this before, and we already
			# have a record of the retweet in our database
					
			$dbh2 = ConnectToMySql($Database);
			$query2 = "select tweet_id, user_name FROM retweets where tweet_id = '$tweet_id' and user_name = '$user_name' limit 1";	
			$sth2 = $dbh2->prepare($query2);
			$sth2->execute();
    
			@data2 = $sth2->fetchrow_array();
    
			# Uncomment if you want to see it in action
			# print "Query: $query\n";
		
			# Check to see if we had any results, and if not, then insert
			# tweet into database and send them a "thank you" tweet
			if (length($data2[0]) < 1)
			
			{
				# we haven't thanked this user yet - record the retweet
				# so we don't thank them more than once
				$dbh3 = ConnectToMySql($Database);
				$query3 = "insert into retweets (tweet_id, user_name, retweet_update) values ('$tweet_id','$user_name','$retweet_update')";
				$sth3 = $dbh3->prepare($query3);
				$sth3->execute();

				# Uncomment if you want to see it in action
				# print "Query2: $query2\n";


				# ----------------------------------------------------------------------------
				# send tweet
				# ----------------------------------------------------------------------------

				# This pause is just to slow down the action - you can remove this line if you want
				sleep 5;
				
				my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
					traits              => [qw/API::RESTv1_1/],
					consumer_key        => "$consumer_key",
					consumer_secret     => "$consumer_secret",
					access_token        => "$access_token",
					access_token_secret => "$access_token_secret",
					ssl                 => 1
					);
					
					# Here is the message you want to send - 
					# the thank-you to the user who sent the retweet
					$tweet = "\@$user_name thanks for the retweet!";

					# send thank-you tweet
					my $results = eval { $nt->update("$tweet") };

						undef @data2;
						undef @data3;
					}
		
					else
				
					{
  						# we have already thanked this user - as their name and this tweet-id was found in the database
    					print "----- Found tweet: $tweet_id\n";
    
						while (@data2) {

							print "----------------------------------------------------------------------\n";
							print "Checking retweet by $user_name for $tweet_id\n";
							print "Found retweet:  $tweet_id $user_name $retweet_update \n";

							$tweet_id = $data2[0];
							$user_name = $data2[1];
					
							print "***** Retweet by $user_name already in database \n";
							print "----------------------------------------------------------------------\n";
					
							#exit;
							undef @data2;
							undef @data3;

							# This pause is just to slow down the action - you can remove this line if you want
							sleep 5;

							# end while
							}
						
					# end else
					}

		# end for my $status ( @$results ) {
		}

# end while
}

exit;

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<../accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;host=$host;port=3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}

In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory below where my Perl script is located. This file – named accessTweets – contains this information:

database_name
hostname or IP
MySQL user name
password

I tested this on two twitter accounts, and everything worked for me – but let me know if you have problems. I am not the best Perl programmer, nor am I an expert at the twitter API, so there is probably a better/easier way to do this.

 



MySQL Dumping and Reloading the InnoDB Buffer Pool

MySQL’s default storage engine as of version 5.5 is InnoDB. InnoDB maintains a storage area called the buffer pool for caching data and indexes in memory. By keeping frequently-accessed data and indexes in memory, queries that use them are served from RAM and run much faster than reads from disk.

When you stop or restart MySQL, you lose the cached data stored in the buffer pool. There is a feature in MySQL 5.6 which allows you to dump the contents of the buffer pool before you shutdown the mysqld process. Then, when you start mysqld again, you can reload the contents of the buffer pool back into memory. You may also dump the buffer pool at any time for reloading later.

To see information about the buffer pool, use the SHOW ENGINE INNODB STATUS command:

mysql> SHOW ENGINE INNODB STATUS\G
....
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 274726912; in additional pool allocated 0
Dictionary memory allocated 308740
Buffer pool size   16384
Free buffers       15186
Database pages     1195
Old database pages 421
....

This example shows the buffer pool contains 1195 database pages (this example is a very small one from my home server). When you dump the buffer pool to disk, only the database pages are recorded. When you restart mysqld, the data from these pages will be loaded back into memory.

You may dump the buffer pool with this command:

mysql> SET GLOBAL innodb_buffer_pool_dump_now=ON;
Query OK, 0 rows affected (0.00 sec)

The buffer pool dump file is stored in your MySQL data directory.

# pwd
/usr/local/mysql/data
# ls -l ib_buffer_pool
-rw-rw----  1 mysql  wheel  7122 Feb 13 13:58 ib_buffer_pool

The dump is a plain-text file, and we can see the file is 1195 lines long and contains only the database page references – each line is a tablespace (space) ID and a page number.

# file ib_buffer_pool
ib_buffer_pool: ASCII text
# wc -l ib_buffer_pool
    1195 ib_buffer_pool
# head ib_buffer_pool
0,7
0,1
0,3
0,2
0,4
0,11
0,5
0,6
0,301
0,522

If you have a large buffer pool, you can check on the status of the dump with this command:

SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status';

If you want to save the buffer pool when MySQL is shut down or restarted, use this command:

SET GLOBAL innodb_buffer_pool_dump_at_shutdown=ON;

To restore the buffer pool when starting MySQL, add this option to your mysqld command line (or place it in your my.cnf file):

--innodb_buffer_pool_load_at_startup=ON

Or, to load the buffer pool file while mysqld is running, use this command:

SET GLOBAL innodb_buffer_pool_load_now=ON;

Reloading the buffer pool is very fast, and is performed in the background so users will not be affected. More information about preloading the buffer pool may be found at http://dev.mysql.com/doc/refman/5.6/en/innodb-preload-buffer-pool.html.
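
If you want to script the manual dump and wait for it to finish, here is a minimal Perl/DBI sketch. The connection values are placeholders, and the MySQL user needs the privileges required to set global variables:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# placeholder connection values - replace with your own host, user and password
my $dbh = DBI->connect("DBI:mysql:database=mysql;host=127.0.0.1",
     "root", "my_pass", { RaiseError => 1 });

# trigger a buffer pool dump right now
$dbh->do("SET GLOBAL innodb_buffer_pool_dump_now=ON");

# poll the status variable until the dump reports that it has completed
while (1) {
     my ($name, $value) = $dbh->selectrow_array(
          "SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status'");
     print "$value\n";
     last if $value =~ /completed/i;
     sleep 1;
}

$dbh->disconnect;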

 



MySQL 5.7 labs and the HTTP Plugin – inserting, updating and deleting records in MySQL via HTTP

In the MySQL Labs release of MySQL 5.7, there is a new HTTP plugin. The HTTP plugin documentation from the labs site provides this information (from MySQL Labs):

The HTTP Plugin for MySQL adds HTTP(S) interfaces to MySQL. Clients can use the HTTP respectively HTTPS (SSL) protocol to query data stored in MySQL. The query language is SQL but other, simpler interfaces exist. All data is serialized as JSON. This version of MySQL Server HTTP Plugin is a Labs release, which means it’s at an early development stage. It contains several known bugs and limitation, and is meant primarily to give you a rough idea how this plugin will look some day. Likewise, the user API is anything but finalized. Be aware it will change in many respects.

In other words, with a simple HTTP URL, you can access and modify your data stored in MySQL. Here is an overview from the documentation:


The HTTP Plugin for MySQL is a proof-of concept of a HTTP(S) interface for MySQL 5.7.

The plugin adds a new protocol to the list of protocols understood by the server. It adds the HTTP respectively HTTPS (SSL) protocol to the list of protocols that can be used to issue SQL commands. Clients can now connect to MySQL either using the MySQL Client Server protocol and programming language-dependent drivers, the MySQL Connectors, or using an arbitrary HTTP client.
Results for SQL commands are returned using the JSON format.

The server plugin is most useful in environments where protocols other than HTTP are blocked:
• JavaScript code run in a browser
• an application server behind a firewall and restricted to HTTP access
• a web services oriented environment

In such environments the plugin can be used instead of a self developed proxy which translates HTTP requests into MySQL requests. Compared to a user-developed proxy, the plugin means less latency, lower complexity and the benefit of using a MySQL product. Please note, for very large deployments an architecture using a proxy not integrated into MySQL may be a better solution to clearly separate software layers and physical hardware used for the different layers.

The HTTP plugin implements multiple HTTP interfaces, for example:
• plain SQL access including meta data
• a CRUD (Create-Read-Update-Delete) interface to relational tables
• an interface for storing JSON documents in relational tables

Some of the interfaces follow Representational State Transfer (REST) ideas, some don’t. See below for a description of the various interfaces.

The plugin maps all HTTP accesses to SQL statements internally. Using SQL greatly simplifies the development of the public HTTP interface. Please note, at this early stage of development performance is not a primary goal. For example, it is possible to develop a similar plugin that uses lower level APIs of the MySQL server to overcome SQL parsing and query planning overhead.


In this post, I will show you how to install the plugin and use HTTP commands to retrieve data. The documentation provides other examples as well. I won’t cover every feature of the plugin here, so you will want to download the documentation for the full details.

First, you will need to download the MySQL Labs 5.7 version which includes the plugin. This download is available from the MySQL Labs web site.

After MySQL 5.7 is installed, you will want to add these lines to your my.cnf/my.ini file under the [mysqld] section:

#
# Default database, if no database given in URL
#
myhttp_default_db = httptest
#
# Non-SSL default MySQL SQL user
#
myhttp_default_mysql_user_name = http_sql_user
myhttp_default_mysql_user_passwd = sql_secret
myhttp_default_mysql_user_host = 127.0.0.1

There are other options for the plugin, but we will skip them for this post.

# Change only, if need be to run the examples!
#
# General settings
# 
# myhttp_http_enabled = 1
# myhttp_http_port = 8080
# myhttp_crud_url_prefix = /crud/
# myhttp_document_url_prefix = /doc/
# myhttp_sql_url_prefix = /sql/
# 
# 
# 
# Non-SSL HTTP basic authentication
# 
# myhttp_basic_auth_user_name = basic_auth_user
# myhttp_basic_auth_user_passwd = basic_auth_passwd

# 
# SSL
# 
# myhttp_https_enabled = 1
# myhttp_https_port = 8081
# myhttp_https_ssl_key = /path/to/mysql/lib/plugin/myhttp_sslkey.pem

After modifying the my.cnf/my.ini file, restart MySQL and then install the plugin from a mysql prompt. Before proceeding, check to make sure the plugin is active:

mysql> INSTALL PLUGIN myhttp SONAME 'libmyhttp.so';
Query OK, 0 rows affected (0.09 sec)


mysql> SELECT * FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME='myhttp'\G
*************************** 1. row ***************************
           PLUGIN_NAME: myhttp
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: DAEMON
   PLUGIN_TYPE_VERSION: 50705.0
        PLUGIN_LIBRARY: libmyhttp.so
PLUGIN_LIBRARY_VERSION: 1.5
         PLUGIN_AUTHOR: Andrey Hristov, Ulf Wendel
    PLUGIN_DESCRIPTION: HTTP Plugin for MySQL
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: ON
1 row in set (0.03 sec)

We will need to create the user for accessing our database, and grant permissions:

mysql> CREATE USER 'http_sql_user'@'127.0.0.1' IDENTIFIED WITH mysql_native_password;
Query OK, 0 rows affected (1.89 sec)

mysql> SET old_passwords = 0;
Query OK, 0 rows affected (0.05 sec)

mysql> SET PASSWORD FOR 'http_sql_user'@'127.0.0.1' = PASSWORD('sql_secret');
Query OK, 0 rows affected (0.05 sec)

mysql> GRANT ALL ON myhttp.* TO
    -> 'http_sql_user'@'127.0.0.1';
Query OK, 0 rows affected (0.12 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.58 sec)

We will need to create a table for our example. The table will be a very simple table with three fields – ID, first and last names:

mysql> CREATE TABLE `names` (
    ->   `id` int(11) NOT NULL DEFAULT '1000',
    ->   `name_first` varchar(40) DEFAULT NULL,
    ->   `name_last` varchar(40) DEFAULT NULL,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.04 sec)

We need to insert some data into the table:

INSERT INTO `names` (name_first, name_last) VALUES ('Clark','Kent');
INSERT INTO `names` (name_first, name_last) VALUES ('Bruce','Wayne');
INSERT INTO `names` (name_first, name_last) VALUES ('Hal','Jordan');
INSERT INTO `names` (name_first, name_last) VALUES ('Barry','Allen');
INSERT INTO `names` (name_first, name_last) VALUES ('Diana','Prince');
INSERT INTO `names` (name_first, name_last) VALUES ('Arthur','Curry');
INSERT INTO `names` (name_first, name_last) VALUES ('Oliver','Queen');
INSERT INTO `names` (name_first, name_last) VALUES ('Ray','Palmer');
INSERT INTO `names` (name_first, name_last) VALUES ('Carter','Hall');
Query OK, 9 rows affected (0.01 sec)

Now that we have our table and table data, we can test a select statement with an HTTP URL. You may use a browser for this, but since I like to work with command-line tools, I am going to use curl, a tool for transferring data to and from URLs. Here is a simple select statement via curl – use the plus sign (+) in place of spaces.

Select all of the names in the table:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names"
[
{
"meta":[
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_first","org_column":"name_first","charset":33,"length":120,"flags":0,"decimals":0},
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_last","org_column":"name_last","charset":33,"length":120,"flags":0,"decimals":0}
],
"data":[ 
	["Clark","Kent"],
	["Bruce","Wayne"],
	["Hal","Jordan"],
	["Barry","Allen"],
	["Diana","Prince"],
	["Arthur","Curry"],
	["Oliver","Queen"],
	["Ray","Palmer"],
	["Carter","Hall"]
],
"status":[{"server_status":34,"warning_count":0}]
}
]

If you want to use a browser instead of curl, submit the same URL and authenticate the connection (enter the basic-auth user name and password) when prompted:

URL:  http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names

Selecting a single name:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names+where+name_first+=+'Clark'"
[
{
"meta":[
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_first","org_column":"name_first","charset":33,"length":120,"flags":0,"decimals":0},
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_last","org_column":"name_last","charset":33,"length":120,"flags":0,"decimals":0}
],
"data":[ 
	["Clark","Kent"]
],
"status":[{"server_status":34,"warning_count":0}]
}
]

Deleting a row:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/delete+from+names+where+name_first+=+'Hal'"
{"server_status":34,"warning_count":0,"affected_rows":1,"last_insert_id":0}

Inserting a row:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/INSERT+INTO+names+(name_first,+name_last)+VALUES+('Hal','Jordan');"
{"server_status":2,"warning_count":0,"affected_rows":1,"last_insert_id":1018}

In a future post, I will show you how to use Perl to connect via HTTP and then parse the results.
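
Until then, here is a minimal sketch of one way to do it, using the LWP::UserAgent, HTTP::Request and JSON modules (assumed to be installed), with the same basic-auth values shown in the my.cnf example above. It runs the first SELECT and prints the rows found in the "data" element of the response:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;
use JSON;

# these match the basic-auth settings shown in the my.cnf example above
my $user = 'basic_auth_user';
my $pass = 'basic_auth_passwd';

# build the URL the same way as the curl examples - plus signs in place of spaces
my $sql = 'SELECT name_first, name_last FROM names';
$sql =~ s/ /+/g;
my $url = "http://127.0.0.1:8080/sql/myhttp/$sql";

my $req = HTTP::Request->new(GET => $url);
$req->authorization_basic($user, $pass);

my $ua  = LWP::UserAgent->new;
my $res = $ua->request($req);
die "HTTP error: ", $res->status_line, "\n" unless $res->is_success;

# the plugin returns a JSON array; the first element holds "meta", "data" and "status"
my $decoded = decode_json($res->content);
for my $row (@{ $decoded->[0]{data} }) {
     print join(', ', @$row), "\n";
}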

 



Using Perl to send tweets stored in a MySQL database to twitter

Twitter is not my favorite social media site. Using twitter is like driving downtown, screaming what you want to say out the window, and hoping someone hears you. There might be tens of thousands of people downtown, but your message will only be heard by a few. Your best bet is to repeat your message as often as possible.

However, twitter is free and if you want to reach as many people (theoretically) as possible, you might as well use it. But sending tweets on a scheduled basis can be a pain. There are client programs available which allow you to schedule your tweets (Hootsuite is one I have used in the past). You can load your tweets in the morning, and have the application tweet for you all day long. But you still have to load the application with your tweets – one by one.

A friend of mine asked me if there was a way to send the same 200 tweets over and over again, spaced out every 20 minutes or so. He has a consulting business, and just wants to build up a list of twitter followers by tweeting inspirational quotes. If he tweets for twenty hours a day, and sends three quotes an hour, it will take him a little more than three days to burn through his 200 quotes. And he can always add more quotes or space out the tweets as necessary. I decided to write a Perl script to do this for him.

To start, we will need a MySQL database to store the tweets. I use MySQL’s Workbench product as my client application for connecting to MySQL. From within Workbench, I can create my tweet database:

CREATE DATABASE `tweets` /*!40100 DEFAULT CHARACTER SET latin1 */

I will then need a table inside my database to store my tweets.

CREATE TABLE `tweets` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `tweet` char(140) DEFAULT NULL,
  `last_tweet_date` datetime NOT NULL DEFAULT '2015-01-01 00:00:00',
  `tweet_count` int(5) DEFAULT NULL,
  `tweet_length` int(3) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=100001 DEFAULT CHARSET=latin1

The tweet messages will be stored in the tweet column, and the last date the tweet was sent will have a time stamp in the last_tweet_date column. When we perform a search to find the next tweet, we will simply sort our search by last_tweet_date and then id, and limit our output to one tweet. After we send the tweet, we will update the last_tweet_date column, which sends that tweet to the end of the line. The script will also increment the tweet_count value (the number of times the tweet has been sent), and record the length of the tweet in tweet_length. I do not do any error checking in the script to make sure the tweet was sent, but errors are printed.

We now need to insert some tweets into our table. Since my friend is going to be sending inspirational quotes, I found a few I can import. In order to make importing easier, all single quote marks (') will be replaced by the caret symbol (^). I can then swap these symbols back inside the Perl script. You could use a backslash (\) before the single quote instead, but I prefer a single-character substitution so I know exactly how long the tweet will be.

I will also use the tilde (~) as a way to designate a carriage return in my tweet. The Perl script will replace the tilde with a carriage return (\n). Two tildes give me two carriage returns and a blank line.

insert into tweets (tweet) VALUES('I^m not afraid. -Luke~~You will be. -Yoda~~http://SomeWebSiteHere.com');
insert into tweets (tweet) VALUES('Do or do not.  There is no try.~~-Yoda~~http://SomeWebSiteHere.com');
insert into tweets (tweet) VALUES('No, I am your father.~~-Darth~~http://SomeWebSiteHere.com');
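
To make the substitutions concrete, here is a quick sketch of what the script does to a stored tweet before sending it (the same two s/// lines appear in the full script below):

# take a tweet exactly as it is stored in the tweets table
my $stored = 'I^m not afraid. -Luke~~You will be. -Yoda~~http://SomeWebSiteHere.com';

(my $to_send = $stored) =~ s/~/\n/g;   # each tilde becomes a carriage return
$to_send =~ s/\^/'/g;                  # each caret becomes a single quote

print $to_send, "\n";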

I also create a history table to store the tweet identification numbers. Each tweet is assigned a unique number by twitter, and this is how you can access this tweet. I save this information so I can delete the tweets later using this number. I have included a short script for deleting tweets near the end of this post.

CREATE TABLE `history` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `tweet` char(140) DEFAULT NULL,
  `tweet_id` varchar(30) DEFAULT NULL,
  `tweet_update` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1000032 DEFAULT CHARSET=latin1

You will need to register your application with twitter via apps.twitter.com, and obtain the following:

consumer_key
consumer_secret
access_token
access_token_secret

You will also need to register your mobile phone in order to link your twitter account to your application. I have not figured out how to use this script with someone else’s account, as the instructions for scripting Perl for use with twitter are not very thorough. I will try to add this at a later date.

Now that you have your application information and all of your tables created with data inserted, here is the Perl script for sending tweets. You will need to install the necessary Perl modules that are used.


#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use Scalar::Util qw(blessed);
use DBI;
use DBD::mysql;

my $Database = "tweets";

# ----------------------------------------------------------------------------------
# this has to be near the top - as other parts of the script rely on these figures
# ----------------------------------------------------------------------------------

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

$year = $year + 1900;
$mon = $mon + 1;

# add a zero if the value is less than 10

if ($sec < 10) { $sec = "0$sec"; }
if ($min < 10) { $min = "0$min"; }
if ($hour < 10) { $hour = "0$hour"; }
if ($mday < 10) { $mday = "0$mday"; }
if ($mon < 10) { $mon = "0$mon"; }
if ($year < 10) { $year = "0$year"; }
if ($wday < 10) { $wday = "0$wday"; }
if ($yday < 10) { $yday = "0$yday"; }
if ($isdst < 10) { $isdst = "0$isdst"; }

$DateTime = "$year-$mon-$mday $hour:$min:$sec";

# ----------------------------------------------------------------------------------
# retrieve tweet from database
# ----------------------------------------------------------------------------------

     $dbh = ConnectToMySql($Database);
     $query = "select id, tweet, last_tweet_date, tweet_count FROM tweets order by last_tweet_date, id limit 1";	
     $sth = $dbh->prepare($query);
     $sth->execute();
    
          while (@data = $sth->fetchrow_array()) {
            
		 $id = $data[0];
		 $tweet = $data[1];
		 $last_tweet_date = $data[2];
		 $tweet_count = $data[3];
	}

$tweet_original = $tweet;

# ----------------------------------------------------------------------------
# find tildes ~ and substitute them with carriage returns
# find carets ^ and substitute them with single quotes
# ----------------------------------------------------------------------------

$tweet =~ s/~/\n/g;
$tweet =~ s/\^/\'/g;

# ----------------------------------------------------------------------------------
# check length of tweet
# ----------------------------------------------------------------------------------

$tweet_length = length($tweet);

if (length($tweet) > 140)

{
	print "Error - tweet is longer than 140 characters\n";
	exit;
}

# add to the tweet count
$tweet_count++;

# ----------------------------------------------------------------------------
# send tweet
# ----------------------------------------------------------------------------

my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      traits              => [qw/API::RESTv1_1/],
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1
);

my $results = eval { $nt->update("$tweet") };
 
  if ( my $err = $@ ) {
      die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

      warn "HTTP Response Code: ", $err->code, "\n",
           "HTTP Message......: ", $err->message, "\n",
           "Twitter error.....: ", $err->error, "\n";
  } 

# ----------------------------------------------------------------------------
# update mysql with new date for last_tweet date/time
# ----------------------------------------------------------------------------

$dbh = ConnectToMySql($Database);
$query = "UPDATE tweets SET last_tweet_date = '$DateTime' , tweet_count = '$tweet_count' , tweet_length = '$tweet_length' where id = '$id'";
$sth = $dbh->prepare($query);
$sth->execute();

# ----------------------------------------------------------------------------
# get the status id of the last tweet
# ----------------------------------------------------------------------------
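# Note: replace 2044_The_Book below with your own twitter screen name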

my $statuses = $nt->user_timeline({ user => "2044_The_Book", count=> 1 });

for my $status ( @$statuses ) {
	$tweet_id = "$status->{id}";
}

if ( my $err = $@ ) {
      die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

      warn "HTTP Response Code: ", $err->code, "\n",
           "HTTP Message......: ", $err->message, "\n",
           "Twitter error.....: ", $err->error, "\n";
} 

# ----------------------------------------------------------------------------
# replace special characters
# ----------------------------------------------------------------------------

$tweet =~ s/\n/~/g;
$tweet =~ s/\'/^/g;

# update mysql with new date for last_tweet date/time

$dbh = ConnectToMySql($Database);	
$query = "insert into history (tweet,tweet_id,tweet_update) values ('$tweet_original','$tweet_id','$DateTime')";
$sth = $dbh->prepare($query);
$sth->execute();

#----------------------------------------------------------------------
sub ConnectToMySql {
#----------------------------------------------------------------------

   my ($db) = @_;

   open(PW, "<..\/accessTweets") || die "Can't access login credentials";
   my $db= <PW>;
   my $host= <PW>;
   my $userid= <PW>;
   my $passwd= <PW>;

   chomp($db);
   chomp($host);
   chomp($userid);
   chomp($passwd);
   
   my $connectionInfo="dbi:mysql:$db;$host:3306";
   close(PW);

   # make connection to database
   my $l_dbh = DBI->connect($connectionInfo,$userid,$passwd);
   return $l_dbh;

}



In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory below where my Perl script is located. This file contains this information:

database_name
hostname or IP
MySQL user name
password

You can include your information inside the file if you prefer.

If you want to delete your tweets, you can create a script to access the tweets in your history table, and then delete them one at a time. Here is an example without the database connections:


#!/usr/bin/perl
 
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;
use DBD::mysql;

# ----------------------------------------------------------------------------
# delete tweet
# ----------------------------------------------------------------------------

# replace the values for $consumer_key $consumer_secret $access_token $access_token_secret
# with your values for your application

my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
      consumer_key        => "$consumer_key",
      consumer_secret     => "$consumer_secret",
      access_token        => "$access_token",
      access_token_secret => "$access_token_secret",
      ssl                 => 1,
	 );

my $statuses = $nt->destroy_status({ id => "$tweet_id" });

exit;

Be sure to replace the value of $tweet_id with the value from the tweet you want to delete.
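
If you want to clean out everything the script has posted, here is a hedged sketch of what the full loop might look like: it reads each tweet_id from the history table and calls destroy_status for it. The application keys and MySQL credentials are placeholders, and the pause is there to keep you under the API rate limits.

#!/usr/bin/perl
use strict;
use warnings;
use Net::Twitter::Lite::WithAPIv1_1;
use DBI;

# placeholder values - replace with your own application keys and MySQL credentials
my ($consumer_key, $consumer_secret, $access_token, $access_token_secret)
     = ('key', 'secret', 'token', 'token_secret');

my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
     consumer_key        => $consumer_key,
     consumer_secret     => $consumer_secret,
     access_token        => $access_token,
     access_token_secret => $access_token_secret,
     ssl                 => 1,
);

my $dbh = DBI->connect('DBI:mysql:database=tweets;host=127.0.0.1',
     'user', 'password', { RaiseError => 1 });

# pull every tweet id we have recorded in the history table
my $sth = $dbh->prepare('SELECT tweet_id FROM history');
$sth->execute();

while (my ($tweet_id) = $sth->fetchrow_array()) {
     print "Deleting tweet $tweet_id\n";
     my $result = eval { $nt->destroy_status({ id => $tweet_id }) };
     warn "Could not delete $tweet_id: $@\n" if $@;
     sleep 5;    # slow down to stay under the API rate limits
}

$dbh->disconnect;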

 



Upgrade MySQL to a new version with a fresh installation & use shell scripts and mysqldump to reload your data

There are several ways to upgrade MySQL. In this post, we will use a combination of shell scripts and the mysqldump application to export our MySQL data, and then re-import it back into the upgraded version of MySQL.

In this example, we will be doing a minor version upgrade. We will be going from 5.6.17 to 5.6.19. This method may not work if you are upgrading from one major release to another – from 5.1 to 5.5, or 5.5 to 5.6. You will want to check each version and review the new features/functions and also what features/functions have been deprecated. We are also assuming that no one will be using the database during the time it takes for us to do the upgrade.

If you want to upgrade from a version that is more than one major release apart from your current version, then you will want to upgrade to each successive version. For example, if you want to upgrade from 5.0 to 5.6, you will want to upgrade from 5.0 to 5.1, then 5.1 to 5.5, and then 5.5 to 5.6.

You don’t have to export all of your data when you upgrade MySQL. There are ways of upgrading without doing anything to your data. But in this post, I will be exporting the data and re-importing it, for a fresh installation. I don’t have that much data, so I don’t mind doing the export and import. If you have a lot of data, you might want to consider other options.

To get an idea of the size of your database(s), here is a quick script that you can use:

SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" 
FROM information_schema.TABLES GROUP BY table_schema ; 

When I perform an export/import, I like to export each database as a separate mysqldump file, and then also export all of the databases together in one large file. By exporting/importing the individual databases, if you have an error importing one of the database dump files, you can isolate the error to a single database. It is much easier to fix the error in one smaller data dump file than with a larger all-inclusive dump file.

I am also going to create some simple shell scripts to help me create the commands that I need to make this task much easier. First, you will want to create a directory to store all of the scripts and dump files. Do all of your work inside that directory.

Next, I want to get a list of all of my databases. I will log into mysql and then issue the show databases; command (which returns the same list as select schema_name from information_schema.schemata;):

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| 12thmedia          |
| cbgc               |
| comicbookdb        |
| coupons            |
| healthcheck        |
| innodb_memcache    |
| landwatch          |
| laurelsprings      |
| ls_directory       |
| mem                |
| mysql              |
| performance_schema |
| protech            |
| scripts            |
| stacy              |
| storelist          |
| test               |
| testcert           |
| tony               |
| twtr               |
| watchdb            |
+--------------------+
22 rows in set (1.08 sec)

I can then just highlight and copy the list of databases, and put that list into a text file named “list.txt“. I do not want to include these databases in my export:

information_schema
mysql
performance_schema
test

However, I will export the mysql database later. You will need to check the MySQL manual to make sure that there haven’t been any changes to the mysql system tables from one version to the next.

I will need to manually remove those databases from my list.txt file. I then want to remove all of the spaces and pipe symbols from the text file – assuming that you do not have any spaces in your database names. Instead of using spaces in a database name, I prefer to use an underscore character “_“. These scripts assume that you don’t have any spaces in your database names.

If you know how to use the vi editor, you can do a substitution for the pipes and spaces with these commands:

:%s/ //g
:%s/|//g

Otherwise, you will want to use another text editor and manually edit the list to remove the spaces and pipe symbols. Your finished list.txt file should look like this:

12thmedia
cbgc
comicbookdb
coupons
healthcheck
innodb_memcache
landwatch
laurelsprings
ls_directory
mem
protech
scripts
stacy
storelist
testcert
tony
twtr
watchdb

You can then create a simple shell script to help create your mysqldump commands – one command for each database. You will want to create this script and the other scripts in the directory you created earlier. Name the script export.sh. You can also change the mysqldump options to meet your needs. I am using GTID’s for replication, so I want to use the option --set-gtid-purged=OFF. You will also want to change the value of my password my_pass to your mysql password. You can also skip including the password by using the -p option, and just enter the password each time you run the mysqldump command.

# export.sh
# script to create the database export commands
k=""
for i in `cat list.txt`
do

echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $i > "$i"_backup.sql"

k="$k $i"

done

# Optional - export the entire database
# use the file extension of .txt so that your script won't import it later
echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $k > all_db_backup.txt"

For the individual databases, I am using the extension .sql. For the dump file that contains all of the databases, I am using the extension .txt – as I use a wildcard search later to get a list of the dump files, and I don’t want to import the one dump file that contains all of the databases.

Now you can run the export.sh script to create a list of your mysqldump commands, and you are going to direct the output into another shell script named export_list.sh.

# sh export.sh > export_list.sh

We can now take a look at what is in the export_list.sh file

# cat export_list.sh
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases cbgc > cbgc_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases comicbookdb > comicbookdb_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases coupons > coupons_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases healthcheck > healthcheck_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases innodb_memcache > innodb_memcache_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases landwatch > landwatch_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases laurelsprings > laurelsprings_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases ls_directory > ls_directory_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases mem > mem_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases protech > protech_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases scripts > scripts_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases stacy > stacy_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases storelist > storelist_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases testcert > testcert_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases tony > tony_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases twtr > twtr_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases watchdb > watchdb_backup.sql

mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases  12thmedia cbgc comicbookdb coupons healthcheck innodb_memcache landwatch laurelsprings ls_directory mem protech scripts stacy storelist testcert tony twtr watchdb > all_db_backup.txt

Now you have created a list of mysqldump commands that you can execute to dump all of your databases. You can now go ahead and execute your mysqldump commands by running the export_list.sh script:

# sh export_list.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

The message “Warning: Using a password on the command line interface can be insecure.” is shown because you included the value for “–password“. If you don’t want to put your password on the command line, just change that option to “-p“, and you will have to manually enter your MySQL root user’s password after each mysqldump command.

Here is a list of the dump files that was produced:

# ls -l
total 21424
-rw-r--r--  1 root  staff    26690 Aug  1 16:25 12thmedia_backup.sql
-rw-r--r--  1 root  staff  5455275 Aug  1 16:26 all_db_backup.txt
-rw-r--r--  1 root  staff  1746820 Aug  1 16:25 cbgc_backup.sql
-rw-r--r--  1 root  staff   492943 Aug  1 16:25 comicbookdb_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 coupons_backup.sql
-rw-r--r--  1 root  staff     3366 Aug  1 16:25 export_list.sh
-rw-r--r--  1 root  staff     1077 Aug  1 16:25 healthcheck_backup.sql
-rw-r--r--  1 root  staff     3429 Aug  1 16:25 innodb_memcache_backup.sql
-rw-r--r--  1 root  staff  1815839 Aug  1 16:25 landwatch_backup.sql
-rw-r--r--  1 root  staff   642965 Aug  1 16:25 laurelsprings_backup.sql
-rw-r--r--  1 root  staff   660254 Aug  1 16:25 ls_directory_backup.sql
-rw-r--r--  1 root  staff     1037 Aug  1 16:25 mem_backup.sql
-rw-r--r--  1 root  staff     1057 Aug  1 16:25 protech_backup.sql
-rw-r--r--  1 root  staff     2889 Aug  1 16:25 scripts_backup.sql
-rw-r--r--  1 root  staff    11107 Aug  1 16:25 stacy_backup.sql
-rw-r--r--  1 root  staff     4002 Aug  1 16:25 storelist_backup.sql
-rw-r--r--  1 root  staff     1062 Aug  1 16:25 testcert_backup.sql
-rw-r--r--  1 root  staff     4467 Aug  1 16:25 tony_backup.sql
-rw-r--r--  1 root  staff     1042 Aug  1 16:25 twtr_backup.sql
-rw-r--r--  1 root  staff    52209 Aug  1 16:25 watchdb_backup.sql

You will now want to dump your mysql database (the grant tables), so you don’t have to recreate the users, passwords and privileges after the new install.

mysqldump -uroot --password=my_pass --set-gtid-purged=OFF mysql > mysql_user_backup.txt

I am once again using the .txt extension for this file.

After you execute the above command, make sure that the dump file was created:

# ls -l mysql_user_backup.txt
-rw-r--r--  1 root  staff  9672 Aug  1 16:32 mysql_user_backup.txt

We have now finished exporting all of our data, including the mysql database. You will need to shut down MySQL. You may use mysqladmin to shut down your database (as shown below), or see the MySQL manual for other ways to shut down MySQL.

# mysqladmin -uroot --password=my_pass shutdown
Warning: Using a password on the command line interface can be insecure.

Before continuing, you might want to check to make sure that the mysqld process isn’t still active.

# ps -ef|grep mysqld
    0 18380 17762   0   0:00.00 ttys002    0:00.00 grep mysqld

You are now going to want to change the name of your mysql directory. This will give you access to the old directory in case the upgrade fails. For my OS (Mac OS 10.9), my MySQL home directory is a symbolic link to another directory that contains the actual MySQL data. All I have to do is to remove the symbolic link. A new symbolic link will be created with the new install. Otherwise, just use the mv command to rename your old MySQL directory.

# cd /usr/local/
# ls -ld mysql* 
lrwxr-xr-x   1 root  wheel   36 Aug  9  2013 mysql -> mysql-advanced-5.6.17-osx10.6-x86_64
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

All I have to do is to remove the link, and the MySQL directory will still be there:

# rm mysql
# ls -ld mysql* 
drwxr-xr-x  18 root  wheel  612 Jan 16  2014 mysql-advanced-5.6.17-osx10.6-x86_64

Now I am ready to install the new version of MySQL. I won’t cover the installation process here – see the MySQL installation documentation for the details.

Tip: After you have installed MySQL, don’t forget to run this script from your MySQL home directory. This will install your mysql database tables. Otherwise, you will get an error when you try to start the mysqld process.

# ./scripts/mysql_install_db

Now you can start the mysqld process. See the MySQL manual if you don’t know how to start MySQL.

You can test to see if the new installation of MySQL is running by either checking the process table, or logging into mysql. With a fresh install of 5.6, you should not have to include a user name or password.

Note: (Future versions of MySQL may automatically create a random root password and put it in your data directory. You will then need to use that password to login to MySQL for the first time. Check the user’s manual for any MySQL versions beyond 5.6.)

# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

Now that MySQL is up and running, leave the mysql terminal window open, and open another terminal window so you can import your mysql table information from your dump file:

# mysql < /users/tonydarnell/mysql_2014_0731/2014_0731_mysql_backup.sql

You won't be able to log in with your old user names and passwords until you execute the flush privileges command. So, in your other terminal window with the mysql prompt:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Open another terminal window and see if you can login with your old mysql user name and password:

# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
....

mysql>

You can then look at the user names and passwords in the mysql.user table:

mysql> select user, host, password from mysql.user order by user, host;
+----------------+---------------+-------------------------------------------+
| user           | host          | password                                  |
+----------------+---------------+-------------------------------------------+
| root           | 127.0.0.1     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.2   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.5   | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | 192.168.1.50  | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root           | localhost     | *BF6F71512345332CAB67E7608EBE63005BEB705C |
+----------------+---------------+-------------------------------------------+
5 rows in set (0.00 sec)


OPTIONAL:
Since I am using GTID’s for replication, I can check to see how many transactions have been completed, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000005
         Position: 644455
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: e1eb3f38-18da-11e4-aa44-0a1a64a61679:1-124
1 row in set (0.00 sec)

We are now ready to import the database dump files. We can use this script to create the import commands. Copy this into a text file named import.sh:

# import.sh
# script to import all of the export files
# run this script in the same directory as the exported dump files
#
> import_files.sh
directory=`pwd`
for file in `ls *sql`
do

if [[ $(echo $file | grep -c '.txt') != 0 ]];then

echo "# found a .txt dump file - do nothing"

else

echo "mysql -uroot -p"my_pass"  < $directory/$file"
echo "mysql -uroot -p"my_pass"  > import_files.sh

fi

done

Then run the import.sh script. The script will print the output to the terminal window as well as into a new script file named import_files.sh.

# sh import.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

Look at the contents of the new script file – import_files.sh – to make sure that it contains all of the database files. You will use this file to help you import your dump files.

# cat import_files.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql

WARNING: Be sure that this script file does not contain the main dump file or the mysql user’s file that we created.


I was exporting and importing eighteen (18) database files, so I can also check the line count of the import_files.sh script to make sure it matches:

# wc -l import_files.sh
      18 import_files.sh

I am now ready to import my files.


Optional: add the -v for verbose mode – sh -v import_files.sh


# sh import_files.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
....

Your databases should now be imported into your new instance of MySQL. You can always re-run the database-size query from the beginning of this post to make sure the databases are the same size as before.
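
If you would rather not compare the sizes by eye, here is a small Perl/DBI sketch (the credentials are placeholders) that runs the same size query used at the beginning of this post and prints one line per database. Run it against the old instance and again against the new one, and compare the output:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# placeholder credentials - point this at the instance you want to measure
my $dbh = DBI->connect('DBI:mysql:database=information_schema;host=127.0.0.1',
     'root', 'my_pass', { RaiseError => 1 });

# the same size query used at the beginning of this post
my $sth = $dbh->prepare(
     'SELECT table_schema, SUM(data_length + index_length) / 1024 / 1024
        FROM information_schema.TABLES
    GROUP BY table_schema');
$sth->execute();

while (my ($schema, $size_mb) = $sth->fetchrow_array()) {
     $size_mb = 0 unless defined $size_mb;
     printf "%-20s %10.2f MB\n", $schema, $size_mb;
}

$dbh->disconnect;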


OPTIONAL:
Since I am using GTID’s for replication, I can check to see how many transactions have been completed after importing the dump files, by issuing the show master status command:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000003
         Position: 16884001
     Binlog_Do_DB: 
 Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: cc68d008-18f3-11e4-aae6-470d6cf89709:1-43160
1 row in set (0.00 sec)

Your new and fresh installation of MySQL should be ready to use.

NOTE: A thank-you to Daniel Van Eeden for pointing out a mistake that I had made.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots

 

Visit http://2044thebook.com for more information.