Checking on the Progress of Large DML Commands in MySQL Using Perl – Part One
November 2, 2011
Part One of Two: Checking on database activity when running a large DML (Data Manipulation Language) statement – such as INSERT, DELETE, UPDATE or SELECT.
Part One: Inserting a million rows into a database.
A friend of mine asked a question: “Is there any way you can track how far you have advanced in a slow-moving ALTER or OPTIMIZE statement?” A customer was performing some modifications on a database with tens of millions of rows, and they wanted to be able to see whether the command was making any progress.
Since the customer was using the InnoDB storage engine, I thought of a way to check on the progress, but only if nothing else (major) is happening in the database (more on this later).
With InnoDB, you can issue the SHOW ENGINE INNODB STATUS command (part of the InnoDB Monitors) and get a wealth of information. I am not going to list the full output from the command here, as it would consume too much space. However, towards the bottom of the output, under “ROW OPERATIONS”, you will notice a line that begins with “Number of rows inserted…”:
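As an illustration (the numbers below are made up), the line in question looks something like this:

```
Number of rows inserted 1526152, updated 203491, deleted 19299, read 4849239
```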
The above line starting with “Number of rows…” gives you a quick snapshot of how many rows have been inserted, updated, deleted or selected (read) up to that point in time. So, by invoking the SHOW ENGINE INNODB STATUS command twice and comparing the numbers, you can figure out what database activity has occurred since the last time you issued the command. But, if you invoke this command while the database is being used for other purposes, it will be difficult to figure out your progress on any of the four values shown, as the stats aren’t just for your activity.
So, all I needed to do was write a quick Perl script to monitor that particular line of output; then I could tell whether a certain statement was indeed being executed, and I could monitor its progress (assuming nothing else was happening on the database). The problem was that I didn’t have a database with a lot of records in it. But I did have a database server that didn’t have much activity. So, in part one, I will show you how to create a dummy table and then shove a million records into it. In part two (my next blog entry), I will show you the Perl script that monitors the changes in the output of the SHOW ENGINE INNODB STATUS command. (And yes, you could just run the SHOW ENGINE INNODB STATUS command manually over and over again, but what fun would that be?)
First I created a dummy table named “test_large”:
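A table along these lines works; the column names here are just placeholders, chosen to match the fake alphanumeric, numeric, and date/time data that the script generates:

```sql
CREATE TABLE test_large (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  alpha_data VARCHAR(64) NOT NULL,
  number_data BIGINT NOT NULL,
  date_added DATETIME NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
```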
Next, I created a Perl script that would create dummy/fake data and insert it into the table. This script uses a subroutine named “ConnectToMySql” that I covered in an earlier post – Connecting to MySQL with Perl. It also uses a subroutine to create fake alpha/numeric data (generate_random_string) and one that creates fake numeric data (generate_random_numbers). A third subroutine (get_date_time) is used to display the date and time (I use this subroutine when I want to insert a date and time into a MySQL database).
The script is fairly easy to use. You only need to set the $total_rows variable to the number of rows that you want to insert, and change the connection information for your particular database. I didn’t include anything in the script to show its progress, but you could certainly add that if you wish.
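Here is a sketch of such a script. For simplicity it calls DBI->connect directly instead of the ConnectToMySql subroutine from the earlier post, and the table and column names (test_large, alpha_data, number_data, date_added) are placeholders; adjust them to match your own table:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Number of rows to insert - change this to whatever you need
my $total_rows = 1_000_000;

# Returns a random alphanumeric string of the given length
sub generate_random_string {
    my ($length) = @_;
    my @chars = ('a'..'z', 'A'..'Z', 0..9);
    return join '', map { $chars[ int rand @chars ] } 1 .. $length;
}

# Returns a string of random digits of the given length
sub generate_random_numbers {
    my ($length) = @_;
    return join '', map { int rand 10 } 1 .. $length;
}

# Returns the current date and time in MySQL DATETIME format
sub get_date_time {
    my @t = localtime;
    return sprintf '%04d-%02d-%02d %02d:%02d:%02d',
        $t[5] + 1900, $t[4] + 1, $t[3], $t[2], $t[1], $t[0];
}

# The insert loop only runs when connection details are supplied,
# so the data-generation subroutines can be exercised on their own.
sub insert_rows {
    my ($dsn, $user, $pass, $rows) = @_;
    require DBI;
    my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
    my $sth = $dbh->prepare(
        'INSERT INTO test_large (alpha_data, number_data, date_added)
         VALUES (?, ?, ?)');
    for (1 .. $rows) {
        $sth->execute(generate_random_string(64),
                      generate_random_numbers(10),
                      get_date_time());
    }
    $dbh->disconnect;
}

insert_rows($ENV{TEST_DSN}, $ENV{TEST_USER}, $ENV{TEST_PASS}, $total_rows)
    if $ENV{TEST_DSN};
```

Inserting one row at a time like this is slow (hence the three hours below); batching multiple rows per INSERT would speed it up considerably.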
This script took about three hours to insert a million rows into the new table. And since this was a new table, every so often I would just check how many rows were in it with a simple SELECT statement:
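For example (note that SELECT COUNT(*) on an InnoDB table scans the table, so this check itself gets slower as the table grows):

```sql
SELECT COUNT(*) FROM test_large;
```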
Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots
Visit http://2044thebook.com for more information.