MySQL versus MongoDB for storing JSON documents in a doc-store database – which is faster?

MySQL added the ability to store JSON documents in a doc-store format with MySQL version 5.7 (general availability date – October 21, 2015). Although MySQL’s initial release of their doc-store feature did have some limitations, MySQL has made some really nice improvements in their version 8.0 release, and more recently, in their 8.0.19 release.

I came across a paper written by Erik Andersson and Zacharias Berggren titled “A Comparison Between MongoDB and MySQL Document Store Considering Performance”. I think the paper was written for a college project (and it might have been their senior thesis, as they were in the bachelor’s program in computer science). In the paper, they compared MongoDB 3.4.4 against MySQL version 5.7.12, which was an early release of MySQL version 5.7. It wasn’t a surprise to see that MongoDB’s document store was much faster than MySQL’s.

But, with all of the new and improved changes to MySQL’s document store in the latest release (version 8.0.19), I attempted to re-create their tests to see if MySQL had improved since version 5.7.12. Both products should have improved since the original tests in 2017.

Disclaimer

I am a solutions engineer at MySQL, and I am not a MongoDB user. There may be ways of configuring both databases to be faster, but I ran these tests without modifying any default variables that could affect performance. The only exception is that, for MySQL, I ran a separate test with the binary log disabled (I explain this later in the post).

There were two variables – neither related to performance – that I had to change. For MySQL, I had to set max_join_size=11000000 when running the queries against the largest database. For MongoDB, I had to set DBQuery.shellBatchSize=1000, or MongoDB would only return 20 rows at a time in the terminal window.

The Equipment

I ran the tests on my own server, and then on a Virtual Machine in Oracle’s Cloud Infrastructure (OCI). My server’s hardware was as follows:

  • Intel 8th Gen Core i7-8700K Processor (6 Cores / 12 Threads)
  • 32GB DDR4 DRAM 2666MHz
  • 500GB PC SSD SATA III 6 Gb/s M.2
  • Gigabyte Z370 AORUS motherboard
  • Mac OS 10.13.6

For OCI, I used the VM.Standard.B1.1 shape, which consisted of the following:

  • Single OCPU – 2.2 GHz Intel® Xeon® E5-2699 v4
  • 11.8 GB of Memory
  • 100 GB Remote Block Volume

Installation

NOTE: I am not going to go into detail about how to install MongoDB, NodeJS or MySQL, but I will provide the links.

I began by installing MongoDB via yum. Per the installation instructions, I created a MongoDB repo file (/etc/yum.repos.d/mongodb-org-4.2.repo) and added the following:

[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc

Note: gpgcheck enables GNU Privacy Guard signature checking, which helps verify that you are installing an authentic package from MongoDB, and not one from a third party.

I then used this command to install MongoDB via yum.

sudo yum install -y mongodb-org

I also had to install NodeJS. To install, I used the following commands:

curl -sL https://rpm.nodesource.com/setup_10.x | sudo bash -
sudo yum install nodejs

I downloaded the RPM for the latest (version 8.0.19) MySQL community release, and installed it via yum:

sudo yum localinstall mysql80-community-release-el7-3.noarch.rpm -y
sudo yum install mysql-community-server -y

I started both MongoDB and MySQL:

sudo service mongod start
sudo service mysqld start

And I confirmed the status of each:

sudo service mongod status
sudo service mysqld status

Now that I had MongoDB and MySQL installed, I needed some data.

Data

I found some basic JSON data on the data.gov web site – which is a great site for finding generic JSON data. In this case, I needed a large data set, so I found some JSON data which consisted of a list of businesses in the state of Washington (USA). The file I downloaded contained 2.6 million records. The JSON records contained this data:

{
  "Ubi": "1234567890",
  "Title": "GOVERNOR",
  "FirstName": "CHRISTOPHER",
  "MiddleName": "WALKEN",
  "LastName": "KRAMER",
  "Address": "324 SMITHY DR",
  "City": "NOWHERE",
  "State": "XA",
  "Zip": "05252"
}

I know this example isn’t a large document, but it was the only dataset that I could find with millions of records.

Managing the Data

I took the 2.6 million (2,600,000) records, and split the original single file into three files (using the split command). This produced two files containing one million records, and the last file contained 600,000 records. I discarded the last file.

I used one of the files containing one million records, and split it into ten files containing 100,000 records. I then took one of the files with 100,000 records, and split it into ten files with 10,000 records each. I did the same for the 10,000 record file, splitting it into ten 1,000 record files.
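The splitting above can be sketched with the split command. This sketch assumes one JSON record per line; the filenames here are stand-ins, not the actual data files:

```shell
# Create a stand-in file of 100 one-line records, then split it
# into ten files of 10 records each (suffixes part_aa .. part_aj).
seq 1 100 > records.json
split -l 10 records.json part_
ls part_* | wc -l        # number of files produced
```

The larger files were produced the same way, just with a bigger -l value (for example, -l 1000000 for the one-million-record files).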

I used the same JSON files for both MongoDB and MySQL.

Note: I am not going to share the data I used. Even though the data is public, it does contain personal information, and as an Oracle employee, sharing it might violate the company’s privacy policy.

The Databases

I created the same databases on both systems. The naming convention was as follows:

  • db_json_test10k = 10 x 1k records imported – 10k records total
  • db_json_test100k = 10 x 10k records imported – 100k records total
  • db_json_test1000k = 10 x 100k records imported – 1,000k records total
  • db_json_test10000k = 10 x 1000k records imported – 10,000k records total

When importing JSON documents, the databases are automatically created in MongoDB. With MySQL, you have to create them manually.

create database db_json_test10k;
create database db_json_test100k;
create database db_json_test1000k;
create database db_json_test10000k;

The collections inside the databases are created automatically by both MongoDB and MySQL during the import.

The Tests

I did not change any configuration variables for either database – except for what I put in the MongoDB repo file – so the default values were used for each. For MySQL, the binary log is enabled by default, so I ran the tests with the binary log turned on and with it turned off. The binary log records all transactions that could change data – in other words, all insert, update and delete transactions are written to a (binary) log on disk. Obviously, running the tests without the binary log enabled was faster in MySQL.
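For the binary-log-disabled runs, one way to turn it off at startup is a my.cnf entry (this is a sketch; your config file path may differ):

```ini
# /etc/my.cnf – MySQL 8.0 supports disabling binary logging at startup
[mysqld]
disable-log-bin
```

After a restart, SELECT @@log_bin; should return 0.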

Each test consisted of the following:

  • Drop all databases (if required)
  • Create the MySQL databases
  • Import 1k, 10k, 100k and 1,000k records (10 times each with unique data in each import)
  • Create indexes – executed once per round per database
  • Perform 100 search queries x 10 different times
  • Perform 100 update queries x 10 different times
  • Perform 100 delete queries x 10 different times
  • Repeat four times, for a total of five tests

I recorded the time it took for each part of the test to be executed, and inserted the test results into a spreadsheet. I used the same data files and queries for both MongoDB and MySQL.

The graphs will show you the time for each action and the results are labeled Mongo, MySQL and MySQL-noBL (with the binary log disabled). For the search, update and delete tests, I performed the same 100 unique queries, 10 times against each database. Each series of 10 tests was executed five times, and the time needed to complete each of the individual 10 tests for each database size (10k, 100k, 1000k and 10000k records) was then averaged across all five tests to create the graphs. (Note: for this test, “k” equals 1,000 and not 1,024)
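The averaging itself is just a mean of the five wall-clock times for each cell in a graph; for example (the numbers here are made up):

```shell
# Average five run times (seconds) for one database size / one action
printf '%s\n' 1.2 1.4 1.3 1.5 1.1 |
  awk '{ sum += $1 } END { printf "%.2f\n", sum / NR }'
```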

While I was running the tests, the only applications I had open were a text editor, three terminal windows and a spreadsheet program. However, there were some small anomalies in the test results where a test would take a bit longer than the normal run time. I did not correct the data when an anomaly occurred.

Also, not all of the queries returned or modified the same number of rows. For example, one update query might change 10 rows while another changed 50, so the result times are not directly comparable from one query to the next. But each query was executed in the same order, so the time it takes to run query #1 should be comparable across all five executions of the tests.

Importing Data

For each of the four databases, I imported the same set of records. This involved importing 1k, 10k, 100k and 1,000k records ten times each into their respective databases.

For MySQL, I used the MySQL Shell, which is a command-line client. Here is an example of the import command:

mysqlsh root:password@localhost/db_json_test10k --import /Volumes/HD1/0mongo_v_mysql/1000/1000-0 json_test10k

For MongoDB, I used the mongoimport utility. Here is an example of the import command:

mongoimport --jsonArray --db db_json_test10k --collection json_test10k --file /Volumes/HD1/0mongo_v_mysql/1000/1000-0m

Note: With MySQL, you have to be careful when you import JSON documents – if the syntax is incorrect, the file may still import, but it will create a table instead of a collection. The syntax of the import command is important.

Here are the results of the imports. The Y-axis (vertical line) on the graph represents the time in seconds to perform the action.

MongoDB was much faster than MySQL in importing data, even with the MySQL binary log turned off.

Creating Indexes

Whenever you import data into a collection for either MongoDB or MySQL, both instances will automatically create some indexes. After the initial import, I took a look at the indexes:

For MongoDB:

> use db_json_test10k
switched to db db_json_test10k
> db.json_test10k.getIndexes()
[
	{
		"v" : 2,
		"key" : {
			"_id" : 1
		},
		"name" : "_id_",
		"ns" : "db_json_test10k.json_test10k"
	}
]

For MySQL:

mysql> use db_json_test10k;
Database changed
mysql> SHOW INDEXES FROM json_test10k\G
*************************** 1. row ***************************
        Table: json_test10k
   Non_unique: 0
     Key_name: PRIMARY
 Seq_in_index: 1
  Column_name: _id
    Collation: A
  Cardinality: 98627
     Sub_part: NULL
       Packed: NULL
         Null: 
   Index_type: BTREE
      Comment: 
Index_comment: 
      Visible: YES
   Expression: NULL
1 row in set (0.00 sec)

Both MongoDB and MySQL automatically create an index on the column _id (which is a primary key as well).

I needed to create an index on the Ubi field, as I was going to use this field for my searches, updates and deletes.

I would only need to create the indexes once for every series of tests, but I still tracked how long it took for both MongoDB and MySQL to create the indexes. Here are the commands to create an index for each one:

MySQL

use db_json_test10k
ALTER TABLE json_test10k ADD COLUMN UBI_2 INT GENERATED ALWAYS AS (doc->"$.Ubi");
ALTER TABLE json_test10k ADD INDEX (UBI_2);

MongoDB

use db_json_test10k
db.json_test10k.createIndex( { Ubi: 1 })

Note: MySQL indexes JSON documents via virtual columns. See this blog post for a detailed explanation of virtual columns.

For the tests, I would be creating an index on all four databases for each instance. Here are the results of the index creation for each database. The Y-axis (vertical line) on the graph represents the time in seconds needed to perform the action.

MySQL was a bit faster than MongoDB in index creation.

Searches

The search test consisted of executing ten scripts, where each script performed 100 search queries. The test was executed against all four databases, each with its own set of search queries (there was a total of 40 separate scripts, each running 100 separate queries). The test was repeated five times, and the average times are shown in the graphs.

Here is an example of the search syntax: (Only the first three queries are shown)

MySQL

use db_json_test10k
select * from json_test10k where UBI_2 = '603013962'; 
select * from json_test10k where UBI_2 = '603598341'; 
select * from json_test10k where UBI_2 = '601574968';
...

MongoDB

use db_json_test10k
DBQuery.shellBatchSize = 1000
db.json_test10k.find({"Ubi" : "603013962"})
db.json_test10k.find({"Ubi" : "603598341"})
db.json_test10k.find({"Ubi" : "601574968"})
...

The search queries were placed in files – SQL files for MySQL, and JavaScript files for MongoDB – and each file was executed as follows: (Only the first three commands are shown)

MySQL

time mysql -uroot -p < /Volumes/HD1/0mongo_v_mysql/3_search/mysql_search_sql_10k/search_mysql_100_10k_0.sql
time mysql -uroot -p < /Volumes/HD1/0mongo_v_mysql/3_search/mysql_search_sql_10k/search_mysql_100_10k_1.sql
time mysql -uroot -p < /Volumes/HD1/0mongo_v_mysql/3_search/mysql_search_sql_10k/search_mysql_100_10k_2.sql
...

MongoDB

time mongo < /Volumes/HD1/0mongo_v_mysql/3_search/mongo_search_10k/search_mongo_100_10k_00.js
time mongo < /Volumes/HD1/0mongo_v_mysql/3_search/mongo_search_10k/search_mongo_100_10k_02.js
time mongo < /Volumes/HD1/0mongo_v_mysql/3_search/mongo_search_10k/search_mongo_100_10k_04.js
...

Here are the results of the searches for each database. The Y-axis (vertical line) on the graph represents the time in seconds needed to perform the action.

The results were much different than in the original tests. MySQL was much faster than MongoDB in searches.

Update

The update test consisted of executing ten scripts, where each script contained 100 unique update queries. The test was executed against all four databases, each with its own set of update queries (there was a total of 40 separate scripts). The test was repeated five times, and the average times are shown in the graphs.

The update syntax was as follows:

MySQL

use db_json_test10k;
SET max_join_size=11000000;
SET SQL_SAFE_UPDATES=0;
UPDATE json_test1k SET doc = JSON_SET(doc, '$.State', 'KS') 
WHERE UBI_2 = '604052443';

MongoDB

use db_json_test10k
DBQuery.shellBatchSize = 1000
db.json_test1k.updateMany({"Ubi" : "604052443"}, {$set: { "State" : "KS"}});

Here are the results of the updates for each database. The Y-axis (vertical line) on the graph represents the time in seconds needed to perform the action. Note that not all of the queries updated the same number of rows, so the results aren’t going to be the same for each group of queries.

Delete

The delete test consisted of executing ten scripts, where each script contained 100 unique delete queries. The test was executed against all four databases, each with its own set of delete queries (there was a total of 40 separate scripts). The test was repeated five times, and the average times are shown in the graphs. (I did have to apply the changes to max_join_size and DBQuery.shellBatchSize, as explained earlier.)

The delete syntax was as follows: (Only the first three queries are shown)

MySQL

use db_json_test10k;
SET max_join_size=11000000;
SET SQL_SAFE_UPDATES=0;
DELETE FROM json_test1k WHERE UBI_2 = '603013962';
DELETE FROM json_test1k WHERE UBI_2 = '603598341';
DELETE FROM json_test1k WHERE UBI_2 = '601574968';
...

MongoDB

use db_json_test10k
DBQuery.shellBatchSize = 1000
db.json_test1k.deleteMany({"Ubi" : "603013962"});
db.json_test1k.deleteMany({"Ubi" : "603598341"});
db.json_test1k.deleteMany({"Ubi" : "601574968"});
... 

Again, the results were much different than in the original tests. MySQL was much faster than MongoDB in deletions.


Oracle Cloud Infrastructure Test Results

Here are the results from running the same tests on Oracle Cloud. MongoDB’s imports were still much faster. On the queries, MongoDB performed better overall, and in some tests MySQL was competitive only with the binary log disabled.

Import

Create Index

Searches

Updates

Deletes


The results from running with only one CPU via OCI are surprising.

Overall, I think the latest version of MySQL is certainly a great alternative to MongoDB for storing JSON documents. If you are using MySQL version 5.7 for storing your JSON documents, I would recommend upgrading to the latest 8.0 version.

Finally – remember, I am not a MongoDB expert. If you have any suggestions on how to tune MongoDB to run faster, please leave a comment below.

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots 
Visit http://2044thebook.com for more information.
Tony is the editor/illustrator for NASA Graphics Standards Manual Remastered Edition 
Visit https://amzn.to/2oPFLI0 for more information.

Use MySQL to store NoSQL and SQL data in the same database using memcached and InnoDB

MySQL is a great relational database, but at some point someone (management) in your company is probably going to say that they need to use NoSQL to store their data. After all, NoSQL is one of the latest buzzwords, so it must be good (correct?). Basically, NoSQL allows you to store data without all of the characteristics of a relational database. A very simple explanation is that you store an entire data set under a single primary key, and that primary key is also how you retrieve the data. While NoSQL may be good in some cases, it is hard to beat “old-fashioned” SQL relational databases – especially if that is what you know. But, with MySQL and InnoDB, you can have the best of both worlds.

With MySQL version 5.6 (and above), you have the ability to store and retrieve NoSQL data, using NoSQL commands, while keeping the data inside a MySQL InnoDB database. So, you can use NoSQL and SQL at the same time, on the same data, stored in the same database. And the beauty is that it takes just a few minutes to set up. This post will provide you with a quick lesson on how to set up NoSQL on a MySQL InnoDB database.

I would suggest that you read this MySQL web page to get started – Getting Started with InnoDB Memcached Plugin. You should be able to follow this guide and have your NoSQL database up and running in no time at all.

NoSQL on MySQL uses memcached – a distributed memory object caching system. Currently, the memcached daemon plugin is only supported on Linux, Solaris, and Mac OS X platforms.

There are a few prerequisites. You must have the libevent 1.4.3 (or greater) libraries installed (they are not installed if you used a MySQL installer to install MySQL). Depending upon your operating system, you can use apt-get, yum, or port install. For example, on Ubuntu Linux:

sudo apt-get install libevent-dev

On the Mac, it takes a few more steps (I tested this with Mac OS 10.9 – Mavericks). To install libevent, you will need to install Xcode, the Xcode command line tools, and then Homebrew. You may install Xcode via the Apple App Store (and this is the most time-consuming part). Once Xcode is installed, from a command line type:

# xcode-select --install

This will prompt you to install the command-line tools. Then, you can easily install Homebrew via this command:

# ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Then install libevent via:

# brew install libevent

The libraries for memcached and the InnoDB plugin for memcached are put into the correct place by the MySQL installer. For a typical operation, the files lib/plugin/libmemcached.so and lib/plugin/innodb_engine.so are used.

You may check to make sure you have the libraries: (substitute $MYSQL_HOME for your mysql home directory)

# ls -l $MYSQL_HOME/lib/plugin/libmemcached.so
-rwxr-xr-x  1 mysql  wheel  195664 Mar 14 15:23 lib/plugin/libmemcached.so

# ls -l $MYSQL_HOME/lib/plugin/innodb_engine.so
-rwxr-xr-x  1 mysql  wheel  109056 Mar 14 15:23 lib/plugin/innodb_engine.so

To be able to use memcached so that it can interact with InnoDB tables, you will need to run a configuration script to install the tables necessary for use with memcached. This script is named innodb_memcached_config.sql, and it should be in your $MYSQL_HOME/share directory.

# cd $MYSQL_HOME
# ls -l share/innodb_memcached_config.sql
-rwxr-xr-x  1 mysql  wheel  3963 Mar 14 15:02 share/innodb_memcached_config.sql

To install the script, from the command line run:

# mysql -uroot -p < $MYSQL_HOME/share/innodb_memcached_config.sql

This script will install the three tables (cache_policies, config_options and containers) that it needs for the InnoDB/memcached mapping relationship, along with a sample table named demo_test, which is installed in the test database.

mysql> use innodb_memcache
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_innodb_memcache |
+---------------------------+
| cache_policies            |
| config_options            |
| containers                |
+---------------------------+
3 rows in set (0.01 sec)

mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| demo_test      |
+----------------+
1 row in set (0.00 sec)

The table that we need to use for the InnoDB mapping is the containers table. Here is the DESCRIBE statement for the containers table:

mysql> DESCRIBE innodb_memcache.containers;
+------------------------+--------------+------+-----+---------+-------+
| Field                  | Type         | Null | Key | Default | Extra |
+------------------------+--------------+------+-----+---------+-------+
| name                   | varchar(50)  | NO   | PRI | NULL    |       |
| db_schema              | varchar(250) | NO   |     | NULL    |       |
| db_table               | varchar(250) | NO   |     | NULL    |       |
| key_columns            | varchar(250) | NO   |     | NULL    |       |
| value_columns          | varchar(250) | YES  |     | NULL    |       |
| flags                  | varchar(250) | NO   |     | 0       |       |
| cas_column             | varchar(250) | YES  |     | NULL    |       |
| expire_time_column     | varchar(250) | YES  |     | NULL    |       |
| unique_idx_name_on_key | varchar(250) | NO   |     | NULL    |       |
+------------------------+--------------+------+-----+---------+-------+
9 rows in set (0.00 sec)

Here is some information about the container columns:

  • name: The identifier for this mapping – it acts like a primary key for the memcached data collection. If a row has the name value default, it is used as the default mapping; otherwise, the first entry in the containers table is used. You can also specify the name value in the NoSQL statement.
  • db_schema: The InnoDB database name that you will use to store the data.
  • db_table: The InnoDB database table name that you will use to store the data.
  • key_columns: The column that you will use for the key value lookup. This field only contains one column (despite the plural name of key_columns).
  • value_columns: Data will be pulled from and/or stored to these column/columns of data. You use a separator value (such as a pipe "|" symbol) to separate the columns. In the example database that is installed, you can store first name | last name which would pull data from both a firstname and lastname column.
  • flags: This column stores memcache flags, which is an integer that is used to mark rows for memcache operations.
  • cas_column and expire_time_column: These two columns are used for storing memcache compare-and-swap and expiration values. You can ignore these for now.
  • unique_idx_name_on_key: This is the name of the unique index that you will use for the key column, and you should use the primary key for the table. You should not use an auto-incrementing index, as you can’t insert into an auto-incrementing key.
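The pipe separator in value_columns means one memcached value carries several SQL columns. The split is easy to see from the shell (illustrative only – the plugin does this mapping server-side):

```shell
value='James|Johnson'
first=${value%%|*}   # everything before the first pipe -> "James"
last=${value##*|}    # everything after the last pipe  -> "Johnson"
echo "$first $last"
```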

NOTE: If you make any changes to the innodb_memcache.containers table, you will need to restart the plugin or restart mysqld.

The innodb_memcached_config.sql script inserted one line of data for us in the innodb_memcache.containers table:

mysql> select * from innodb_memcache.containers;
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| name | db_schema | db_table  | key_columns | value_columns | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| aaa  | test      | demo_test | c1          | c2            | c3    | c4         | c5                 | PRIMARY                |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
1 row in set (0.00 sec)

You can see the name aaa is mapped to the db_table (InnoDB table) demo_test. And the innodb_memcached_config.sql script also created this demo_test table for us to use, and it inserted one row of data. Here is the DESCRIBE statement for the demo_test table:

mysql> DESCRIBE test.demo_test;
+-------+---------------------+------+-----+---------+-------+
| Field | Type                | Null | Key | Default | Extra |
+-------+---------------------+------+-----+---------+-------+
| c1    | varchar(32)         | NO   | PRI |         |       |
| c2    | varchar(1024)       | YES  |     | NULL    |       |
| c3    | int(11)             | YES  |     | NULL    |       |
| c4    | bigint(20) unsigned | YES  |     | NULL    |       |
| c5    | int(11)             | YES  |     | NULL    |       |
+-------+---------------------+------+-----+---------+-------+
5 rows in set (0.00 sec)

And here is the row that was inserted:

mysql> select * from demo_test;
+----+--------------+------+------+------+
| c1 | c2           | c3   | c4   | c5   |
+----+--------------+------+------+------+
| AA | HELLO, HELLO |    8 |    0 |    0 |
+----+--------------+------+------+------+
1 row in set (0.00 sec)

Next you will need to install the memcached plugin. From a mysql prompt:

mysql> install plugin daemon_memcached soname "libmemcached.so";

Once the plugin is installed this way, it is automatically activated each time the MySQL server is booted or restarted.

To turn off the plugin, use this statement:

mysql> uninstall plugin daemon_memcached;

NOTE: You might need to restart mysqld before continuing.

Now we can start testing memcached with InnoDB. MySQL is now listening on the memcached port, 11211. We could try writing some code, but the easiest way is to just telnet to port 11211 and issue some NoSQL commands. We only have one row in the innodb_memcache.containers table, and only one row in the test.demo_test table. For this example, we are only going to use two NoSQL commands – get and set. Here is a link to the full list of commands.

Let’s telnet and use NoSQL to retrieve the data from the InnoDB demo_test table, which contained one row of data. The key for this one row is AA. We simply need to type get AA to retrieve the data:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get AA
VALUE AA 8 12
HELLO, HELLO
END

We can now insert a line of data. This is done with the set command. The syntax for inserting data is:

  • set – this is the command to store a value
  • XX – this is the value key
  • # – this is a reference to the flags that we will use
  • # – this is the expiration TTL (time to live)
  • # – the length of the string that we will insert/store
  • ABCDEF – the value to insert/store
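The length field has to match the value's byte count exactly, or memcached will reject the command. A quick way to count the bytes before typing the set line:

```shell
# Count the bytes in the value – 16 here, matching "set BB 0 0 16"
printf '%s' 'Goodbye, Goodbye' | wc -c
```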

In your telnet session, type this:

set BB 0 0 16
Goodbye, Goodbye
STORED

Now, let’s take a look at the InnoDB table, test.demo_test – and we can see our data:

mysql> select * from test.demo_test;
+----+------------------+------+------+------+
| c1 | c2               | c3   | c4   | c5   |
+----+------------------+------+------+------+
| AA | HELLO, HELLO     |    8 |    0 |    0 |
| BB | Goodbye, Goodbye |    0 |    1 |    0 |
+----+------------------+------+------+------+
2 rows in set (0.00 sec)

If we go back to the telnet session, we can also get the value for BB, which we inserted earlier via NoSQL:

get BB
VALUE BB 0 16
Goodbye, Goodbye
END

Let’s create a new table, create the relationship in the innodb_memcache.containers table, and then insert (set) and read (get) some new data. Here is the CREATE TABLE statement for our new users table:

use test;
CREATE TABLE `users` (
 `user_id` varchar(32) NOT NULL DEFAULT '',
 `first` varchar(100) DEFAULT NULL,
 `last` varchar(100) DEFAULT NULL,
 `flags` int(11) DEFAULT '0',
 `cas` int(11) DEFAULT '0',
 `expiry` int(11) DEFAULT '0',
 PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And here is the SQL INSERT for innodb_memcache.containers table, to establish our relationship with memcached and InnoDB:

INSERT INTO `innodb_memcache`.`containers` 
(`name`, `db_schema`, `db_table`, `key_columns`, `value_columns`, `flags`, `cas_column`, `expire_time_column`, `unique_idx_name_on_key`)
VALUES ('default', 'test', 'users', 'user_id', 'first|last', 'flags','cas','expiry','PRIMARY');

NOTE: Since we changed the innodb_memcache.containers table, we will need to either restart mysqld or disable/enable the plugin (as shown above).

Now that we have restarted mysqld or the plugin, let’s insert some data into our new InnoDB table via NoSQL.

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
set jj1 0 0 15
James|Johnson
STORED
set jj2 0 0 14
John|Jackson
STORED

We can now use SQL to query the test.users table to see the InnoDB data we just stored via NoSQL:

mysql> select * from users;
+---------+-------+-----------+-------+------+--------+
| user_id | first | last      | flags | cas  | expiry |
+---------+-------+-----------+-------+------+--------+
| jj1     | James | Johnson   |     0 |    1 |      0 |
| jj2     | John  | Jackson   |     0 |    2 |      0 |
+---------+-------+-----------+-------+------+--------+
2 rows in set (0.00 sec)

Let’s insert some data into the test.users table via mysql, and then retrieve the data via NoSQL.

mysql> INSERT INTO test.users (user_id, first, last, flags, cas, expiry) VALUES ('bb1', 'Beth', 'Brown', '0', '3', '0');
Query OK, 1 row affected (0.00 sec)

Retrieve the data via NoSQL:

# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get bb1
VALUE bb1 0 10
Beth|Brown
END

We now have a way to use NoSQL via memcached and at the same time use good old-fashioned SQL statements on the same data via InnoDB and MySQL. It is the best of both worlds!
