mysql

Quick Update...

Sorry this blog has been inactive for so long, but I've been really, really busy with work and my move to Puerto Nuevo, Mexico, in northern BC. I am thinking about putting together a series of posts that detail how to set up a PHP data-processing stack for mongodb that allows you to dynamically generate all CRUD queries via the class stack.

The front-end interface to this stack is through RabbitMQ -- also written in PHP -- which eliminates Apache from the LAMP stack and no longer requires a REST interface for transferring data requests to and from the store.

The stack includes services such as auditing, registration for public-facing requests, memcached and membase support, error-logging, and internal checks on requests that prevent things like generating queries that result in full-table scans or searches on un-indexed columns in either mongodb or mysql. (I think I still remember how to code for mysql... :) )

Anyway, this project has been all-consuming for me for the past year, and the concept of generalizing the stack for instructional purposes has been rattling around in my can, looking for a way out, for quite some time.  It's not like there's a plethora of PHP-based RabbitMQ tutorials out there, either.

So, that's the happs.  Now that things are settling down a bit, I'll try to get more information out.

Thank you for checking in!

Searching MongoDB Sub-Documents...

I've recently finished building a mongo collection that stores all auditing data from my application -- specifically, it records every database transaction, conducted in either mySQL or mongo, assigning an event identifier to the transaction and storing the data under that event ID within a single sessionManager object.

Sounds good?

Well, I like it.   This design eliminated the need to maintain meta-data in my data tables since I can pull transaction history for any record that I've accessed.

The problem is that, being new to mongodb, accessing what I've put into mongodb isn't (yet) as intuitive as, say, my mySQL skills are.

Sub-documents within a mongo document are analogous to the results of a mySQL join.  One of the key motivators in storing this information in mongodb to begin with was that I could de-normalize the data by storing the sub-document with its parent instead of having to incur the expense of a search-join-fetch later.

Traditionally, any data objects defined as a one-to-many (1:m) relationship were stored in multiple mySQL tables and were accessed via some sort of join mechanism.

Mongodb breaks that traditional mold by allowing you to store a sub-document (the "m" part of the 1:m relationship) within the same document in which you're currently working.

Using my sessionManager collection as an example, I have a document that looks something like this:

[cc lang='javascript' line_numbers='false']
{
    _id : somevalue,
    foo : bar,
    event : {},
    argle : bargle
}
[/cc]

My desire is, for every database event that is recorded, to enter information about that event within the sub-document that I've wittily named "event".

In my PHP code, I've written a sequence manager for mongo that maintains a document containing sequence values for various tables.  Think of this as the functional version of mySQL's auto-increment feature.  I decided, then, for the sessionManager events, I would use this key sequence to obtain unique values and use those as my sub-document index.  I'd then store whatever data I needed to store using the sequence value as a sub-document key, or index:

[cc lang='javascript' line_numbers='false']
{
    _id : somevalue,
    foo : bar,
    event : {
        n : {
            created : dateval,
            table : tableName,
            schema : dbSchema,
            query : lastQuery
        }
    },
    argle : bargle
}
[/cc]

So, when I need to add another event, I just create a new sub-document under the event key, then add the data I need to store under the sub-document index key.
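For what it's worth, here's a minimal PHP sketch of that flow using the legacy Mongo driver.  The sequence-collection name, database name, and the variables holding the event data are placeholders, not my production code:

[cc lang='php' line_numbers='false']
// Placeholder sketch: bump the sequence, then store the event under that value.
$mongo = new MongoClient();
$db    = $mongo->selectDB('mydb');

// findAndModify atomically increments and returns the new sequence value
$seq = $db->sequences->findAndModify(
    array('_id'  => 'sessionManager'),
    array('$inc' => array('seq' => 1)),
    null,
    array('new' => true, 'upsert' => true)
);
$n = $seq['seq'];

// create the new sub-document under the event key
$db->sessionManager->update(
    array('_id' => $sessionId),     // placeholder: the parent document's id
    array('$set' => array("event.$n" => array(
        'created' => new MongoDate(),
        'table'   => $tableName,    // placeholders for the event data
        'schema'  => $dbSchema,
        'query'   => $lastQuery,
    )))
);
[/cc]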

Worked like a champ!

And then I asked myself:  "So, Brainiac, how would you go about extracting event -n- from your collection?"

I went through a lot of failed query attempts, bugged a lot of people, googled and saw stuff  that led me down many plush ratholes until I finally, through some serious trial-and-error, got the answer...

> db.mytable.find( { foo : bar }, { 'event.n' : 1 } );

where n = the number of the event I want to find.

If I want to get all of the events for a particular document (sessionManager object), then I would write something like:

> db.mytable.find( {foo : bar}, { event : 1});

If I wanted to return all of the events for all of the objects, then I would write this:

> db.mytable.find( {}, {event : 1});

What I've not been able to figure out, so far, is how I can use $slice to grab a range of events within a document.  Everything I try returns the full set of sub-documents back to me.  The doc tells me that $slice is used to return a subrange of array elements, which is what I thought "event.n" was but, apparently, it's not.  (I think it's an object (sub-document), which is why $slice fails for me.)
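(If I'm reading the docs right, $slice only applies to arrays -- so if I had $push-ed each event onto an array instead of keying it into an object, my understanding is that a projection like this, untested on my end, would hand back ten events starting at offset 5:)

> db.mytable.find( { foo : bar }, { event : { $slice : [ 5, 10 ] } } );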

It's not a big deal because, programmatically, I can grab the entire sub-document from its parent and parse it in-memory to get the desired record.  And, if I know what the value for -n- is, then I can fetch just that one sub-document.  So, I'm ok for now.  However, please feel free to enlighten me with your expertise and experience should you see where I am failing here, ok?

mongodb, geospatial indexing, and advanced queries....

I've been working to build, and re-build, a geospatial table for work.  There have been a lot of challenges in this project for me, as this is the first time that I've had to architect db designs incorporating mongodb alongside mySQL.

The mongo geospatial repository will be replacing several tables in the legacy mySQL system -- as you may know, mongodb comes with full geospatial support, so executing queries against a collection (table) built in this manner is shocking in terms of its response speed -- especially when you compare those speeds to the traditional mySQL algorithms for extracting geo-points based on distance ranges for lat/lon coordinates.  The tl;dr for this paragraph is:  no more hideous trigonometric mySQL queries!

What I learned in this exercise was that the key to architecting a mongo collection is re-thinking how data is stored.  Mongo stores data as a collection of documents.  The key to successful thinking, at least in terms of mongo storage, is denormalization of your data objects.

Let's use a standard customer object as an example.  Every customer has at least one phone number.  Most, if not all, customers have more than one phone number.  We could define several columns in the customer table for the phone numbers: workphone, homephone, cellphone, otherphone and store the data that way.  The problem is that we will eventually hit the wall where we need to store numbers for which we don't have columns pre-defined:  faxphone, skypephone, tddphone, vrsphone, etc.

RDBMS design demands a normalization of this 1:M data design by requiring a second table to store just phone numbers for each customer.  The phone table would have a primary key (id), the customer id, the customer phone number and perhaps a short, descriptive field explaining the purpose of this number.  To get the phone data for a customer, then, you'd simply query (or join) the phone table based on the customer ID to get all the phone tuples for that customer.
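In mySQL terms, that normalized phone table would look something like this (column names are just for illustration):

CREATE TABLE `customer_phone` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `customer_id` int(11) unsigned NOT NULL,
  `phone_number` varchar(32) NOT NULL,
  `description` varchar(64) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `customer_id` (`customer_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8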

Mongo, on the other hand, sees every customer as a document.  Think of each customer in your collection as a piece of paper.  You want to go into your collection and retrieve one piece of paper which has all the customer data on it.  So, for example, you retrieve the document for "John Smith", and this document lists several key-value pairs underneath an array called phone:

phone : { home : "(408) 123-4567", work : "(415) 123-4567", cell : "(312) 765-4321" }

...and so on...

Mongo stores the document for this, or any, user by de-normalizing the data relationships within the customer object.  These relationships can be maintained as sub-arrays within the document.  Because mongo is schema-less, every customer object isn't required to have all the possible combinations of phone numbers.  So, if you were to do a search where you pull up all customers with fax numbers, our Mr. Smith would not appear in the list since he has no fax number listed in his phone array.
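That fax-number search, for example, boils down to a simple existence check on the sub-array (the collection name here is hypothetical):

> db.customers.find( { "phone.fax" : { $exists : true } } );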

See?

The first step towards clarity in mongo architecture, then, is to think of all the data when you design a class object and include that data within a single document.  Data that was stored across related tables in a traditional RDBMS is incorporated into the document as sub-arrays.

But, you're asking, what if you want to later add a fax number to John Smith's phone collection?  Can you do that?

Sure!

Again, this is the inherent strength in mongodb -- it's schema-less!  Adding another number to the existing collection of phone numbers, or adding a new "column" to the document itself, requires only that you update that document.  That's it!
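Something along these lines is all it takes (again, the collection name is hypothetical):

> db.customers.update( { name : "John Smith" }, { $set : { "phone.fax" : "(408) 555-0199" } } );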

So, returning to the geospatial build: I used mySQL to pull the legacy data and collect the updated catalog tables into a single database.  Then I built new tables that (a) eliminated columns I no longer needed and (b) de-normalized the data so that every tuple in every table reflected all of the data.

I then combined the five tables into a single table under a new primary-key value and imported this data directly into a mongo collection.  This took several hours as my collection has over 3.6 million rows.
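The import itself was plain mongoimport -- something along these lines, assuming a csv export of the combined table (the file name is just an example):

$ mongoimport --db dev_honeybadger --collection geodata_geo --type csv --headerline --file geodata_combined.csv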

Once I had the collection in mongo, I made a mongo-dump of the collection so that I could recover back to this point in case anything went south.  (Which it did...)

I executed a PHP script I wrote to scan the mySQL table, get each tuple by the newly-created primary key, and then create the sub-array for the geospatial data in the mongo collection.  See, in order to impose a geospatial index, your lat/lon data has to live in a sub-array within each document of the collection.  There's no way I've yet discovered to import data from either a flat (csv) file, or directly from mySQL, so that it creates your sub-array automagically.  Hence, the home-brew PHP script to parse through the mySQL records and build (insert) the sub-array in the newly-created mongodb collection.
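Stripped to its essence, the script does something like this -- the mySQL table and column names below are stand-ins for my real schema:

[cc lang='php' line_numbers='false']
// Placeholder sketch: copy lat/lon out of mySQL into a loc sub-array in mongo.
$mysqli = new mysqli('localhost', 'user', 'pass', 'meridian');
$mongo  = new MongoClient();
$coll   = $mongo->selectDB('dev_honeybadger')->geodata_geo;

$result = $mysqli->query("SELECT id_geo, latitude, longitude FROM geodata_combined");
while ($row = $result->fetch_assoc()) {
    // the 2d index wants the coordinates inside a sub-array (loc)
    $loc = array(
        'lon' => (float) $row['longitude'],
        'lat' => (float) $row['latitude'],
    );
    $coll->update(
        array('id_geo' => intval($row['id_geo'])),  // match on the new primary key
        array('$set'   => array('loc' => $loc))
    );
}
[/cc]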

(Side note:  I was careful to maintain the maximum mantissa values for the lat/lon fields by initially importing into mySQL as varchar(255) fields -- this kept my 13-digit mantissas.  When I imported the data into mongodb, mongo converted these values into doubles and kept the precision.  However, my PHP program, casting these values to either (float) or (double), converted (rounded) the mantissa to 7-digit precision.  Suitable for the task?  Yes.  Annoying to lose this data?  Yes.  If you have a solution to this, please leave me a comment at the end of this article.  Thanks!   :-P )

The next step was to add the geo-spatial index to the collection:

> db.geodata_geo.ensureIndex( { loc : "2d" } );
point not in interval of [ -180, 180 )

What?

This error message was telling me that my data was out of range of the acceptable lat/lon values!

I tried searching for the data culprits:

> db.geodata_geo.find( { "loc" : { $exists : true }}).count();
3685667
> db.geodata_geo.find({"loc.lon" : {$lt : -180}}).count();
0
> db.geodata_geo.find({"loc.lon" : {$gt : 180}}).count();
0
> db.geodata_geo.find({"loc.lat" : {$gt : 180}}).count();
0
> db.geodata_geo.find({"loc.lat" : {$lt : -180}}).count();
0

These queries were telling me that while I have over 3.6 million records with a loc field, none are outside of the -180,180 boundaries.

> db.geodata_geo.find({"loc.lat" : {$gt : -180}, "loc.lon" : {$lt : 180}}).count();
3685663
> db.geodata_geo.find( { "loc" : { $exists : true }}).count();
3685667

These queries tell me that I have a delta of four records that exist outside of the [-180, 180) boundary.

Wait...wut?

I focus on the $gt/$lt and wonder if I have an "edge" case.  Given that I've lost 6 digits of my mantissa, I wonder if mongo has rounded my data to the edge case of 180:

> db.geodata_geo.find({"loc.lon" : 180 });

And I get back exactly four records that have a lon-value of exactly 180:

"loc" : { "lon" : 180, "lat" : -16.1499996 }

This, to me, seems to be an error in how mongodb indexes geospatial data.  If 180 is an allowed value for lat/lon, then why throw the error when you ensure the index?  I decide to handle this rounding problem by expanding the allowable limits of my index:

> db.geodata_geo.ensureIndex({ "loc" : "2d" }, { min : -180, max : 181 });
> db.geodata_geo.getIndexes();
[
	{
		"v" : 1,
		"key" : {
			"_id" : 1
		},
		"ns" : "dev_honeybadger.geodata_geo",
		"name" : "_id_"
	},
	{
		"v" : 1,
		"key" : {
			"loc" : "2d"
		},
		"ns" : "dev_honeybadger.geodata_geo",
		"name" : "loc_",
		"min" : -180,
		"max" : 181
	}
]

And I see that my geospatial index has been created.  Now, to test:

> db.geodata_geo.find( { loc : {$near : [-50,50] } } ).limit(5);

And it immediately returns the five records I asked for (Elliston, Bonavista, Elliston Station, Catalina, and Port Union, Division #7, in Canada).

My geospatial index is complete!  Now, all I need to do is add my regular indexes for keyed searching and export the table off my development environment.
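For completeness, those last two steps are one-liners -- the keyed field here is just an example:

> db.geodata_geo.ensureIndex( { name : 1 } );
$ mongodump --db dev_honeybadger --collection geodata_geo --out ./geodata_dump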

Converting a mySQL Column to AutoIncrement

We're updating a large dataset at work -- there's about an 18% increase in the number of tuples in the new dataset, spread across eight or so highly-normalized tables.

I have to port the new data into a (more) efficient table structure -- so I'm de-normalizing the heck out of the data, reducing the schema from eight tables to a single table.

In the old architecture, four of the tables have unique key values that were imposed on the data during the original port.  So, to maintain application compatibility in the data catalog, these key values have to be preserved.  Additionally, new tuples of data have to be added to the data set and assigned new key values that continue the old sequence.

In porting over one of the updated tables, which uses a string-code as the primary key, I first export the old table columns (pkey, str_code) into a temp table and then add a plain int() column to the new table, marking the column as default = null (it gets converted to auto_increment later).  I then do a simple update-join to bring over the old pkey values based on the native str_code.

This leaves me with a new numeric column that has a variable number of NULL values (representing the delta of the new-data import) interspersed with the legacy data pkey -> str_code values.

The problem is: how do I convert the NULL pkey fields to a meaningful value that maintains the auto-increment without causing mysql to totally freak?

The first thing I do is get the max-value of the pkey:

select max(pkey_field_name) from table_name;
+---------------+
| max(pkey_...) |
+---------------+
|          4162 |
+---------------+
1 row in set (0.00 sec)

Next, I need to reset the table's auto_increment value because, since the column is still just a numeric column, the counter currently defaults to zero.  Attempting to convert the column on-the-fly to auto_increment would cause mysql to spit out an error about duplicate primary key values...

mysql> alter table table_name auto_increment=4163;
Query OK, 3965 rows affected (0.05 sec)
Records: 3965 Duplicates: 0 Warnings: 0

Now that I have the auto_increment reset, I can convert the column to auto_increment type and, in the process of converting, mySQL will re-number the NULL key values using the new auto_increment value so that my numbering scheme is seamless.

When I query the data back, I can see that my new column is completely re-ordered with the legacy data maintained and the new data correctly keyed.

Resetting the auto-increment key is a handy little trick to know -- I also use it when building test datasets and I need a fast way to reset my table values.
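For the test-dataset case, it's the same one-liner -- for instance, after clearing out a throw-away table, drop the counter back to the start (table name is a placeholder):

delete from table_name;
alter table table_name auto_increment=1;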

Hope this helps!

Note:  Here's the complete and full steps to successfully complete this operation.

(DDL for admin1 table)

CREATE TABLE `admin1` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `field1` varchar(255) NOT NULL,
  `field2` varchar(255) NOT NULL,
  `field3` varchar(255) NOT NULL,
  `field4` varchar(255) NOT NULL,
  `field5` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=4163 DEFAULT CHARSET=utf8

(DDL for admin1ll table)

CREATE TABLE `admin1ll` (
  `field1` int(11) NOT NULL,
  `field2` varchar(255) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8

 

Step 1:  Create a new integer column in the table

ALTER TABLE `meridian`.`admin1`
  DROP COLUMN `id`,
  ADD COLUMN `id` int UNSIGNED FIRST,
  CHANGE COLUMN `field1` `field1` varchar(255) NOT NULL AFTER `id`,
  CHANGE COLUMN `field2` `field2` varchar(255) NOT NULL AFTER `field1`,
  CHANGE COLUMN `field3` `field3` varchar(255) NOT NULL AFTER `field2`,
  CHANGE COLUMN `field4` `field4` varchar(255) NOT NULL AFTER `field3`,
  CHANGE COLUMN `field5` `field5` varchar(255) NOT NULL AFTER `field4`;

Step 2:  Update the new column by inserting the previous table's pkey values

update admin1, admin1ll set admin1.id = admin1ll.field1 where admin1.field1 = admin1ll.field2;

Step 3: Update the Auto_Increment value:

> select max(id) from admin1;

4030

> alter table admin1 auto_increment=4031;

Note:  the row count does not imply or set the auto_increment value!

Step 4: Reformat Column

Set the column to auto_increment, unsigned, not null, primary key.
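In SQL, that step looks something like this:

ALTER TABLE `meridian`.`admin1` MODIFY COLUMN `id` int(11) unsigned NOT NULL AUTO_INCREMENT, ADD PRIMARY KEY (`id`);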

Step 5:  Validate!

select *
from admin1, admin1ll
where admin1.id <> admin1ll.field1
and admin1.field1 = admin1ll.field2;

> Empty set (1.70 sec)

mongodb.findOne() -- calling with PHP variables (not literals)

So I've been doing a lot of work, for work, in MongoDB lately and I've learned an awful lot.  Or, depending on your point of view, a lot that's just awful.

See, there's not what you could even charitably call a lot of MongoDB documentation to begin with.   If you filter what is available on, oh, say, PHP implementation, well the results just dwindle to something roughly the same size as a tax-collector's heart.

Here's the scenario -- I've been working on adding a mongo abstraction class on top of my base data-abstraction class, where said classes are extended by the table-level classes.  This allows me to keep all of my query logic in the middle tier of the class design, generic and administrative functions in the base class, and table-specific stuff in the table class.  So far, so good, right?

Well, I get the mongo constructor running and, like its mySQL counterpart, I have a rule in every table constructor that states "if I pass an indexed field and its value to the constructor, then instantiate the class pre-populated with that record."

And that's where things start to head south.

In my constructor logic, I'm only allowing single key->value pairs as constructor parameters, with the design intention of getting a record from the db using the pkey of the table/collection.  In other words, you get one column and one column value.  So, if you're going to instantiate a new user object, you'd probably want to pass in the primary-key field of a user and that field's value:

$objUser = new UserProfile('email', 'mshallop@gmail.com');   // instantiate a new user object with this email address

Still pretty easy.  I bang out the mySQL equivalent in nothing flat.  I hit a huge pothole when I get to the mongo side.

The method is defined as a protected abstract method in the base class - so this method has to appear in both child classes as defined in the parent:

protected abstract function loadClassById($_key, $_value);

So I have my methods defined in both the mySQL and mongoDB middle layer.  My strategy for the mongo fetch-and-return is pretty simple -- once the class has been instantiated, do the following:

  1. make sure the $_key value exists in the allowed field list
  2. make sure the $_value has a value
  3. query mongodb using .findOne()
  4. store the return key->value pairs in the member array
  5. return status

That's pretty much it.  But I run into huge problems when I get to step 3 -- using the mongoDB findOne command.

The findOne method takes an array input of the key->value pair.  From the mongo command line, you'd execute something like this:

> db.session_ses.findOne({'idpro_ses' : 1})
{
 "_id" : ObjectId("4ea1af93ddc69802376b56d1"),
 "id_ses" : 1,
 "idpro_ses" : 1
}

( Just to show you that the data exists in the mongo collection...)

But, the PHP-ized version of the method is a wee bit different:

$this->collection->findOne(array('idpro_ses' => 1));

All of the examples that I've been able to locate show the method being invoked with literals.  My problem is that I have the two input parameters sent to the method ($_key and $_value) and I've got to find a way to get the PHP version of the method call to work using variables instead of constants.  This is what didn't work:

$this->collection->findOne(array($_key => $_value));

$this->collection->findOne(array("'" . $_key . "'" => $_value));

$this->collection->findOne(array("{$_key}" => $_value));

$aryData = array();
$aryData[$_key] = $_value;
$this->collection->findOne($aryData);
// or
$this->collection->findOne(var_dump($aryData));

I thought this worked but I was wrong:

$this->collection->findOne(array(array_keys($aryData) => array_values($aryData)));

This format returned a mongo record -- the problem was that it returned the first mongo record independently of any key-search criteria.

What finally worked for me was this:

$qs = array(); // QueryStructure
// $_k / $_v are the key and value passed in to loadClassById()
switch ($this->fieldTypes[$_k]) {
    case 'int' :
        $_v = intval($_v);
        break;
    case 'str' :
        $_v = strval($_v);
        break;
    case 'float' :
        $_v = floatval($_v);
        break;
}
$qs[$_k] = $_v;
$aryData = $this->collection->findOne($qs);
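Pulling the pieces together, the mongo-side loadClassById() ends up looking roughly like the sketch below.  The $this->fieldTypes lookup is the same one used above; the member-array name (memberData) is a placeholder for whatever your class actually stores records in:

[cc lang='php' line_numbers='false']
// Rough sketch of the whole method -- memberData is a placeholder name.
protected function loadClassById($_key, $_value)
{
    // 1. make sure the key exists in the allowed field list
    if (!array_key_exists($_key, $this->fieldTypes)) {
        return false;
    }
    // 2. make sure a value was actually passed in
    if ($_value === null || $_value === '') {
        return false;
    }
    // 3. cast the value to the field's declared type so findOne() will match it
    switch ($this->fieldTypes[$_key]) {
        case 'int'   : $_value = intval($_value);   break;
        case 'float' : $_value = floatval($_value); break;
        default      : $_value = strval($_value);   break;
    }
    $aryData = $this->collection->findOne(array($_key => $_value));
    if ($aryData === null) {
        return false;
    }
    // 4. store the returned key->value pairs in the member array
    $this->memberData = $aryData;
    // 5. return status
    return true;
}
[/cc]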

[Update]

I encountered a similar problem when trying to update records in a mongo collection -- while I could update the record from the mongo command line, I did not experience the same success in trying to execute the command from within my PHP program...

$foo = $collection->find(array('id_geo' => $row['id_geo']));

Consistently failed.  No exceptions were caught, and mongo's getLastError() reported no errors in the transaction.

After several iterations of debugging and attempting various work-arounds, I stumbled upon the solution, which was one of casting.  While the variable was being evaluated in the PHP array as type int, somehow this wasn't being interpreted that way by Mongo.  Casting the variable to an integer:

$foo = $collection->find(array('id_geo' => intval($row['id_geo']))); 

 generated a successful query for both the find() and my update() functions.

As I gain experience with Mongo, I expect to discover more of these little mannerisms...