
Why is my mongo query so slow?

Why's my MongoDB query so slow? I've got my geospatial collection set up -- I'm running some really great queries, making sure the locations I'm pulling aren't sitting in any sort of cache, and I'm just blown away by how fast the data comes back.

The problem is that when I query the collection to pull up the requisite lon/lat data by name -- city & state, or city & country -- the query takes seconds to complete!

I set up the table correctly...I indexed the crap out of all my columns...and a week or two ago I was at MongoSV 2011 in Santa Clara, where I learned some really cool stuff about queries, indexing, and performance management. So let's dig out the notes and see where I went wrong, because I strongly doubt the problem is in Mongo; rather, as we used to say in technical support, this is a PEBKAC issue...

The first thing I want to do is run an explain against my query so I can see Mongo's query plan. This should give me a starting point for figuring out what went wrong.

> db.geodata_geo.find({ cityName : "Anniston", stateName : "Alabama" }).explain();

By appending .explain(), I'm asking MongoDB to return the query plan (and its execution stats) instead of the query results. I hit enter to launch the explain() and get back the following output:

> db.geodata_geo.find({ cityName : "Anniston", stateName : "Alabama" }).explain();
{
 "cursor" : "BasicCursor",
 "nscanned" : 3691723,
 "nscannedObjects" : 3691723,
 "n" : 1,
 "millis" : 2269,
 "nYields" : 0,
 "nChunkSkips" : 0,
 "isMultiKey" : false,
 "indexOnly" : false,
 "indexBounds" : {

 }
}

The important information in that output is the cursor type, the two scan counts, millis, and indexOnly. What this output is telling me is that I'm using a "BasicCursor" for my search cursor -- which indicates that, yes, I am doing a table scan on the collection. So, already, I know my query is not optimal. But, wait! More good news...

The value for nscanned and nscannedObjects is the same: 3,691,723 -- which, not coincidentally, is also the cardinality of the collection. This is the number of documents scanned to satisfy the query and, given its value, it confirms that I am doing a full table scan.

millis tells me how many milliseconds the query took: 2.269 seconds -- way too slow for back-end methods serving a REST API. Unacceptable.

And then we get to the tell: indexOnly tells me whether the query could have been resolved by an (existing) covering index. Seeing false here, along with the empty indexBounds, tells me the collection has no usable index on the fields I'm querying against.

What?!?  I know I indexed this collection...

So, I run db.geodata_geo.getIndexes() to dump my indexes and ... I ... don't see my name columns indexed. Oh, I remembered to index the ID and Code columns...but somehow, indexing the Name columns completely slipped past my lower brain-pan.

I add these indexes to my collection:

> db.geodata_geo.ensureIndex({ cityName : 1 });
> db.geodata_geo.ensureIndex({ stateName : 1 });

And then I rerun the query plan and see the following output:

> db.geodata_geo.find({ cityName : "Anniston", stateName : "Alabama" }).explain();
{
 "cursor" : "BtreeCursor cityName_1",
 "nscanned" : 2,
 "nscannedObjects" : 2,
 "n" : 1,
 "millis" : 101,
 "nYields" : 0,
 "nChunkSkips" : 0,
 "isMultiKey" : false,
 "indexOnly" : false,
 "indexBounds" : {
  "cityName" : [
   [
    "Anniston",
    "Anniston"
   ]
  ]
 }
}

Instead of BasicCursor, I see BtreeCursor, which gives me a happy. I also see that the nscanned and nscannedObjects values are now more realistic...seriously: 2 is a LOT better than 3.6 million-something, right? Another happy for me!

I score the third happy when I see that the millis has dropped down to 101:  0.101 seconds to execute this search/query!  Not jaw-dropping, I agree -- but acceptable considering that everything is running off my laptop...I know production times will be much, much lower.
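Since these queries ultimately get run by my PHP back-end, I like to sanity-check the same thing from application code. Here's a minimal sketch using the legacy Mongo PHP extension -- the database name ('geodb') and the connection details are assumptions, not part of the actual setup:

$mongo      = new Mongo();                         // legacy driver; defaults to localhost:27017
$collection = $mongo->selectDB('geodb')->selectCollection('geodata_geo');   // 'geodb' is hypothetical

// make sure the name fields are indexed...
$collection->ensureIndex(array('cityName'  => 1));
$collection->ensureIndex(array('stateName' => 1));

// ...then ask the driver for the same query plan the shell's explain() returned
$plan = $collection->find(array('cityName' => 'Anniston', 'stateName' => 'Alabama'))
                   ->explain();

var_dump($plan['cursor'], $plan['nscanned'], $plan['millis']);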

 

In the end, I learned that a simple tool like .explain() can tell me where my attention is needed when it comes to optimizing and fixing even simple, seemingly innocent queries.  Knowing what you're looking at, and what you're looking for, is pretty much the thick end of the baseball bat when it comes to crushing one out of the park.

I hope this helps!

 

Reference Link:  Explain

mongodb.findOne() -- calling with PHP variables (not literals)

So I've been doing a lot of work, for work, in MongoDB lately and I've learned an awful lot.  Or, depending on your point of view, a lot that's just awful.

See, there's not what you could even charitably call a lot of MongoDB documentation to begin with.  If you filter what is available on, oh, say, the PHP implementation, the results dwindle to something roughly the size of a tax-collector's heart.

Here's the scenario -- I've been working on adding a mongo abstraction class on top of my base data-abstraction class, where both are then extended by the table-level classes.  This allows me to keep all of my query logic in the middle tier of the class design, generic and administrative functions in the base class, and table-specific stuff in the table class.  So far, so good, right?

Well, I get the mongo constructor running and, like its mySQL counterpart, I have a rule in every table constructor that states: "if I pass an indexed field and its value to the constructor, then instantiate the class pre-populated with that record."

And that's where things start to head south.

In my constructor logic, I'm only allowing single key->value pairs as constructor parameters, with the design intention of getting a record from the db using the pkey of the table/collection.  In other words, you get one column and one column value.  So, if you're going to instantiate a new user object, you'd probably want to pass in the primary-key field of a user and that field's value:

$objUser = new UserProfile('email', 'mshallop@gmail.com');   // instantiate a new user object with this email address
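Conceptually, the table-level class just hands that pair down to the loader. Here's a minimal sketch of the idea -- the base-class name (MongoDataClass) and the constructor checks are assumptions; only UserProfile and loadClassById() come from the actual design:

class UserProfile extends MongoDataClass           // hypothetical middle-tier class name
{
    public function __construct($_key = null, $_value = null)
    {
        parent::__construct();                      // generic/administrative set-up lives in the base class
        if (!is_null($_key) && !is_null($_value)) {
            $this->loadClassById($_key, $_value);   // pre-populate the object from the collection
        }
    }
}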

Still pretty easy.  I bang out the mySQL equivalent in nothing flat.  I hit a huge pothole when I get to the mongo side.

The method is defined as a protected abstract method in the base class - so this method has to appear in both child classes as defined in the parent:

protected abstract function loadClassById($_key, $_value);

So I have my methods defined in both the mySQL and MongoDB middle layers.  My strategy for the mongo fetch-and-return is pretty simple -- once the class has been instantiated, do the following:

  1. make sure the $_key value exists in the allowed field list
  2. make sure the $_value has a value
  3. query mongodb using .findOne()
  4. store the return key->value pairs in the member array
  5. return status

That's pretty much it.  But I run into huge problems when I get to step 3 -- using the MongoDB findOne() command.

The findOne() method takes an array of key->value pairs as its input.  From the mongo command line, you'd execute something like this:

> db.session_ses.findOne({'idpro_ses' : 1})
{
 "_id" : ObjectId("4ea1af93ddc69802376b56d1"),
 "id_ses" : 1,
 "idpro_ses" : 1
}

( Just to show you that the data exists in the mongo collection...)

But, the PHP-ized version of the method is a wee bit different:

$this->collection->findOne(array('idpro_ses' => 1));

All of the examples that I've been able to locate show the method being invoked with literals.  My problem is that I have two input parameters sent to the method ($_key and $_value), and I've got to find a way to get the PHP version of the method call to work using variables instead of constants.  This is what didn't work:

$this->collection->findOne(array($_key => $_value));

$this->collection->findOne(array("'" . $_key . "'" => $_value));

$this->collection->findOne(array("{$_key}" => $_value));

$aryData = array();
$aryData[$_key] = $_value;
$this->collection->findOne($aryData);

or

$this->collection->findOne(var_dump($aryData));

I thought this worked but I was wrong:

$this->collection->findOne(array(array_keys($aryData) => array_values($aryData)));

This format returned a mongo record -- the problem was that it returned the first record in the collection, independently of any key-search criteria.  (PHP can't use the array returned by array_keys() as an array key, so the criteria array ends up empty, and findOne() with empty criteria simply hands back the first document.)
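A quick, purely illustrative way to see what's happening there:

$aryData  = array('idpro_ses' => 1);
$criteria = array(array_keys($aryData) => array_values($aryData));  // PHP 5 warns "Illegal offset type" and skips the element
var_dump($criteria);  // array(0) { } -- empty criteria, so findOne() matches everything and returns the first document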

What finally worked for me was this:

            $qs = array(); // QueryStructure
            switch($this->fieldTypes[$_k]) {
                case 'int' :
                    $_v = intval($_v);
                    break;
                case 'str' :
                    $_v = strval($_v);
                    break;
                case 'float' :
                    $_v = floatval($_v);
                    break;
            }
            $qs[$_k] = $_v;
            $aryData = $this->collection->findOne($qs);
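Putting the pieces together, the mongo-side loadClassById() ends up looking roughly like this.  It's a sketch, not the exact production code -- the member-array name and the validation details are assumptions, while $this->fieldTypes and $this->collection are the same properties used above:

protected function loadClassById($_key, $_value)
{
    // 1. make sure the key is in the allowed (typed) field list
    if (!array_key_exists($_key, $this->fieldTypes)) {
        return false;
    }
    // 2. make sure the value actually contains something
    if (is_null($_value) || $_value === '') {
        return false;
    }
    // coerce the value to the type mongo expects -- the fix from above
    switch ($this->fieldTypes[$_key]) {
        case 'int'   : $_value = intval($_value);   break;
        case 'str'   : $_value = strval($_value);   break;
        case 'float' : $_value = floatval($_value); break;
    }
    // 3. query mongodb using findOne()
    $aryData = $this->collection->findOne(array($_key => $_value));
    if (is_null($aryData)) {                  // the legacy driver returns NULL when nothing matches
        return false;
    }
    // 4. store the returned key->value pairs in the member array (property name is an assumption)
    $this->memberData = $aryData;
    // 5. return status
    return true;
}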

[Update]

I encountered a similar problem when trying to update records in a mongo collection -- while I could update the record from the mongo command line, I did not experience the same success in trying to execute the command from within my PHP program...

$foo = $collection->find(array('id_geo' => $row['id_geo']));

Consistently failed.  No exceptions were thrown, and Mongo's getLastError() reported no errors for the operation.

After several iterations of debugging and attempts at various work-arounds, I stumbled upon the solution: it was a casting issue.  While the value looked like an int when I inspected the PHP array, it was evidently being handed to Mongo as something else (most likely a string, since Mongo matches on type as well as value).  Casting the variable to an integer:

$foo = $collection->find(array('id_geo' => intval($row['id_geo']))); 

 generated a successful query for both the find() and my update() functions.
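The same cast carries over to the update side.  A minimal sketch -- the '$set' document and the cityName field here are illustrative only:

$collection->update(
    array('id_geo' => intval($row['id_geo'])),                  // criteria: cast so the type matches what mongo stored
    array('$set'   => array('cityName' => $row['cityName']))    // illustrative update document
);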

As I gain experience with Mongo, I expect to discover more of these little mannerisms...