PHP: Daemons, Forking, RabbitMQ, Shared-Memory, and You.

The scope, in terms of breadth and depth, of my current project (which I've now spent a year of my life on) is huge.  It's a back-end storage and processing system that manages MySQL, mongodb, memcached, and RabbitMQ as its resources, and is built on a custom-written, multiply-abstracted, object-oriented PHP framework.

As of this writing, we're in the final stages of releasing the second version, considerably more feature-rich than the first, to production, where it runs as a highly-scalable, fault-tolerant and redundant cluster deployed across AWS.

The base architecture of the project is centered around message brokering (AMQP and RabbitMQ, to be exact) in favor of Apache and a RESTful API interface, and controls data delivery between the public-facing front-end application and my framework.  I selected RabbitMQ as my AMQP broker because it is open-source, actively maintained, and backed by a strong community.  PHP is under-represented within the RabbitMQ community, and were it not for the work of Alvaro Videla being available as a resource, my endeavor would have never seen production.

The basic premise of the framework, which I've dubbed "BEDS" (Back-End Data Service; a wholly unimaginative yet strangely descriptive name), is to process requests, sent via the message broker to/from the front-end, and either store or fetch the data provided or requested.

Data is stored in a variety of mediums:

  • MySQL for relational data
  • mongo for documents (there are a lot of documents in this application)
  • memcache for assembled data constructs and frequently-accessed records
  • file-system for user uploads
  • RabbitMQ via RPC brokers for (in)direct communications

Data objects are organized and controlled via class models which approximate the storage media schema and are constructed via template files so as to promote a factory pattern during instantiation.  These are the dynamic classes.
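To make the factory idea concrete, here's a minimal sketch of template-driven model instantiation.  The class names (Model, UserModel, ModelFactory) are my illustrative assumptions, not actual BEDS internals:

```php
<?php
// Minimal sketch of factory-style model instantiation.
// All class names here are hypothetical, not BEDS internals.
abstract class Model
{
    protected array $fields = [];

    public function set(string $key, $value): void
    {
        $this->fields[$key] = $value;
    }

    public function get(string $key)
    {
        return $this->fields[$key] ?? null;
    }
}

// Dynamic classes approximating the storage schema.
class UserModel extends Model {}
class DocumentModel extends Model {}

class ModelFactory
{
    // Map a template/schema name to its model class and instantiate it.
    public static function make(string $name): Model
    {
        $class = ucfirst($name) . 'Model';
        if (!class_exists($class) || !is_subclass_of($class, Model::class)) {
            throw new InvalidArgumentException("Unknown model: {$name}");
        }
        return new $class();
    }
}

$user = ModelFactory::make('user');
$user->set('email', 'dev@example.com');
```

The point of the factory is that calling code only knows the schema name; the concrete class is resolved at instantiation time.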

Static classes manifest as framework controllers and resource managers: memcache, logging, error-handling, metrics, resource management and configuration controllers are all static classes and are leveraged at all levels within the framework.

I've been actively programming in PHP for over 10 years now as a web developer.  I've always had a preference for PHP because of its strong community support, ease of use, and steady evolution in terms of functional ability.  At various times in the past decade I've been forced, on more than one occasion, to recognize the limits of the language, which, fortunately, seem to be mostly ephemeral.  Today I am developing algorithms that would have been considered impossible just a few years ago; PHP evolves quickly and remains my preferred language for development.

However, I do still embrace the conviction that PHP applications are designed to be short, ephemeral, work-horses -- PHP applications are not generally suitable for long-running tasks, such as daemons.

Which does absolutely nothing to explain why I implemented my message brokers as PHP daemons as the primary interface to my BEDS framework.  Perhaps the decision was influenced by my premature senility and lack of short-term memory...

At any rate - for the last week I've been struggling with a memory leak deep within my framework.  I read up on all the current literature my frantic interweb searches revealed about locating, identifying and destroying the root causes of memory leaks in PHP.  After several days of detective work (refactoring my code to include explicit destructor functions, registering destructor and shutdown methods within my classes, unsetting class objects following their deconstruction, and viewing hundreds of memory-profile graphs via Xdebug) I was simply unable to locate the leak, which caused one of my broker daemons (I have five) to crash once all the memory available to the broker was consumed.

With the pressure increasing daily from a slipped release date, I started to exhaust the repertoire of search results in hunting for the leak, and knew I had to devise a work-around, if not a solution, soon.  Upgrading all my open-source libraries, including my AMQP library, a brief and failed attempt with PECL-AMQP, and several iterations of refactoring proved effective only in improving the overall performance of the framework; the leak persisted.  To some credit, I did reduce the rate at which memory was accumulating, but that only delayed the inevitable crash.

The offending code lay somewhere deep in the layers of my framework and was manifesting primarily in the broker tasked with managing high-volume transfers of document-based data.  Stack Overflow was persistent in its collaborative wisdom that PHP, as a long-running application, was sub-optimal.

That notion, combined with a comment made during our now-daily status meetings ("the front-end isn't showing memory leaks!"  Me: "yes, it probably has leaks, however the programs running on the front-end are stateless.") was what gave me the clue that led to the work-around for "fixing" the leak.

I needed to make the event-calls, within the broker, stateless, ephemeral processes which would execute and then die without impacting the broker's memory footprint.  I could do this if I forked off the broker events as child processes to the broker's parent.

Even though child processes inherit all the resources of a parent, intuitively I believed that letting a child process write a response to the broker resource would overly complicate my fork implementation.  As such, I decided to limit child processing to just that - the processing of the request.  Broker communication would continue to be the responsibility of the parent process.
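The division of labor described above (child does the processing and dies, parent keeps the broker connection) boils down to the standard fork-and-reap pattern.  A stripped-down sketch, not the actual broker code:

```php
<?php
// Fork-and-exit sketch: the child does the work and dies; any memory it
// allocates is reclaimed by the OS, so the parent's footprint never grows.
// Requires the pcntl extension (enabled in CLI builds on most Linux distros).
$pid = pcntl_fork();

if ($pid === -1) {
    die("fork failed\n");
} elseif ($pid === 0) {
    // Child: process the event here, then exit without ever returning
    // to the broker's event loop.
    exit(0);
} else {
    // Parent: block until the child finishes, then resume broker duties.
    pcntl_waitpid($pid, $status);
    echo "child finished with status " . pcntl_wexitstatus($status) . "\n";
}
```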

I also realized, once I figured out the forking process, that I faced another problem: how to get the data generated in the event processing back up to the parent.  I decided to use shared memory for inter-process communication between the child and parent simply because I knew the event itself would exist in etherspace for around 0.2 seconds on average.  Not enough time to use file i/o -- remember we're talking about very large document processing -- and too much overhead to use another AMQP-based delivery.

So my basic algorithm became this:

  • Broker processing begins
    • complete preliminary validation of message request data
    • establish shared memory segment with unique descriptor
    • fork the broker
      • child processes the request
      • child generates a structure with request results and status
      • child writes structure to shared memory
      • child exits
    • parent process accesses shared memory and retrieves the structure
      • parent processes the structure and builds the AMQP return payload
      • parent closes connection to shared memory
      • parent destroys shared memory resource
  • Broker Processing Ends

Time to start coding...

The first step is to set up the shared memory segment that will be used to pass data from the child back to the parent process:

$shmKey = ftok(__FILE__, 't');
if (!$shmID = shmop_open($shmKey, 'c', 0666, $shmSpaceSize)) {
    // log error message
    // build error-return payload
} else {
    switch ($pid = pcntl_fork()) {
        case -1 : // fork failed!
            // log error message
            // build error-return payload
            break;

        case 0 : // child
            // processing code
            // in which $aryReturnData is built
            // ...
            $aryReturnData = serialize(json_encode($aryReturnData));
            $bytesWritten = shmop_write($shmID, $aryReturnData, 0);
            posix_kill(getmypid(), 9);
            break;

        default : // parent
            $status = 0;
            pcntl_waitpid($pid, $status);
            if (pcntl_wifexited($status)) {
                // log error message about exit status -- a normal exit means
                // the child died before reaching posix_kill()
            } else {
                $aryReturnData = shmop_read($shmID, 0, shmop_size($shmID));
                $aryReturnData = json_decode(unserialize($aryReturnData), true);
                // code to publish return AMQP message
            }
            shmop_delete($shmID);
            shmop_close($shmID);
            break;
    }
}
A couple of notes here:

  • I used a global constant to define the shared-memory space size.
  • When you return a json string via shared memory, you need to serialize the string.  shmop_read() hands back the whole fixed-size segment, padded past the end of your data; serialize() records the string length up front, so unserialize() ignores the padding.
  • The posix_kill() call was required - without it, the parent would eventually self-terminate because, I assume, the timeout was reached on the wait status.  Killing the child also signals the parent that processing has completed.
  • The parent blocks on the child.  Sure, in testing, I was forking off a string of 10 child processes and gleefully watching the randomness with which they returned to the parent.  PHP, however, is not thread-safe by default, and I made an architectural decision not to deploy a thread-safe version of PHP simply because of the horrors involved in maintaining the package outside of Ubuntu's, or CentOS's, package management environment.
  • Deleting a shared-memory block is not sufficient by itself to free the shared-memory resource.  You must also close the connection.
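The second bullet above deserves a demonstration, because it bit me: the bytes that come back out of the segment are NUL-padded to the full segment size.  This sketch fakes the padding with str_pad() rather than an actual shmop segment, but the effect on the decoders is the same:

```php
<?php
// Why serialize the JSON before writing it to shared memory:
// the segment comes back NUL-padded to its full size.  json_decode()
// rejects the trailing padding, but unserialize() records the string
// length up front and ignores everything after it.
$payload = json_encode(['status' => 'ok', 'rows' => 42]);

// Simulate reading back a 1 kB shared-memory segment.
$rawJson = str_pad($payload, 1024, "\0");
$rawSer  = str_pad(serialize($payload), 1024, "\0");

var_dump(json_decode($rawJson, true));              // NULL: padding breaks the parse
var_dump(json_decode(unserialize($rawSer), true));  // the original array
```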

After refactoring my broker, I was very pleased to see my memory consumption stabilize into rock-solidness.  My zero-state on start-up is around 20M, and following calls to all the events in the broker (there are 18), memory consumption plateaued into a steady-state of about 36M as shown in the output below:

$ cat /proc/6135/status
Name: php
State: S (sleeping)
Tgid: 6135
Pid: 6135
PPid: 1
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 64
Groups: 4 24 27 30 46 108 124 1000
VmPeak: 401376 kB
VmSize: 357984 kB <--- VM map
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 53332 kB
VmRSS: 36908 kB <--- resident set size
VmData: 99376 kB <--- size of data segment
VmStk: 136 kB <--- size of the stack segment
VmExe: 7376 kB <--- size of the text segment
VmLib: 52620 kB
VmPTE: 524 kB
VmSwap: 0 kB
Threads: 1
SigQ: 4/62591
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000001007
SigCgt: 0000000184000000
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000001fffffffff
Seccomp: 0
Cpus_allowed: ff
Cpus_allowed_list: 0-7
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 19554
nonvoluntary_ctxt_switches: 53
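Rather than cat-ing /proc by hand every time, the interesting number (VmRSS) is easy to pull programmatically.  A small helper I could bolt onto the broker's metrics controller; the function name is my own, not part of BEDS:

```php
<?php
// Read the resident set size (VmRSS, in kB) of a process from /proc.
// Linux-only; returns null if the status file can't be read or parsed.
function getVmRssKb(int $pid): ?int
{
    $status = @file_get_contents("/proc/{$pid}/status");
    if ($status === false) {
        return null;
    }
    if (preg_match('/^VmRSS:\s+(\d+)\s+kB/m', $status, $m)) {
        return (int)$m[1];
    }
    return null;
}

echo "Broker RSS: " . getVmRssKb(getmypid()) . " kB\n";
```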

I am painfully aware that this is a work-around, and that I still have investigative work ahead to locate and resolve the memory leak.  However, I am able to deliver a stable broker in the meantime.

I also learned how to use Xdebug to profile my application and discover where it was spending its time.  Combined with KCachegrind, I was able to visually inspect my application's process and progress.

I'm still searching for a decent memory-usage reporting tool for Ubuntu - please feel free to share your choices in a comment.