Optimizing a real-time Symfony app with Memcached, Socket.io, Node.js, Redis, and Nginx
Tue 16 April 2013

Tenho Reserva, a PHP/Symfony/MySQL web app, had a problem: we desperately needed to lower its CPU footprint. The primary culprit was the clerk availability algorithm, which took Apache or php-cgi an average of two seconds per request to handle. To make matters worse, pages automatically issued a new request every five seconds to keep information up to date, compounding the load. It was a dire predicament: a single-core VPS couldn't handle more than two simultaneous users without a noticeable delay browser-side!
Mapping a strategy
To solve any problem you must ask the right questions, so after some thought, the following suggested themselves:
- Are there any cycles being wasted on superfluous operations? (Doctrine, I'm looking at you!)
- Can we cache results?
- Can the algorithm be optimized?
- Can the frequency of requests be diminished?
It was clear that successful optimization would require answering all of these. So, without further ado:
Don't turn on the light in empty rooms
First stop: Xdebug's profiler. Investigating the controller's bottlenecks with the awesome KCachegrind, I was surprised to find that Doctrine's default hydrator was taking up over 80% of the total processing time for each request. So, as suggested in many Symfony optimization threads, the first thing to do was to switch to array hydration. It's pretty simple to implement, as long as you remember to use array accessors everywhere. You enable it when executing a query:
<?php $query->execute(array(), Doctrine_Core::HYDRATE_ARRAY); ?>
Results? Roughly 50% fewer cycles spent, for less than one hour of work. I'll never use object hydration again!
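To give an idea of the change, here's a minimal sketch (the Availability query, column names, and $clerkId variable are hypothetical): with array hydration, results come back as plain nested arrays, so every access uses array syntax instead of getters.

<?php
// Hypothetical example: fetch availabilities as plain arrays.
$query = Doctrine_Query::create()
    ->from('Availability a')
    ->where('a.clerk_id = ?', $clerkId);

$availabilities = $query->execute(array(), Doctrine_Core::HYDRATE_ARRAY);

foreach ($availabilities as $availability) {
    // Array accessors, not $availability->getStartsAt()
    echo $availability['starts_at'];
}
?>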
Caching and hooks
Not satisfied with a still unwieldy full second of real time for each and every request, I went on to question #2: how about some caching? It seemed like a cheap proposition: in Symfony, enabling query result caching is about as easy as flipping on a switch.
We went with memcached for the caching backend, primarily because I had previous experience with it at Bolsa de Mulher. Once you have the daemon set up and the appropriate PHP library installed (for Symfony, you need the "memcache" extension, not "memcached"), all it takes is the following in your ProjectConfiguration.class.php:
<?php
public function configureDoctrine(Doctrine_Manager $manager) {
    $cache_options = array('servers' => array('host' => 'localhost'));
    $manager->setAttribute(Doctrine_Core::ATTR_QUERY_CACHE,
        new Doctrine_Cache_Memcache($cache_options));
    $manager->setAttribute(Doctrine_Core::ATTR_RESULT_CACHE,
        new Doctrine_Cache_Memcache($cache_options));
}
?>
...and before executing a $query:
<?php $query->useResultCache(true, 3600, 'cache_key'); ?>
Now, while the first request still took a second, subsequent ones (for the following hour, at least) would be handled in less than 50 milliseconds!
But we weren't done. This is a realtime application, so we couldn't have stale results after a change in the database. Clearing the cache everywhere there's an insert, update, or delete in the code would be tiresome, so Doctrine hooks to the rescue! Here's a demonstration, to be called from the model class (say, lib/model/doctrine/Availability.class.php):
<?php
public function postSave($event) {
    $cache = Doctrine_Manager::getInstance()
        ->getAttribute(Doctrine_Core::ATTR_RESULT_CACHE);
    $cache->delete('an_appropriately_named_cache_key');
}
?>
The same would be done in the postDelete() hook, and if you use DQL to delete items, you can also use the preDqlDelete() hook, which first needs to be enabled in the ProjectConfiguration class:
<?php
public function configureDoctrine(Doctrine_Manager $manager) {
    $manager->setAttribute(Doctrine_Core::ATTR_USE_DQL_CALLBACKS, true);
}
?>
And then, back in the model class, you'd do the following. Note the conversion of the deletion query into a select:
<?php
public function preDqlDelete($event) {
    // Clone the DELETE query and turn it into a SELECT, so we know
    // which items are about to be removed.
    $query = clone $event->getQuery();
    $query->select();
    $items = $query->execute();
    $cache = Doctrine_Manager::getInstance()
        ->getAttribute(Doctrine_Core::ATTR_RESULT_CACHE);
    foreach ($items as $item) {
        $cache->delete('key_relevant_to_the_item');
    }
}
?>
That covered our bases, at least as far as caching went: any update to the database would mean fresh results for the next request. One should be careful to delete *all* relevant cache items, though, maybe by using a hierarchical key naming scheme in conjunction with $cache->deleteByPrefix().
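Here's a rough sketch of what that might look like (the key names and variables are hypothetical): cache entries related to a clerk share a common prefix, so a single hook call can invalidate the whole family.

<?php
// Hypothetical key scheme: everything related to a clerk shares a prefix.
$query->useResultCache(true, 3600, 'availability_'.$clerkId.'_week_'.$week);

// Later, in a postSave()/postDelete() hook, wipe the whole family of keys:
$cache = Doctrine_Manager::getInstance()
    ->getAttribute(Doctrine_Core::ATTR_RESULT_CACHE);
$cache->deleteByPrefix('availability_'.$clerkId);
?>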
Thinking outside the box
We now had an expensive 1000ms request, followed by an indefinite number of cheap 50ms ones. This wasn't good enough, though. To be able to scale realistically, we would need to not only lower the initial expensive request to below 200ms, but make sure it would happen as rarely as possible. What to do?
I've been a fan of offloading processing to the browser for some time. Even mobile browsers are efficient enough to handle most Javascript loops you throw at them. Taking inspiration from Backbone.js and an awesome tutorial (written by my friend and colleague Xavier Antoviaque), I decided to delegate any and all significant calculations to the browser.
It was a funny endeavor: the controller was whittled down to nothing but a database -> JSON translator, its previous responsibilities transmogrified almost verbatim from PHP into Javascript (discovering the marvelous Moment.js in the process, to supplement Javascript's sparse datetime functions).
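To give an idea of how little is left server-side, here's a minimal sketch of such a pared-down action (the action, model, and cache key names are hypothetical):

<?php
// Hypothetical Symfony 1.x action: fetch rows as arrays, serve them as JSON,
// and leave all the availability math to the browser.
public function executeAvailabilities(sfWebRequest $request) {
    $query = Doctrine_Query::create()->from('Availability a');
    $query->useResultCache(true, 3600, 'availabilities_json');
    $rows = $query->execute(array(), Doctrine_Core::HYDRATE_ARRAY);

    $this->getResponse()->setContentType('application/json');
    return $this->renderText(json_encode($rows));
}
?>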
And guess what? That 1000ms monster was brought down to a meager 150ms. Mission accomplished? Not so fast!
There can be only one
What if we could do away with all those pesky refresh requests from the browser, and instead, only notify it of changes as they occurred? That would make for a positively *idle* server.
Did anybody think of Websockets? So did I! ;) And thanks to socket.io and node.js, we could even use them(!), falling back gracefully to a decent emulation layer on non-supporting browsers. What's more, since version 1.3.13 nginx can proxy WebSocket connections (it supports the HTTP/1.1 Upgrade mechanism), enabling us to sidestep possible firewall and cross-site issues by hiding Node.js behind the same URL as the site itself. These are the relevant bits of nginx.conf:
upstream node {
    server localhost:3001;
}

server {
    location /socket/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://node;
    }
}
We like letting node.js itself serve socket.io, so this is what's added to the relevant Symfony templates:
<?php use_javascript('/socket/socket.io.js'); ?>
The key to having socket.io on a path other than the default '/socket.io/' (for us, '/socket/') is to use the 'resource' option when opening the socket from the browser:
$(document).ready(function() {
    var socket = io.connect('http://same_host_as_the_website.com', {
        'resource': 'socket',
        'transports': ['websocket', 'xhr-polling', 'jsonp-polling']
    });
});
And on the node.js server, don't forget a leading '/' (it took some fiddling to figure this one out :P):
var io = require('socket.io').listen(server);
io.set('resource', '/socket');
At this point we could open a websocket (or equivalent) cleanly from any browser, but how would we handle communication from Symfony through it? Why, Redis, of course! Both node.js and PHP have simple-to-install, easy-to-use libraries or plugins for it. On the Symfony side, we'd once again resort to the postSave, postDelete, and postDqlDelete hooks, this time to send out notifications when something changed in the database. This is what it looks like in a postDelete() hook:
<?php
public function postDelete($event) {
    $message = array(
        'class' => 'an_arbitrary_message_class',
        'type' => 'delete',
        'objects' => array(array(
            'id' => $this->id
        ))
    );
    $redis = sfRedis::getClient();
    $channel = 'relevant_channel_for_the_class';
    $redis->publish($channel, json_encode($message));
}
?>
What we did here is build a JSON message that contains just enough information for the browser itself to update the view without needing to issue a new request, and publish it on a channel. This last part is important: we don't want all browsers to get all notifications; otherwise, we'd run into a whole different bottleneck! So we partitioned the notifications into channels, and arranged it so a given page will only subscribe to the relevant ones. Like so:
$(document).ready(function() {
    socket.emit('subscribe', {channel: 'a_relevant_channel'});
});
We're still missing the "glue" on node.js, though! How does it listen to incoming messages from Redis and know where to send them? This is what it looks like there:
var io = require('socket.io').listen(server);
var redis = require('socket.io/node_modules/redis').createClient();

io.sockets.on('connection', function(socket) {
    socket.on('subscribe', function(data) {
        socket.join(data.channel);
    });
    socket.on('unsubscribe', function(data) {
        socket.leave(data.channel);
    });
});

redis.psubscribe('*');
redis.on('pmessage', function(pattern, channel, message) {
    io.sockets.in(channel).emit('message', {
        'channel': channel,
        'message': message
    });
});
It's easy enough to see what's happening: when a browser socket sends a "subscribe" command, node.js has that socket join an internal channel with the requested name. On the Redis side, whenever node.js receives a message on a Redis channel, it forwards it to all sockets subscribed to the node.js channel of the same name.
How does the browser handle the messages? Easily enough:
socket.on('message', function(data) {
    var message = $.parseJSON(data.message);
    // Code to parse the message and redraw the view.
});
Since the browser was already handling the view of the model (almost) directly, it was a (relative) piece of cake to have it display changes to the relevant database tables with the short messages coming over the socket.io wire.
Are we there yet?
The verdict? The bottleneck was removed. We went from a 15-minute load average of 1.0 to 0.05, for 2 simultaneous users, and an average request processing time of 100ms, down from 2000ms. What's more, updates to the page are broadcast and displayed immediately, as opposed to having the user wait as much as 7 seconds for a refresh. In table form:
| | Before | After |
|---|---|---|
| Avg processing time (uncached) | 2000ms | < 150ms |
| Avg processing time (cached) | 2000ms | < 50ms |
| 15-min load average (2 users) | > 1.0 | 0.05 |
| Avg wait for updates | > 5s | Immediate |
There is more to be done, if necessary. Doctrine's hydration process is still the biggest eater of CPU cycles, so in the future we might look into optimizing that further (doing away with hydration entirely?).
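If it comes to that, one possible direction (a rough sketch only, with hypothetical table and column names) would be to bypass the hydrators altogether and fetch raw rows through Doctrine's connection:

<?php
// Hypothetical sketch: skip Doctrine's hydrators and fetch raw rows
// directly through the underlying connection.
$conn = Doctrine_Manager::connection();
$rows = $conn->fetchAssoc(
    'SELECT id, clerk_id, starts_at FROM availability WHERE clerk_id = ?',
    array($clerkId)
);
echo json_encode($rows);
?>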
On the whole, I'm really happy with the mix of Symfony, Memcached, Socket.io, Node.js, Redis, and Nginx. The combination not only makes for a very scalable infrastructure, but also keeps the developer-friendliness of a full framework. Framework or not, I intend to use this toolkit a lot.