Posted on Jan 29, 2014 in Optimisations, Programming, Ruby On Rails, Server

Horizontal scaling using Db Charmer

I was looking for a way to scale a Ruby on Rails application horizontally, and I tried several methods. One of them would be using a MySQL cluster, but that would require some serious database administrator skills, which unfortunately I don't have.

Mainly, I have an application that is read intensive (80% reads vs 20% writes), so I considered using a MySQL master-slave configuration. The problem is that there is nothing about it in the Rails documentation; however, after a short look on ruby-toolbox.com I discovered that I am not the only one who has encountered this problem.

I tried Octopus as my first choice, but I soon discovered that it was not a fit for my application: for some reason, not all my "read" queries were passed to my slave connection. I tried to find out why, but because I was pressed for time, I dismissed this gem, even though I love the simplicity of its models.

After dismissing Octopus, I tried the db-charmer gem, which is pretty actively maintained. This is yet another ActiveRecord sharding gem, and it offers you the possibility to split database reads and writes.
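
For context: if I remember db-charmer correctly, it picks up its extra connections from database.yml, so the slave needs to be defined there as a named entry. A minimal sketch of what that looks like, with hypothetical hosts and credentials:

production:
  adapter: mysql2
  host: master.db.example.com
  database: app_production
  username: app
  password: secret

slave01:
  adapter: mysql2
  host: slave01.db.example.com
  database: app_production
  username: app_readonly
  password: secret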

The method I chose for my first try was to take the actions that were 100% reads and push them to a slave. That was pretty simple, using a filter-style declaration in my Rails controllers:

class ProfilesController < ApplicationController
  # Serve these read-only actions from the slave
  force_slave_reads :only => [ :show, :index ]
end
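
For force_slave_reads to have somewhere to send those reads, the models involved also need to know about the slave. As far as I can tell from db-charmer's API, that is declared with db_magic; a minimal sketch, assuming the slave01 entry from the database.yml sketch above:

class Profile < ActiveRecord::Base
  # Reads may be served by slave01; writes still go to the master
  db_magic :slaves => [ :slave01 ]
end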

This change allowed me to scale the application while keeping the same number of servers, but the main effect was a drop in the response time of the application.

The second action I took was to get all the heavy queries, like counts, out of the MySQL master server and move them to the slave.

class User < ActiveRecord::Base
  # Run this heavy aggregation on the slave to keep the load off the master
  def self.some_heavy_query
    on_slave.joins(:profile, :messages).count(:group => ['messages.thread_id'])
  end
end

In my enthusiasm at having a MySQL slave, I thought it would be nice to have three slave instances "ready" in my config. I later realised that this "optimisation" caused problems, because those 3 connections, multiplied by the number of child processes (MaxClients) in my Apache configuration, and multiplied again by the number of servers, exceeded max_connections on my MySQL slave server. To put hypothetical numbers on it: 3 slave connections x 50 Apache children x 4 servers would be 600 connections, well above MySQL's default max_connections of 151.

After a small fix in my database.yml files I was back online with a more performant application.


Posted on Nov 27, 2013 in PHP, Server

How to use aggressive file caching

Speed up your site

Recently I observed that one of my servers took a long time to respond to users. After an investigation I saw that I had a lot of TIME_WAIT connections, because each request needed to process some output. My application serves some user widgets that connect to a 3rd party server, which can cause a lot of delay in my output. Given the fact that the application did not use secured content (it did not require the user to be signed in), I decided to use an aggressive file caching strategy. Basically, I used PHP's ob_start function and its callback in order to write the application's response to disk.

I had a Yii Framework application, so I modified the index.php file to look like this:

<?php
function callback($buffer)
{
  if (empty($buffer)) {
    return $buffer;
  }
  try {
    // Use the request URI, stripped of its query string, as the cache file path
    $file_name = $_SERVER['REQUEST_URI'];
    if (preg_match("/\?/", $file_name)) {
      $file_name = substr($file_name, 0, strpos($file_name, '?'));
    }
    if (substr($file_name, -3, 3) == '.js') {
      // Cache JavaScript responses next to the document root
      file_put_contents(dirname(__FILE__) . $file_name, $buffer);
    } else if (substr($file_name, -9, 9) == 'some custom name') {
      // Create the target folder on first write, then cache the response
      $dir = dirname(__FILE__) . substr($file_name, 0, -9);
      if (!is_dir($dir)) {
        mkdir($dir, 0777, true);
      }
      file_put_contents(dirname(__FILE__) . $file_name, $buffer);
    }
  } catch (Exception $e) { }
  return $buffer;
}

ob_start("callback");

// change the following paths if necessary
$yii=dirname(__FILE__).'/some/path/to/yii/framework/yii.php';
$config=dirname(__FILE__).'/protected/config/main.php';

// remove the following lines when in production mode
//defined('YII_DEBUG') or define('YII_DEBUG',true);
// specify how many levels of call stack should be shown in each log message
//defined('YII_TRACE_LEVEL') or define('YII_TRACE_LEVEL',3);

require_once($yii);

Yii::createWebApplication($config)->run();

ob_end_flush();

Given the fact that my application needed to return JavaScript and JSON, I had to add the following lines to my NGINX configuration:

location ~ ^/js/.*\.js$ {
  #access_log  off;
  access_log    /var/log/nginx/hostname-access-log main;
  add_header Content-Type application/javascript;
  add_header Access-Control-Allow-Origin *;
  if (-f $request_filename) { break; }
  try_files $uri  @apachesite;
}

location ~ ^/js/.*/some custom name$ {
  #access_log off;
  access_log    /var/log/nginx/hostname-access-log main;
  add_header Content-Type application/json;
  add_header Access-Control-Allow-Origin *;
  if (-f $request_filename) { break; }
  try_files $uri  @apachesite;
}

location / {
  # some more config here 
}
location @apachesite {
  # some more config here 
}

The result was an immediate drop in TCP connections on that server, a decrease in CPU usage, and no difference in functionality. Even more, all I could see was a performance improvement. However, I now had two other issues: the size of the folder and cache expiration. Given the fact that I wrote the files to disk in one single folder, there was a response time issue (again) because of the big number of files. These two issues were easy to fix by adding a small script to my crontab:

#Added cronjob to delete old files
0 * * * * /some/path/for/cache/expire/script.sh

And the source of /some/path/for/cache/expire/script.sh:

#!/bin/bash

BASE='/just/another/htdocs/public/folder/matching/my/url'
# age in minutes
AGE=60

# Delete top-level cache entries older than $AGE minutes; letting find walk
# the folder itself avoids the argument-list limit that the $BASE/* glob can hit
find "$BASE" -mindepth 1 -maxdepth 1 -mmin +"$AGE" -exec rm -r {} \;

Warning! This aggressive file caching strategy can cause serious response time issues if the number of files gets too big (I let you decide what "big" means to you). Implementing the cron job above ensures cache expiration, but it also keeps the folder clean by deleting the files that have not been modified in a while.
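
To get an idea of what "big" means on your own setup, it is worth checking how many files actually accumulate between two runs of the cron job. A quick check, using the same cache folder as in the script above:

#!/bin/bash
# Count the files currently sitting in the cache folder
find '/just/another/htdocs/public/folder/matching/my/url' -type f | wc -l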
