Archive for the ‘ Tech ’ Category

At my place of employment the majority of doors between and along the corridors can now be opened with the push of a button. This is a boon for those who are wheelchair-bound, pushing a pushchair or catering trolley, or who simply have their arms full.

However, they are also a source of frustration, especially when encountering someone who hasn’t a clue. Just because a door can be opened with the button does not mean that it must be, or that this is the most efficient and sensible way of operating the door in all cases.

Many of these have only been installed this year. Previously there were a few push-button doors along the busier corridors. As you walk along some of the wider sections of corridor you will approach a set of three doors adjacent to each other. Two of the doors can only be opened manually and one has a push button. It amazes me how many people will be walking on one side of the corridor and will veer right over to the other side in order to push the button, rather than simply pulling open the door that is in front of them. Sometimes they will even wait for several people to stream through the electrically opened door rather than using one of the manual ones.

Now many more doors are push-button. This did not require replacing any doors: the existing ones were simply fitted with a robot arm, and a couple of touch sensors that communicate wirelessly with the robot were screwed to the wall.


One of the touch-sensitive door-opening buttons.

Notice the images on the button: a wheelchair and a pushchair. It should be obvious for whom the button is intended, but apparently it is not. Most physically fit and unencumbered people think that the button is for them. I have seen people approach the door, veer over to touch the button, then stand waiting as the door slowly opens in front of them. JUST PUSH THE STUPID DOOR!!!! The one nearest my office opens really slowly. You can open it manually, but once the button is pushed and the motor engages it becomes too heavy to push. So to the helpful person who saw me approach the door and hit the button for me as he went past, I say “No thank you”.

Apart from a couple of older doors that have “Please don’t open manually” signs on them (but have manual doors next to them), all these doors can be opened manually, and it is usually far quicker to do so than to wait for the motor to do it for you.

So if you’re walking on two legs and have at least one hand free, please don’t push the button. It wastes our electricity, your time, and your muscles, which would benefit from the exercise of opening the door rather than expecting a robot to do it for you. Have we got so used to machines doing our bidding that we can’t even open a door by ourselves?

In Part 1 I showed you how to use a combination of pound, haproxy and stunnel to create a cookie-based load balancing solution on Debian GNU/Linux 6.0. In Part 2 I will show you how to make the system more resilient.

Aims

If you have followed the instructions in Part 1 you should have two web servers and a server that is acting as a load balancer between them. This solution will work well, but in the event of an unexpected failure in one of the nodes the system will cease to function properly. Either half your users will have their connections forwarded to a broken web server, or (if the load balancer fails) the whole system will be unavailable.

In this part we will set up two new features.

  1. A second stand-by load balancer that can take over load balancing should the primary one fail.
  2. A system that monitors the web servers and adjusts the load balancing rules to remove servers that have failed.

Outwith the scope of this document is the monitoring that you should be doing anyway so that you can react to failed services. My preference is Nagios but other monitoring systems are available.

Assumptions

The IP addresses change slightly here. 192.168.0.1 was the address of the load balancer in Part 1 and, from a user’s perspective, it still is; it now floats between the two load balancers. Here are the addresses as they should stand now:

192.168.0.1 – Load balancer IP address that will float between the two load balancers.

192.168.0.2 – Web server 1

192.168.0.3 – Web server 2

192.168.0.4 – Load Balancer 1

192.168.0.5 – Load Balancer 2

Initial Set Up

Change the IP address of your current load balancer to 192.168.0.4, then head over to part 1 and set up your second load balancer with pound, haproxy and stunnel. Make sure to configure pound to listen on address 192.168.0.1, and also to tell haproxy to listen on 192.168.0.1:80 if you are load balancing http as well as https.
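
On Debian the address change is made in /etc/network/interfaces. A minimal sketch for balancer1 (the netmask and gateway here are assumptions; use your own):

auto eth0
iface eth0 inet static
    address 192.168.0.4
    netmask 255.255.255.0
    gateway 192.168.0.254

Restart networking (or ifdown eth0 && ifup eth0, from the console rather than over ssh) and do the equivalent on the new balancer with 192.168.0.5.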

Heartbeat

Heartbeat is the software that will allow the secondary load balancer to take over if the primary one fails. Install it with:

apt-get install heartbeat

Then, on both load balancers, configure it by creating the following files:

/etc/ha.d/ha.cf

logfacility     local0
keepalive 2
deadtime 10
warntime 10
initdead 20
udpport 694
auto_failback on
node    balancer1
node    balancer2

ucast   eth0 <other nodes ip address>

Replace <other nodes ip address> with the IP address of the other load balancer on each machine. This tells heartbeat the IP address of the other node in the cluster. balancer1 and balancer2 are the resolvable host names of the two load balancers.
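
So with the addresses used here, the file is identical on both machines except for that last line:

# on balancer1 (192.168.0.4)
ucast   eth0 192.168.0.5

# on balancer2 (192.168.0.5)
ucast   eth0 192.168.0.4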

/etc/ha.d/haresources

balancer1 192.168.0.1 stunnel4 haproxy pound

This tells heartbeat to manage the 192.168.0.1 IP address and also to start/stop stunnel, haproxy and pound. It also specifies balancer1 as the primary load balancer. This, together with the auto_failback on setting, tells heartbeat to use balancer1 whenever possible and to revert to it as soon as it comes back to health.

Once you have configured heartbeat, restart it on both load balancers and check the IP address with ifconfig. You should see eth0:0 on the active node with the IP address 192.168.0.1. Shut down heartbeat on balancer1, run ifconfig on balancer2, and you should see it take over 192.168.0.1. Start heartbeat on balancer1 and it should take the IP address back.
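
If you want the exact commands for that test, something like:

balancer1# /etc/init.d/heartbeat stop
balancer2# ifconfig eth0:0          # should now show 192.168.0.1
balancer1# /etc/init.d/heartbeat start
balancer1# ifconfig eth0:0          # and the address should move back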

Monitoring the Web Servers

Monitoring the web servers is done using mon. Mon will constantly monitor the web servers. If one goes down mon will trigger an alert that will adjust the load balancing configuration. The alert program works by maintaining a sqlite database in which it records the “state of the world” and then uses the contents of that database to regenerate the haproxy configuration before restarting haproxy.

From our perspective an SQL database is the simplest way to maintain the “state of the world” so that we do not have to write our own faffy flat-file handling code, but the amount of data and access involved is far too small to require a full-blown MySQL, Postgres or (God forbid) Oracle installation. For this reason I chose sqlite.

All the steps shown below in this section should be executed on both load balancers.

Install the software:

apt-get install mon sqlite3 libdbd-sqlite3-perl

SQLite Database

Next we need to create our sqlite database as follows:

sqlite3 /etc/mon/balance.db
CREATE TABLE balance (type text, checkurl text, targeturl text, status text);
INSERT INTO balance
VALUES('plain','webserver1','server webserver1 192.168.0.2:80 cookie webserver1 maxconn 5000','up');
INSERT INTO balance
VALUES('plain','webserver2','server webserver2 192.168.0.3:80 cookie webserver2 maxconn 5000','up');
INSERT INTO balance
VALUES('ssl','webserver1','server webserver1 127.0.0.1:82 cookie webserver1 maxconn 5000','up');
INSERT INTO balance
VALUES('ssl','webserver2','server webserver2 127.0.0.1:83 cookie webserver2 maxconn 5000','up');
.exit

That will create your sqlite database. In order for mon to be able to maintain it, ensure that balance.db is owned by mon and that mon has read/write access to it. You should also ensure that the /etc/mon directory is group-owned by mon and has group write permission (sqlite needs to create a temporary journal file alongside the database when writing).
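
Something like the following should sort out the permissions, assuming the mon package has created a mon user and group:

chown mon:mon /etc/mon/balance.db
chmod 664 /etc/mon/balance.db
chgrp mon /etc/mon
chmod g+w /etc/mon

You can check the contents at any time with sqlite3 /etc/mon/balance.db "SELECT * FROM balance;".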

The event handler

By default mon event handlers (or alerts as they are known) live in /usr/lib/mon/alert.d

In this directory we need to create the balance.alert program as follows:

#!/usr/bin/perl
use DBI;
use Getopt::Std;

# -g <group>  the hostgroup (web server) on which mon is alerting
# -u          present when this is an upalert, i.e. the server has recovered
getopts ("g:u");

my $dbargs = {AutoCommit => 1,
              PrintError => 1};

my $dbh = DBI->connect("dbi:SQLite:dbname=/etc/mon/balance.db","","",$dbargs);

if ($opt_u)
{
        $dbh->do("UPDATE balance SET status='up' WHERE checkurl='$opt_g'");
}
else
{
        $dbh->do("UPDATE balance SET status='down' WHERE checkurl='$opt_g'");
}

open HAP, ">/etc/haproxy/haproxy.cfg";
print HAP << "EOT";
    global
        log 127.0.0.1 local0 notice
        user haproxy
        group haproxy
        daemon
        maxconn 20000

    defaults
        log global
        option dontlognull
        balance leastconn
        clitimeout 60000
        srvtimeout 60000
        contimeout 5000
        retries 3
        option redispatch

    listen http 192.168.0.1:80
        mode http
        cookie WEBSERVERID insert
        option httplog
        balance source
        option forwardfor except 192.168.0.1
        option httpclose
        option redispatch
        maxconn 10000
EOT

my $statcursor = $dbh->prepare("SELECT * FROM balance WHERE type='plain' AND status='up'");
$statcursor->execute();
while(my $statrow=$statcursor->fetchrow_hashref())
{
        my $targeturl=$statrow->{'targeturl'};
        print HAP "     $targeturl\n";
}
$statcursor->finish();

print HAP << "EOT";

    listen https 127.0.0.1:81
        mode http
        cookie WEBSERVERID insert
        option httplog
        balance source
        option forwardfor except 192.168.0.1
        option httpclose
        option redispatch
        maxconn 10000
EOT

$statcursor = $dbh->prepare("SELECT * FROM balance WHERE type='ssl' AND status='up'");
$statcursor->execute();
while(my $statrow=$statcursor->fetchrow_hashref())
{
        my $targeturl=$statrow->{'targeturl'};
        print HAP "     $targeturl\n";
}
$statcursor->finish();

close HAP;
$dbh->disconnect();

# Only restart haproxy if this machine currently holds the floating
# IP address, i.e. if it is the active load balancer.
my $heartbeat = system("/sbin/ifconfig | grep 192.168.0.1");
if($heartbeat == 0)
{
        `sudo /etc/init.d/haproxy restart`;
}

Mon runs this program with the option -g to specify the web server on which it is alerting, and -u if it is alerting that the server has recovered.

In order for mon to be able to run this program you need to ensure that the mon user can write haproxy.cfg in the /etc/haproxy directory. You also need to tell sudo that mon may run the /etc/init.d/haproxy script without a password. Add the following to /etc/sudoers (preferably by using visudo to edit it):

mon ALL = NOPASSWD: /etc/init.d/haproxy

The program first updates the sqlite database with the status of the web server on which it is alerting. It then reads the database back, regenerates /etc/haproxy/haproxy.cfg from the servers registered as “up”, and finally (if running on the active load balancer) restarts haproxy.
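
You can exercise the alert by hand before letting mon loose on it. Running it as the mon user also proves the permissions are right, e.g.:

su -s /bin/sh mon -c "/usr/lib/mon/alert.d/balance.alert -g webserver1"
grep webserver1 /etc/haproxy/haproxy.cfg    # its server lines should have gone
su -s /bin/sh mon -c "/usr/lib/mon/alert.d/balance.alert -u -g webserver1"
grep webserver1 /etc/haproxy/haproxy.cfg    # and now they are back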

mon.cf

Configure mon as follows:

cfbasedir  = /etc/mon
alertdir   = /usr/lib/mon/alert.d
mondir     = /usr/lib/mon/mon.d
maxprocs   = 20
histlength = 100
randstart  = 30s
logdir     = /var/log/mon
dtlogging  = yes
dtlogfile  = dtlog

hostgroup webserver1 webserver1.stir.ac.uk
hostgroup webserver2 webserver2.stir.ac.uk

watch webserver1
        service apache
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
        numalerts 1
        alert balance.alert
        upalert balance.alert

watch webserver2
        service apache
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
        numalerts 1
        alert balance.alert
        upalert balance.alert

Once done, restart mon (/etc/init.d/mon restart).

Extra Resilience

Pound, haproxy and stunnel are fairly robust, but I have known stunnel to crash before now. If you want to guard against any of the key processes crashing then you can configure mon to watch for them, and to shut down heartbeat if any of them die (thus migrating load balancing to the secondary server, which will start its own processes at that point).

Mon does not provide an easy-to-use monitor for checking whether processes are up, so I chose to use Nagios for this purpose, since the systems I run are already set up for full Nagios monitoring and event handling.

Please get in touch if you have any comments or ways to improve this.

The task was simple: create a load balancing solution to provide high availability for a crucial service. The solution must support SSL and must support cookie-based persistence so that clients will always be sent to the same backend server.

Firstly I’d like to credit Bob Feldbauer of CompleteFusion whose instructions provide the basis for this solution. However Bob’s instructions were slightly lacking in that communication between the load balancer and the application servers is in the clear. Here I attempt to show how to encrypt all network traffic.

HAProxy is an extremely powerful load balancer and is up to the job for the most part. It can insert its own cookies for persistence; however, it does not support SSL. This is not a show-stopper, but it is the reason I felt the need to document my set-up, as it is a little complicated.

To keep haproxy out of the SSL business I used stunnel. However, stunnel can operate in client mode or in server mode, but not both at once. Running two instances of stunnel could get messy, so I decided to use stunnel in client mode to talk to the application servers, and pound on the load balancer to receive connections from the clients.

This system is running on Debian 6.0 (squeeze). Please adjust accordingly if you are using a different system.

Firstly make sure that all the software you need is installed.

apt-get install pound haproxy stunnel4

If you need your application servers to know the IP address of the originating client then check out Bob Feldbauer’s instructions on building your own stunnel including the xforwarded-for patch. Should you choose to build your own stunnel then the simplest way to make sure you are running it is to edit /etc/init.d/stunnel4 and set the DAEMON variable to the location of your hand-built stunnel binary.

Assumptions

I am using three servers here: the load balancer at 192.168.0.1, app server 1 at 192.168.0.2 and app server 2 at 192.168.0.3. Please substitute your own IP addresses (but you knew that anyway; if that was not obvious then you shouldn’t be attempting any of this).

Get a Certificate

One of the first things you should do is get an SSL certificate. Have a look at Paul Bramscher’s instructions on how to create SSL certificates, but instead of self-signing it you probably want to get it signed by a recognised certificate authority. When you receive your certificate you need to put it in the format pound expects: the certificate file that pound reads needs your unencrypted key at the top, followed by your signed certificate, then any intermediate certificates your CA may have sent you. In this example I have placed the full certificate file in /etc/ssl/certs/fullcertificate.crt.
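
Building the combined file is then just a concatenation in the right order. With made-up file names:

cat example.com.key example.com.crt intermediate.crt > /etc/ssl/certs/fullcertificate.crt
chmod 600 /etc/ssl/certs/fullcertificate.crt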

While you are in certificate mode I recommend creating self-signed certificates for each of the web servers. These should be installed appropriately on your web server software and will be used by stunnel to verify that it is talking directly to the web servers.
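
Paul’s guide covers the mechanics, but for the web servers a single openssl command along these lines will do (the key size and ten-year lifetime are my choices; adjust to taste):

openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout webserver1.key -out webserver1.crt

Give the server’s host name as the Common Name when prompted, and repeat for webserver2.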

Configuring Pound

Once you have a certificate you can configure pound to receive SSL connections, decrypt them and send them on to haproxy (which we will configure to listen on port 81) in the clear.

Create the following /etc/pound/pound.cfg:

User "www-data"
Group "www-data"
LogLevel 1
Alive 30
Control "/var/run/pound/poundctl.socket"

ListenHTTPS
    Address 192.168.0.1
    Port    443
    Cert    "/etc/ssl/certs/fullcertificate.crt"
    Service
        BackEnd
            Address 127.0.0.1
            Port    81
        End
    End
End

Also (on Debian) you need to edit /etc/default/pound to set startup=1. You can then run pound:

/etc/init.d/pound start
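
At this point haproxy is not yet configured, so a quick sanity check is simply to confirm that pound completes an SSL handshake and presents your certificate:

openssl s_client -connect 192.168.0.1:443 < /dev/null

You should see your certificate chain in the output (the actual HTTP response will be an error until haproxy is up).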

Configuring HAProxy

We will configure two aspects of haproxy. Firstly we tell it simply to forward requests on port 80 to port 80 on the application servers. Secondly we tell it to take requests on port 81 (i.e. from pound) and forward them on to stunnel (which we will configure to forward the requests via SSL to the application servers).

Here are the contents of /etc/haproxy/haproxy.cfg (shamelessly copied from Bob Feldbauer and tweaked).

    global
        log 127.0.0.1 local0 debug
        user haproxy
        group haproxy
        daemon
        maxconn 20000

    defaults
        log global
        option dontlognull
        balance leastconn
        clitimeout 60000
        srvtimeout 60000
        contimeout 5000
        retries 3
        option redispatch

    listen http 192.168.0.1:80
        mode http
        cookie WEBSERVERID insert
        option httplog
        balance source
        option forwardfor except 192.168.0.1
        option httpclose
        option redispatch
        maxconn 10000
        reqadd X-Forwarded-Proto:\ http
        server webserver1 192.168.0.2 cookie webserver1 maxconn 5000
        server webserver2 192.168.0.3 cookie webserver2 maxconn 5000

    listen https 127.0.0.1:81
        mode http
        cookie WEBSERVERID insert
        option httplog
        balance source
        option forwardfor except 192.168.0.1
        option httpclose
        option redispatch
        maxconn 10000
        reqadd X-Forwarded-Proto:\ https
        server webserver1 127.0.0.1:82 cookie webserver1 maxconn 5000
        server webserver2 127.0.0.1:83 cookie webserver2 maxconn 5000

Here’s a quick run-through of what is going on here.

In the global section we set users, daemon mode and logging. If you want haproxy to log to syslog then you’ll need to switch on UDP port 514 in rsyslog. Find the MODULES section in /etc/rsyslog.conf and add the following lines.

$ModLoad imudp
$UDPServerRun 514
$UDPServerAddress 127.0.0.1

Then restart rsyslog (/etc/init.d/rsyslog restart).

Once you have finished and got everything working you may wish to turn logging down to “notice” instead of “debug”.

The listen http 192.168.0.1:80 section is telling haproxy to load balance port 80 between the two webservers and to insert its own WEBSERVERID cookie that it can use for webserver persistence.

The listen https 127.0.0.1:81 section is telling haproxy to receive data on port 81 (from pound) and forward it on to either of the two webservers, but to do so via stunnel (which will be configured to listen on ports 82 and 83 and forward them on via SSL to the webservers). It also sets and uses the WEBSERVERID cookie.

Edit /etc/default/haproxy and set ENABLED=1 before starting haproxy (/etc/init.d/haproxy start).

Configuring stunnel

We configure stunnel (in /etc/stunnel/stunnel.conf) to receive data on port 82 and send it over SSL to port 443 on webserver 1, and likewise to receive data on port 83 and send it to webserver 2. This version uses certificates to verify that it is talking directly to the web servers and not being intercepted by a man-in-the-middle. If you don’t care to verify the web servers then set verify = 0 and don’t bother with the CAfile lines.

You will need to create the certificate files in the appropriate directory (in this case /etc/stunnel/certs). Each certificate file should contain (in this order) your signed certificate, any intermediate certificates, and finally your private key.

client = yes
verify = 1

#sslVersion = SSLv3

chroot = /var/lib/stunnel4/
setuid = stunnel4
setgid = stunnel4
pid = /stunnel4.pid

socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1

debug = 7
output = /var/log/stunnel4/stunnel.log

[webserver1]
accept = 82
connect = 192.168.0.2:443
CAfile = /etc/stunnel/certs/webserver1.crt

[webserver2]
accept = 83
connect = 192.168.0.3:443
CAfile = /etc/stunnel/certs/webserver2.crt

End of Part One

If you have followed this guide so far you should have a single IP address that load balances across two servers using SSL on both the front end and the back end. In Part Two I will look at creating a second load balancer that can take over should your first one fail, and at monitoring the web servers so that the balancing rules are updated automatically if one of them goes down.
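
Before moving on, a quick way to convince yourself the persistence cookie is working (the -k flag stops curl complaining when you test against the raw IP address rather than the certificate’s host name):

curl -k -I https://192.168.0.1/
curl -I http://192.168.0.1/

Each response should include a Set-Cookie: WEBSERVERID=... header; a client that sends that cookie back will keep hitting the same web server.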

As a GNU/Linux user I tend to automate tedious tasks as much as possible. Here is a handy script that splits up a video file without all that pointy, clicky nonsense. It has come in very handy for creating my cycling videos, as it enables me to easily separate the real-time sequences from the sped-up sequences. You need ffmpeg (in case that is not obvious).

Simply provide your video file, and a text file containing the cut points in seconds, one per line e.g.

10.1
15.2
65.4

The last line in the timings file should be the length of the video if you want your split files to run right up to the end of the original.
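
If you don’t know the length, ffmpeg will print it (on stderr) along with the rest of the stream information:

ffmpeg -i video_file.avi 2>&1 | grep Duration

Convert the HH:MM:SS.xx figure it reports into seconds for that final line.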

You then run it as follows:

splitavi video_file.avi timings_file

Here’s the script:

#!/usr/bin/perl
use strict;
use warnings;

my $infile = $ARGV[0];
my $timefile = $ARGV[1];
print "Using infile $infile.\n";
print "Using timefile $timefile.\n";

my $counter = 1;
my $start = 0;
my $end;
open TIMES, $timefile or die "Cannot open $timefile: $!";
while(my $time = <TIMES>)
{
    chomp $time;
    $end = $time;
    # Name each segment after the original file plus a 3-digit sequence number.
    my $outfile = $infile;
    $outfile =~ s/(.*)\.avi/$1/;
    $outfile = sprintf("%s%03d.mpeg", $outfile, $counter);
    my $length = $end - $start;
    print "Running ffmpeg -i $infile -ss $start -t $length $outfile\n";
    `ffmpeg -i $infile -ss $start -t $length $outfile`;
    $start = $end;
    $counter++;
}
close TIMES;

The idea of cloud computing is one that has me worried.

Just recently I discovered that, due to a cock-up, I had deleted someone’s important files from a server that I administer.

The lesson here is, “if it is important to you then keep it on a computer that only you control”.