Ember and Django Deployment

I’m working on a small project. It started as a simple showcase using only Django, but soon enough I needed more interaction and started a simple front-end in Ember. So now I have two projects, a front-end and a back-end. Django morphed into being only an API endpoint, keeping my data in a database and handling a few other things.

So I need to deploy this on a really basic server, no fancy clouds or CDN stuff. My first thought was to deploy them as two separate servers (virtual hosts) on the same nginx instance, but then I would have to handle CORS issues and whatnot.

That was kind of bothering me… then I found Luke Melia’s talk on Lightning Fast Deployment of Your Rails-backed JavaScript app, and it just clicked: problem solved. Applying his ideas to Django was really straightforward. I just needed a view, a simple model for storing the current index, and a static folder to store all of this. Nginx serves all the static files, and Django just needs to serve the index.html, which lets me use its templating system.

Django

The model that tracks which index page is currently in use:

from django.db import models


class IndexPage(models.Model):

    hash = models.CharField(max_length=10)
    index_name = models.CharField(max_length=40)
    is_current = models.BooleanField(default=False)

    def save(self, *args, **kwargs):
        # Only one index can be current: demote any other record first.
        if self.is_current:
            IndexPage.objects.filter(is_current=True).update(is_current=False)
        super(IndexPage, self).save(*args, **kwargs)

The view that is mapped as the default route in the urls.py file:

import logging
import os

from django.conf import settings
from django.shortcuts import render_to_response

from api.models import IndexPage  # adjust to wherever IndexPage actually lives

logger = logging.getLogger(__name__)


def static_index_view(request):
    hash_id = request.GET.get('hash_id', '')

    # Default to the index marked as current, unless a specific hash is requested.
    index = IndexPage.objects.get(is_current=True)

    if hash_id:
        try:
            index = IndexPage.objects.get(hash=hash_id)
        except IndexPage.DoesNotExist:
            pass

    logger.debug("Using index: %s" % index.hash)
    path = os.path.normpath(os.path.join(settings.BASE_DIR, '../static'))
    logger.debug(path)

    return render_to_response(index.index_name, dirs=[path, ])
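
For reference, the view is wired up as the default route in urls.py in the usual way. This is just a sketch, assuming the app is called api and a patterns()-based Django version; newer releases take a plain list of url() entries instead:

from django.conf.urls import patterns, url

from api.views import static_index_view

urlpatterns = patterns('',
    url(r'^$', static_index_view, name='static_index'),
    # ... API routes for the Ember app go here ...
)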

The Django deployment stayed pretty much the same, minus a few libraries that were no longer needed and a few paths that changed. I’ve added a few management commands to handle adding, listing and setting the current index page (really basic stuff; a sketch of one of them follows below).
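
These commands are thin wrappers around the IndexPage model. As a rough sketch of one of them, assuming the model lives in api.models and an older Django that still reads positional arguments from the args string (newer versions declare them with add_arguments()), indexsetcur could look like this:

# management/commands/indexsetcur.py -- hypothetical sketch, not the original code
from django.core.management.base import BaseCommand, CommandError

from api.models import IndexPage  # assumed location of the model


class Command(BaseCommand):
    args = '<hash>'
    help = 'Marks the index page identified by <hash> as the current one.'

    def handle(self, *args, **options):
        if not args:
            raise CommandError('A hash is required.')
        try:
            index = IndexPage.objects.get(hash=args[0])
        except IndexPage.DoesNotExist:
            raise CommandError('No index page found for hash %s' % args[0])
        index.is_current = True
        index.save()  # the model's save() demotes the previously current index
        self.stdout.write('Index %s is now current.\n' % index.hash)

indexadd and indexlist follow the same pattern, creating a new IndexPage record and printing the stored ones respectively.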

Ember

This is the easiest part: just build and upload to the server.

ember build --environment=production

After ember build finishes, copy the contents of dist to your server’s static root. I’ve automated that using flightplan, which works like Fabric but is all JavaScript. One quirk of flightplan is that it doesn’t ask for passwords when doing ssh or sudo; not really a bad thing, just some extra configuration needed. My flightplan config is something like this:

var plan = require('flightplan');

plan.target('staging', {
  host: '10.1.1.50',
  username: 'stage',
  agent: process.env.SSH_AUTH_SOCK
});

var digest, archiveName;

plan.local(['deploy', 'build'], function(local) {
  local.log("Removing previous build.");
  local.rm('-rf dist');

  local.log("Building app...");
  local.exec("ember build dist --environment=production")

  digest = local.exec("shasum dist/index.html | cut -c 1-8").stdout.replace(/[\n\t\r]/g, "");
  local.mv("dist/index.html dist/index."+ digest +".html");

  archiveName = "my-project." + digest + ".tar.gz";

  local.with("cd dist", function() {
    local.tar('-czvf ../' + archiveName + ' *')
  });

});

plan.local(['deploy', 'upload'], function(local) {
  local.log("Uploading app...");

  var input = local.prompt('Ready for deploying to ' + plan.target.destination + '? [yes]');
  if (input.indexOf('yes') === -1) {
    local.abort('user canceled flight'); // this will stop the flightplan right away.
  }

  local.log("Current digest: " + digest);
  local.transfer(archiveName, '/opt/django/apps/my-project/static');
});

plan.remote(['deploy', 'extract'], function(remote) {
  remote.with('cd apps/my-project/static', function() {
    remote.tar('-xzf '+ archiveName);
  });
});

plan.remote(['deploy', 'config'], function(remote) {
  remote.log("Configure app... digest: " + digest);

  remote.with('cd apps/my-project', function() {
    remote.with('source bin/activate', function() {
      remote.exec('./my_project/manage.py indexadd '+ digest + ' index.' + digest + '.html');
      remote.log('Added new index.');

      var input = remote.prompt('Make this release current? [yes]');
      if (input.indexOf('yes') === 0) {
        remote.exec('./my_project/manage.py indexsetcur '+ digest);
      }
    });
  });
});

plan.remote('list-indexes', function(remote) {
  remote.with('cd apps/my-project', function() {
    remote.with('source bin/activate', function() {
      remote.exec('./my_project/manage.py indexlist');
    })
  });
});

Nginx Configuration

Nginx gave me a few headaches, because I was also using the PushStream module, but in the end I found a good enough solution for running Django and serving the Ember files statically. My config is the following, and it’s pretty basic:

upstream my_project_backend {
    server unix:/opt/django/run/my_project.sock fail_timeout=0;
}

server {
  # listen 80 default deferred; # for Linux
  # listen 80 default accept_filter=httpready; # for FreeBSD
  listen 80;

  client_max_body_size 4G;
  server_name my-project.local;

  # ~2 seconds is often enough for most folks to parse HTML/CSS and
  # retrieve needed images/icons/frames, connections are cheap in
  # nginx so increasing this is generally safe...
  keepalive_timeout 5;

  # path for static files
  root /opt/django/apps/my-project/static;

  access_log /opt/django/logs/nginx/my_project_access.log;
  error_log  /opt/django/logs/nginx/my_project_error.log;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # enable this if and only if you use HTTPS, this helps the backend
    # set the proper protocol for doing redirects:
    # proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # proxy_buffering off;

    # Try to serve static files from nginx, no point in making the
    # Django *application* server serve static files.
    if (!-f $request_filename) {
      proxy_pass http://my_project_backend;
      break;
    }
  }
}

After all this was in place, refreshing the page was giving me a 404: Django was trying to find a view for the current URL, but the route only existed in Ember. To fix that, I added the following to my urls.py:

from django.conf.urls import handler404
from api.views import static_index_view

handler404 = static_index_view

And that fixed my issue. It’s not the most elegant solution, but it works!

Openswan tunnel to Juniper SSG

Just a small gathering of information on how I’ve set up a tunnel between a CentOS 6.3 box, running Openswan with the NETKEY IPsec stack, and a Juniper SSG. Before we start configuring, let’s define the IPs, networks and addresses (by the way, these are not the real IPs). We are linking two networks with this tunnel, not doing a network-to-client configuration.

On the Centos side we have:

  • Name: Office City A
  • External IP: 200.201.202.203
  • Internal Network: 10.20.20.0/24
  • Internal Gateway IP: 10.20.20.254

On the Juniper SSG we have:

  • Name: Office City B
  • External IP: 100.101.102.103
  • Internal Network: 10.20.10.0/24
  • Internal Gateway IP: 10.20.10.254

Pre-shared Key: my-long-and-secret-key

CentOS Side

First we need to install and configure the CentOS box. That should be fairly simple; start by installing Openswan:

yum install openswan

Now we have to edit /etc/ipsec.conf. The default config should be fine for us, but we have to make sure that the line including the “.conf” files stored under /etc/ipsec.d/ is uncommented. Your config file should look something like this:

# /etc/ipsec.conf - Openswan IPsec configuration file
#
# Manual:     ipsec.conf.5
#
# Please place your own config files in /etc/ipsec.d/ ending in .conf

version	2.0	# conforms to second version of ipsec.conf specification

# basic configuration
config setup
	# Debug-logging controls: "none" for (almost) none, "all" for lots.
	# klipsdebug=none
	# plutodebug="control parsing"
	# For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
	protostack=netkey
	nat_traversal=yes
	virtual_private=
	oe=off
	# Enable this if you see "failed to find any available worker"
	# nhelpers=0

#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf

You also need to make sure that the file /etc/ipsec.secrets includes all “.secrets” files under /etc/ipsec.d/. It should read like:

include /etc/ipsec.d/*.secrets

Now we have to create the config file for our tunnel; let’s name the connection “office_b_tun”. The new config will be stored in /etc/ipsec.d/office_b_tun.conf. The content of the file should be:

conn office_b_tun
	ike=3des-md5
	esp=3des-md5
	authby=secret
	keyingtries=0
	left=100.101.102.103
	leftsubnet=10.20.10.0/24
	leftnexthop=%defaultroute
	right=200.201.202.203
	rightsubnet=10.20.20.0/24
	rightnexthop=%defaultroute
	compress=no
	auto=start

We need to set the PSK for the tunnel, so create the file /etc/ipsec.d/office_b_tun.secrets with the following content:

100.101.102.103 200.201.202.203: PSK "my-long-and-secret-key"

As I don’t have two NICs on this server, I’ve set up an alias for eth0. This is not needed if you have two NICs. Edit /etc/sysconfig/network-scripts/ifcfg-eth0:0:

DEVICE=eth0:0
ONBOOT=yes
NETWORK=10.20.0.0
NETMASK=255.255.0.0
IPADDR=10.20.20.254

Restart your network, and then start IPsec:

/etc/init.d/ipsec start

Finish configuring the Juniper, and then check the output of ipsec auto --status; it should contain lines like “IPsec SA established” and “ISAKMP SA established”. Verify your routes and test the tunnel.

Juniper SSG

We can configure the Juniper using either the WebUI or the CLI, so I’ll first describe how to configure it using the WebUI, and later I’ll show the CLI config lines. I’m doing a route-based VPN config as it adds more flexibility to my setup; you can use a policy-based VPN if you wish, but I’m not covering that here (see a sample config here).

Some extra info we need on the Juniper side: I have a VPN zone bound to trust-vr. I recommend that you create a dedicated zone for your VPN tunnels, as it makes it easier to add traffic policies to it later.

Tunnel Interface

Go to Network -> Interface, select “Tunnel IF” and click the New button. Select an unused tunnel number; mine is 1. Also, make sure you select “vpn” as the Zone (VR) and that it’s an unnumbered interface. Click Ok. That’s it for the tunnel interface.

VPN AutoKey Gateway

Now we need to set up the VPN gateway; for that, go to VPN -> AutoKey Advanced -> Gateway and click the New button. Name the gateway “gw_to_office_a”. Make sure “Static IP Address” is selected, and fill in the IPv4/v6 Address/Hostname field. The remote IP address is 200.201.202.203.

Click on the Advanced button. On that page, enter the pre-shared key “my-long-and-secret-key”. Select the correct outgoing interface; mine is “Ethernet0/0”.

In the Security Level field, select “pre-g2-3des-md5”. It’s really important that you get this right!

Make sure the Mode (Initiator) is set to Main. That’s it; just click Ok to save the gateway configuration.

VPN AutoKey IKE

Time to set up the AutoKey IKE VPN, so go to VPN -> AutoKey IKE and click the New button. I’ll name this VPN “vpn_to_office_a”. Make sure you select “gw_to_office_a” as the predefined gateway. Click on Advanced.

On the advanced configuration page, set the security level to “g2-esp-3des-md5”. That’s really important; otherwise the tunnel will not work.

Bind the VPN to tunnel interface “tunnel.1”. Check “Proxy-ID Check”, “VPN Monitor”, “Optimize” and “Rekey”. As the source interface, select your external port; mine is “Ethernet0/0”. Fill in the destination IP with the remote internal gateway address, 10.20.20.254.

Click Ok to save the tunnel.

Proxy-ID

We need to set up the Proxy-ID for the tunnel: go to the AutoKey IKE listing and click on Proxy ID for the “vpn_to_office_a” tunnel. Add the following:

Local: 10.20.10.0/24
Remote: 10.20.20.0/24
Service: ANY

Click on New, and that’s it.

Route

We need to set a static route to the CentOS network, as we’re not running a dynamic routing daemon (such as RIP, OSPF or BGP). Go to Network -> Routing -> Destination. Select “trust-vr” and click New.

The route we want to add is 10.20.20.0/24, using the interface “tunnel.1” as the gateway, with the gateway address 200.201.202.203. Make the route permanent, set the preference to 20, and add the description “office A network”.

Click Ok to save it.

Policy

As I’m connecting two trusted networks, I’ll allow any traffic incoming from VPN to Trusted and from Trusted to VPN. You can, and should, set tighter policies as you see fit.

CLI

You can also configure the VPN using the CLI; use the following commands, adapting as needed.

set zone id 100 "vpn"
set interface "tunnel.1" zone "vpn"
set interface tunnel.1 ip unnumbered interface ethernet0/0
set ike gateway "gw_to_office_a" address 200.201.202.203 Main outgoing-interface "ethernet0/0" preshare "my-long-and-secret-key" proposal "pre-g2-3des-md5"
set ike respond-bad-spi 1
set ike ikev2 ike-sa-soft-lifetime 60
unset ike ikeid-enumeration
unset ike dos-protection
unset ipsec access-session enable
set ipsec access-session maximum 5000
set ipsec access-session upper-threshold 0
set ipsec access-session lower-threshold 0
set ipsec access-session dead-p2-sa-timeout 0
unset ipsec access-session log-error
unset ipsec access-session info-exch-connected
unset ipsec access-session use-error-log
set vpn "vpn_to_office_a" gateway "gw_to_office_a" no-replay tunnel idletime 0 proposal "g2-esp-3des-md5" 
set vpn "vpn_to_office_a" monitor source-interface ethernet0/0 destination-ip 10.20.20.254 optimized rekey
set vpn "vpn_to_office_a" id 0xa bind interface tunnel.1
unset interface tunnel.1 acvpn-dynamic-routing
set url protocol websense
exit
set vpn "vpn_to_office_a" proxy-id check
set vpn "vpn_to_office_a" proxy-id local-ip 10.20.10.0/24 remote-ip 10.20.20.0/24 "ANY" 
set route 10.20.20.0/24 interface tunnel.1 gateway 200.201.202.203 preference 20 permanent description "office A network"

Testing

On the Office A network, try to ping a machine on the Office B network, something like:

ping 10.20.10.254

On the Office B network, try to ping a machine on the Office A network, something like:

ping 10.20.20.254

If you get ping replies, everything is up and running! Have fun!

Asterisk, OpenVPN and QoS

Installing a VoIP system is an easy task nowadays: install Asterisk, add a few SIP clients and you have an ‘instant’ telephone system. But your system will not be as reliable as the one offered by a telecom company. Why? Quality of Service, or QoS for short.

Telecom companies use sophisticated hierarchies of systems to deliver the needed QoS. Backbones use SDH systems, where one can guarantee the bandwidth and throughput for any kind of data. So if you specify that a voice packet should be delivered in 10 ms, it will get delivered in that time span. When it comes to IP networks, however, there is no guarantee that your packet will be delivered in a given time frame, which is fine when you’re downloading files, opening web pages, and so on. But when it comes to voice and video streaming, it’s a real mess. So you must create some QoS rules for your packets.

Asterisk has a really nice feature for aggregating multiple servers so that they work as a single phone network. The only problem is that this feature is not really secure; to mitigate that, one can always create VPNs (Virtual Private Networks). But how does that impact your QoS solution? Well, it depends on what kind of VPN you use and how you configure it; with OpenVPN it’s quite simple.

Just as a reminder, for the rest of the article when I say QoS, I really mean the QoS of the gateway of your network. The gateway is the one place that will enforce the needed quality of service (okay, on bigger networks you will have multiple routers which will need to be configured for QoS too).

Don’t get too excited about QoS: even if you do everything by the book, that doesn’t mean your ISP will honour the TOS field the same way you do. By that I mean you won’t solve any problem with QoS if the problem is not in how you route packets to the internet. If you have full control of your link and all the routers between your networks, you’re a lucky guy!

The Network

We have some computers, servers, and IP phones on each network. The OpenVPN server doesn’t need to be the same machine as the gateway, as long as you forward the correct ports to that server. Make sure you also add the correct gateway route for the packets that should be tunneled (i.e. packets destined for the 10.2.1.0 network that originate on the 10.1.1.0 network). In the image, the tunnel is represented by the red lines.

[Image: A sample network using Asterisk and OpenVPN]

OpenVPN

I don’t intend to give a full how-to on OpenVPN, just a basic configuration, with a highlight on how to get QoS for the tunneled packets. Besides that, configuring OpenVPN is really simple.

First you have to create your own Certificate Authority (CA). You can use something like tinyca or minica, or the command line version, described here. Remember that you will need one certificate per client. After that it’s just a matter of writing a really simple text file. Below is a sample configuration, known to work well for integrating two Asterisk servers.

Server

# OpenVPN server
# Listen to local ip address only
local 10.1.1.2

# Should be exported on the router
port 1194
proto udp
dev tun

# SSL/TLS CA and keys
ca ca.crt
cert server.network.crt
key server.network.key

# Diffie Hellman Parameters
dh dh1024.pem

# Server tunnel
server 10.3.1.0 255.255.255.0
ifconfig-pool-persist ipp.txt

push "route 10.1.1.0 255.255.255.0"
route 10.2.1.0 255.255.255.0
client-config-dir client-configs
keepalive 10 120

# Drop privileges
user nobody
group nogroup

# Persist
persist-key
persist-tun

# Logs
verb 5
status /var/log/openvpn.log

# Fork to the background
daemon

Client

The passtos line in the client config below is the one that makes QoS work for the encrypted packets. If you think that passing the TOS (Type of Service) field along is a security risk, don’t panic: just create another tunnel for your sensitive data, which is really easy to do with OpenVPN.

# OpenVPN client
client

# Interface for tunnel
# Protocol and Port
dev tun0
proto udp
port 1194

# SSL/TLS CA and keys
ca /etc/openvpn/certs/ca.crt
cert /etc/openvpn/certs/remote1.mynetwork.crt
key /etc/openvpn/keys/remote1.mynetwork.key

# Symmetric cipher
cipher BF-CBC

# Remote server to connect to. Can be domain name or IP address.
remote remote1.mynetwork.com

# Check if the tunnel went down and restart it. 
# 10 is the ping interval number and 120 is the timeout to restart.
keepalive 10 120
route 10.1.1.0 255.255.255.0

# This is needed so we can apply QoS to the tunnel
passtos

# Drop privileges
user nobody
group nogroup

# Use a persistent key and tunnel interface.
persist-tun
persist-key

# Log to file instead of syslog
log-append /var/log/openvpn.log
verb 4

# Fork to the background
daemon

If you can ping the remote server, using the internal IP address, then your tunnel is up and running.

Asterisk

I assume that you already know how to configure an Asterisk server; if you don’t, you can follow my guide (it’s a bit outdated, I might update it soon).

Getting IAX2 working is really simple too, so I won’t describe it. If you’re using FreePBX, you can follow this guide. Remember to use the internal IPs from your network.

Make sure your Asterisk installation is tagging the packets with the correct TOS. On my FreePBX install, the correct configuration was already set in /etc/asterisk/sip_general_additional.conf. Check your Asterisk configuration for the following lines:

tos_sip=cs3
tos_audio=ef
tos_video=af41

This tags your voice data as Expedited Forwarding, SIP signalling packets as Class Selector 3, and video data as Assured Forwarding class 4 with drop precedence 1. More on what all this means shortly.

QoS

Choosing the right tools for your specific QoS application is a hard problem. There are traffic shaping algorithms, congestion avoidance mechanisms and quite a few packet scheduling algorithms. I’m not an expert on how all these different algorithms work, or on what the best solution for your case is; I’m just putting together some information that I think is relevant. One can always read the QoS RFCs.

First things first: the TOS field mentioned above has been superseded by DSCP (Differentiated Services Code Point), which is specified for both IPv4 and IPv6 (RFC 2474 is the reference) and tries to maintain backward compatibility with the TOS field. Most networks use the following traffic classes:

  • Default PHB — which is typically best-effort traffic
  • Expedited Forwarding (EF) PHB — dedicated to low-loss, low-latency traffic
  • Assured Forwarding (AF) PHB — gives assurance of delivery under prescribed conditions
  • Class Selector PHBs — which maintain backward compatibility with the IP Precedence field.

That is what EF, CS3 and AF41 mean: a common way of signalling how important (or not) a packet is. But just tagging your packets won’t get you far. So far you’ve got Asterisk correctly tagging the packets and the tunnel preserving the tags; time to add the magic that classifies and prioritises them!

Linux Traffic Control

Linux has the tc tool for configuring and setting up a QoS policy. With it you can configure different kinds of queueing disciplines and classes. These queues act directly on network devices, so you have to configure them per device. In the example below we have an ADSL modem on the ppp0 device.

tc lets you configure classful and classless queueing disciplines, each supporting different scheduling algorithms. We will use Hierarchical Token Bucket (HTB) as the classful discipline, with one class per traffic type (the packets tagged by Asterisk end up in the high-priority class), and Stochastic Fairness Queueing (SFQ) attached to some of the leaf classes. After the queues are configured, you have to tell iptables to put packets into them, which basically means setting up some CLASSIFY targets. You could also add MARK rules to tag your packets, but we don’t need them: Asterisk is doing that job for us.

First we configure the maximum bandwidth allowed; in this case we have a 1000 kbps uplink to which we want to apply a QoS policy. The following table illustrates the QoS policy used for the network. As we are on an asymmetric connection, we limit the upload bandwidth to 95% of the nominal speed (950 kbps).

Class       Nominal rate   Maximum rate   Priority   Packets
Real time   47.5 kbps      95 kbps        0          ICMP, SYN, RST, ACK
High        522.5 kbps     950 kbps       1          EF and CS3 packets
Regular     190 kbps       950 kbps       2          Regular traffic, HTTP, SSH, etc.
Bulk        190 kbps       950 kbps       3          Everything else (default class)

QoS Policy

With the queues in place, you just have to add the necessary iptables rules. The rules classify the packets that carry the DSCP tags into the same classes that you defined with tc. That’s it: your QoS is now in place. Just make sure you add and remove the rules according to the status of your link (in this case ppp0). The script below is called from /etc/ppp/ip-up.d and /etc/ppp/ip-down.d, with the start and stop arguments respectively.

#!/bin/bash
# 20110916 - Leonardo Santos <leonardo at aligera dot com dot br>
# Initial version. It only uses the iptables target CLASSIFY.
# For the QoS to work, Asterisk has to tag the packets with the right DSCP.
# The OpenVPN tunnel must be passing along the DSCP field, and not blanking it out.
#
PATH=$PATH:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

# uplink in kbps
UPLINK=1000
DEV=ppp0

CEIL=$(($UPLINK*95/100))

CLASS_RT="10"
CLASS_HIGH="11"
CLASS_REG="12"
CLASS_BULK="13"

do_iptables() {
        iptables -$1 POSTROUTING -t mangle -p icmp -j CLASSIFY --set-class 1:$CLASS_RT
        iptables -$1 POSTROUTING -t mangle -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j CLASSIFY --set-class 1:$CLASS_RT
        iptables -$1 POSTROUTING -t mangle -p udp -m dscp --dscp-class cs3 -j CLASSIFY --set-class 1:$CLASS_HIGH
        iptables -$1 POSTROUTING -t mangle -p udp -m dscp --dscp-class ef -j CLASSIFY --set-class 1:$CLASS_HIGH
}
add_rules() {
        tc qdisc add dev $DEV root handle 1: htb default $CLASS_BULK
        tc class add dev $DEV parent 1: classid 1:1 htb rate ${CEIL}kbit ceil ${CEIL}kbit
        tc class add dev $DEV parent 1:1 classid 1:$CLASS_RT   htb rate $((1*$CEIL/20))kbit  ceil $(($CEIL/10))kbit prio 0
        tc class add dev $DEV parent 1:1 classid 1:$CLASS_HIGH htb rate $((11*$CEIL/20))kbit ceil ${CEIL}kbit       prio 1
        tc class add dev $DEV parent 1:1 classid 1:$CLASS_REG  htb rate $((4*$CEIL/20))kbit  ceil ${CEIL}kbit       prio 2
        tc class add dev $DEV parent 1:1 classid 1:$CLASS_BULK htb rate $((4*$CEIL/20))kbit  ceil ${CEIL}kbit       prio 3
        tc qdisc add dev $DEV parent 1:$CLASS_HIGH handle 120: sfq perturb 10
        tc qdisc add dev $DEV parent 1:$CLASS_BULK handle 130: sfq perturb 10
        do_iptables A
}
del_rules() {
        tc qdisc del dev $DEV root
        do_iptables D
}
show_status() {
        tc -s -d class show dev $DEV
        tc -s -d qdisc show dev $DEV
}
case $1 in
        start)
                add_rules
        ;;
        stop)
                del_rules
        ;;
        status)
                show_status
        ;;
        restart)
                del_rules
                add_rules
        ;;
        *)
                echo "Usage: $0 {start|stop|restart|status}"
                exit 1
        ;;
esac

I would like to thank Leonardo Santos for putting the script together and letting me publish it, and for being a good friend.

From CVS to Git to Gitorious!

Migrating from CVS to Git

Last week I volunteered to migrate some ~300 repositories from CVS to git. Not an easy task at first sight, but with the right tools at hand it becomes manageable. Installing cvs2git and following its documentation will get you started. On Ubuntu that is as simple as:

sudo apt-get install cvs2svn

I know it’s weird, but cvs2git is bundled with cvs2svn… go figure.

But migrating hundreds of repositories isn’t a task to do manually, so I created a script to automate the process. As I had access to the server files, migrating was easier than I expected. My directory structure was something like:

  • cvs_project_1
    • repo_1
    • repo_2
    • repo_3
  • cvs_project_2

I’ve decided to migrate one project at a time, making it straightforward to verify each repo. My script is the following; bear in mind that it may have some flaws, but it worked for me. Test it before erasing your old CVS data.

#!/bin/bash
# Copyright (C) Pedro Kiefer

for f in `cat repo_list`;
do
	FOP=${f/\//\-}
	echo "===== Creating git repository for ${f/\//\-/}/";
	sed -e "s/__REPO__/${f/\//\\/}/g" my-default.options > $FOP.options;
	cvs2git --options=$FOP.options
	rm $FOP.options
	mkdir $FOP.git
	cd $FOP.git
	git init --bare
	cat ../cvs2svn-tmp/git-blob.dat ../cvs2svn-tmp/git-dump.dat | git fast-import
	cd ..
done

The script takes a repo_list file with a list of paths to the CVS repositories. Creating this list is quite easy; something like this should work. Be sure to remove CVSROOT and the root directory from the list.

find cvs_project_1/ -maxdepth 1 -type d | sort > repo_list
vim repo_list

The other file the script needs is my-default.options, the configuration file used by cvs2git. Most of the default values are good, but you really want to add a list of CVS committers, so you can map each CVS login to a name and email address (see the sketch after the excerpt below). The other change needed is on the line that sets the repository path: for the script to work, it must be set to __REPO__, like this:

run_options.set_project(
    # The filesystem path to the part of the CVS repository (*not* a
    # CVS working copy) that should be converted.  This may be a
    # subdirectory (i.e., a module) within a larger CVS repository.
    r'__REPO__',
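
For the committer mapping, the stock cvs2git example options file defines an author_transforms dictionary that is handed to the Git output option. It looks roughly like the sketch below; the logins, names and addresses are placeholders, and the exact spot in the file depends on your cvs2git version:

# Maps CVS login names to git author name/email pairs.  In the bundled
# cvs2git-example.options this dictionary is passed to GitOutputOption
# via its author_transforms argument.
author_transforms = {
    'jdoe'    : ('John Doe', 'jdoe@example.com'),
    'asmith'  : ('Alice Smith', 'alice@example.com'),
    # Used for synthesized commits where CVS recorded no author:
    'cvs2git' : ('cvs2git', 'admin@example.com'),
}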

That’s it, just run the script, and voilà, git repositories for all your cvs modules.

From Git to Gitorious

The second part of my task was importing all of those git repositories into my local Gitorious install. Again, doing it manually is not the right way. After asking about it on the Gitorious mailing list and learning some Ruby, I created the little script below. It creates all the repositories for a given project. The projects themselves were created manually on Gitorious, as I had only 6 of them; extending the tool to support creating projects should be easy.

After using the script above, I had the following directory structure:

  • project_1/
    • repo_1.git
    • repo_2.git
    • repo_3.git

The script takes the project name as its argument, which should be equal to the one you created in the Gitorious web interface. It scans the project directory and creates the matching Gitorious repositories, copying the data into each newly created repository. Some regexp magic was added to remove version numbers and give uniform names to the new repositories. You might want to edit this to your taste.

By the way, this is my very first piece of Ruby programming, so don’t expect it to be pretty!

#!/usr/bin/env ruby
# encoding: utf-8
#--
# Copyright (C) Pedro Kiefer
#
# Mass migrate git repositories to gitorious
#
#++

require "/path/to/gitorious/config/environment.rb"
require "optparse"

def new_repos(opts={})
  Repository.new({
    :name => "foo"
    }.merge(opts))
end

current_proj = ARGV[0]

@project = Project.find_by_slug(current_proj)

Dir.chdir(current_proj)
puts Dir.pwd
files = Dir.glob("*.git")

files.each do |f|
  orig_repo = f
  f = f.gsub(/\.git$/, "")
  f = f.gsub(/_/,"-")

  # has version?
  version = f.match(/-([0-9](.[0-9][0-9]*)+)(-)?/)
  f = f.gsub(/-([0-9](.[0-9][0-9]*)+)(-)?/, "")

  desc = "Repository for package #{f.downcase}\n"
  desc << "Package version #{version[1]}\n" if version

  print "Creating repository for package #{f} ... "

  @repo = new_repos(:name => f.downcase, :project => @project, :owner => @project.owner, :user => @project.user, :description => desc)
  @repo.save
  path = @repo.full_repository_path
  Repository.git_backend.create(path)
  Repository.create_git_repository(@repo.real_gitdir)
  @repo.ready = true
  @repo.save

  FileUtils.cp_r(["#{orig_repo}/branches", "#{orig_repo}/info", "#{orig_repo}/objects", "#{orig_repo}/refs"], @repo.full_repository_path)
  puts "Ok!"
end

Asterisk and FreePBX on Ubuntu Server 10.10

This is just a small gathering of commands and best practices for installing Asterisk and FreePBX on Ubuntu. This worked for me; it has some shortcomings but should work in most cases. Feel free to add comments on better ways of installing it.

The following packages will be installed:

  • Asterisk 1.6.2.7
  • FreePBX 2.8.1

I started with a fresh install of Ubuntu Server 10.10, but if you already have it installed, the results should be similar. While installing, I selected the LAMP and SSH services; those are pretty basic services which you will need. If you have just finished a fresh install, or haven’t updated your system in a while, I suggest running the following before continuing with this guide:

sudo apt-get update
sudo apt-get upgrade

Postfix

Although not necessary for running Asterisk and FreePBX, I suggest that you install an MTA. If you think this is unnecessary for your setup, skip to the next section. Postfix is my MTA of choice, so we are going to install it. When prompted about which configuration to use, select “Internet with smarthost” and just confirm the other options.

sudo apt-get install postfix

Okay, Postfix is installed; time to edit the basic configuration. Add or change the following lines in /etc/postfix/main.cf:

relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes

The password for accessing your external relay must be saved to /etc/postfix/sasl_passwd; add the following to this file:

[smtp.gmail.com]:587    user.name@gmail.com:password

Fix the permissions on this file and build the postmap lookup table:

sudo chmod 400 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd

Add the appropriate CA certificate to /etc/postfix/cacert.pem. For Gmail, that’s Thawte Consulting, so add their CA certificate:

cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem

Restart postfix:

sudo /etc/init.d/postfix restart

Avoid sending mail as root

Edit /etc/aliases and add the following:

root: server@domain.tld

Run the newaliases command:

newaliases

Create a /etc/postfix/sender_canonical file mapping user -> email such as:

root            server@domain.tld

Build the lookup table:

sudo postmap hash:/etc/postfix/sender_canonical

Add the following line to /etc/postfix/main.cf:

sender_canonical_maps=hash:/etc/postfix/sender_canonical

Restart postfix:

sudo /etc/init.d/postfix restart

PHP

When you selected the LAMP service during the Ubuntu install, you automatically got PHP5. Now you just have to install some additional packages that didn’t get pulled in; run the following line to install them:

sudo apt-get install php5-gd php-pear php-db sox curl

phpMyAdmin

One might find it useful to have phpMyAdmin installed for managing the MySQL databases used by FreePBX and Asterisk; on Ubuntu it’s available as the phpmyadmin package. If you don’t know what phpMyAdmin is, you can skip to the next section.

Asterisk

Ubuntu 10.10 provides pre-compiled Asterisk packages, and using them is much easier than building your own Asterisk. Run the following to install it and all of its dependencies:

sudo apt-get install asterisk asterisk-mysql asterisk-mp3 asterisk-sounds-extra

DAHDI

This is a really short how-to for configuring DAHDI; it covers just the bare minimum, but it works OK. First of all, load the necessary kernel module; in my case, for a TDM400P, that was the following line:

sudo modprobe wctdm

You might want to check that the module was loaded and that it configured your hardware properly, so run dmesg. If everything is all right, you have to create the DAHDI configuration file. That’s really easy, just run:

sudo dahdi_genconf -vvvv

Warning: be careful when you run this on a production system, as it will overwrite the current DAHDI configuration file.

Edit /etc/dahdi/system.conf and set the correct loadzone and defaultzone for your country code. I like to use vim to edit configuration files, but you can use any text editor.

sudo vim /etc/dahdi/system.conf

Now check that the channels are up and running by running dahdi_cfg:

sudo dahdi_cfg -vvv

Next you have to edit /etc/asterisk/chan_dahdi.conf to configure the channels; this is what Asterisk will see and use to send and receive calls.

Apache

Before running the FreePBX install command, you have to configure your Apache server. I prefer to use virtual hosts, and lately I have adopted the following layout for my server:

  • /var/www/address/conf
  • /var/www/address/public
  • /var/www/address/log

In conf I store the vhost configuration, public holds the publicly accessible files, and log holds the log files. Feel free to follow your own taste when installing web apps. For those who want to stick with the how-to, create the needed directories:

sudo mkdir /var/www/pabx.domain/
sudo mkdir /var/www/pabx.domain/conf
sudo mkdir /var/www/pabx.domain/log
sudo mkdir /var/www/pabx.domain/public

Now create a /var/www/pabx.domain/conf/vhost.conf file:

sudo vim /var/www/pabx.domain/conf/vhost.conf

And paste the following lines, changing them according to your domain.

<VirtualHost *:80>
   ServerName pabx.domain
   ServerAlias pabx.domain

   ServerAdmin admin@domain.tld
   ErrorLog /var/www/pabx.domain/log/error.log
   CustomLog /var/www/pabx.domain/log/access.log combined

   DocumentRoot /var/www/pabx.domain/public
   <Directory /var/www/pabx.domain/public>
       Options Indexes FollowSymLinks MultiViews
       Order allow,deny
       AllowOverride All
       Allow from all
   </Directory>

   <Directory /var/www/pabx.domain/public/admin>
       AuthType Basic
       AuthName "Restricted Area"
       AuthUserFile /etc/apache2/freepbx-passwd
       Require user admin
   </Directory>
</VirtualHost>

With the file created, link it into Apache’s sites-available directory and enable it in sites-enabled:

sudo ln -s /var/www/pabx.domain/conf/vhost.conf /etc/apache2/sites-available/pabx.domain
cd /etc/apache2/sites-enabled/
sudo ln -s ../sites-available/pabx.domain

Now create an htpasswd file to protect access to FreePBX:

sudo htpasswd -c /etc/apache2/freepbx-passwd admin

And finally, restart apache.

sudo /etc/init.d/apache2 restart

FreePBX

Your Asterisk install should be working by now, so it’s time to install a nice web user interface. Ubuntu doesn’t provide a package for FreePBX, so grab the latest stable source code from the FreePBX site:

cd /tmp
wget http://mirror.freepbx.org/freepbx-2.8.1.tar.gz
cd /usr/src
sudo tar xvzf /tmp/freepbx-2.8.1.tar.gz
cd freepbx-2.8.1/

You can equally well extract the tarball in your home directory; it doesn’t make any difference. Now it’s time to create the databases and the user used to access them, and to populate the basic tables. The following creates and imports the basic tables into the asterisk and asteriskcdrdb databases; run it from the freshly extracted directory:

mysqladmin create asterisk -u root -p
mysqladmin create asteriskcdrdb -u root -p
mysql -u root -p asterisk < SQL/newinstall.sql
mysql -u root -p asteriskcdrdb < SQL/cdr_mysql_table.sql

With the tables in place, it's time to create the user with privileges to access and edit them. Open a mysql prompt with:

mysql -u root -p

At the prompt, run the following queries:

GRANT ALL PRIVILEGES ON asterisk.* TO asterisk@localhost IDENTIFIED BY 'badasspassword';
GRANT ALL PRIVILEGES ON asteriskcdrdb.* TO  asterisk@localhost IDENTIFIED BY 'badasspassword';
flush privileges;
quit;

Don't forget to change the password!

Before running the install command, make a copy of /etc/asterisk/modules.conf. FreePBX rewrites this file and trashes the Asterisk installation: if you restart Asterisk after installing FreePBX, it dies with no error message.

sudo cp /etc/asterisk/modules.conf ~/asterisk-modules.conf

OK, we are ready to install FreePBX to /var/www/pabx.domain/public:

sudo ./install_amp

The install script will ask for some configuration data, e.g. where to install FreePBX (/var/www/pabx.domain/public), the SQL password, the Asterisk password, etc. Take note of the passwords you used; you might need them later.

The output from the install script looks something like this:

...
Enter your USERNAME to connect to the 'asterisk' database:
 [asteriskuser] asterisk
Enter your PASSWORD to connect to the 'asterisk' database:
 [amp109] badasspassword
Enter the hostname of the 'asterisk' database:
 [localhost] 
Enter a USERNAME to connect to the Asterisk Manager interface:
 [admin] 
Enter a PASSWORD to connect to the Asterisk Manager interface:
 [amp111] 
Enter the path to use for your AMP web root:
 [/var/www/html] 
/var/www/pabx.domain/public 
Enter the IP ADDRESS or hostname used to access the AMP web-admin:
 [xx.xx.xx.xx] pabx.domain
Enter a PASSWORD to perform call transfers with the Flash Operator Panel:
 [passw0rd] password
Use simple Extensions [extensions] admin or separate Devices and Users [deviceanduser]?
 [extensions] 
Enter directory in which to store AMP executable scripts:
 [/var/lib/asterisk/bin] 
...

Restore the modules.conf file you backed up before installing FreePBX:

sudo cp ~/asterisk-modules.conf /etc/asterisk/modules.conf

Apache runs as www-data and Asterisk runs as the asterisk user, so we have to change some permissions to make both programs work together. First, add www-data to the asterisk group:

sudo adduser www-data asterisk

Fix the users and groups amportal runs with by adding these lines to the end of /etc/amportal.conf:

AMPASTERISKUSER=www-data
AMPASTERISKGROUP=asterisk
AMPASTERISKWEBUSER=www-data
AMPASTERISKWEBGROUP=asterisk

With everything in place, it’s time to start amportal:

sudo amportal start

Open your web browser, go to http://pabx.domain/ and you will be greeted by the FreePBX site. I strongly suggest you upgrade and install the FreePBX modules you will need: go to Modules Admin and click on “Check for online updates”.

Start asterisk with amportal

Before we finish, let’s make the amportal script manage Asterisk and run it through the safe_asterisk script. For that, we have to remove asterisk from rc.d:

sudo update-rc.d -f asterisk remove

Now edit safe_asterisk to make sure it runs in the background, by setting the BACKGROUND variable to 1:

sudo sed -e s/BACKGROUND=0/BACKGROUND=1/ -i /usr/sbin/safe_asterisk

We have to start amportal after booting, so we call amportal start from /etc/rc.local. Edit /etc/rc.local and add the following line before the exit 0 line:

/usr/local/sbin/amportal start

Reboot your machine, and check that everything is still working. Have fun!