Ember and Django Deployment

I’m working on a small project. It started as a simple showcase using only Django, but soon enough I needed more interaction and started a simple front-end in Ember. So now I have two projects, a front-end and a back-end. Django actually morphed into being only an API endpoint, keeping my data in a database and handling a few other things.

So I need to deploy this on a really basic server, no fancy clouds or CDN stuff. My first thought was to deploy to two different servers, but using the same instance of nginx. But then I would have to handle CORS issues and whatnot.

That was kind of bothering me… then I found Luke Melia’s talk on Lightning Fast Deployment of Your Rails-backed JavaScript app, and it just clicked: problem solved. Applying his ideas to Django was really straightforward. I just needed a view, a simple model for storing the current index, and a static folder to store all of this. Nginx serves all the static files, and Django just needs to serve index.html, enabling me to use its templating system.

Django

Model for handling the current page in use:

from django.db import models


class IndexPage(models.Model):

    hash = models.CharField(max_length=10)
    index_name = models.CharField(max_length=40)
    is_current = models.BooleanField(default=False)

    def save(self, *args, **kwargs):
        # Only one index page can be current at a time; demote any other
        # record marked as current before saving this one.
        if self.is_current:
            IndexPage.objects.filter(is_current=True).update(is_current=False)
        super(IndexPage, self).save(*args, **kwargs)

The view that is mapped as the default in the urls.py file:

import logging
import os

from django.conf import settings
from django.shortcuts import render_to_response

from .models import IndexPage

logger = logging.getLogger(__name__)


def static_index_view(request):
    # An optional ?hash_id=… lets you preview a specific (non-current) release.
    hash_id = request.GET.get('hash_id', '')

    index = IndexPage.objects.get(is_current=True)

    if hash_id:
        try:
            index = IndexPage.objects.get(hash=hash_id)
        except IndexPage.DoesNotExist:
            pass

    logger.debug("Using index: %s" % index.hash)
    path = os.path.normpath(os.path.join(settings.BASE_DIR, '../static'))
    logger.debug(path)

    return render_to_response(index.index_name, dirs=[path, ])

Django deployment stayed pretty much the same, minus a few extra libraries that weren’t needed anymore and a few paths that changed. I’ve added a few management commands to handle adding, listing and setting the current index page, really basic stuff. One of them is sketched below.
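For reference, here is a minimal sketch of what the indexadd command could look like. The command names come from the deployment plan further down; this particular implementation is illustrative, assuming the pre-Django-1.8 positional-args style and that the commands live in the api app:

# api/management/commands/indexadd.py (path assumed)
from django.core.management.base import BaseCommand

from api.models import IndexPage


class Command(BaseCommand):
    args = '<hash> <index_name>'
    help = 'Register a new index page by hash and file name.'

    def handle(self, *args, **options):
        # e.g. manage.py indexadd deadbeef index.deadbeef.html
        hash_id, index_name = args
        IndexPage.objects.create(hash=hash_id, index_name=index_name)
        self.stdout.write('Added index %s' % hash_id)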

Ember

The easiest part of the lot: just build and upload to the server.

ember build --environment=production

Copy the contents to your server’s static root after ember build finishes. I’ve automated that using flightplan, which works like Fabric, but it’s all JavaScript. One issue with flightplan is that it doesn’t ask for passwords when doing ssh or sudo. That’s not really a bad thing, just extra configuration needed. My flightplan config is something like this:

var plan = require('flightplan');

plan.target('staging', {
  host: '10.1.1.50',
  username: 'stage',
  agent: process.env.SSH_AUTH_SOCK
});

var digest, archiveName;

plan.local(['deploy', 'build'], function(local) {
  local.log("Removing previous build.");
  local.rm('-rf dist');

  local.log("Building app...");
  local.exec("ember build dist --environment=production")

  digest = local.exec("shasum dist/index.html | cut -c 1-8").stdout.replace(/[\n\t\r]/g, "");
  local.mv("dist/index.html dist/index."+ digest +".html");

  archiveName = "my-project." + digest + ".tar.gz";

  local.with("cd dist", function() {
    local.tar('-czvf ../' + archiveName + ' *');
  });

});

plan.local(['deploy', 'upload'], function(local) {
  local.log("Uploading app...");

  var input = local.prompt('Ready for deploying to ' + plan.target.destination + '? [yes]');
  if (input.indexOf('yes') === -1) {
    local.abort('user canceled flight'); // this will stop the flightplan right away.
  }

  local.log("Current digest: " + digest);
  local.transfer(archiveName, '/opt/django/apps/my-project/static');
});

plan.remote(['deploy', 'extract'], function(remote) {
  remote.with('cd apps/my-project/static', function() {
    remote.tar('-xzf '+ archiveName);
  });
});

plan.remote(['deploy', 'config'], function(remote) {
  remote.log("Configure app... digest: " + digest);

  remote.with('cd apps/my-project', function() {
    remote.with('source bin/activate', function() {
      remote.exec('./my_project/manage.py indexadd ' + digest + ' index.' + digest + '.html');
      remote.log('Added new index.');

      var input = remote.prompt('Make this release current? [yes]');
      if (input.indexOf('yes') === 0) {
        remote.exec('./my_project/manage.py indexsetcur '+ digest);
      }
    });
  });
});

plan.remote('list-indexes', function(remote) {
  remote.with('cd apps/my-project', function() {
    remote.with('source bin/activate', function() {
      remote.exec('./my_project/manage.py indexlist');
    })
  });
});
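With the plan above saved as flyfile.js (flightplan’s default file name, if memory serves), the whole pipeline runs through flightplan’s fly CLI, invoked as task:target:

fly deploy:staging
fly list-indexes:staging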

Nginx Configuration

Nginx gave me a few headaches, because I was also using the PushStream module, but in the end I found a good enough solution for running Django and statically serving the Ember files side by side. My config is the following, and it’s pretty basic:

upstream my_project_backend {
    server unix:/opt/django/run/my_project.sock fail_timeout=0;
}

server {
  # listen 80 default deferred; # for Linux
  # listen 80 default accept_filter=httpready; # for FreeBSD
  listen 80;

  client_max_body_size 4G;
  server_name my-project.local;

  # ~2 seconds is often enough for most folks to parse HTML/CSS and
  # retrieve needed images/icons/frames, connections are cheap in
  # nginx so increasing this is generally safe...
  keepalive_timeout 5;

  # path for static files
  root /opt/django/apps/my-project/static;

  access_log /opt/django/logs/nginx/my_project_access.log;
  error_log  /opt/django/logs/nginx/my_project_error.log;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # enable this if and only if you use HTTPS; it lets the app behind
    # the proxy know the original protocol when doing redirects:
    # proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # proxy_buffering off;

    # Try to serve static files from nginx; no point in making an
    # *application* server like gunicorn/uWSGI serve static files.
    if (!-f $request_filename) {
      proxy_pass http://my_project_backend;
      break;
    }
  }
}
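As an aside, the nginx documentation discourages if inside location blocks; an equivalent setup using try_files (a sketch I haven’t battle-tested on this exact config) would be:

  location / {
    # serve the file if it exists, otherwise hand the request to Django
    try_files $uri @django;
  }

  location @django {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://my_project_backend;
  }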

After all this was in place, refreshing the page was giving me a 404: Django was trying to find a view for the current URL, but the route only existed in Ember. To fix that, I added the following to my urls.py:

from api.views import static_index_view

handler404 = static_index_view

And that fixed my issue. It’s not the most elegant way, but it works!
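A slightly more conventional alternative, sketched here assuming Django 1.8-style URL patterns, is a catch-all route placed after the API routes, so unknown URLs get the index with a 200 instead of a 404:

from django.conf.urls import url

from api.views import static_index_view

urlpatterns = [
    # ... API routes go first ...
    url(r'^.*$', static_index_view),  # everything else is handled by Ember
]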

Openswan tunnel to Juniper SSG

Just a small gathering of information on how I’ve set up a tunnel between a CentOS 6.3 box, running Openswan with the NETKEY IPsec stack, and a Juniper SSG. Before we start configuring, let’s define the IPs, networks and addresses (by the way, these are not the real IPs). We are linking two networks with this tunnel, not doing a network-to-client configuration.

On the CentOS side we have:

  • Name: Office City A
  • External IP: 200.201.202.203
  • Internal Network: 10.20.20.0/24
  • Internal Gateway IP: 10.20.20.254

On the Juniper SSG we have:

  • Name: Office City B
  • External IP: 100.101.102.103
  • Internal Network: 10.20.10.0/24
  • Internal Gateway IP: 10.20.10.254

Pre-shared Key: my-long-and-secret-key

Centos Side

First we need to install and configure the CentOS box. That should be fairly simple; start by installing Openswan:

yum install openswan

Now we have to edit /etc/ipsec.conf. The default config should be fine for us, but we have to make sure that the line including the “.conf” files stored under /etc/ipsec.d/ is uncommented. Your config file should look something like this:

# /etc/ipsec.conf - Openswan IPsec configuration file
#
# Manual:     ipsec.conf.5
#
# Please place your own config files in /etc/ipsec.d/ ending in .conf

version	2.0	# conforms to second version of ipsec.conf specification

# basic configuration
config setup
	# Debug-logging controls: "none" for (almost) none, "all" for lots.
	# klipsdebug=none
	# plutodebug="control parsing"
	# For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
	protostack=netkey
	nat_traversal=yes
	virtual_private=
	oe=off
	# Enable this if you see "failed to find any available worker"
	# nhelpers=0

#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf

You also need to make sure that the file /etc/ipsec.secrets includes all “.secrets” files under /etc/ipsec.d/. It should read like:

include /etc/ipsec.d/*.secrets

Now we create the config file for our tunnel; let’s name it “office_b_tun”. The new config will be stored in /etc/ipsec.d/office_b_tun.conf. Note that Openswan treats left and right symmetrically; here left is the Juniper side and right the CentOS side. The content of the file should be:

conn office_b_tun
	ike=3des-md5
	esp=3des-md5
	authby=secret
	keyingtries=0
	left=100.101.102.103
	leftsubnet=10.20.10.0/24
	leftnexthop=%defaultroute
	right=200.201.202.203
	rightsubnet=10.20.20.0/24
	rightnexthop=%defaultroute
	compress=no
	auto=start

We need to set the PSK for the tunnel, so edit the file /etc/ipsec.d/office_b_tun.secrets:

100.101.102.103 200.201.202.203: PSK "my-long-and-secret-key"

As I don’t have two NICs on this server, I’ve set up an alias for eth0; this is not needed if you have two NICs. Edit /etc/sysconfig/network-scripts/ifcfg-eth0:0:

DEVICE=eth0:0
ONBOOT=yes
NETWORK=10.20.0.0
NETMASK=255.255.0.0
IPADDR=10.20.20.254

Restart your network and start IPsec:

/etc/init.d/ipsec start

Finish configuring the Juniper, and then check the output of ipsec auto --status; it should read something like “IPsec SA established” and “ISAKMP SA established”. Verify your routes and test the tunnel.
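A quick test from the CentOS side, assuming the eth0:0 alias above, is to ping the remote gateway using the internal address as the source, so the traffic matches the tunnel’s subnets:

ping -I 10.20.20.254 10.20.10.254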

Juniper SSG

We can configure the Juniper using either the WebUI or the CLI, so I’ll first describe how to configure it using the WebUI, and later I’ll show the CLI config lines. I’m doing a Route Based VPN config as it adds more flexibility to my setup; you can use a Policy Based VPN if you wish, but I’m not covering that here (see a sample config here).

Some extra info we need to know on the Juniper side: I have a VPN zone bound to trust-vr. I recommend that you create a zone for your VPN tunnels, as it makes it easier to add traffic policies later.

Tunnel Interface

Go to Network -> Interface, select “Tunnel IF” and click the New button. Select an unused tunnel number; mine is 1. Also, make sure you select the Zone (VR) as “vpn” and that it’s an unnumbered interface. Click OK. That’s it for the tunnel interface.

VPN AutoKey Gateway

Now we need to set up the VPN gateway: go to VPN -> AutoKey Advanced -> Gateway and click the New button. Name the gateway “gw_to_office_a”. Make sure “Static IP Address” is selected, and fill in the IPv4/v6 Address/Hostname field; the remote IP address is 200.201.202.203.

Click on the Advanced button. On that page, enter the pre-shared key “my-long-and-secret-key” and select the correct outgoing interface; mine is “Ethernet0/0”.

In the Security Level field, select “pre-g2-3des-md5”. It’s really important that you get this right!

Make sure the Mode (Initiator) is set to Main. That’s it, just click OK to save the gateway configuration.

VPN AutoKey IKE

Time to set up the AutoKey IKE VPN: go to VPN -> AutoKey IKE and click the New button. I’ll name this VPN “vpn_to_office_a”. Make sure you select “gw_to_office_a” as the predefined gateway. Click on Advanced.

On the advanced configuration page, set the security level to “g2-esp-3des-md5”. That’s really important, otherwise the tunnel will not work.

Bind the VPN to tunnel interface “tunnel.1”. Check “Proxy-ID Check”, “VPN Monitor”, “Optimize” and “Rekey”. As the source interface, select your external port; mine is “Ethernet0/0”. Fill in the destination IP with the remote internal gateway address, 10.20.20.254.

Click OK to save the tunnel.

Proxy-ID

We need to set up the Proxy-ID for the tunnel: go to the AutoKey IKE listing and click on Proxy ID for the “vpn_to_office_a” tunnel. Add the following:

Local: 10.20.10.0/24
Remote: 10.20.20.0/24
Service: ANY

Click on New, and that’s it.

Route

We need to set a static route to the CentOS network, as it’s not running a dynamic routing daemon (such as RIP, OSPF, BGP, …). Go to Network -> Routing -> Destination. Select “trust-vr” and click New.

The route we want to add is 10.20.20.0/24, using the interface “tunnel.1” as the gateway with the address 200.201.202.203. Make the route permanent, set the preference to 20, and add the description “office A network”.

Click OK to save it.

Policy

As I’m connecting two trusted networks, I’ll allow any traffic incoming from VPN to Trusted and from Trusted to VPN. You can, and should, set tighter policies as you see fit.

CLI

You can configure the VPN using the CLI with the following commands; adapt as needed.

set zone id 100 "vpn"
set interface "tunnel.1" zone "vpn"
set interface tunnel.1 ip unnumbered interface ethernet0/0
set ike gateway "gw_to_office_a" address 200.201.202.203 Main outgoing-interface "ethernet0/0" preshare "my-long-and-secret-key" proposal "pre-g2-3des-md5"
set ike respond-bad-spi 1
set ike ikev2 ike-sa-soft-lifetime 60
unset ike ikeid-enumeration
unset ike dos-protection
unset ipsec access-session enable
set ipsec access-session maximum 5000
set ipsec access-session upper-threshold 0
set ipsec access-session lower-threshold 0
set ipsec access-session dead-p2-sa-timeout 0
unset ipsec access-session log-error
unset ipsec access-session info-exch-connected
unset ipsec access-session use-error-log
set vpn "vpn_to_office_a" gateway "gw_to_office_a" no-replay tunnel idletime 0 proposal "g2-esp-3des-md5" 
set vpn "vpn_to_office_a" monitor source-interface ethernet0/0 destination-ip 10.20.20.254 optimized rekey
set vpn "vpn_to_office_a" id 0xa bind interface tunnel.1
unset interface tunnel.1 acvpn-dynamic-routing
set vpn "vpn_to_office_a" proxy-id check
set vpn "vpn_to_office_a" proxy-id local-ip 10.20.10.0/24 remote-ip 10.20.20.0/24 "ANY" 
set route 10.20.20.0/24 interface tunnel.1 gateway 200.201.202.203 preference 20 permanent description "office A network"

Testing

On the Office A network, try to ping a machine on the Office B network, something like:

ping 10.20.10.254

On the Office B network, try to ping a machine on the Office A network, something like:

ping 10.20.20.254

If you get replies, everything is up and running! Have fun!

On hardware tutorials

A friend of mine sent me this post by Phillip Burgess on why Arduino fosters innovation in the long run. And he is absolutely right: Arduino does provide the basic environment for learning electronics and basic concepts of computer science and programming.

But I think we could do better. Arduino’s lack of proper debugging interfaces is a real problem, but it is easily fixable: just create and kickstart an open Arduino debugger, using debugWIRE or the standard JTAG interface.

My main issue is the lack of in-depth tutorials and manuals for the ones who need to go the extra mile and explore the full capabilities of the platform. The VIC-20 and the PET computers (or toys) had really great manuals, including the whole schematic and full programming documentation. We don’t have that for Arduino; we have lots of scattered tutorials, most on the same subjects, without adding much to them.

The basic example in one of these tutorials is the “hardware” hello world: blinking an LED. We electrical engineers, computer scientists, physicists and chemists take for granted what an LED is, what a CPU does, what an algorithm is, and so on. So we just write the really basic steps for doing this hello world. That’s more than enough for us, but it’s just the tip of the iceberg for a beginner, and sometimes even for someone with a technology background.

Why can’t a tutorial begin by explaining some basic physics of how an LED works, why a through-hole LED has legs of different lengths, and how an embedded CPU has digital and analog inputs and outputs? LEDs are ubiquitous nowadays, but hardly anyone really knows how they work (ok, you physicists should know it pretty well). Couldn’t we, as a collective work, write better tutorials? Have a physicist write a basic outline of a PN junction and how it can emit light. Then ask an electrical engineer to write about the basic concepts of an output port, and a computer scientist could write on how a CPU is just a “dumb” serial worker and how software is translated for that “dumb” worker.

I bet that we would have better engineers, fewer frustrated engineers working on software instead of hardware, and happier computer scientists willing to go down to the really low level of the CPU, just because they would know all the effort that has gone into the development of that “toy”.

Maybe I should try to write this tutorial…

Working with LCD glyphs

Reading a diff today, I found this piece of code defining a font for a matrix LCD display. The code is interesting: it lets the developer see what the font looks like, so fixing your alphabet is really easy.

unsigned char font5x7[][8] =
{
/* z */
  {
   ________,
   ________,
   XXXXX___,
   ___X____,
   __X_____,
   _X______,
   XXXXX___,
   ________},

/* s */
  {
   ________,
   ________,
   _XXX____,
   X_______,
   _XX_____,
   ___X____,
   XXX_____,
   ________}
};

But something is fishy here: how does the compiler understand ________ as being 0x00, or 0xFF? So I went to look at the included header… and ouch, this is what I found.

#define	_XX_____	0x60
#define	_XX____X	0x61
#define	_XX___X_	0x62
#define	_XX___XX	0x63
#define	_XX__X__	0x64
#define	_XX__X_X	0x65
#define	_XX__XX_	0x66
#define	_XX__XXX	0x67
#define	_XX_X___	0x68
#define	_XX_X__X	0x69
#define	_XX_X_X_	0x6a
#define	_XX_X_XX	0x6b
#define	_XX_XX__	0x6c
#define	_XX_XX_X	0x6d
#define	_XX_XXX_	0x6e
#define	_XX_XXXX	0x6f

This is ugly as code and pretty as ASCII art. When we are coding we want beautiful code, not pretty ASCII art; leave that to the artists, they do better art than we do. So, how do we fix the code? Simple: macros to the rescue!

#define _	0
#define X	1
#define b(a,b,c,d,e,f,g,h)	(a << 7 | b << 6 | c << 5 | d << 4 | e << 3 | f << 2 | g << 1 | h)

With this we let the compiler do the dirty job of creating all those values. Using the macros above, the code becomes easier to maintain and read. Just remember to #undef the macros after using them, as you don't want all your X's, _'s and b's being changed! (The exact cleanup is shown right after the table below.)

unsigned char font5x7[][8] = {
	/* z */ {
	b(_,_,_,_,_,_,_,_),
	b(_,_,_,_,_,_,_,_),
	b(X,X,X,X,X,_,_,_),
	b(_,_,_,X,_,_,_,_),
	b(_,_,X,_,_,_,_,_),
	b(_,X,_,_,_,_,_,_),
	b(X,X,X,X,X,_,_,_),
	b(_,_,_,_,_,_,_,_)
	},

	/* s */ {
	b(_,_,_,_,_,_,_,_),
	b(_,_,_,_,_,_,_,_),
	b(_,X,X,X,_,_,_,_),
	b(X,_,_,_,_,_,_,_),
	b(_,X,X,_,_,_,_,_),
	b(_,_,_,X,_,_,_,_),
	b(X,X,X,_,_,_,_,_),
	b(_,_,_,_,_,_,_,_)
	},
};
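And the cleanup mentioned above is just:

#undef _
#undef X
#undef b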

By the way, you can apply this idea to create small graphics in code. It's easy and self-documenting. Happy hacking!

<strong>Update:</strong>
I just remembered the section <em>Making a Glyph from Bit Patterns</em> from <strong>Expert C Programming</strong> (<a href="http://www.amazon.com/Expert-Programming-Peter-van-Linden/dp/0131774298">buy</a> this book if you don't have it yet!), it gives a solution similar to mine. The macros defined there are:


#define _ )*2
#define X )*2 + 1
#define s ((((((((0
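To see why this works: s opens eight parentheses, each _ closes one and doubles the running value, and each X closes one, doubles it and adds one, so the eight symbols assemble the byte one bit at a time. For example:

/* s X X X X X _ _ _  expands to: */
((((((((0 )*2 + 1 )*2 + 1 )*2 + 1 )*2 + 1 )*2 + 1 )*2 )*2 )*2  /* == 0xF8 */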

So the code looks like this:

unsigned char font5x7[][8] = {
	/* z */ {
	s _ _ _ _ _ _ _ _,
	s _ _ _ _ _ _ _ _,
	s X X X X X _ _ _,
	s _ _ _ X _ _ _ _,
	s _ _ X _ _ _ _ _,
	s _ X _ _ _ _ _ _,
	s X X X X X _ _ _,
	s _ _ _ _ _ _ _ _,
	},
	/* s */ {
	s _ _ _ _ _ _ _ _,
	s _ _ _ _ _ _ _ _,
	s _ X X X _ _ _ _,
	s X _ _ _ _ _ _ _,
	s _ X X _ _ _ _ _,
	s _ _ _ X _ _ _ _,
	s X X X _ _ _ _ _,
	s _ _ _ _ _ _ _ _,
	}
};

From CVS to Git to Gitorious!

Migrating from CVS to Git

Last week I offered to migrate some ~300 repositories to Git. Not an easy task at first, but with the right tools at hand it becomes manageable. Installing cvs2git and following its documentation will get you started. On Ubuntu that is as simple as:

sudo apt-get install cvs2svn

I know it’s weird, but cvs2git is bundled in cvs2svn… go figure.

But migrating hundreds of repositories isn’t a task to do manually, so I created a script to automate the process. As I had access to the server files, migrating was easier than I expected. My directory structure was something like:

  • cvs_project_1
    • repo_1
    • repo_2
    • repo_3
  • cvs_project_2

I decided to migrate one project at a time, making it straightforward to verify each repo. My script is the following; bear in mind that it may have some flaws, but it worked for me. Test it before erasing your old CVS data.

#!/bin/bash
# Copyright (C) Pedro Kiefer

for f in `cat repo_list`;
do
	FOP=${f/\//\-}
	echo "===== Creating git repository for ${f/\//\-/}/";
	sed -e "s/__REPO__/${f/\//\\/}/g" my-default.options > $FOP.options;
	cvs2git --options=$FOP.options
	rm $FOP.options
	mkdir $FOP.git
	cd $FOP.git
	git init --bare
	cat ../cvs2svn-tmp/git-blob.dat ../cvs2svn-tmp/git-dump.dat | git fast-import
	cd ..
done

The script takes a repo_list file with a list of paths to the CVS repositories. Creating this list is quite easy; something like the following should work. Be sure to remove CVSROOT and the root directory.

find cvs_project_1/ -maxdepth 1 -type d | sort > repo_list
vim repo_list

The other file the script needs is my-default.options, which is the configuration file used by cvs2git. Most of the default values are good, but you really want to add a list of CVS committers, so you can map each CVS login to a name and email address. The other change needed is on the line that sets the repository path: for the script to work, you need to have it set to __REPO__, like this:

run_options.set_project(
    # The filesystem path to the part of the CVS repository (*not* a
    # CVS working copy) that should be converted.  This may be a
    # subdirectory (i.e., a module) within a larger CVS repository.
    r'__REPO__',

That’s it, just run the script, and voilà: git repositories for all your CVS modules.
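Before erasing anything, it’s worth spot-checking a converted repository; with the naming used by the script above, something like:

cd cvs_project_1-repo_1.git
git log --oneline | head
git tag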

From Git to Gitorious

The second part of my task was importing all of those Git repositories into my local Gitorious install. Again, doing it manually is not the right way. After asking about it on the Gitorious mailing list and learning some Ruby, I created the little script below. It creates all the repositories for a given project. The projects themselves were created manually on Gitorious, as I had only 6 projects; extending the tool to support creating projects should be easy.

After using the script above, I had the following directory structure:

  • project_1/
    • repo_1.git
    • repo_2.git
    • repo_3.git

The script takes the project name as an argument, which should be equal to the one you created in the Gitorious web interface. The script scans the project directory and creates the matching Gitorious repositories, copying the data into the newly created repositories. Some magic regexps were added to remove version numbers and set uniform names for the new repositories; you might want to edit this to your taste.

By the way, this is my very first Ruby program, don't expect it to be pretty!

#!/usr/bin/env ruby
# encoding: utf-8
#--
# Copyright (C) Pedro Kiefer
#
# Mass migrate git repositories to gitorious
#
#++

require "/path/to/gitorious/config/environment.rb"
require "optparse"

def new_repos(opts={})
  Repository.new({
    :name => "foo"
    }.merge(opts))
end

current_proj = ARGV[0]

@project = Project.find_by_slug(current_proj)

Dir.chdir(current_proj)
puts Dir.pwd
files = Dir.glob("*.git")

files.each do |f|
  orig_repo = f
  f = f.gsub(/\.git$/, "")
  f = f.gsub(/_/, "-")

  # has version?
  version = f.match(/-([0-9](.[0-9][0-9]*)+)(-)?/)
  f = f.gsub(/-([0-9](.[0-9][0-9]*)+)(-)?/, "")

  desc = "Repository for package #{f.downcase}\n"
  desc << "Package version #{version[1]}\n" if version

  print "Creating repository for package #{f} ... "

  @repo = new_repos(:name => f.downcase, :project => @project, :owner => @project.owner, :user => @project.user, :description => desc)
  @repo.save
  path = @repo.full_repository_path
  Repository.git_backend.create(path)
  Repository.create_git_repository(@repo.real_gitdir)
  @repo.ready = true
  @repo.save

  FileUtils.cp_r(["#{orig_repo}/branches", "#{orig_repo}/info", "#{orig_repo}/objects", "#{orig_repo}/refs"], @repo.full_repository_path)
  puts "Ok!"
end
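Assuming you save the script as mass_import.rb (the name is mine) next to the project directories, usage is just:

ruby mass_import.rb project_1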