Ember and Django Deployment

I’m working on a small project. It started as a simple showcase using only Django, but soon enough I needed more interaction and started a simple front-end in Ember. So now I have two projects, a front-end and a back-end. Django morphed into being only an API endpoint, keeping my data in a database and handling a few other things.

So I need to deploy this on a really basic server, no fancy clouds and CDN stuff. My first thought was to deploy to two different servers, but using the same instance of nginx. But then I would have to handle CORS issues, and whatnot.

That was kind of bothering me… then I found Luke Melia’s talk on Lightning Fast Deployment of Your Rails-backed JavaScript app. And it just clicked, problem solved. Applying his ideas to Django was really straightforward. I just needed a view, a simple model for storing the current index, and a static folder to store all of this. Nginx will serve all static files and Django just needs to serve the index.html, enabling me to use its templating system.


Model for handling the current page in use:

class IndexPage(models.Model):
    hash = models.CharField(max_length=10)
    index_name = models.CharField(max_length=40)
    is_current = models.BooleanField(default=False)

    def save(self, *args, **kwargs):
        # Only one index can be current: demote any other current index first.
        if self.is_current:
            IndexPage.objects.filter(is_current=True).update(is_current=False)
        super(IndexPage, self).save(*args, **kwargs)

The view that is mapped as the default in the urls.py file:

def static_index_view(request):
    hash_id = request.GET.get('hash_id', '')
    index = IndexPage.objects.get(is_current=True)
    if hash_id:
        try:
            index = IndexPage.objects.get(hash=hash_id)
        except IndexPage.DoesNotExist:
            pass  # fall back to the current index
    logger.debug("Using index: %s" % index.hash)
    path = os.path.normpath(os.path.join(settings.BASE_DIR, '../static'))
    return render_to_response(index.index_name, dirs=[path, ])

Django deployment stayed pretty much the same, minus a few extra libraries that weren’t needed anymore and a few paths that changed. I’ve added a few management commands to handle adding, listing and setting the current index page, really basic stuff.
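The `indexsetcur` command essentially clears the current flag everywhere and sets it on one row. A minimal sketch of that logic, with plain dicts standing in for IndexPage rows (the `set_current` helper is illustrative, not the actual management command):

```python
def set_current(pages, digest):
    """Mark the page matching `digest` as current; demote all others.

    `pages` is a list of dicts with 'hash' and 'is_current' keys,
    standing in for IndexPage rows.
    """
    if not any(p['hash'] == digest for p in pages):
        raise LookupError("no index with hash %s" % digest)
    for page in pages:
        page['is_current'] = (page['hash'] == digest)
    return pages

pages = [
    {'hash': 'aaf4c61d', 'is_current': True},
    {'hash': '1b2c3d4e', 'is_current': False},
]
set_current(pages, '1b2c3d4e')
```

In the real command the loop is a single `update()` plus a `save()`, as in the model above.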


The Ember side is the easiest part: just build and upload to the server.

ember build --environment=production

Copy the contents to your server’s static root after ember build finishes. I’ve automated that using flightplan, which works like Fabric, but it’s all JavaScript. One issue with flightplan is that it doesn’t ask for passwords while doing ssh or sudo – not really a bad thing, just some extra configuration needed. My flightplan config is something like this:

var plan = require('flightplan');

plan.target('staging', {
  host: '',
  username: 'stage',
  agent: process.env.SSH_AUTH_SOCK
});

var digest, archiveName;

plan.local(['deploy', 'build'], function(local) {
  local.log("Removing previous build.");
  local.rm('-rf dist');

  local.log("Building app...");
  local.exec("ember build --environment=production");

  digest = local.exec("shasum dist/index.html | cut -c 1-8").stdout.replace(/[\n\t\r]/g, "");
  local.mv("dist/index.html dist/index." + digest + ".html");

  archiveName = "my-project." + digest + ".tar.gz";

  local.with("cd dist", function() {
    local.tar('-czvf ../' + archiveName + ' *');
  });
});

plan.local(['deploy', 'upload'], function(local) {
  local.log("Uploading app...");

  var input = local.prompt('Ready for deploying to ' + plan.target.destination + '? [yes]');
  if (input.indexOf('yes') === -1) {
    local.abort('user canceled flight'); // this will stop the flightplan right away.
  }

  local.log("Current digest: " + digest);
  local.transfer(archiveName, '/opt/django/apps/my-project/static');
});

plan.remote(['deploy', 'extract'], function(remote) {
  remote.with('cd apps/my-project/static', function() {
    remote.tar('-xzf ' + archiveName);
  });
});

plan.remote(['deploy', 'config'], function(remote) {
  remote.log("Configure app... digest: " + digest);

  remote.with('cd apps/my-project', function() {
    remote.with('source bin/activate', function() {
      remote.exec('./my_project/manage.py indexadd ' + digest + ' index.' + digest + '.html');
      remote.log('Added new index.');

      var input = remote.prompt('Make this release current? [yes]');
      if (input.indexOf('yes') === 0) {
        remote.exec('./my_project/manage.py indexsetcur ' + digest);
      }
    });
  });
});

plan.remote('list-indexes', function(remote) {
  remote.with('cd apps/my-project', function() {
    remote.with('source bin/activate', function() {
      remote.exec('./my_project/manage.py indexlist');
    });
  });
});
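As a side note, the eight-character digest computed with `shasum dist/index.html | cut -c 1-8` is just the first 8 hex characters of the file’s SHA-1, so the same value can be reproduced on the Django side if you ever need it there (the `short_digest` helper is mine, not part of the project):

```python
import hashlib

def short_digest(path):
    """First 8 hex chars of the file's SHA-1, mirroring `shasum | cut -c 1-8`."""
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()[:8]
```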

Nginx Configuration

Nginx gave me a few headaches, because I was also using the PushStream module, but in the end I found a good enough solution for running both Django and statically serving the Ember files. My config is the following, which is pretty basic:

upstream my_project_backend {
    server unix:/opt/django/run/my_project.sock fail_timeout=0;
}

server {
  # listen 80 default deferred; # for Linux
  # listen 80 default accept_filter=httpready; # for FreeBSD
  listen 80;

  client_max_body_size 4G;
  server_name my-project.local;

  # ~2 seconds is often enough for most folks to parse HTML/CSS and
  # retrieve needed images/icons/frames, connections are cheap in
  # nginx so increasing this is generally safe...
  keepalive_timeout 5;

  # path for static files
  root /opt/django/apps/my-project/static;

  access_log /opt/django/logs/nginx/my_project_access.log;
  error_log  /opt/django/logs/nginx/my_project_error.log;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # enable this if and only if you use HTTPS, this helps Rack
    # set the proper protocol for doing redirects:
    # proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # proxy_buffering off;

    # Try to serve static files from nginx, no point in making an
    # *application* server like Unicorn/Rainbows! serve static files.
    if (!-f $request_filename) {
      proxy_pass http://my_project_backend;
    }
  }
}

After all this was in place, refreshing the page was giving me a 404 – Django was trying to find a view for the current URL, but it only existed in Ember. To fix that I’ve added the following to my urls.py:

from api.views import static_index_view

handler404 = static_index_view

And that fixed my issue. It’s not the most elegant way, but it works!

SELinux + FreePBX

After watching this awesome talk on SELinux, I realized that I should give SELinux another try.

Disclaimer: this is a learning exercise for me, not a guide on how to secure FreePBX.

Most Linux how-to guides just say you should disable SELinux, for whatever particular reason they have. But you shouldn’t disable it just because you don’t understand what it’s doing and why it’s blocking your commands. So I’ll try to install FreePBX, a piece of software for managing an Asterisk server, following the instructions from here, but skipping the step of disabling SELinux. Doing that will show you that SELinux blocks most of the actions from the web interface, and things don’t work as they are supposed to.

So let’s change the enforcing policy to ‘Permissive’, using the command setenforce 0. This allows everything to work, but it also logs everything that would be blocked by SELinux to /var/log/audit/audit.log. If you play around with FreePBX for a while, you will see lots of entries in that log file, such as:

type=AVC msg=audit(1397677629.300:287): avc:  denied  { execute_no_trans } for  pid=13449 comm="sh" path="/var/lib/asterisk/bin/retrieve_conf" dev=dm-0 ino=12191 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:asterisk_var_lib_t:s0 tclass=file
type=AVC msg=audit(1397677629.615:294): avc:  denied  { write } for  pid=13449 comm="retrieve_conf" name="queue_devstate.agi" dev=dm-0 ino=13546 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:asterisk_var_lib_t:s0 tclass=file
type=AVC msg=audit(1397677629.778:303): avc:  denied  { open } for  pid=13452 comm="crontab" name="asterisk" dev=dm-0 ino=13629 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_cron_spool_t:s0 tclass=file
type=AVC msg=audit(1397677985.862:334): avc:  denied  { write } for  pid=11503 comm="httpd" name="amportal.conf" dev=dm-0 ino=271259 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:etc_t:s0 tclass=file
type=AVC msg=audit(1397678526.281:406): avc:  denied  { write } for  pid=14718 comm="retrieve_conf" name="indications.conf" dev=dm-0 ino=276961 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:asterisk_etc_t:s0 tclass=file

Every action that would be denied is listed here. How can we use this to let SELinux enforce its policies while FreePBX actually works? Another tool can help us: audit2allow. It scans the audit log and figures out the policy needed to allow those actions to pass SELinux.

First try: I’ll filter only the asterisk-related logs and pipe them to audit2allow.

grep asterisk /var/log/audit/audit.log | audit2allow -m asterisklocal
module asterisklocal 1.0;

require {
	type asterisk_etc_t;
	type user_cron_spool_t;
	type httpd_t;
	type asterisk_var_lib_t;
	class lnk_file { read getattr };
	class dir { read search open getattr };
	class file { execute setattr read getattr execute_no_trans write ioctl unlink open };
}

#============= httpd_t ==============
allow httpd_t asterisk_etc_t:file write;
allow httpd_t asterisk_var_lib_t:dir { read search open getattr };
allow httpd_t asterisk_var_lib_t:file { execute setattr read ioctl execute_no_trans write getattr open };
allow httpd_t asterisk_var_lib_t:lnk_file { read getattr };
allow httpd_t user_cron_spool_t:file { unlink open };

All those actions come from httpd_t, so maybe there are more rules that should be allowed. Let’s try again:

grep httpd /var/log/audit/audit.log | audit2allow -m asterisklocal

The output got a lot bigger; it might cover everything that should be allowed now.

module asterisklocal 1.0;

require {
	type ssh_port_t;
	type asterisk_var_lib_t;
	type httpd_t;
	type port_t;
	type etc_runtime_t;
	type user_cron_spool_t;
	type shadow_t;
	type sysctl_fs_t;
	type asterisk_etc_t;
	type etc_t;
	class capability audit_write;
	class tcp_socket name_connect;
	class file { rename execute setattr read create getattr execute_no_trans write ioctl unlink open };
	class netlink_audit_socket { nlmsg_relay create };
	class lnk_file { read getattr };
	class dir { search read write getattr remove_name open add_name };
}

#============= httpd_t ==============
allow httpd_t asterisk_etc_t:file write;
allow httpd_t asterisk_var_lib_t:dir { read search open getattr };
allow httpd_t asterisk_var_lib_t:file { execute setattr read ioctl execute_no_trans write getattr open };
allow httpd_t asterisk_var_lib_t:lnk_file { read getattr };
allow httpd_t etc_runtime_t:file setattr;
allow httpd_t etc_t:file write;

#!!!! This avc can be allowed using one of the these booleans:
#     allow_ypbind, httpd_can_network_connect
allow httpd_t port_t:tcp_socket name_connect;

#!!!! This avc can be allowed using the boolean 'allow_httpd_mod_auth_pam'
allow httpd_t self:capability audit_write;

#!!!! This avc can be allowed using the boolean 'allow_httpd_mod_auth_pam'
allow httpd_t self:netlink_audit_socket { nlmsg_relay create };
allow httpd_t shadow_t:file { read getattr open };

#!!!! This avc can be allowed using one of the these booleans:
#     allow_ypbind, httpd_can_network_connect
allow httpd_t ssh_port_t:tcp_socket name_connect;
allow httpd_t sysctl_fs_t:dir search;
#!!!! The source type 'httpd_t' can write to a 'dir' of the following types:
# squirrelmail_spool_t, dirsrvadmin_config_t, var_lock_t, tmp_t, var_t, tmpfs_t, dirsrv_config_t, httpd_tmp_t, dirsrvadmin_tmp_t, httpd_cache_t, httpd_tmpfs_t, httpd_squirrelmail_t, var_lib_t, var_run_t, var_log_t, dirsrv_var_log_t, zarafa_var_lib_t, dirsrv_var_run_t, httpd_var_lib_t, httpd_var_run_t, httpd_nagios_rw_content_t, passenger_tmp_t, httpd_nutups_cgi_rw_content_t, httpd_apcupsd_cgi_rw_content_t, httpd_sys_content_t, httpd_dspam_rw_content_t, httpd_mediawiki_rw_content_t, httpd_squid_rw_content_t, httpd_prewikka_rw_content_t, httpd_smokeping_cgi_rw_content_t, passenger_var_run_t, httpd_openshift_rw_content_t, httpd_dirsrvadmin_rw_content_t, httpd_w3c_validator_rw_content_t, cluster_var_lib_t, cluster_var_run_t, httpd_user_rw_content_t, httpd_awstats_rw_content_t, root_t, httpdcontent, httpd_cobbler_rw_content_t, httpd_munin_rw_content_t, cluster_conf_t, httpd_bugzilla_rw_content_t, httpd_cvs_rw_content_t, httpd_git_rw_content_t, httpd_sys_rw_content_t, httpd_sys_rw_content_t

allow httpd_t user_cron_spool_t:dir { write remove_name getattr search add_name };
allow httpd_t user_cron_spool_t:file { rename create unlink open setattr };

But this command only shows what the module ‘asterisklocal’ will do; we must run it with ‘-M’ to generate the loadable policy file. This post, from Dan Walsh, explains how this works. After generating it, we need to load it with semodule -i asterisklocal.pp. Now we can set SELinux back to enforcing mode and FreePBX should still be working.

That should cover the basics for running FreePBX using SELinux, but this is not supposed to be a complete guide on how to secure FreePBX.

Reviewing the policies needed to run FreePBX makes me think of all the possible exploits and problems that FreePBX hides inside itself. From a security point of view, FreePBX does not use the safest architecture around; it could definitely be improved – maybe by splitting it into a front-end / back-end design. I think it’s safe to say that one should not run other sensitive services on the same server as FreePBX, especially if you disabled SELinux.

Openswan tunnel to Juniper SSG

Just a small gathering of information on how I’ve set up a tunnel between a CentOS 6.3 box, with openswan and the NETKEY IPsec stack, and a Juniper SSG. Before we start configuring, let’s define the IPs, networks and addresses (by the way, those are not the real IPs). We are linking two networks with this tunnel, not a network-to-client configuration.

On the Centos side we have:

  • Name: Office City A
  • External Ip:
  • Internal Network:
  • Internal Gateway Ip:

On the Juniper SSG we have:

  • Name: Office City B
  • External Ip:
  • Internal Network:
  • Internal Gateway Ip:

Pre-shared Key: my-long-and-secret-key

Centos Side

First we need to install and configure the CentOS box. That should be fairly simple; start by installing openswan:

yum install openswan

Now we have to edit /etc/ipsec.conf. The default config should be fine for us, but we have to make sure that the line which includes the “.conf” files stored under /etc/ipsec.d/ is uncommented. Your config file should look something like this:

# /etc/ipsec.conf - Openswan IPsec configuration file
# Manual:     ipsec.conf.5
# Please place your own config files in /etc/ipsec.d/ ending in .conf

version	2.0	# conforms to second version of ipsec.conf specification

# basic configuration
config setup
	# Debug-logging controls: "none" for (almost) none, "all" for lots.
	# klipsdebug=none
	# plutodebug="control parsing"
	# For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
	# Enable this if you see "failed to find any available worker"
	# nhelpers=0

#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf

You also need to make sure that the file /etc/ipsec.secrets includes all “.secrets” files under /etc/ipsec.d/. It should read like:

include /etc/ipsec.d/*.secrets

We have to create the config file for our tunnel; let’s name it “office_b_tun”. The new config will be stored under /etc/ipsec.d/office_b_tun.conf. The content of the file should be:

conn office_b_tun
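A minimal conn definition might look like the following sketch. The addresses are placeholders for the endpoints listed above, and the 3DES/MD5 group-2 proposals are my reading of the pre-g2-3des-md5 / g2-esp-3des-md5 settings used on the Juniper side below:

conn office_b_tun
	type=tunnel
	authby=secret
	auto=start
	left=<office-a-external-ip>
	leftsubnet=<office-a-internal-network>
	right=<office-b-external-ip>
	rightsubnet=<office-b-internal-network>
	ike=3des-md5;modp1024
	phase2=esp
	phase2alg=3des-md5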

We need to set the PSK for the tunnel, so edit the file /etc/ipsec.d/office_b_tun.secrets. The line format is the two endpoint addresses followed by : PSK "my-long-and-secret-key".

As I don’t have two NICs on my server, I’ve set up an alias for eth0. This is not needed if you have two NICs. Edit /etc/sysconfig/network-scripts/ifcfg-eth0:0:


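For reference, a minimal alias file might look like this (the address is a placeholder for the internal gateway IP above, and the /24 netmask is an assumption):

DEVICE=eth0:0
IPADDR=<office-a-internal-gateway-ip>
NETMASK=255.255.255.0
ONBOOT=yes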
Restart your network, and start ipsec.

/etc/init.d/ipsec start

Finish configuring the Juniper, and then check the output of ipsec auto --status; it should read something like “IPsec SA established” and “ISAKMP SA established”. Verify your routes and test the tunnel.

Juniper SSG

We can configure the Juniper using either the WebUI or the CLI, so I’ll first describe how to configure it using the WebUI, and later I’ll show the CLI config lines. I’m doing a Route Based VPN config as it adds more flexibility to my setup; you can use a Policy Based VPN if you wish, but I’m not covering that here (see a sample config here).

Some extra info we need on the Juniper side: I have a VPN zone bound to trust-vr. I recommend that you create a zone for your VPN tunnels, as it makes it easier to add traffic policies to them later.

Tunnel Interface

Go to Network -> Interface, select “Tunnel IF” and click the New button. Select an unused tunnel number; mine is 1. Also, make sure you select the Zone (VR) as “vpn” and that it’s an unnumbered interface. Click Ok. That’s it for the Tunnel Interface.

VPN AutoKey Gateway

Now we need to set up the VPN Gateway; for that, go to VPN -> AutoKey Advanced -> Gateway. Click on the New button. Name the gateway “gw_to_office_a”. Make sure “Static IP Address” is selected, and fill in the IPv4/v6 Address/Hostname field with the remote IP address.

Click on the Advanced button. On that page, enter the Pre-shared Key “my-long-and-secret-key”. Select the correct outgoing interface; mine is “Ethernet0/0”.

On the Security Level field, select “pre-g2-3des-md5”. It’s really important that you get this right!

Make sure the Mode (Initiator) is set to Main. That’s it, just click Ok to save the gateway configuration.


Time to set up the AutoKey IKE VPN, so go to VPN -> AutoKey IKE. Click on the New button. I’ll name this VPN “vpn_to_office_a”. Make sure you select “gw_to_office_a” as the predefined gateway. Click on Advanced.

On the advanced configuration page, set the security level to “g2-esp-3des-md5”. That’s really important, otherwise the tunnel will not work.

Bind the VPN to tunnel interface “tunnel.1”. Check “Proxy-ID Check”, “VPN Monitor”, “Optimize”, “Rekey”. Select as source interface your external port; mine is “Ethernet0/0”. Fill in the destination IP, the remote internal gateway IP address.

Click Ok to save the tunnel.


We need to set up the Proxy-ID for the tunnel: go to the AutoKey IKE listing and click on Proxy ID for the “vpn_to_office_a” tunnel. Add the following:

Service: ANY

Click on New, and that’s it.


We need to set a static route to the CentOS network, as it’s not running a dynamic routing daemon (such as RIP, OSPF, BGP, …). Go to Network -> Routing -> Destination. Select “trust-vr” and click New.

The route we want to add points at the Office A internal network, using as gateway the interface “tunnel.1”. Make the route permanent, set the preference to 20, and add the description “office A network”.

Click Ok to save it.


As I’m connecting two trusted networks, I’ll allow any traffic incoming from VPN to Trusted and from Trusted to VPN. You can, and should, set tighter policies as you see fit.


You can also configure the VPN using the CLI; use the following commands, adapting as needed.

set zone id 100 "vpn"
set interface "tunnel.1" zone "vpn"
set interface tunnel.1 ip unnumbered interface ethernet0/0
set ike gateway "gw_to_office_a" address Main outgoing-interface "ethernet0/0" preshare "my-long-and-secret-key" proposal "pre-g2-3des-md5"
set ike respond-bad-spi 1
set ike ikev2 ike-sa-soft-lifetime 60
unset ike ikeid-enumeration
unset ike dos-protection
unset ipsec access-session enable
set ipsec access-session maximum 5000
set ipsec access-session upper-threshold 0
set ipsec access-session lower-threshold 0
set ipsec access-session dead-p2-sa-timeout 0
unset ipsec access-session log-error
unset ipsec access-session info-exch-connected
unset ipsec access-session use-error-log
set vpn "vpn_to_office_a" gateway "gw_to_office_a" no-replay tunnel idletime 0 proposal "g2-esp-3des-md5" 
set vpn "vpn_to_office_a" monitor source-interface ethernet0/0 destination-ip optimized rekey
set vpn "vpn_to_office_a" id 0xa bind interface tunnel.1
unset interface tunnel.1 acvpn-dynamic-routing
set url protocol websense
set vpn "vpn_to_office_a" proxy-id check
set vpn "vpn_to_office_a" proxy-id local-ip remote-ip "ANY" 
set route interface tunnel.1 gateway preference 20 permanent description "office A network"


On the Office A network, try to ping a machine on the Office B network, something like:


On the Office B network, try to ping a machine on the Office A network, something like:


If you got pings, everything is up and running! Have fun!

On hardware tutorials

A friend of mine sent me this post from Phillip Burgess on why Arduino fosters innovation in the long run. And he is absolutely right: Arduino provides the basic environment for learning electronics and basic concepts of computer science and programming.

But I think we could do better. Arduino’s lack of a proper debugging interface is a real problem. But it is easily fixable: just create and kickstart an open Arduino debugger, using debugWIRE or the default JTAG interface.

My main issue is the lack of in-depth tutorials and manuals for the ones that need to go the extra mile – to learn and explore the full capabilities of the platform. The VIC-20 and the PET computers (or toys) had really great manuals, including the whole schematic and full programming documentation. We don’t have that for Arduino; we have lots of scattered tutorials, most on the same subject, without adding much to it.

The basic example, in one of these tutorials, is the “hardware” hello world: blinking an LED. We, electrical engineers, computer scientists, physicists, chemists, take for granted what an LED is, what a CPU does, what an algorithm is, etc. So we just write the really basic steps for doing this hello world. That’s more than enough for us, but it’s just the tip of the iceberg for a beginner, and sometimes even for someone with a technology background.

Why can’t a tutorial begin by explaining some basic physics of how an LED works, why a through-hole LED has different length legs, and how an embedded CPU has digital and analog inputs and outputs? LEDs are ubiquitous nowadays, but few people really know how they work (ok, you physicists should know it pretty well). Couldn’t we, as a collective work, write better tutorials? Have a physicist write a basic outline of a PN junction and how it can emit light. Then ask an electrical engineer to write the basic concepts of an output port, and a computer scientist could write on how a CPU is just a “dumb” serial worker and how software is translated for that “dumb” worker.

I bet that we would have better engineers, fewer frustrated engineers working on software and not on hardware, and happier computer scientists willing to go to the really low level of the CPU – just because they would now know all the effort that has gone into the development of that “toy”.

Maybe I should try to write this tutorial…

Open Hardware Projects that I would like to create or be part of

For the last couple of weeks I’ve had several ideas for products that I would like to create as Open Hardware projects. Some of them might be just delusions of grandeur – of doing something somewhat impossible – and would have prohibitive costs for something open sourced. Just to be sure, I don’t want to build an open source lunar module (although that would be wicked). Others are doable with small funding and by reusing other open hardware projects.

Leaving the impossible projects aside, for now, I would really like to create these with the help of anyone willing to share knowledge and learn something new in the process. So here is the list (in no particular order):


There are several DIY beer kits on the internet – some really basic, others really advanced. Some brewers share the way they brew, but none is really Open Source, or a full project. The idea is to have a small brewery able to brew 40 liters of beer, or less. The system should control all the pumps, heaters and connections needed.

Agriculture/Garden Sensors Network

An active soil monitor with high-quality measurements, able to send periodic data on temperature, moisture, pH, etc. Battery powered, with a solar charger; the battery should last for at least 10 days. The collected data could be fed to an automatic irrigation system, or just enable better crops.
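The 10-day battery target is easy to sanity-check with a back-of-envelope calculation. The numbers below (a 2000 mAh cell and a 5 mA average draw) are assumptions of mine, not measurements:

```python
def battery_life_days(capacity_mah, avg_current_ma):
    """Ideal battery life in days, ignoring self-discharge and converter losses."""
    return capacity_mah / avg_current_ma / 24.0

print(battery_life_days(2000, 5))  # 2000 mAh cell, 5 mA average draw
```

So even without the solar charger, keeping the average draw in the low single-digit milliamps (sleeping between measurements) comfortably clears 10 days.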

Telecine System

Since I discovered some really old 8mm films that my grandpa made in the 1960’s, my interest in the subject has grown. Converting an old film to digital media is really expensive, but the mechanics and electronics needed are somewhat simple, and have been in use for ages. The mechanics are known to work since circa 1880, and the electronics (CCDs) have been in use since 1971.


There are several high-speed data acquisition systems being developed by CERN. Reuse one of these designs, add an FPGA and a USB 3.0 port, and you have a really basic oscilloscope. Want more channels? Add a backplane for connecting up to 4 acquisition cards and stream all the data to the USB port. Connect the device to a PC, tablet, etc. and you have a really good scope. The cost of this system would probably be bound by the cost of the ADC, which is quite expensive (especially in small quantities).


  • FMC ADC 1G 10B 2CHA
  • FMC ADC 250M 12B 2CHA

Portable Ultrasound

A friend of mine, who is a doctor, always talks about the benefits of using ultrasound for early diagnosis. But ultrasound devices for diagnostics are just too expensive to be used by a broader range of physicians, like those in remote areas, at accidents, etc. Ultrasound has been in development since circa 1950, which means that the basic electronics needed are simple. The data processing algorithms are way more complicated than the electronics. My idea is to focus on the electronics to have a basic working device, just acquiring all the information needed to feed the proper imaging algorithms. The main challenge is having a fast (2 MHz ~ 18 MHz) switch at high voltage (~90 V).