Simply logging JavaScript calls.

When debugging complicated JavaScript one thing I find myself constantly doing is using console.log() to print out what functions are being called in what order. JavaScript is single-threaded and event driven so it’s often not entirely clear what functions will be called in what order.

Traditionally I’ve done something like this:

function foo(bar) {
  console.log('foo(' + bar + ')');
  ...
}
but last night I came up with something better. It’s probably not completely portable but it seems to work fine in recent Chrome / Safari / Firefox, which is really all I’m going to be using for debugging anyway:

function logCall() {
  console.log(arguments.callee.caller.name + '(' +
    Array.prototype.slice.call(arguments.callee.caller.arguments)
      .map(JSON.stringify).join(', ') + ')');
}

Just add logCall() to the start of functions and the function call (including serialized arguments) will be logged to the console. Easy and foolproof.

LXC on Ubuntu 11.04 Server

For a while I’ve been interested in Linux Containers (LXC), a new way of providing Linux virtual machines on Linux hosts. Unlike machine virtualization systems like Xen, VMWare, KVM and VirtualBox, LXC is an OS-level virtualization system. Instead of booting a complete disk image, Linux Containers share the same kernel as the host and typically use a filesystem that is a sub-tree of the host’s. I’d previously tried to get this running on a machine that was connected by WiFi, but basically that doesn’t work: bridged network interfaces don’t play nicely with wireless network interfaces. I just set up a machine on wired ethernet, found a great introductory article, and I’m up and running.

Stéphane Graber‘s article is great, but it hides a bunch of the details away in a script he wrote. Here I’m going to explain how I got LXC up and running on my Ubuntu 11.04 system as simply as possible.

Install the required packages
Everything you need to set up and run LXC is part of Ubuntu these days, but it’s not all installed by default. Specifically you need the LXC tools, debootstrap (which can create new Ubuntu / Debian instances) and the Linux bridge utilities.

apt-get install lxc debootstrap bridge-utils

Set up the network
To put the containers on the network we use a bridge. This is a virtual ethernet network in the kernel that passes ethernet frames back and forth between the containers and the physical network. Once we’ve set this up our current primary network interface (eth0) becomes just a conduit for the bridge (br0) so the bridge should be used as the new primary interface.

Add the bridge in /etc/network/interfaces:

# LXC bridge
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

And change eth0 to be manually configured, ie: it doesn’t DHCP:

auto eth0
iface eth0 inet manual

Now bring up the interface:

ifup br0

Set up Control Groups
Control Groups are a fairly new Linux mechanism for isolating groups of processes as well as managing the resources allocated to them. The feature is exposed to userland via a filesystem that must be mounted, so put this in your /etc/fstab:

cgroup          /cgroup         cgroup  defaults        0       0

Create the mount point, and mount it:

mkdir /cgroup
mount /cgroup

Now we’re ready to create containers
An LXC container is a directory under /var/lib/lxc. That directory contains a configuration file named config, a filesystem table called fstab and a root filesystem called rootfs. The easiest way to create all that is to use the lxc-create script. First we create a configuration file that describes the networking configuration, let’s call it network.conf:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0

and then run lxc-create to build the container:

lxc-create -n mycontainer -t natty -f network.conf

Note, -n lets you specify the container name (ie: the directory under /var/lib/lxc), -f lets you specify a base configuration file and -t lets you indicate which template to use.

Container templates are implemented as scripts under /usr/lib/lxc/templates. Given a destination path they will install a base Linux system customized to run in LXC. There are template scripts to install Fedora, Debian and recent Ubuntus as well as minimal containers with just busybox or an sshd. Template scripts cache completed root filesystems under /var/cache/lxc. I’ve found the container template scripts to be interesting and quite readable.

Starting a container
Starting the container is as simple as:

lxc-start --name mycontainer --daemon

If you leave off the --daemon then you’ll be presented with the container’s console. Getting a console is as simple as:

lxc-console --name mycontainer

Keeping containers straight
Each time a container is started it will be allocated a random MAC (ethernet) address. This means that it’ll be seen by your network as a different machine each time it’s started and it’ll be DHCPed a new address each time. That’s probably not what you want. When it requests an address from the DHCP server your container will pass along its hostname (eg: mycontainer). If your DHCP server can be configured to allocate addresses based on hostname then you can use that. Mine doesn’t, so I assign static ethernet addresses to my containers. There’s a class of ethernet addresses that are “locally administered addresses”. These have a most-significant byte with the bit pattern xxxxxx10, ie: a second hex digit of 2, 6, A or E. These addresses will never be programmed into a network card at the factory. I chose an arbitrary MAC address block and started allocating addresses to containers. They’re allocated by adding the following line after the network configuration in the /var/lib/lxc/mycontainer/config file:

lxc.network.hwaddr = 66:5c:a1:ab:1e:01
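If you’d rather generate such addresses than pick them by hand, here’s a little sketch (the random_laa_mac name is my own) that stamps out random locally administered, unicast MACs by setting the locally-administered bit and clearing the multicast bit in the first octet:

```shell
# Print a random locally administered, unicast MAC address.
# (first octet & 0xFC) | 0x02 clears the multicast bit (0x01) and sets
# the locally-administered bit (0x02): second hex digit is 2, 6, a or e.
random_laa_mac() {
  printf '%02x' $(( (RANDOM & 0xFC) | 0x02 ))
  local i
  for i in 1 2 3 4 5; do
    printf ':%02x' $(( RANDOM & 0xFF ))
  done
  printf '\n'
}

random_laa_mac   # e.g. 26:3f:a0:11:9c:e4
```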

Managing your containers
There are a few handy tools for container management, beyond the lxc-start and lxc-console mentioned before.

lxc-stop --name mycontainer does what you’d probably expect.

lxc-ls lists available containers on one line and running containers on the next. It’s a shell script so you can read it and work out how to find which containers are running in your own scripts.

lxc-ps --lxc lists all processes across all containers.

There are more like lxc-checkpoint, lxc-freeze and lxc-unfreeze that look pretty exciting but I haven’t had a chance to play with them. There are also a set of tools for restricting a container’s access to resources so that you can prioritize some above others. That’s a story for another day.

We already have information, why do we need more data?

Jawbone Up
I love gadgets and metrics and pretty graphs. I’ve been using Endomondo to track my cycling, primarily my commute. I know it’s about 6.5km and I ride it in between 22 and 26 minutes. I can even share or embed my workout here. I love that shit. When I get to work in 21 minutes I feel awesome. What it doesn’t tell me is that riding to work is good for my health. I don’t need a fancy app on my $500 mobile phone to tell me that.

Fancy pedometers like the Fitbit or Jawbone’s newly announced Up are neat gadgets, but are they really going to help anyone become healthier? More importantly are they going to help stem the slide of some countries into unhealthy patterns like obesity?

The target market for these devices is affluent and health conscious. They already know that they should be eating less fat, more fiber and exercising more. They can afford a good bicycle, a personal trainer or fancy running shoes. Anyone who is forking over a hundred dollars for a pedometer already knows more about the impact of their lifestyle on their health than these gadgets will give them.

These gadgets aren’t doing anything to educate less health literate Americans about living healthier lifestyles. They aren’t doing anything to address childhood nutrition here or around the world. They’re a distraction from the real problems we face.

I still kind of want one.

The full Bradley Manning / Adrian Lamo logs

(06:08:53 PM) What’s your greatest fear?
(06:09:24 PM) bradass87: dying without ever truly living
via Manning-Lamo Chat Logs Revealed | Threat Level

Everyone should read this (apparently) full chat log. In it Manning, a depressed 22-year-old questioning his gender, reached out to Lamo for someone to talk to. While Manning poured out his story of growing up with abusive, alcoholic parents, Lamo repeatedly assured him that he was safe to talk to. Manning denied having a “doctrine”, but he did have a clear sense of morality:
(1:11:54 PM) bradass87: and… its important that it gets out… i feel, for some bizarre reason
(1:12:02 PM) bradass87: it might actually change something
(1:13:10 PM) bradass87: i just… dont wish to be a part of it… at least not now… im not ready… i wouldn’t mind going to prison for the rest of my life, or being executed so much, if it wasn’t for the possibility of having pictures of me… plastered all over the world press… as a boy…

via Manning-Lamo Chat Logs Revealed | Threat Level

When Lamo was chatting to him, Manning had been demoted and had lost his access to classified networks. Any good or ill he had done by leaking to WikiLeaks was over. He was going to be discharged. He’d started to set up his identity as a woman. He was just 22, trying to serve his country, serve humanity and work out who he was.

Emissions taxing and trading in Australia

After many years of proposals, counter-proposals, coups and general disappointment the Australian Government has announced its scheme to allow the economic impact of carbon pollution to be managed by the free market.

I’m really really proud that as a country we’ve reached the point where there’s a plan in place. It’s taken longer than it should have. In 2007 it was a policy of both major parties yet for a while more recently it had been a policy of neither. We’re one of the first countries in the world to pull this off.

The plan announced by Julia Gillard sets an initial price of AUD $23 per ton of CO2 produced, along with subsidies for many industries and tax cuts to low and middle income Australians. A few heavily polluting industries will take a hit, but that’s the idea. Businesses that aren’t pollution-centric seem to largely support the scheme.

There will be a transition in 2015 to a free market emission trading scheme. Amusingly Tony Abbott, leader of the conservative, supposedly free market Liberal Party, opposes the idea of allowing markets to determine prices. There’s been some shrill, populist freak-out over new taxes, but the impact on peoples’ lives is likely to be small enough that it won’t be an issue at the next election. I’m expecting the bursting housing bubble will be more of a worry.

As we sunset foursquare APIv1 and announce some amazing new milestones for APIv2, now seemed like as good a time as any to reflect on some of the decisions we made in APIv2 and see how they’re holding up. Fortunately, we were able to draw on both our experience with APIv1 and from tight iteration with our client developers (especially Anoop, who is awesome!). We’re pretty happy with the result, but we still made a few mistakes. Hopefully there are some lessons for anybody else out there who gets stuck designing an API.

via APIv2: Woulda, coulda, shoulda | Foursquare Engineering Blog.

Showing git status in my Zsh prompt

I like Zsh. It’s a powerful, efficient shell. It’s better than Bash by just about every metric (better performance, more features, better sh compatibility). I really have no idea why people keep using Bash.

Anyway, I put together a little piece of zshrc to show my current git status in the right-hand prompt, a prompt that’s shown right-aligned in the shell. Zsh has a couple of features that make this really easy.

First, the prompt_subst option instructs the shell to do variable substitution when evaluating prompts. So if you were to set your prompt to '$PWD> ' then your prompt would contain your current directory. Of course you wouldn’t do it that way, %~ does that much more nicely, but that takes us to Zsh’s second feature: ridiculously powerful variable substitution and expansion. In my prompt I just use the simple $(shell-command) substitution, but there’s a full complement of file I/O, string manipulation and more to be had.
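My snippet is to the effect of the following sketch (the git_prompt_info name and the output format are my own invention; any git invocation that prints what you want would do):

```shell
# Re-evaluate the prompt string each time the prompt is displayed
setopt prompt_subst

# Print the current git branch, plus a * if the tree is dirty,
# or print nothing at all when we're not inside a git repository
git_prompt_info() {
  local branch dirty
  branch=$(git symbolic-ref --short HEAD 2>/dev/null) || return
  [ -n "$(git status --porcelain 2>/dev/null)" ] && dirty='*'
  echo "(${branch}${dirty})"
}

# Single quotes delay the $(...) substitution until display time
RPROMPT='$(git_prompt_info)'
```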

Breaking into a debugger in Python

This is probably old news to most folks, but I only found out about this recently, more than twelve years into being a Python programmer.

The pdb (Python Debugger) module has a few useful functions. For years I’ve been using pdb.pm(), the postmortem debugger. It can be run from the Python shell to invoke a debugger after an exception. Traditionally I’ve done:

% python -i
(some exception occurs)
>>> import pdb
>>> pdb.pm()

But for this to work you need to be able to pass arguments to the interpreter (-i) and the code needs to throw an exception that isn’t caught.

Enter pdb.set_trace(). It begins the same interactive debugger that pdb.pm() provides, on request, from code. So in the middle of a function that’s confusing me I add:

from pdb import set_trace;set_trace()

and my code will stop, ready for me to tear apart its runtime state. I just wish it had a more descriptive name so that I’d have found it earlier.

Bonus: printing stack traces
Python’s stack traces in exceptions are pretty useful. It used to be difficult to get the current stack in Python, involving raising and catching an exception. Now the traceback module has traceback.print_stack() and traceback.extract_stack() methods.
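As a quick sketch of the difference (the function names are just for illustration, and the FrameSummary.name attribute assumes a modern Python 3): print_stack() writes the current stack to stderr, while extract_stack() hands it back as data you can inspect, no exception required:

```python
import traceback

def inner():
    # extract_stack() returns the current call stack as a list of
    # FrameSummary objects, outermost frame first -- no raise needed
    return [frame.name for frame in traceback.extract_stack()]

def outer():
    return inner()

names = outer()
print(names[-2:])  # the two innermost frames: ['outer', 'inner']
```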