I made one blog post in 2013. I’m going to do better this year. I miss writing on the internet. In the past year I’ve found myself drifting away from consuming short-form writing on Twitter and Facebook to return to enjoying longer form posts on blogs and to a lesser extent on Google+, Tumblr and Medium. So here I am.

This morning I made waffles. While I don’t have the actual recipe my dad used when I was growing up, I found and used one built on the same technique – beaten egg whites. For four of us I doubled this recipe and had way too much batter:


  • 1¾ cups all-purpose flour
  • 2 tsp baking powder
  • ½ tsp salt
  • 1 Tbsp granulated sugar
  • 3 eggs
  • 1¾ cups whole milk
  • ¼ cup vegetable oil
  • 2 oz (½ stick) whole butter


  1. Melt the butter – on the stove at low heat or in the microwave. Let it cool but not solidify.
  2. Separate the eggs.
  3. Sift together the flour, baking powder and salt.
  4. Beat the egg whites until they’re stiff. Add the sugar and continue beating until it forms peaks.
  5. Beat the yolks by hand, then mix in the milk, oil and butter.
  6. Mix the liquid ingredients into the dry ingredients.
  7. Fold the beaten egg whites into the batter. Don’t over mix – they’re going to make those waffles fluffy.
  8. Make waffles with your waffle iron on its hottest setting.

You could add half a teaspoon of vanilla extract to the liquid ingredients but I generally don’t.

This morning I served them with bacon, fresh raspberries, butter, maple syrup and boysenberry syrup.

Horse Meat

I like horse meat. It’s delicious and healthy. And not so different from beef. I’m really enjoying watching the unfolding European horse meat scandal. Even countries like France where horse is regularly eaten are outraged that they’ve been lied to.

The scandal has exposed the complicated supply chain in the European cheap meat trade. It’s exposed other lovely facts too, like the rule that a “beef burger” in the UK only needs to be 47% beef. What’s the rest of it? Pretty much anything, but generally protein powder and highly processed meat off-cuts.

In the 1990s the UK banned mechanically recovered meat, commonly referred to as “pink slime”, after it was linked to the spread of CJD, the human form of mad cow disease. Pink slime was replaced by “de-sinewed meat” in cheap meat products until last year, when it was reclassified and no longer allowed in cheap burgers. The way I look at it, since the US is only now starting to eliminate pink slime, we’re about 15 years away from having this scandal ourselves.

I think it’s great whenever people are exposed to their food chain. We need to demand more accountability, transparency and integrity. If that means we can’t afford to eat meat every day, but the meat we do eat is of higher quality, then that’s a fine outcome.

I’d go so far as to say I’m Lovin’ It.

2012, The Year of the Linux Personal Computer

2012 Q3 PC sales: 87.5M
2012 Q3 Android sales: 122.5M

For sure, many would-be PC buyers were holding out for Windows 8 and the refreshed models being launched alongside it, but that still means that last quarter 1.4 times as many Linux computers were sold as Windows computers.

You might try to argue that an Android device isn’t a personal computer, but apart from writing software, everything I do on my computer I do on my Android devices. You might argue that Android isn’t “Linux” enough, but it’s certainly largely open source and runs a Linux kernel. There’s plenty I don’t like about the way Android is put together compared to a traditional stock Unix system, but hey, look at the numbers – 122.5M Linux computers shipped last quarter. That’s a whole lot of Freedom!

Creepy Flashing Heads

Sharon picked up some cheap styrofoam heads – previously used to display wigs – from a store, for a Halloween decoration project of hers. I got her to pick up a couple more for me to use. After kicking some ideas around, Aaron and I decided that it would be creepy to have lights flashing on them in a random fashion. This sounded like exactly the project I wanted to use my TI MSP430 Launchpad for. I don’t have very much microcontroller experience – pretty much just making lights flash on or off – so that’s perfect.

I managed to get a MSP430 development environment up and running on Linux pretty easily. It was just:

apt-get install mspdebug gcc-msp430

The code is pretty simple. Two digital IO pins drive the LEDs and a watchdog timer wakes up every 32ms to decide if an LED should be turned on or off or left alone. In the end much of the code is overkill for such a simple application, but the effect does look kind of cool.

Turkish Bread (Pide)

One of the foods I miss most from Australia is Turkish bread, or pide. In Australia it’s common to have it as a sandwich bread or with dips; in Berlin it holds the amazing doner kebabs. In the US, or at least in the Bay Area, it’s completely unknown and sadly unavailable. When I visited Australia for a few days for work recently I didn’t have time to visit any family, but the first meal I had was a sandwich on Turkish bread.

This weekend I’ve been trying to cook Turkish bread. A minor mix-up with units ruined my first attempt on Saturday. I tried fixing it by adding the right amount of flour and ended up with a bread, but it wasn’t what I was after. Today I got it right.

I’m pretty much using the recipe from SBS, but here’s more precisely what I did.


  • 1 tablespoon (2 x 7 g sachets) dried yeast
  • 1 pinch caster / extra-fine / baker’s sugar
  • 375ml warm water
  • 480g strong bread flour
  • 1 tsp salt
  • 60ml extra-virgin olive oil
  • 2 free-range eggs
  • 50 ml milk
  • nigella and / or sesame seeds


  1. Dissolve the sugar and yeast in 125ml of the warm water in a medium bowl. Set it aside until it froths, about 10 minutes.
  2. Mix in 90g of the flour, using fingers if you have to. It will be a quite liquid almost batter-like consistency. Cover with a tea towel and leave in a warm place for 30 minutes as it forms a sponge. It will at least double in size so make sure that your bowl has room.
  3. Put the remaining flour (390g) and the salt in the bowl for an electric mixer. Make a well in the center and add the rest of the warm water (250ml), the sponge and the olive oil. Mix it with your fingers – it will be pretty wet and sloppy.
  4. Use the mixer’s dough hook to knead the sloppy dough for 10 to 15 minutes, until it’s smooth and springy.
  5. Transfer the dough to a lightly oiled bowl and leave it covered with a tea towel until it has doubled in size, about an hour.
  6. Preheat the oven as hot as it will go with a large pizza stone – big enough to hold two pide.
  7. Divide the dough into two and form rounds on a lightly floured surface. Dust them with flour (to prevent sticking) and cover them with a tea towel for about half an hour.
  8. Make an egg wash by mixing the eggs and milk.
  9. Place a piece of parchment paper on a pizza peel, then flatten the dough rounds into 20cm long ovals onto the peel.
  10. Brush the dough ovals with egg wash. Dip your fingertips into the wash and drag them lengthwise down the dough to form the grooves.
  11. Sprinkle with nigella and / or sesame seeds.
  12. Slip the dough ovals on the parchment paper onto the pizza stone and bake until golden brown, about 10 minutes.

Tonight we just ate the fresh bread by itself, but we saved the second loaf to eat tomorrow when we’re grilling burgers. I’m thinking of seasoning mine like some of the doner recipes I’ve found.

Cloudy with a chance of downtime

AWS went down again last Friday. I wouldn’t normally care, I only run non-critical toy projects out of their infrastructure, but I know that it disrupted a friend’s wedding and that’s just not cool.

Amazon’s public statement about the event is fairly detailed and fairly believable. In one of their northern Virginia datacenters “each generator independently failed”. They don’t state how many generators they have, but their vagueness and references to “primary and backup power generators” seem to indicate that they have two.

Since they had UPS systems, a power outage with generator failure from 7:24pm PDT meant that the datacenter only lost power between 8:04pm PDT and 8:24pm PDT, and apparently many systems had power restored from 8:14pm PDT. So why was the outage for customers so long?

The majority of EBS servers had been brought up by 12:25am PDT on Saturday. However, for EBS data volumes that had in-flight writes at the time of the power loss, those volumes had the potential to be in an inconsistent state.

I always understood that the value of having a UPS was two-fold: you could survive small power interruptions, and you could safely shut down so that when power was restored your systems would return without requiring manual intervention. The Amazon cloud does not seem to be good at the latter.

At the most basic level it would seem prudent to force EBS servers to switch to a more cautious mode as soon as grid power is lost. If a server is running on batteries or even on a generator then forcing disks to remain in a consistent state is a pretty basic precaution. How hard is it to mount -o remount,sync automatically? Obviously there’s performance degradation with that, but it seems a small price to pay on the rare occasion when there’s clear and present risk of data loss. Who wouldn’t take an occasional performance hit in exchange for reliable disks and shorter outages?
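On a Linux guest that sync mode is a one-line change. As a config fragment, a hypothetical /etc/fstab entry for a data volume kept in write-through mode (the device name and mount point here are made up):

```
# flush writes straight through so an abrupt power loss can't strand
# buffered data (slower, but the volume stays consistent)
/dev/xvdf   /data   ext4   defaults,sync   0   2
```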

Bringing back EC2 instances is a harder problem. Fundamentally the machines that run EC2 instances don’t know or care much about the VM images that run on them. That’s what makes them easy to manage, that’s what makes it easy to spin up new images. On the other hand my simple web service that went down for hours last week does simply boot up. Because it’s deployed into this automatically managed cloud it has to. Had I been running on my own hardware in the exact same datacenter my downtime would have been on the order of 20 minutes rather than hours.

Because we’re building on top of a system involving half a million servers for compute alone we’re subject to the complexities of very large scale systems, even for our very simple systems. Each time a set of cascading failures causes extensive downtime we have to ask ourselves if the benefits of such complicated systems outweigh the cost.

Seven Inches

This year at Google I/O I got a Nexus 7, the new tablet from Google and Asus. First of all Android 4.1 Jelly Bean is great – it has a ton of incremental improvements over the already excellent ICS, plus Google Now, which promises to be a really useful daily tool.

Last year I got the iPad-styled Galaxy Tab 10.1 at I/O. It’s a beautiful piece of hardware and the Honeycomb OS it came with was lovely. Honeycomb’s spectacular GMail and Calendar apps have only improved slightly in ICS and JB and remain among the main reasons I like Android so much. Nonetheless I never found myself actually using my Galaxy Tab. I would take it on planes to play games or watch movies and use it to read books in hotels or occasionally at home, but it never became part of my daily life.

I’m writing this on my Nexus 7. I’ve taken it pretty much everywhere I’ve been since I first opened it up. It fits into every bag I carry, can squeeze into my back pocket and is comfortable to use one-handed on a busy BART train in the morning. It has the first touch keyboard that feels like the right size – larger tablets fail at thumb typing – and Google’s predictive keyboard is invaluable.

Overall I feel like I have a new tool, not just another gadget.

Simply logging JavaScript calls

When debugging complicated JavaScript one thing I find myself constantly doing is using console.log() to print out what functions are being called in what order. JavaScript is single-threaded and event driven so it’s often not entirely clear what functions will be called in what order.

Traditionally I’ve done something like this:

function foo(bar) {
  console.log('foo');
  // ...
}

but last night I came up with something better. It’s probably not completely portable but it seems to work fine in recent Chrome / Safari / Firefox, which is really all I’m going to be using for debugging anyway:

function logCall() {
  console.log(logCall.caller.name + '(' +
    Array.prototype.slice.call(logCall.caller.arguments)
      .map(JSON.stringify).join(', ') + ')');
}

Just add logCall() to the start of functions and the function call (including serialized arguments) will be logged to the console. Easy and foolproof.

LXC on Ubuntu 11.04 Server

For a while I’ve been interested in Linux Containers (LXC), a new way of providing Linux virtual machines on Linux hosts. Unlike machine virtualization systems like Xen, VMWare, KVM and VirtualBox, LXC is an OS-level virtualization system. Instead of booting a complete disk image, Linux Containers share the same kernel as the host and typically use a filesystem that is a sub-tree of the host’s. I’d previously tried to get this running on a machine that was connected by WiFi, but that basically doesn’t work – bridged network interfaces don’t play nicely with wireless network interfaces. I’ve just set up a machine on wired ethernet, found a great introductory article, and I’m up and running.

Stéphane Graber’s article is great, but it hides a bunch of the details away in a script he wrote. Here I’m going to explain how I got LXC up and running on my Ubuntu 11.04 system as simply as possible.

Install the required packages
Everything you need to set up and run LXC is part of Ubuntu these days, but it’s not all installed by default. Specifically you need the LXC tools, debootstrap (which can create new Ubuntu / Debian instances) and the Linux bridge utilities.

apt-get install lxc debootstrap bridge-utils

Set up the network
To put the containers on the network we use a bridge. This is a virtual ethernet network in the kernel that passes ethernet frames back and forth between the containers and the physical network. Once we’ve set this up our current primary network interface (eth0) becomes just a conduit for the bridge (br0) so the bridge should be used as the new primary interface.

Add the bridge in /etc/network/interfaces:

# LXC bridge
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

And change eth0 to be manually configured, ie: it doesn’t DHCP:

auto eth0
iface eth0 inet manual

Now bring up the interface:

ifup br0

Set up Control Groups
Control Groups are a fairly new Linux mechanism for isolating groups of processes as well as managing the resources allocated to them. The feature is exposed to userland via a filesystem that must be mounted, so put this in your /etc/fstab:

cgroup          /cgroup         cgroup          defaults        0       0

Create the mount point, and mount it:

mkdir /cgroup
mount /cgroup

Now we’re ready to create containers
An LXC container is a directory under /var/lib/lxc. That directory contains a configuration file named config, a filesystem table called fstab and a root filesystem called rootfs. The easiest way to do that is to use the lxc-create script. First we create a configuration file that describes the networking configuration, let’s call it network.conf:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0

and then run lxc-create to build the container:

lxc-create -n mycontainer -t natty -f network.conf

Note, -n lets you specify the container name (ie: the directory under /var/lib/lxc), -f lets you specify a base configuration file and -t lets you indicate which template to use.

Container templates are implemented as scripts under /usr/lib/lxc/templates. Given a destination path they will install a base Linux system customized to run in LXC. There are template scripts to install Fedora, Debian and recent Ubuntus as well as minimal containers with just busybox or an sshd. Template scripts cache completed root filesystems under /var/cache/lxc. I’ve found the container template scripts to be interesting and quite readable.

Starting a container
Starting the container is as simple as:

lxc-start --name mycontainer --daemon

If you leave off the --daemon then you’ll be presented with the container’s console. Getting a console is as simple as:

lxc-console --name mycontainer

Keeping containers straight
Each time a container is started it will be allocated a random MAC (ethernet) address. This means that it’ll be seen by your network as a different machine each time it’s started and it’ll be DHCPed a new address each time. That’s probably not what you want. When it requests an address from the DHCP server your container will pass along its hostname (eg: mycontainer). If your DHCP server can be configured to allocate addresses based on hostname then you can use that. Mine can’t, so I assign static ethernet addresses to my containers. There’s a class of ethernet addresses that are “locally administered addresses”. These have a most-significant byte of the binary form xxxxxx10, ie: x2, x6, xA or xE (where x is any hex digit). These addresses will never be programmed into a network card by a manufacturer. I chose an arbitrary MAC address block and started allocating addresses to containers. They’re allocated by adding the following line after the network configuration in the /var/lib/lxc/mycontainer/config file:

lxc.network.hwaddr = 66:5c:a1:ab:1e:01
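If you want to mint such addresses yourself, here’s a little bash sketch (the 66:5c:a1 block above was an arbitrary pick; this one randomizes the whole address within the locally-administered space):

```shell
# Print a random locally administered, unicast MAC address.
# In the first byte, bit 0x02 means "locally administered" and bit 0x01
# (multicast) must stay clear, so the second hex digit is 2, 6, a or e.
first=$(( (RANDOM % 256 & 0xfc) | 0x02 ))
mac=$(printf '%02x' "$first")
for _ in 1 2 3 4 5; do
  mac="$mac:$(printf '%02x' $(( RANDOM % 256 )))"
done
echo "$mac"
```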

Managing your containers
There are a few handy tools for container management, beyond the lxc-start and lxc-console mentioned before.

lxc-stop --name mycontainer does what you’d probably expect.

lxc-ls lists available containers on one line and running containers on the next. It’s a shell script so you can read it and work out how to find which containers are running in your own scripts.
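In that spirit, here’s roughly the check those scripts boil down to, as a bash sketch with the paths parameterized – on a real host you’d pass /var/lib/lxc and /cgroup; the demo layout below is made up:

```shell
# A container counts as "running" if a cgroup directory bearing its
# name exists; otherwise it's merely defined.
list_containers() {
  local lxc_root=$1 cgroup_root=$2 dir name
  for dir in "$lxc_root"/*/; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    if [ -d "$cgroup_root/$name" ]; then
      echo "RUNNING $name"
    else
      echo "STOPPED $name"
    fi
  done
}

# Try it against a throwaway layout: two containers, one "running".
demo=$(mktemp -d)
mkdir -p "$demo/lxc/web" "$demo/lxc/db" "$demo/cgroup/web"
list_containers "$demo/lxc" "$demo/cgroup"
```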

lxc-ps --lxc lists all processes across all containers.

There are more like lxc-checkpoint, lxc-freeze and lxc-unfreeze that look pretty exciting but I haven’t had a chance to play with them. There are also a set of tools for restricting a container’s access to resources so that you can prioritize some above others. That’s a story for another day.