Reusing Passwords

I have a confession. Sometimes I reuse passwords. Not for anything that “matters”, but I’ve ended up using a couple of passwords a lot of times. And inevitably some of those sites get hacked. But where did I use them?

Chrome remembers all my passwords but unfortunately doesn’t seem to offer a straightforward API to get at them. Conveniently, it does sync my passwords to my computer’s password store, and there’s an API for that.

I wrote a little script and I’ve been going through generating unique passwords for all the unimportant sites and turning two-factor authentication on where it’s offered. The Secret Service database (and Chrome) seem to sometimes end up with multiple entries for a single site, and Chrome doesn’t seem to sync my updates immediately, but I’ve found this a helpful start.

Partisan Divide

There’s an American election on Tuesday. Whatever the outcome of the races, the partisan polarization is disturbing. Roughly 40% of the electorate considers each presidential candidate to be unqualified to even run. I don’t have the right to vote but my perspective is just as absolute. I’m right, but I’d say that, wouldn’t I.

Each party chose a candidate that was dismissed out of hand by half the country as even a valid choice to offer. The core of the debate has not been over policy or even really a vision of the future of the country, but of the fatal personal flaws of the other candidate.

Where do we go from here? If America elects Trump on Tuesday then I and half the country won’t just feel defeated and disappointed, worried about the next four years and the country our children will inherit – we’ll be skeptical of the president-elect’s eligibility to hold the office to which he was democratically elected. If Clinton prevails, the other half of the country will feel the same way.

Perhaps more disturbingly, support for the candidates breaks heavily along gender, ethnicity and class lines. Whoever wins, whole communities will not only feel unrepresented in the White House, they’ll feel that its occupant is illegitimate. And for all their various skills, neither candidate has demonstrated skill in uniting the nation.

Tyranny is mostly pleasant

In my life I’ve been lucky enough to visit some brutal military dictatorships. From Suharto’s Indonesia as a child to Mubarak’s Egypt more recently, they’ve been really pleasant to visit. My impression is that most citizens of these countries were fairly unencumbered by the political system they lived under. Some people had a terrible time – Islamists in Egypt, Timorese and Papuans in Indonesia were tortured and murdered, their languages and beliefs suppressed – but for most people this wasn’t an issue.

Thinking about technology we have a similar situation. Most people are happy to rely on a proprietary operating system like MacOS or Windows because even though it takes away some freedoms, for them these freedoms aren’t as important on a day-to-day basis as the convenience that the platform provides.

Even though it’s done a terrible job of protecting women and marginalized minorities from abuse, Twitter is a really convenient conversation platform for me. Facebook too, with its real-names policy, excludes many people from honest, safe expression, but for a white cis man like me it’s really convenient.

On the other hand I run Linux on my personal computers because software freedom is morally important to me and the practical benefits for my fringe use case (programming) are significant.

If we’re going to build and promote Free technology – both FLOSS and a decentralized web – we need to accept that the pure principle of freedom isn’t enough to kick-start change. Enough people need to suffer enough discomfort to trigger a revolution.

Decentralized Web: My Thought Experiment

I’m at the Decentralized Web Summit today and it’s all very interesting. There are some big picture ideas of how the future should be. There are all sorts of interesting disparate technologies filling all kinds of holes. But I have a thought experiment that I’ve used to understand where we need to go and what we need to build to get there.

Uber. Or Lyft or AirBnB, or even Etsy.

This new sharing economy supposedly shakes up traditional businesses by harnessing the distributed power of the internet, but when you ignore shiny apps these businesses look a lot like traditional rent-seeking middlemen.

It feels like a bug that we are making new businesses that look like such old businesses. Ride sharing shouldn’t need a middleman. Prospective passengers and drivers should be able to discover each other, agree on a transaction, go for a ride and then make payment.

We have a lot of the pieces already, but reputation is the big challenge I see in all this. Even centralized systems struggle with reputation, and without one we don’t have a good way to know whether to trust that a driver is competent or that a guest won’t trash my apartment. I don’t know how to solve this, but I sure hope someone is thinking about it.

Low Fidelity Abstraction

It’s only through abstraction that we’re able to build the complex software systems that we do today. Hiding unimportant details from developers lets us work more efficiently and most importantly it allows us to devote more of our brain to the higher-level problems we’re trying to solve for our users.

As an obvious example, if you’re implementing a simple arithmetic function in assembly language you have to expend a lot of your brain power tracking which registers are used for what, how the CPU will schedule the instructions and so on. In a high-level language you just worry about whether you’ve picked the right algorithm and let the compiler worry about communicating it correctly and efficiently to the processor.


More abstraction isn’t necessarily good though. If your abstractions hide important details then either the cognitive burden on developers is increased (as they keep track of important information not expressed in the API) or their software will be worse (if they ignore those details). This can take many forms, but generally it means making things that can be expensive feel free by overloading familiar programming language constructs. Here are some examples…

Getters and Setters

Getters and setters can implicitly introduce unexpected, important side effects. Code that looks like:

foo.bar = x;
y = foo.baz;

is familiar to programmers. We think we know what it means and it looks cheap. It looks like we’re writing a value to a memory location in the foo structure and reading out of another. In a language that supports getters and setters that may be what’s happening, or much more may be happening under the hood. Some common unexpected things that happen with getters and setters are:

  • unexpected mutation – getting or setting a field changes some other aspect of an object. For example, does setting person.fullName update the person.firstName and person.lastName fields?
  • lack of idempotency – reading the same field of an object repeatedly shouldn’t change its value, but with a getter it can. It’s often even convenient to have a nextId getter that returns an incrementing id or something.
  • lack of symmetry – if you write to a field, does the same value come out when you immediately read from it? Some getters or setters clean up data – useful, but unexpected.
  • slow performance – setting a field on a struct is just about the cheapest thing you can do in high-level code, while calling a getter or setter can do just about anything. Expensive field validation for setters, expensive computation for getters, and, even worse, reading from or writing to a database are unexpected yet common.

Getters and setters are really useful to API designers. They allow us to present a simple interface to our consumers but they introduce the risk of hiding details that will impact them or their users.
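To make those surprises concrete, here’s a small JavaScript sketch (the Person class and its fields are hypothetical, invented for illustration): the fullName setter silently mutates two other fields, and the nextId getter isn’t idempotent.

```javascript
// Hypothetical class illustrating hidden getter/setter behavior.
class Person {
  constructor(first, last) {
    this.firstName = first;
    this.lastName = last;
    this._nextId = 0;
  }

  // Looks like a plain field, but writing it mutates two other fields.
  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }
  set fullName(value) {
    const [first, last] = value.split(" ");
    this.firstName = first; // hidden mutation
    this.lastName = last;   // hidden mutation
  }

  // Not idempotent: every read changes state.
  get nextId() {
    return this._nextId++;
  }
}

const p = new Person("Ada", "Lovelace");
p.fullName = "Grace Hopper";     // silently rewrites firstName and lastName
console.log(p.firstName);        // "Grace"
console.log(p.nextId, p.nextId); // 0 1 – two reads, two different values
```

At the call sites nothing hints that an assignment rewrote other fields or that a read incremented a counter – which is exactly the problem.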

Automatic Memory Management

Automatic memory management is one of the great steps forward for programmer productivity. Managing memory with malloc and free is difficult to get right, often inefficient (because we err on the side of simple predictability) and the source of many bugs. Automatic memory management introduces its own issues.

It’s not so much that garbage collection is slow as that it makes allocation look free. The more allocation that occurs, the more memory is used and the more garbage needs to be collected. The performance price of additional allocations isn’t paid by the code that’s doing the allocating but by the whole application.
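As an illustration (hypothetical functions, sketched in JavaScript): both versions below compute the same midpoint and look equally cheap at the call site, but only one allocates a fresh array – and garbage – on every call.

```javascript
// Each call allocates a new two-element array, invisibly to the caller.
function midpointAllocating(a, b) {
  return [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2];
}

// Same computation, but writing into a caller-supplied buffer:
// zero allocations per call.
function midpointInto(out, a, b) {
  out[0] = (a[0] + b[0]) / 2;
  out[1] = (a[1] + b[1]) / 2;
  return out;
}

const a = [2, 4], b = [6, 8];
console.log(midpointAllocating(a, b)); // [4, 6] – a fresh array every call
const out = [0, 0];
console.log(midpointInto(out, a, b)); // [4, 6] – reuses the caller's buffer
```

Call midpointAllocating in a hot loop and the collector pays for millions of short-lived arrays, even though nothing at the call site suggests a cost.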

APIs’ memory behavior is hidden from callers, making it unclear what their cost will be. Worse, in weirder automatic memory management systems like Automatic Reference Counting in modern versions of Objective-C, it’s not clear whether APIs will retain objects passed to them or returned from them – often even to the implementers of the API.


Remote Procedure Calls

It’s appealing to hide inter-process communication and remote procedure calls behind interfaces that look like local method calls. Local method calls are cheap, reliable, don’t leak user data, don’t have semantics that can change over time, and so on. Both IPC and RPC lack those guarantees, and lack them to radically different extents. When we make calling remote services feel the same as calling local methods we remove the chore of using a library but force developers to carry the burden of the subtle but significant differences in our meagre brains.
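Here’s a JavaScript sketch of the pattern (all names hypothetical; the transport is injected so the example runs without a network): getUser reads like a cheap local call, but every invocation is a round-trip with its own latency and failure modes.

```javascript
// A remote "service" disguised as an ordinary object.
// transport stands in for the network layer (fetch, a socket, ...).
function makeUserService(transport) {
  return {
    async getUser(id) {
      // Looks like a local method call at the call site, but each
      // invocation pays a round-trip: latency, partial failure, and
      // staleness are all hidden behind this innocent-looking line.
      const resp = await transport(`/users/${id}`);
      if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
      return resp.body;
    },
  };
}

// A fake in-memory transport so the sketch is self-contained.
const fakeTransport = async (path) =>
  path === "/users/1"
    ? { ok: true, body: { id: 1, name: "Ada" } }
    : { ok: false, status: 404 };

makeUserService(fakeTransport)
  .getUser(1)
  .then((user) => console.log(user.name)); // prints "Ada"
```

The caller of getUser has no syntactic cue that this “method” can time out, fail halfway, or return data that was stale before it arrived.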


But I like abstraction. It can be incredibly valuable to hide unimportant details but incredibly costly to hide important ones. In practice, instead of being aware of the costs hidden by an abstraction and taking them into account, developers will ignore them. We’re a lazy people, and easily distracted.

When designing an API step back and think about how developers will use your API. Make cheap things easy but expensive things obviously expensive.

HTTP/2 on nginx on Debian

I run my web site off a Debian server on GCE. I like tinkering with the configuration. I hear that HTTP/2 is the new hot thing, and supporting it means supporting ALPN, which means upgrading to OpenSSL 1.0.2 and nginx 1.9.5 or newer. But those aren’t available in Debian 8.

I used apt pinning to bring in versions of nginx and OpenSSL from testing into my jessie server. I first added sources for testing by creating a file /etc/apt/sources.list.d/testing.list:

deb testing main non-free contrib

Then I configured my pin priorities by creating /etc/apt/preferences with:

Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 650

Package: *
Pin: release a=unstable
Pin-Priority: 600

After an apt-get update I could install the version of nginx from testing, bringing in the appropriate version of OpenSSL: apt-get -t testing install nginx-full

Then it was just a matter of changing:

listen 443 ssl;

to:

listen 443 ssl http2;

wherever I wanted it.

Now it looks like I’m serving over HTTP/2. Not that it makes a whole lot of obvious difference yet.

Writing ES6 without a transpiler

I like modern JS features. I’ve been using some like const and let for a long time in specific contexts like Gecko chrome (when I worked on Flock & Songbird). Recently I’ve been playing with the features that have been standardized as ES6.

I tried a bunch of different build-system and transpiler approaches but I found them all unsatisfying. Even with automatic transpilation on save, even with source maps, I felt that I was missing too much of the edit/reload cycle that makes web JS programming such a pleasure. Even with low latency I couldn’t take advantage of things like being able to edit my source code inside the Chrome developer tools debugger.

Luckily stable Chrome supports a lot of ES6. Just restricting myself to that quite large subset gives me a great set of features to work with, without losing the development flow I like. Every six weeks I get more features to play with as a new Chrome release launches. When I want to deploy to older browsers I can transpile (Closure Compiler is great) and polyfill, but it’s part of the release / deployment flow rather than obstructing development.

So what am I using? Mostly:

  • Classes
  • Arrow functions
  • let and const
  • New collection types – Map, Set, etc
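For a feel of what that subset looks like in practice, here’s a small sketch (a hypothetical word-counting class, not from any real project) that uses all four and runs unmodified in a modern browser or Node:

```javascript
// Classes, arrow functions, let/const, and Map/Set – no transpiler needed.
class Counter {
  constructor() {
    this.counts = new Map();
  }
  add(word) {
    this.counts.set(word, (this.counts.get(word) || 0) + 1);
  }
  top() {
    let best = null;
    // Arrow function keeps `this` bound to the Counter instance.
    this.counts.forEach((n, word) => {
      if (best === null || n > this.counts.get(best)) best = word;
    });
    return best;
  }
}

const c = new Counter();
["to", "be", "to"].forEach((w) => c.add(w));
console.log(c.top()); // "to"

const unique = new Set(["to", "be", "to"]);
console.log(unique.size); // 2
```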

It’s pretty nice.

Australia Days

Cape Leveque
Photo credit: Finn Pröpper

I really like Australia. I was born there, I grew up there, and my parents, brothers and sisters all live there. I don’t have a single national identity, but ‘Australian’ is the one I put above the others. It’s a physically beautiful and inspiring country. At their best its people are generous, open, welcoming, relaxed and funny. At its strongest Australian nationalism is self-critical. The story of the first European settlers arriving as British criminals and founding a free nation is inspiring.

On the other hand, our country was founded on the lie of terra nullius, which continues to pervert many Australians’ understanding of their country. Indigenous Australians suffer incarceration and ill health at rates that are hard to comprehend, relative to the prosperous, free image we have of ourselves. Explicit genocide, dispossession, unequal treatment and paternalism have been replaced by active neglect and disrespect. Stan Grant’s speech is a must-watch:

Australia’s attitude to immigration has been troubled for a long time. A set of White Australia policies restricting non-European immigration persisted into the 1960s. Even then, non-English-speaking Europeans, especially Catholics, were looked down on and politically and economically marginalized. During the Second World War, unlike the United States, we interned not only Japanese Australians but also Australians of Italian and German descent – as I like to say, “our racism was colourblind”. Subsequent waves of Southern European, East Asian, and now South Asian and Middle Eastern immigrants are attacked verbally by politicians and physically by thugs.

Finally, Australia’s treatment of asylum seekers since the 1990s has been a stain on our reputation and our moral standing. Governments of all parties have imprisoned men, women and children in awful conditions, exposing them to horrible abuse – simply for arriving in Australia after escaping persecution without their paperwork in order. It’s a continuation and escalation of our history of xenophobic immigration policies.

So January 26th doesn’t get me that excited, but it does make me reflect. I’m glad my employer decided to use our home page on the occasion to highlight the historic oppression of our indigenous brothers and sisters. In four months’ time there will be National Sorry Day, a national day of atonement and reconciliation – one that recognizes where we’ve come from and how far we have to go to achieve the standards we set ourselves, and that celebrates the aspects of Australian culture I most identify with.

Brief first impressions of Rust

I had a little play with Rust this week. I’d been meaning to for a long time, but the arrival of 1.0 motivated me to spend a few hours playing around with the tools and the tutorial. I haven’t actually written anything myself yet – I’m sure I’ll have some different, perhaps more valid thoughts after I do that. I’ll probably write a web server – that’s been my go-to hello world program for the past 20 years or so.

But just working through the first couple of chapters of the excellent free, online Rust Book I’ve come away with some impressions. For better or worse I’m comparing Rust to Go and to a lesser extent to C and C++.

Something that Go promised was good tooling and strong opinions on source code organization. I like the tooling just fine, but the source code organization bugs me – I don’t like having all my code and the code I depend on under a single directory. Rust’s source code organization is much more appealing to me – a top-level directory with a configuration file Cargo.toml and then a src/ directory with my code. Dependencies are downloaded and stashed somewhere (who cares? I don’t).

Even playing around with the simple examples in the book I was missing “go fmt” and “goimports”. I haven’t gone deep enough into Rust to know if there’s an equivalent that everyone else is using or if the community has the (incorrect) opinion that source code formatting is a lifestyle choice. One of the most charming things about Go and Python is standardized source code formatting. In my team at work we’re using clang-format to (mostly) enforce a consistent style on our Objective-C.

For the actual code itself there’s a steep learning curve that I’m still at the bottom of, staring up. I know that the whole point of Rust is a type system that is powerful enough to allow programmers to express complexities of thread safety and resource ownership, but that means that it’s really complicated. I like that it’s explicit and designed to be easily parsed. I like that there’s type inference so I don’t have to type the confusing names of these types all the time. I still have a lot of learning to do.

After my brief time with Rust I’m excited that we have a language that is expressive enough to provide memory and concurrency safety while remaining explicit and predictable. I just hope that I and others are able to become comfortable with the syntactic line noise to use it effectively, and that we find the right problems so we can use it appropriately.


Dan McKinley wrote a great blog post about choosing boring technology. It’s worth reading, but the tl;dr is that by adopting new and exciting technology you introduce additional risk and instability, which limits your ability to innovate on your product. I agree with pretty much everything he said.

There’s another dimension to the boredom question though. I think it’s both a cause and effect of what Dan’s talking about. I call it Ian’s Rule Of Bored Programmers:

A smart programmer tasked with solving a simple problem will make the problem more complicated until it becomes interesting.

I’ve observed this time and time again, both in companies and in the open source world. To me the classic open source example is the GNU C Library. The team developing glibc, led for many years by Ulrich Drepper, is intelligent and experienced, but they’re working on a library that is for the most part wrappers around syscalls, plus printf and malloc implementations. Almost every user of the library is using the Linux kernel. Instead of a simple, stable library we’ve ended up with a complex, always-changing source of innovation and bugs.

At companies that only hire experienced programmers I’ve watched experienced developers faced with a simple but important problem add complexity to the solution to keep their attention. In some cases it takes the form of adopting or developing new languages, databases or other technology (like Dan writes about). In other cases developers choose to solve an abstract problem class rather than the specific concrete problem at hand, which ironically often ends up being more brittle and inflexible than a solution specific to the problem would have been.

There doesn’t seem to be an obvious solution to this. Sometimes simple problems are important enough that we want someone with experience to tackle them. For myself, I just try to check that I’m not adding complexity where it doesn’t belong.