A simpler mobile OpenID workflow?

Chris Messina posted today about the problems with current OpenID workflows for mobile users. In spite of a long list of chores I was intending to complete today, I spent a bit of time experimenting with an approach to solving this.

The main problem I wanted to solve was to allow a user to prove their identity without having to enter a password. Most mobile devices lack physical alphanumeric keyboards, and without one it’s very hard to fill out password fields.


Moving from Drupal to WordPress for blogging

I really like Drupal. It’s powerful and flexible, its code is clear and well written, and its extension mechanism is one of the best I’ve ever seen. But all that flexibility tends to distract me from actually writing blog posts, so I’ve moved back to WordPress. If you’re reading this over RSS, expect the usual disruption.

I first discovered WordPress late in 2003 from Mark Finlay, an Irish GNOME contributor. On December 24th 2003 he posted:

Everyone go look at WordPress: Seriouslessly sexy blogging Software, looks like it’s gonna kick MT’s ass.

I did, and in the end he was right. Unfortunately he died January 10th 2004. WordPress always reminds me of the three months when we lost Ettore, Chema and Mark.

There were a few scripts around for moving Drupal posts to WordPress but they were all pretty out of date so I took one and updated it. It has a lot of commentary in it. Take a look: drupal-to-wordpress.sql

Songbird 0.3

Last week we released Songbird 0.3. It’s the release I’ve been working on since I joined the Pioneers of the Inevitable in January. There are a whole lot of improvements in there to various parts of the application – the database engine was rewritten to be faster and more extensible, we added tabs to the browser, etc – but that’s not what’s really cool. What’s really cool is the new Web API we’ve developed.

Web pages being displayed in Songbird can (with the user’s permission) interact with pretty much the whole music player. We’re exposing both basic playback controls (what’s the current track, play, pause, next, etc) and access to the music library (see what’s in the library, add to it, etc). This means any music web site can offer a really rich integrated experience. Sites like last.fm or Pandora could offer recommendations without requiring you to download a plugin or desktop application; music stores like Emusic or Amazon.com wouldn’t need to provide a separate desktop application and could offer the kind of seamless player / web-service integration that only Apple is providing at the moment; and there are a million other possibilities that we haven’t even thought of yet. Right now you can see this at work on The Hype Machine.
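
To give a flavour of it, here’s a rough sketch of the kind of thing a page can do once the user has granted access. The property and method names below are illustrative assumptions rather than the exact shipping API, so check the Songbird developer docs for the real names:

if (window.songbird) {
  // basic playback control (illustrative names)
  songbird.play();
  songbird.next();

  // read what's playing right now
  var nowPlaying = songbird.currentArtist + ' - ' + songbird.currentTitle;

  // add a track from this page to the user's library
  songbird.libraries('main').add('http://example.com/track.mp3');
} else {
  // not running inside Songbird; fall back to a plain web player
}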

Steve and I have been hacking on Greasemonkey, getting it integrated with the bird and writing userscripts that add new functionality to web sites that don’t yet know about Songbird. My most complete hack is one that adds album previews (courtesy of Amazon.com’s music store) to album reviews on Metacritic and Pitchfork. It’s useful and it’s fun!

PS: yes, it supports your bloody iPod…

Flock 1.0

Flock 1.0 has finally shipped.

Almost two and a half years ago I met up with Bart Decrem for coffee in Palo Alto. He was working with Geoffrey Arone to build a company to write a new browser. Bart wasn’t fully clear on exactly what it would look like or exactly what it would do, but he believed that it was important to build a web browser that would break new ground in functionality and experience. He showed me a prototype that showed off two pretty amazing features: really simple shared bookmarking, and the ability to take pieces of web pages and store them for later reuse. I was excited, so I quit my job and joined up.

On the first day we were working out of the Bessemer offices in Menlo Park. Beyond Bart, Geoffrey and myself, the team included Daryl Houston, Chris Messina, Andy Smith and Anthony Young. Pretty quickly the vision came together. Even though Daryl is the only one of us left at the company, the 1.0 product that was just released this week is pretty much what we wanted to build.

In the past couple of years the company has had its ups and downs. There were personality conflicts that were handled pretty poorly (especially by me). There were times of hyper-productivity (I hacked for basically 24 hours straight in a hotel room in Portland to get a feature ready for 0.1) and unproductivity (what did I actually do in the last half of 2006?). But the high-level vision remained clear and nobody working on the product ever doubted that what we were doing was important.

How I set up gitweb

There don’t seem to be any straight-forward documents on how to set up gitweb, the web interface to git repositories. Or at least I couldn’t find any. After failing to get it working a couple of times and then succeeding a couple of times in different ways, here’s the recipe I came up with to get what you can see on http://git.ianloic.com/. What I have there is a bunch of git trees I’m following or working on. Perhaps not a bunch, but at least a few.

What you need:

  • a recent git installed
  • Apache
  • the gitweb files (from your git distribution) somewhere in your web tree

Gitweb comes as part of recent git distributions in a gitweb/ directory. This needs to be accessible from the web. I just have a git checkout in my web area and use that. When you build git you can supply all kinds of defaults that are applied when gitweb.perl is transformed into gitweb.cgi. That’s compile-time configuration, and since you can override all of it in a config file, I prefer to just use gitweb.perl directly. My config file gitweb-config.pl looks like this:

# where's the git binary?
$GIT = "/home/yakk/bin/git";
# where's our projects?
$projectroot = "/home/yakk/git.ianloic.com";
# what do we call our projects in the ui?
$home_link_str = "projects";
# where are the files we need for web display?
@stylesheets = ("/git/gitweb/gitweb.css");
$logo = "/git/gitweb/git-logo.png";
$favicon = "/git/gitweb/git-favicon.png";
# what do we call this site
$site_name = "Ian's git trees";
# these variables should be empty
$site_header = "";
$home_text = "";
$site_footer = "";
$projects_list = "";
$export_ok = "";
$strict_export = "";

Most of that should be fairly straight-forward. $GIT and $projectroot are local filesystem paths for the git binary and the directory that contains your git trees respectively. @stylesheets, $logo and $favicon are URLs; in my case I just use root-relative URLs. You can customize most of the other variables to tweak the text that gets displayed. Values are set for all of them, even the empty ones, because gitweb.perl contains junk defaults that need to be cleared.

I wrote a little wrapper CGI, invoke-gitweb.cgi, to invoke gitweb.perl with an environment variable telling it to load gitweb-config.pl:

#!/bin/sh
GITWEB_CONFIG=gitweb-config.pl exec perl git/gitweb/gitweb.perl

Drop gitweb-config.pl and invoke-gitweb.cgi into the directory that contains your git trees. Make invoke-gitweb.cgi executable and make sure your Apache knows to execute *.cgi files as CGI scripts.
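
If CGI execution isn’t already enabled for that directory, something along these lines in your Apache config or in the same .htaccess should do it (exact placement depends on your setup):

# allow CGI execution and treat *.cgi as CGI scripts
Options +ExecCGI
AddHandler cgi-script .cgi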

If you load invoke-gitweb.cgi in your browser you should see gitweb in action! But your URLs are stupid.

Let’s fix that with a .htaccess file:

RewriteEngine on
RewriteRule ^$ /invoke-gitweb.cgi
RewriteRule ^([?].*)$ /invoke-gitweb.cgi$1

This will make any requests for just the directory invoke gitweb and any requests starting with ? invoke gitweb with those arguments. It works pretty well for me.

Installing Ruby Gems in your home directory

NOTE: Updated for RubyGems 1.3.7

I’ve been playing with Ruby on my cheap shared hosting provider. They don’t include everything I need, so I had to install RubyGems in my home directory. The official instructions didn’t work for me, so here’s what I did…

First set up environment variables to tell Ruby and Gems where to find stuff:

export GEM_HOME=$HOME/lib/ruby/gems/1.8
export RUBYLIB=$HOME/lib/ruby:$HOME/lib/site_ruby/1.8

Download and unpack the RubyGems source (this is the version I downloaded; you should grab the latest):

wget http://rubyforge.org/frs/download.php/70696/rubygems-1.3.7.tgz
tar xzvf rubygems-1.3.7.tgz
cd rubygems-1.3.7

Run the setup.rb script with the right arguments to install into your home directory:

ruby setup.rb all --prefix=$HOME
ln -s $HOME/bin/gem1.8 $HOME/bin/gem

This will install the gem command into $HOME/bin and the Gems source into $HOME/lib/site_ruby. Gems will be installed into $HOME/lib/ruby/gems/1.8. You should add $HOME/bin to your path. If you want to install it somewhere else replace $HOME with the prefix you’d like to use.
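
For example, with the environment variables from above (plus $HOME/bin on your PATH) added to your shell profile, a quick sanity check looks something like this (adjust the file name for your shell):

# persist the environment for future logins
echo 'export GEM_HOME=$HOME/lib/ruby/gems/1.8' >> ~/.bashrc
echo 'export RUBYLIB=$HOME/lib/ruby:$HOME/lib/site_ruby/1.8' >> ~/.bashrc
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

# confirm the home-directory gem is the one being used
which gem
gem --version
gem environment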

Inbox Diet

Anyone who has lived with me or worked closely with me knows that I have a lot of trouble staying organized and a lot of trouble keeping on top of my email. Typically my inbox grows to a couple of thousand messages and then I “archive” it and start again. This happens every couple of years.

When I receive email I either discard it immediately, reply if it will only take a few moments, or, if it’s important and will take some time to deal with, leave it there to be dealt with later. That “later” almost never happens: my inbox grows, my family wonders why I never respond to my emails, my coworkers form the opinion that I’m unreliable, and my wife gets sick of people emailing her when they want to get my attention.

Last week I came up with a great idea. I’m going to put my inbox on a diet. Every day my inbox must shrink by at least 25 items. I estimated that I should be able to get through the 700 or so emails in a month if I can keep up the pace.

Being a nerd, the first thing I did was make a spreadsheet. For each day it tracks my goal inbox size and how I’m doing. I have ups and downs but I’m making solid progress towards zero. There’s even a chart.

As I’m going I’m also building the habit of processing email as it arrives. I can’t afford not to. I know that I’ll fall off the wagon again and I’ll let my inbox get out of control, but I think I’ve got a strategy for recovering.

Mozilla’s missed opportunities

In the past couple of months Mozilla Corporation has sought to narrow its scope. First there was the announcement that, at least for the next year and a half or so, the focus of platform development will be on developing the Firefox browser as the platform. This primarily means no standalone XULRunner, and being standalone was kind of the whole point. Now there’s the announcement that they’re dropping Thunderbird because they think it’s a distraction from building their web browser.

When I look at this graph from Janco Associates I see the growth of Firefox’s market share slowing.


After all, how many people are going to download and install a new web browser? How many people really care enough? How many distribution channels can you find with just a web browser? Mozilla Corporation’s closed-minded attitude to XULRunner as a platform and to non-browser applications might limit the spread of Free Software internet client software.

If we had a set of applications that could all run against a shared XULRunner runtime it would be easy and cheap to piggy-back them on each other. If a user downloaded Songbird so that they could listen to MP3 blogs on their phone it would be easy to offer them Firefox as a browser alternative, even if they hadn’t been interested in downloading and installing a replacement for Safari or Internet Explorer. If an ISP wanted to bundle Thunderbird to make it easier for their users to access email it would be cheap and easy for them to also offer Firefox, because on top of any other XULRunner application Firefox is a small download.

This kind of bundling is often done by the bad guys. If you install Apple’s QuickTime codecs on Windows, every update will trigger an iTunes install, even if you haven’t installed iTunes. I’m sure they’ll do the same thing for Safari on Windows. I’m not sure what iTunes’ market share on Windows is, but it seems to be significant. If all those users suddenly have Safari installed, that could potentially cause a big shake-up in browser market share.

We’re not trying to lock users into proprietary software and proprietary services. We’re trying to show them the potential of free software and open standards to make their lives better. We have the opportunity to do this better by embracing our platform and our ecosystem of applications, but the people holding the purse strings in the community are taking an overly conservative approach. I hope it doesn’t cost us victory and our users freedom.

The Making Of LOL Feeds

Last week I wrote and released my LOL Feeds site. It takes RSS or Atom feeds from the web and makes a series of lolcat-style images on a web page. It’s really way funnier than it sounds.

Initially I wanted to be able to auto-generate Jerk City comic strips based on my friends’ twitters, but when that seemed hard I opted for lolcat-style images. After all, we’d been seeing a lot of the lolcats on Twitter – they’re displayed when the site is undergoing maintenance.

The original version of the script was very, very clever. It used the Google AJAX Feed API and the Flickr API to pull in feeds and random images of cats from Flickr, combining them with transparent PNGs of text generated live on the page by a PHP script I wrote. It used the browser’s own text-flowing algorithms to lay out the text. It was, however, amazingly slow.

Browsers only allow a low number of concurrent connections to one site – four or eight, I think – and this made the text crawl in. Also, while the Google AJAX Feed API and Flickr API are pretty snappy, they’re way slower than doing it server side. I was sad about this because I’m kind of in love with fully dynamic client-side applications (just look at my home page), but I actually wanted this to see the light of day.

So a rewrite ensued. I pulled down a static set of cute cats from the cute cat group on Flickr, filtered by Creative Commons license of course. I removed a ton I didn’t like and added a couple of pictures of cats I know well. I reworked the script that generated single words into one that could place words on a JPEG. I had to write my own text layout algorithm, but it turned out to be pretty simple.

I used Magpie RSS to generate a page that referenced my image generation script. To make the image generation stable (so that everyone looking at the same feed would get the same images for the same text) the image is selected based on a hash of the message. Nothing is actually random.
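
A minimal sketch of that idea (this isn’t the actual LOL Feeds code; the image list and message here are made up for illustration):

<?php
// deterministically pick an image for a message: the same text
// always hashes to the same cat
$images = glob('cats/*.jpg');
$message = 'im in ur feed reader';
$index = hexdec(substr(md5($message), 0, 6)) % count($images);
$image = $images[$index];
?>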

Then I glued it together with a good helping of mod_rewrite.

This version worked pretty well. There were some issues with non-ASCII characters, and even funny characters like ? and &, due to some URL encoding problems, but on the whole it held up. I posted about it, told some friends and waited to see what would happen.

Things were going fine till I hit MetaFilter. Dreamhost, my friendly cheap hosting provider (100% carbon neutral – yay!), noticed my image generation script was slowing down my shared machine and turned it off.

Adding an image caching layer was pretty straight-forward. I turned the image generation script from a .php to a .inc and included it in my feed processing script. Instead of generating an image every time it generates an MD5 of image parameters and uses a cached image if one exists. I was able to update the existing image generation script to use the same code and send a redirect to the cached version. In the past 5 days the script has generated just over 50,000 unique images which have been accessed almost half a million times. That’s a lot of cats saying stupid things. And a lot of saved CPU cycles.

I also wrote a Facebook Platform application, but that’s a story for another day.

Out with the old, in with the goo(gle)

Some time ago I reworked my home page to feature content from various other sites I post to (blogs, flickr, delicious) by using some JSON tricks to pull in their feeds. I blogged about how to do this with Feedburner’s JSON API, so that my actual page was just static HTML and all the work was done client-side.

Last week I decided to revisit this using Google’s new AJAX feeds API. Feedburner‘s API never seemed to be well supported (it came out of a hackathon) and it forced me to serialize my requests. In the process I neatened up a bunch of the code.

Google’s API is pretty straight-forward. It uses a loader model that is similar to Dojo‘s dojo.require, so you load the main Google library:

<script src="http://www.google.com/jsapi?key=YOURAPIKEY"
  type="text/javascript"></script>

and then ask it to load their feed library:

google.load('feeds', '1');

They have a handy way of setting a callback to be called when the libraries are loaded:

google.setOnLoadCallback(function () { /* just add code */ });

By putting all three of these together we have a straight-forward way to execute code at the right time.
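
Putting it together, fetching a feed and rendering its entries looks roughly like this (the feed URL and the ‘posts’ element are placeholders):

google.load('feeds', '1');
google.setOnLoadCallback(function () {
  var feed = new google.feeds.Feed('http://example.com/feed/');
  feed.setNumEntries(10);
  feed.load(function (result) {
    if (result.error) return;
    var target = document.getElementById('posts');
    for (var i = 0; i < result.feed.entries.length; i++) {
      var entry = result.feed.entries[i];
      // each entry becomes a linked list item
      var link = document.createElement('a');
      link.href = entry.link;
      link.appendChild(document.createTextNode(entry.title));
      var item = document.createElement('li');
      item.appendChild(link);
      target.appendChild(item);
    }
  });
});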

I heavily refactored the code that inserts the feed data into the page. I fleshed out the concept of input filters from simply filtering the title to filtering whole item objects. This allows for a more flexible transformation from the information that’s presented in the RSS feeds to the information that I want to present to visitors to my page. In practice I only used it to remove my name from Twitter updates. Instead of hard-coding the DOM node creation like I did in the previous version of the code, I moved to a theme model. The theme function takes a feed entry and returns a DOM node to append to the target DOM node.
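
Roughly speaking, a filter takes a feed entry and returns a (possibly modified) entry, and a theme takes an entry and returns a DOM node. These are illustrative sketches rather than the code actually running on my page:

// filter: strip a username from the front of Twitter updates
// ('myusername' is a placeholder)
function twitterFilter(entry) {
  entry.title = entry.title.replace(/^myusername:\s*/, '');
  return entry;
}

// theme: turn a feed entry into a DOM node to append to the target
function defaultTheme(entry) {
  var link = document.createElement('a');
  link.href = entry.link;
  link.appendChild(document.createTextNode(entry.title));
  return link;
}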

The flexibility of Google’s API let me abandon my separate code path for displaying my Flickr images. Previously I used Flickr’s own JSON feed API, but since Google’s feed API supports returning RSS extensions I used Flickr’s MediaRSS-compliant feeds to show thumbnails and links. They even provide a handy cross-browser implementation of getElementsByTagNameNS (google.feeds.getElementsByTagNameNS) for us to use.
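
Inside a load callback like the one above, and assuming the mixed result format (which, as far as I recall, exposes each entry’s underlying XML as entry.xmlNode), pulling out a Flickr thumbnail looks something like this (the feed URL and ‘photos’ element are placeholders):

var MEDIA_NS = 'http://search.yahoo.com/mrss/';
var feed = new google.feeds.Feed('http://example.com/flickr.rss');
feed.setResultFormat(google.feeds.Feed.MIXED_FORMAT);
feed.load(function (result) {
  if (result.error) return;
  var target = document.getElementById('photos');
  for (var i = 0; i < result.feed.entries.length; i++) {
    var entry = result.feed.entries[i];
    // look for the MediaRSS thumbnail element in the raw XML
    var thumbs = google.feeds.getElementsByTagNameNS(
        entry.xmlNode, MEDIA_NS, 'thumbnail');
    if (thumbs.length > 0) {
      var img = document.createElement('img');
      img.src = thumbs[0].getAttribute('url');
      // link the thumbnail back to the photo page
      var link = document.createElement('a');
      link.href = entry.link;
      link.appendChild(img);
      target.appendChild(link);
    }
  }
});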

I’m tempted to write a client-side implementation of Jeremy Keith‘s lifestream thing using this API.

Take a look at the code running on my home page or check out the script.