OpenID for the mathematically challenged

The other day I got the OpenID bee in my bonnet, grabbed James Walker's module and installed it on my server. Actually I grabbed it from CVS, then discovered that the CVS version is half-ported to the new Drupal 6 form API, so I ended up using the DRUPAL-5 tag.

Anyway, I use Dreamhost, which I love for many reasons (primarily that it's really cheap and seems to work really well). Unfortunately they don't build their PHP with BCMath or even GMP, which means my PHP can't do the hard math that's required for crypto. Luckily there's a mode of OpenID that doesn't require any crypto work on the relying party side. So I made a small change that allows James' module to work in this "dumb" mode.

Index: openid.install
RCS file: /cvs/drupal-contrib/contributions/modules/openid/openid.install,v
retrieving revision 1.2
diff -u -p -r1.2 openid.install
--- openid.install      25 Mar 2007 06:38:00 -0000      1.2
+++ openid.install      16 May 2007 22:59:56 -0000
@@ -2,24 +2,6 @@
 /**
- * OpenID module requires bcmath
- */
-function openid_requirements($phase) {
-  if ($phase == 'runtime') {
-    $requirements['bcmath']['title'] = t('BCMath');
-    if (function_exists('bcadd')) {
-      $requirements['bcmath']['severity'] = REQUIREMENT_OK;
-      $requirements['bcmath']['value'] = t('Enabled');
-    }
-    else {
-      $requirements['bcmath']['severity'] = REQUIREMENT_ERROR;
-      $requirements['bcmath']['description'] = t('OpenID needs the bcmath extension for encryption.');
-    }
-  }
-  return $requirements;
-}
-
-/**
  * Implementation of hook_install
  */
 function openid_install() {
Index: openid.module
RCS file: /cvs/drupal-contrib/contributions/modules/openid/openid.module,v
retrieving revision 1.2
diff -u -p -r1.2 openid.module
--- openid.module       25 Mar 2007 06:38:00 -0000      1.2
+++ openid.module       16 May 2007 22:59:56 -0000
@@ -133,10 +133,14 @@ function openid_login_form_submit($formi
 
   $idp_endpoint = $services[0]['uri'];
   $_SESSION['openid_idp_endpoint'] = $idp_endpoint;
-  $assoc_handle = openid_association($claimed_id, $idp_endpoint);
-  if (empty($assoc_handle)) {
-    drupal_set_message(t('OpenID Association failed'), 'error');
-    return;
+  // if we have BCMath, we should use OpenID smart mode
+  if (function_exists('bcadd')) {
+    $assoc_handle = openid_association($claimed_id, $idp_endpoint);
+    if (empty($assoc_handle)) {
+      drupal_set_message(t('OpenID Association failed'), 'error');
+      return;
+    }
   }

Also, I put the patch up on the module's issue queue.

The Sidekick ID and the iPhone

There were two interesting announcements today. First, the Sidekick ID, which had previously been leaked, was formally announced, and reviews have started to show up. Second, Apple announced that Mac OS X Leopard will ship three months late – more than two years after the previous release of OS X. This slip is being seen as evidence that Apple is having trouble building as many products at once as it wants to.

In the four years I was at Danger we were building exactly one product at a time. We failed to separate the development of the hardware, the OS and the applications, and separating the client and server schedules was a slow and painful process. In the two years since I left, things seem to have improved. The fact that they're able to ship two products (even if they are quite similar) is really exciting. That Danger is succeeding where Apple, with their 30 years of experience, is beginning to stumble is cause for congratulations.

Making dynamic static pages

UPDATE: I changed how this works and blogged about it.
I wanted my home page to reflect what was going on in my life, or at least what content I was generating. There's the concept of a lifestream floating around at the moment, but I was happy just to have a few sources (a couple of blogs, my del.icio.us bookmarks, my twitters and my flickr stream) shown, split out by source. The catch was I wanted to do it without implementing a web service to back it.

If you want to pull content from a bunch of different servers without writing any server-side code, your only options are Flash and JSON. They're both ways of getting around the web security model that's protected us for so long. Flash is kind of complicated and requires proprietary, expensive tools to work with, while JSON comes for free. The idea behind JSON is that we can use the browser's <script> tag to load a script from another server that contains data encoded as JavaScript data structures rather than code.
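The mechanics can be sketched in a few lines of JavaScript. This is illustrative only: buildJSONPUrl and wrapJSONP are my names, and the callback parameter name varies from service to service.

```javascript
// Sketch of the <script>-tag data-loading trick. The client builds a
// script URL naming a callback; the server responds not with bare JSON
// but with a JavaScript expression that calls that callback.
function buildJSONPUrl(feedUrl, callbackName) {
  // append the callback parameter ('callback' here is illustrative)
  var sep = feedUrl.indexOf('?') == -1 ? '?' : '&';
  return feedUrl + sep + 'callback=' + encodeURIComponent(callbackName);
}

// What the server sends back: a call to our function with the data.
function wrapJSONP(callbackName, data) {
  return callbackName + '(' + JSON.stringify(data) + ');';
}
```

In a page, that URL would become the src of a dynamically created <script> element; when it loads, the browser executes the wrapped call and the data arrives as an argument to our function.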

A few key services like Flickr and del.icio.us offer JSON versions of their feeds, but most do not. In steps FeedBurner, who in January added a JSON version of the feeds they serve, and since you can ask FeedBurner to host any feed you please, we can use them as our high-availability, standardizing, caching feed proxy. I'd already set up FeedBurner for this blog, so I just added feeds for my LiveJournal and twitter accounts and looked up how to get access to the JSON feeds for my Flickr and del.icio.us accounts.

When you call a JSON script you can often pass in the name of a callback function to be called when the data is returned. I wrote a simple one to process a JSON response from FeedBurner and turn some of the most recent items into HTML list items:

var max_items = 3;
var target = null;
var filter = null;
function fb(o) {
  if (!target) return;
  for (var i=0; i<o.feed.items.length && i<max_items; i++) {
    var item = o.feed.items[i];
    var li = document.createElement('li');
    var a = document.createElement('a');
    if (!item.title) item.title = item.link;
    if (filter) item.title = filter(item.title);
    a.setAttribute('href', item.link);
    a.appendChild(document.createTextNode(item.title));
    li.appendChild(a);
    target.appendChild(li);
  }
}

The function depends on a couple of external variables that are set before the JSON feed is called: target, the DOM object to append the list items to, and filter, an optional function to post-process titles. I had to add filter since my twitter feed comes back a little weird.

Getting the most recent posts from this blog into a bullet list is now pretty straight-forward:

<ul id="ianloic-list"></ul>
<script type="text/javascript">
target = document.getElementById('ianloic-list');
</script>
<!-- the second script's src is the FeedBurner JSON feed URL -->
<script type="text/javascript" src="..."></script>

Getting my LiveJournal in was identical, and twitter just required me to write a filter function to chop up the title a little and do some unescaping. For some reason the twitter feed is both HTML-entity encoded and JavaScript string escaped when I get it back from FeedBurner.
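A filter along these lines might look like the following sketch; the exact prefix format and the entities involved are assumptions on my part, not the code I actually shipped.

```javascript
// Hypothetical filter for a twitter title: strip the leading
// "username: " prefix and undo one layer of HTML-entity encoding.
function twitterFilter(title) {
  title = title.replace(/^[^:]+:\s*/, '');  // drop "username: "
  return title.replace(/&amp;/g, '&')
              .replace(/&lt;/g, '<')
              .replace(/&gt;/g, '>')
              .replace(/&quot;/g, '"');
}
```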

Flickr and del.icio.us each require some custom code since I want to handle each of them specially. For del.icio.us I link to the tag pages of each tag on each bookmark, and for Flickr I embed a thumbnail for each image. Take a look at the source code of my home page to see how that's done, or drop me a line if you'd like more explanation.

Burning your Drupal feed in two easy steps

FeedBurner provides all kinds of neat stats, but it didn't seem straight-forward to "burn" my blog feed since I'm using Drupal 5. After a little fiddling I think I've got a pretty good idea how to make it work in probably the simplest way possible. In fact, it doesn't require any Drupal configuration at all.

  1. First I set up a FeedBurner account and burned my feed. The feed Drupal produces for me lives at rss.xml. Now when I access the FeedBurner feed I get the contents of that feed. It's pretty simple, but so far nobody is going to see that feed.
  2. Then I simply told Apache to redirect all requests for that feed, except the ones from the FeedBurner bot, to my FeedBurner feed. With the sleight-of-hand magic of mod_rewrite this is pretty straight forward. In the root of every Drupal install there's an .htaccess file containing a bunch of stuff. I just added a few lines to the mod_rewrite.c block of that file:
      # Rewrite rss.xml to the FeedBurner feed
      # unless FeedBurner is requesting the feed
      # (FEED_NAME stands in for your own FeedBurner feed name)
      RewriteCond %{HTTP_HOST} ^ianloic\.com$ [NC]
      RewriteCond %{HTTP_USER_AGENT} !FeedBurner.*
      RewriteRule ^rss.xml$ http://feeds.feedburner.com/FEED_NAME [L,R=301]

    This will cause Apache to send a 301 redirect to the FeedBurner feed any time anyone requests rss.xml, unless their HTTP User-Agent begins with FeedBurner.

    Now I've got access to all the FeedBurner statistics and fun features. Since I didn't actually touch the Drupal configuration, I'm pretty sure a similar approach can be taken to apply FeedBurner to any feed.
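To make the rewrite rules' logic explicit, here is the same decision restated as a tiny JavaScript function (the function and its names are mine, purely for illustration):

```javascript
// Mirrors the mod_rewrite rules: redirect requests for rss.xml on
// ianloic.com unless the User-Agent identifies the FeedBurner bot.
function shouldRedirect(host, userAgent, path) {
  return /^ianloic\.com$/i.test(host) &&
         !/FeedBurner/.test(userAgent) &&
         path === 'rss.xml';
}
```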

Tag Clouds Two Point Oh?

[flickr-photo:id=15085782,size=m] Tag clouds bore me. They're a relatively effective way of quickly indicating what topics are popular, but that's it. From del.icio.us' cloud I can see that the site is for nerds – web nerds specifically. Flickr's tag cloud tells me that people tag events and place names, but that's about it. My personal tag clouds on these sites tell me even less. My del.icio.us tag cloud tells me almost nothing – it's a huge block of dark-blue and light-blue text. The Flickr one isn't much better – it tells me mostly that I took a bunch of photos kayaking in the Queen Charlotte Islands, or perhaps more specifically, that I got around to tagging my kayaking photos.

I'm more interested in seeing what's going on right now and how these topics are related. Since this is a graph visualization exercise I threw graphviz at the problem. After a bit of preliminary experimentation I ended up defining a graph based on recent tags pulled from an RSS feed. Each tag is represented as a node, and any tags which appear together on the same post have arcs between them. Tag text gets scaled up a little with frequency. The effect isn't perfect. It's pretty boring when there isn't much data, like on this site:

With a bit more data, like from my recent delicious feed things can get cluttered but we can see what I’m interested in right now:

This idea isn't fully developed. The complexity of laying these graphs out in a sensible manner increases pretty rapidly as the number of nodes and arcs grows, and so does the visual clutter. I'd like to experiment with client-side graph layout (ie: implementing graphviz in JavaScript) and with doing something more sensible with synonym tags – ie: tags which always appear together. Synonym tags are somewhat interesting, but they can distract from the relationships between concepts. Treating all tags that are coincident over a small number of posts as synonyms may produce false synonyms, but collapsing synonyms will make it easier to scale to more posts, so that may be a productive path to go down in scaling these visualizations up.
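The graph construction described above can be sketched like this; the function name and the font-size scaling constants are mine for illustration, and the input is simplified to an array of per-post tag lists rather than a parsed RSS feed.

```javascript
// Build a graphviz DOT graph from posts' tag lists: each tag becomes a
// node whose font size grows with frequency, and tags that co-occur on
// the same post get an (undirected) arc between them.
function tagsToDot(posts) {
  var freq = {}, edges = {};
  for (var p = 0; p < posts.length; p++) {
    var tags = posts[p];
    for (var i = 0; i < tags.length; i++) {
      freq[tags[i]] = (freq[tags[i]] || 0) + 1;
      for (var j = i + 1; j < tags.length; j++) {
        // normalize the pair so each arc is only emitted once
        edges[[tags[i], tags[j]].sort().join('--')] = true;
      }
    }
  }
  var lines = ['graph tags {'];
  for (var tag in freq)
    lines.push('  "' + tag + '" [fontsize=' + (10 + 2 * freq[tag]) + '];');
  for (var e in edges) {
    var pair = e.split('--');
    lines.push('  "' + pair[0] + '" -- "' + pair[1] + '";');
  }
  lines.push('}');
  return lines.join('\n');
}
```

Feeding the result to `dot` or `neato` produces the kind of cluttered-but-informative pictures shown here.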

Oh, and the final demonstration – my friend Dan is looking for an apartment and is a Ruby on Rails web application developer:

Syntax Highlighting for Drupal

[flickr-photo:id=252312738, size=m] While writing my last post, I felt the need to post some source code examples, and I wanted them to be pretty. Looking around, I failed to find what I wanted. There were a few options: the codefilter module, but that only supports PHP highlighting; the geshifilter module, but that doesn't support Drupal 5.x, which I'm running; and patches against codefilter to add GeSHi support.

So I did what was probably the wrong thing and wrote my own. At least I didn’t write it from scratch, I based it largely on codefilter, with some inspiration from the patches to codefilter that add GeSHi support.

I hacked up GeSHi a little, as it wants to link the keywords of most languages to reference sites. While this sounds like a good idea in theory, it was linking HTML keywords off to some random site that I didn't really like and didn't think was that good, so I disabled that functionality.

Using the module is pretty straightforward. You wrap your source code in tags that look like

<code language="LANGUAGE">...</code>

where LANGUAGE is a supported language. If there's a newline in your block then it's treated as a block, otherwise it's rendered inline. Also, some whitespace is trimmed, so you can force a single line to be treated as a block by putting a newline at the start or the end.
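The block-versus-inline rule boils down to a one-line check, sketched here in JavaScript (renderMode is my name, not the module's):

```javascript
// A newline anywhere in the source, including a deliberately added
// leading or trailing one, selects block rendering; otherwise inline.
function renderMode(source) {
  return source.indexOf('\n') === -1 ? 'inline' : 'block';
}
```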

Right now it’s being maintained in the same source control as I’m using for my web site, but I’ll move it into Trac and Subversion eventually. For the time being it’s attached.

Flickr for Dojo

I've been working on a little Dojo-based application which talks to Flickr, so I put together a little library which uses Dojo to talk to Flickr using its REST JSON interface.

It’s pretty simple to use, just include the JavaScript file:

<script src="flickr.js"></script>

Tell the library what your keys are:

flickr.keys(API_KEY, SECRET_KEY);

And you’re set to go.

The main entry-point is a single call function. As the first argument, you pass in a hash of arguments, as described in the Flickr API documentation; the method you're calling is included in this hash. The second argument is optional: a callback to be invoked with the response from the Flickr servers. The response comes back in JSON format, so it's easy to handle in JavaScript. The Flickr JSON response format is discussed in detail on the Flickr site.

So what would all this look like? Something like this will load interesting photos from Flickr and add them to the current document:

flickr.keys(API_KEY, SECRET_KEY);
var pagenum = 1;
function interesting () {
  // the library's entry point, as described above
  flickr.call({method: 'flickr.interestingness.getList',
               page: pagenum, per_page: 10}, interesting_cb);
}
function interesting_cb (response) {
  if (response.stat != 'ok') {
    var error = document.createElement('div');
    error.appendChild(document.createTextNode('Flickr error: ' + response.message));
    document.body.appendChild(error);
    return;
  }
  for (var i = 0; i < response.photos.photo.length; i++) {
    var photo = response.photos.photo[i];
    var img = document.createElement('img');
    img.className = 'interesting';
    img.setAttribute('src', 'http://farm' + photo.farm +
        '.static.flickr.com/' + photo.server + '/' +
        photo.id + '_' + photo.secret + '_s.jpg');
    img.setAttribute('width', '75');
    img.setAttribute('height', '75');
    var a = document.createElement('a');
    a.setAttribute('href', 'http://www.flickr.com/photos/' +
        photo.owner + '/' + photo.id);
    a.appendChild(img);
    document.body.appendChild(a);
  }
}

I'm not actually using all that much from Dojo. The main thing I'm taking is the crypto library, specifically dojo.crypto.MD5. I make the actual JSON calls by appending <script> elements to the page. Perhaps at some point I'll move to using Dojo's ScriptSrcIO, but right now I'm not.
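The MD5 is needed because Flickr's authentication scheme signs each call: you sort the parameters by name, concatenate the name/value pairs onto your shared secret, and MD5 the result. A sketch, with md5 passed in as a stand-in for dojo.crypto.MD5's hex digest (flickrSign is my name for it):

```javascript
// Build a Flickr api_sig: md5(secret + name1 + value1 + name2 + value2 ...)
// with the parameter names in sorted order.
function flickrSign(secret, params, md5) {
  var names = [];
  for (var name in params) names.push(name);
  names.sort();
  var s = secret;
  for (var i = 0; i < names.length; i++)
    s += names[i] + params[names[i]];
  return md5(s);
}
```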

The current version of the code is attached: flickr.js

Flickr Authentication Security

Recently Flickr closed a little security hole I found in their API authentication. I was able to convince their servers to hand out a token to me based on a user’s cookies and the API key and secret key of an application the user had used. Then with the JSON form of the Flickr API I had full access to the user’s account.

There were two flaws in Flickr's security that exposed this problem. The first was that the security model assumes applications can keep a key secret. This is easy for web applications that make server-to-server API calls, but for anything a user downloads, and especially for open source software, it's impossible to keep the key secret. My experiment used the secret key from Flock, which is open source (the secret key can be found in subversion), and the secret key from Flickr's own Mac OS X uploader application, which can be easily extracted from the download on their site. Secondly, the Flickr server was giving out new authentication tokens without requiring user approval.

The exploit itself is a little state machine making a series of Flickr API calls and using one IFRAME. It goes like this:

  • Request a frob (via JSON)
  • Request authentication (via an IFRAME)
  • Request the auth token (via JSON)
  • Do evil (via JSON)

In my case the evil consisted of posting a comment on the user’s most recent photo.

The security hole is now closed, but if you’re interested in seeing how to access the Flickr API entirely from JavaScript in a web page take a look at the attached exploit: sploitr.html and the md5 library: md5.js