A Different Model For Web Services Authorization

In my last post I set out to describe how easy it is to extract private keys from desktop software. As I was concluding I stumbled on an alternative approach that might be more secure in some circumstances. I didn’t really go into details, so here’s an expansion of the idea.

Current API authentication mechanisms, including Flickr Auth, OAuth, Yahoo BBAuth and Google AuthSub, work by allowing users to grant an application the right to act on their behalf on the site. Some implementations allow the user to grant access to only a subset of the functionality: Flickr lets users grant just read access, and Google lets users grant access to just one class of data (for example Google Docs but not Google Calendar). But still, the user doesn't know which specific operations they're allowing to be carried out. This wouldn't matter so much if the user could trust the application they're granting access to, but there's no way to guarantee that completely.
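
For concreteness, here is roughly how a client kicks off one of these scope-limited grants today: it sends the user to the provider with a parameter naming the bundle of access it wants, and the user approves or rejects that bundle as a whole. The endpoint and parameter names in this sketch are a hypothetical composite of the Flickr-auth/AuthSub style of flow, not any one provider's documented API.

    # Illustrative only: the auth endpoint and parameter names are a
    # hypothetical composite of the Flickr-auth / AuthSub style of flow,
    # not a specific provider's documented API.
    from urllib.parse import urlencode

    AUTH_ENDPOINT = "https://photos.example.com/services/auth/"  # hypothetical provider

    def grant_url(api_key, perms="read"):
        # The user is sent to this URL and asked to approve the whole "read"
        # (or "write", or "delete") bundle for the application; they never see
        # which individual operations the client will actually perform.
        return AUTH_ENDPOINT + "?" + urlencode({"api_key": api_key, "perms": perms})

    print(grant_url("MY_API_KEY", perms="write"))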

Another approach might be to give third-party applications the ability to submit sets of changes to be applied to a user's account, but require the user to approve them through the service provider's web site before they're applied to the user's real data. For example, a Flickr upload might look like this (a rough client-side sketch follows the list):

  • Client makes API request flickr.transaction.create(total_files=20, total_data=20M)
  • Flickr responds with transaction id X and a maximum lifetime for the transaction
  • Flickr responds with a URL to direct the user's browser to, where the user can:
    • associate the transaction with an account, and
    • preview, permit and commit the changes
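
Here is a minimal client-side sketch of that flow in Python. The method names (flickr.transaction.create, flickr.transaction.addPhoto), their parameters, and the response fields are all hypothetical; the requests library simply stands in for whatever HTTP client a real uploader would use.

    # Hypothetical sketch of the proposed transaction flow, client side.
    import webbrowser
    import requests

    API_ENDPOINT = "https://api.flickr.com/services/rest/"  # illustrative

    def create_transaction(api_key, total_files, total_bytes):
        # 1. Ask the service to open a transaction for a batch of pending changes.
        resp = requests.post(API_ENDPOINT, data={
            "method": "flickr.transaction.create",   # hypothetical method
            "api_key": api_key,
            "total_files": total_files,
            "total_data": total_bytes,
        })
        tx = resp.json()
        # 2. The service answers with a transaction id, a maximum lifetime,
        #    and a URL where the user will review and commit the changes.
        return tx["transaction_id"], tx["expires_in"], tx["review_url"]

    def add_to_transaction(api_key, tx_id, photo_paths):
        # 3. Uploads land in the transaction's sandbox, not in the user's account.
        for path in photo_paths:
            with open(path, "rb") as f:
                requests.post(API_ENDPOINT,
                              data={"method": "flickr.transaction.addPhoto",  # hypothetical
                                    "api_key": api_key,
                                    "transaction_id": tx_id},
                              files={"photo": f})

    tx_id, lifetime, review_url = create_transaction("MY_API_KEY", 20, 20 * 1024 * 1024)
    add_to_transaction("MY_API_KEY", tx_id, ["holiday-001.jpg", "holiday-002.jpg"])
    # 4. Send the user to the provider's site to associate the transaction with
    #    their account, preview the pending changes, and commit (or reject) them.
    webbrowser.open(review_url)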

To a user this isn’t going to look or feel too different from the current Flickr upload scenario, where on completion they’re presented with a set of uploaded photos and offered the opportunity to make metadata changes.

The key advantage here, from a security standpoint, is that users approve actions, not applications. The challenge of expressing a set of changes clearly to a user is not small, but unlike the challenge of identifying and trusting applications, it is solvable.
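
To make "expressing a set of changes clearly" a little more concrete, the provider could store each transaction as a sandboxed list of pending operations and render a plain summary of it at the review URL. The shapes and names below are hypothetical, just one way the server side might model it.

    # Hypothetical server-side shape of a pending transaction: a sandboxed
    # list of operations that only touches the user's real data after approval.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PendingOperation:
        action: str        # e.g. "upload_photo", "set_title", "add_tag"
        target: str        # e.g. a staged file name or a photo id
        detail: str = ""   # extra human-readable context

    @dataclass
    class Transaction:
        tx_id: str
        owner: Optional[str] = None    # set when the user associates the transaction
        operations: List[PendingOperation] = field(default_factory=list)
        committed: bool = False

        def summary(self) -> str:
            # The text the user reviews and approves on the provider's site.
            lines = [f"Transaction {self.tx_id}: {len(self.operations)} pending change(s)"]
            lines += [f"  - {op.action} {op.target} {op.detail}".rstrip()
                      for op in self.operations]
            return "\n".join(lines)

        def commit(self) -> None:
            # Only after explicit approval are the operations applied for real.
            self.committed = True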

This model probably only makes sense for write-oriented sessions, and even then for infrequent, large batches of writes (such as Flickr uploads) rather than frequent, small writes (like Twitter posting).

These ideas aren’t well thought out in my head; they’re fresh from my trying to come up with a workable model for API authentication and authorization that doesn’t depend on trusting the identity of clients. I’d really welcome feedback and ideas on how to take this forward, and feedback from implementers on the feasibility of adopting a model like this.

8 replies on “A Different Model For Web Services Authorization”

  1. Watching NetFlix on my XBox would be extremely irritating if I had to go to my computer to authorise my XBox to download or rate a movie. This might be a good approach, as you said, for Flickr, but I’m not sure it has enough general applicability to supplant current models of delegated auth.

    Also, implementing change sets like that would be a total pain in the ass on the server side. 🙂

  2. Ian, first you need to establish your security constraints, then you can evaluate if a proposal meets them.

    In the case of single-purpose, trusted-origin installed software like the Flickr Uploadr, you could make a case that the single most secure option is username-password/ClientLogin over SSL. (We don’t, and we’re comfortable with that choice, made for a combination of technological, policy, and customer care issues.)

    And the submitted-changeset approach above, when used in a hosted web context, sounds like an XSSer’s dream.

    So first, the problem space.

  3. I don’t know if this sort of model is sufficient for me to trust third-party applications, but I think it’s necessary. It might let one audit pending (and past) transactions more meaningfully – it’s just as valuable for reads as for writes.

  4. I like the idea of a sandbox for changes made by potentially malicious third-party clients, but I think this model is pretty specific to services which have few or large changes, like Flickr. Using this model for Twitter, for example, would be infuriating; the amount of data a client sends isn’t over the “let me review this before it goes live” threshold. Where it does apply, however, I think it makes perfect sense — I’ve often wanted a review stage for my Flickr uploads.

  5. this doesn’t work well, as you say, for things other than upload. and it means that you need a mobile web interface for any mobile applications that want to support flickr. and it means devices without a web browser (digital photo frames, etc) are screwed (they can do initial auth using mini-tokens – see the flickr api docs).

    but on top of that, it just makes things harder to use. and so the site which provides an api that just works is going to win out over one which (in practice) makes you queue up actions to apply later on the site. easy to cite twitter’s success, even though they don’t support any kind of secure three-legged auth.

    we’re going to have to rely on the user anyway (for the time being, until computers are magical) – if they install random binaries then they’re open to cookie stealing and spoofing, so APIs are moot. so confirmation-before-issuing-user-tokens seems the correct approach (ignoring that flickr’s has been broken lately *cough*).

  6. It doesn’t really buy you much in the desktop access scenario. A malicious app can still look at the user’s cookies & fake the correct authorization request. You could require a full login for every authorization, but that pretty much breaks any Just Works user perception, which is a strong motivator for these sorts of apps.
