Using secret keys to identify applications communicating across the internet has become popular as people have copied the very successful Flickr authentication API. Unfortunately, people trust that they can keep these keys secret from attackers even as they distribute applications containing the secret keys to other people. I decided to see how hard it would be to extract the secret key and API key from Flickr’s new XULRunner-based Uploadr.
While the Uploadr is nominally open source, they don’t include the key in the source code. In the README they instruct anyone building it to acquire their own key and modify the source code to include that key:
You'll need your own API key and secret from Flickr to build Uploadr. These can be obtained at http://flickr.com/services/api/. The key and secret must be placed in flKey.cpp as indicated. This file is at:

UPLOADR/MacUploadr.app/Contents/Resources/components/flKey.cpp

The API key is stored as a string. The secret is stored as individual characters so it is not easily readable from the binary.
Such noble goals.
Getting the API key was as simple as running tcpdump:
sudo tcpdump -i en1 -vvv -n -s 0 -w /tmp/DumpFile.dmp
running the Uploadr, and then looking through the dump file to find where the API key is passed. Look:
api_key=a5c4c07991a15c6b56b88005a76e7774
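Scanning the raw capture for the key can be scripted instead of eyeballed. A minimal sketch in Python; the `data` bytes here are a stand-in for the contents of a real dump file like /tmp/DumpFile.dmp:

```python
import re

# Stand-in for bytes read from the capture, e.g.:
#   data = open('/tmp/DumpFile.dmp', 'rb').read()
data = b'POST /services/upload/ HTTP/1.1 ... api_key=a5c4c07991a15c6b56b88005a76e7774&auth_token=...'

# Flickr API keys are 32 hex characters.
match = re.search(rb'api_key=([0-9a-f]{32})', data)
if match:
    print(match.group(1).decode())  # a5c4c07991a15c6b56b88005a76e7774
```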
In theory getting the secret key is harder, but it turned out to be even easier. It’s stored in the source code for the flKey::Sign method:
// The shared secret
#ifdef XP_MACOSX
bytes[0] = '-';
bytes[1] = '-';
bytes[2] = '-';
bytes[3] = '-';
bytes[4] = '-';
bytes[5] = '-';
bytes[6] = '-';
bytes[7] = '-';
bytes[8] = '-';
bytes[9] = '-';
bytes[10] = '-';
bytes[11] = '-';
bytes[12] = '-';
bytes[13] = '-';
bytes[14] = '-';
bytes[15] = '-';
#endif
All I need to do is find that code, disassemble it and reassemble the string. I ran the Uploadr under gdb:
gdb '/Applications/Flickr Uploadr.app/Contents/MacOS/xulrunner'
and once it was up and running I tried to find something to break on that would expose the secret key. There weren’t symbols for the flKey::Sign method defined in the source file, but I found I could break on MD5Init, which is called from that function. So:
break MD5Init
kill
run
And now I stop in the middle of the signature function – and from the stack trace I can see where flKey::Sign begins:
Breakpoint 1, 0x149a086b in MD5Init ()
(gdb) where
#0 0x149a086b in MD5Init ()
#1 0x149a1588 in flKey::Sign ()
#2 0x017bb45d in NS_InvokeByIndex_P ()
...
All I have to do is disassemble flKey::Sign:
(gdb) disassemble 0x149a1588
Dump of assembler code for function _ZN5flKey4SignERK9nsAStringRS0_:
0x149a1498 <_ZN5flKey4SignERK9nsAStringRS0_+0>: push %ebp
0x149a1499 <_ZN5flKey4SignERK9nsAStringRS0_+1>: mov %esp,%ebp
0x149a149b <_ZN5flKey4SignERK9nsAStringRS0_+3>: push %edi
0x149a149c <_ZN5flKey4SignERK9nsAStringRS0_+4>: push %esi
...
0x149a153e <_ZN5flKey4SignERK9nsAStringRS0_+166>: movb $0xXX,(%eax)
0x149a1541 <_ZN5flKey4SignERK9nsAStringRS0_+169>: movb $0xXX,0x1(%eax)
0x149a1545 <_ZN5flKey4SignERK9nsAStringRS0_+173>: movb $0xXX,0x2(%eax)
0x149a1549 <_ZN5flKey4SignERK9nsAStringRS0_+177>: movb $0xXX,0x3(%eax)
0x149a154d <_ZN5flKey4SignERK9nsAStringRS0_+181>: movb $0xXX,0x4(%eax)
0x149a1551 <_ZN5flKey4SignERK9nsAStringRS0_+185>: movb $0xXX,0x5(%eax)
0x149a1555 <_ZN5flKey4SignERK9nsAStringRS0_+189>: movb $0xXX,0x6(%eax)
0x149a1559 <_ZN5flKey4SignERK9nsAStringRS0_+193>: movb $0xXX,0x7(%eax)
0x149a155d <_ZN5flKey4SignERK9nsAStringRS0_+197>: movb $0xXX,0x8(%eax)
0x149a1561 <_ZN5flKey4SignERK9nsAStringRS0_+201>: movb $0xXX,0x9(%eax)
0x149a1565 <_ZN5flKey4SignERK9nsAStringRS0_+205>: movb $0xXX,0xa(%eax)
0x149a1569 <_ZN5flKey4SignERK9nsAStringRS0_+209>: movb $0xXX,0xb(%eax)
0x149a156d <_ZN5flKey4SignERK9nsAStringRS0_+213>: movb $0xXX,0xc(%eax)
0x149a1571 <_ZN5flKey4SignERK9nsAStringRS0_+217>: movb $0xXX,0xd(%eax)
0x149a1575 <_ZN5flKey4SignERK9nsAStringRS0_+221>: movb $0xXX,0xe(%eax)
0x149a1579 <_ZN5flKey4SignERK9nsAStringRS0_+225>: movb $0xXX,0xf(%eax)
...
0x149a168e <_ZN5flKey4SignERK9nsAStringRS0_+502>: pop %esi
0x149a168f <_ZN5flKey4SignERK9nsAStringRS0_+503>: pop %edi
0x149a1690 <_ZN5flKey4SignERK9nsAStringRS0_+504>: pop %ebp
0x149a1691 <_ZN5flKey4SignERK9nsAStringRS0_+505>: ret
End of assembler dump.
And sure enough, if I reconstruct those bytes, the secret key is: REDACTED
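Reassembling the characters from the disassembly can be automated. A sketch in Python; since the real immediates are redacted above (shown as 0xXX), the bytes in this sample spell out a dummy secret:

```python
import re

# A few lines in the shape of the gdb output; the immediate values here are
# hypothetical stand-ins for the redacted bytes.
disasm = """
0x149a153e: movb $0x73,(%eax)
0x149a1541: movb $0x33,0x1(%eax)
0x149a1545: movb $0x63,0x2(%eax)
0x149a1549: movb $0x72,0x3(%eax)
0x149a154d: movb $0x33,0x4(%eax)
0x149a1551: movb $0x74,0x5(%eax)
"""

# Pull out (immediate, offset) pairs; the first store has no offset, i.e. 0x0.
pattern = re.compile(r'movb\s+\$0x([0-9a-f]{2}),(?:0x([0-9a-f]+))?\(%eax\)')
chars = {int(off or '0', 16): int(imm, 16) for imm, off in pattern.findall(disasm)}

# Join the bytes in offset order to recover the string.
secret = ''.join(chr(chars[i]) for i in sorted(chars))
print(secret)  # s3cr3t
```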
That wasn’t too hard, was it? The fact that the code around the key was open source made it easier to find, but even if it were closed source, knowing what kind of algorithm to look for would make it easy to locate and examine. It’s pretty easy to trace forward from user input or backward from network IO to find the signature algorithm, and the secret is always an input to that.
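For reference, Flickr-style request signing is just an MD5 over the shared secret plus the sorted request parameters, which is why recovering the secret is game over. A sketch; the secret value here is hypothetical, and the API key is the one sniffed above:

```python
import hashlib

def flickr_sign(secret, params):
    # Flickr-style api_sig: MD5 hex digest of the shared secret followed by
    # each parameter name and value, concatenated in sorted name order.
    base = secret + ''.join(k + v for k, v in sorted(params.items()))
    return hashlib.md5(base.encode()).hexdigest()

# 's3cr3t' is a hypothetical stand-in for the recovered shared secret.
params = {'api_key': 'a5c4c07991a15c6b56b88005a76e7774',
          'method': 'flickr.test.echo'}
print(flickr_sign('s3cr3t', params))
```

Anyone holding the secret can produce a valid api_sig for any request, so sniffing the api_key and extracting the secret together give full control of the application’s identity.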
Unfortunately OAuth follows the same model as Flickr, placing some value in a consumer’s “private” key. They do mention in Appendix B.7 that it’s often possible for attackers to access a secret, but it’s still a vital part of the protocol.
Since it’s not possible to securely identify clients, we should move to a model where users approve actions against their accounts rather than granting clients carte blanche. For example, Flickr could put uploaded photos into a staging area and require approval on the web site before including them in a user’s photo stream. This would allow users to make informed decisions about third-party access to their accounts.
Updated: at Flickr’s request I’ve removed the “secret” key.
Updated again: I made another post about this topic, expanding on the final paragraph of this post.
…but you do need a user token to perform any actions on a user account, and those are tied to a key and explicitly granted by the user via the website. Same as OAuth.
identifying the application is mostly used for rate limiting
You can’t stay away from XULRunner, either, I see.
The API Key and, for that matter, the user’s Token (in Flickr API terms) are not meant to be secret. They are on the wire in plain text along with the signature, which is salted using the Secret, the one part that doesn’t cross the wire. OAuth does one better by having per-token secrets that must (whether it says “should” or “must” it means “must”) be sent over HTTPS and are thereafter used just like the API Key’s Secret. Still not very good, as it has to be stored somewhere and money says it’ll be stored less “securely” than even the Secret in Flickr Uploadr.
The Uploadr’s Secret was coded in the way it was because this way it doesn’t show up when you run `strings` against key.dylib. You do get the API Key from `strings`, for reasons mentioned above.
Just forcing everything to use SSL solves only the message-integrity problem but leaves the is-this-really-Flickr-Uploadr problem open. Diffie-Hellman doesn’t help you there. (Each and every user of Flickr Uploadr would need a cert from a trusted CA for SSL to actually verify identity, and even then, if an attacker has access to the filesystem, you’re toast.)
So you have the consumer key/secret. That doesn’t give you much. In order to make a request on behalf of a user you’re going to need what OAuth calls an “access token” (Flickr calls it a frob, or something) — another key/secret pair. Sure you can auth users with this key/secret, but you could have used your own key for that… what have you gained? It would definitely be bad practice to serve non-public information without an access token.
@cal, @mike: well, until this evening Flickr would happily hand out tokens based on a secret key of a trusted application and cross-site request forgery. Trust of secret keys can sneak into otherwise secure web applications when people forget how not-secret the keys can be. You can take a look at the (no longer working) exploit here.
@Richard, as long as you’re sending bits to somebody’s computer it’s fair to assume that they’ll be able to identify whatever information is encoded in there – especially if it’s encoded in an x86 executable 🙂 Secondly, a distributed application binary can never be trusted – at least not without significant OS and hardware support that’s definitely lacking on all the platforms I care to run, and probably lacking in practice on the ones that claim to implement it. The only solution is to trust users, not software – take a look at my next post.
Trust can absolutely creep into applications in ways it shouldn’t. This is true, and has nothing to do with OAuth / delegated auth.
It feels like your frustration is less with delegated auth (which is more secure than basic auth, period), and more with web people who don’t understand security. These people will always exist. Overworked teams who allow logins to the admin site via the normal site login with no VPN or otherwise will also always exist.
In South America, they have phishing attacks too; the difference is that the phishing is done with a gun, your car, and your bank card. So we’re never going to solve all of these problems. It just won’t happen. But we can strive to do better, and we should. Arguing about whether Better is Enough seems counter productive.
That said, it’s important that people understand where the possible attack vectors are with OAuth, and how to guard against them (in this case, don’t trust consumer keys in publicly distributed software as much as you do privately held consumer keys with owners who you can sue). The more writing on this, the better, as long as it’s done in a way that doesn’t throw the baby out with the bath water.
I independently found the secret key in Uploadr today (saw your post only after I had found it). Although Cal is right about a user token needing explicit authentication, my guess is that if you know a secret key, it would be fairly easy to trick a user into authenticating a token generated by an attacker. I’ve written up my thoughts about web API authentication mechanisms here: http://bit.ly/NmU5