I’m happy to report that we’ve fixed an issue in the percent-encoding step of our OAuth signature library for the Jersey framework. The reported issue was caused by our use of Java’s URLEncoder and URLDecoder classes to compute OAuth’s signature base string. Unfortunately, those classes do not perform the RFC 3986-compliant encoding that OAuth requires. The main difference is that a space character is encoded as a + when OAuth needs it escaped as %20 (more info here).
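To see the difference for yourself, here is a minimal standalone sketch (an illustration, not the library code – the rfc3986Encode helper is mine, showing the usual post-processing workaround):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class PercentEncodingDemo {

    // URLEncoder implements application/x-www-form-urlencoded, not RFC 3986:
    // among other differences, it encodes a space as '+'.
    static String rfc3986Encode(String s) throws UnsupportedEncodingException {
        return URLEncoder.encode(s, "UTF-8")
                .replace("+", "%20")   // space must be %20 in a signature base string
                .replace("*", "%2A")   // '*' must be escaped per RFC 3986
                .replace("%7E", "~");  // '~' must NOT be escaped per RFC 3986
    }

    public static void main(String[] args) throws Exception {
        System.out.println(URLEncoder.encode("a b", "UTF-8")); // prints: a+b
        System.out.println(rfc3986Encode("a b"));              // prints: a%20b
    }
}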

To fix this, we’ve chosen to leverage Jersey’s UriComponent class. There is one notable difference, though, from how one would normally encode a URI (see here for a very detailed explanation of URIs): OAuth says the signature base string is built by concatenating the request method, the request URL and the normalized parameters (separated by &), and that each of those elements must be percent-encoded prior to concatenation. In effect, we are re-encoding elements that are already encoded. As Paul noted, it’s as if we wanted to pass the signature base string in a URI… I remember this possibility being mentioned in conversations about debugging OAuth deployments, but that’s the only such case I can recall.

Anyway, to illustrate this, below is the piece of code where the bulk of the action happens:


// Build the OAuth signature base string: each element is percent-encoded
// *before* concatenation, so already-encoded parts get encoded a second time.
StringBuffer buf = new StringBuffer(request.getRequestMethod().toUpperCase());
URI uri = constructRequestURL(request);
String tp = uri.getScheme();
// Request URL: scheme, then "://" pre-encoded as %3A%2F%2F, then authority and path
buf.append('&').append(UriComponent.encode(tp, UriComponent.Type.SCHEME));
tp = uri.getAuthority();
buf.append("%3A%2F%2F").append(UriComponent.encode(tp, UriComponent.Type.AUTHORITY));
tp = uri.getPath();
buf.append(UriComponent.encode(tp, UriComponent.Type.PATH_SEGMENT));
// The normalized parameters form the third '&'-separated element, encoded as a whole
buf.append('&').append(UriComponent.encode(normalizeParameters(request, params), UriComponent.Type.QUERY_PARAM));
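To make the double encoding concrete, here is a hypothetical example reusing the rfc3986Encode helper from the sketch above (all values are made up):

String method = "GET";
String url = "http://example.com/request"; // the normalized request URL
String normalized = "a=b%20c&oauth_consumer_key=dpf43f3p2l4k3l03"; // already encoded once

// Each element is encoded again before concatenation, so the %20 in the
// normalized parameters becomes %2520 in the final base string.
String baseString = method
        + '&' + rfc3986Encode(url)          // http%3A%2F%2Fexample.com%2Frequest
        + '&' + rfc3986Encode(normalized);  // a%3Db%2520c%26oauth_consumer_key%3D...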

Our test code now also includes elements with spaces to make sure we got this right (thanks to Michael Werle).

Thanks to my colleague Hua Cui, our OAuth implementation for OpenSSO is now upgraded to the latest 1.0a revision of the spec. There is no legacy support for the (now deprecated) 1.0 version (the version field hasn’t been changed in OAuth, which, to me at least, does suggest deprecation of the previous release).

Since the signature mechanism itself is unchanged, no update to our Jersey OAuth signature library is necessary.

Give it a try!

In previous posts, I mentioned that we have implemented an OAuth signature library for Jersey (the JAX-RS reference implementation). This signature library sports client and server filters to insulate the application from most of the OAuth signature process (signing on the client side and verifying the signature on the server). Our main goal is to allow Jersey developers to adopt OAuth to secure their messages. Our initial need, though, was for our OpenSSO project, which now includes an OAuth Token Service.

In doing some testing on the OpenSSO use of OAuth, we noted there was a bug in the server-side verification code for RSA signatures. I’m happy to announce that the fix is in, so you can now happily use RSA-SHA1 to secure OAuth messages (in OpenSSO or simply using Jersey). As with HMAC-SHA1, we have created a comprehensive test for RSA-SHA1, reusing the same test case used here. If you want to use the library, you should take a look at the test source code.

One thing to note is that all signature algorithm implementations (HMAC-SHA1, RSA-SHA1; more could be added) are invoked through the same interface: the OAuthSignature class’s methods. The challenge with that is that those algorithms require different types of secrets (as reflected in the OAuth spec). This is especially true for key-based algorithms like RSA-SHA1, where the sign() method requires a different secret (the private key) than the verify() method (called by the server, using the client’s public key).

In our implementation, both methods take an OAuthSecrets object as an argument. In the case of a public/private key-based algorithm, this object is expected to contain the private key (or the public key, in the verification case) in its consumerSecret field. This is indicated in the library’s Javadoc.

On the client side, signing a message (with RSA-SHA1) is quite simple. All you need to sign your message is three elements: the request, the OAuth parameters and your private key. The code below shows how you’d do that with Jersey:

        OAuthParameters params = new OAuthParameters().realm(REALM)
                .consumerKey(CONSUMER_KEY)
                .signatureMethod(RSA_SIGNATURE_METHOD)
                .timestamp(RSA_TIMESTAMP)
                .nonce(RSA_NONCE)
                .version(VERSION);

        OAuthSecrets secrets = new OAuthSecrets().consumerSecret(RSA_PRIVKEY);

        OAuthSignature.sign(request, params, secrets);

On the server side, we retrieve the parameters from the request and use the public key (obtained from the client’s X.509 certificate). A short example would be:

        params = new OAuthParameters();
        params.readRequest(request);
        secrets = new OAuthSecrets().consumerSecret(RSA_CERTIFICATE);
        assertTrue(OAuthSignature.verify(request, params, secrets));
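For completeness: RSA_PRIVKEY and RSA_CERTIFICATE in the snippets above are plain strings. One hypothetical way to populate them is from PEM files on disk (the file names are made up, and the exact format consumerSecret expects is spelled out in the library’s Javadoc):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical: load PEM-encoded key material from disk into the strings
// passed to OAuthSecrets.consumerSecret().
String RSA_PRIVKEY = new String(
        Files.readAllBytes(Paths.get("client-private-key.pem")), StandardCharsets.UTF_8);
String RSA_CERTIFICATE = new String(
        Files.readAllBytes(Paths.get("client-certificate.pem")), StandardCharsets.UTF_8);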

A quick summary of the OAuth support we’ve recently added in a couple of key projects.

If you’re into RESTful web services and OAuth, we have implemented an extension to the Jersey project (the JAX-RS Reference Implementation). This extension allows for the signing and/or verification of OAuth 1.0-based requests. It is based on a digital signature library accessed by server and client filters. Detailed information can be found here.

For people interested in a more integrated solution, we have also implemented a module for the open source project OpenSSO to support OAuth as an authentication module. This module handles the Service Provider side, that is: token issuance, token & message verification, as well as SSO session handling (to bridge with other protocols). This module is, for now, an extension to OpenSSO. In other words, it is not yet part of the OpenSSO core and should be considered more experimental. Besides the Javadoc, a good source of information on this can be found in this article. There’s also Pat’s demo at CommunityOne this year.

If you’re so inclined, give it a try – any feedback is more than welcome!

As mentioned before, during this last JavaOne my colleague Pat Patterson showed a demo that leverages JavaFX and our OpenSSO OAuth module (a preview for now). He and Daniel Raskin (our OpenSSO marketing guru at Sun) have created a video about the demo here. Check it out, it’s informative and very funny.

JavaOne’09 is now over. Lots of interesting sessions & great discussions. I loved the demo Pat Patterson gave using our OAuth implementation. Marc Hadley and I had our BOF session and were pleasantly surprised to see more people than expected, since it was late and in direct competition with JavaOne’s big party. Here are the slides we presented – hopefully this will encourage people to participate and help us in our quest for the ultimate identity framework!

The more I think and read about the session fixation issue (see the official announcement here and additional info there) that has been discovered in OAuth, the more I’m convinced of the benefits Identity Federation brings to the table.

Think about it: the main issue (besides securing the callback URL, which is reasonably easy to achieve) is that the (service) Consumer and the Service Provider currently can’t be sure that the user who initiated the OAuth flow (and thus logged in at the Consumer site) is the same user who logs in at the Service Provider during the authorization process. If something akin to SAML‘s SSO model were in play (where the identities of the principal at the Consumer & SP sites are federated in a privacy-preserving manner, meaning no correlation issue), then ensuring it is the same user would be a no-brainer.

This also can be looked at from the token perspective and what information it conveys. Wouldn’t a SAML assertion be useful here?

Another interesting path would be to use something like Liberty’s Interaction Service to obtain confirmation from the user, thus thwarting an attacker trying to obtain an access token in your name.

Yesterday, I attended the OAuth BOF that took place during the IETF meeting in San Francisco. My participation was virtual, though: I was not physically there, but thanks to live MP3 coverage and a chat room it was actually possible to follow the discussions and ask questions – very nice.
There were lots of discussions addressing several areas; here’s my recollection of the main points that were discussed:

  • Interoperability: what elements do people think are must-haves to ensure interoperable implementations of the IETF OAuth specification? Mandating a minimum set of signature algorithms (yes!).
  • Backward compatibility: although it is very important, we should not prevent ourselves from changing key aspects of the specification for the sake of backward compatibility. This is especially true for security issues. Of course, when not essential, changes that would break compatibility will be discarded.
  • Items to be worked on: via the chat room (see log here), I asked whether the 2-legged scenario could be considered relevant to this specification (the 2-legged case is when the service consumer is equivalent to the principal; in other words, only 2 parties are involved in the transaction – see the sketch after this list). To my satisfaction, many people agreed, and so, after a hum in the room passed, it was agreed to include that use case in this work.
  • Charter: although the goal was not to change it, 2 important modifications will be made: integrating the 2-legged scenario and watering down the compatibility constraints.
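To make the 2-legged case concrete, here is a minimal sketch using our Jersey signature library described in earlier posts; all constants are placeholders, and the point is simply that no token or token secret is involved:

        OAuthParameters params = new OAuthParameters().consumerKey(CONSUMER_KEY)
                .signatureMethod(SIGNATURE_METHOD).timestamp(TIMESTAMP)
                .nonce(NONCE).version(VERSION);

        // 2-legged: no token or token secret, the consumer acts on its own behalf
        OAuthSecrets secrets = new OAuthSecrets().consumerSecret(CONSUMER_SECRET);

        OAuthSignature.sign(request, params, secrets);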

Overall I think this was a good meeting and we now have an official OAuth working group at IETF (well, once the normal process is completed).

As promised in my previous post, here’s an update on the ongoing discussions around OAuth’s signature mechanisms. Basically, the main response I got (here – thanks Blaine!) revolves around one main argument: it’s all about the security/usability tradeoff. In other words, since the weaknesses found in SHA-1 (especially the collision ones) seem hardly exploitable, and support for, say, the SHA-2 family of hash functions is not widespread, it is preferable to stick with SHA-1. To be fair, it is true that the chances of breakage today are slim; the addition of a timestamp makes OAuth fairly robust.

However, I disagree with this approach. SHA-1’s weaknesses are mounting quickly, and it will only get easier and easier to break (just look at the evolution here). Weakness to collision attacks tends to foreshadow weakness to other attacks. For this very reason, and because a standard (and its implementations and deployments) will last years, one must aim for the best security available at the time of specification. Also, as mentioned before, SHA-2 is really getting wide adoption these days.
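For what it’s worth, SHA-2 does not require exotic libraries: the stock JDK already ships HMAC-SHA256. Below is a minimal sketch of signing an OAuth-style base string with it (the base string, the key and the very idea of an “HMAC-SHA256” OAuth signature method are placeholders of mine, not part of the current spec):

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustration: an HMAC-SHA256 signature over an OAuth-style base string,
// using nothing but the JDK. All values are made up.
String baseString = "GET&http%3A%2F%2Fexample.com%2F&a%3Db";
String key = "consumer_secret&token_secret"; // OAuth-style key: the two secrets joined by '&'

Mac mac = Mac.getInstance("HmacSHA256");
mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
String signature = Base64.getEncoder()
        .encodeToString(mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8)));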

The other sticking point, IMO, is interoperability. I still think we should not rely on goodwill from the implementors (no matter how great they are) to guarantee interoperability. In all the standards efforts I’ve been involved with, ensuring implementations have a minimal set of features to interoperate on is a MUST. I think OAuth should strive for this too and thus mandate support for one or more signature algorithms.

I’m convinced the IETF process will help address these concerns. In the meantime, let’s keep the discussion going.