While deploying OpenSSO on Glassfish (I used v2ur2), I ran into an interesting situation:
Although the deployment itself went well, OpenSSO’s configurator (that is, the process OpenSSO goes through the very first time you launch it after deployment) failed with a rather laconic LDAP operation failed message. Searching the Glassfish server log, I could see that LDAP indeed had a problem:

Message:The LDAP operation failed.
--------------------------------------------------
The lower level exception message
error result
The lower level exception:
netscape.ldap.LDAPException: error result (68); The entry
ou=BasicUser,ou=CreationTemplates,ou=templates,ou=default,
ou=GlobalConfig,ou=1.0,ou=DAI,ou=services,dc=opensso,dc=java,
dc=net cannot be added because an entry with that name already exists
at netscape.ldap.LDAPConnection.checkMsg(LDAPConnection.java:4866)
at netscape.ldap.LDAPConnection.add(LDAPConnection.java:2864)
at netscape.ldap.LDAPConnection.add(LDAPConnection.java:2879)
at netscape.ldap.LDAPConnection.add(LDAPConnection.java:2829)
/.../

After consulting experts on the matter, I had the solution to my issue:
Modify Glassfish’s domain.xml configuration file for the domain OpenSSO is deployed in (most of the time it will be the default: domain1).
The change is fairly simple:
Replace
<jvm-options>-client</jvm-options>
with
<jvm-options>-server</jvm-options>
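
If you want to double-check which VM the domain actually ended up running with, one quick way (a minimal sketch, assuming you can run a class or scriptlet inside the same JVM) is to print the VM name:

public class VmCheck {
    public static void main(String[] args) {
        // Prints e.g. "Java HotSpot(TM) Server VM" once -server is in effect,
        // or "Java HotSpot(TM) Client VM" when still running with -client
        System.out.println(System.getProperty("java.vm.name"));
    }
}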

Good to know…

OK, this post will sound a bit like a sales pitch (or is it the Coué method?) but I enjoyed reading this article about our latest quarterly report. Yes, our open source strategy finally seems to be yielding results and driving concrete (and significant!) revenue.

About time…

We have published an article on OpenID in this month’s BigAdmin newsletter. The article describes the OpenID deployment we have done here at Sun.

One of the features we were first to demonstrate with OpenID was increasing the trust a Relying Party can have in the principal’s identity by asserting that the principal is also a Sun employee (in addition to the fact that they own the OpenID URL). This basically supports the approach of whitelisting “acceptable” OpenID OPs (identity providers) from the standpoint of a Relying Party.

Although its usage is far from satisfying (did you say lack of OpenID Relying Parties?), it has been a great way to leverage OpenSSO and demonstrate its extension mechanism.
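
For illustration, the RP-side check boils down to something like the sketch below (a minimal sketch; the class and the whitelisted host are hypothetical, and a real deployment would hook this into the RP’s OpenID discovery step):

import java.net.URI;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OpWhitelist {
    // Hypothetical set of OP endpoint hosts this Relying Party trusts
    private final Set<String> trustedOpHosts =
            new HashSet<String>(Arrays.asList("openid.sun.com"));

    // Called after discovery: only continue the authentication flow
    // if the discovered OP endpoint belongs to a whitelisted provider
    public boolean isAcceptable(URI opEndpoint) {
        return trustedOpHosts.contains(opEndpoint.getHost());
    }
}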

As mentioned before, I’m one of the coauthors of an article to be published in the proceedings of Financial Cryptography and Data Security 2009. The article is available here:

Any comments are more than welcome, of course!


©2009 Springer. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the publisher, Springer.

<Rant>

One of the things I dislike in the Java world is the number of moving parts I sometimes have to deal with in order to get things going. For instance, as mentioned in my previous post, I often work with XML schemas. To successfully use JAXB, you have to be aware of:

  1. The Java version you’re using (Java SE 6 or 5?)
  2. The version of JAXB (is it 2.0 or 2.1?)
  3. The version of NetBeans (try 7.0, it rocks!)
  4. The OS you’re running all of the above on

Depending on the combination you have, you may need to tune things differently…

For instance, the build.xml file contains information about XJC’s location. Until recently, I was successfully using the following setting on my OS X based machines:

<taskdef name="xjc" classname="com.sun.tools.xjc.XJCTask">
    <classpath>
        <fileset dir="/Applications/NetBeans/NetBeans 6.5M1.app/Contents/Resources/NetBeans/java2/modules/ext/jaxws21" includes="**/*.jar"/>
    </classpath>
</taskdef>

Recently I upgraded to NetBeans 7.0 (which is great, btw) and updated my project’s build.xml accordingly so that it matches the new location of jaxws21:

/Applications/NetBeans/NetBeans 7.0M1.app/Contents/Resources/NetBeans/java2/modules/ext/jaxws21

Unfortunately, I got the following error message when building my project:

taskdef class com.sun.tools.xjc.XJCTask cannot be found


It turns out the proper directory is now:

/Applications/NetBeans/NetBeans 7.0M1.app/Contents/Resources/NetBeans/ide10/modules/ext/jaxb

Why the move to another location? Frankly, I’m not sure, but I bet it has to do with this. This site (named the unofficial JAXB guide) is a great source of information if you’re working with JAXB.

</Rant>

For most projects I’ve been working on lately, I have had to implement a library or an application starting from an XML schema file. There are several ways to generate Java classes from a schema (this process is called binding a schema to a Java representation). Of course, working at Sun, I naturally started with JAXB.

A key element of this binding process is the compiler that generates those Java classes. In JAXB, this task is dedicated to XJC. To call XJC, you can either run it from the command line or, even better, invoke it directly from your NetBeans project. To do the latter, you will need to modify or add a few files to your project: the overall build.xml of the project, plus a catalog.xml and a binding.xml that relate directly to the schema(s) you start from.

First, you need to add a new target to your build.xml file to instruct NetBeans to generate the code and tell it where to put those classes:

    <target name="-pre-compile">
        <echo message="Compiling the schemas…"/>
        <mkdir dir="build/gen-src"/>
        <xjc catalog="Schemas/catalog.xml" binding="Schemas/binding.xml" destDir="build/gen-src">
            <schema dir="Schemas/">
                <include name="*.xsd"/>
            </schema>
            <produces dir="build/gen-src/com/sun/myproject" includes="**/*.java"/>
        </xjc>
    </target>

Don’t forget to explicitly declare where XJC can be found. On my Mac, it would look like:

    <taskdef name="xjc" classname="com.sun.tools.xjc.XJCTask">
        <classpath>
            <fileset dir="/Applications/NetBeans/…/ext/jaxws21" includes="**/*.jar"/>
        </classpath>
    </taskdef>

Once this is done, you’ll need to add a catalog.xml file in the Schemas directory (or elsewhere). In a nutshell, it declares where other resources (like schemas your own schema may depend upon) can be found, e.g. in a local directory or online. An example of such a mapping would be:

<?xml version="1.0" encoding="UTF-8"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog" prefer="system">
   <system systemId="http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/xenc-schema.xsd" uri="w3-dsig/w3-2002-12-xenc-schema.xsd"/>
   <uri name="http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/xenc-schema.xsd" uri="w3-dsig/w3-2002-12-xenc-schema.xsd"/>
   <system systemId="http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/xmldsig-core-schema.xsd" uri="w3-dsig/w3-2000-09-xmldsig-core-schema.xsd"/>
   <uri name="http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/xmldsig-core-schema.xsd" uri="w3-dsig/w3-2000-09-xmldsig-core-schema.xsd"/>
</catalog>

You also need (and this is the third file I mentioned) a binding.xml file which, among many other things, describes how the schemas to be compiled map to your project’s structure/hierarchy. For instance, you can say that the Java classes created from the schema w3-2000-09-xmldsig-core-schema.xsd should be packaged as org.w3.xmldsig. An example of such a file is below:

<jxb:bindings version="2.1"
    xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc">

    <jxb:bindings schemaLocation="rest-ds.xsd" node="//xs:schema">
        <jxb:schemaBindings>
            <jxb:package name="com.sun.silverlining.identity"/>
        </jxb:schemaBindings>
    </jxb:bindings>

    <jxb:bindings schemaLocation="wssecurity-200401-secext-1.0.xsd" node="//xs:schema">
        <jxb:schemaBindings>
            <jxb:package name="oasis.wss.secext"/>
        </jxb:schemaBindings>
    </jxb:bindings>

    <jxb:bindings schemaLocation="wssecurity-200401-utility-1.0.xsd" node="//xs:schema">
        <jxb:schemaBindings>
            <jxb:package name="oasis.wss.utility"/>
        </jxb:schemaBindings>
    </jxb:bindings>

    <jxb:bindings schemaLocation="liberty-idwsf-utility-v2.0.xsd" node="//xs:schema">
        <jxb:schemaBindings>
            <jxb:package name="liberty.util"/>
        </jxb:schemaBindings>
    </jxb:bindings>

</jxb:bindings>

Don’t forget to instruct NetBeans to look for those newly generated classes. For this, in NetBeans, open the properties of your project and select Sources in the Categories pane. There you should add the folder you declared in the build.xml file (in the above example: build/gen-src).

That’s it!
Building your project should now automatically generate Java classes from the schema and then build the whole project.
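
Once the classes are generated, using them is a matter of standard JAXB calls. Here is a minimal sketch (the context path matches the package declared in binding.xml above; the instance document name is hypothetical):

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

public class UnmarshalDemo {
    public static void main(String[] args) throws Exception {
        // The context path is the package(s) your binding.xml declares
        JAXBContext ctx = JAXBContext.newInstance("com.sun.silverlining.identity");
        Unmarshaller u = ctx.createUnmarshaller();
        // Turn an XML instance document into the generated Java objects
        Object root = u.unmarshal(new File("sample-instance.xml"));
        System.out.println("Unmarshalled: " + root);
    }
}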

News of MD5 weaknesses has been around for a while, but this recent publication goes further by demonstrating how this impacts X.509 certificates and our trust in secure web browsing (a lighter explanation of MD5’s weakness can be read here).
Basically, one can create a rogue CA (certificate authority) certificate that will be trusted by most web browsers. The weakness at the crux of the issue is that it is possible to create collisions (two messages leading to the same hash) with MD5. By extension, one can craft a rogue CA certificate whose hash matches that of an innocuous certificate issued by a trusted root CA (one browsers trust), so the root CA’s signature on the latter is equally valid on the former.

I guess it’s fair to assume that all MD5-based signatures on certificates (or CRLs, for that matter) should be rejected.
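
In Java, by way of illustration, such a check is nearly a one-liner against the certificate’s signature algorithm name (a minimal sketch, not a substitute for proper certificate path validation):

import java.security.cert.X509Certificate;

public class Md5CertCheck {
    public static void requireNonMd5(X509Certificate cert) {
        // e.g. "MD5withRSA" (bad) vs. "SHA256withRSA" (fine)
        String alg = cert.getSigAlgName();
        if (alg.toUpperCase().contains("MD5")) {
            throw new SecurityException("Rejecting certificate signed with " + alg);
        }
    }
}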

Recently, I spent quite some time working with two esteemed colleagues, Susan Landau and Robin Wilton, on a paper we submitted to Financial Cryptography and Data Security 2009. The paper’s title is Achieving Privacy in a Federated Identity Management System and I’m happy to report that we have been accepted (Yay!).

One of the concepts we develop in this paper is one I call privacy in depth: in a parallel to security in depth, privacy must no longer be handled within the realm of a single site (where the data resides). Instead, privacy must be dealt with from a global perspective, both in time (when is data released? for how long? how many times has it been used?) and in space (who uses it, and for what purpose?).

I really like this term since I think it accurately describes an evolution that will have to happen before we lose all confidence in the web’s ability to preserve what’s left of our privacy. I’ll post the paper whenever (if) possible.

As promised in my previous post, here’s an update on the ongoing discussions around OAuth’s signature mechanisms. Basically, the main response (thanks Blaine!) I got (here) revolves around one argument: it’s all about the security/usability tradeoff. In other words, since the weaknesses found in SHA-1 (especially the collision ones) seem hardly exploitable, and support for, say, the SHA-2 family of hash functions is not widespread, it is preferable to stick with SHA-1. To be fair, it is true that the chances of breakage today are slim; the addition of a timestamp makes OAuth fairly robust.

However, I disagree with the approach. SHA-1’s weaknesses are mounting quickly and it will only get easier and easier to break (just look at the evolution here). Weakness to collisions tends to foreshadow weakness to other attacks. For this very reason, and because a standard (and its implementations or deployments) will last years, one must aim at the best security standard available at the time of specification. Also, as mentioned before, SHA-2 is really gaining wide adoption these days.

The other sticking point, IMO, is interoperability. I still think that we should not rely on goodwill from the implementors (no matter how great they are) to guarantee interoperability. In all the standards efforts I’ve been involved with, ensuring implementations share a minimal mandatory set so that they can interoperate is a MUST. I think OAuth should strive for this too and thus mandate support for one or more signature algorithms.

I’m convinced the IETF process will help address these concerns. In the meantime, let’s keep the discussion going.

I had an interesting discussion with security experts inside Sun. The topic was OAuth and the way digital signatures are defined there. One issue that was raised is the fact that (besides PLAINTEXT) only HMAC-SHA1 and RSA-SHA1 are defined in the specification, while NIST recommends dropping those in favor of SHA-2 algorithms…
Why not use RSA-SHA256 at a minimum and recommend switching to the winner of the SHA-3 competition whenever practical (granted, it’s gonna take a while…)?

Judging from the discussion on the OAuth mailing list prior to the release of the core spec, it seems that the main concern was the lack of support for, say, SHA-256. This is a bit surprising to me, for two reasons:
(1) I believe that when creating a specification we should not settle for the lowest common denominator but rather aim at a level of security that will last a reasonable amount of time (between the time to implement and the time real deployments happen, security weaknesses will only get worse). This is even more important for specs, like OAuth, that seem destined for a bright and long future.
(2) It seems to me that RSA-SHA256 is available in most languages used in web development (Java, Perl, PHP, Python…).
So what am I missing?
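
Point (2) is easy to verify for Java at least: both HMAC-SHA256 and SHA-256-based RSA signatures ship with the standard JCE providers, no extra libraries needed. A minimal sketch (the base string is a truncated placeholder, and the key material is generated on the fly purely for illustration):

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class Sha256SignDemo {
    public static void main(String[] args) throws Exception {
        byte[] baseString = "GET&http%3A%2F%2Fexample.com%2F&...".getBytes("UTF-8");

        // HMAC-SHA256: "HmacSHA256" is a standard JCE algorithm name
        SecretKey hmacKey = KeyGenerator.getInstance("HmacSHA256").generateKey();
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(hmacKey);
        byte[] hmac = mac.doFinal(baseString);

        // RSA-SHA256: "SHA256withRSA" is likewise a standard name
        KeyPair rsaKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(rsaKeys.getPrivate());
        sig.update(baseString);
        byte[] rsaSig = sig.sign();

        System.out.println("HMAC bytes: " + hmac.length + ", RSA sig bytes: " + rsaSig.length);
    }
}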

Another issue is that the spec does not mandate support for at least one signature mechanism (leaving all of them optional). This could certainly cause interoperability issues, as two implementations might each be compliant with the specification and still be unable to interoperate (i.e., if the consumer only supports a signature method that happens to be different from the one supported by the provider). A trivial sketch of this failure mode follows.
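
Here, both parties are perfectly compliant, yet their supported method sets simply don’t intersect (the negotiation logic is purely illustrative; the method names come from the OAuth Core spec):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SignatureNegotiation {
    public static void main(String[] args) {
        // Both choices are legal under a spec with no mandatory-to-implement method
        Set<String> consumerMethods = new HashSet<String>(Arrays.asList("HMAC-SHA1"));
        Set<String> providerMethods = new HashSet<String>(Arrays.asList("RSA-SHA1"));

        Set<String> common = new HashSet<String>(consumerMethods);
        common.retainAll(providerMethods);

        // Empty intersection: compliant on both sides, but no way to talk
        System.out.println("Usable signature methods: " + common);
    }
}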

Hopefully, these issues will be addressed during the work done at the IETF, now that the spec is headed there. More to come as I dive deeper into OAuth and discuss this with the OAuth experts.