Everywhere you look on the internet today, the chances are good that you’ll see an API-driven application. And where you see an API, the chances are even better that there’s an OAuth 2.0 authorization server protecting that API. A standard published in 2012, the OAuth 2.0 delegation protocol has been a gift to software developers across the web: it provides a powerful and flexible standardized security mechanism which, in contrast to many security protocols of the past, is relatively easy to get right. As a consequence, it has become wildly successful, and is an essential tool in the toolbox of a modern API developer.
But here in 2019, we’re really starting to see the edges of what OAuth is good at. There have been many extensions to OAuth 2.0: some protect specific parts of the process, like PKCE for mobile apps, JAR/JARM for front-channel protections, and PoP/MTLS for token presentation; others add entirely new ways to do OAuth, such as the device flow and CIBA. The prevalence of such extensions speaks to OAuth’s flexibility, but the landscape is getting harder for developers to navigate. New styles of client and server deployment, new security threats, and new expectations from both end users and developers have all pushed the OAuth framework into spaces it was never designed for, and the results can be overwhelming. Over the last few years, I’ve been looking at this landscape of options and attempting to draw out the commonalities of the various OAuth 2.0 extensions into a new abstraction that learns the lessons of OAuth 2 without carrying its hard-earned baggage.
Some of OAuth’s biggest problems come from its overuse of the front channel for passing essential security information. While front-channel redirects were an essential innovation in allowing interactive user consent, the way OAuth 2 uses these redirects leaves it open to a variety of alteration, injection, and data-leakage attacks. This raises the question: does the protocol need to put all of that information into the redirects in the first place? We can get around this by using the “Intent Registration” protocol design pattern. If the client makes a back-channel call directly to the authorization server first, all of the sensitive information for the authorization request can be passed directly between the client and the AS without going through the browser. The AS can then return a reference for the client to pass through the browser: a handle that represents the sensitive information rather than carrying it.
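As a rough sketch of the intent-registration pattern, here is what the back-channel exchange might look like. The endpoint behavior, field names, and `handle` shape below are my own illustrative assumptions, not a fixed wire format:

```python
import secrets
from urllib.parse import urlencode

# Stand-in for the AS's server-side transaction store.
TRANSACTIONS = {}

def as_transaction_endpoint(request):
    """Mock back-channel endpoint: the AS stores the full authorization
    request server-side and returns only an opaque reference."""
    handle = secrets.token_urlsafe(16)
    TRANSACTIONS[handle] = request
    return {"handle": handle}

def front_channel_url(interact_endpoint, handle):
    """Only the opaque handle ever travels through the browser."""
    return f"{interact_endpoint}?{urlencode({'handle': handle})}"

# The client's sensitive request goes directly to the AS, never via redirect.
request = {
    "resources": [{"actions": ["read"], "locations": ["https://api.example/photos"]}],
    "redirect_uri": "https://client.example/callback",
}
response = as_transaction_endpoint(request)
url = front_channel_url("https://as.example/interact", response["handle"])
# The URL the browser sees carries no resource details and no redirect URI.
```

The point of the sketch is the asymmetry: everything sensitive stays on the back channel, and the front channel carries only an unguessable reference.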
This step creates a transaction between the client and authorization server which can be augmented, mutated, and adapted over time as new information is made available and decisions are made. If the AS decides from this first call that it doesn’t need to talk to the user, then it can return an access token immediately. If it instead decides that interaction is needed, the client and AS can signal to each other the ways that the interaction can take place. In OAuth 2, each of these paths has to be managed and chosen up front through selecting the grant type, response type, scopes, and extensions to the OAuth 2 protocol in play.
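A minimal sketch of that branching decision on the AS side, with field names that are assumptions for illustration rather than a defined protocol:

```python
import secrets

def process_transaction(request, user_already_consented):
    """Illustrative AS logic: from the first back-channel call, either
    issue an access token immediately or signal to the client the ways
    that interaction can take place."""
    if user_already_consented:
        # No interaction needed: return the token right away.
        return {"access_token": {"value": secrets.token_urlsafe(24)}}
    # Interaction needed: hand back a transaction handle plus the
    # interaction mechanisms the AS supports for this request.
    return {
        "handle": secrets.token_urlsafe(16),
        "interaction": {
            "redirect": "https://as.example/interact/" + secrets.token_urlsafe(8),
        },
    }

granted = process_transaction({"resources": []}, user_already_consented=True)
pending = process_transaction({"resources": []}, user_already_consented=False)
```

Because both outcomes flow from the same transaction endpoint, the client no longer has to pick a grant type up front; the negotiation happens as the transaction unfolds.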
Since we’re pushing information up front, this transactional approach gives us a chance to describe resources more richly than OAuth 2’s scope strings allow. We also have a chance for the client to present and bind keys to the transaction, allowing a stronger association with the client software even when using ephemeral keys, as with mobile applications and single-page apps. Instead of basing all of our trust on a pre-registered client system, which is what OAuth 2 is optimized for, we can base our trust on a variety of policy mechanisms that fit different types of applications much better.
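To make the contrast concrete, here is a hypothetical rich resource description alongside the flat scope string it would collapse to in OAuth 2. Every field name here is illustrative:

```python
# A structured resource description plus an ephemeral client key bound to
# this transaction, instead of a flat scope string and a pre-registered
# client. (All field names are assumptions for the sake of the sketch.)
transaction_request = {
    "resources": [{
        "type": "photo-api",
        "actions": ["read", "write"],
        "locations": ["https://api.example/photos"],
        "datatypes": ["metadata", "images"],
    }],
    "keys": {
        "proof": "httpsig",  # how the client will prove key possession
        "jwk": {"kty": "EC", "crv": "P-256",
                "x": "(ephemeral public key material)",
                "y": "(ephemeral public key material)"},
    },
}

def flatten_to_scopes(resources):
    """For comparison: the lossy projection down to OAuth 2 style scopes."""
    return " ".join(f"{r['type']}.{a}" for r in resources for a in r["actions"])

scopes = flatten_to_scopes(transaction_request["resources"])
```

The flattened form loses the locations and data types entirely, which is exactly the information a richer request format lets the client and AS negotiate over.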
Finally, we have an opportunity to use these patterns and capabilities in a space beyond authorization delegation. For example, let’s say you have a process where most of the time you want your API calls to function without hindrance, but once in a while, you need to interact with the user to get additional information, like an updated credit card number. This transactional pattern allows us to rethink how we approach authorization and access to our APIs.
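The credit-card scenario might look like the following sketch, where the API itself can hand the client back into the transaction when user input is needed. The response shape is hypothetical:

```python
def charge_card(access_token, card_on_file_valid):
    """Hypothetical API: normally the call just succeeds, but when the
    stored card has expired the API signals that user interaction is
    needed and points the client at a place to send the user."""
    if card_on_file_valid:
        return {"status": "charged"}
    return {
        "status": "interaction_required",
        "interact": {"redirect": "https://as.example/update-card"},
    }

ok = charge_card("token-1", card_on_file_valid=True)
needs_user = charge_card("token-1", card_on_file_valid=False)
```

Most calls take the first branch and never involve the user; only the exceptional case drops into an interactive step, reusing the same transaction machinery.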
The XYZ project (with information available at https://oauth.xyz/) is a concrete proposal for how we could do all of these things in a new protocol that is based on the concepts of, but not compatible with, OAuth 2.0. I’ve presented XYZ at the Identiverse Conference (https://www.youtube.com/watch?v=U9i7YaN8v9c) and the IETF OAuth Working Group (https://www.youtube.com/watch?v=TE3Fzb5-Jz0&t=3764), and the conversation is starting to pick up in the IETF with the TXAuth mailing list (https://www.ietf.org/mailman/listinfo/txauth).
The IETF will be discussing this at the TXAuth BoF session in Singapore on Monday, November 18, 2019. You can participate remotely via the links for TXAuth at https://datatracker.ietf.org/meeting/agenda/#txauth.