This post is more about perception vs reality than anything else. When it comes to application security, we like to imagine that the steps we take to protect ourselves are unassailable bastions, interlocked to poke sharp things at incoming attackers. What we don’t like is knowing that those bastions are always at risk of being undermined in numerous unexpected ways. The consistent reaction among programmers is the same – we pretend the bastions are completely unassailable no matter what, grasping at any excuse necessary. Reality isn’t always a factor, in my experience.

OAuth 2.0 is the next version of OAuth. It’s always great to see a good thing get better, but the new version started off with an oddity inherited from OAuth WRAP: it removed the requirement for cryptographic signatures. Anyone who has skirmished with OAuth 1.0 has probably found the signature requirement a PITA. It’s poorly specified, prone to language-specific implementation errors, and difficult to debug. OAuth 2.0 does away with it for those very reasons, replacing digital signatures with a requirement that OAuth 2.0 operate over SSL/TLS between web servers. In this way, requests cannot be intercepted, altered or replayed by a Man-In-The-Middle (MITM), rendering digital signing obsolete. Simple as pie.
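
To make the trade-off concrete, a bearer style OAuth 2.0 request from a web server reduces to something like the PHP sketch below. The header form follows the current drafts and may yet change, and the endpoint and token are made up. The point is that the access token is the sole credential, so everything rests on the TLS connection actually being verified:

```php
<?php
// A minimal sketch of a bearer token request (illustrative endpoint and
// token). The token alone authorises the call; only TLS protects it.
$ch = curl_init('https://api.example.com/resource');
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Authorization: OAuth vF9dft4qmT', // access token, per current drafts
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true); // the crucial bit
$response = curl_exec($ch);
```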

Very recently, a similar proposal was raised in relation to Pubsubhubbub, which supports optional cryptographic signatures to prevent potential vulnerabilities in scenarios where topic updates must be verified as coming from a trusted source. The new Pubsubhubbub measure would have dropped any need for an all-encompassing cryptographic signature covering both the topic body and its headers (currently headers are not signed, which is problematic for future use) in favour of requiring SSL/TLS support on both sides (i.e. both Hubs and Subscribers). This was inspired by the bearer token over TLS approach currently required when using OAuth 2.0 from web servers. Technically it’s a simple, effective solution.
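
For context, the existing optional mechanism is tiny: the Hub sends an X-Hub-Signature header carrying an HMAC-SHA1 of the raw request body, keyed with the secret agreed at subscription time. A minimal Subscriber-side sketch (as I read the spec – and a constant-time string comparison would be wiser in practice):

```php
<?php
// Verifying a Hub's X-Hub-Signature header. It covers the body only --
// which is exactly the limitation noted above regarding headers.
$secret  = 'secret-agreed-at-subscription';
$rawBody = file_get_contents('php://input');
$header  = isset($_SERVER['HTTP_X_HUB_SIGNATURE'])
         ? $_SERVER['HTTP_X_HUB_SIGNATURE'] : '';

$expected = 'sha1=' . hash_hmac('sha1', $rawBody, $secret);

if ($header !== $expected) {
    // Mismatch: acknowledge receipt but discard the update rather
    // than trusting it
    exit;
}
```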

However, both of these have the same problem: SSL/TLS offers a perception of unassailable security at odds with reality. The reality is harsh. SSL/TLS from a browser is, for most purposes, rock solid. Your connections are secure, and if Firefox stumbles over an SSL certificate it can’t validate or doesn’t immediately trust, it warns you, giving you the option of allowing an exception. This all works because we don’t each build our own browser from the ground up. The small number of browsers, and the expertise dedicated to each, ensures SSL/TLS works as it should. Now turn that view on web applications sitting on a web server. Here we find more than a couple of discouraging trends.

The first is that setting up SSL/TLS is, for at least some percentage of people, difficult. Servers are misconfigured, SSL certificates are reused across different domains, and certificates vary between self-signed ones and those signed by a trusted party which must be paid. It’s a bit on the messy side, and mistakes are common. This was one reason why I objected to the Pubsubhubbub proposal – Subscribers means anyone wishing to receive topic updates, which is, well, everyone. The chances of everyone being willing, or even able, to set up SSL/TLS for their websites are small.

The second is that HTTP client libraries may exhibit insecure behaviour by default. The simplest example here is PHP itself, where the default SSL context sets the verify_peer option to FALSE. Anyone building an SSL/TLS based protocol implementation in PHP without realising this (which people do) will fail to verify the SSL certificates their client encounters. By contrast, curl has SSL peer verification enabled by default. Insecurity by default is a nightmare.
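
To spell that out, making a PHP Streams based HTTPS request verify the server means setting the context options yourself. A minimal sketch, with an assumed CA bundle path and hostname:

```php
<?php
// Explicitly enabling SSL peer verification for PHP Streams, since
// verify_peer defaults to FALSE. The cafile path and hostname are
// assumptions -- substitute whatever applies to your system.
$context = stream_context_create(array(
    'ssl' => array(
        'verify_peer'       => true,  // reject unverifiable certificates
        'cafile'            => '/etc/ssl/certs/ca-bundle.crt',
        'allow_self_signed' => false, // refuse self-signed certificates
        'CN_match'          => 'api.example.com', // certificate must match host
    ),
));

$response = file_get_contents('https://api.example.com/resource', false, $context);
```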

The third, related to how hard and error-prone it is to get SSL set up correctly, is the all too common practice of dealing with SSL certificate problems by deliberately disabling SSL certificate verification in client libraries. Great for testing, bad for production. This practice is widespread because it ensures HTTPS requests will still work regardless of the state of the SSL certificate employed by the server. Sure, you’ll get a valid, error-free response – but from whom?
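
You’ve almost certainly seen (or written) the curl flavour of this workaround. Two lines, and the “S” in HTTPS becomes decorative:

```php
<?php
// The anti-pattern: any certificate, from anyone, is now acceptable.
$ch = curl_init('https://api.example.com/resource');
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // skip certificate validation
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);     // skip hostname checks
```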

These three issues combine to paint a picture in which SSL/TLS can be significantly broken on the web. This is hardly news. So hearing about SSL/TLS requirements in commonly used protocols raises a simple question: how does it improve security when it so obviously conflicts with commonplace practice?

This is why protocols such as OAuth 2.0 and Pubsubhubbub need to tread carefully. Mandating the use of SSL/TLS introduces a single point of failure that will fail. It’s guaranteed to fail. It’s already failing. For anyone left vulnerable by such a failure – say, by an open source library in common use that skips certificate checks – an attacker can just walk right in with a Man-In-The-Middle attack. Sure, you’re using HTTPS, but if you’re not verifying certificates you have no guarantee that you are communicating with the intended trusted server.

Compare this to digital signatures. You have a shared secret that is not known to any potential MITM. You have a signature to verify the origin of any request or response. You have a random, non-repeating nonce which varies the signature to prevent both replay attacks and remote timing attacks. You can additionally run it all over SSL/TLS if you want, secure in the knowledge that John Doe’s PHP Streams based HTTP client will still work securely even if SSL certificate verification remains disabled by default. And best of all? You can’t optionally disable it! Either you implement it, or nothing will work.
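
Stripped of any particular specification’s details, the signing side amounts to something like the sketch below. The base string format is illustrative – loosely in the spirit of OAuth 1.0, not any normative form:

```php
<?php
// A minimal sketch of HMAC request signing with a nonce. The shared
// secret never travels over the wire, so a MITM cannot forge or alter
// the request, and the nonce defeats replays.
$sharedSecret = 'kd94hf93k423kf44'; // known only to client and server
$nonce        = bin2hex(openssl_random_pseudo_bytes(16)); // random, never reused
$timestamp    = time();

// Illustrative base string: method, URI and parameters bound together
$baseString = implode('&', array(
    'POST',
    rawurlencode('https://api.example.com/resource'),
    rawurlencode('nonce=' . $nonce . '&timestamp=' . $timestamp),
));

$signature = base64_encode(hash_hmac('sha1', $baseString, $sharedSecret, true));
// The signature, nonce and timestamp accompany the request; the server
// recomputes the HMAC with its copy of the secret and compares.
```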

Like practically everything in security, it’s a choice: secure the protocol in depth, or make it easier to implement. You can’t have both, which is why protocols will always be a PITA to implement when designed with security uppermost in their list of important features. It can’t be helped.

Back to OAuth 2.0: it has been mentioned that the next draft will contain an option to use cryptographic signatures instead of relying solely on SSL/TLS. This is a significant improvement in my opinion, and gives implementers back the ability to freely choose the most appropriate form of security for their APIs. You can all thank Eran Hammer-Lahav for triggering this or, you know, curse him forever when it becomes the standard means of using OAuth 2.0 from a web server and you are faced with implementing it ;).
