PHP, Zend Framework and Other Crazy Stuff
Archive for August, 2013
Stateful vs Stateless CSRF Defences: Know The Difference
Aug 12th
Scanning the blogs today, I noticed an article discussing a method of implementing Stateless CSRF protection. Stateless CSRF defences are required in applications where the user has no session. That might sound a bit weird, but not all applications require sessions and their architecture may be such that they do not synchronise session data across servers.
The difference between Stateful and Stateless CSRF defences is that the former requires storing the CSRF token on the server (i.e. session data) while the latter does not, i.e. the server has zero record of any CSRF tokens. As far as the server is concerned, the number of parties with persistent knowledge of a valid token is reduced to just one - the client.
Note: I say “persistent knowledge” because you can implement a Stateless CSRF defence in one of two ways, which differ only in who generates the CSRF token. The server can generate the token (the simplest option for any PHP programmer), communicate it to the client, and then promptly forget about it. Alternatively, the client can itself generate the CSRF token using Javascript or, if the client is itself a server, whatever programming language is in use. Anything relying on Javascript has security implications due to the risk of Cross-Site Scripting, so leaving it to servers or non-Javascript client programming is suggested.
Let’s compare both types of CSRF protections.
Stateful CSRF Defence: Synchronizer Token Pattern
Most frameworks I can think of rely on Stateful CSRF defences so you’re all familiar with the process. The server will generate a random CSRF token. The token is stored on the server as part of the user’s session data and then communicated to the user as a hidden form field whenever they request a page containing a form. It might also be passed as a header or in some other format for use in Javascript requests. If it ever occurs to you to just use the session ID as the CSRF token, please give yourself a head slap.
On all POST requests, the user will submit a valid CSRF token as part of the POST data. The server completes the CSRF defence by ensuring that the submitted token is identical to the stored copy. Unless the attacker can successfully guess the token, their CSRF attacks are rendered useless.
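For illustration, here’s a minimal sketch of the Synchronizer Token Pattern in plain PHP. The function names and the csrf_token field name are my own invention, not any particular framework’s, and hash_equals() only arrived in PHP 5.6, so older versions need a hand-rolled fixed-time comparison.

```php
<?php
session_start();

// Generate a random token once and keep the authoritative copy in the session.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(32));
}

// Embed the token in any form as a hidden field.
function csrf_field()
{
    return '<input type="hidden" name="csrf_token" value="'
        . htmlspecialchars($_SESSION['csrf_token'], ENT_QUOTES, 'UTF-8') . '">';
}

// On POST, check the submitted token against the stored copy.
function csrf_validate()
{
    return isset($_POST['csrf_token'], $_SESSION['csrf_token'])
        && hash_equals($_SESSION['csrf_token'], (string) $_POST['csrf_token']);
}
```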
Stateless CSRF Defence: Double Submit Pattern
In a stateless CSRF defence, what’s really important is that requests are verified as being initiated by the client. We accomplish this by using the Double Submit method of preventing CSRF.
In a Double Submit, the client submits two tokens back to the server. The first is submitted in the POST data, the second is sent as cookie data. Since the attacker has no idea what cookie data your browser holds and can’t set cookie values in a CSRF attack, all they can do is guess what token to inject into the POST data - and that guess won’t agree with the cookie token. The server will compare the token contained in the POST data with the token from the cookie data and check that they are identical.
Simply put, we’ve switched the token storage location. Stateful CSRF defences involve storing it in session data on the server while Stateless defences need the client to store it in a cookie (with the HttpOnly flag enabled and the cookie suitably limited by subdomain or domain). Either way, we end up with the same results - two tokens for comparison, of which one always remains unknown to the attacker.
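A minimal sketch of the Double Submit approach with a server-generated token might look like the following. The cookie name, domain handling and field names are illustrative only, and again hash_equals() assumes PHP 5.6+.

```php
<?php
// Server generates the token, hands it to the client as a cookie, and keeps no copy.
$token = bin2hex(openssl_random_pseudo_bytes(32));
setcookie('csrf_token', $token, 0, '/', '', true, true); // Secure + HttpOnly

// The same token also travels in the form as a hidden field.
echo '<input type="hidden" name="csrf_token" value="'
    . htmlspecialchars($token, ENT_QUOTES, 'UTF-8') . '">';

// On a later POST, the server only compares the two submitted copies.
$valid = isset($_POST['csrf_token'], $_COOKIE['csrf_token'])
    && hash_equals($_COOKIE['csrf_token'], (string) $_POST['csrf_token']);
```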
Stateful Stateless CSRF Tokens
In the blog post I read this morning, there were two fundamental problems in the proposed stateless defence. The first was assuming that WordPress’ nonce feature makes a good anti-CSRF defence model (the function doesn’t actually generate good tokens, let alone real nonces). The second was not recognising its departure from the Double Submit approach.
The method suggested in the blog post generates a CSRF token by hashing together inputs which include server time in seconds, a user ID, a validity period in seconds, and a textual element like an action description. These elements are all determined by the server and they share something in common - they represent server knowledge or state. This is NOT stateless. You are still directly reliant on the server storing inputs to the token generation routine which may well be identical per user or determined by publicly accessible user information (e.g. IP address).
If you assess each input to the token individually… Time is predictable and linear. User IDs may have a limited range, may have been previously enumerated using a timing attack against your login logic, or may be non-existent in a truly stateless application. Text descriptions are probably fixed and may be either known in advance or subject to brute forcing or harvesting. This tells the tale of an extremely bad and dangerous generation mechanism, vulnerable to brute forcing due to a lack of sufficient entropy. CSRF tokens MUST be random. If they are not randomly generated, then you are doing something wrong. In this case, the convenience of a reconstructable token overrode the need to secure tokens against brute forcing.
The token is submitted with POST request data (single submit - not a double submit) and the server must then reconstruct the token in order to perform a comparison. The reconstruction requires that the server have some shared state or data harvesting which acts like a seed. Crack the seed and the defences for all users will be utterly devastated.
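By way of contrast, generating a token with plenty of entropy is a one-liner in PHP (assuming the OpenSSL extension is available; random_bytes() is the PHP 7+ equivalent):

```php
<?php
// 32 bytes from a CSPRNG = 256 bits of entropy, hex-encoded for transport.
$token = bin2hex(openssl_random_pseudo_bytes(32));
```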
Double Submit and Random Tokens Are Inseparable Partners
Like most attacks, CSRF does not exist in isolation, so developing a good defence requires mitigating other attacks. CSRF tokens need to resist brute forcing by carrying sufficient entropy to be genuinely random, they must be stored securely, and you must never share tokens between HTTP and HTTPS sessions. Any good CSRF token implementation, whether stateful or stateless, should reflect those requirements with features for limiting tokens by scope and time.
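One possible sketch of what limiting tokens by scope and time could look like for a stateful implementation follows. The storage layout and function names are purely illustrative, not taken from any framework.

```php
<?php
session_start();

// Issue a token tied to one form (the "scope") with a short expiry window.
function csrf_issue($scope, $ttl = 600)
{
    $token = bin2hex(openssl_random_pseudo_bytes(32));
    $_SESSION['csrf'][$scope] = array('token' => $token, 'expires' => time() + $ttl);
    return $token;
}

// Validate and discard the token so it cannot be reused for another scope.
function csrf_check($scope, $submitted)
{
    if (empty($_SESSION['csrf'][$scope])) {
        return false;
    }
    $record = $_SESSION['csrf'][$scope];
    unset($_SESSION['csrf'][$scope]);
    return time() <= $record['expires']
        && hash_equals($record['token'], (string) $submitted);
}
```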
BREACH Attacks: Extracting HTTPS Encrypted Data In Under A Minute Without Encryption Cracking
Aug 8th
Welcome to Black Hat Conference Season…
Last week, news started to spread from the Black Hat conference about a new oracle attack (called the BREACH attack) against HTTPS which may allow an attacker to guess desirable values contained within DEFLATE-compressed responses in a very short time, typically under a minute according to the presenters.
We call this an “oracle attack” because we’re not attempting to crack the HTTPS encryption and read the content directly. Instead, we monitor the compressed size of the content, which fluctuates depending on what is being compressed, thus leaking information about what was encrypted. In a sense it’s also a side-channel attack on the compression algorithm. Another example of a side-channel attack is using the time it takes a server to compare strings to enumerate valid usernames and emails known by an application, all without accessing the database directly - some frameworks include a fixed-time string comparison function for this very reason.
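For the curious, a fixed-time comparison typically looks something like the following hand-rolled sketch (hash_equals() in PHP 5.6+ does the same job natively; the function name here is my own):

```php
<?php
// Compare two strings in time that depends only on their length,
// not on the position of the first differing byte.
function fixed_time_equals($known, $user)
{
    if (strlen($known) !== strlen($user)) {
        return false;
    }
    $result = 0;
    for ($i = 0, $len = strlen($known); $i < $len; $i++) {
        $result |= ord($known[$i]) ^ ord($user[$i]);
    }
    return $result === 0;
}
```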
BREACH is shorthand for Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext. Someone went to a lot of trouble inventing a name to fit that acronym. As the name suggests, though it’s not always made clear in current articles online, the attack isn’t just good for HTTPS - it can also work against HTTP in situations where attackers cannot get hold of the content but can access a good record of the metadata for responses. If you’re an NSA employee this may be just a few keystrokes away.
Edit: Anthony Ferrara (ircmaxell) has posted his own thoughts on the BREACH attack: http://blog.ircmaxell.com/2013/08/dont-worry-about-breach.html
Note: If you do some more research about BREACH attacks, you’ll notice the prominence of CSRF token references. CSRF tokens tend to be weakly controlled in applications and frameworks, i.e. a single token per session without limited use scopes or narrow expiration times. A compromised CSRF token could conceivably be valid for a user across an entire site, for every single form, so long as their session remains open. That said, the targeted data is not exclusively tokens - email addresses, real names, credit card information, order numbers, delivery/business addresses and pretty much ANYTHING, including personally identifiable information, that you want hidden by HTTPS encryption could also be targeted.
The attack itself isn’t that hard to understand. The data compression algorithm we call DEFLATE (the basis of gzip) uses the LZ77 algorithm which takes advantage of repeated strings to more efficiently compress output. The more repeating characters there are, the smaller the compressed output becomes. This holds true regardless of whether the compressed content is HTTPS encrypted or not.
If an attacker can inject a string into a HTTPS response intended to match another unknown string (the target secret), they can iteratively guess the secret value by monitoring the compressed size of the responses for different guesses. The more correct a guess is (i.e. the more sequential characters it matches at either the beginning or end of the secret), the more efficiently LZ77 can compress the content, and the smaller the response size becomes. In hindsight this appears obvious, but we’ve never had a concrete proof of concept targeting content bodies in encrypted responses before now.
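You can see the effect for yourself with a few lines of PHP. This is a contrived illustration of the principle, not an actual attack; the token value and markup are made up.

```php
<?php
// A response body containing a secret token and a reflected attacker "guess".
$secret = 'csrftoken=7a9fd3c1b2';

foreach (array('csrftoken=7a9fd', 'csrftoken=kwxyz') as $guess) {
    $body = '<a href="/?q=' . $guess . '">search</a><form>' . $secret . '</form>';
    // The closer the guess matches the secret, the longer the LZ77 back-reference
    // and the smaller the DEFLATE output tends to be.
    echo $guess, ' => ', strlen(gzdeflate($body)), " bytes\n";
}
```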
I’ll save you the trouble of a long read if you’re short on time. The only complete, surefire defence against BREACH attacks is to disable HTTP compression, i.e. mod_deflate for Apache and the gzip module for nginx. This may have significant performance implications but there is no other known comprehensive defence. I’ll mention some other possible solutions later in this post but all of them have their weaknesses and limitations. BREACH is basically a fundamental flaw in HTTP - any permanent solution will need to come from the HTTP layer.
Attack Requirements
The attacker needs three capabilities to pull off a BREACH Attack:
1. The ability to read responses received by the user’s browser.
2. The ability to cause the user to send requests from their browser.
3. Some part of the request must be reflected in the response.
The first is an eavesdropping ability that might be gained using ARP poisoning or something more concrete like cable splitting. Some attackers may have specialised rooms at ISPs across the planet funded by three letter agencies. Note that this does not require cracking any HTTPS encryption - we just need a way to collect data on content sizes from responses.
The second can be accomplished using the time honoured method of screwing with the user by having them visit an attacker controlled website containing iframes or Javascript. Either method should allow the attacker to have the user spawn thousands of requests with attacker controlled parameters completely unnoticed by the user.
The third is a question of application design. A multipage form, for example, might carry across extra hidden fields containing attacker injected strings (bad validation). A search form might redisplay the search terms on the page. A messages tab would redisplay submitted messages. There are lots of valid direct and indirect (via database) reflections of user data in responses, of which some will be perfectly normal and others the result of security weaknesses.
Other Factors To Consider
It’s important to note that HTTPS encryption has little to no impact on the viability of this attack. BREACH works on all SSL/TLS versions and cipher suites. Some ciphers actually make it easier and others make it harder. None pose insurmountable problems, however, since attackers can adjust the makeup of the injected guesses, the number of measurements and the scoring of the results to filter out the impact of the most difficult ciphers. The attack can only become more effective over time.
The LZ77 algorithm does not operate alone - it has a partner called Huffman coding within DEFLATE. This coding pollutes the compression measurements since it sometimes prevents repeated strings from providing compression efficiencies under LZ77. This is actually simple to solve using a bit of arcane spellcasting (literally just a bit of string padding) and using twice the number of measurements as LZ77 would require were it an attacker’s sole concern.
The attacker will also need to “bootstrap” the attack by having a starting string to compare against. For example, the string “csrftoken=” can be known in advance if it appears in the target site’s output (CSRF tokens can be used in GET URLs, not just forms). This quickly becomes more difficult for data delimited by tightly controlled characters; for example, a CSRF token in a form will be delimited with quotes as part of the form markup. Unless you are injecting user data into markup without escaping it, the attacker won’t be able to inject a matching bootstrap string (the quote would be escaped), which limits their ability to perform attacks.
They could try something else - perhaps you pad tokens with something predictable or guessable, perhaps you inject some user data into attributes so they end up quoted anyway. For example, a token prefixed with a time element would be predictable. Attackers could also blind guess the first few characters but the math isn’t favourable under those conditions. There may be other scenarios, e.g. attribute reflection, where the quotes can still be used though we hope reflecting user data in an attribute is rare given it’s an obvious Cross-Site Scripting risk.
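As a reminder, the escaping that blocks a quote-bearing bootstrap string is the same output escaping you should already be applying against XSS. A trivial, framework-agnostic example (the parameter name is made up):

```php
<?php
// Reflecting a user-supplied search term back into markup.
$search = isset($_GET['q']) ? $_GET['q'] : '';

// ENT_QUOTES turns " into &quot; (and ' into &#039;), so the attacker cannot
// line an injected guess up against a quote-delimited token in the same page.
echo '<input type="text" name="q" value="'
    . htmlspecialchars($search, ENT_QUOTES, 'UTF-8') . '">';
```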
Application Weaknesses and Defences
From the above, we can ascertain that an attacker needs the right application behaviour in which to develop this attack into a viable threat:
1. Responses must be served from a server which has HTTP compression enabled.
2. Some part of the user request must be reflected within a response so guesses are compressed with the targeted information.
3. The response body must contain some information desired by the attacker.
4. While not strictly necessary, it would be nice for the response to have as little noise (changing content) as possible.
We can also deduce likely defences against BREACH attacks from the same list:
1. We can disable HTTP compression altogether.
Since the BREACH Attack relies on compression to execute a side-channel attack, it’s sort of obvious that disabling mod_deflate for Apache or the gzip module for nginx stops this attack dead in its tracks. This is basically the standard recommended defence at this time, pending something provably better. You may not like the performance implications, and the attack may not yet be widespread, but don’t doubt that there are blackhats and criminals out there working on engineering easy-to-deploy BREACH attacks as you read this. The theory is so simple that I wouldn’t expect it to take more than a few hours to prototype - the delays may come from automating it effectively and then figuring out what to charge the criminal markets for it. The attack takes just a few thousand requests, which can be completed in under a minute - the time and number of requests increase with the length of the targeted value, but only linearly.
2. We can prevent the direct and indirect reflection of user input in responses.
Direct reflection is simply dumping data from the request parameters straight into the response. Indirect reflection just means it takes a scenic route, e.g. via the database or a third party API. It’s impossible to eradicate all user data reflection - the very notion is silly. However, you can be wary of what you are reflecting back and whether it is strictly necessary. Reducing the attack’s surface area is better than doing nothing at all. For example, monitor how you transfer hidden values across multipage forms (discard invalid parameters). The attacker will need to meddle with request URIs and/or form encoded data so your validation may catch some of what they’re attempting. This will be very hit and miss unless you simply stop users from submitting and storing any data!
3. We could randomly adjust the length of responses so compression output size also becomes randomised.
Length hiding is a common enough defence that you’ll see quoted everywhere in the wild for lots of security-related topics. Sadly, people forget that if you take sufficient measurements of anything that has both a fixed and a random element to its length, there’s this annoying thing called “standard error” which stubbornly insists on being inversely proportional to the square root of the number of measurements. Yet more statistical arcane spellcasting by hellspawn! The more you measure, the more the length hiding is averaged out until it’s rendered pointless. It will force the attacker to make more requests (which means more time and coffee trips) in their BREACH attack, but that’s all it will accomplish.
4. We could prevent the compression of secrets and other desirable data in responses.
You can mutate certain compressed secret data in such a way that BREACH attacks can’t iteratively uncover it (i.e. masking). For example, you can concatenate a per-request pad with the XOR of that pad and the actual data (see the sketch after this list). This shifts the data on every request, making a BREACH attack against it impossible. Obviously, this will NOT work for data which is meant to be user readable. It’s also probably simpler just to generate a unique value per request and be done with it for tokens and other such information - it’s probably going to break stuff for users with multiple browser tabs either way.
5. We could rate limit requests to the server.
The BREACH attack requires thousands of requests so rate limiting may be helpful. The attack unfortunately doesn’t really need a massive number of requests - a couple of thousand in the space of a minute isn’t as big as you might think - so this may or may not be realistic. High traffic sites would barely notice such a surge and it may even be expected as a result of normal user interaction.
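Returning to defence 4, here’s a hedged sketch of the masking idea, where a fresh pad is generated per response and XORed with the real token before being sent (function names are illustrative, not a reference implementation):

```php
<?php
// Mask a secret token with a one-time pad so the bytes in each response differ,
// even though the underlying token stays the same between requests.
function mask_token($token)
{
    $pad = openssl_random_pseudo_bytes(strlen($token));
    return bin2hex($pad) . bin2hex($pad ^ $token); // pad || (pad XOR token)
}

// Recover the original token from a masked value before validating it.
function unmask_token($masked)
{
    $raw  = hex2bin($masked);
    $half = strlen($raw) / 2;
    return substr($raw, 0, $half) ^ substr($raw, $half);
}
```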
As it stands, disabling HTTP compression is the simplest and most effective solution. For those applications/frameworks emitting CSRF tokens in GET URLs, defence 4 is a bare minimum at this point - defence in depth dictates doing something unless you have a crystal ball to predict all possible user-written code. CSRF tokens in forms should have quote delimitation, making BREACH attacks, we hope, more difficult, but they should also be checked to ensure there is no common or predictable token padding that would help bootstrap these attacks. The community consensus may well move against single per-session CSRF tokens altogether, despite the user impact. This is likely the most immediate concern for ALL frameworks with form capabilities.
Want to read more? There’s a BREACH attack website online by the paper authors with a link to the original whitepaper.
Related articles
- Step into the BREACH: New attack developed to read encrypted web data (go.theregister.com)