PHP, Zend Framework and Other Crazy Stuff
Zend Framework Contributors Mailing-List Summary; Edition #1 (June 2011)
Jun 27th
What’s this nonsense then? Well, a few weeks ago I shot myself in the foot (I was aiming for the cat who spilled coffee all over my desk) and before my sanity returned to normal, I found myself hoodwinked on IRC into writing up weekly summaries of what is discussed in Zend Framework land. The moral of the story is that the attempted murder of any ungrateful coffee-spilling animals sharing your home never ends well.
Let’s see how good a verbose meandering writer can be at summarising things. I decided to refer to myself by name throughout to avoid confusion.
Discussion Time: In ZF2, where do things go?
Ralph Schindler sprang this topic on us back in April and it has stubbornly continued on ever since. Ralph’s initial question boiled down to where should we put resource files, i.e. files utilised by PHP class files but not written in PHP themselves. The two options presented were to store them relative to the class files inside the library directory or store them in a completely separate parallel directory specifically for resources.
Opinions varied quite a bit. Mike Willbanks opined that we should follow PEAR standards rather than doing our own thing and seek to limit include_path performance issues. Matthew Weier O’Phinney noted that include_path performance concerns should be minimal with ZF2's autoloader solution, which he has researched, and that the intention was to use PEAR or Pyrus. Pádraic Brady (I know that name from somewhere!) chipped in that any decision ought to be made independently of the packaging used, referencing possible weaknesses in how PEAR handles installation, unit tests and documentation viewing. Ralph responded to clarify the possible workings of a separate resource directory using simple constants, allowing users to selectively override this, and noted the existence of the Assetic project (used by Symfony 2). Kevin McArthur added a vote for avoiding PEAR, citing the need for multi-version installation support in any final solution, and suggested the PHAR format for consideration.
Short version: Someone will make a choice… eventually.
How to Package ZF2
Pádraic spawned a new thread from the above earlier topic outlining the options available for packaging source code including PEAR, Pyrus, Git and a Symfony related project (now known as Composer). He also reiterated concerns previously raised regarding PEAR/Pyrus. There followed a side discussion on how individuals were actually deploying applications and managing QA and patches. Matthew raised an objection to the concept of centralised multi-version installs of Zend Framework citing alternative solutions such as deploying applications already containing the Zend Framework version required as easing maintenance and uncertainty. He also asked Kevin McArthur to clarify the use of PHAR. Kevin responded to offer an answer as to why centralised multi-installs were useful citing benefits in minimising the APC cache memory (centralised libraries offering minimal chances of having identical copies being cached), and offering an example bootstrap script for such an architecture to manage version selection.
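Kevin's bootstrap script wasn't reproduced on the list in full; a minimal sketch of what such centralised version selection might look like (the directory layout and function name here are hypothetical, not taken from his email) is:

```php
<?php
// Hypothetical helper for a centralised multi-version ZF install,
// where each release lives under its own directory, e.g.
//   /usr/share/php/ZendFramework/1.11.1/library
function selectZfVersion($baseDir, $version)
{
    $library = $baseDir . '/' . $version . '/library';
    if (!is_dir($library)) {
        throw new RuntimeException("Zend Framework $version is not installed in $baseDir");
    }
    // Prepend the chosen version so its classes win over any other
    // copy already on the include_path.
    set_include_path($library . PATH_SEPARATOR . get_include_path());
    return $library;
}
```

An application would call this once in its bootstrap with the version it was tested against, which is also where the APC benefit Kevin mentioned comes from: every application selecting the same version autoloads the same files, so only one copy ends up in the opcode cache.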
Matthew also posted responses to points brought up in respect of Pyrus, noting, among other things, that it was closer to stable than suspected, that centralised multi-versioning was possibly not as popular as believed, that git support could potentially be added independently, and that XML package definitions had a number of advantages. The debate over centralised multi-version installations of Zend Framework continued across a large number of emails without resolution (too much to summarise, other than to note that each side is firmly attached to the benefits of its particular approach, and that multi-versioning proponents seem more numerous than expected). No consensus was reached over the method of installation, with the best summations of the respective opinions being emailed in by Matthew and Kevin McArthur. Pádraic chimed in briefly to prompt adoption of PEAR in preference to Pyrus on the basis that PEAR is already widely adopted, understood and easily manipulated. This was seconded, but there remained a lack of consensus. The topic ends on a suggestive note that Pyrus may be accepted, recognising the absence of another realistic solution at the current time.
Short version: ZF2 may be distributed using Pyrus. Additional needs beyond that may be proposed to PEAR for Pyrus or met via another tool. It’s clear Pyrus will crop up again in a future discussion.
ZF2's View: Some thoughts for discussion
Pádraic Brady dropped an email offering his thoughts on the direction of ZF2's View, which hadn’t seen much feedback on the Wiki. The short version was that Zend_View was a God Class, View Helpers were confusing, integration needed improvement and templates needed additional control over layouts/placeholders. He suggested a couple of steps, including elimination of the ViewRenderer helper, the replacement of View Helpers with a Controller-oriented entity referred to as a “Cell”, and ensuring the base template of a View had greater control over the rendering process, and reiterated previously agreed changes. Marc Bennewitz added several additional concerns and posted a discussion he had had with Matthew on the Zend\View\Variables class. Matthew responded with a number of points, including keeping the barrier to entry low, recognising that not all Views are HTML, and other areas for consideration. Nice to see everything in one place for discussion.
Short version: Not much in the way of disagreement. Seems like a topic that just needs sufficient code for someone to run off and write some proposals.
Proposal: Don’t implement BC requirement until ZF 2.1
Rob Allen emailed a proposal suggesting that backwards compatibility be deferred as a requirement until ZF 2.1. His reasoning focused on the experience with ZF 1.0, where frozen compatibility hurt ZF 1.x more than it helped. The proposal was quickly seconded by Ryan Mauger, Anthony Shireman, Rob Zienert (on condition of communicating this clearly to users), and H. Hatfield. Opposing views were aired by Till Klampaeckel on the grounds of keeping migrations between versions simpler. Tomáš Fejfar commented that this was a psychological proposal to increase adoption and raise feedback before the API is finally frozen. Matthew Weier O’Phinney noted his agreement that bigger features were required to increase early adoption.
Bradley Holt took the opportunity to propose alternative version/release strategies, setting the context for the rest of the debate. His two points were to a) utilise an odd/even version system where odd-numbered minor releases are considered betas and even-numbered ones stable, similar to how the Apache HTTP server does things, and b) increase the pace of major releases to shorten the period between allowable compatibility breaks and speed up the rollout of such improvements. The ensuing debate suggested Rob Allen would agree to faster major releases.
Short version: Keeping the BC requirement for 2.0 may be necessary. It might be better to shorten the release cycle and roll out compatibility-breaking changes more regularly.
Proposal: Shorter Release Cycle for Major Versions
On the back of the previous topic, Bradley Holt elaborated on a proposal for shortening the release cycle for major versions. Pádraic Brady responded in agreement, noting that by the time ZF2 was released, there was a possibility that PHP 5.4, with potentially advantageous features, would be well on the way to a 2012 release. Based on this he suggested that ZF3 development could be executed quickly, with a release date no later than the end of 2012 (i.e. 18 months away) and a maximum allowed period of 2 years. Kevin McArthur inquired into a reasonable minimum period between major releases, but this seems to be the number needing more discussion. There has been no input from the Zend guys to date, so this remains up in the air.
Short version: We want ZF3 relatively quickly and not in 4-5 years time.
Encouraging Usage of ZF 2.0 Beta
Another discussion opener from Bradley Holt. Bradley suggested an extended beta period, a communication campaign, treating all betas as regular GA releases, and highlighting applications built on ZF2 to encourage uptake. Kevin McArthur reiterated the need to maintain current versioning and noted his agreement to shortening the major version release cycle and having an extended alpha/beta period. Alessandro Pellizzari emailed in his thoughts from the perspective of a user and the difficulties that currently exist with checking the status of any one ZF2 component. Derek Miranda voiced his agreement with Alessandro’s thoughts.
Short version: Maybe we need a beta first?
New dev snapshot released
On the back of the work going into Zend\DI, Matthew announced the release of a new development snapshot for testing and feedback. Ralph Schindler subsequently posted links to Zend\DI examples. Feedback is ongoing. Anyone is free to check it out and offer some opinion!
Short version: Isn’t that short enough?
For those of you wondering where to go and track the inner thoughts of the Zend Framework developers, you can join us on the zf-contributors mailing list (available on Nabble here) or on IRC channel #zftalk.dev on Freenode.net. Until next time, remember, coffee + cat = bad.
Do Cryptographic Signatures Beat SSL/TLS In OAuth 2.0?
Oct 8th
This post is more about perception vs reality than anything else. When it comes to application security, we like to think that the steps we take to protect ourselves are unassailable bastions, interlocked to poke sharp things at incoming attackers. What we don’t like is knowing that our bastions are always at risk of being undermined in numerous unexpected ways. The reaction among programmers is consistent: we pretend those bastions are completely unassailable no matter what, using any excuse necessary. Reality isn’t always a factor, in my experience.
OAuth 2.0 is the next version of OAuth. It’s always great to see a good thing get better but the new version started off with an oddity inherited from OAuth WRAP. It removed the requirement for cryptographic signatures. Anyone who has skirmished with OAuth 1.0 has probably found the signature requirement a PITA. It’s poorly specified, subject to language specific errors, and difficult to debug. OAuth 2.0 would do away with this for these same reasons, replacing the need for digital signatures with a requirement that OAuth 2.0 operate over SSL/TLS between web servers. In this way, requests could not be intercepted by a Man-In-The-Middle (MITM), altered or replayed, thus rendering the need for digital signing obsolete. Simple as pie.
Very recently, a similar proposal was raised in relation to Pubsubhubbub, which needs optional cryptographic signatures to prevent potential vulnerabilities in scenarios where topic updates must be verified as coming from a trusted source. The new Pubsubhubbub measure would have dropped any need for an all-encompassing cryptographic signature of both the topic body and its headers (currently headers are not signed, which is problematic for future use) in favour of requiring SSL/TLS support on both sides (i.e. both Hubs and Subscribers). This was inspired by the bearer-token-over-TLS approach currently required when using OAuth 2.0 from web servers. Technically it’s a simple, effective solution.
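For context, the existing body-only scheme has the Hub send an X-Hub-Signature header carrying an HMAC-SHA1 of the raw payload computed with the shared secret. A Subscriber might verify it along these lines (a sketch, not lifted from any particular implementation):

```php
<?php
// Sketch of Pubsubhubbub's body-only signature check: the Hub sends
// X-Hub-Signature: sha1=<hex>, an HMAC-SHA1 of the raw request body
// keyed with the secret shared at subscription time. Note the headers
// themselves are not covered by this signature.
function verifyHubSignature($signatureHeader, $rawBody, $secret)
{
    if (strpos($signatureHeader, 'sha1=') !== 0) {
        return false;
    }
    $expected = hash_hmac('sha1', $rawBody, $secret);
    $provided = substr($signatureHeader, 5);
    // Constant-time comparison to avoid leaking timing information.
    return hash_equals($expected, $provided);
}
```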
However, both of these have the same problem. SSL/TLS offers a perception of unassailable security at odds with reality. The reality is harsh. SSL/TLS from a browser is, for most purposes, rock solid. Your connections are secure, and if Firefox stumbles over an SSL certificate it can’t validate or doesn’t immediately trust, it warns you, giving you the option of allowing an exception. This all works because we don’t each build our own browser from the ground up. The narrow field of browsers and the expertise dedicated to each ensure SSL/TLS works as it should. Let’s turn that view on web applications sitting on a web server. Here we find more than a couple of discouraging trends.
The first is that setting up SSL/TLS is, for at least some percentage of people, difficult. Servers are misconfigured, SSL certificates are reused on different domains, and SSL certificates vary between self-signed and those signed by a trusted party which must be paid. It’s a bit on the messy side and mistakes are common. This was one reason why I objected to the Pubsubhubbub proposal - Subscribers refers to anyone wishing to receive topic updates, which is, well, everyone. The chances of everyone being willing, or even able, to set up SSL/TLS for their websites are small.
The second is that client libraries for HTTP may be subject to insecure behaviour. The simplest example here is PHP itself, where the default options for the SSL stream context set the verify_peer option to FALSE. Anyone who builds an SSL/TLS-based protocol implementation in PHP without realising this (which people do) will of course fail to verify the SSL certificates encountered by their client. On the other hand, curl has SSL peer verification enabled by default. Insecurity by default is a nightmare.
The third, related to how hard and error-prone it is for people to get SSL set up right, is the all too common practice of dealing with SSL certificate problems by deliberately disabling SSL certificate verification in client libraries. Great for testing, bad for production purposes. This practice is widespread as a way to ensure HTTPS requests will still work regardless of the state of the SSL certificate employed by the server. Sure, you’ll get a valid error-free response - but from whom?
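Doing it right in PHP means opting in to verification explicitly. A minimal sketch (the helper name and CA bundle path are my own assumptions; the path varies by system):

```php
<?php
// Sketch: build an HTTPS stream context with peer verification
// explicitly enabled, since PHP's stream defaults have left it off.
// The default CA bundle path below is an assumption; adjust per system.
function secureHttpsContext($caFile = '/etc/ssl/certs/ca-certificates.crt')
{
    return stream_context_create(array(
        'ssl' => array(
            'verify_peer'       => true,
            'cafile'            => $caFile,
            'allow_self_signed' => false,
        ),
    ));
}

// Usage:
//   $body = file_get_contents('https://example.com/', false, secureHttpsContext());
//
// The curl equivalent of "doing it right" is making sure nobody has
// "fixed" certificate errors by disabling verification:
//   curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true); // never false in production
//   curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
```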
These three issues combine to offer a picture where SSL/TLS can be significantly broken on the web. This is hardly news. So hearing about SSL/TLS requirements in commonly used protocols raises the simple question: how does it improve security when it obviously conflicts with commonplace practice?
This is why protocols such as OAuth 2.0 and Pubsubhubbub need to tread carefully. Mandating the use of SSL/TLS introduces a single point of failure that will fail. It’s guaranteed to fail. It’s already failing. For those left vulnerable by such failures - by, for example, an open source library that reaches common use - an attacker can just walk right in with a Man-In-The-Middle (MITM) attack. Sure, you’re using HTTPS, but if you’re not verifying certificates you have no guarantee that you are communicating with the intended trusted server.
Compare this to digital signatures. You have a shared secret that is not known to any potential MITM. You have a signature to verify the origin of any request/response. You have a random non-repeating nonce which varies the signature to prevent both replay attacks and remote timing attacks. You can additionally run it over SSL/TLS all you want, secure in the knowledge that John Doe’s PHP Streams based HTTP client will always work securely even if SSL certificate verification is still disabled by default. And best of all? You can’t optionally disable it! Either you implement it, or nothing will work.
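To illustrate the mechanism (this is not the actual OAuth 1.0 algorithm, which also mandates base-string normalisation rules and key encoding; the function names are hypothetical), a signed request might be built and checked like this:

```php
<?php
// Sketch of HMAC request signing: the shared secret never travels
// over the wire. Each request carries a random nonce and a timestamp,
// and the HMAC covers all parameters, so a MITM can neither forge
// nor replay a captured request.
function signRequest(array $params, $sharedSecret)
{
    $params['nonce']     = bin2hex(openssl_random_pseudo_bytes(16));
    $params['timestamp'] = time();

    // Normalise parameter order so both sides sign the same string.
    ksort($params);
    $baseString = http_build_query($params);

    $params['signature'] = hash_hmac('sha256', $baseString, $sharedSecret);
    return $params;
}

function verifyRequest(array $params, $sharedSecret)
{
    $signature = $params['signature'];
    unset($params['signature']);
    ksort($params);
    $baseString = http_build_query($params);
    // Constant-time comparison; a real server would also reject
    // previously seen nonces and stale timestamps.
    return hash_equals(hash_hmac('sha256', $baseString, $sharedSecret), $signature);
}
```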
Like practically everything in security, it’s a choice. Secure the protocol in depth, or make it easier to implement. You can’t have both, which is why protocols will always be a PITA to implement when designed with security uppermost in their list of important features. It can’t be helped.
Back to OAuth 2.0: it has been mentioned that the next draft will contain an option to use cryptographic signatures instead of relying solely on SSL/TLS. This is a significant improvement in my opinion, and gives implementers back the ability to freely choose the most appropriate form of security for their APIs. You can all thank Eran Hammer-Lahav for triggering this or, you know, curse him forever when it becomes the standard means of using OAuth 2.0 from a web server and you are faced with implementing it.