This article branches off from a separate discussion on the PHP-FIG mailing list about other ways the Framework Interoperability Group can encourage and foster wider interoperability among its member projects (and, by extension, the whole PHP community). I’ll start by noting two interesting developments in recent months and one long-standing best practice.
1. Launch of the SensioLabs Security Advisory Checker
The SensioLabs Security Advisory Checker is described on its website as follows.
You manage your PHP project dependencies with Composer, right? But are you sure that your project does not depend on a package with known security issues? The SensioLabs security advisories checker is a simple tool, available as a web service or as an online application, that uses the information from your composer.lock file to check for known security vulnerabilities. This checker is a frontend for the security advisories database.
The service operates by having people submit vulnerability data, as YAML files, to a centralised Github repository through pull requests. The upside is that the vulnerability data can be peer reviewed and centrally distributed, either online or via a service API. The downside is that someone needs to find vulnerability disclosures and volunteer to submit them. The service currently covers Symfony, Zend Framework, Doctrine, Twig and FriendsOfSymfony bundles – a tiny sample of the packages available through Composer. I’m also not entirely sure whether it’s sufficiently fine-grained to report vulnerabilities on a project’s sub-packages where you have no direct dependency on the aggregate package (e.g. using zendframework/zend-db instead of zendframework/zendframework). That said, this is a working model of a service for checking your dependencies.
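For a flavour of what that vulnerability data looks like, here is a hypothetical advisory entry, loosely modelled on the YAML files in the SensioLabs advisories repository (the package, versions, URL and CVE below are all invented for illustration):

```yaml
# Hypothetical advisory for an invented package; the field names follow
# the general shape used by the SensioLabs advisories database.
title: "XSS vulnerability in the widget renderer"
link: "http://example.com/blog/security-release"
cve: CVE-2013-0000
branches:
    1.2.x:
        time: 2013-04-01 10:00:00
        versions: ['>=1.2.0', '<1.2.6']
reference: composer://example/widget-library
```

The `reference` key is what lets a checker map an advisory back to the Composer package names found in your composer.lock file.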
Beyond its current coverage, the service embodies an ambitious idea – projects sharing their vulnerability disclosures or advisories in a way that allows anyone to automatically check whether their projects’ dependencies need to be updated for security reasons.
2. OWASP’s Top 10 security risks for 2013 includes “A9 – Using Components with Known Vulnerabilities”
This is a new entry in OWASP’s Top 10 (which is currently at release candidate status for 2013). In summary, it recognises that applications are becoming ever more dependent on code not developed internally. We’ve had web application frameworks for years, but Composer and Github have unleashed a storm of accessible libraries, bundles, modules, and other units of reuse that have exposed Not Invented Here (NIH) Syndrome as a psychological problem in ways not previously possible.
As reliance on externally controlled dependencies increases, so too does the risk of your applications using insecure dependencies. This is a risk that requires a lot of work to mitigate. For each dependency, you need to do a security review (no, I’m not kidding), check for security disclosures (whether voluntary or involuntary) and ensure that you end up rolling out to production with safe versions.
Quoting from the OWASP advice on preventing the use of components with known vulnerabilities…
One option is not to use components that you didn’t write. But realistically, the best way to deal with this risk is to ensure that you keep your components up-to-date. Many open source projects (and other component sources) do not create vulnerability patches for old versions. Instead, most simply fix the problem in the next version. Software projects should have a process in place to:
1. Identify the components and their versions you are using, including all dependencies. (e.g., the versions plugin)
2. Monitor the security of these components in public databases, project mailing lists, and security mailing lists, and keep them up-to-date.
3. Establish security policies governing component use, such as requiring certain software development practices, passing security tests, and acceptable licenses.
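With Composer, step 1 of that process is straightforward: composer.lock already records the exact version of every installed package. As a minimal sketch (the helper function here is my own, not part of Composer):

```php
<?php
// Sketch: enumerate the exact package versions an application depends on
// (OWASP step 1) from the decoded contents of a composer.lock file.
// listLockedPackages() is a hypothetical helper, not a Composer API.
function listLockedPackages($lockJson)
{
    $lock = json_decode($lockJson, true);
    $dev = isset($lock['packages-dev']) ? $lock['packages-dev'] : array();
    $versions = array();
    foreach (array_merge($lock['packages'], $dev) as $package) {
        // Each entry in composer.lock carries at least a name and version.
        $versions[$package['name']] = $package['version'];
    }
    return $versions;
}
```

Feeding the resulting name/version map into a vulnerability database is then a matter of simple lookups, which is essentially what the SensioLabs checker does with your uploaded composer.lock.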
3. Disclosing security vulnerabilities in a timely and responsible manner is a best practice
As programmers, we have a responsibility to our users to disclose security vulnerabilities, and to fix them in a timely manner, so that those users are protected from harm. It’s almost impossible to get through a career without shipping a vulnerability at some point. In fact, it may well happen multiple times in a single year!
The sad truth, however, is that disclosing security vulnerabilities can be terribly hit and miss. I’ve seen people ignore vulnerabilities or fix them but fail to disclose the fact to their users. Opinions over the severity of a vulnerability can vary dramatically within even a small group of programmers. Nobody likes to air their dirty laundry in public but not doing so can mean someone including a dependency with a known vulnerability without any means of becoming aware of that vulnerability.
It is always a good thing to come clean. Fixing a vulnerability, disclosing it, and having a good security policy in place prevents the reputational damage you might suspect would occur. It’s usually the secretive rollout of fixes that gets you in trouble when someone is attacked or the reporter discloses the vulnerability through other means (usually making note of your refusal to come clean).
Disclosure usually happens in release notes, commit messages, blog posts or emails. This article suggests using formats that are more readily machine-consumable and standardised.
Centralised Tracking Of Decentralised Vulnerability Data?
With these three points in mind, we can see the immediate value in something like the SensioLabs security advisory checker. You have dependencies which very likely have had, or will have, vulnerabilities, and you would probably love to know about them before releasing a new project build to production servers. The problem is that this requires the work of importing vulnerability data into the checking service and, where that data is missing, a trawl of the internet for vulnerability disclosures in blog posts, commit messages and emails. What would happen if, as a means of improving interoperability and common security, more vendors published their disclosures at fixed URIs in just one or two easily consumable formats (e.g. YAML or RSS/Atom)?
For example, instead of relying on someone submitting a pull request to SensioLabs each time Library X discloses a vulnerability, one could simply store a URI to Library X’s disclosure feed and/or a YAML formatted summary stored in its git repository. The SensioLabs service, or something like it, could then pull in vulnerabilities automatically, assuming Library X uses a predetermined consumable format. This sounds, at least to me, like a far more sustainable system.
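Once the data is machine-readable, the check a consuming service would run is simple. A rough sketch, where an advisory’s affected ranges arrive as lists of version constraints (the data shape and function name are hypothetical, not any existing service’s API):

```php
<?php
// Sketch: does an installed version fall inside any affected range
// published by an advisory? Ranges are constraint lists such as
// array('>=2.0.0', '<2.0.22'); all constraints in a range must match.
function isVulnerable($installed, array $affectedRanges)
{
    foreach ($affectedRanges as $range) {
        $match = true;
        foreach ($range as $constraint) {
            preg_match('/^(>=|<=|>|<|=)?\s*(.+)$/', $constraint, $m);
            $op = $m[1] !== '' ? $m[1] : '=';
            if (!version_compare($installed, $m[2], $op)) {
                $match = false;
                break;
            }
        }
        if ($match) {
            return true; // installed version sits inside an affected range
        }
    }
    return false;
}
```

PHP’s native version_compare() does the heavy lifting here; a real checker would also need to normalise Composer version strings (v-prefixes, stability suffixes) before comparing.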
If a sufficient number of packages on Composer followed this practice, we’d have something quite brilliant and possibly easier to promote in the community. People are now very familiar with maintaining a composer.json file. Adding one more file, in lieu of an alternative RSS/Atom feed, is not that big of a stretch if enough projects request it of their dependencies. The rest would be down to the boring work of agreeing formats, procedures and other technical aspects with a view towards, *if* called for, a PSR on the topic.
Let me know what you all think in the comments or catch me on the PHP-FIG mailing list.
Summarising knowledge has as much value as writing a 200-page treatise on a topic, so here is a list of brief points you should bear in mind when battling Cross-Site Scripting (XSS) in PHP – minus my usual book-length brain fart. Chances are good that ignoring, or acting contrary to, any one of these will lead to a potential XSS vulnerability. It’s not necessarily a complete list – if you think something needs to be added, let everyone know in the comments.
- Never pass data from untrusted origins into output without either escaping or sanitising it.
- Never forget to validate data arriving from an untrusted origin using relevant rules for the context it’s used in.
- Remember that anything not explicitly defined in source code has an untrusted origin.
- Remember that htmlentities() is incompatible with XML, including HTML5’s XML serialisation – use htmlspecialchars().
- Always include ENT_QUOTES, ENT_SUBSTITUTE and a valid character encoding when calling htmlspecialchars().
- Use rawurlencode() to escape strings being inserted into URLs and then HTML escape the entire URL.
- Validate all complete URLs if constructed from untrusted data.
- Never include resources loaded over unsecured HTTP on a page loaded over HTTPS.
- Sanitise raw HTML from untrusted origins using HTMLPurifier before injecting it into output.
- Sanitise the output of Markdown, BBCode and other HTML replacements using HTMLPurifier before injecting it into output.
- Remember that HTMLPurifier is the only HTML sanitiser worth using.
- Always transmit, with content, a valid Content-Type header referencing a valid character encoding.
- Ensure that cookies for use solely by the server are marked HttpOnly.
- Ensure that cookies which must only be transmitted over HTTPS are marked Secure.
- Always review dependencies and other third party code for potential XSS vulnerabilities and vectors.
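To make the htmlspecialchars() and rawurlencode() points above concrete, here is a short sketch (the helper function names are my own):

```php
<?php
// Escaping helpers illustrating the htmlspecialchars() and
// rawurlencode() points above; escapeHtml() and escapeUrlParam()
// are illustrative names, not library functions.
function escapeHtml($data)
{
    // ENT_QUOTES escapes both single and double quotes; ENT_SUBSTITUTE
    // replaces invalid code unit sequences with U+FFFD rather than
    // silently returning an empty string. Always state the encoding.
    return htmlspecialchars($data, ENT_QUOTES | ENT_SUBSTITUTE, 'UTF-8');
}

function escapeUrlParam($data)
{
    // Escape for the URL context first...
    return rawurlencode($data);
}

$name = '"><script>alert(1)</script>';

// ...then HTML-escape the complete constructed URL for output:
$url = 'http://example.com/profile?name=' . escapeUrlParam($name);
echo '<a href="' . escapeHtml($url) . '">Profile</a>';
```

Note the two layers: rawurlencode() handles the URL context, and escapeHtml() handles the HTML attribute context the URL ends up in. Skipping either layer is a classic source of attribute-based XSS.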
Cross-Site Scripting remains, by far, the most common vulnerability in web applications. Things that will sink your application, framework or library become very obvious from the list. There are more than enough naughty applications, frameworks and libraries around that you should have little trouble identifying offenders with grep and some mental acrobatics. Rumour has it that you can now locally search any project on Github without even cloning a repository.
Yes, some entries are reminders. It’s surprising how many people try to avoid HTMLPurifier in favour of the month’s Regular-Expression-Powered Miracle option, which is riddled with holes, or have formulated the belief that, contrary to its official syntax rules, Markdown magically prevents Cross-Site Scripting.