Filed under: application security, SDLC, Software Development, tools
[UPDATE: Since there seems to be some confusion, the "We" in the title of this post is NOT "Veracode". The expression is a generic one intended to illustrate the attitude exhibited by many companies who like to downplay the value and/or effectiveness of technologies that they themselves do not sell. I can't believe I am having to explain this.]
Fair warning, this is a bit of a rant.
Back in my consulting days (early 2000, I’m getting old), we delighted in the fact that our web application penetration testing methodology didn’t rely on automated tools. This was completely true; we did everything manually, and we were among the best in the industry. Many so-called security consultants of the day would run a commercial web scanner and repackage the results as a high dollar “penetration test” — what a ripoff!
What we didn’t acknowledge to our customers is that those web scanners, even in their immature state, were probably capable of detecting some of the low hanging fruit that we didn’t want to spend our time looking for. Oh, we’d find a few “representative examples” of XSS and SQL injection, but then we’d get bored and move on to the more interesting and complex attack vectors. In our naivete, we figured developers would be inspired to revisit their entire input validation and/or output encoding practices, as opposed to just fixing the proof-of-concept examples we found.
Meanwhile, the commercial web scanner vendors were always downplaying the value of manual testing! “Why would you want to pay for an expensive penetration test when you can just run this less expensive tool and find the same vulnerabilities?” They’d gloss over all the technical challenges of automated web scanning and conveniently forget to mention how it was impossible for them to find authorization issues, cryptographic weaknesses, business logic flaws, and so on.
What’s my point?
Using multiple testing methodologies is crucial. Sure, there may be some overlap, but ultimately they are complementary to one another. That’s why at Veracode, we’ve never positioned automated static analysis (SAST) as a complete solution. That’s why we integrated both automated web scanning (DAST) and manual penetration testing into our service offerings less than a year after launching the company, even though SAST is our patented bread-and-butter technology. This meant we could always be completely honest about the strengths and weaknesses of each technique. I’ve had a slide titled “There Is No Silver Bullet” in my corporate slide deck since the very beginning.
Our silver bullet is better than yours
Meanwhile, it’s been amusing to watch other companies — who only had a single offering — resort to the tactic of downplaying any testing approach that wasn’t in their service portfolio.
- Over at Fortify, Brian Chess famously predicted that 2009 would mark the end of penetration testing.
- Over at WhiteHat, Jeremiah Grossman often downplays the value of writing secure code and testing code quality.
- Even as recently as last week, we have Errata Security (a consultancy) claiming that automated tools are useless and doomed to fail. Welcome back to 1999.
I’m only picking on these guys because they’re visible, well-respected practitioners in the application security space. Of course Brian knows source code scanning is an incomplete solution, and now that Fortify and WebInspect are part of the same parent company, I suspect he’s adjusted his message. I’m certain Jeremiah knows there’s value in writing secure code during the SDLC, which is why WhiteHat is now trying to get into the SAST market by acquiring some technology.
And I’m pretty sure Dave Maynor knows automation does provide real value. How else can a big company — spooked by all the recent breaches — quickly hunt for SQL injection vulnerabilities across 5,000 websites without the benefit of automation? How do you look for issues in the 150 third-party libraries you use, where only the binary is available? Do you hire Mark Dowd to spend a month looking at each one?
We all know a few sales reps that jump from one company to another, changing their pitch as they go no matter how much it conflicts with things they’ve said in the past. First a service-based approach is best, but suddenly an on-premise tool is better. Source code scanning used to be pointless, but now it’s the best thing since sliced bread! It’s no surprise these guys don’t experience more success — they lack credibility. The most successful account reps I’ve seen are the ones who build trust with their customers over time by being honest about what they are selling, even when hopping from one company to the next.
Look, it’s no big secret why people talk up their own stuff and imply everything else stinks. It’s part of the sales and marketing machine and by no means is it unique to the security industry. Even so, can’t we make an effort — as practitioners — to cut back on the rhetoric a little bit and be more honest with our customers? Customers look to us as experts to help them build their security programs, and what do we do? We oversell them on an approach that has huge gaps we pretend don’t exist. If you’re really looking out for your customers, start being more honest, and stop handing out kool-aid.
Here’s another approach: Instead of outright dismissing an effective technology or methodology just because you don’t sell it, sometimes it’s worth thinking about partnering, or even building something better. That’s why at Veracode we designed our service platform around the idea of technology integration. There is no silver bullet and there never will be.
It’s here! Data junkies rejoice!
Today we’re proud to release the third volume of our semi-annual State of Software Security report. This edition incorporates data from 4,835 applications analyzed via our cloud-based platform over the past 18 months. After lots of number crunching and a fair amount of head scratching, we’ve unearthed some intriguing findings that reflect the progress (or lack thereof) being made in securing the world’s software.
Not convinced yet? Here are a few of the data points I found particularly interesting:
- Over the past 8 quarters, the prevalence of SQL Injection (% of web apps affected) has decreased slightly, but XSS has remained flat.
- Security products perform worse than most other software suppliers in terms of acceptable security quality on first submission.
- Over half of developers who take our Application Security Fundamentals exam receive a grade of C or lower.
- Security quality scores are similar for companies across all revenue brackets, and there is no discernible difference between public and private companies.
And there’s a lot more where that came from. Plus histograms, whisker plots, linear regressions, and more! Download the full report to get all the juicy details, then come back here and tell us what you think. Enjoy!
The 3rd Annual Social Security Blogger Awards were announced last week during the RSA Conference in San Francisco. Veracode received two awards, one for Best Corporate Blog and the other for Best Security Blog Post of the Year. Here is a list of all the nominees and the award winners. It’s always an honor to be recognized by peers, so on behalf of all the Veracode bloggers, thank you for reading — and for your votes!
We’re very excited here at Veracode to announce the availability of our new FREE service to detect cross-site scripting (XSS) in your web application. This is a significant milestone for our company and for the security industry, and we encourage everyone from small ISVs to major enterprises to give us a try. Hopefully this will be one of the first steps in the long road to eliminating XSS; after all, one of the first steps to recovery is admitting you have a problem!
Filed under: application security, Dynamic Analysis, SDLC, tools
As application inventories have become larger, more diverse, and increasingly complex, organizations have struggled to build application security testing programs that are effective and scalable. New technologies and methodologies promise to help streamline the Secure Development Lifecycle (SDLC), making processes more efficient and easing the burden of information overload.
In the realm of automated web application testing, today’s technologies fall into one of two categories, Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST). SAST analyzes application binaries or source code, detecting vulnerabilities by identifying insecure code paths without actually executing the program. In contrast, DAST detects vulnerabilities by conducting attacks against a running instance of the application, simulating the behavior of a live attacker. Most enterprises have incorporated at least one SAST or DAST technology; those with mature SDLCs may even use more than one of each.
In the past year or so, industry analysts and product vendors have become enamored with so-called “hybrid analysis” technologies. Hybrid techniques aim to correlate the results of SAST and DAST to dramatically expand dynamic coverage, prioritize the combined set of results, and reduce both false positives and false negatives. This whitepaper will examine each of these claims to give consumers technical insight into whether hybrid technologies can realistically live up to the hype.
Several observations will be described in the following sections:
- Hybrid analysis may expand dynamic coverage, but the lack of application context limits its effectiveness.
- The challenge of reliably generating URL-to-source mappings, coupled with the existence of URL rewriting, undermines the accuracy and usefulness of vulnerability correlation.
- Hybrid analysis does not reduce false positive rates; rather, it lulls users into a false sense of security by suggesting that non-correlated vulnerabilities are false positives.
- Correlation should not be equated with exploitability. Vulnerabilities should be prioritized based on severity and business impact, not based on how many scanners are capable of detecting them.
Download the full whitepaper.
I created this video for an internal Veracode video contest. It’s intended to poke fun at the abundance of “thought leaders” we have in our industry. I shared it on Twitter yesterday but thought I would post here on the blog as well. A handful of people have asked if it’s meant to satirize any particular person — sorry to disappoint, it’s just a composite. Enjoy!
Filed under: application security, Application Security Metrics, QA, SDLC, Software Development
Is anyone else getting tired of hearing excuses from customers — and worse yet, the security community itself — about how hard it is to fix cross-site scripting (XSS) vulnerabilities? Oh, come on. Fixing XSS is like squashing ants, but some would have you believe it’s more like slaying dragons. I haven’t felt inspired to write a blog post in a while, but every once in a while, 140 characters just isn’t enough. Grab your cup of coffee, because I may get a little rambly.
Easy to Fix vs. Easy to Eradicate
Let’s start with some terminology to make sure we’re all on the same page. Sometimes people will say XSS is “not easy to fix” but what they really mean is that it’s “not easy to eradicate.” Big difference, right? Not many vulnerability classes are easy to eradicate. Take buffer overflows as an example. Buffer overflows were first documented in the early 1970s and began to be exploited heavily in the 1990s. We understand exactly how and why they occur, yet they are far from extinct. Working to eradicate an entire vulnerability class is a noble endeavor, but it’s not remotely pragmatic for businesses to wait around for it to happen. We can bite off chunks through OS, API, and framework protections, but XSS or any other vulnerability class isn’t going to disappear completely any time soon. So in the meantime, let’s focus on the “easy to fix” angle because that’s the problem developers and businesses are struggling with today.
It’s my belief that most XSS vulnerabilities can be fixed easily. Granted, it’s not as trivial as wrapping a single encoding mechanism around any user-supplied input used to construct web content, but once you learn how to apply contextual encoding, it’s really not that bad, provided you grok the functionality of your own web application. An alarming chunk of reflected XSS vulnerabilities are trivial, reading the value of a GET/POST parameter and writing it directly to an HTML page. Plenty of others are only marginally more complicated, such as retrieving a user-influenced value from the database and writing it into an HTML attribute. I contend both of these examples are easy for a developer to fix; tell me if you disagree. Basic XSS vulnerabilities like these are still very prevalent.
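To make the “easy to fix” claim concrete, here’s a minimal sketch of contextual output encoding for the two cases described above. The helper names (`htmlEncode`, `htmlAttributeEncode`) are illustrative, not part of any particular framework; real applications should use their platform’s encoding library.

```javascript
// Encode for an HTML element context, e.g. <p>USER_INPUT</p>.
// Order matters: ampersands must be escaped first.
function htmlEncode(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// Encode for a double-quoted attribute context, e.g. value="USER_INPUT".
function htmlAttributeEncode(input) {
  return htmlEncode(input).replace(/"/g, '&quot;');
}

// The trivial reflected case: a GET/POST parameter echoed into the page.
const q = '<script>alert(1)</script>';
console.log('<p>You searched for: ' + htmlEncode(q) + '</p>');

// The marginally harder case: a user-influenced value from the
// database written into an HTML attribute.
const name = '" onmouseover="alert(1)';
console.log('<input value="' + htmlAttributeEncode(name) + '">');
```

The point is that each fix is a one-line change at the output site; the only real thinking is matching the encoder to the context the data lands in.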
Ease of Fix vs. Willingness to Fix
I’ve heard the assertion that XSS vulnerabilities aren’t getting fixed because they are difficult to fix. Asking “what percentage of XSS vulnerabilities actually get fixed and deployed to production?” is a valuable metric for the business, but it doesn’t reflect the actual difficulty of fixing an XSS vulnerability. It conflates the technical complexity with other reasons why website vulnerabilities are not fixed.
At Veracode, we collected data in our State of Software Security Vol. 2 report that reveals developers are capable of fixing security issues quickly. While our data isn’t granular enough to state exactly how long it took to fix a particular flaw, we do know that in cases where developers did choose to remediate flaws and rescan, they reached an “acceptable” level of security in an average of 16 days. This isn’t to say that every XSS was eliminated, but it suggests that most were (more details on our scoring methodology can be found in the appendix of the report).
WhiteHat’s Fall 2010 study shows that nearly half of XSS vulnerabilities are fixed, and that doing so takes their customers an average of 67 days. These numbers differ from ours — particularly with regard to the number of days — but I think that can be attributed to prioritization. Perhaps fixing the XSS vulnerability didn’t rise to the top of the queue until day 66. Again, that’s more an indication that the business isn’t taking XSS seriously than it is of the technical sophistication required to fix.
At Veracode, we see thousands — sometimes tens of thousands — of XSS vulnerabilities a week. Many are of the previously described trivial variety that can be fixed with a single line of code. Some of our customers upload a new build the following day; others never do. Motivation is clearly a factor. Think about the XSS vulnerabilities that hit highly visible websites such as Facebook, Twitter, MySpace, and others. Sometimes those companies push XSS fixes to production in a matter of hours! Are their developers really that much better? Of course not. The difference is how seriously the business takes it. When they believe it’s important, you can bet it gets fixed.
There’s a growing faction that believes security practitioners are not qualified to comment on the difficulty of security fixes (XSS or otherwise) because we’re not the ones writing the code. The ironic thing is that this position is most loudly voiced by people in the infosec community! It’s like they are trying to be the “white knights”, coddling the poor, fragile developers so their feelings aren’t hurt. Who are we to speak for them? I find the entire mindset misguided at best, disingenuous and contemptuous at worst. To be fair, Dinis isn’t the only one who has expressed this view, he’s just the straw that broke the camel’s back, so to speak. You know who you are.
Look, the vast majority of security professionals aren’t developers and never have been (notable exceptions include Christien Rioux, HD Moore, Halvar Flake, etc.). Trust me, we know it. I’ve written lots of code that I’d be horrified for any real developer to see. My stuff may be secure, but I’d hate to be the guy who has to maintain, extend, or even understand it. Here’s the thing — even though I can guarantee you I’d be terrible as a developer, most XSS flaws are so simple that even a security practitioner like me could fix them! Here’s another way of looking at it: developers solve problems on a daily basis that are much more complex than patching an XSS vulnerability. Implying that fixing XSS is “too hard” for them is insulting!
That being said, who says we’re not qualified to comment on a code-level vulnerability if we’re not the one writing the fix? In fact, who’s to say that the security professional isn’t more qualified to assess the difficulty in some situations? Specifically, if a developer doesn’t understand the root cause, how can he possibly estimate the effort to fix? I’ve been on readouts where developers claim initially that several hundred XSS flaws will take a day each to fix, but then once they understand how simple it is they realize they can knock them all out in a week. Communication and education go a long way. Sure, sometimes there are complicating factors involved that affect remediation time, but I can’t recall a time where a developer has told me my estimate was downright unreasonable.
Bottom line: By and large, I don’t think developers feel miffed or resentful when we try to estimate the effort to fix a vulnerability. They know that what we say isn’t the final word, it’s simply one input into a more complex equation. Yes, developers do get annoyed when it seems like the security group is creating extra work for them, but that’s a different discussion altogether.
One final pet peeve of mine is the rationalization that security vulnerabilities take longer to fix because you have to identify the root cause, account for side effects, test the fix, and roll it into either a release or a patch. As opposed to other software bugs, where fixes are accomplished by handwaving and magic incantations? Of course not; these steps are common to just about any software bug. In fact, I’d argue that identifying the root cause of a security vulnerability is much easier than hunting down an unpredictable crash, a race condition, or any other non-trivial bug. Come to think of it, testing the fix may be easier too, at least compared to a bug that’s intermittent or hard to reproduce. As for side effects and other QA testing, this is why we have regression suites! If you build software and you don’t have the capability to run an automated regression suite after fixing a bug, then let’s face it, you’ve got bigger problems than wringing out a few XSS vulnerabilities.
My high school economics teacher used the term “ceteris paribus” at least once per lecture. Loosely translated from Latin, it means “all other things being equal” and it’s often used in economics and philosophy to enable one to describe outcomes without having to account for other complicating factors. The ceteris paribus concept doesn’t apply perfectly to this situation, but it’s close enough for a blog post, to wit: ceteris paribus, fixing a security-related bug is no more difficult than fixing any other critical software bug. Rattling off all the steps involved in deploying a fix is just an attempt at misdirection.
My hope in writing this post is to spur some debate around some of the reasons, excuses, and rationalizations that often accompany the surprisingly-divisive topic of XSS. I want to hear from both security practitioners and developers on where you think I’ve hit or missed the mark. We don’t censor comments here, but there is a moderation queue, so bear with us if your comment takes a few hours to show up.
Filed under: application security, SDLC, Software Development
Lots of people have been asking us for opinions on HTML5 security lately. Chris and I discussed the potential attack vectors with the Veracode research team, most notably Brandon Creighton and Isaac Dawson. Here’s some of what we came up with. Keep in mind that the HTML5 spec and implementations are still evolving, particularly with respect to security concerns, so we shouldn’t assume any of this is set in stone.
Don’t Forget Origin Checks on Cross-Document Messaging
One bright spot with regard to cross-document messaging is that older apps won’t be threatened by these issues, only new apps that are intentionally written to rely on the feature.
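For apps that do adopt cross-document messaging, the basic defensive pattern is a strict origin check in the message handler. A sketch, assuming a hypothetical whitelist (`TRUSTED_ORIGINS` and `isTrustedOrigin` are illustrative names):

```javascript
// Origins this page is willing to receive messages from.
const TRUSTED_ORIGINS = ['https://app.example.com', 'https://partner.example.com'];

// Exact-match comparison; substring or regex checks are a common
// source of bypasses (e.g. 'https://app.example.com.evil.net').
function isTrustedOrigin(origin) {
  return TRUSTED_ORIGINS.indexOf(origin) !== -1;
}

// In the browser, the check belongs at the top of the handler:
// window.addEventListener('message', function (event) {
//   if (!isTrustedOrigin(event.origin)) return; // drop untrusted messages
//   handleMessage(event.data); // still treat the payload as untrusted input
// });
```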
Local Storage Isn’t as Problematic as You Think
Local storage doesn’t appear to present major security risks, despite a lot of FUD circulating on the topic. Besides cookies, there have always been numerous ways for web apps to store data client-side through the use of plugins (Java, JWS, Flash, Silverlight, Google Gears, etc.) or browser extensions — WebKit-based browsers such as Safari and Chrome supported local storage even before it was part of HTML5.
Developers should also be aware that as currently implemented, the HTML5 sessionStorage attribute can be vulnerable to manipulation from foreign sites under certain circumstances. A remote site can get a handle to a window containing a site for which the browser has data in sessionStorage. The remote site can then navigate that window to arbitrary URLs, while the window still retains its sessionStorage. Hopefully this implementation bug will be fixed by the time the standard is final.
New Tags Increase Attack Surface
Firefox, Safari, and Chrome currently allow cross-domain requests to be sent using XMLHttpRequest. Before the full request is allowed to proceed, the browser first sends a probe request using the OPTIONS method (instead of, for example, GET or POST). If the server responds to this probe with an “Access-Control-Allow-Origin” header that gives the source host permission to make the request, the browser then resends the full request with the requested HTTP method. This is consistent with the current working draft for W3C Cross-Origin Resource Sharing.
However, IE works differently. Instead of relaxing permissions on XMLHttpRequest, it uses a new object type called XDomainRequest. Also, instead of sending a probe that replaces the normal HTTP method with OPTIONS, its probe includes the original HTTP method as well as the request body (in the other browsers, the request body is omitted).
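The server side of the preflight exchange can be sketched as a simple decision on the probe’s Origin header. This is an illustrative helper under assumed names (`ALLOWED_ORIGIN`, `buildPreflightResponse`), not any framework’s actual API:

```javascript
// The one cross-domain consumer this server chooses to permit.
const ALLOWED_ORIGIN = 'https://trusted.example.com';

// Given the Origin header and the method requested by an OPTIONS probe,
// decide which Access-Control-* headers (if any) to send back.
// Returning no CORS headers causes the browser to block the real request.
function buildPreflightResponse(origin, requestedMethod) {
  if (origin !== ALLOWED_ORIGIN) {
    return {};
  }
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': requestedMethod,
  };
}
```

Note that the access decision lives entirely on the server; the browser merely enforces whatever the response headers grant.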
Sandbox Attribute Could Make Security Easier
One thing that may help, depending on how the standard is eventually defined and implemented, is the support for a sandbox attribute on IFRAMEs. This attribute will allow a developer to choose how framed content should be interpreted. Unfortunately, this design, like much of HTML, has a pretty high chance of being misunderstood by developers and may easily be disabled for the sake of convenience. If used properly, it could help protect against malicious third-party ads or any other case where untrusted content is redisplayed.
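As currently drafted, the attribute takes a space-separated list of tokens that selectively re-enable capabilities. A sketch of how an untrusted third-party ad might be framed (the ad URLs are made up for illustration):

```html
<!-- Everything locked down: no scripts, no plugins, no forms, and the
     content is treated as a unique origin, so it cannot touch the
     parent page's DOM, cookies, or storage. -->
<iframe sandbox src="https://ads.example.net/banner.html"></iframe>

<!-- Re-enable scripts only; omitting allow-same-origin keeps the
     content in a unique origin even though it can run JavaScript. -->
<iframe sandbox="allow-scripts" src="https://ads.example.net/widget.html"></iframe>
```

The convenience risk is exactly the second form drifting toward `sandbox="allow-scripts allow-same-origin"`, which gives back most of what the sandbox was meant to take away.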
Always Remember Input Validation
The most important thing that developers can do is to remember basic security tenets, for example, the idea that all user input should be considered untrusted. They should learn how the new HTML5 features actually work in order to understand where they’d be tempted to make erroneous assumptions.
Here’s a quick post to let you know all the places to get your Veracode fix at RSA Conference 2010.
- On the Expo floor, we’ll be in booth 729. I’ll be at the booth for a few hours on Tuesday and Wednesday. Stop by if you’d like to talk about our service offerings, get a quick demo, or just say hello.
- On Monday morning at 9:25am, Ashish Larivee will be giving a presentation, Metrics for Insights on the State of Application Security at Mini Metricon.
- On Monday morning at 11:15am, I’ll be on a panel, Securely Getting to Planet SaaS at the Cloud Security Alliance Summit in Green Room 131.
- On Wednesday morning at 9:10am, Chris Wysopal will be on a panel, Whitelist or Trustlist? Should There be an Industry Software Whitelist? in Orange Room 301.
- On Wednesday afternoon at 1:00pm, Chris Wysopal will be giving a presentation, Detecting “Certified Pre-owned” Software and Devices in Blue Room 104.
- On Wednesday evening at 6:00pm, I’ll be participating in a Security Round Table at Ruby Skye, prior to the Rapid7 party.
Looking forward to catching up with everyone!