Filed under: ALL THINGS SECURITY, application security, SDLC, Software Development
A developer’s main goal usually isn’t to create flawless, intrusion-proof applications. In fact, the goal is usually to create a working program as quickly as possible. Programmers aren’t security experts, and perhaps they shouldn’t have to be. But when 70% of applications fail to comply with enterprise security standards (data from Veracode SoSS vol. 5), it is clear more attention needs to be given to secure programming techniques.
This is why, when I came across an article describing a new training program by the Software Assurance Forum for Excellence in Code (SAFECode), I was pleasantly surprised. The organization, led by Howard Schmidt, will offer training courses for “anyone that does development work”. The first six training courses will focus on web application security flaws such as SQL injection and Cross-Site Scripting.
I haven’t had a chance to view the full curriculum, but I have confidence that the security pros at Adobe have put together an excellent program. Web application security flaws are some of the easiest flaws to avoid and among the most exploitable, yet they are also some of the most common, so I think starting the program with lessons on web applications is a great first step. It is an extra bonus that the material will be Creative Commons licensed, which should allow for wide distribution. The free on-demand training courses are available at:
The security industry needs more programs like the training from SAFECode. When combined with integrating security testing and scanning into the software development lifecycle (SDLC), these programs will help create less vulnerable applications and reduce the number of successful attacks that exploit well-known vulnerabilities. While it seems like most people agree on these points, the need for speed has somehow made slowing down to consider security during the development process uncool. This is especially true when programmers don’t have many resources at their disposal, for example, when developing open source applications. It is as if acknowledging that you may have security flaws in your code is the same thing as admitting you aren’t a true programmer. This couldn’t be farther from the truth. Even the smartest, most innovative programmers can create software with flaws because they are human and imperfect, just like the rest of us.
Offering free training courses and materials on secure coding will hopefully serve a dual purpose. My first hope is that it will help programmers use more secure coding practices. The second is that it will eliminate the taboo of admitting (during the development stage) that an application could have security vulnerabilities. Only then can flaws be remediated before the program is released.
Filed under: ALL THINGS SECURITY, developer-first, ninja, QA, quality, SDLC, secure, Software Development
It’s only a matter of time before someone finds all the skeletons in your closet. In this case the “someone” is a hacker and the “closets” are your applications. As if that isn’t scary enough, consider all of the third-party applications and libraries being leveraged to make your applications function… and all of their skeletons you don’t even know about. No bones about it: there’s a whole heap of issues here, and accepting failure as the norm is no longer an option.
I saw an awesome request from a local OWASP chapter last week: “Bring a developer to the next meeting day!” For real, way to go! The sooner everyone learns that security is a journey that ultimately is the responsibility of all who are involved from creation to deployment, the better off we will be. In order for change like that to happen though, this is where we are going to get uncomfortable. No one said a secure approach was going to be fun, but I will say that it doesn’t have to be the heaviest of burdens to carry…and you certainly don’t need to carry it alone. I tried to figure out who could relate to that statement, but no matter which perspective I tried to look at it from, I came to the same conclusion: it should echo across the board. (No pun intended)
A developer’s job has always been to write functional code. Security teams have always been isolated from developers, yet they are the ones held accountable when a breach occurs at the application layer. Plain and simple: developers are, for the most part, not security focused, and we security folks are, for the most part, not coders. When it comes to application security, this is where we need to redefine what quality actually means. It’s super easy to point your finger at security, but when they aren’t brought into the picture until it’s too late… what are they supposed to do? Let’s take a minute to put this into perspective. If there were a new prison being constructed in your neighborhood, do you really think it would be acceptable to talk about where the locks, gates, and controls go AFTER it’s been built? Not a chance! That logic applies here, and this is why Veracode exists.
I have talked with enough developers out there since joining Veracode to know that this is not what most of them signed up for. They are all extremely good at their jobs, but are overwhelmed at the mere thought of adding another ‘thing’ to their plates while still producing. The demand for this type of knowledge has been around long enough that there is a middle ground: developers can learn in increments within the code they are producing, then take the results and address them in real time. People like us learn better by doing anyway! We know that an SDLC can get pretty aggressive, and with that comes stress. Security isn’t meant to add to that; it’s going to end up reducing the workload that would otherwise hit you in the end, when those skeletons are no longer hidden. If an organization has developers on their team who are coding securely… talk about ninja status!!
With all the compliance standards and certifications we have today, it’s a shame there haven’t been more calls to say that quality doesn’t just mean functional anymore; quality has to mean secure. I won’t say I’m surprised either, but I refuse to let security continue to be the butt of jokes… at least not until the security umbrella starts to cover development. I always say that it costs more to un-embarrass yourself than it does to just be proactive… so where’s the beef?
Keep fighting the good fight my friends!
Filed under: application security, SDLC, Software Development, tools
[UPDATE: Since there seems to be some confusion, the "We" in the title of this post is NOT "Veracode". The expression is a generic one intended to illustrate the attitude exhibited by many companies who like to downplay the value and/or effectiveness of technologies that they themselves do not sell. I can't believe I am having to explain this.]
Fair warning, this is a bit of a rant.
Back in my consulting days (early 2000, I’m getting old), we delighted in the fact that our web application penetration testing methodology didn’t rely on automated tools. This was completely true; we did everything manually, and we were among the best in the industry. Many so-called security consultants of the day would run a commercial web scanner and repackage the results as a high dollar “penetration test” — what a ripoff!
What we didn’t acknowledge to our customers is that those web scanners, even in their immature state, were probably capable of detecting some of the low hanging fruit that we didn’t want to spend our time looking for. Oh, we’d find a few “representative examples” of XSS and SQL injection, but then we’d get bored and move on to the more interesting and complex attack vectors. In our naivete, we figured developers would be inspired to revisit their entire input validation and/or output encoding practices, as opposed to just fixing the proof-of-concept examples we found.
Meanwhile, the commercial web scanner vendors were always downplaying the value of manual testing! “Why would you want to pay for an expensive penetration test when you can just run this less expensive tool and find the same vulnerabilities?” They’d gloss over all the technical challenges of automated web scanning and conveniently forget to mention how it was impossible for them to find authorization issues, cryptographic weaknesses, business logic flaws, and so on.
What’s my point?
Using multiple testing methodologies is crucial. Sure, there may be some overlap, but ultimately they are complementary to one another. That’s why at Veracode, we’ve never positioned automated static analysis (SAST) as a complete solution. That’s why we integrated both automated web scanning (DAST) and manual penetration testing into our service offerings less than a year after launching the company, even though SAST is our patented bread-and-butter technology. This meant we could always be completely honest about the strengths and weaknesses of each technique. I’ve had a slide titled “There Is No Silver Bullet” in my corporate slide deck since the very beginning.
Our silver bullet is better than yours
Meanwhile, it’s been amusing to watch other companies — who only had a single offering — having to espouse the tactic of downplaying any testing approach that wasn’t in their service portfolio.
- Over at Fortify, Brian Chess famously predicted that 2009 would mark the end of penetration testing.
- Over at WhiteHat, Jeremiah Grossman often downplays the value of writing secure code and testing code quality.
- Even as recently as last week, we have Errata Security (a consultancy) claiming that automated tools are useless and doomed to fail. Welcome back to 1999.
I’m only picking on these guys because they’re visible, well-respected practitioners in the application security space. Of course Brian knows source code scanning is an incomplete solution, and now that Fortify and WebInspect are part of the same parent company, I suspect he’s adjusted his message. I’m certain Jeremiah knows there’s value in writing secure code during the SDLC, which is why WhiteHat is now trying to get into the SAST market by acquiring some technology.
And I’m pretty sure Dave Maynor knows automation does provide real value. How else can a big company — spooked by all the recent breaches — quickly hunt for SQL injection vulnerabilities across 5,000 websites without the benefit of automation? How does one look for issues in the 150 third-party libraries you use, where only the binary is available? Do you hire Mark Dowd to spend a month looking at each one?
We all know a few sales reps that jump from one company to another, changing their pitch as they go no matter how much it conflicts with things they’ve said in the past. First a service-based approach is best, but suddenly an on-premise tool is better. Source code scanning used to be pointless, but now it’s the best thing since sliced bread! It’s no surprise these guys don’t experience more success — they lack credibility. The most successful account reps I’ve seen are the ones who build trust with their customers over time by being honest about what they are selling, even when hopping from one company to the next.
Look, it’s no big secret why people talk up their own stuff and imply everything else stinks. It’s part of the sales and marketing machine and by no means is it unique to the security industry. Even so, can’t we make an effort — as practitioners — to cut back on the rhetoric a little bit and be more honest with our customers? Customers look to us as experts to help them build their security programs, and what do we do? We oversell them on an approach that has huge gaps we pretend don’t exist. If you’re really looking out for your customers, start being more honest, and stop handing out kool-aid.
Here’s another approach: Instead of outright dismissing an effective technology or methodology just because you don’t sell it, sometimes it’s worth thinking about partnering, or even building something better. That’s why at Veracode we designed our service platform around the idea of technology integration. There is no silver bullet and there never will be.
Filed under: application security, Dynamic Analysis, SDLC, tools
As application inventories have become larger, more diverse, and increasingly complex, organizations have struggled to build application security testing programs that are effective and scalable. New technologies and methodologies promise to help streamline the Secure Development Lifecycle (SDLC), making processes more efficient and easing the burden of information overload.
In the realm of automated web application testing, today’s technologies fall into one of two categories: Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST). SAST analyzes application binaries or source code, detecting vulnerabilities by identifying insecure code paths without actually executing the program. In contrast, DAST detects vulnerabilities by conducting attacks against a running instance of the application, simulating the behavior of a live attacker. Most enterprises have incorporated at least one SAST or DAST technology; those with mature SDLCs may even use more than one of each.
In the past year or so, industry analysts and product vendors have become enamored with so-called “hybrid analysis” technologies. Hybrid techniques aim to correlate the results of SAST and DAST to dramatically expand dynamic coverage, prioritize the combined set of results, and reduce both false positives and false negatives. This whitepaper will examine each of these claims to give consumers technical insight into whether hybrid technologies can realistically live up to the hype.
Several observations will be described in the following sections:
- Hybrid analysis may expand dynamic coverage, but the lack of application context limits its effectiveness.
- The challenge of reliably generating URL-to-source mappings, coupled with the existence of URL rewriting, undermines the accuracy and usefulness of vulnerability correlation.
- Hybrid analysis does not reduce false positive rates; rather, it lulls users into a false sense of security by suggesting that non-correlated vulnerabilities are false positives.
- Correlation should not be equated with exploitability. Vulnerabilities should be prioritized based on severity and business impact, not on how many scanners are capable of detecting them.
Download the full whitepaper.
Filed under: application security, Application Security Metrics, QA, SDLC, Software Development
Is anyone else getting tired of hearing excuses from customers — and worse yet, the security community itself — about how hard it is to fix cross-site scripting (XSS) vulnerabilities? Oh, come on. Fixing XSS is like squashing ants, but some would have you believe it’s more like slaying dragons. I haven’t felt inspired to write a blog post in a while, but every once in a while, 140 characters just isn’t enough. Grab your cup of coffee, because I may get a little rambly.
Easy to Fix vs. Easy to Eradicate
Let’s start with some terminology to make sure we’re all on the same page. Sometimes people will say XSS is “not easy to fix” but what they really mean is that it’s “not easy to eradicate.” Big difference, right? Not many vulnerability classes are easy to eradicate. Take buffer overflows as an example. Buffer overflows were first documented in the early 1970s and began to be exploited heavily in the 1990s. We understand exactly how and why they occur, yet they are far from extinct. Working to eradicate an entire vulnerability class is a noble endeavor, but it’s not remotely pragmatic for businesses to wait around for it to happen. We can bite off chunks through OS, API, and framework protections, but XSS or any other vulnerability class isn’t going to disappear completely any time soon. So in the meantime, let’s focus on the “easy to fix” angle because that’s the problem developers and businesses are struggling with today.
It’s my belief that most XSS vulnerabilities can be fixed easily. Granted, it’s not as trivial as wrapping a single encoding mechanism around any user-supplied input used to construct web content, but once you learn how to apply contextual encoding, it’s really not that bad, provided you grok the functionality of your own web application. An alarming chunk of reflected XSS vulnerabilities are trivial, reading the value of a GET/POST parameter and writing it directly to an HTML page. Plenty of others are only marginally more complicated, such as retrieving a user-influenced value from the database and writing it into an HTML attribute. I contend both of these examples are easy for a developer to fix; tell me if you disagree. Basic XSS vulnerabilities like these are still very prevalent.
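To make the “easy to fix” point concrete, here is a minimal sketch of both examples rolled into one. The function and parameter names are hypothetical, and it only covers the HTML-body context; attribute and JavaScript contexts each need their own contextual encoder:

```python
import html

def greeting_page(name):
    # VULNERABLE: a GET/POST parameter is written straight into the page,
    # so a payload like <script>...</script> executes in the victim's browser.
    return "<h1>Hello, " + name + "!</h1>"

def greeting_page_fixed(name):
    # FIXED: one line of HTML-entity encoding, correct for an HTML-body
    # context. The browser now renders the payload as inert text.
    return "<h1>Hello, " + html.escape(name) + "!</h1>"
```

Feed both versions `<script>alert(1)</script>` and the difference is obvious: the first echoes a live script tag, the second emits `&lt;script&gt;alert(1)&lt;/script&gt;`. That is the one-line fix for the trivial variety of reflected XSS.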
Ease of Fix vs. Willingness to Fix
I’ve heard the assertion that XSS vulnerabilities aren’t getting fixed because they are difficult to fix. Asking “what percentage of XSS vulnerabilities actually get fixed and deployed to production?” is a valuable metric for the business, but it doesn’t reflect the actual difficulty of fixing an XSS vulnerability. It conflates the technical complexity with all the other reasons (read: excuses) why website vulnerabilities are not fixed.
At Veracode, we collected data in our State of Software Security Vol. 2 report that reveals developers are capable of fixing security issues quickly. While our data isn’t granular enough to state exactly how long it took to fix a particular flaw, we do know that in cases where developers did choose to remediate flaws and rescan, they reached an “acceptable” level of security in an average of 16 days. This isn’t to say that every XSS was eliminated, but it suggests that most were (more details on our scoring methodology can be found in the appendix of the report).
WhiteHat’s Fall 2010 study shows that nearly half of XSS vulnerabilities are fixed, and that doing so takes their customers an average of 67 days. These numbers differ from ours — particularly with regard to the number of days — but I think that can be attributed to prioritization. Perhaps fixing the XSS vulnerability didn’t rise to the top of the queue until day 66. Again, that’s more an indication that the business isn’t taking XSS seriously than it is of the technical sophistication required to fix.
At Veracode, we see thousands — sometimes tens of thousands — of XSS vulnerabilities a week. Many are of the previously described trivial variety that can be fixed with a single line of code. Some of our customers upload a new build the following day; others never do. Motivation is clearly a factor. Think about the XSS vulnerabilities that hit highly visible websites such as Facebook, Twitter, MySpace, and others. Sometimes those companies push XSS fixes to production in a matter of hours! Are their developers really that much better? Of course not. The difference is how seriously the business takes it. When they believe it’s important, you can bet it gets fixed.
There’s a growing faction that believes security practitioners are not qualified to comment on the difficulty of security fixes (XSS or otherwise) because we’re not the ones writing the code. The ironic thing is that this position is most loudly voiced by people in the infosec community! It’s like they are trying to be the “white knights”, coddling the poor, fragile developers so their feelings aren’t hurt. Who are we to speak for them? I find the entire mindset misguided at best, disingenuous and contemptuous at worst. To be fair, Dinis isn’t the only one who has expressed this view; he’s just the straw that broke the camel’s back, so to speak. You know who you are.
Look, the vast majority of security professionals aren’t developers and never have been (notable exceptions include Christien Rioux, HD Moore, Halvar Flake, etc.). Trust me, we know it. I’ve written lots of code that I’d be horrified for any real developer to see. My stuff may be secure, but I’d hate to be the guy who has to maintain, extend, or even understand it. Here’s the thing: even though I can guarantee you I’d be terrible as a developer, most XSS flaws are so simple that even a security practitioner like me could fix them! Here’s another way of looking at it: developers solve problems on a daily basis that are much more complex than patching an XSS vulnerability. Implying that fixing XSS is “too hard” for them is insulting!
That being said, who says we’re not qualified to comment on a code-level vulnerability if we’re not the one writing the fix? In fact, who’s to say that the security professional isn’t more qualified to assess the difficulty in some situations? Specifically, if a developer doesn’t understand the root cause, how can he possibly estimate the effort to fix? I’ve been on readouts where developers claim initially that several hundred XSS flaws will take a day each to fix, but then once they understand how simple it is they realize they can knock them all out in a week. Communication and education go a long way. Sure, sometimes there are complicating factors involved that affect remediation time, but I can’t recall a time where a developer has told me my estimate was downright unreasonable.
Bottom line: By and large, I don’t think developers feel miffed or resentful when we try to estimate the effort to fix a vulnerability. They know that what we say isn’t the final word, it’s simply one input into a more complex equation. Yes, developers do get annoyed when it seems like the security group is creating extra work for them, but that’s a different discussion altogether.
One final pet peeve of mine is the rationalization that security vulnerabilities take longer to fix because you have to identify the root cause, account for side effects, test the fix, and roll it into either a release or a patch. As opposed to other software bugs, where fixes are accomplished by handwaving and magic incantations? Of course not; these steps are common to just about any software bug. In fact, I’d argue that identifying the root cause of a security vulnerability is much easier than hunting down an unpredictable crash, a race condition, or any other non-trivial bug. Come to think of it, testing the fix may be easier too, at least compared to a bug that’s intermittent or hard to reproduce. As for side effects and other QA testing, this is why we have regression suites! If you build software and you don’t have the capability to run an automated regression suite after fixing a bug, then let’s face it, you’ve got bigger problems than wringing out a few XSS vulnerabilities.
My high school economics teacher used the term “ceteris paribus” at least once per lecture. Loosely translated from Latin, it means “all other things being equal” and it’s often used in economics and philosophy to enable one to describe outcomes without having to account for other complicating factors. The ceteris paribus concept doesn’t apply perfectly to this situation, but it’s close enough for a blog post, to wit: ceteris paribus, fixing a security-related bug is no more difficult than fixing any other critical software bug. Rattling off all the steps involved in deploying a fix is just an attempt at misdirection.
My hope in writing this post is to spur some debate around some of the reasons, excuses, and rationalizations that often accompany the surprisingly-divisive topic of XSS. I want to hear from both security practitioners and developers on where you think I’ve hit or missed the mark. We don’t censor comments here, but there is a moderation queue, so bear with us if your comment takes a few hours to show up.
Filed under: application security, SDLC, Software Development
Lots of people have been asking us for opinions on HTML5 security lately. Chris and I discussed the potential attack vectors with the Veracode research team, most notably Brandon Creighton and Isaac Dawson. Here’s some of what we came up with. Keep in mind that the HTML5 spec and implementations are still evolving, particularly with respect to security concerns, so we shouldn’t assume any of this is set in stone.
Don’t Forget Origin Checks on Cross-Document Messaging
One bright spot with regard to cross-document messaging is that older apps won’t be threatened by these issues, only new apps that are intentionally written to rely on the feature.
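The check itself is a single origin comparison in the receiving page’s message handler. The sketch below illustrates the logic in Python with a hypothetical allowlist; in a real page it would live in a JavaScript onmessage listener, comparing event.origin against trusted values before ever touching event.data:

```python
# Hypothetical allowlist of origins this page is willing to hear from.
TRUSTED_ORIGINS = {"https://app.example.com", "https://partner.example.com"}

def accept_message(origin, data):
    """Return the message payload only if the sender's origin is trusted.

    Skipping this check (or matching on a wildcard) lets any site that can
    obtain a window handle inject data into your application.
    """
    if origin not in TRUSTED_ORIGINS:
        return None  # silently drop messages from unknown senders
    return data
```

The same discipline applies in reverse: when sending, specify the exact target origin rather than "*" so the data can’t be delivered to a window that has since navigated elsewhere.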
Local Storage Isn’t as Problematic as You Think
Local storage doesn’t appear to present major security risks, despite a lot of FUD circulating on the topic. Besides cookies, there have always been numerous ways for web apps to store data client-side through the use of plugins (Java, JWS, Flash, Silverlight, Google Gears, etc.) or browser extensions — WebKit/Safari/Chrome have supported local storage before it was even part of HTML5.
Developers should also be aware that, as currently implemented, the HTML5 sessionStorage attribute can be vulnerable to manipulation from foreign sites under certain circumstances. A remote site can get a handle to a window containing a site for which the browser has data in sessionStorage, then navigate that window to arbitrary URLs while the window still retains its sessionStorage. Hopefully this implementation bug will be fixed by the time the standard is final.
New Tags Increase Attack Surface
Firefox, Safari, and Chrome currently allow cross-domain requests to be sent using XMLHttpRequest. Before the entire request is allowed to proceed, the browser sends a probe request using the OPTIONS method (instead of, for example, GET or POST) first. If the server responds to this probe with an “Access-Control-Allow-Origin” header that gives the source host permission to make the request, the browser will then resend the full request with the requested HTTP method. This is consistent with the current working draft for W3C Cross-Origin Resource Sharing.
However, IE works differently. Instead of relaxing permissions on XMLHttpRequest, it uses a new object type called XDomainRequest. Also, instead of sending a probe that replaces the normal HTTP method with OPTIONS, its probe includes the original HTTP method as well as the request body (in the other browsers, the request body is omitted).
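The server’s half of that handshake amounts to an allowlist check on the probe’s Origin header. Here is a simplified sketch with hypothetical names; a real handler would also validate the requested method and headers from the probe before granting access:

```python
# Hypothetical allowlist of sites permitted to call this API cross-domain.
ALLOWED_ORIGINS = {"https://app.example.com"}

def preflight_response_headers(origin):
    """Decide which CORS headers to attach to an OPTIONS probe response.

    If no Access-Control-Allow-Origin header comes back, the browser
    refuses to send the full cross-domain request at all.
    """
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST",
        }
    return {}  # deny: respond with no CORS headers
```

Note that echoing back the specific requesting origin, as above, is safer than responding with a wildcard, which would grant every site on the internet the same permission.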
Sandbox Attribute Could Make Security Easier
One thing that may help, depending on how the standard is eventually defined and implemented, is the support for a sandbox attribute on IFRAMEs. This attribute will allow a developer to choose how data should be interpreted. Unfortunately, this design, like much of HTML, has a pretty high chance of being misunderstood by developers and may easily be disabled for the sake of convenience. If done properly, it could help protect against malicious third-party ads or any other case where untrusted content is redisplayed.
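Under the current draft, the attribute takes a space-separated list of capabilities to re-enable, with an empty value being the most restrictive. A sketch of how it might look (the URLs are made up):

```html
<!-- Untrusted ad content: an empty sandbox applies every restriction
     (no scripts, no forms, no plugins, treated as a unique origin). -->
<iframe sandbox src="https://ads.example.com/banner.html"></iframe>

<!-- Re-enable only the capabilities the content actually needs. -->
<iframe sandbox="allow-scripts" src="https://widget.example.com/embed.html"></iframe>
```

The convenience trap mentioned above is a developer reflexively adding tokens like allow-scripts and allow-same-origin together until the embedded content “works,” at which point the sandbox provides little protection.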
Always Remember Input Validation
The most important thing that developers can do is to remember basic security tenets, for example, the idea that all user input should be considered untrusted. They should learn how the new HTML5 features actually work in order to understand where they’d be tempted to make erroneous assumptions.
Filed under: application security, SDLC, Software Development, tools
A conversation on Twitter this morning started out like this:
@dinozaizovi: Finding vulnerabilities without exploiting them is like putting on a dress when you have nowhere to go.
This clever analogy spurred a discussion about the importance of proving exploitability as a prerequisite to fixing bugs. While I agree that nothing is more convincing than a working exploit, there will always be a greater volume of bugs discovered than there are vulnerability researchers to write exploits. Don’t get me wrong — as a former penetration tester, I agree that it is fun to write exploits, it just shouldn’t be a gating factor. Putting the burden of proof on the researcher to develop an exploit is not scalable, nor does it help create a development culture that improves software security over the long term.
A related topic, and one that hits closer to home for me, is how software developers deal with the results of static analysis. Static analysis is often misunderstood, particularly by people who have only dealt with dynamic analysis (fuzzing, web scanning, etc.) or penetration testing in the past. Because static analysis detects flaws without actually executing the target application, there’s an increased likelihood of finding “noise” (insignificant flaws) or false positives. On the other hand, static analysis provides broader coverage, often detecting flaws in complex code paths that a web scan or human tester would be unlikely to find. So there’s your trade-off.
Here’s a conversation I have all too frequently, paraphrased:
I don’t think I should have to fix this SQL injection flaw unless you can prove to me that it’s exploitable.
Static analysis isn’t performed against a running instance of the application. Not all flaws will be exploitable vulnerabilities, but some of them almost certainly are. Here, let me show you all of the code paths where untrusted user input enters the application and eventually gets used in the ad-hoc SQL query we’ve marked as a bug.
But what’s the URL that I can click on to exploit it?
Static analysis is different from a penetration test. The output of our analysis is a code path, not a URL. URL construction cannot be derived solely from the application code, because it depends on outside factors such as how the web server and application server are configured. Moreover, we don’t have the necessary context of how this flaw fits into the business logic of the application. Maybe this functionality is only accessible by certain users when their accounts are in a particular status. It might take a couple hours working closely with a developer in a test environment to come up with the attack URL. It might take several more hours to write a script around that attack URL to mine the database. On the other hand, it would take about 10 minutes to replace that ad-hoc query with a parameterized prepared statement.
Well, if you can’t demonstrate the vulnerability, then it’s not real.
Demonstrating a working exploit certainly proves a system is vulnerable. But the lack of a working exploit is hardly proof that it’s not vulnerable. You could spend the time to investigate every single flaw to figure out which ones are vulnerable, or you could fix them all in such a way that you’re guaranteed it won’t be vulnerable. In our opinion, the time is better spent on the latter.
[bangs head against wall]
Now imagine that conversation stretching out to 30 minutes or more. They could’ve fixed a half-dozen flaws already. And it’s not limited to SQL injection. For example, consider cross-site scripting (XSS):
I need you to prove that this XSS flaw is exploitable.
How about just applying the proper output encoding so you know the untrusted input will be rendered safely by the browser?
I need you to prove that this buffer overflow is exploitable.
How about just using a bounded copy or putting in a length check, so you know the buffer won’t overflow?
By now you get the picture. Many developers want proof, to the extent that they’ll sacrifice efficiency to get it. If we are to improve software over the long haul, developers must learn to recognize situations where it takes less time to patch a bug than to argue about its exploitability. On a more positive note, from someone who talks to static analysis customers on a daily basis, the tide is starting to turn in the right direction. But it is still an uphill battle.
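The fixes being advocated in those exchanges really are that small. As an illustration of the SQL injection case, here is a sketch using Python’s built-in sqlite3 module with a made-up schema; the same pattern applies to prepared statements in any language or database driver:

```python
import sqlite3

# Throwaway in-memory database with a hypothetical schema, for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Ad-hoc query: attacker-controlled input becomes part of the SQL text.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Parameterized statement: the driver binds the value as data,
    # so it can never change the structure of the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

A classic payload like `' OR '1'='1` dumps every row from the unsafe version and matches nothing in the safe one. Swapping one for the other takes minutes, no proof-of-concept exploit required.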