Filed under: ALL THINGS SECURITY, application security, SDLC, Software Development
A developer’s main goal usually doesn’t include creating flawless, intrusion-proof applications. In fact, the goal is usually to create a working program as quickly as possible. Programmers aren’t security experts, and perhaps they shouldn’t be. But with 70% of applications failing to comply with enterprise security standards (data from Veracode’s State of Software Security report, volume 5), it is clear that more attention needs to be given to secure programming techniques.
This is why, when I came across an article describing a new training program by the Software Assurance Forum for Excellence in Code (SAFECode), I was pleasantly surprised. The organization, led by Howard Schmidt, will offer training courses for “anyone that does development work”. The first six training courses will focus on web application security flaws such as SQL injection and Cross-Site Scripting.
I haven’t had a chance to view the full curriculum, but I have confidence that the security pros at Adobe have put together an excellent program. Web application security flaws are among the easiest to avoid and the most exploitable, yet they are also some of the most common, so I think starting the program with lessons on web applications is a great first step. It is an extra bonus that the material will be Creative Commons licensed, which should allow for wide distribution. The free on-demand training courses are available at:
The security industry needs more programs like the training from SAFECode. When combined with integrating security testing and scanning into the software development lifecycle (SDLC), these programs will help create less vulnerable applications and reduce the number of successful attacks that exploit well-known vulnerabilities. While most people seem to agree on these points, the need for speed has somehow made slowing down to consider security during development uncool. This is especially true when programmers have fewer resources at their disposal, for example, when developing open source applications. It is as if acknowledging that your code may have security flaws is the same as admitting you aren’t a true programmer. This couldn’t be further from the truth. Even the smartest, most innovative programmers can create software with flaws because they are human and imperfect, just like the rest of us.
Offering free training courses and materials on secure coding will hopefully serve a dual purpose. My first hope is that it will help programmers use more secure coding practices. The second is that it will eliminate the taboo of admitting (during the development stage) that an application could have security vulnerabilities. Only then can flaws be remediated before the program is released.
Filed under: application security, Binary Analysis, research
Everyone has had that dreaded experience: you open up the task manager on your computer… and there’s a program name you don’t recognize. It gets worse when you google the name and can’t find a concrete answer on what it is and why it’s there. It gets even worse when you remove it from Autoruns and it comes back. It gets terrible when you realize it has keylogger functionality. The icing on the cake, however, is when the mystery program is also eating up all your RAM.
The RAM issue is actually how this special little program on my own computer came to my attention. I recently bought a high-end Windows 8 tablet – to protect the guilty, we’ll call the manufacturer “Spacer”. Like most Windows computers, it came with an assortment of apps preinstalled by “Spacer”, ranging from the mildly useful to trash you delete without hesitation. In particular, I liked the interface that popped up when I plugged into HDMI, so I didn’t go on a vendor utility murdering spree.
I happened to have Resource Monitor open, and I noticed that the second-most RAM-hungry program was… a “Spacer” background service with a generic name, consuming 280MB. Not bad for a 15KB binary! Googling the name, “MEMS Enhancement Utility”, only turned up other customers wondering what it was and observing that getting rid of it didn’t seem to break anything. I disabled it and rebooted, but it came back. Presumably, one of the “Spacer” apps was serving as a watchdog for the others. The easy solution is to simply get rid of the program altogether, but I decided to investigate what made this program so important in the first place.
Figure 1: Not the most clarifying metadata
It turns out that the program was written in .NET, which is vastly easier and faster to reverse-engineer than conventional native binaries. At Veracode, we have our own internal tools for automated analysis of .NET programs, but for interactive purposes, I recommend the free JetBrains dotPeek.
When starting an investigation of a binary, I like to take a quick tour of bundled functionality.
Figure 2: Imported Namespaces
Aside from the typical imports, Windows7.Sensors is a fairly self-explanatory name, and is in fact just a sample code kit off MSDN for reading the tablet’s accelerometer. That’s interesting but rather benign functionality. Far more… concerning are the member variables and methods of the “gma” namespace.
Figure 3: Consider my eyebrows raised
This is, of course, the classic sign of a userspace keylogger, but for every keylogger out there, there are a hundred legitimate apps that hook the keyboard and mouse for perfectly sensible reasons; otherwise, why would it even be in the standard Windows API? I was, however, beginning to question the provenance of this application.
The actual logic of the utility, however, was… puzzlingly brief. It initiated a nearly-empty form and hid it. It set up handlers to receive keyboard, mouse, and accelerometer activity. It then set up timers to poll the accelerometer based on how long it’d been since the last keyboard or mouse activity. And that’s it. That’s all. The application does not store or transmit or even display the information polled. It does nothing. I spent the better part of two hours scouring the obscure corners of the binary, thinking surely I must be missing some cleverly hidden method that actually uses this data. I couldn’t find one.
Putting aside this issue for now, I couldn’t help but think: why in the world is this little app ballooning into hundreds of megabytes of RAM? That’s usually the sign of a runaway memory leak, but in a pure .NET application, such things are actually difficult to cause, whereas in a C/C++ program they’re very difficult not to cause.
The answer lies in the fact that the sensor-reading DLL uses marshaling to interact with native APIs, and actually calls traditional memory allocation routines. Hence, every time the accelerometer is polled, manual allocations are made that may or may not ever be manually freed depending on control flow.
Figure 4: One of the places where the sensor glue code indulges in manual memory management
Violently shaking the tablet (is this thing still under warranty?) causes the RAM usage of “MEMS Enhancement Utility” to spike, but not all at once, as the accelerometer reader is going off at timed intervals rather than constantly. The memory usage will balloon by several megabytes every time I shake the tablet, and after a few minutes, some of it will be reclaimed by the garbage collector but some will not. Hence, the base RAM usage of the process steadily creeps upwards.
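The leak’s control-flow shape is easy to sketch outside .NET. The following is a purely illustrative simulation (the class and function names are invented, not taken from the binary): every poll “allocates” a native buffer, but only one branch frees it, so each early exit leaks an allocation.

```python
# Hypothetical simulation of the leak pattern described above: each poll
# allocates a native-style buffer, but the free happens only on one branch
# of the control flow, so buffers leak whenever the early-return is taken.

class NativeHeap:
    """Stand-in for the native allocator reached through marshaling."""
    def __init__(self):
        self.live = {}          # handle -> size of outstanding allocation
        self.next_handle = 0

    def alloc(self, size):
        handle = self.next_handle
        self.next_handle += 1
        self.live[handle] = size
        return handle

    def free(self, handle):
        del self.live[handle]

def poll_accelerometer(heap, sensor_ready):
    # Marshal a report buffer for the native sensor API.
    buf = heap.alloc(4096)
    if not sensor_ready:
        # Early exit: the buffer is never freed on this path -- the leak.
        return None
    reading = ("x", "y", "z")   # pretend we read the sensor here
    heap.free(buf)
    return reading

heap = NativeHeap()
for i in range(100):
    # Shaking the tablet makes the sensor "not ready" half the time.
    poll_accelerometer(heap, sensor_ready=(i % 2 == 0))

leaked_bytes = sum(heap.live.values())
print(leaked_bytes)  # 204800: 50 un-freed polls x 4096 bytes
```

Because the allocations happen on the native side of the marshaling boundary, the .NET garbage collector never sees them; only an explicit free on every control-flow path would reclaim the memory.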
So now we know what the program does and why its memory usage is so high, but that still leaves the question of why it’s doing this at all. The clues are there, vestigial remnants of removed code, exciting to any Executable Archaeologist:
Figure 5: Declared variables of the main form
The generically named “Form1” of the application contains several widgets which are never actually displayed: a start button, a stop button, a place for displaying mouse coordinates, and a text box for displaying some other unspecified data. I believe this was originally a debugging utility used by “Spacer” engineers to calibrate the accelerometer so that it would not go off when one simply tapped on the touchscreen (triggering a mouse event or keyboard event). They didn’t bother to rigorously prevent memory leaks because it was never intended to run for more than a few minutes at a time. Somehow, through some miscommunication, a copy of this program with the logic for rendering the visuals stripped ended up on the list of utilities that needed to be kept in the final version of “Spacer’s” Windows 8 image for this model of tablet. Someone then dutifully registered it to be launched in the background every time the tablet boots, and every time the tablet is tilted, shaken, or prodded a little too hard, the RAM usage goes up.
Never attribute to malware what can be adequately explained by a few lines of debugging code somebody forgot to disable.
Filed under: application security, Application Security Metrics, research
Back in November 2012, I published Veracode’s initial security headers report on the top 1 million websites from the Alexa list. My goal was to turn it into a series so it would be possible to track how these sites change over time with regard to security headers that are added, removed, or changed. For this recent scan, only a single change was made to the original scripts: the tool now sends a recent Chrome User-Agent to track whether sites respond with different headers depending on the supplied User-Agent.
Only the Firefox User-Agent data was used when comparing the results with the previous November 2012 data set. Out of the original 1.25 million requests gathered from the November scan, a total of 719,355 URLs matched this most recent run. As for the new data, we had roughly the same number of valid responses: a total of 1,256,787 responses for Firefox and 1,257,273 responses for Chrome. Both HTTP and HTTPS requests were sent to each site.
Changes, Additions and Removals
Each security header in the November 2012 and March 2013 data sets was analyzed to see whether sites had modified its value, added new headers, or removed headers. A total of 2,450 new headers were added across the 719,355 URLs present in both scans.
As last time, we tracked the following security-relevant headers:
Of these headers, a total of 2,198 had been added, 75 had changed, and 452 had been removed. The majority of those removed were X-Frame-Options (246) and Access-Control-Allow-Origin (166). To reiterate, only Firefox-based User-Agent requests were used when comparing the two data sets.
The rate of change matches what we would expect; more popular headers, such as X-Frame-Options and Access-Control-Allow-Origin, saw the highest rate of change.
Calculating sites that added headers is straightforward; we simply identified sites with security headers that did not have any during the November scan. Sites that changed their header values are a bit more difficult to characterize because a number of sites include the same header multiple times in the response. When parsing response headers, it is quite common for multiple headers to have their values ‘merged’ into a single value, separated by a comma and a space. This is documented behavior, described in section 4.2 of RFC 2616. In the example shown below, we see a site returning the X-Frame-Options header twice.
For our purposes, we merge these headers into a single value of “X-Frame-Options: SAMEORIGIN, SAMEORIGIN.” In some cases the number of times a header is returned changes depending on when we make the request, which can skew the results when comparing the two data sets. This is most likely due to load balanced servers in which one or more servers are configured differently.
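The merging rule is simple to sketch. Assuming a parser that receives raw (name, value) pairs in response order (the function below is illustrative, not our actual tooling), duplicate headers collapse per RFC 2616 section 4.2:

```python
# Minimal sketch of the header-merging rule from RFC 2616 section 4.2:
# multiple fields with the same name are equivalent to a single field whose
# value is the comma-separated concatenation of each value, in order.

def merge_headers(raw_headers):
    """raw_headers: list of (name, value) pairs in response order."""
    merged = {}
    order = []
    for name, value in raw_headers:
        key = name.lower()          # header names are case-insensitive
        if key in merged:
            merged[key] = merged[key] + ", " + value
        else:
            merged[key] = value
            order.append(key)
    return [(k, merged[k]) for k in order]

# A site returning X-Frame-Options twice collapses to a single value:
response = [("X-Frame-Options", "SAMEORIGIN"),
            ("X-Frame-Options", "SAMEORIGIN")]
print(merge_headers(response))
# [('x-frame-options', 'SAMEORIGIN, SAMEORIGIN')]
```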
One of the more interesting data points comes from sites that removed security headers. Of the 426 sites that removed the X-Frame-Options header, 226 had originally set the value to SAMEORIGIN. A similar pattern can be seen with Access-Control-Allow-Origin. Of the 166 sites that removed the header, 130 had originally set the value to *. Of the 18 sites that removed the Strict-Transport-Security header, most had previously set a very high timeout value.
March 2013 Results
The scan conducted on March 10, 2013 used the latest Alexa list available at that time. As with the previous scan, both HTTP and HTTPS connections were attempted. This time, a second set of HTTP and HTTPS requests were sent using the latest Google Chrome browser User-Agent. Unfortunately during the scan, not every site that responded in Chrome also responded in Firefox. While the next two charts show the varying result count between the two browsers, all detailed security header value results analyzed below will only be the distinct results of requests sent using both user-agents.
For this scan, we analyzed a total of 1,256,787 responses for Firefox. We see roughly the same distribution of configured headers: X-Frame-Options is still the most popular, followed by Access-Control-Allow-Origin.
For Chrome, we see roughly the same as we do for Firefox. However, when using the Chrome User-Agent we see a total of 96 sites using the X-Webkit-CSP header, by far the largest variance between the two browsers. While 96 may seem like a lot, 83 of these sites are owned by Facebook, so only 13 of the results can really be considered unique.
This time the data was broken out a bit further than when we reported back in November. Invalid values were broken down further into sites that included conflicting headers. Conflicting headers means that a site returns two different header values for X-Frame-Options, such as a site returning DENY and SAMEORIGIN in the same response. Invalid values are simply that, for instance sites that configure Allow From (without a hyphen), or values such as ‘sameorigem’ (sic).
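A rough sketch of how such a classification can be done (the directive list and function below are ours, for illustration only): a merged X-Frame-Options value is “conflicting” when it contains two different valid directives, and “invalid” when any token is not a recognized directive.

```python
# Illustrative classifier for merged X-Frame-Options values, mirroring the
# conflicting/invalid distinction described above. Not the original scripts.

VALID = {"DENY", "SAMEORIGIN"}

def classify_xfo(merged_value):
    tokens = [t.strip().upper() for t in merged_value.split(",")]
    directives = set()
    for t in tokens:
        if t in VALID or t.startswith("ALLOW-FROM "):
            directives.add(t)
        else:
            return "invalid"     # e.g. "ALLOW FROM ..." or "sameorigem"
    return "conflicting" if len(directives) > 1 else "valid"

print(classify_xfo("SAMEORIGIN, SAMEORIGIN"))  # valid
print(classify_xfo("DENY, SAMEORIGIN"))        # conflicting
print(classify_xfo("sameorigem"))              # invalid
```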
Overall SAMEORIGIN is still by far the most common setting, followed by DENY. GOFORIT is still used quite a bit with 215 sites configured with this value. Only twelve sites bothered to configure X-Frame-Options with an Allow-From origin list.
Cross-Origin Resource Sharing (CORS) Headers
CORS continues to be a popular mechanism for sharing data between sites. As described in the previous post, the Access-Control-Allow-Origin header determines which origins the User-Agent will allow to send a request and read the response data. When configured with the wildcard value, any site can send requests and read the response data. However, if the request attempts to send credentials to a site configured with a wildcard, the request will fail and no response data will be returned.
We still see the wildcard value as by far the most popular way of configuring Access-Control-Allow-Origin. Allowing a single origin is a distant second, and people continue to configure the header with invalid values. Most of the invalid values are still hosts with wildcards, such as http://*.domain.com, bare hosts without the scheme such as *.domain.com, or multiple hosts specified in various ways. For a list of valid values, please consult our previous post on this subject.
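For illustration, a simple validity check along these lines might look like the following. The regular expression is a deliberate simplification of the origin grammar, not the full specification, but it rejects the common invalid patterns above:

```python
# Rough validity check for Access-Control-Allow-Origin: the only legal
# values are "*", "null", or a single serialized origin
# (scheme://host[:port]) -- no wildcard hosts, no bare hostnames, no lists.

import re

ORIGIN_RE = re.compile(r"^[a-zA-Z][a-zA-Z0-9+.-]*://[^\s/*]+(:\d+)?$")

def is_valid_acao(value):
    value = value.strip()
    if value in ("*", "null"):
        return True
    return ORIGIN_RE.match(value) is not None

print(is_valid_acao("*"))                    # True
print(is_valid_acao("https://example.com"))  # True
print(is_valid_acao("http://*.domain.com"))  # False: wildcard host
print(is_valid_acao("*.domain.com"))         # False: missing scheme
```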
For Access-Control-Allow-Credentials, only 217 sites set the property to true, five set it to false and three had it set to an invalid value.
As last time, we break STS values into four broad categories: a long max-age, greater than 8000 seconds; a short max-age, less than 8000 seconds; a max-age of 0, which tells the User-Agent that the host should be removed from the browser’s HSTS list; and finally invalid values.
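That bucketing can be sketched in a few lines. The parser below is illustrative and far more lenient than the real header grammar; 8000 seconds is the dividing line used in this post.

```python
# Illustrative categorizer for Strict-Transport-Security values into the
# four buckets described above: long, short, zero, and invalid.

import re

def classify_sts(value):
    m = re.search(r"max-age\s*=\s*\"?(\d+)\"?", value, re.IGNORECASE)
    if not m:
        return "invalid"
    max_age = int(m.group(1))
    if max_age == 0:
        return "zero"    # asks the browser to drop the host from its HSTS list
    return "long" if max_age > 8000 else "short"

print(classify_sts("max-age=31536000; includeSubDomains"))  # long
print(classify_sts("max-age=500"))                          # short
print(classify_sts("max-age=0"))                            # zero
print(classify_sts("maxage 600"))                           # invalid
```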
While the number of invalid values was quite low, we still see a high number of sites with a max-age of 0. Looking at those sites, the majority continue to come from www.etsy.com. As described last time, this can be attributed to their SSL opt-in policy.
There has only been a small change in sites using Content Security Policy. The biggest gain was in results for X-Webkit-CSP, where we now see more X-Webkit-CSP responses than X-Content-Security-Policy responses. This can be attributed entirely to Facebook: when using the Chrome User-Agent, popular Facebook user pages began to respond with the X-Webkit-CSP header. In fact, 83 of the 96 sites with X-Webkit-CSP came from domains owned by Facebook. Unfortunately, we are still seeing a high number of sites specifying the “inline script” or “eval script” options. For X-Webkit-CSP we have also started to see unsafe-eval, which was included in this count.
Overall it is good to see the number of sites adopting security headers trending upwards. It was a bit surprising to see that Content Security Policy still has little adoption, but compared to other security headers it is far more complex to implement and has a higher chance of impacting how a site operates. In this sense, one can understand why it is taking longer than the other security headers to become mainstream. Invalid header values continue to be a problem; if you use any of these headers on your site, it is worth double-checking the configured values against the relevant specification.
We feel this information could be useful in the community, so Veracode has decided to distribute the raw data used in this study. Both the November 2012 and March 2013 data sets are now available for download. To give a more accurate picture of our comparison, these archives contain the full list of web sites that were analyzed whether or not they had security headers in their responses.
It’s that time of year again… A time when all the most interesting people, ideas, concepts, and attacks are on display in Las Vegas. That’s right, we are talking about Blackhat USA and associated conferences. Every year, about a week before conference time, all the security analysts, researchers, and talking heads begin to espouse their thoughts regarding which of the conference sessions will be the highlights of the week. Each person’s idea of what will be “the best talk of the week” is colored through his or her own biased lens. To this end, we asked some of our blog writers to narrow their lists to the top 3 Blackhat presentations (sorry Defcon and BSides, you guys are awesome too… but we only have so much available time and space). Since no two lists are alike, we bring you the Veracode Zero Day Labs’ must-see presentation list for Blackhat 2011.
Chris Wysopal’s List
- How a Hacker Has Helped Influence the Government – and Vice Versa – Peiter “Mudge” Zatko: Mudge is a great speaker and I always learn new ways of looking at security from him. Now that he has immersed himself in the DoD way of looking at things, I am positive some new insights will flow out of him. Note that this is a keynote, so there is no excuse for missing this one.
- Femtocells: A Poisonous Needle in the Operator’s Hay Stack – Ravishankar Borgaonkar & Nico Golde & Kevin Redon: If you are like me, the first time you saw a Femtocell (a small cellular base station for home use) you thought, “If I could hack that, I could MITM mobile calls.” Well, these guys went out and did it! They are going to discuss attacking both mobile devices and the mobile infrastructure from a hacked femtocell.
- The Law of Mobile Privacy and Security – Jennifer Granick: It’s an unfortunate fact but security researchers need to keep up with the changing legal landscape that surrounds technology. Mobile research is exploding and stepping into areas covered by different laws than the traditional CFAA or DMCA. This is a good way to keep up if you are a mobile researcher.
Tyler Shields’ List
- Apple iOS Security Evaluation: Vulnerability Analysis and Data Encryption – Dino Dai Zovi: This talk is going to be awesome. Nobody knows Apple products as well as Dino, and if he says he has tested iOS, you better believe he’s gone deep.
- Hacking Androids for Profit – Riley Hassell & Shane Macaulay: A discussion on Android security both on the device and in the marketplace and some Android 0day to boot?! What an opening gambit this talk is going to be.
- War Texting: Identifying and Interacting with Devices on the Telephone Network – Don Bailey: With the continued advent of mobility and GPS positioning, devices are being hooked up to the phone network faster than ever before. Don will demonstrate some really cool ways of analyzing and testing these devices.
Chris Eng’s List
- A second vote for Apple iOS Security Evaluation: Vulnerability Analysis and Data Encryption – Dino Dai Zovi.
- Sophail: A Critical Analysis of Sophos Antivirus – Tavis Ormandy: Why should anti-virus tools be safe from scrutiny? Let’s see what Tavis has up his sleeve.
- Chip and Pin is Definitely Broken – Adam Laurie et al: Steal a card, use it to make charges, bank thinks you used the PIN? Sounds like a winning situation to me.
Brandon Creighton’s List
- A second vote for Sophail: A Critical Analysis of Sophos Antivirus – Tavis Ormandy.
- SSL and the Future of Authenticity – Moxie Marlinspike: Moxie never disappoints. That, and he has awesome hair.
- Sticking to the Facts: Scientific Study of Static Analysis Tools – Willis and Britton: We might be a little biased in finding this one interesting… static analysis is kind of our game.
Talks Presented by Veracode!
If the above doesn’t excite you, the following definitely should. Veracode researchers are participating in the following panels and talks at venues throughout Las Vegas.
- Panel: Owning Your Phone At Every Layer – Moderated by Tyler Shields: This panel, which will include our own Chris Wysopal, brings some of the best mobile researchers together to determine where the real risks in mobile devices come from. Is it the applications you install on your phone, the weak infrastructure, or the operating system? Come participate in this battle royale to determine what really should be keeping you up at night.
- The Web Browser Testing System – Isaac Dawson at Blackhat Arsenal: The Web Browser Testing System (WBTS) was built to quickly automate testing of various browsers and user-agents for security issues. It contains all the necessary services required for testing a browser: DNS, HTTP(S), logging services, and support for VirtualHosts.
- Communicating in Code – Chris Lytle at DEFCON Kids: Cryptography is the art and science of making and breaking secret codes and ciphers. Learn about the history of cryptography, practice it for yourself, and make your very own secret cipher! There will be prizes! Please note, kids will get more from this session if they have basic reading and writing skills.
Call for Papers
IEEE Security & Privacy
Software Static Analysis
Abstract submissions due: 15 Aug. 2011
Final submissions due: 15 Sept. 2011
Publication date: May/June 2012
Secure and reliable software is hard to build, but the costs of failure are steep. Data breaches caused by attackers exploiting vulnerabilities in software made many headlines in 2011 and show no sign of abating. Sony, RSA Security, and PBS were compromised, their intellectual property stolen, and the privacy of their customers impacted, all due to vulnerabilities in software. Software reliability problems have led to bungled lotteries, medical device failures, the early release of convicted felons, and innumerable other problems.
The precise details of software failures are often scarce, but it’s clear that the defects underlying many software problems could have been identified earlier using static analysis. As software platforms proliferate, from mobile devices to the cloud to embedded devices such as the smart grid, it will be even more difficult to get software right. Will static analysis be up for the challenge?
This special issue of IEEE Security & Privacy will address both static analysis technology and the challenges of using it during software development and acquisition. Is it possible to apply static analysis to the wide range of software assurance challenges that exist today? We solicit articles from:
- individuals building static analysis technology
- individuals integrating static analysis into software development methodologies and processes
- organizations implementing software security programs that used static analysis to manage software risk organization-wide
- government agencies and industry regulators who use static analysis to manage software risk
Potential submission topics include (but are not limited to):
- How can we build more useful static analysis technology: reducing analysis errors, improving scalability, or making static analysis easier to use?
- What are the benefits of integrating static analysis with other software development technologies or processes such as dynamic testing or threat modeling?
- Can static analysis results be integrated with other information sources such as network analysis, firewall logs, or intrusion detection?
- How can an organization scale static analysis across hundreds of software teams and projects?
- Using static analysis to understand the risk in software you didn’t build.
- Using static analysis to find privacy problems.
- Can static analysis be used to help educate software developers?
- How do modern programming languages, frameworks, and trends impact the effectiveness of static analysis?
- Can static analysis be the basis for automatically repairing some kinds of vulnerabilities?
Submissions will be subject to the IEEE Computer Society’s peer-review process. Articles should be at most 6,000 words, with a maximum of 15 references, and should be understandable to a broad audience of people interested in security and privacy. The writing style should be down to earth, practical, and original. Authors should not assume that the audience will have specialized experience in a particular subfield. All accepted articles will be edited according to the IEEE Computer Society style guide. Submit your papers to ScholarOne at https://mc.manuscriptcentral.com/cs-ieee.
Contact the Guest Editors: Brian Chess (firstname.lastname@example.org) and Chris Wysopal (email@example.com)
Rich Mogull talks about real-world IT security challenges today in his column, “Simple Isn’t Simple,” in Dark Reading. I agree 100%. One of Rich’s points is that security has to scale or it doesn’t solve the real-world problem. In most cases we know how to solve a security problem for a single instance of that problem; one SQL injection flaw in one app, for instance. The challenge is doing it at scale. If you can’t do it at scale, you don’t solve the problem for the business.
Firewalls need to be on every ingress/egress point in the organization or they don’t solve the problem. Firewall technology has to scale to be manageable over every connection and work on every size pipe. Network vulnerability scanners have to scale to scan every system in the enterprise. Patch management solutions need to scale to manage every system with any OS. Likewise the only way to solve application security is to scale it to every release of every app.
At Veracode we don’t just focus on the accuracy of our application security solution. We also focus on our solution working well at large enterprise scale. Our mission is to make it possible for an organization, no matter how large, to perform security testing on all apps: every release, from every source (in house, outsourced, vendor, open source), and on every platform. We have customers that are statically scanning 1000 different applications this year. We have dynamically scanned 3000 web sites for one customer in 8 days. Scaling well is also not just the absolute number you can get to, but how quickly you can get there.
Scaling application security is a hard problem that requires both automation and humans. Manual effort cannot be eliminated, so it needs to be made as efficient as possible. This can be done by offloading the parts of testing that can be automated to automated solutions: let humans find authorization issues and machines find SQL injection. And since application security experts are the scarcest human resource, we should design solutions where tasks that need humans can be performed by more available people. Let QA people crawl through business logic constraints and feed that crawl into automation, rather than driving tools with application security experts. These are some of the approaches we are taking as we learn how to drive application security testing through huge application portfolios.
Filed under: application security, SDLC, Software Development, tools
[UPDATE: Since there seems to be some confusion, the "We" in the title of this post is NOT "Veracode". The expression is a generic one intended to illustrate the attitude exhibited by many companies who like to downplay the value and/or effectiveness of technologies that they themselves do not sell. I can't believe I am having to explain this.]
Fair warning, this is a bit of a rant.
Back in my consulting days (early 2000, I’m getting old), we delighted in the fact that our web application penetration testing methodology didn’t rely on automated tools. This was completely true; we did everything manually, and we were among the best in the industry. Many so-called security consultants of the day would run a commercial web scanner and repackage the results as a high dollar “penetration test” — what a ripoff!
What we didn’t acknowledge to our customers is that those web scanners, even in their immature state, were probably capable of detecting some of the low hanging fruit that we didn’t want to spend our time looking for. Oh, we’d find a few “representative examples” of XSS and SQL injection, but then we’d get bored and move on to the more interesting and complex attack vectors. In our naivete, we figured developers would be inspired to revisit their entire input validation and/or output encoding practices, as opposed to just fixing the proof-of-concept examples we found.
Meanwhile, the commercial web scanner vendors were always downplaying the value of manual testing! “Why would you want to pay for an expensive penetration test when you can just run this less expensive tool and find the same vulnerabilities?” They’d gloss over all the technical challenges of automated web scanning and conveniently forget to mention how it was impossible for them to find authorization issues, cryptographic weaknesses, business logic flaws, and so on.
What’s my point?
Using multiple testing methodologies is crucial. Sure, there may be some overlap, but ultimately they are complementary to one another. That’s why at Veracode, we’ve never positioned automated static analysis (SAST) as a complete solution. That’s why we integrated both automated web scanning (DAST) and manual penetration testing into our service offerings less than a year after launching the company, even though SAST is our patented bread-and-butter technology. This meant we could always be completely honest about the strengths and weaknesses of each technique. I’ve had a slide titled “There Is No Silver Bullet” in my corporate slide deck since the very beginning.
Our silver bullet is better than yours
Meanwhile, it’s been amusing to watch other companies — who only had a single offering — having to espouse the tactic of downplaying any testing approach that wasn’t in their service portfolio.
- Over at Fortify, Brian Chess famously predicted that 2009 would mark “the end of penetration testing.”
- Over at WhiteHat, Jeremiah Grossman often downplays the value of writing secure code and testing code quality.
- Even as recently as last week, we have Errata Security (a consultancy) claiming that automated tools are useless and doomed to fail. Welcome back to 1999.
I’m only picking on these guys because they’re visible, well-respected practitioners in the application security space. Of course Brian knows source code scanning is an incomplete solution, and now that Fortify and WebInspect are part of the same parent company, I suspect he’s adjusted his message. I’m certain Jeremiah knows there’s value in writing secure code during the SDLC, which is why WhiteHat is now trying to get into the SAST market by acquiring some technology.
And I’m pretty sure Dave Maynor knows automation does provide real value. How else can a big company — spooked by all the recent breaches — quickly hunt for SQL injection vulnerabilities across 5,000 websites without the benefit of automation? How does one look for issues in the 150 third-party libraries you use, where only the binary is available? Do you hire Mark Dowd to spend a month looking at each one?
We all know a few sales reps that jump from one company to another, changing their pitch as they go no matter how much it conflicts with things they’ve said in the past. First a service-based approach is best, but suddenly an on-premise tool is better. Source code scanning used to be pointless, but now it’s the best thing since sliced bread! It’s no surprise these guys don’t experience more success — they lack credibility. The most successful account reps I’ve seen are the ones who build trust with their customers over time by being honest about what they are selling, even when hopping from one company to the next.
Look, it’s no big secret why people talk up their own stuff and imply everything else stinks. It’s part of the sales and marketing machine and by no means is it unique to the security industry. Even so, can’t we make an effort — as practitioners — to cut back on the rhetoric a little bit and be more honest with our customers? Customers look to us as experts to help them build their security programs, and what do we do? We oversell them on an approach that has huge gaps we pretend don’t exist. If you’re really looking out for your customers, start being more honest, and stop handing out kool-aid.
Here’s another approach: Instead of outright dismissing an effective technology or methodology just because you don’t sell it, sometimes it’s worth thinking about partnering, or even building something better. That’s why at Veracode we designed our service platform around the idea of technology integration. There is no silver bullet and there never will be.
Over the last few weeks there’s been a lot of commentary around the breach of Sony’s PlayStation Network. Sadly, there has been no good discussion of how PSN was actually breached. What this breach means for Sony is largely defined by how it happened. Before we get to that, though, let’s review some of the important points in the breach’s timeline.
Jan 2, 2011: Months of battles between Sony and PS3 hackers reaches a climax when George Hotz aka GeoHot publishes the Root Key for the PS3. Among other things this allows users to sign and run any code they want on the PS3.
Jan 11, 2011: Sony responds to the release of the Root Key by filing suit against Hotz and several other prominent PS3 enthusiasts in Sony Computer Entertainment of America LLC v. Hotz et al. Sony brought multiple claims against the hackers, including violations of the DMCA and the Computer Fraud and Abuse Act, breach of contract, and trespass.
March 31, 2011: Rebug custom firmware released. Rebug allows access to many of the features only found in PS3 developer kits (PS3 dev kits were notoriously expensive. At one point the PS3 Reference Tool cost upwards of 10,000 USD.)
March 31, 2011: Sony Online Entertainment lays off 205 employees, an estimated 1/3 of the division.
Early April 2011: Internet group Anonymous responds to SCEA v. Hotz by launching OpSony, a DDoS of PSN and other Sony owned properties with a web presence.
April 20, 2011: Sony detects an intrusion, and the PlayStation Network and Qriocity servers are taken offline.
From there Sony’s missive to Congress pretty well documents what happened.
So, with that background laid, we now need to ask how the attacker actually got in. Sony held a press conference on May 1, 2011, during which they presented a diagram describing how they believed the intrusion happened.
This seems like a roundabout way of saying that there was a SQL injection issue in one of PSN’s applications, or that the database server was publicly accessible and exploitable directly. That’s not very descriptive or helpful, though, so let’s take a look at some of the alternative ideas on how the breach happened. Please take all of this with a grain of salt, as some of it is speculation and cannot be confirmed.
- Unpatched server: A chat log of several PS3 modders probing PSN has been making the rounds. In it they claim that some of PSN’s webservers were running outdated versions of Apache and Linux (2.2.15 and 2.6.9-2.6.24 respectively). It is a solid bet that if those packages were outdated, the rest of the server hadn’t been patched in the last 5 years either. If that was the case, then the intrusion would have been as simple as firing up Metasploit and going to work. As a side note, Google’s web cache shows that Sony’s servers were up to date, so this whole theory may be bunk.
- Physical attack: Several of Sony’s press releases and blog posts have talked about moving the PSN servers into a single secure location. There have been suggestions that this indicates that there was a physical component to the attack. While this certainly is a possibility, it seems much more likely that this was already happening and Sony is merely highlighting it to promote the image of a security conscious company.
- Insider attack: While this is a threat actor, not an attack, it still merits mentioning. There is a possibility that one of the 205 SOE employees who were terminated on March 31st could have used their access to attack Sony. The retaliatory attacks over the GeoHot lawsuit would have provided the perfect cover for an employee who was angry with being terminated to leverage their access against Sony.
- Leveraging a PS3 against PSN: One of the interesting features of the Rebug firmware was the ability to switch which set of PSN servers the console connected to. For instance, in one attack modders found it was possible to force a PS3 to connect to the prod-qa instance of PSN. On this particular instance, the servers would not authenticate credit card information before adding credit to the account, so attackers could simply add unlimited credit for the PSN store. Much of this information was publicly available before the breach happened. Also an IRC chat log claimed that there were 45 Internet accessible PSN instances at the time of the breach. It is possible that one of the PSN instances meant for internal use only had certain flaws or was configured in such a way that a rogue PS3 could have leveraged it against the rest of Sony’s network.
Looking at these possibilities and their likelihoods, I think we can form a pretty reasonable idea of what happened beyond the attack shown in Sony’s diagram. It looks like a vulnerability in an application was the initial point of entry for this breach. Whether or not this was done using a modified PS3 is up for debate, and there isn’t any solid proof one way or another. While it is extremely probable that some of the machines in PSN weren’t up to date on their patches, it seems that if exploiting an outdated web service had been the way into PSN for the last 5 years, we would have heard about it much sooner, given all of the automated scan-and-attack tools available today. Also, Sony’s actions that look like responses to a physical attack are probably nothing more than management handing down a blank check to make sure that all of PSN’s defenses are bulked up.
And that’s all working on the assumption that there was just one breach! Perhaps the reason why Sony’s response has seemed a little disjointed is that we keep trying to shoehorn their actions to fit our notion of them responding to a single unrealistically complicated multi-vector attack, and not them responding to a slew of simple attacks that all happen to be coming from different vectors simultaneously. In the weeks that followed PSN being taken down, we have learned that other Sony-owned resources have been compromised and taken offline (e.g. DC Universe Online, Star Wars: Galaxies, Free Realms, EverQuest, and even Sony-run Facebook games like Fortune League) and that more personal information was lost than originally reported (plus an additional 12,700 credit card numbers were discovered stolen on May 2nd). It is unlikely that this is all the work of a single attacker. Even with a best case scenario of there being only two independent simultaneous breaches, so much went on in Sony’s network during those few days that trying to assess, attribute, and respond to what happened is quite a task. Expecting them to know exactly how to best respond to a breach of this magnitude and complexity without tilting their heads a little about what happened is just unrealistic.
Finally, I would bet that this was more a crime of opportunity than a targeted attack. Much of the work that modders were doing on exploring the different PSN instances was publicly available. If someone wanted to attack PSN, the recon was done for them and the tools were already made. Since several less-than-honest modders were using the aforementioned free content trick, someone who wanted to use this information to attack would need to do it before Sony responded and nullified all of that work. Also Sony was still shoring up their defenses from the DDoS of the prior weeks, so there was perfect cover for the attack.
All in all, we probably won’t ever know all of the details surrounding this breach. This should provide a little bit of insight into what probably happened and help a bit to interpret Sony’s response to the breach.
Following Stuxnet, the attack on the industrial control systems of Iran’s nuclear facilities, vulnerability researchers have intensified their scrutiny of the software that runs these industrial systems, known as SCADA systems. The results are unsettling. Given the danger posed by flaws in the software that controls power and water systems and industrial plants, you would expect vulnerabilities to be rare. It is just the opposite. Common vulnerabilities listed in the CWE/SANS Top 25 Most Dangerous Software Errors, such as SQL injection (#2), Buffer Overflow (#3), and Use of Hard-Coded Credentials (#11), have been found in SCADA systems over the last few months. A visit to the ICS-CERT website shows the discouraging results. I believe these public disclosures are the tip of the iceberg, as there are not as many researchers focusing on SCADA as there are focusing on common consumer and business software.
The latest revelation comes from ICS-CERT, which has alerted the industrial control system community to serious vulnerabilities that put it at risk. An exploitable buffer overflow has been discovered in a component of the ICONICS GENESIS32 and BizViz products. This component, an ActiveX control, is often used where a web-based user interface needs to interface with another piece of a software control system. ActiveX controls are commonly used in the UI of control systems built on the Microsoft Windows platform.
Buffer overflows in ActiveX controls are a serious class of vulnerability that came to light about 10 years ago. Back in 2001, the only solution to this problem was time-consuming manual code review or manual testing. But in 2011 there is a much better answer to the buffer overflow problem: static code analysis. All software written in C or C++ should be tested for buffer overflows using static code analysis before it is delivered to customers.
Those interested in learning more about testing for vulnerabilities in ActiveX controls can read a free chapter on Local Fault Injection from my book, The Art of Software Security Testing. For the safety of our critical infrastructure I hope the software engineers at Iconics will read this and consider security testing of their code.
The purchasers of industrial control systems and other business-critical systems should begin to ask their vendors whether they have performed security testing before software is delivered to them. Since static analysis can find the majority of the vulnerabilities in the CWE/SANS Top 25 Most Dangerous Software Errors, there is no excuse for important software not to be tested.