5 Best Practices in Data Breach Incident Response

August 26, 2014
Filed under: application security

It goes without saying that all IT organizations should have an active Incident Response (IR) Plan in place – i.e. a policy that defines in specific terms what constitutes an information security incident, and provides a step-by-step process to follow when an incident occurs. There’s a lot of good guidance online about how to recruit a data breach response team, set initial policy, and plan for disaster.

For those organizations already prepared for IT incident response, be aware that best practices continue to evolve. The best IR plans are nimble enough to adjust over time. However, when the incident in question is feared to be a possible data breach, organizations should add a couple of other goals as part of their comprehensive application security disaster planning:

  • The complete eradication of the threat from your environment.
  • Improved AppSec controls to prevent a similar breach in the future.

Veracode’s Information Security Assessment Team, which put together our own IR playbook, recommends that IT groups follow these five emerging guidelines to plan for the reality of today’s risks and threats.

1. Plan only for incidents of concern to your business.

Learn more about threat modeling with a free chapter from the book “Threat Modeling: Designing for Security”

According to the SANS Institute, the first two steps to handling an incident most effectively are preparation and identification. You can’t plan for everything, nor should you. For example, if no business is conducted through the organization’s website, there is probably no need to prepare for a Denial of Service attack. Companies in heavily regulated industries such as financial services or healthcare receive plenty of guidelines and mandates on the types of threats to sensitive and confidential data, but other industries may not enjoy similar “encouragement”.

Ask yourselves: what is OUR threat landscape, and why would hackers and criminals want to attack us? The possible answer(s) will lead to a probable set of root causes for data breach attempts. Focus on what’s possible, but also don’t be afraid to think creatively. The U.S. National Security establishment was famously caught flat-footed by the events of 9/11 as the result of a “lack of imagination” about what terrorists were capable of accomplishing. By constantly re-evaluating your organization’s threat landscape (and by relying on solid threat intelligence to identify new and emerging threats), your data breach response team will remain on its best footing.
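If it helps to make that concrete, here is a minimal sketch (in Python, with purely illustrative scenario names, scores and threshold) of the kind of likelihood-times-impact ranking a team might use to decide which scenarios deserve a documented response workflow:

```python
# Minimal sketch (not from this article): rank candidate threat scenarios by
# rough likelihood and business impact to decide which ones get a documented
# response workflow. All scenario names and scores are illustrative.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    likelihood: int   # 1 (rare) .. 5 (expected)
    impact: int       # 1 (minor) .. 5 (business-threatening)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

scenarios = [
    Scenario("SQL injection against customer portal", 4, 5),
    Scenario("Denial of service on public website", 2, 2),
    Scenario("Phishing leading to credential theft", 5, 4),
    Scenario("Insider exfiltration of customer records", 2, 5),
]

# Plan only for what clears a risk threshold your team agrees on.
THRESHOLD = 10
for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    decision = "write a response workflow" if s.risk >= THRESHOLD else "monitor only"
    print(f"{s.name}: risk={s.risk} -> {decision}")
```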

2. Don’t just plan your incident response, practice it.

Practice: it’s not just the way to Carnegie Hall. IR Plans must not be written and then left on the shelf to gather dust. A proactive and truly prepared information security organization educates its IT staff and users alike about the importance of regularly testing and updating breach response workflows. Plans must be drilled and updated regularly to remain viable. Even if it’s simply around a conference table, run through your response plan. Some organizations do this as often as monthly; your unique industry and the probable threats it faces will determine the ideal frequency of this best practice. At Veracode, we run regular Table Top Exercises on a number of possible scenarios.

The worst mistakes are typically made before the breach itself. Be prepared. The purpose of IR drills is to ensure that everyone understands what he or she should be doing to respond to a data breach, quickly and correctly. A good rule of thumb here is that “practice makes better, never perfect.” It pays to be honest about your IR team’s capabilities and their ability to effectively neutralize the most likely threats. If the necessary skills don’t exist in-house, then plan to retain outside help that can be standing by, just in case.

3. In speed of response, think “minutes” not “hours”.

IR teams should always strive to improve response times – that’s a given – and “within minutes” is today’s reality. On the internet, a service outage of more than one hour is considered significant. Social media chatter can very quickly amplify the damage that could be done to your business, so get out ahead of the crisis… and stay there.

SANS Institute defines the third step in breach response as “containment” – to neutralize the immediate threat and prevent further damage to critical or customer-facing systems. Move quickly to determine the possible severity of the data breach and then follow the customized response workflows in place for that scenario. To borrow some terminology from the military: is your “situation room” responding to a Defcon 1 attack or more like Defcon 5? Even as your IR team moves to eradicate the threat, you can be communicating to key stakeholders appropriately – according to the reality of the situation at hand.
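As a rough illustration only – not Veracode’s actual playbook – the mapping from an assessed severity level to a pre-agreed containment workflow and notification list can be captured in something as simple as this (level names, steps and stakeholders are all assumptions):

```python
# Illustrative sketch: map an assessed severity level to a pre-agreed
# containment workflow and the stakeholders notified at that level.
# All level names, steps and stakeholder lists are made up for illustration.

SEVERITY_PLAYBOOK = {
    "sev1": {  # confirmed breach of customer data ("Defcon 1")
        "workflow": ["isolate affected systems", "engage forensics retainer",
                     "activate customer notification plan"],
        "notify": ["CEO", "legal", "PR", "all customers"],
    },
    "sev3": {  # suspicious activity, no confirmed exposure
        "workflow": ["snapshot logs", "expand monitoring", "verify backups"],
        "notify": ["security team", "IT operations"],
    },
}

def respond(severity: str) -> None:
    plan = SEVERITY_PLAYBOOK.get(severity)
    if plan is None:
        raise ValueError(f"No workflow defined for severity '{severity}'")
    for step in plan["workflow"]:
        print(f"[{severity}] do: {step}")
    print(f"[{severity}] notify: {', '.join(plan['notify'])}")

respond("sev1")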

4. Don’t over-communicate.

This guideline seems counter-intuitive. Sharing is caring, right? Wrong. Especially when it comes to the fate of your organization’s confidential or sensitive customer information. Your initial notification to customers should almost immediately follow detection as a pre-planned rote response. There will be no time to wordsmith the perfect statement in the thick of battle; better to have it pre-packaged and ready ahead of time. That being said, this statement should be short and to the point. Acknowledge both your awareness of the incident and the IR team’s continuing efforts to safely restore service, as soon as possible.
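One way to keep that statement “pre-packaged” is to hold it as a fill-in-the-blanks template; the wording and field names below are purely illustrative:

```python
# Minimal sketch: a pre-packaged initial notification held ready ahead of time,
# so the first customer statement is a fill-in-the-blanks exercise rather than
# late-night wordsmithing. Wording and field names are illustrative only.

from string import Template

INITIAL_NOTICE = Template(
    "We are aware of a security incident affecting $service detected at $detected_at. "
    "Our incident response team is actively working to contain the issue and safely "
    "restore service. We will provide our next update by $next_update."
)

print(INITIAL_NOTICE.substitute(
    service="the customer portal",
    detected_at="09:40 UTC",
    next_update="12:00 UTC",
))
```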

After that, plan to give updates to all stakeholders on a regular, methodical basis. Act like the NTSB after a plane crash: they give regularly scheduled press conferences on what they know so far, while firmly pushing back on what they don’t. Think like an investigator and deal in facts. Don’t speculate as to the root cause of the breach or even when service will be restored, unless that timeline is precisely known. Your communication to the market, while measured, should always be sympathetic and as helpful as possible. One final piece of advice: tell your customers the same thing you tell the media. There are few, if any, secrets left on the Internet.

5. Focus on restoring service first, root cause forensics later.

Uptime will keep customers happy, which is where your focus should be initially.

The root cause of a data breach incident is typically not immediately known, but that should be no impediment to your restoring service ASAP for customers (once the threat is contained and destroyed, of course). Keep the focus on the customer. Get back online as quickly as possible. Clearly, SANS outlines “recovery” as the step that ensures that no software vulnerabilities remain, but…

Ignore the engineers & analysts who want to investigate root cause immediately. With today’s sophisticated attacks, this can take weeks or months to determine, if at all. Still, incident response is not over when it’s “over”. As we’ve asserted, the best organizations – and their IR teams – take the time to learn from any mistakes. Monitor systems closely for any sign of weakness or recurrence. Analyze the incident and evaluate (honestly) how it was handled. What could be improved for better response in the future? Revise your organization’s IR Plan, making any necessary changes in people, processes or technology for when or if there is a next time. Practice any new workflows again and again until you know them cold.

Conclusion:

Solid IT risk management strategies include disaster recovery planning and the creation of a living, evolving incident response playbook. Today’s IR plans need to be focused, factual and fast. Every organization needs to budget for the hard IT costs associated with data breach recovery. However, a comprehensive and battle-tested plan will help mitigate the “soft costs” associated with poorly handled data breach incidents. These can include lingering damage to revenue, reputation or market value – long after the initial crisis is resolved.

Address Proof of Software Security for Customer Requirements in 4 Steps

The world’s largest enterprises require proof of software security before they purchase new software. Why? Because third-party software is just as vulnerable to attack as software developed by internal teams. In fact, Boeing recently noted that over 90 percent of the third-party software tested as part of its program had significant, compromising flaws. As a software supplier, how do you get ahead of this trend?

Not every supplier has the resources and maturity to develop its own comprehensive secure-development process to the level of the Microsofts of the world, but that doesn’t mean security should be thrown out the window. Large, medium and small software suppliers — such as NSFOCUS and GenieConnect — have found significant benefit in incorporating binary static analysis into their development process, addressing vulnerabilities and meeting compliance with industry standards. This has earned them the VerAfied seal, which means their software product had no “very high,” “high” or “medium” severity vulnerabilities as defined by the Security Quality Score (SQS), nor any OWASP Top 10 or CWE/SANS Top 25 vulnerabilities that could be discovered using Veracode’s automated analysis.
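For illustration only – this is not Veracode’s actual scoring logic – the shape of such a policy check might look like the following: fail if any finding is of medium-or-higher severity or maps to an OWASP Top 10 or CWE/SANS Top 25 category.

```python
# Illustrative sketch only -- not Veracode's scoring implementation. It shows
# the shape of the policy described above: fail if any finding is medium or
# higher severity, or maps to an OWASP Top 10 / CWE-SANS Top 25 category.

BLOCKING_SEVERITIES = {"very high", "high", "medium"}
BLOCKING_STANDARDS = {"owasp-top-10", "cwe-sans-top-25"}

def meets_policy(findings: list[dict]) -> bool:
    for f in findings:
        if f["severity"] in BLOCKING_SEVERITIES:
            return False
        if BLOCKING_STANDARDS & set(f.get("standards", [])):
            return False
    return True

findings = [
    {"cwe": "CWE-89", "severity": "high", "standards": ["owasp-top-10"]},
    {"cwe": "CWE-478", "severity": "low", "standards": []},
]
print("policy passed" if meets_policy(findings) else "policy failed")
```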

This extra step to meet compliance with software security standards is one most suppliers don’t even consider: it could slow down development, add extra cost to the product and potentially reveal software vulnerabilities that the producer would rather not know about. Many software suppliers vainly hope that security is only necessary for a certain class of software — a banking program perhaps, but not a mobile application. However, security is relevant to every supplier, no matter their product or industry.

Software suppliers that neglect the security of their product are in for a rude awakening when the sales pipeline evaporates because they can’t answer questions about software security.

What should a supplier do to address a request for proof of software security? Here are four steps:

  1. Use — and document — secure coding practices when developing software. This may seem obvious, but developer documentation makes it easy to demonstrate that the software was developed to be secure from the very beginning.
  2. Test for vulnerabilities throughout the development process (the earlier and more frequent, the better). Don’t wait until the night before your product’s release to run your first security assessment, or your release will be delayed. (A hypothetical build gate along these lines is sketched just after this list.)
  3. Educate developers on how to find, fix and avoid security flaws. Many developers simply haven’t had proper training. Make sure they learn these skills not only for the benefit of your product, but also to improve your human capital.
  4. Proactively communicate with your customers about the steps you take to secure your product. This will improve existing relationships and help differentiate your product in the market.
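As a hypothetical example of step 2, a build-time gate can ingest scan results and fail the build when blocking findings appear. The results-file format and threshold here are assumptions for illustration, not any particular vendor’s API:

```python
# Hypothetical CI step for item 2 above: ingest security scan results on every
# build and fail fast, rather than discovering flaws the night before release.
# The results file format and severity threshold are assumptions.

import json
import sys

def gate_build(results_path: str, max_allowed_severity: int = 3) -> int:
    """Return a process exit code: 0 to pass the build, 1 to fail it."""
    with open(results_path) as fh:
        findings = json.load(fh)   # e.g. [{"issue": "...", "severity": 5}, ...]
    blocking = [f for f in findings if f["severity"] > max_allowed_severity]
    for f in blocking:
        print(f"BLOCKING: {f['issue']} (severity {f['severity']})", file=sys.stderr)
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate_build("scan-results.json"))
```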

It’s time for the software industry as a whole to embrace the trend of requiring proof of security as an opportunity to improve software everywhere.

Yo, A Cautionary Tale for the VC Community

By Chris Lynch, Partner, Atlas Venture

The story of Yo will be used as a cautionary tale in the VC community for years to come. Only a few days after receiving a much talked about $1.2 million in series “A” funding from angel investor and serial entrepreneur Moshe Hogeg, Yo suffered a massive security breach. The breach made more headlines than the funding, and took the wind out of the company’s sails – possibly for good.

The jury is still out on the future of the Yo app but being hacked got the app in headlines for all the wrong reasons.

How did the breach happen? In the weeks that followed, several journalists offered their analysis, including @VioletBlue: People invested $1.2 million in an app that had no security, @mikebutcher: App allegedly hacked by college students and @mthwgeek: Yo been hacked made to play Rick Astley.

While the epic rise and fall of Yo and how Yo was hacked make for an interesting story, as an investor, this is not the part of the story that jumped out at me. The question I have is: how did an experienced investor like Moshe Hogeg (or any investor, for that matter) invest in a technology without learning about how that technology was developed? The app was built in about eight hours. What does that indicate about the QA process? What does that say about the security of the software?

Join Chris Lynch and Veracode CEO Bob Brennan for a webinar discussing: Why You Need to Be a Secure Supplier

The eight-hour development time is impressive, and demonstrates drive on the part of the app’s developers. However, I have questions about the security of a product that can be developed during a single standard work day. And Yo’s prospective customers – the advertising firms it would inevitably sell this data to – would have asked the same question.

When I listen to a start-up pitch me on their next-gen/transformational/whatever product, I always question if the technology is truly enterprise-class: is it scalable, reliable, and secure? One or two groups within an enterprise may order a few of your widgets without this, but if you are gunning for the big bucks, you want an enterprise-wide deployment of your technology. This requires you to prove that your product is just as reliable and secure as the largest players in the market. Because no one gets fired for buying IBM. People get canned when they purchase software from a cutting-edge start-up that ends up causing a data breach and costing the enterprise millions. Security is just table stakes if you want to play with the big boys. This includes enterprises buying your product and VCs like Atlas Venture backing your company.

When investing in a company, or product, it is essential that I understand everything I can about the technology – including the security of that product. It isn’t enough to scrutinize the need for the technology in the market and the product’s functionality. I must also understand how the product is developed, and if secure development practices are in use. Otherwise I am setting myself up to lose a lot of money in the event of a breach.

As investors in new companies and technologies we are taking risks, and without investors taking these risks we will never see the next Facebook or Instagram. However, these risks we take should be calculated jumps, not leaps of faith. Investing $1.2 million into a company without this level of due diligence is irresponsible – unless you are looking for some sort of revenue loss tax break.

I have a feeling Moshe Hogeg thought he had a winning product when he wrote that check. But he didn’t conduct a full due diligence process, and he is paying dearly for that mistake now. I feel bad for Moshe Hogeg, but I hope his misfortune can serve as a warning to the investment community as a whole and, more broadly, to buyers and users of software – whether they are consumers or businesses. Software security is as important as software functionality, and simply assuming security was a consideration during the development process is no longer good enough. Documented proof needs to be provided by these software development companies if they expect to get funding and ultimately to generate revenue.

Just Another Web Application Breach

Does this resemble your application security program’s coverage? We can help.

Another day, another web application breach hits the news. This time ITWorld reports Hackers steal user data from the European Central Bank website, ask for money.

I can’t say that I’m surprised. Although vulnerabilities (SQL injection, cross-site scripting, etc.) are easy for attackers to detect and exploit, they are still very common across many web applications.

The survey that we just completed with IDG highlights the problem – 83% of respondents said it was critical or very important to close their gaps in assessing web applications for security issues. However, a typical enterprise:

  • has 804 internally developed web applications
  • plans to develop another 119 web applications with internal development teams over the next 12 months
  • tests only 38% of those web applications for security vulnerabilities

And these numbers don’t include all the web applications that are sourced from third-party software vendors or outsourced development shops.

The assessment methodologies for finding web application vulnerabilities aren’t a mystery – we all know about static and dynamic testing. It’s the scale at which web applications must be found, assessed for vulnerabilities and then remediated that makes this difficult for large enterprises.

Think about it: 119 applications over the next 365 days means a new web application is deployed on an enterprise web property roughly every 3 days.
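The back-of-the-envelope arithmetic behind those figures (all numbers come from the IDG survey cited above):

```python
# Back-of-the-envelope arithmetic using the survey figures quoted in this post.

existing_apps = 804
new_apps_per_year = 119
tested_fraction = 0.38

print(f"new app roughly every {365 / new_apps_per_year:.1f} days")
print(f"untested existing apps: about {existing_apps * (1 - tested_fraction):.0f}")
```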

Is it any wonder that web application breaches keep happening?

For Java: I Patch, Therefore I Am?

Oracle’s Java platform is so troubled that the question is whether to patch it or kill it off.

Oracle Corp. released its latest Critical Patch Update (CPU) on Tuesday of last week, with fixes for 113 vulnerabilities spread across its product portfolio, including 29 for Oracle’s Fusion Middleware and 20 for the troubled Java platform.

The release has prompted a chorus of entreaties to “patch now,” including those from the SANS Internet Storm Center, U.S. CERT and Brian Krebs. A surprising number of them, however, also held out the possibility of not patching Java and, instead, just not using it.

This isn’t loose talk. It wasn’t that long ago that the headlines were all about new, critical security holes discovered in Java. Exploits for those vulnerabilities were used in online ‘drive by download’ and ‘watering hole’ attacks aimed at high value targets, including employees at companies like Facebook, Apple and Microsoft. The advice back then was to simply turn Java off – and leave it off – when you browse the web.

“Oracle/Java is probably by now one of the most successful charities in the world,”

- Daniel Wesemann

The furor over Java’s vulnerability subsided – even if the attacks and patches didn’t. Eight of the 20 vulnerabilities fixed by Oracle in Java were rated 9.0 or higher on a severity scale of 1-10. One of them, CVE-2014-4227, rated a perfect “10.” All of the reported vulnerabilities could be exploited by a remote attacker without first authenticating (signing in) to the vulnerable system.

The difficulty with Java is that it is a technology that is integrated into so many devices and applications – web based and otherwise. Oracle boasts that Java runs on 97% of enterprise desktops and 3 billion mobile phones, as well as countless embedded devices, from “smart” TVs to Blu-ray Disc players.

That makes any exploitable vulnerability in Java worth its weight in gold for cyber criminals or nation-state backed hackers. A Java exploit is the key that will unlock just about every door on the Internet. The cost – to society – is large.

“Oracle/Java is probably by now one of the most successful charities in the world,” wrote Daniel Wesemann on the SANS Internet Storm Center blog. “It continues to do an outstanding job at enabling significant wealth transfer to support poor cyber criminals and their families.”

Java’s time may have come and gone. More than a few of the security experts calling attention to the latest CPU are asking out loud whether it isn’t time to ditch Java altogether. “Patch It or Pitch It” was Mr. Krebs’ headline – which aptly summed up the feelings of many security experts. Like the owners of an old junker, Java users may look at this latest CPU and ask themselves: “is it really worth the trouble to patch?”

Widely adopted programs tend to make for more lucrative mines.

Where does the blame lie? The truth is that technologies that are widely adopted and deployed almost always attract the attention of cyber criminals. ActiveX was popular back in the ‘dotcom’ era. It also became a favorite target of cyber criminals. Over time, that pushed developers and software publishers away from the platform and to alternatives… like Java.

Technologies like Java are so ubiquitous that it can be impossible for anyone – individual or business – to know whether a given product uses a vulnerable component until it’s too late.

But, as competitors like Microsoft have endeavored to make their software update and patching process transparent, Oracle has opted to keep its security process extremely opaque. The company’s monthly CPU releases are massive and stretch across scores of disparate products and platforms. Some vulnerabilities affect multiple products, making it hard to know what’s going on. Researchers who dig for details often come away scratching their heads.

For the latest patch, Ross Barrett, a security engineer at the firm Rapid7, points out that the top two patches for Oracle Database 12 fix an issue that Oracle patched in an earlier version of the same product a year ago. That would suggest that Oracle either failed to appreciate the reach of the vulnerability last year, or knew about it and chose to leave Oracle 12 customers unprotected. Either is troubling.

In response, Oracle management – including Chief Security Officer Mary Ann Davidson – is often combative rather than conciliatory. Ms. Davidson recently penned a derisive blog post, “Those that can’t do audit,” to cast doubt on the utility of code audits and to suggest that large companies like Oracle shouldn’t have to bother with third-party audits like the little guys. The message, on security: “trust us.”

As the critical vulnerabilities in Java, MySQL and Oracle’s other products mount, however, trust is getting hard to come by.

Four Steps to Successfully Implementing Security into a Continuous Development Shop

So you live in a continuous deployment shop and you have been told to inject security into the process. Are you afraid? Don’t be. When the world moved from waterfall to agile, did everything go smoothly? Of course not – you experienced setbacks and hiccups, just like everyone else. But, eventually you worked through the setbacks and lived to tell the tale. As with any new initiative, it will take time to mature. Take baby steps.

Step one: crawl.

Baseline the security of your application by using multiple testing methods. Static, dynamic and manual analysis will let you know exactly where you stand today. Understand that you may be overwhelmed with your results. You can’t fix it all at once, so don’t panic. At least you know what you have to work with. Integration with your SDLC tools is going to be your best friend. It will allow you to measure your progress over time and spot problematic trends early.
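A minimal sketch of that baselining step – assuming nothing about any particular tool’s export format – might simply merge findings from all three testing methods into one snapshot you can track over time:

```python
# Minimal sketch of the "crawl" step: merge findings from static, dynamic and
# manual testing into one baseline snapshot. Field names are illustrative,
# not any particular tool's export format.

from collections import Counter
from datetime import date

def baseline(static, dynamic, manual):
    all_findings = [dict(f, source=src)
                    for src, batch in (("static", static), ("dynamic", dynamic), ("manual", manual))
                    for f in batch]
    by_severity = Counter(f["severity"] for f in all_findings)
    return {"date": date.today().isoformat(),
            "total": len(all_findings),
            "by_severity": dict(by_severity),
            "findings": all_findings}

snapshot = baseline(
    static=[{"cwe": "CWE-89", "severity": "high"}],
    dynamic=[{"cwe": "CWE-79", "severity": "medium"}],
    manual=[{"cwe": "CWE-306", "severity": "high"}],
)
print(snapshot["total"], snapshot["by_severity"])
```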

Step two: stand.

Come up with a plan based on your baseline. What has to be fixed now? What won’t we fix? You didn’t get here in a day and you won’t be able to fix it in a day. Work with your security team to build your backlog. Prioritize, deprioritize, decompose, repeat. Now would be a great time to introduce a little education into the organization. Take a look at your flaw prevalence and priorities and train your developers. If you teach them secure coding practices, they will write more secure code the first time.
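As one illustrative way to turn that baseline into a backlog (the severity buckets and the “top training topics” heuristic are assumptions, not a prescribed process):

```python
# Minimal sketch of the "stand" step: triage baseline findings into buckets and
# use flaw prevalence to pick training topics. Buckets and ordering are
# assumptions for illustration.

from collections import Counter

SEVERITY_ORDER = {"very high": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings):
    fix_now  = [f for f in findings if SEVERITY_ORDER[f["severity"]] <= 1]
    backlog  = [f for f in findings if SEVERITY_ORDER[f["severity"]] == 2]
    wont_fix = [f for f in findings if SEVERITY_ORDER[f["severity"]] >= 3]
    training = Counter(f["cwe"] for f in findings).most_common(3)
    return fix_now, backlog, wont_fix, training

findings = [{"cwe": "CWE-89", "severity": "high"},
            {"cwe": "CWE-79", "severity": "medium"},
            {"cwe": "CWE-79", "severity": "low"}]
fix_now, backlog, wont_fix, training = triage(findings)
print(len(fix_now), "to fix now; train on:", training)
```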

Step three: walk.

Stop digging and put the shovels down. We know that we have problems to fix from the old code (security debt). Let’s make sure we don’t add to the pile. Now is the time to institute a security gate. No new code can be merged until it passes your security policy. We’re not talking about the entire application, just the new stuff. Don’t let insecure code come into the system. By finding and addressing the problems before check-ins, you won’t slow your downstream process. This is a good time to make sure your security auditing systems integrate with your software development lifecycle systems (JIRA, Jenkins, etc.). Integrating with these systems will make the processes more seamless.
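A minimal sketch of such a gate – checking only the findings introduced by a change set, so old security debt doesn’t block the merge but no new debt gets in – could look like this (the data shapes are illustrative):

```python
# Minimal sketch of the "walk" step: a pre-merge gate that looks only at
# findings introduced by the change set, so existing security debt does not
# block the merge but no new debt gets in. Data shapes are illustrative.

BLOCKING = frozenset({"very high", "high"})

def new_findings(branch_findings, baseline_findings):
    """Findings present on the branch but not already in the baseline."""
    known = {(f["cwe"], f["file"], f["line"]) for f in baseline_findings}
    return [f for f in branch_findings
            if (f["cwe"], f["file"], f["line"]) not in known]

def allow_merge(branch_findings, baseline_findings):
    introduced = new_findings(branch_findings, baseline_findings)
    blockers = [f for f in introduced if f["severity"] in BLOCKING]
    return len(blockers) == 0, blockers

ok, blockers = allow_merge(
    branch_findings=[{"cwe": "CWE-89", "file": "api.py", "line": 42, "severity": "high"}],
    baseline_findings=[],
)
print("merge allowed" if ok else f"merge blocked: {blockers}")
```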

Step four: run!

Now you have a backlog of prioritized work for your team to fix and you’re not allowing the problem to get worse. You’re constantly measuring your security posture and showing continuous improvement. As you pay down your security debt you will have more time for feature development and a team with great secure coding habits.

Integrating a new standard into a system that is already working can be intimidating. But following these four steps will make the task more manageable. Also, once security is integrated, it will become a normal part of the continuous development lifecycle and your software will be better for it.

Introduction, or How Securing the Supply Chain is like “Going Green”

Application security is, as any practitioner will tell you, a hard technical and business problem unlike any other. The best advice for successfully securing software is usually to avoid thinking about it like any other problem — software security testers are not like quality assurance professionals, and many security failures arise when developers think conventionally about use cases rather than abuse cases.

But just because application security is a distinct problem does not mean that we should fail to learn from other fields, when applicable. And one of the opportunities for learning is in what appears at first glance to be a doubly difficult problem: securing the software supply chain. Why is software supply chain security needed? The majority of businesses are not building every application they use; they are turning to third parties like outsourced and commercial software vendors. According to IDG, over 62% of an enterprise’s software portfolio was developed outside the enterprise.

Over 62% of an enterprise’s software portfolio is developed outside the enterprise.

How should these enterprises be thinking about security? Software supply chain security efforts have all the challenges of conventional app sec initiatives, combined with the contractual, legal, and organizational issues of motivating change across organizational boundaries.

But the consequences of ignoring supply chain issues in an application security program are momentous. Most applications are composed of first party code surrounding libraries and other code sourced from third parties — both commercial libraries and open source projects. Purchased applications deployed on the internet or the internal network may access sensitive customer or corporate data and must be evaluated and secured just like first party code, lest a thief steal data through an unlocked virtual door. And increasingly standards like PCI are holding enterprises responsible for driving security requirements into their suppliers.

So what are we to do? Fortunately, software security is not the only large, complex initiative that has implications on the supply chain. Software supply chain security initiatives can take inspiration from other supply chain transformation initiatives, including the rollout of RFID in the early 2000s by Walmart and others, and — particularly — the rise of “green” supply chain efforts.

In fact, software security bears close similarity to “green” efforts to reduce CO2 emissions and waste in the supply chain. Both “green” and security have significant societal benefits, but have historically been avoided in favor of projects more directly connected to revenue. Both have recently seen turns where customers have started to demand a higher standard of performance from companies. And both require coordination of efforts across the supply chain to be successful.

This series of blog posts will explore some simple principles for supply chain transformation that can be derived from efforts to implement “green” practices or to drive RFID adoption. The basic building blocks stem from research done into green efforts by the Wharton School of Business and published in 2012, and are supplemented with learnings from RFID. We’ll cover seven principles of supply chain transformation and show you how to apply them to your software supply chain initiative:

The Seven Habits of Highly Effective Third-Party Software Security Programs

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Collaborate to innovate
  4. Use suppliers as force multipliers
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay

I hope you enjoy the series and look forward to the discussion!

Is It Time For Customs To Inspect Software?

The Zombie Zero malware proves that sophisticated attackers are targeting the supply chain. Is it time to think about inspecting imported hardware and software?

The time for securing supply chain software is now.

If you want to import beef, eggs or chicken into the U.S., you need to get your cargo past inspectors from the U.S. Department of Agriculture. Not so hardware and software imported into the U.S. and sold to domestic corporations.

But a spate of stories about products shipping with malicious software raises the question: is it time for random audits to expose compromised supply chains?

Concerns about ‘certified, pre-pwned’ hardware and software are nothing new. In fact, they’ve permeated the board rooms of technology and defense firms, as well as the halls of power in Washington, D.C. for years.

The U.S. Congress conducted a high profile investigation of Chinese networking equipment maker ZTE in 2012 with the sole purpose of exploring links between the company and the People’s Liberation Army, and (unfounded) allegations that products sold by the company were pre-loaded with spyware.

Of course, now we know that such threats are real. And we know because documents leaked by Edward Snowden and released in March showed how the U.S. National Security Agency intercepts networking equipment exported by firms like Cisco and implants spyware and remote access tools on it, before sending it on its way. Presumably, the NSA wasn’t the first state intelligence agency to figure this out.

If backdoors pre-loaded on your Cisco switches and routers aren’t scary enough, this week the firm TrapX issued a report on a piece of malicious software they called “Zombie Zero.” TrapX claims to have found the malware installed on scanners used in shipping and logistics to track packages and other products. The scanners were manufactured in China and sold to companies globally. The factory that manufactured the devices is located close to the Lanxiang Vocational School, an academy that is believed to have played a role in sophisticated attacks on Google and other western technology firms dubbed “Aurora.” Traffic associated with a command and control botnet set up by Zombie Zero was also observed connecting to servers at the same facility – which is suggestive, but not proof, of the School’s involvement in the attack.

TrapX said that its analysis found that 16 of 64 scanners sold to a shipping and logistics firm it consulted with were infected. The Zombie Zero malware was programmed to exploit access to corporate wireless networks at the target firms to attack their finance and ERP systems.

Scanners outfitted with another variant of Zombie Zero were shipped to eight other firms, including what is described as a “major robotics” manufacturer, TrapX claims.

If accurate, TrapX’s report makes Zombie Zero the most flagrant example of compromised hardware being used in a targeted attack. It’s significant because it shows how factory-loaded malware on an embedded device (in this case: embedded XP) could be used to gain a foothold on the networks of a wide range of companies in a specific vertical.

Prior “malicious supply chain” stories haven’t had that kind of specificity. Dell warned about compromised PowerEdge motherboards back in 2010, but there was no indication that the compromised motherboards were directed to particular kinds of Dell customers. Recent news about Android smartphones pre-loaded with spyware and teapots with wireless “spy chips” seemed more indicative of an undifferentiated cyber criminal operation satisfied to cast a wide net.

Not so Zombie Zero, whose creators seemed intent both on compromising a particular type of firm (by virtue of the kind of device they used as their calling card) and on extracting a particular type of data from those firms – the hallmarks of a sophisticated “APT”-style actor.

There’s really no easy answer to this. Warning U.S. firms away from Chinese products is all well and good, but it’s also a strategy that won’t work, while punishing lots of innocent companies selling quality products. The truth is that any technology product you buy today is almost certain to contain components that were sourced in China. Any of those components could contain malicious software supplied by a compromised or unscrupulous downstream supplier. “Buy American” is even more pointless in the context of technology than it was in the automobile sector back in the 70s and 80s.

What’s to be done? Security conscious firms need to take much more interest in the provenance of the hardware and software they buy. Firms, like Apple, that are big enough to have leverage might consider random audits of equipment and firmware looking for compromises. They might also insist on reviewing the manufacturing facilities where devices are assembled to see what kinds of quality controls the manufacturer has over the software and hardware that is installed in their products.

Beyond that, the U.S. government – via U.S. Customs and Border Protection (and like agencies in other nations) – could take an interest in the contents and quality of IT products that are imported from China and other countries.

A system of random inspections and audits – akin to the inspections that are done for agricultural and consumer products – could raise the stakes for firms and governments intent on slipping compromised IT equipment and embedded devices into the U.S. market.

Truth, Fiction and a 20 Year Old Vulnerability

July 10, 2014
Filed under: application security

The impact of a 20-year-old flaw in the LZ4 compression code is still a matter of conjecture. The moral of the story isn’t.

I think we can all agree it’s not quite THIS critical.

What were you doing in 1996? You remember ’96, right? Jerry Maguire, Independence Day and Fargo were in the theaters. Everybody was dancing the “Macarena.”

In the technology world, 1996 was also a big year. Among other, less notable developments: two obscure graduate students, Larry Page and Sergey Brin, introduced a novel search engine called “Backrub.” Elsewhere, a software engineer named Markus F. X. J. Oberhumer published a novel compression algorithm dubbed LZO. Written in ANSI C, LZO offered what its author described as “pretty fast compression and *extremely* fast decompression.” LZO was particularly adept at compressing and decompressing raw image data such as photos and video.

Soon enough, folks found their way to LZO and used it. Today, LZ4 – based upon LZO – is a core component of the Linux kernel and is implemented on Samsung’s version of the Android mobile device operating system. It is also a part of the ZFS file system which, in turn, is bundled with open source platforms like FreeBSD. But the true reach of LZ4 is a matter for conjecture.

That’s a problem, because way back in 1996, Mr. Oberhumer managed to miss a pretty straightforward, but serious, integer overflow vulnerability in the LZ4 source code. As described by Kelly Jackson Higgins over at Dark Reading, the flaw could allow a remote attacker to carry out denial of service attacks against vulnerable devices or trigger remote code execution on those devices – running their own (malicious) code on the device. The integer overflow bug was discovered by security researcher Don A. Bailey during a code audit of LZ4.

Nearly twenty years later, that simple mistake is the source of a lot of heartbleed…err…heartburn as open source platforms, embedded device makers and other downstream consumers of LZ4 find themselves exposed.

Patches for the integer overflow bug were issued in recent days for both the Linux kernel and affected open-source media libraries. But there is concern that not everyone who uses LZ4 may be aware of their exposure to the flaw. And Mr. Bailey has speculated that some critical systems – including embedded devices used in automobiles or even aircraft – might be vulnerable. We really don’t know.

As is often the case in the security industry, however, there is some disagreement about the seriousness of the vulnerability and some chest thumping over Mr. Bailey’s decision to go public with his findings.

Writing on his blog, Yann Collett (Cyan4973), the developer of LZ4, has raised serious questions about the real impact of the vulnerability. While generally supporting the decision to patch the hole (and recommending patching for those exposed to it), Mr. Collett suggests that the LZ4 vulnerability is quite limited.

Specifically: Collett notes that to trigger the vulnerability, an attacker would need to create a special compressed block to overflow the 32-bit address space. To do that, the malicious compressed block would need to be in the neighborhood of 16 MB of data. That’s possible, theoretically, but not practical. Legacy LZ4 limits file formats to 8 MB blocks – maximum. “Any value larger than that just stops the decoding process,” he writes, and 8 MB is not enough to trigger a problem. A newer streaming format is even stricter, with a hard limit at 4 MB. “As a consequence, it’s not possible to exploit that vulnerability using the documented LZ4 file/streaming format,” he says. LZ4, Mr. Collett says, is no OpenSSL.

In response to Collett and others, Bailey wrote an even more detailed analysis of the LZ4 vulnerability and found that attackers actually wouldn’t be limited by the 8 MB or 4 MB block sizes. And, while all kinds of mitigating factors may exist, depending on the platform that LZ4 is running on, Bailey concludes that exploits could be written against current implementations of LZ4 and that block sizes of less than 4 MB could be malicious. While some modern platforms may have features that mitigate the risk, “this is the kind of critical arbitrary-write bug attackers look for when they have a corresponding memory information disclosure (read) that exposes addresses in memory.”
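To make the mechanics of the disagreement concrete, here is a deliberately simplified Python illustration – not the actual LZ4 or LZO code – of how adding an attacker-influenced length to a 32-bit offset without a bounds check can wrap that offset and move a write somewhere it was never meant to go:

```python
# Illustrative only -- not the actual LZ4/LZO implementation. It shows the
# general shape of the bug being debated: adding an attacker-influenced run
# length to a 32-bit offset without a bounds check can wrap the offset, so a
# "copy" destination lands below the intended output buffer.

MASK32 = 0xFFFFFFFF  # simulate 32-bit pointer arithmetic

def unsafe_advance(output_offset: int, run_length: int) -> int:
    # The missing check: run_length is trusted, so a huge value wraps the offset.
    return (output_offset + run_length) & MASK32

def safe_advance(output_offset: int, run_length: int, output_end: int) -> int:
    # The fix: verify the run fits in the remaining buffer before advancing.
    if run_length > output_end - output_offset:
        raise ValueError("malformed block: run exceeds output buffer")
    return output_offset + run_length

offset = 0xFFFF0000                              # decoder near the top of a 32-bit space
print(hex(unsafe_advance(offset, 0x00020000)))   # 0x10000 -- wrapped below the buffer
```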

While the LZ4 vulnerability debate has become an example of security industry “inside baseball,” there is (fortunately) a larger truth here that everyone can agree on. That larger truth is that we’re all a lot more reliant on software than we used to be. And, as that reliance has grown stronger, the interactions between software-powered devices in our environment have become more complex, and our grasp of what makes up the software we rely on has loosened. Veracode has written about this before – in relation to OpenSSL and other related topics.

It may be the case that the LZ4 vulnerability is a lot harder to exploit than we were led to believe. But nobody should take too much comfort in that when a casual audit of just one element of the Linux kernel uncovered a 20-year-old, remotely exploitable vulnerability. That discovery should make you wonder what else out there has escaped notice. That’s a scary question.
