Filed under: application security, Third-Party Software
The Zombie Zero malware proves that sophisticated attackers are targeting the supply chain. Is it time to think about inspecting imported hardware and software?
If you want to import beef, eggs or chicken into the U.S., you need to get your cargo past inspectors from the U.S. Department of Agriculture. Not so hardware and software imported into the U.S. and sold to domestic corporations.
But a spate of stories about products shipping with malicious software raises the question: is it time for random audits to expose compromised supply chains?
Concerns about ‘certified, pre-pwned’ hardware and software are nothing new. In fact, they’ve permeated the board rooms of technology and defense firms, as well as the halls of power in Washington, D.C. for years.
The U.S. Congress conducted a high-profile investigation of Chinese networking equipment maker ZTE in 2012 with the sole purpose of exploring links between the company and the People’s Liberation Army, as well as (unfounded) allegations that the company’s products were pre-loaded with spyware.
Of course, now we know that such threats are real. And we know because documents leaked by Edward Snowden and released in March showed how the U.S. National Security Agency intercepts networking equipment exported by firms like Cisco and implants spyware and remote access tools on it, before sending it on its way. Presumably, the NSA wasn’t the first state intelligence agency to figure this out.
If backdoors pre-loaded on your Cisco switches and routers aren’t scary enough, this week the firm TrapX issued a report on a piece of malicious software they called “Zombie Zero.” TrapX claims to have found the malware installed on scanners used in shipping and logistics to track packages and other products. The scanners were manufactured in China and sold to companies globally. The factory that manufactured the devices is located close to the Lanxiang Vocational School, an academy that is believed to have played a role in the sophisticated attacks on Google and other western technology firms dubbed “Aurora.” Traffic associated with a command and control botnet set up by Zombie Zero was also observed connecting to servers at the same facility – which is suggestive, but not proof, of the school’s involvement in the attack.
TrapX said that its analysis found that 16 of 64 scanners sold to a shipping and logistics firm it consulted with were infected. The Zombie Zero malware was programmed to exploit access to the corporate wireless networks of the target firms to attack their finance and ERP systems.
Scanners outfitted with another variant of Zombie Zero were shipped to eight other firms, including what is described as a “major robotics” manufacturer, TrapX claims.
If accurate, Zombie Zero is the most flagrant example yet of compromised hardware being used in a targeted attack. It’s significant because it shows how factory-loaded malware on an embedded device (in this case: embedded Windows XP) could be used to gain a foothold on the networks of a wide range of companies in a specific vertical.
Prior “malicious supply chain” stories haven’t had that kind of specificity. Dell warned about compromised PowerEdge motherboards back in 2010, but there was no indication that the compromised motherboards were directed to particular kinds of Dell customers. Recent news about Android smartphones pre-loaded with spyware and teapots with wireless “spy chips” seemed more indicative of an undifferentiated cyber criminal operation satisfied to cast a wide net.
Not so Zombie Zero, whose creators seemed intent both on compromising a particular type of firm (by virtue of the kind of device they used as their calling card) and on extracting a particular type of data from those firms – the hallmarks of a sophisticated “APT”-style actor.
There’s really no easy answer to this. Warning U.S. firms away from Chinese products is all well and good, but it’s a strategy that won’t work, and one that punishes plenty of innocent companies selling quality products. The truth is that any technology product you buy today is almost certain to contain components that were sourced in China. Any of those components could contain malicious software supplied by a compromised or unscrupulous downstream supplier. “Buy American” is even more pointless in the context of technology than it was in the automobile sector back in the ’70s and ’80s.
What’s to be done? Security conscious firms need to take much more interest in the provenance of the hardware and software they buy. Firms, like Apple, that are big enough to have leverage might consider random audits of equipment and firmware looking for compromises. They might also insist on reviewing the manufacturing facilities where devices are assembled to see what kinds of quality controls the manufacturer has over the software and hardware that is installed in their products.
Beyond that, the U.S. government, via U.S. Customs and Border Protection (and like agencies in other nations), could take an interest in the contents and quality of IT products imported from China and other countries.
A system of random inspections and audits – akin to the inspections that are done for agricultural and consumer products – could raise the stakes for firms and governments intent on slipping compromised IT equipment and embedded devices into the U.S. market.
The impact of a 20-year-old flaw in LZ4 is still a matter of conjecture. The moral of the story isn’t.
What were you doing in 1996? You remember ’96, right? Jerry Maguire, Independence Day and Fargo were in the theaters. Everybody was dancing the “Macarena.”
In the technology world, 1996 was also a big year. Among other, less notable developments: two obscure graduate students, Larry Page and Sergey Brin, introduced a novel search engine called “Backrub.” Elsewhere, a software engineer named Markus F. X. J. Oberhumer published a novel compression algorithm dubbed LZO. Written in ANSI C, LZO offered what its author described as “pretty fast compression and *extremely* fast decompression.” LZO was particularly adept at compressing and decompressing raw image data such as photos and video.
Soon enough, folks found their way to LZO and used it. Today, LZ4 – based upon LZO – is a core component of the Linux kernel and is implemented on Samsung’s version of the Android mobile device operating system. It is also a part of the ZFS file system which, in turn, is bundled with open source platforms like FreeBSD. But the true reach of LZ4 is a matter for conjecture.
That’s a problem because, way back in 1996, Mr. Oberhumer managed to miss a pretty straightforward but serious integer overflow vulnerability in the LZO source code – a flaw that LZ4 inherited. As described by Kelly Jackson Higgins over at Dark Reading, the flaw could allow a remote attacker to carry out denial of service attacks against vulnerable devices or trigger remote code execution on those devices – running their own (malicious) code on the device. The integer overflow bug was discovered by security researcher Don A. Bailey during a code audit of LZ4.
Twenty years later, that simple mistake is the source of a lot of heartbleed…err…heartburn as open source platforms, embedded device makers and other downstream consumers of LZ4 find themselves exposed.
Patches for the integer overflow bug were issued in recent days for both the Linux kernel and affected open-source media libraries. But there is concern that not everyone who uses LZ4 may be aware of their exposure to the flaw. And Mr. Bailey has speculated that some critical operating systems, including embedded devices used in automobiles or even aircraft, might be vulnerable. We really don’t know.
As is often the case in the security industry, however, there is some disagreement about the seriousness of the vulnerability and some chest thumping over Mr. Bailey’s decision to go public with his findings.
Writing on his blog, Yann Collet (Cyan4973), the author of LZ4, has raised serious questions about the real-world impact of the flaw. While generally supporting the decision to patch the hole (and recommending that those exposed to it do so), Mr. Collet suggests that the LZ4 vulnerability is quite limited.
Specifically, Collet notes that to trigger the vulnerability, an attacker would need to craft a special compressed block designed to overflow the 32-bit address space. To do that, the malicious compressed block would need to contain in the neighborhood of 16 MB of data. That’s theoretically possible, but not practical. The legacy LZ4 file format caps blocks at 8 MB, maximum. “Any value larger than that just stops the decoding process,” he writes, and 8 MB is not enough to trigger a problem. A newer streaming format is even stricter, with a hard limit of 4 MB. “As a consequence, it’s not possible to exploit that vulnerability using the documented LZ4 file/streaming format,” he says. LZ4, Mr. Collet says, is no OpenSSL.
In response to Collet and others, Bailey wrote an even more detailed analysis of the LZ4 vulnerability and argued that attackers actually wouldn’t be bound by the 8 MB or 4 MB limits. And, while all kinds of mitigating factors may exist depending on the platform LZ4 is running on, Bailey concludes that exploits could be written against current implementations of LZ4 and that blocks smaller than 4 MB could be malicious. While some modern platforms may have features that mitigate the risk, “this is the kind of critical arbitrary-write bug attackers look for when they have a corresponding memory information disclosure (read) that exposes addresses in memory.”
While the LZ4 vulnerability debate has become an example of security industry “inside baseball,” there is (fortunately) a larger truth here that everyone can agree on: we’re all a lot more reliant on software than we used to be. And, as that reliance has grown, the interactions between the software-powered devices in our environment have become more complex, and our grasp of what makes up the software we rely on has loosened. Veracode has written about this before, in relation to OpenSSL and other related topics.
It may be the case that the LZ4 vulnerability is a lot harder to exploit than we were led to believe. But nobody should take too much comfort in that when a casual audit of just one element of the Linux kernel uncovered a 20-year-old, remotely exploitable vulnerability. That discovery should make you wonder what else is out there that has escaped notice. That’s a scary question.
Filed under: application security, Application Security Metrics
This year I’m working with IDG to survey enterprises to understand their application portfolio, how it’s changing and what firms are doing to secure their application infrastructure.
The study found that, on average, enterprises expect to develop over 340 new applications in the next 12 months. As someone who has been working in and around the enterprise software industry for more years than I care to admit here, I find that number astounding. Enterprises really are turning into software companies.
Think about it – how many new applications did software vendors like Microsoft, Oracle, or SAP bring to market in the last 12 months? The number is probably in the hundreds, but you would expect that because they are software vendors. Every application sold is money in their pocket. The more software they make the more opportunities there are for them to increase their revenue and profits.
On average enterprises expect to develop over 340 new applications in 12 months.
So why are enterprises developing as many applications as software vendors? The answer is the same: the more software they make, the more opportunities there are for them to increase their revenue and profits. It may not be as short and direct a line between software development, revenue and profits as it is for software vendors, but the connection is there; otherwise, enterprises wouldn’t be doing it.
The problem is that all those applications represent both opportunities and risks for the enterprises developing them. How much risk? It’s hard to say without assessing them for vulnerabilities. However, most of those 340-plus new applications will never be assessed for security risks: the survey found that only 37% of enterprise-developed applications are assessed for security vulnerabilities.
Or look at it another way – enterprises are blindly choosing to operate in a hostile environment for 63% of the business opportunities represented by software. If it were me, I would rather take off the blindfold and see exactly what I’m getting into. I can only hope that enterprise executives start feeling the same way.
- Focus Shift: From the Critical Five Percent to the Entire Application Infrastructure
- Majority of Web Apps Not Assessed for Critical Security Vulnerabilities
- Innovate securely or else!
Later this week I’ll be joining IDG Market Research Manager Perry Laberis for a webinar to discuss a study on how application infrastructures are changing and how security teams can keep up with those changes to manage enterprise risk.
At Veracode this is a very important discussion, because we know that applications are the lifeblood of every enterprise. The last time we did a survey like this, we found that the focus had shifted from securing only mission-critical applications to building a broader and better understanding of the entire application infrastructure. Discussions with our customers showed that they were increasingly concerned about their entire application infrastructure.
They are concerned because attackers are using well known vulnerabilities in low priority applications as a stepping stone to get access to more valuable data. For example, we’ve known how to find, fix and prevent SQL injection vulnerabilities for 20+ years. Yet it still shows up — and is exploitable — in modern web applications.
It’s still showing up in enterprise application infrastructures because most enterprise development teams are not required to find and fix security vulnerabilities. The IDG study found that more than sixty percent of internally developed applications are not assessed for critical security vulnerabilities such as SQL Injection.
So there is this gap between what people worried about securing two years ago and what they are worried about now.
The fundamental question our customers are asking us is: how can they go further, faster? They also ask us a lot of questions about what other people are doing:
- What baseline should I be comparing myself to? Tell me what my peer group is doing and who is doing appsec best.
- What does their current coverage look like?
- How fast is their application infrastructure growing?
- How much are they spending to get that coverage, and what are they spending it on?
- How do my peers drive up adoption of secure development practices across all of their development teams?
- What are the critical factors for success and how do I benchmark my progress?
That’s a broad range of topics – so we decided it would be best to get systematic about getting answers to these types of questions.
The research results Perry and I will be discussing are the beginning of a whole series of efforts to deliver answers for our customers. I hope you find the insights valuable and that you will give us suggestions on how to make them even more relevant to your particular challenges.
Our corporate “Monster In Your Corner” theme really landed with me — when was the last time you heard the EVP of Development say something like that about a marketing campaign?
The “Monster in your corner” means you have the full force of Veracode’s scalable cloud-based service in your corner — backed by our world-class security experts — to help you reduce application-layer risk.
The stakes are very high for executives like me. We either deliver innovative software on a timescale of relevance, securely, or we’re toast. Harsh, but true. What’s more, the “securely” part is, as we say in New England, “Wicked Hahd,” particularly if you try to go it alone. So, I feel like I need a monster in my corner.
Innovate securely or else!
Look, my customers are probably very similar to yours. They want new offerings and product enhancements fast — we’re a SaaS player so if we fail to meet their expectations, they shut us down — no renewal, no expansion, no reference — no IPO! Our team leverages Agile, DevOps, and AWS to meet customer expectations — and we leverage good security hygiene across the SDLC plus Veracode’s cloud-based service to do it rapidly and securely. Shameless plug alert — check out previous content by Pete Chestna and Chris Eng to learn how Veracode implements secure agile in our own development environment.
Application security is “Wicked Hahd,” and going it alone sets your Dev and Security teams up for failure. Security isn’t just another non-functional requirement like quality or performance. Not that quality and performance aren’t important or challenging in their own right, but neither involves planning for malicious intent in the face of focused cyber-attackers, who don’t need to be right very often to cause significant harm to your enterprise. As a result, it’s not enough to ask a developer to get more knowledgeable about writing secure code or to train them on a simple scanning tool. Better development security hygiene alone is no longer enough, given today’s AppSec threat landscape; it’s the equivalent of bringing a pen knife to a gun fight. So, I like the “Monster in Your Corner” theme because it suggests that those of us leading Dev organizations (and our CISO counterparts) need help (no, not psychological help, although there are days…) from experts on implementing enterprise-wide governance programs to reduce risk across web, mobile, legacy and third-party applications.
The AppSec threat landscape has evolved to a point where the only way to set up your Dev team for success (you know, deliver timely innovation without sacrificing security) is by having a Monster in Your Corner. Honestly, this sounds so corny that I can’t believe I wrote it, but it’s true. Look, it’s fair and reasonable to ask my development team to develop software with secure coding practices in mind, and to incorporate corporate security policy into “doneness” criteria, all while going at a breakneck, Agile-at-scale pace. That said, it’s irresponsible not to give them access to a powerful, centralized AppSec platform with on-demand AppSec expertise to help level the ridiculously disproportionate playing field they’re dealing with.
- Webinar: Secure Agile Through Automated Toolchains: How Veracode R&D Does It
- Webinar: Building Security Into the Agile SDLC: View from the Trenches
- Webinar: A Pragmatic Approach to Benchmarking Application Security
Filed under: ALL THINGS SECURITY, application security, Third-Party Software
With data breaches through third-party applications lighting up news headlines left and right, the scrutiny on cohesion between software vendors and their customers is at an all-time high. And it should be, because, as we noted in our State of Software Security Supplement Report, 90% of third-party code does not comply with enterprise security standards such as the OWASP Top 10.
As a result of the large and growing footprint of third-party software in the enterprise, regulatory bodies such as the OCC and industry organizations such as FS-ISAC, OWASP and the PCI Security Standards Council are now placing increased focus on controls required to mitigate the risks introduced by third-party software.
That’s why the next question in our Future of Application Security series is:
What’s the best way to work with vendors and suppliers on application security?
Watch Our Other Video Surveys
- Video 1: When will the number of data breach incidents per year finally begin to fall?
- Video 2: How can security professionals promote growth and innovation at their organizations?
- Video 3: What methods are best to involve software development teams in application security?
- Video 4: What’s the best way to work with vendors and suppliers on application security?
Filed under: application security, Binary Analysis, SDLC
1. Coverage, both within applications you build and within your entire application portfolio
One of the primary benefits of binary static analysis is that it allows you to inspect all the code in your application. Mobile apps especially have binary components, but web apps, legacy back-office and desktop apps do too. You don’t want to analyze only the code you compile from source, but also the code you link in from components. And because binary analysis doesn’t require access to source code, vendors can feel comfortable allowing an independent code-level analysis of the software you are purchasing through procurement. This enables you to do code-level security testing of the COTS applications in your organization’s portfolio. Binary analysis lets you cover all of the code running in your organization.
2. Tight integration into the build system and continuous integration (CI) environment
If you integrate binary static analysis into your CI environment, you can achieve 100% automation, with no need for manual (developer) steps. The build process can run the binary analysis by calling an API, and results can be automatically pushed into a defect ticketing system, also through an API. Code analysis is now transparent and inescapable: developers see security defects in their normal defect queue, and they can fix security flaws without performing any configuration or testing, saving valuable developer time.
3. Contextual analysis
Binary static analysis analyzes your code along with all the other components of the application, within the context of the platform it was built for. It can follow tainted source data through the complete data flow to a risky sink function. Partial analysis of pieces of a program misses this context and will be less accurate, producing both false positives and false negatives. Any security expert will tell you context is extremely important: a section of code can be rendered insecure or secure by the code it is called from or the code it calls into.
With a complete program you can perform Software Composition Analysis (SCA) to identify components that have known vulnerabilities in them. A9-Using Components with Known Vulnerabilities is one of the OWASP Top 10 Risks so you want to make sure you can analyze the entire program. Veracode has built SCA into the binary static analysis process.
4. Higher fidelity of analysis
Some languages, like C and C++, give the compiler latitude to generate different machine code from the same source. Source code analysis is blind to decisions made by the compiler. There are documented cases of both GCC and the Microsoft C/C++ compiler removing security checks and memory-clearing operations, opening up security holes. MITRE’s CWE has catalogued this class of vulnerability as CWE-14: Compiler Removal of Code to Clear Buffers. The paper “WYSINWYX: What You See Is Not What You eXecute” by Gogul Balakrishnan and Thomas Reps describes how “there can be a mismatch between what a programmer intends and what is actually executed on the processor.”
More on binary static analysis
- Download our updated Binary Static Analysis Fact Sheet
- Lessons of Binary Static Analysis: Presentation by Christien Rioux at SOURCE Boston 2012
- Bytecode Analysis is not the Same as Binary Analysis
Filed under: application security, Software Development
We’re back with another question for security pros around the world. This video is part of our Future of Application Security series where we asked a group of appsec professionals in attendance at RSA Conference 2014 their thoughts around some of the biggest industry topics. Check out the video and if you have an opinion, we want to hear it!
Secure software development remains one of the most challenging obstacles many enterprises face today. With developers focused on feature deadlines and time in short supply, security is all too often an afterthought.
What methods are best to involve software development teams in application security?
Sound off in the comments or your social media platform of choice, and use the hashtag #FutureofAppSec so we can easily share your thoughts.
Watch Our Other Video Surveys
- Video 1: When will the number of data breach incidents per year finally begin to fall?
- Video 2: How can security professionals promote growth and innovation at their organizations?
- Video 3: What methods are best to involve software development teams in application security?
Filed under: application security, Compliance, Customer Success
A report released in the UK this week nicely highlighted the link between software security and data protection, a very hot topic on this side of the pond in the midst of EU regulation reform and post-PRISM privacy concerns. The Information Commissioner’s Office (ICO), the UK’s independent regulatory office dealing with data protection and data privacy, released a report on the most common security weaknesses found during its investigations of data breaches.
Those looking for tales of sophisticated cyber-attacks and industry buzz-phrases such as Advanced Persistent Threat will be disappointed. The ICO report shows that there are still too many businesses struggling with the basics, including failing to apply security updates to software, inadequate password storage and SQL injection. Organisations need to be responsible custodians of customer data to succeed, and failing to adequately protect that data through ineffective IT security will leave you at risk of fines from regulatory bodies across the world.
Earlier this year the British Pregnancy Advice Service was fined by the ICO after the contact details of 10,000 people were exposed. The ICO found a number of security failings, including the fact that the organisation had neglected to carry out security testing and was therefore ignorant of its website’s vulnerabilities. Those vulnerabilities enabled an attacker to access the personal data of people who had requested a call-back about having an abortion, and the attacker threatened to post that data online. Think for a moment about the potential personal impact on those individuals if their names had been published on the internet for inquiring about an abortion, and you’ll see why the ICO took this incident so seriously, and how important data protection can be.
The ICO report features eight main areas which have led to data breaches, and I will not attempt to cover them all in this short blog post, but one area that naturally jumped out at me was SQL injection. The ICO points out that this method of attack is particularly relevant to data protection because it uses vulnerabilities in publicly-available websites to access a database, which is likely to contain personal information. As such, SQL injection carries a high risk of compromising significant amounts of personal data, and should therefore be a high priority for those concerned with data protection.
The ICO’s report provides clear, digestible advice for dealing with the prevention, detection and remediation of SQL injection. For externally developed applications this includes keeping software up to date; for internally developed applications the focus is on embedding security into the software development lifecycle through developer training and code review. For both internally and externally developed applications the ICO also suggests that organisations consider automated vulnerability assessments and penetration testing.
Also featured in the ICO report are the serious security implications of failing to decommission software and services which are no longer being used and are therefore not being properly maintained. The example given is that of an organisation which fails to properly take down a website connected to a back-end database that collects and processes personal data. Countless times with Veracode’s Discovery solution (which identifies all public-facing web applications through an automated scan), we have seen a huge difference between the number of applications an organisation thinks it has and the number that actually exist. Identifying old applications that have not been properly decommissioned can be a quick win: it reduces the attack surface, reduces the risk of breaches and of non-compliance with data protection regulations, and saves money by cutting down on unnecessary hosting fees.
So what’s the message off the back of this report from the ICO? Get the basic stuff right. In a world of increasing investment in IT security and a proliferation of competing solutions for IT teams to consider, my advice is this: before looking at the newest and shiniest solutions which promise to solve world hunger, identify a scalable and cost-effective way to eradicate well-known security weaknesses, like SQL injection, to significantly reduce the risk of a data breach.
Application security testing is finally mainstream, after years of effort. Whether it’s compliance-driven or a result of the increasing realization that information security is about a lot more than just firewalls, application security testing is happening in most organizations. Here at Veracode, we test thousands of apps a year – and that number is only growing.
All of this testing is great! It’s bringing awareness to security flaws that may have otherwise lived in the wild until being exploited. However, it has surfaced a larger, more challenging issue – what do you do after the test is complete and you have your results?
“Fix them!” is the obvious answer – but it isn’t always that simple. Let’s talk about a few of the things that get in the way:
- Time Budgets – This is the number one issue that comes up in my experience – security testing is performed too late in the release cycle, and the product managers have not budgeted adequate time for the test and the remediation activities
- Governance – The corporate security team may have the mandate to perform security testing – but they don’t have the mandate to stop a release of vulnerable code
- Financial Budgets – The security testing budget may have only included the security test itself – not any remediation assistance or re-testing
- Training – Developers try to fix security flaws, but are unable to do so before releasing to production
- Poor Results Quality – The security tester or platform may have found security flaws but not clearly explained how to replicate them – developers may not even know how to replicate the test to check whether their fix works
None of these are exactly technical problems – these are management issues. IT leadership and security and development teams can take a few strategies to ensure that application security testing leads to effective remediation:
Budget Security Testing and Remediation into Your SDLC
- Whether Agile or Waterfall – security testing needs to be an explicit part of the release schedule so that expectations around testing and remediation are set properly at the beginning of a project
Streamline the Testing and Re-testing Process
- If you’re using an automated platform (like Veracode) – allow the development teams to submit their own tests and to re-test them without needing to involve outside teams
- If working with manual penetration testers – coordinate closely to ensure that re-testing is possible with your schedule requirements – and make re-testing part of the pen test project budget
Invest in Developer Training
- Train your development teams on secure coding practices!
- Having a “security focal” developer is a model that works well – a senior developer who can answer other team members’ security questions
Empower Security Teams
- Give information security teams the final call if an application can be released
- Security teams aren’t “getting in the way” – they’re trying to protect users and the company brand
Veracode helps support these activities in several ways. Our self-service testing platform makes it simple for developers to rescan applications. Additionally, our APIs can integrate with build platforms, simplifying the testing process even further. Our security consulting team is available to all customers to provide technical remediation support for flaws identified in the testing process. And our program management team helps to build effective application security testing processes.
Application security testing without a clear remediation plan can be, in some ways, worse than no testing at all. Testing without a remediation plan shows that the organization tried to find security flaws but didn’t follow the process all the way through. That is the type of mistake that can lead to major finger pointing, and potential liability, in the case of a security breach. And while each organization is different, what consistently works in most cases is to empower both the security and development teams to take charge of their security testing processes, with the clear goal of fixing the identified flaws. Nobody wants to release flawed code, but sometimes teams are pressured to do just that. Making security testing a seamless part of the development cycle will improve the overall security of the business, leading to happier customers, investors, and – developers!