Dispelling the “What Mobile Security Threat?” Myth

August 19, 2014
Filed under: Mobile 

Post 1 of 6: Dispelling Mobile App Security Myths – Myth #1

This is post one in a series on Mobile Application Security.


Mobile applications are everywhere. The growth of enterprise mobile apps in the past few years has been absolutely staggering. Forrester Research reports that 23 percent of the workforce has downloaded 11 or more apps (paid or free) to the smartphone they use for work, while 16 percent have installed that many apps to their work tablets. Across the board, up to 40 percent of workers admit to adding 10 or more apps to their work devices.

Some mobile workers use two or three different devices for work. With an average of 50+ apps installed on a typical mobile device, the potential attack surface from untested software grows dramatically for the average enterprise. The reality is that hundreds of applications per user come into close proximity with enterprise data stored on, or accessed through, approved BYOD devices. Any one of them could be a gateway to a data breach.

I wish that most enterprises were attacking the reality of this problem head on. But they’re not. Instead, a bunch of myths about mobile security – specifically mobile app security – have taken hold. Six myths, to be exact.

Why do these myths exist? We perpetuate them primarily because they are comforting and make us feel better. The problem with a myth is that ultimately, reality gets in the way. The best way to shatter myths is with empirical evidence to the contrary.

Let’s examine these six myths one by one and discuss how best to dispel them at your organization.

Myth #1: “What mobile security threat?”

Like the proverbial ostrich with its head in the sand, purveyors of this myth point to the lack of media coverage of major mobile data breaches as proof that the problem doesn't exist. The fact is, nearly half of companies that permit Bring-Your-Own-Device (BYOD) have experienced a breach as a result of an employee-owned device; they're just not talking about it.

Six out of ten malware analysts at U.S. enterprises admit having investigated or addressed a data breach that was never disclosed by their company. This should surprise no one. The majority of companies still have no formal BYOD policy, and one-third have no application security program of any kind. This means that the software they are developing, mobile or otherwise, is at a higher risk of containing known security vulnerabilities.

Secure software development practices are still not as widespread as they should be. Of the mobile apps that internal teams produce, more than two-thirds of those first submitted to Veracode for vulnerability analysis failed to comply with the enterprise's own policies or with industry standards such as the OWASP Top 10. Flaws in in-house apps often involve insecure data storage, broken cryptography, weak input validation, unprotected transport layers or weak server-side controls. While most mobile app flaws are easily remediated and most apps pass their next inspection, the high initial failure rate we've seen proves that CISOs have good reason to be concerned about threats to their mobile ecosystem.

The magnitude of the mobile app security threat is compounded not just by the sheer number of devices and supposedly safe public apps your employees are consuming, but also by the ever-increasing volume and sophistication of risky and malicious apps.

In a webinar I recently hosted with Tyler Shields, senior security and risk analyst at Forrester, he revealed that a clear majority of enterprises are now concerned by the drastic growth of mobile malware... with good reason. It has been on an explosive trajectory over the last few years, especially on the Android platform. Juniper Networks' latest Mobile Threats Report calculated that the number of malicious apps grew an astounding 614 percent from 2012 to 2013. These apps exhibit risky behaviors such as accessing files or logs, monitoring email or calls, sharing contacts or location, installing other software, and even rooting the device.

Infected apps and malware executables find their way onto users' mobile devices in any number of ways. Risky user behaviors include downloading untrusted or unverified apps, allowing a family member to use a company-owned device, clicking on a malicious link in a phishing email, even visiting adult websites.

Once installed, these apps get very close to enterprise data, especially if the device doesn’t use an MDM to enforce policies to prohibit apps that pose a risk. On an unprotected device, enterprise data can be accessed, intermingled, duplicated and even moved to the cloud.

Let’s dispel this myth. The mobile security threat is real, and growing.

In my next post, we’ll continue to break these six myths around mobile application security, exposing the realities confronting the enterprise mobile ecosystem.

Use Software Suppliers as Force Multipliers

August 14, 2014
Filed under: Third-Party Software 
No, no. Not this type of force.

One of the most alarming facts about modern software, given how insecure most of it is, is the degree to which it is composed of many other software components of varying origin and unknown security. Almost every enterprise software portfolio contains internally developed, purchased, outsourced and open source software; and almost every application in a portfolio contains code of multiple origins as well.

This is one of the things that makes a purely source-code-based security analysis of software, by definition, incomplete: if you can't scan the components for which you don't have source, you don't have a complete picture of the software risk.

But worse than the problem of finding the flaws is the problem of getting them fixed. If you learn that your supplier has an issue in their code, you may be able to hold them accountable for a fix, but if the issue is actually in fourth party code that they use in their application, you are reliant on their ability to manage their own software supply chain to get a fix.

One level of risk removed from the software itself, the supplier may use purchased software that puts the quality of their own software, and therefore yours, at risk. A real-world example from a few years ago was the compromise of the Apache project's source control credentials via a cross-site scripting vulnerability in its local copy of Atlassian's JIRA software. Though the break-in was caught, Apache's software could have been compromised as a result of this hack.

This is where securing the software supply chain starts to seem like an intractable problem. Even if you can get security attestations about the quality of the vendor’s software, what about their internal systems and processes that might put you at equal risk?

Here, as before in this series, other supply chain transformation efforts suggest a solution: use the supplier as a force multiplier. Specifically, require the supplier to hold their supply chain to the same standards that you hold them to. An example (cited in the Wharton article "Managing Green Supply Chains") is IBM's Social & Environmental Management Systems program, which holds its suppliers responsible for achieving measurable performance against stated environmental goals. IBM's program requires that its suppliers publicly disclose their metrics and results, and "cascade" the program to any suppliers whose work is material to IBM's business. The result: a rapid transformation of the compliance level of the whole supply chain.

This approach of cascading compliance requirements is in force in other efforts, such as generation of environmental bill of materials impact information (BOMCheck), corporate responsibility initiatives, and material data systems reporting requirements at Volvo. Indeed, organizational research suggests that cascading performance factors and associated goals to the supply chain is required for effective supply chain management.

Given the sensitive nature of the data protected by software and the complex nature of the software supply chain, cascading software supply chain security program requirements to major suppliers may be the only way to ensure that the enterprise is completely protected. The good news is that it need not be an uphill struggle. The more enterprises require secure software, the more vendors will read the writing on the wall and start to understand that security is a market requirement.

The Seven Habits of Highly Effective Third-Party Software Security Programs

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Collaborate to innovate
  4. Use suppliers as force multipliers
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay

Stop Freaking Out About Facebook Messenger

Facebook recently announced that mobile chat functionality would soon require users to install Facebook Messenger. Fueled by the media, many people overreacted to the permissions that Messenger requests before taking the time to understand the true privacy implications.

In a nutshell, Messenger is hardly an outlier relative to the other social media apps on your phone.

Why the uproar, then? In part, people love to pick on Facebook because of their past privacy UI transgressions. They’ve deserved much of that. But it’s a little crazy that there’s such an incendiary reaction to the privacy implications of a mobile app that, permissions-wise, isn’t that different from the multitude of social apps people happily download without a second thought.

Still skeptical? We (and by “we” I mean Andrew Reiter) made a list of the Android permissions requested by the latest Facebook Messenger app. Then we checked the remaining 49 of the top 50 social apps in the Google Play store to see how many of those requested the same permissions. To nobody’s surprise whatsoever, they are all pretty greedy.


If it’s not obvious how to read this chart, here’s an example: 67% of the other popular social apps also require the READ_CONTACTS permission. 47% of them require the CAMERA permission. And so on. Again, this shouldn’t surprise anybody. Mobile apps need these permissions if you want them to function properly. Messenger is a feature-packed app; some of the others may not be. Asking for all those permissions doesn’t necessarily mean the access will be abused.
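The arithmetic behind that chart is simple to reproduce. Here is a minimal sketch in Python, with invented app names and permission sets standing in for the real top-50 data:

```python
# Illustrative recreation of the chart's math: for each permission a
# reference app requests, what fraction of the other apps also request it.
# All names and permission sets below are made up for demonstration.
apps = {
    "app_a": {"READ_CONTACTS", "CAMERA", "RECORD_AUDIO"},
    "app_b": {"READ_CONTACTS", "ACCESS_FINE_LOCATION"},
    "app_c": {"READ_CONTACTS", "CAMERA"},
}

messenger_perms = {"READ_CONTACTS", "CAMERA", "RECORD_AUDIO",
                   "ACCESS_FINE_LOCATION"}

def overlap_percentages(reference, others):
    """Percentage of the other apps requesting each reference permission."""
    return {
        perm: round(100 * sum(perm in perms for perms in others.values())
                    / len(others))
        for perm in sorted(reference)
    }

print(overlap_percentages(messenger_perms, apps))
```

With the invented data above, READ_CONTACTS comes out at 100% and CAMERA at 67%, mirroring the kind of percentages in the chart.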

We didn’t do the meta-analysis to determine how many of those permissions were requested by first-party code vs. third-party ad libraries. Ad libraries are old news at this point, and it kind of doesn’t matter who’s asking for permission as long as you’re granting it.

So stop freaking out… at least until there is something to freak out about.

5 Things You Can Do With the Veracode API

When you use the Veracode API you get economies of scale through automation. One customer uploaded and scanned 100 applications concurrently over a weekend. Another scheduled monthly recurring scans. "Application programming interface" (API) is more than jargon: it is the industrial revolution (automation) meets the information age (your application security intelligence). Here are five ways you can wield that power.

You make security testing invisible to developers

This is not to say developers are excluded from security goals. I mean the process is invisible. Imagine writing code and committing it to the build server to trigger a scan. We call this pattern “Upload and Scan” and use it in-house for our own development. See the Agile Integration SDK for more details.

You look beyond critical applications to the entire application infrastructure

Web security scans can be launched against your entire application infrastructure to quickly identify the "low-hanging fruit." This lets you cover everything while focusing remediation on the most severe issues. Use the API to schedule scans at whatever frequency suits you: weekly, monthly or quarterly. Scan many applications regularly and review only the results that exceed your risk appetite.
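Driven through the API, that schedule can live in any ordinary scheduler. A hypothetical crontab sketch follows; the wrapper script and its flags are assumptions for illustration, not a real Veracode tool:

```
# Weekly scan (Sunday 02:00) for internet-facing apps,
# monthly (1st of the month, 03:00) for everything else.
0 2 * * 0   /opt/security/start-scan.sh --group external
0 3 1 * *   /opt/security/start-scan.sh --group internal
```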

You gain flexibility managing your security initiatives

Why not delegate the administration of your security platform to the department that manages your IT? The Veracode “Admin API” makes it simple to perform common administrative tasks in bulk. You can create a standard operating procedure to create 100 application profiles or enroll 100 developers. And you can integrate your identity and access management (IAM) system for user management. The result is an elastic security program that complies with your change control procedures. The benefit is less time spent on administrative tasks by the security team.

You export your data when you need it in other systems

The Veracode "Results API" makes it easy to get your data in the format you need. Feed your application results into a governance dashboard, a defect-tracking system or a custom Python application. Let people choose the format of their results: PDF reports for some, XML for others, and results right inside the IDE for the rest.
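Whatever export path you choose, downstream filtering is straightforward. The sketch below parses a made-up results document and keeps flaws above a severity threshold; the XML shape here is purely illustrative, not the actual Veracode report schema:

```python
import xml.etree.ElementTree as ET

# A made-up results document; the real Veracode report schema differs.
sample_report = """
<detailedreport app_name="example-app">
  <flaw issueid="1" severity="5" cweid="89"  categoryname="SQL Injection"/>
  <flaw issueid="2" severity="3" cweid="79"  categoryname="Cross-Site Scripting"/>
  <flaw issueid="3" severity="2" cweid="327" categoryname="Weak Cryptography"/>
</detailedreport>
"""

def flaws_at_or_above(xml_text, min_severity):
    """Filter flaws to those meeting a severity threshold,
    ready to feed into a dashboard or defect tracker."""
    root = ET.fromstring(xml_text)
    return [
        (f.get("issueid"), f.get("categoryname"))
        for f in root.iter("flaw")
        if int(f.get("severity")) >= min_severity
    ]

print(flaws_at_or_above(sample_report, 3))
# [('1', 'SQL Injection'), ('2', 'Cross-Site Scripting')]
```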

You leverage application security as a selling point

The Veracode Vendor Application Security Testing (VAST) program has APIs for automating vendor and enterprise tasks. I predict more customers will use the VAST APIs, especially as more software suppliers field questions from their customers about the security of their products. Use the VAST API to retrieve the shared Veracode results of your software vendors.

Anything that can be accomplished through the User Interface (UI) can be done through the application programming interface (API). These are a few examples. While automation alone does not solve every problem, it can be a distinctive element of a security program when combined with strong program management. Veracode has deep API expertise and can help you get started using our existing tools or building a custom integration solution for your environment.

The Rise of Application Security Requirements and What to Do About Them


As an engineering manager, I am challenged to keep pace with ever-expanding expectations for non-functional software requirements. One requirement, application security, has become increasingly critical in recent years, posing new challenges for software engineering teams.

In what manner has security emerged as an application requirement? Are software teams equipped to respond? What can engineering managers do to ensure their teams build secure software applications?

In the ’90s, security was not a visible software requirement. During this time I worked with a team developing an innovative web content management system. We focused on scalability and performance, ensuring that dynamically generated web pages rendered quickly and scaled as the web audience grew. I don’t remember any security requirements nor any conversations about application security. Scary, but true! Our system was deployed by early-adopter media companies racing to deliver real-time online content. IT teams deploying our system may have considered security, but if they did, they focused on infrastructure and didn’t address security with us, their software vendor.

Ten years later, application security requirements began emerging in a limited way, focusing on compliance and process. At the time, I was working at a startup, developing software for financial institutions to uncover fraud through analysis of sensitive financial, customer and employee data. We routinely responded to requests for proposal (RFPs) with security questions about the controls to prevent my company’s employees from stealing data. We described our rigorous release, deployment and services processes. Without much ado, and without changes to our development process, our team simply “checked the box” for security.

A few years later the stakes became higher. New questions began showing up in RFPs: “How does your software architecture support security?” “How do your engineering practices enforce security?” And the most difficult to answer: “Provide independent attestation that your software is secure.”

I faced a sobering realization. The security of our software relied entirely on the technical acumen of our engineering leads (which fortunately was strong), and was not supported in a formal way in the engineering process. Even worse, I was starting from scratch to learn security basics. I needed help, and fast!

Engineering leads at small independent software vendors (ISVs), such as NSFOCUS and Questionmark, face this challenge routinely. Where should they start? What concrete steps can they take to secure their code and establish a process for application security?

Pete Chestna recently posted “Four Steps to Successfully Implementing Security into a Continuous Development Shop.” His approach has worked for us at Veracode and translates well to small and large engineering teams:

  • Learn, hire or consult with experts to understand the threat landscape for your software. Develop an application-security policy aligned with your risk profile.
  • Baseline the security of your application. Review and prioritize the issues according to your policy.
  • Educate your developers on security fundamentals and assign them to remediate issues.
  • Update your software development life cycle (SDLC) to embed security practices so that new software is developed securely.

You will need budget and time to accomplish this. Consult with experts or engage with security services such as Veracode to benefit from their experience and expertise.

Don’t try to wing it. The stakes are too high.

Coming to a computer near you, SQL: The Sequel

August 8, 2014

It might sound like a bad movie, but it's playing out in real life: despite what seems like an endless stream of hacks using SQL injection, SQLi-related breaches keep turning up like a bad penny.


Most recently, Hold Security reported that it had discovered a breach by a Russian hacker ring. While details of this series of breaches are still surfacing, it is time for enterprises to start taking web perimeter security just as seriously as network security.

Vulnerabilities like SQL injection are pervasive in web applications, yet most enterprises aren't aware that their web perimeter is putting their organization at risk. This is because enterprises typically don't know how many web applications they have in their domain. When working with an organization to reduce web application perimeter risk, we regularly find 40% more websites than customers provide as an input range. Couple this with the Verizon Data Breach Report findings that web application vulnerabilities are the number-one cause of data breaches, and that 80 percent of web application breaches in the retail industry exploit SQL injection vulnerabilities, and you have a recipe for disaster.

Without visibility into the entire web perimeter, enterprises are leaving thousands of applications vulnerable and creating a long-term security threat, as cyber-criminals are constantly scanning the Internet looking for vulnerabilities like SQL injection. Given the large number of breaches caused by SQL injection and other web application vulnerabilities, we are getting to the point where it is reckless to assume that because your critical websites are secure, your risk is appropriately mitigated.
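It is worth seeing how little it takes to exploit, and to fix, this class of flaw. A minimal sqlite3 sketch, with an invented table and attacker input:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "x' OR '1'='1"

# Vulnerable pattern: user input concatenated into the statement.
# The injected quote turns the WHERE clause into a tautology and
# returns every row in the table.
leaky = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(leaky)   # [('alice',), ('bob',)]

# Safe pattern: a parameterized query treats the input as data only,
# so no user named "x' OR '1'='1" is found.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)    # []
```

The fix is one line: bind the value as a parameter instead of splicing it into the SQL string.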

So what can enterprises do? Here are a few steps enterprises can take to help reduce risk:

  • Get stronger visibility into their entire web perimeter through use of a discovery solution (most enterprises don't know the contents of their web perimeter; it's typical to be unaware of up to 40% of the websites within the enterprise domain).
  • Determine which sites have vulnerabilities by scanning them and looking for common exploits such as SQL Injection. Modern automated cloud-based services can now accomplish this quickly and continuously — with minimum setup time and effort — across tens of thousands of sites, in days versus weeks or months.
  • Take action: decommission sites no longer in use, which ultimately reduces your company's attack surface. In one recent example, a Global 1000 company reduced its perimeter risk by 50% by shutting down just three websites that were running unpatched software and were no longer required.
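The discovery gap in the first step can be made concrete with a simple set difference; the hostnames below are invented for illustration:

```python
# Hypothetical inventory-gap check: compare the sites an enterprise
# believes it owns against what perimeter discovery actually found.
declared = {"www.example.com", "shop.example.com", "api.example.com"}
discovered = {"www.example.com", "shop.example.com", "api.example.com",
              "legacy.example.com", "staging.example.com"}

unknown = sorted(discovered - declared)            # sites nobody declared
gap = round(100 * len(unknown) / len(declared))    # size of the blind spot
print(unknown)
print(f"{gap}% more sites than declared")
```

Sites in `unknown` are exactly the unmonitored perimeter: candidates for scanning or decommissioning.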

Put Your Efforts Where They Do the Most Good

August 7, 2014
Filed under: Third-Party Software 

When doing anything challenging, whether it's a diet or writing a book, the hardest part can be figuring out where to start. Addressing software supply chain security is no different.

The typical organization has 390 business-critical applications supplied by third parties, to say nothing of the multitudes of marketing websites, operational sites, partner sites, off-the-shelf customer data management software and others that make up its overall third-party-developed software footprint. It's all too tempting either to lay down a blanket rule across all suppliers with no practical plan to implement it, or to give up and turn a blind eye to supplier-introduced vulnerabilities.

Giving up is not recommended, given that there are proven alternatives like Veracode’s vendor application security testing program that have been successful for Boeing and Thomson Reuters, among others. But it’s also important to not fall into implementation paralysis by reaching too broadly. Or, in other words, don’t boil the ocean!

Other supply chain transformation efforts suggest several ways to go after the problem. These include the 80/20 rule and low-hanging fruit. (These examples are drawn from the excellent Wharton article “Managing Green Supply Chains.”) To these best practices, Veracode would add the “go-forward” rule.

The 80/20 rule: Wal-Mart's energy-saving supply chain initiative began with its top 200 suppliers in China, which in 2008 constituted 60% to 80% of its total supply chain. By analogy, an enterprise could identify top software suppliers based on the number of applications or the amount of data under management, and concentrate its initial supply chain efforts there.

Low-hanging fruit: The Natural Resources Defense Council (NRDC) recommends instead gathering easy wins to create momentum. In the software supply chain, this means addressing suppliers who may already supply attestations, either publicly or to other customers, and documenting the process to create a "quick win" that can be reused as a case study.

Go-Forward: A software-supply-chain specific variation on the “low hanging fruit” strategy is to implement the new practices on suppliers as they enter or renew their presence in the supply chain via purchase or renewal of services. This is the part in the vendor relationship where the enterprise has natural negotiating power and is a good place to address new supply chain requirements if the enterprise lacks the market power to impose them on settled suppliers.

There are a variety of approaches that can be used to rapidly transform part of a supply chain, and an enterprise can choose among them based on the structure of its supplier base and its market power. Once the supply chain approach is chosen, attention turns to working with the suppliers themselves. I'll discuss some ways to do that in the next post.

The Seven Habits of Highly Effective Third-Party Software Security Programs

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Collaborate to innovate
  4. Use suppliers as force multipliers
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay

Address Proof of Software Security for Customer Requirements in 4 Steps


The world’s largest enterprises require proof of software security before they purchase new software. Why? Because third-party software is just as vulnerable to attack as software developed by internal teams. In fact, Boeing recently noted that over 90 percent of the third-party software tested as part of its program had significant, compromising flaws. As a software supplier, how do you get ahead of this trend?

Not every supplier has the resources and maturity to develop its own comprehensive secure-development process to the level of the Microsofts of the world, but that doesn’t mean security should be thrown out the window. Large, medium and small software suppliers — such as NSFOCUS and GenieConnect — have found significant benefit in incorporating binary static analysis into their development process, addressing vulnerabilities and meeting compliance with industry standards. This has earned them the VerAfied seal, which means their software product had no “very high,” “high” or “medium” severity vulnerabilities as defined by the Security Quality Score (SQS), nor any OWASP Top 10 or CWE/SANS Top 25 vulnerabilities that could be discovered using Veracode’s automated analysis.

This extra step to meet compliance with software security standards is one most suppliers don’t even consider: it could slow down development, add extra cost to the product and potentially reveal software vulnerabilities that the producer would rather not know about. Many software suppliers vainly hope that security is only necessary for a certain class of software — a banking program perhaps, but not a mobile application. However, security is relevant to every supplier, no matter their product or industry.

Software suppliers that neglect the security of their product are in for a rude awakening when the sales pipeline evaporates because they can’t answer questions about software security.

What should a supplier do to address a request for proof of software security? Here are four steps:

  1. Use — and document — secure coding practices when developing software. This may seem obvious, but developer documentation makes it easy to demonstrate that the software was developed to be secure from the very beginning.
  2. Test for vulnerabilities throughout the development process (the earlier and more frequent, the better). Don’t wait until the night before your product’s release to run your first security assessment, or your release will be delayed.
  3. Educate developers on how to find, fix and avoid security flaws. Many developers simply haven’t had proper training. Make sure they learn these skills not only for the benefit of your product, but also to improve your human capital.
  4. Proactively communicate with your customers about the steps you take to secure your product. This will improve existing relationships and help differentiate your product in the market.

It’s time for the software industry as a whole to embrace the trend of requiring proof of security as an opportunity to improve software everywhere.

Endless Summer: Hacker Cons Ride Wave of Third-Party Software Holes

August 5, 2014

OpenSSL set the stage, but at this week’s Black Hat and DEFCON conferences, researchers will bring down the house on third-party code.


We didn’t need the Black Hat and DEFCON hacker conferences to make us aware that vulnerabilities in third-party software were a major security concern – for software vendors and their customers. The “Heartbleed” vulnerability in OpenSSL did a great job of driving that point home.

But this month’s twin security conferences in Las Vegas will amplify what has already become a high-decibel conversation about the role that (unaudited) third-party components play in successful attacks against otherwise robust platforms these days.

In just one example, Jeff Forristal, CTO at mobile security firm Bluebox Security, will unveil a critical security hole that affects almost every Android phone in circulation.

The vulnerability, which Forristal calls "FakeID," is linked to a third-party software component that has been bundled with Android since 2010. Due to a flaw in Android's application handling, malicious applications can use the vulnerability to escape the restrictions of the Android application sandbox and gain special security privileges without any user notification or interaction.

In a conversation I had with Forristal, he said the code in question was an open source component that was “sucked into” the original version of Android in 2008. Even though the component in question has since been “deprecated” (discontinued), it has persisted in Android – the world’s most widely used mobile operating system.

This is no "theoretical" attack. The vulnerability could be used by attackers to sneak malicious applications onto Android devices by making them appear to come from legitimate publishers. That could lead to a malicious application having the ability to steal user data, recover passwords and secrets, or otherwise compromise the Android device, he said.

In another presentation, Kymberlee Price, director of ecosystem strategy at Synack, and Jake Kouns, CEO of the Open Security Foundation, will use epidemiologic models to explain the spread of vulnerabilities through an IT ecosystem by way of third-party code, Price recently told Dark Reading.

The data the two have collected provide proof that vulnerabilities flow throughout IT environments with the code they’re contained in (no surprise there). But they will also argue that current rankings of “severity” don’t take into account the pervasiveness of the vulnerable code and the context (how that third-party code is deployed). OpenSSL, for example, had a CVSS severity rating of “5” out of “10.” However, its widespread use and re-use within third-party applications made its actual impact much higher. More than 200 advisories linked to products using OpenSSL were issued in the wake of the revelation about the “Heartbleed” vulnerability.

“Once you start looking at what’s being accessed in some of these products it starts looking significantly more impactful. It may not be a 10.0, but it’s incredibly damaging,” Price told Dark Reading.

Forristal says that business dynamics often drive decisions to integrate third-party components. “They cut your time to market but, as OpenSSL shows us, you can pay the price.” He said he can’t explain why organizations don’t do thorough audits of third-party code they use, but suspects that human failings play a big role.

“People decide ‘we’ll just trust this thing,’ or ‘it seems good,’ or ‘other people use it,’” he said. While no piece of software is perfectly secure, companies owe it to themselves to do audits that can detect glaring vulnerabilities or flaws in code they use, he said.

How to Choose the Right Software Suppliers

July 30, 2014
Filed under: Third-Party Software 


When you think about securing your software supply chain, don't reinvent the wheel: you can learn a lot from initiatives like the "green" supply chain.

When undertaking something as momentous as driving a new buying criterion into the purchase of software, enterprises would be well advised to start practically, by choosing suppliers who are already building and selling secure software and do not need to be hectored into it. "Choose the right suppliers" has, nevertheless, the same sort of oxymoronic ring as "test the most insecure applications." How do you know which suppliers are the right (i.e., secure) ones?

However, this advice is ultimately more practical than it sounds.

Ensuring that suppliers chosen adhere to a new supply chain requirement depends upon two things: a capability to measure and enforce the supplier’s adherence to the requirement, and a clearly defined standard or certification that the supplier can use to advertise their capabilities on the issues.

Measuring and enforcing supplier compliance with the "green" standard has been carried out in multiple ways. Massive buyers like Walmart may be able to simply dictate new initiatives, on pain of the supplier losing Walmart's business. The Wharton article "Managing Green Supply Chains" describes a Walmart suppliers' conference in China at which the law was laid down: "the (supplier) CEOs were told that half of them would be getting more business from Walmart and the other half would no longer be doing any business at all with the retail giant. Walmart's new environmental rules were then handed out and the CEOs were told to make sure they figured out how to end up in the winning half."

On the other end of the spectrum is a collaborative effort with suppliers where there is a joint effort to identify the right ways to measure and describe compliance.

Establishing industry standards in transformation efforts is a follow-on to the stage of market evolution in which suppliers are working to “figure it out.” For all their weaknesses, standards can have a way of removing tremendous costs from supply chain transformation initiatives by defining clearly what counts as “compliant” and giving suppliers the ability to proactively advertise their compliance, rather than having to negotiate with each customer to establish what compliance means.

In “green” this can take a variety of different forms. For instance, when International Paper chooses suppliers that meet its goal to provide wood fiber from sustainable sources, they accept certifications from multiple bodies, stating, “The key is to work with the certification agencies rather than starting to get into arguments about differentiating very subtle differences between the approaches of the different certification bodies.”

It is harder to establish a standard signal of compliance in the application security world. Initiatives like “hacker proof” statements and seals based on limited testing draw scorn from security practitioners. At some point, though, there must be some balance struck between perfect, contextual security and adoption of a sufficiently strong standard, lest the perfect be the enemy of the good. In this light, the recent FS-ISAC working paper that establishes a combination of vBSIMM or equivalent maturity model, software composition analysis, and binary static analysis as the required controls for third party software security is a welcome sign of market maturity and a big step toward making it possible for an enterprise to choose secure suppliers.

The Seven Habits of Highly Effective Third-Party Software Security Programs

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Collaborate to innovate
  4. Use suppliers as force multipliers
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay
