Surely and not-so-slowly, the concept of “internationality” is disappearing — at least in terms of the free exchange of information — and the tiny, expensive devices in our pockets and purses are leading the charge.
For end users, the benefits of global information access are as obvious as they are numerous, especially thanks to apps such as Word Lens that can make you feel at home almost anywhere. But for developers facing international audiences for the first time, globalization brings a whole set of problems packed into a single, powerful word: standards.
This is especially true where application security testing is concerned. While every mobile platform is built with some degree of localization for the international markets it’s used in, the nuts and bolts are largely the same. And that’s only one compelling argument for global standardization of application security practices.
Different Countries, Different Needs, Different Practices
None of this is to say all mobile software needs a homogenized approach. Country-specific software often takes on the traits of its developers’ culture. Anyone who’s seen an interface designed for use in Germany, for instance, will tell you software on that side of the Atlantic tends to offer users a lot more options, often to the detriment of overall ease of use.
But security testing standards are different, especially when an app is designed for an international audience. Web- and mobile-app users everywhere might have varied needs and expectations when it comes to security and privacy, but they all have them. And though communication is usually the first line of defense here, the language gap can sabotage attempts at international security outreach. Though the math that runs our favorite mobile apps is universal enough, the language needed to apply it effectively is anything but.
To put all this another way, there’s a difference between theory and practice. Even if international companies want to get on board with uniform security testing standards, there’s still a big challenge: How?
Bridging the International Gap
The answer? Plenty of automation, for starters, and expert input when human intervention is required.
Agile development accounts for a lot of the software being built today, and automation is already a huge part of that process. Applying it to an international security testing standard just makes sense: The more aspects of your security management you can automate, the less time you’ll spend trying to bridge those aforementioned gaps.
Think about it. Unlike people, automated processes don’t need to spend time explaining the newest, most bleeding-edge concepts in security testing. Done properly, a single group of people in a centralized location can implement them anywhere. Behaviors can be analyzed, strings of code can be tested, and security standards can be enforced consistently across the globe, leaving little room for the language barrier (or lack of information, or any other number of security-related problems) to get in the way.
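To make that concrete, here is a deliberately simplified sketch of the kind of check that can be written once, centrally, and enforced everywhere. It is a hypothetical illustration, not any particular vendor’s scanner: real static analysis models data flow, while this toy version only pattern-matches risky call sites. All rule names here are invented for the example.

```python
import re

# Hypothetical rule set: the same rules apply to every team, in every country.
RULES = {
    "eval-call": re.compile(r"\beval\s*\("),         # arbitrary code execution
    "os-system": re.compile(r"\bos\.system\s*\("),   # shell command injection
    "md5-hash": re.compile(r"\bhashlib\.md5\s*\("),  # weak hash for credentials
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for every rule violation found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

code = 'user = input()\nresult = eval(user)\n'
print(scan_source(code))  # [(2, 'eval-call')]
```

Because the rules live in one place, updating them updates every team’s checks at once, with no translation or retraining required.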
In a mobile-app scene where multibillion-dollar corporations and two-person garage groups work shoulder-to-shoulder in the same industry, this also democratizes security from a finance perspective. When all apps share the same focus on keeping end users and their data safe, every company has the opportunity to release safe software.
The second leg of our proposed solution is a preventative — as opposed to a prescriptive or reactive — outlook on security testing, administered by experts in the field. The same benefits of centralized knowledge apply regardless of language. Automation can take the sting out of a lot of the more common security issues by providing a consistent set of practices and procedures, but when exceptions arise, who better to deal with them at the international level than people who have dedicated their careers to security?
A Globalized Security Focus
As the old saying goes, “The world keeps getting smaller every day.” When companies the world over approach app security with the same mindsets and standards, the “wall” protecting their end users only becomes stronger. That creates a healthier market for everyone — which is a good thing, no matter what language you speak.
Enterprises are using more apps than ever, many of which are cloud-based. That’s according to a recent Forbes article, and — no surprise — this increased use comes with increased risk. Survey data found that 85 percent of all data uploaded went to apps that enabled file sharing, and, perhaps more worrisome, 81 percent of data downloaded came from apps with no encryption of at-rest data. It’s no shock, then, to see a push from IT executives for enterprise-wide security programs that vet and review any app created, used or purchased by a company. And yet companies in both the United States and the United Kingdom are struggling to stay at the forefront of AppSec initiatives. With enterprise apps presenting more risk than ever before, why the disconnect?
Apps don’t come from a single source. As revealed by a Veracode/IDG webinar, 43 percent of apps in the US were internally developed, compared to 36 percent in the UK. Both countries sourced 35 percent of their apps from commercial vendors, while UK companies outsourced slightly more apps to a third party (30 percent — the US outsourced only 25 percent). But that 25 percent covers some big-name enterprises — for example, John Deere. While the global farm-equipment maker won’t outsource the design of “customer experience,” according to CNBC, it has outsourced mobile device code. John Reid, the director of product technology and innovation at Deere, says, “We could take the stance that we need to know how to write all the apps ourselves, but that’s not what makes the difference to our customers.” It might, however, if that app code doesn’t pass basic security requirements.
The easiest way to reduce app vulnerabilities is to create an enterprise-wide security program. In the United States, 52 percent of company executives have mandated this kind of program and are tracking its implementation, while 32 percent are aware of such programs but haven’t made them mandatory. Results in the UK are more concerning: While almost the same number of execs are aware of these programs as in the US, only 38 percent have made them mandatory. So what’s the holdup? Why does the UK lag behind the US, and why are stateside businesses not 100 percent in favor of end-to-end AppSec?
Part and Parcel
There are two major hurdles that any company must overcome to implement this kind of holistic enterprise policy. First is an understanding of what testers are looking for when they analyze in-house, commercial or third-party apps. For example, the National Institute of Standards and Technology (NIST) is creating a mobile-application vetting guide (the current draft, “SP 800-163: Technical Considerations for Vetting 3rd Party Mobile Applications,” is available online for review and comment) to help companies identify potential vulnerabilities. In many cases, these vulnerabilities aren’t obvious. As noted by NIST computer scientist Tom Karygiannis, “Apps with malware can even make a phone call recording and forward conversations without its owner knowing it.” It’s also possible for apps to gain access to contact lists or track a user’s location. Without a set of best practices for analyzing and reporting application vulnerabilities, any enterprise-wide effort ends up being slapdash and ad hoc, ultimately defeating the purpose.
The second part to this AppSec problem is the pipeline. With apps coming from so many sources and with so many cloud-enabled functions, it can be almost impossible for local IT professionals to catch and inspect each one. As a result, mandated security programs may fail not for lack of effort or guidelines, but rather from lack of resources. Due to this, it’s often worth partnering with an application security provider that can monitor, test and report on apps in real time — even when an enterprise is scaling to test hundreds to thousands of apps — and provide the framework for an effective, enterprise-wide initiative. Combined with a set of testing best practices like those from NIST, it’s possible to manage the app pipeline and ensure only “clean” applications come out the other side.
Businesses in the UK lag behind their US counterparts when it comes to application security, but companies in the United States aren’t immune to application risks. Intelligently managed, clearly defined and enterprise-wide AppSec is essential to reduce cloud-based application risks.
Supply chain management may conjure thoughts of enterprises driving business relationships with an iron hand – think of Walmart’s legendary purchasing power driving innovation into its suppliers. But some supply chain transformations occur through collaboration between the supplier and the enterprise in support of meeting the enterprise’s goal.
In green supply chain transformations, there are examples of this both in the formulation of environmental guidelines and in the development of practical solutions to environmental challenges. The same can be seen in secure supply chain efforts. Some of the innovations in Veracode’s VAST program, such as vendor onboarding and scoping calls, have come from supplier suggestions. Better still, the framework of VAST itself, in which suppliers are required to reach compliance with a policy but given latitude in how they test and correct issues to meet it, encourages collaboration between supplier and enterprise.
Veracode’s own VAST offering is a good example of collaboration between enterprises and vendors. Enterprises wanted the ability to understand the security of their purchased software, as they understood that vulnerable third-party applications put their data at risk. Software vendors had two concerns: they didn’t want enterprises to hold sensitive data that could put their IP at risk, and they didn’t want to perform bespoke assessments for each customer. The outcome of both parties’ desires has been the Veracode VAST model. By choosing to work with Veracode for a security attestation, software suppliers can provide the needed proof of security to their customers and prospects while still protecting their data and intellectual property.
As you work to secure your supply chain, you should be mindful of the partnership between you and the software supplier. By presenting software security as a common goal, you will gain better acceptance and adoption.
The Seven Habits of Highly Effective Third-Party Software Security Programs
- Choose the right suppliers
- Put your efforts where they do the most good
- Use suppliers as force multipliers
- Collaborate to innovate
- The elephant in the room is compliance
- Drive compliance via “WIIFM”
- Align benefits for enterprise and supplier – or pay
It’s all over the news lately: new, flashy apps make it out of the oven, get great press coverage—and are hacked days later. Even the satirically simple app Yo, which sends a “Yo” message to a user’s friends, was a victim. In many cases, app developers could have easily avoided massive blows to their reputations by taking planned approaches to application security.
Following a preemptive strategy is the best way to arm your app against external threats that can compromise users’ security. Take steps to ensure your app is secure before its release—this will help you build trust with your users and save you from unnecessary risks on your investment.
Code audits have become integral to the product development cycle. While it’s one thing to test an app for potential bugs by having users give it a beta run, an in-depth code audit is really the only way to evaluate an application against the types of attacks that seasoned hackers will almost certainly attempt when it hits the mainstream.
An application security audit is the process in which a team of code auditors (usually former hackers themselves) comb through an application’s codebase and perform a series of checks, such as:
- Is the code doing something it shouldn’t be doing?
- Can the code be coaxed to do something fishy?
- Is the app transmitting user-sensitive data in the clear?
- Have programmers implemented security precautions appropriately?
Aside from these manual checks, audits can include automated testing for security issues. “White box” automated testing examines the application from the inside, with access to its code, checking whether hostile inputs can make the application behave in odd ways. “Black box” automated testing probes the application from the outside while it is running, the way an attacker would. Applications can also be fuzzed, meaning they are subjected to massive loads of randomized inputs to see whether they can handle them without crashing or compromising a user’s device or information.
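A minimal version of that fuzzing step might look like the following sketch. It is illustrative only — real fuzzers are coverage-guided and far more sophisticated — and the input handler under test (`parse_age`) is a toy function invented for the example:

```python
import random
import string

def parse_age(raw: str) -> int:
    """Toy input handler under test: validates a user-supplied age string."""
    raw = raw.strip()
    if not raw.isdigit():          # reject anything non-numeric
        raise ValueError("not a number")
    age = int(raw)
    if not 0 <= age <= 150:        # reject out-of-range values
        raise ValueError("out of range")
    return age

def fuzz(target, trials: int = 1000) -> int:
    """Throw randomized inputs at `target` and count unexpected crashes."""
    crashes = 0
    rng = random.Random(42)        # seeded so runs are reproducible
    for _ in range(trials):
        length = rng.randint(0, 40)
        payload = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(payload)
        except ValueError:
            pass                   # a clean rejection is the expected behavior
        except Exception:
            crashes += 1           # any other exception is a potential bug
    return crashes

print(fuzz(parse_age))  # 0 — this handler survives every random input
```

A handler that crashed with an unhandled exception (rather than rejecting bad input cleanly) would show up immediately in the crash count, which is exactly the kind of defect fuzzing exists to surface.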
It’s important to run a code audit on any application before its release, as security bugs are rarely caught by beta testers (who are more likely to find general usability problems). An app’s original programmers can’t reasonably be tasked with finding security bugs in their own code, either — these bugs almost always need another team’s perspective in order to be discovered.
After an app has had its initial audit, results are communicated back to its developers, who will typically need to fix an issue or two before the app is published. It’s perfectly normal for audits to find bugs — in fact, a bug-free audit is almost never a good sign. Once the app is published, an annual audit ensures that added features are also checked for bugs. Auditing early and often is not only an intelligent way to protect your investment; it’s also a concrete testament to your devotion to user security, especially as privacy concerns become increasingly mainstream.
Consider Open-Sourcing Your Code
If your app offers users advanced security features such as data encryption or anonymity, it’s definitely worth asking your team to open-source the application’s code by posting it online for other programmers to evaluate. In the security community, open-sourcing security-critical code is a tradition that gets more eyes on the code to evaluate it and help decide whether it’s trustworthy. It also gives you free audit hours and, more importantly, builds your credibility with other programmers in the tech community.
It’s becoming evident that modern enterprise executives understand the importance of application security (AppSec). Despite this, however, only a very small percentage of applications undergo a true security vulnerability assessment, leaving the majority wide open to attack. Enterprise executives who understand the importance of AppSec must learn how to secure both new and existing apps, and develop a solution that makes it simple to keep them secure going forward — even when scaling to test hundreds to thousands of apps per month.
The Gap in Enterprise-Wide Application Security
A joint study with CSO and IDG found that executives are more focused than ever on ensuring their applications are secure. The study surveyed US executives and determined that 52 percent of enterprises have mandated an enterprise-wide application security program, which marks a significant improvement from similar surveys in previous years.
This shift in executive thinking is likely due to both the visibility that application security issues have received in the past year and the fact that even after these executives have shored up the access points and physical infrastructure, hackers and thieves are able to gain access. Applications don’t have to be weak points in an enterprise’s network, but if they are never tested for obvious vulnerabilities, they can act as open doors.
Despite these facts, and despite the renewed attention that AppSec is getting in the C-suite, only 36 percent of enterprise-developed applications go through a security vulnerability assessment to determine if security holes exist, and less than 10 percent of enterprises ensure that all critical apps are tested during production. With almost two-thirds of applications currently untested, and the ease of development putting new applications into play every day, it’s little wonder that one business after another suffers data thefts or system attacks.
The good news is that businesses appear ready to address these problems: The survey found that 70 percent of US businesses expect to increase their spending on security over the next year, a number that jumps to 80 percent when only large enterprises are considered.
Delivering Necessary Application Security
Given how few internally developed applications have been properly tested, executives may fear that true AppSec on thousands of existing apps is unattainable. But ensuring enterprise-wide application security isn’t impossible, even for large enterprises with dozens or hundreds of development teams.
Chief information security officers (CISOs) and managers must begin by getting an understanding of the development teams: what types of applications they are working on and their respective development cycles. This can be a fairly large undertaking, especially since some teams may have adopted an Agile development paradigm while others remain on a legacy waterfall system. Once this list is in place, use it to create an overall security policy that takes into account the needs of all the apps in development. This way, the policy remains consistent and doesn’t have to be built from scratch each time the AppSec program expands to a new team.
Taking into account these four points will allow CISOs to find a security solution that fits their needs:
- Scalable: A cloud-based program can effortlessly grow as more development teams are added.
- Centralized: A security policy runs more smoothly when it is managed and updated from a single dashboard.
- Intelligent: It should learn as it scans and utilize the power of the cloud to recognize and check for new threats as they emerge.
- Binary static testing: The solution must be able to scan apps that are still in development and that contain third-party code.
Roll out the solution to a handful of development teams to begin with, creating a subset consisting of those working on mission-critical apps and those with timetables allowing them the flexibility of adding new steps to their software development life cycle. As these teams find success with the program, and as the enterprise becomes more secure, this small series of wins will make it easier to scale up the program to the entire enterprise. Once all the development teams are on board, the same program can be expanded to provide a security vulnerability assessment on existing apps, eventually ensuring that the entire enterprise and thousands of apps are secure.
If enterprise executives want to get serious about enterprise-wide application security, creating a solution around these practices will provide the best opportunity for success. Plus, when the right cloud-based solution is in place, scaling up as application needs increase or shifting gears as the threat landscape changes is remarkably simple and ensures that the entire stable of applications remains secure in the coming years.
Nothing says ‘yawn’ like the topic of insurance. One notable exception may be the mushrooming marketplace for cyber risk insurance. But do insurers really know what they’re underwriting?
Nothing says ‘yawn’ quite like insurance – and I say this as the son of one insurance salesman and the brother of another. After all, the insurance industry exists to manage risks: steering the ship of our lives on a calm and even path through the vicissitudes that make life exciting.
So bland is the insurance business perceived to be that it’s the stuff of Hollywood comedy. In the 2004 film Along Came Polly, Ben Stiller played a skittish, risk-averse insurance adjuster with actuarial data on bathroom hygiene at his fingertips (no pun). Woody Allen famously depicts his hapless criminal Virgil Starkwell locked in solitary confinement with an eager insurance salesman in the 1969 mockumentary Take the Money and Run. Cruel and unusual punishment, indeed.
Boring though it may be, insurance markets are incredibly important in helping society manage risks of all sorts. Insurance markets also have a funny way of shaping behavior – both personal and commercial – in ways that serve the public interest.
Take the response to Hurricane Sandy as just one example. Lawmakers in Washington, D.C., may never agree on whether that storm was a product of a warming climate. In fact, they may debate the ‘facts’ of climate change from now until the end of time. But property owners and businesses in that storm’s path are already adjusting to the reality of a more volatile climate – moving critical electrical, environmental and building-management systems onto higher floors. And they’re doing so because of pressure from private insurers to mitigate future risks from flooding and storm-related damage.
Many of us would like to see the same thing happen with cyber security – especially given the justified concerns about regulating an industry as dynamic as the tech sector and (more immediately) Washington’s difficulty passing even straightforward legislation. (Highway funding, anyone?) A wider reliance on cyber insurance to hedge risk may well have the effect of enforcing best practices on organizations across industries – from authentication to application development. That would replace today’s variable and ad-hoc approach to security, in which each company is left to survive by its own wits.
And change is happening – slowly. Target, the big-box retailer that was the target (pun intended) of a major data breach last year, reportedly had $100 million worth of cyber insurance coverage through a variety of separate policies. That money has helped to offset the monetary damage of the breach and spurred other companies to look for ways to hedge their cyber risk as well. The firm Marsh & McLennan estimates that the cyber insurance market could double to $2 billion in 2014.
But as Reuters reported recently, insurers are having a difficult time getting their arms around cyber risk. And that threatens to hold back the entire cyber insurance market at a critical time.
As you can imagine, insuring cyber risk is very different from insuring lives or automobiles – and potentially a lot more risky. For one thing, insurance companies have scant experience with and knowledge of “hackers” (broadly defined) – especially compared with the decades of data they have on driver behavior and automobiles. Not understanding how the thing you’re insuring against might behave leaves insurers on the hook for damages they might not have anticipated (and priced into their policies).
Why is that? Policies are often written based on the insured’s attention to standard defensive measures, rather than the findings of comprehensive security audits, Bryan Rose, a managing director at Stroz Friedberg, told Reuters. That’s a big problem. As we know only too well, the gap between threats and defenses is wide and getting wider every day. As Target illustrates, questionnaires that fail to address third-party risk also omit a major avenue for successful attacks.
More to the point for this blog: insurance firms that fail to take a hard look at application security, both in third-party products and in internally developed applications, will find themselves skating on thin ice, risk-wise.
As we know, many of the largest and most damaging data breaches come by way of attacks that exploit common and avoidable application vulnerabilities like SQL injection and Cross-Site Scripting. Insurance firms that want to manage their risk need a strong foundation in application security, as well as the tools and talent to spot vulnerable and shaky applications – if not to ferret out specific, exploitable holes.
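“Avoidable” is worth emphasizing: SQL injection, for instance, is defeated by one discipline — never build queries by pasting user input into SQL strings. A minimal sketch using Python’s standard sqlite3 module, with a toy table invented for illustration:

```python
import sqlite3

# Toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

attacker_input = "nobody' OR '1'='1"

# VULNERABLE: the input is spliced directly into the SQL string, so the
# attacker's quote breaks out of the literal and the OR clause matches everyone.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(unsafe)  # [('alice',), ('bob',)] — every row leaked

# SAFE: a parameterized query treats the input as a single opaque value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # [] — no user is literally named "nobody' OR '1'='1"
```

The two queries differ by a few characters of source code, which is exactly why an insurer’s checkbox questionnaire can’t see the difference — and why code-level assessment matters.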
This will be a painful transition for both insurers and their customers. Insurers will almost certainly take more baths in the wake of major breaches, as it appears they did in the Target incident. And companies seeking to hedge their cyber risk will need to do more than check the box next to “firewall,” “antivirus” and “intrusion detection.” They’ll have to lay bare both the protections they rely on and the security and resiliency of the applications they’re protecting.
In the long term, however, society as a whole will benefit from wider adoption of cyber insurance. The public and private sectors have pursued countless strategies to address this endemic problem. Lists like the OWASP Top 10 or the SANS Top 20 have long highlighted the most serious and common problems, with little to show for it. As companies look to manage cyber risk, and insurance companies step forward to help them do it, we’ll begin to create structures that internalize the true costs of cyber security for society. That process will bring short-term pain all around. But it will also produce long-term gains.
It goes without saying that all IT organizations should have an active Incident Response (IR) Plan in place – i.e. a policy that defines in specific terms what constitutes an information security incident, and provides a step-by-step process to follow when an incident occurs. There’s a lot of good guidance online about how to recruit a data breach response team, set initial policy, and plan for disaster.
For those organizations already prepared for IT incident response, be aware that best practices continue to evolve. The best IR plans are nimble enough to adjust over time. However, when the incident in question is feared to be a possible data breach, organizations should add a couple of other goals as part of their comprehensive application security disaster planning:
- The complete eradication of the threat from your environment.
- Improved AppSec controls to prevent a similar breach in the future.
Veracode’s Information Security Assessment Team, which put together our own IR playbook, recommends that IT groups follow these five emerging guidelines to plan for the reality of today’s risks and threats.
1. Plan only for incidents of concern to your business.
According to the SANS Institute, the first two steps to handling an incident most effectively are preparation and identification. You can’t plan for everything, nor should you. For example, if no business is conducted through the organization’s website, there is probably no need to prepare for a Denial of Service attack. Companies in heavily regulated industries such as financial services or healthcare receive plenty of guidelines and mandates on the types of threats to sensitive and confidential data, but other industries may not enjoy similar “encouragement”.
Ask yourselves: What is OUR threat landscape? Why would hackers and criminals want to attack us? The possible answers will lead to a probable set of root causes for data breach attempts. Focus on what’s possible, but also don’t be afraid to think creatively. The U.S. national security establishment was famously caught flat-footed by the events of 9/11 as the result of a “lack of imagination” about what terrorists were capable of accomplishing. By constantly re-evaluating your organization’s threat landscape (and by relying on solid threat intelligence to identify new and emerging threats), your data breach response team will remain on its best footing.
2. Don’t just plan your incident response, practice it.
Practice: it’s not just the way to Carnegie Hall. IR plans must not be written and then left on the shelf to gather dust. A proactive and truly prepared information security organization educates its IT staff and users alike about the importance of regularly testing and updating breach response workflows. Plans must be drilled and updated regularly to remain viable. Even if it’s simply around a conference table, run through your response plan. Some organizations do this as often as monthly; your unique industry and the probable threats it faces will determine the ideal frequency of this best practice. At Veracode, we run regular tabletop exercises on a number of possible scenarios.
The worst mistakes are typically made before the breach itself. Be prepared. The purpose of IR drills is to ensure that everyone understands what he or she should be doing to respond to a data breach, quickly and correctly. A good rule of thumb here is that “practice makes better, never perfect.” It pays to be honest about your IR team’s capabilities and their ability to effectively neutralize the most likely threats. If the necessary skills don’t exist in-house, then better plan to retain some outside help that can be standing by, just in case.
3. In speed of response, think “minutes” not “hours”.
IR teams should always strive to improve response times – that’s a given – and “within minutes” is today’s reality. On the Internet, a service outage of more than one hour is considered significant. Social media chatter can very quickly amplify the damage done to your business, so get out ahead of the crisis, and stay there.
SANS Institute defines the third step in breach response as “containment” – to neutralize the immediate threat and prevent further damage to critical or customer-facing systems. Move quickly to determine the possible severity of the data breach and then follow the customized response workflows in place for that scenario. To borrow some terminology from the military: is your “situation room” responding to a Defcon 1 attack or more like Defcon 5? Even as your IR team moves to eradicate the threat, you can be communicating to key stakeholders appropriately – according to the reality of the situation at hand.
4. Don’t over-communicate.
This guideline seems counter-intuitive. Sharing is caring, right? Wrong. Especially when it comes to the fate of your organization’s confidential or sensitive customer information. Your initial notification to customers should almost immediately follow detection as a pre-planned rote response. There will be no time to wordsmith the perfect statement in the thick of battle; better have it pre-packaged and ready ahead of time. That being said, this statement should be short and concise. Acknowledge both your awareness of the incident and the IR team’s continuing efforts to safely restore service, as soon as possible.
After that, plan to give updates to all stakeholders on a methodical basis. Act like the NTSB after a plane crash: it gives regularly scheduled press conferences on what it knows so far, while firmly pushing back on what it doesn’t. Think like an investigator and deal in facts. Don’t speculate as to the root cause of the breach or even when service will be restored, unless that timeline is precisely known. Your communication to the market, while measured, should always be sympathetic and as helpful as possible. One final piece of advice: tell your customers the same thing you tell the media. There are few, if any, secrets left on the Internet.
5. Focus on restoring service first, root cause forensics later.
The root cause of a data breach incident is typically not immediately known, but that should be no impediment to your restoring service ASAP for customers (once the threat is contained and destroyed, of course). Keep the focus on the customer. Get back online as quickly as possible. Clearly, SANS outlines “recovery” as the step that ensures that no software vulnerabilities remain, but…
Ignore the engineers and analysts who want to investigate root cause immediately. With today’s sophisticated attacks, this can take weeks or months to determine, if it can be determined at all. Still, incident response is not over when it’s “over”. As we’ve asserted, the best organizations – and their IR teams – take the time to learn from any mistakes. Monitor systems closely for any sign of weakness or recurrence. Analyze the incident and evaluate (honestly) how it was handled. What could be improved for better response in the future? Revise your organization’s IR Plan, making any necessary changes in people, processes or technology for when or if there is a next time. Practice any new workflows again and again until you know them cold.
Solid IT risk management strategies include disaster recovery planning and the creation of a living, evolving incident response playbook. Today’s IR plans need to be focused, factual and fast. Every organization needs to budget for the hard IT costs associated with data breach recovery. However, a comprehensive and battle-tested plan will help mitigate the “soft costs” associated with poorly handled data breach incidents. These can include lingering damage to revenue, reputation or market value – long after the initial crisis is resolved.
Filed under: application security, Third-Party Software
The world’s largest enterprises require proof of software security before they purchase new software. Why? Because third-party software is just as vulnerable to attack as software developed by internal teams. In fact, Boeing recently noted that over 90 percent of the third-party software tested as part of its program had significant, compromising flaws. As a software supplier, how do you get ahead of this trend?
Not every supplier has the resources and maturity to develop its own comprehensive secure-development process to the level of the Microsofts of the world, but that doesn’t mean security should be thrown out the window. Large, medium and small software suppliers — such as NSFOCUS and GenieConnect — have found significant benefit in incorporating binary static analysis into their development process, addressing vulnerabilities and meeting compliance with industry standards. This has earned them the VerAfied seal, which means their software product had no “very high,” “high” or “medium” severity vulnerabilities as defined by the Security Quality Score (SQS), nor any OWASP Top 10 or CWE/SANS Top 25 vulnerabilities that could be discovered using Veracode’s automated analysis.
This extra step to meet compliance with software security standards is one most suppliers don’t even consider: it could slow down development, add extra cost to the product and potentially reveal software vulnerabilities that the producer would rather not know about. Many software suppliers vainly hope that security is only necessary for a certain class of software — a banking program perhaps, but not a mobile application. However, security is relevant to every supplier, no matter their product or industry.
Software suppliers that neglect the security of their product are in for a rude awakening when the sales pipeline evaporates because they can’t answer questions about software security.
What should a supplier do to address a request for proof of software security? Here are four steps:
- Use — and document — secure coding practices when developing software. This may seem obvious, but developer documentation makes it easy to demonstrate that the software was developed to be secure from the very beginning.
- Test for vulnerabilities throughout the development process (the earlier and more frequent, the better). Don’t wait until the night before your product’s release to run your first security assessment, or your release will be delayed.
- Educate developers on how to find, fix and avoid security flaws. Many developers simply haven’t had proper training. Make sure they learn these skills not only for the benefit of your product, but also to improve your human capital.
- Proactively communicate with your customers about the steps you take to secure your product. This will improve existing relationships and help differentiate your product in the market.
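The second step – testing throughout the development process – is usually enforced as an automated gate rather than a manual check. Here is a minimal sketch of such a gate, assuming a hypothetical scanner that writes its findings to a JSON file with a severity field; the report format and severity names are illustrative, not any particular vendor’s:

```python
import json

# Severities that block a release; these mirror the "very high", "high"
# and "medium" thresholds mentioned above (illustrative, not a vendor spec).
BLOCKING = {"very high", "high", "medium"}

def release_gate(report_path):
    """Return True if the build may ship, False if blockers remain."""
    with open(report_path) as f:
        findings = json.load(f)  # hypothetical: a JSON list of findings
    blockers = [item for item in findings if item["severity"] in BLOCKING]
    for b in blockers:
        print(f"BLOCKED: {b['severity']} - {b['title']}")
    return not blockers
```

Wired into CI, a gate like this fails the build on every commit that introduces a serious finding, instead of surfacing everything the night before release.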
It’s time for the software industry as a whole to embrace the trend of requiring proof of security as an opportunity to improve software everywhere.
Filed under: application security, SDLC, Software Development
By Chris Lynch, Partner, Atlas Venture
The story of Yo will be used as a cautionary tale in the VC community for years to come. Only a few days after receiving a much-talked-about $1.2 million in Series A funding from angel investor and serial entrepreneur Moshe Hogeg, Yo suffered a massive security breach. The breach made more headlines than the funding, and took the wind out of the company’s sails – possibly for good.
How did the breach happen? In the weeks that followed, several journalists offered their analyses, including @VioletBlue (“People invested $1.2 million in an app that had no security”), @mikebutcher (“App allegedly hacked by college students”) and @mthwgeek (“Yo been hacked made to play Rick Astley”).
While the epic rise and fall of Yo, and how it was hacked, make for an interesting story, as an investor that is not the part that jumped out at me. The question I have is this: how did an experienced investor like Moshe Hogeg (or any investor, for that matter) put money into a technology without learning about its development process? The app was built in about eight hours. What does that indicate about the QA process? What does that say about the security of the software?
The eight-hour development time is impressive, and demonstrates drive on the part of the app’s developers. However, I have questions about the security of a product that can be developed in a single standard work day. And Yo’s prospective customers – the advertising firms it would inevitably sell this data to – would have asked the same question.
When I listen to a start-up pitch me on their next-gen/transformational/whatever product, I always ask whether the technology is truly enterprise-class: is it scalable, reliable and secure? One or two groups within an enterprise may order a few of your widgets without this, but if you are gunning for the big bucks, you want an enterprise-wide deployment of your technology. That requires you to prove your product is just as reliable and secure as those of the largest players in the market. No one gets fired for buying IBM; people get canned when they purchase software from a cutting-edge start-up that ends up causing a data breach and costing the enterprise millions. Security is table stakes if you want to play with the big boys – and that includes enterprises buying your product and VCs like Atlas Venture backing your company.
When investing in a company, or product, it is essential that I understand everything I can about the technology – including the security of that product. It isn’t enough to scrutinize the need for the technology in the market and the product’s functionality. I must also understand how the product is developed, and if secure development practices are in use. Otherwise I am setting myself up to lose a lot of money in the event of a breach.
As investors in new companies and technologies we are taking risks, and without investors taking these risks we will never see the next Facebook or Instagram. However, these risks we take should be calculated jumps, not leaps of faith. Investing $1.2 million into a company without this level of due diligence is irresponsible – unless you are looking for some sort of revenue loss tax break.
I have a feeling Moshe Hogeg thought he had a winning product when he wrote that check. But he didn’t conduct a full due-diligence process, and he is paying dearly for that mistake now. I feel bad for Moshe Hogeg, but I hope his misfortune can serve as a warning to the investment community as a whole, and more broadly to buyers and users of software, whether they are consumers or businesses. Software security is as important as software functionality, and simply assuming security was a consideration during the development process is no longer good enough. Software development companies need to provide documented proof if they expect to get funding and, ultimately, to generate revenue.
Filed under: application security, Application Security Metrics, Dynamic Analysis
Another day, another web application breach hits the news. This time ITWorld reports Hackers steal user data from the European Central Bank website, ask for money.
I can’t say that I’m surprised. Vulnerabilities such as SQL injection and cross-site scripting are easy for attackers to detect and exploit, yet they remain very common across web applications.
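SQL injection in particular persists because the unsafe pattern is a one-line habit. Here is a minimal sketch of the flaw and the standard parameterized-query fix, using Python’s built-in sqlite3 module; the table and payload are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Vulnerable: attacker-controlled input is concatenated into the query.
# A "name" of "' OR '1'='1" returns every row, not just the matching user.
def find_user_unsafe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

# Safe: a parameterized query treats the input strictly as data.
def find_user_safe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back
print(find_user_safe(payload))    # no rows match
```

The parameterized version never interprets the payload as SQL, so the classic `' OR '1'='1` trick matches nothing.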
The survey we just completed with IDG highlights the problem: 83% of respondents said it was critical or very important to close their gaps in assessing web applications for security issues. Yet a typical enterprise:
- has 804 internally developed web applications
- plans to develop another 119 web applications with internal development teams over the next 12 months
- tests only 38% of those web applications for security vulnerabilities
And these numbers don’t include all the web applications that are sourced to third-party software vendors or outsourced development shops.
The assessment methodologies for finding web application vulnerabilities aren’t a mystery – we all know about static and dynamic testing. It’s the scale at which web applications must be found, assessed for vulnerabilities and then remediated that makes this difficult for large enterprises.
Think about it: 119 applications over the next 365 days means a new web application deployed on an enterprise web property roughly every three days.
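The arithmetic behind that deployment rate, and the testing gap it implies, can be sketched from the survey figures above. Note the second calculation assumes, purely as an illustration, that the 38% testing rate applies across the combined portfolio:

```python
# Back-of-the-envelope math from the IDG survey figures cited above.
existing_apps = 804      # internally developed web applications today
new_apps = 119           # planned over the next 12 months
tested_fraction = 0.38   # share actually assessed for vulnerabilities

# One new application roughly every three days.
days_between_deployments = 365 / new_apps

# Assumption (illustrative): the 38% rate applies to old and new apps alike.
untested = round((existing_apps + new_apps) * (1 - tested_fraction))

print(f"A new app roughly every {days_between_deployments:.1f} days")
print(f"About {untested} applications never tested for vulnerabilities")
```

Even under that generous assumption, well over five hundred applications in a typical enterprise never see a security assessment.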
Is it any wonder that web application breaches keep happening?
Learn more about Veracode’s cloud-based service:
- Web Application Perimeter Monitoring
- Dynamic Analysis (DAST)
- Third-Party Program: Vendor Application Security Testing