Yo, A Cautionary Tale for the VC Community

By Chris Lynch, Partner, Atlas Venture

The story of Yo will be used as a cautionary tale in the VC community for years to come. Only a few days after receiving a much-talked-about $1.2 million in Series A funding from angel investor and serial entrepreneur Moshe Hogeg, Yo suffered a massive security breach. The breach made more headlines than the funding, and took the wind out of the company’s sails – possibly for good.

The jury is still out on the future of the Yo app, but being hacked put the app in the headlines for all the wrong reasons.

How did the breach happen? In the weeks that followed, several journalists offered their analysis, including @VioletBlue (people invested $1.2 million in an app that had no security), @mikebutcher (the app was allegedly hacked by college students) and @mthwgeek (Yo was hacked and made to play Rick Astley).

While the epic rise and fall of Yo and the way it was hacked make for an interesting story, as an investor, this is not the part of the story that jumped out at me. My question is: how did an experienced investor like Moshe Hogeg (or any investor, for that matter) put money into a technology without learning about its development process? The app was built in about eight hours. What does that indicate about the QA process? What does that say about the security of the software?

Join Chris Lynch and Veracode CEO Bob Brennan for a webinar discussing: Why You Need to Be a Secure Supplier. Register for the webinar here!

The eight-hour development time is impressive, and demonstrates drive on the part of the app’s developers. However, I have questions about the security of a product that can be developed during a single standard work day. And Yo’s prospective customers – the advertising firms it would inevitably be selling user data to – would have asked the same question.

When I listen to a start-up pitch me on their next-gen/transformational/whatever product, I always question whether the technology is truly enterprise-class: is it scalable, reliable, and secure? One or two groups within an enterprise may order a few of your widgets without this, but if you are gunning for the big bucks, you want an enterprise-wide deployment of your technology. This requires you to prove that your product is just as reliable and secure as those of the largest players in the market. Because no one gets fired for buying IBM. People get canned when they purchase software from a cutting-edge start-up that ends up causing a data breach and costing the enterprise millions. Security is just table stakes if you want to play with the big boys. This includes enterprises buying your product and VCs like Atlas Venture backing your company.

When investing in a company or product, it is essential that I understand everything I can about the technology – including the security of that product. It isn’t enough to scrutinize the market need for the technology and the product’s functionality. I must also understand how the product is developed, and whether secure development practices are in use. Otherwise I am setting myself up to lose a lot of money in the event of a breach.

As investors in new companies and technologies we are taking risks, and without investors taking these risks we will never see the next Facebook or Instagram. However, these risks we take should be calculated jumps, not leaps of faith. Investing $1.2 million into a company without this level of due diligence is irresponsible – unless you are looking for some sort of revenue loss tax break.

I have a feeling Moshe Hogeg thought he had a winning product when he wrote that check. But he didn’t conduct a full due diligence process, and he is paying dearly for that mistake now. I feel bad for Moshe Hogeg, but I hope his misfortune can serve as a warning to the investment community as a whole and, more broadly, to buyers and users of software – whether they are consumers or businesses. Software security is as important as software functionality, and simply assuming security was a consideration during the development process is no longer good enough. Software development companies need to provide documented proof if they expect to get funding and, ultimately, to generate revenue.

VerAfied Feature – Security: the ugly secret at the heart of #eventtech?

July 25, 2014 · Filed under: ALL THINGS SECURITY

This blog post was originally published by GenieConnect at http://www.genie-connect.com/blog/security-the-ugly-secret-at-the-heart-of-eventtech. GenieConnect joined the ranks of our VerAfied secure software directory in June of this year using our static binary analysis service. We’re excited by, and supportive of, GenieConnect’s decision to make the security of its software and users a priority.

If you’re short of something to do today, try putting mobile security into Google News – you’ll get over 6 million hits. It’s not difficult to see why: in an age of BYOD, the proliferation of tablets and the ever-increasing sophistication of smartphones, information is going mobile – and the implications of this are scaring the hell out of people. Industry analyst firm Gartner claimed that 75 percent of mobile security breaches will result from mobile application misconfiguration. Even the largest app vendors are not immune – Spotify recently required users to update to a new, more secure version of its Android app.

Now, how many results would you find if you put ‘event tech’ mobile security into Google News? Well, given the importance of the data stored in native event apps (corporate plans and the personal records of thousands of attendees, for instance) and the debate around the securing of mobile devices, there should be millions, right? Wrong. There are only seven – and three of them relate to our recent announcement that we were the first #eventtech vendor to achieve the VerAfied security mark.

It’s curious, isn’t it? Is there an industry omertà – a code of silence – around this issue? I’m beginning to think so. Earlier this year, TechWeekEurope reported that a mobile app (ironically for the RSA security conference) had “leaked data on thousands of users”. Now, in a hugely competitive industry where companies fight tooth and nail for the slightest competitive advantage, I was expecting a deluge of coverage over this issue as rivals crawled over each other to exploit this flaw. But there was nothing.

With hindsight, I think there was an industry-wide sigh of relief, a sense that, “there but for the grace of God go I”; and, thankful that the hackers had chosen to go elsewhere, most event tech vendors put their heads back into the sand. Well, GenieConnect chose not to do this.

We knew that achieving VerAfied status would tell the market that we took security seriously. So we submitted our entire platform to the VerAfied testing regime. As our CEO Giles Welch said, “By enlisting the services of Veracode, the world’s most powerful application security platform, we can reassure clients that our software complies with the highest security standards.”

Particularly over the past few months, we’ve seen an increased focus on the security aspects of our solution. In fact, we’ve recently won some major contracts following a global procurement process in which security was a paramount consideration. This issue is clearly not going to go away and, at GenieConnect, we believe that security certification will become the new normal for event tech.

So, isn’t it time that we as an industry take our heads out of the sand and embrace this as an opportunity rather than resisting it as a threat?

To find out more about securing your #eventtech solution, download our Best Practice guide.

Just Another Web Application Breach

Does this resemble your application security program’s coverage? We can help.

Another day, another web application breach hits the news. This time ITworld reports: Hackers steal user data from the European Central Bank website, ask for money.

I can’t say that I’m surprised. Vulnerabilities like SQL injection and cross-site scripting (XSS) are easy for attackers to detect and exploit, yet they remain very common across web applications.
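
To make the first of those flaw classes concrete, here is a minimal Python sketch – using a hypothetical users table over SQLite – of the string-built query that makes SQL injection trivial, next to the parameterized form that closes the hole.

    import sqlite3

    def find_user_unsafe(conn, username):
        # VULNERABLE: attacker-controlled input is spliced into the SQL text,
        # so a username like "x' OR '1'='1" rewrites the query itself.
        query = "SELECT id, email FROM users WHERE name = '%s'" % username
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Safer: the bound parameter is sent as data, never parsed as SQL.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()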

The survey that we just completed with IDG highlights the problem – 83% of respondents said it was critical or very important to close their gaps in assessing web applications for security issues. However, a typical enterprise:

  • has 804 internally developed web applications
  • plans to develop another 119 web applications with internal development teams over the next 12 months
  • tests only 38% of those web applications for security vulnerabilities

And these numbers don’t include all the web applications that are sourced from third-party software vendors or outsourced development shops.

The assessment methodologies for finding web application vulnerabilities aren’t a mystery – we all know about static and dynamic testing. It’s the scale at which web applications must be found, assessed for vulnerabilities and then remediated that makes this difficult for large enterprises.

Think about it: 119 applications over the next 365 days means a new web application is deployed on an enterprise web property roughly every 3 days.

Is it any wonder that web application breaches keep happening?

Learn more about Veracode’s cloud-based service.

For Java: I Patch, Therefore I Am?

Oracle’s Java platform is so troubled that the question is whether to patch it or kill it off.

2145480_m

Oracle Corp. released its latest Critical Patch Update (CPU) on Tuesday of last week, with fixes for 113 vulnerabilities spread across its product portfolio, including 29 for Oracle’s Fusion Middleware and 20 for the troubled Java platform.

The release has prompted a chorus of entreaties to “patch now,” including those from the SANS Internet Storm Center, US-CERT and Brian Krebs. A surprising number of them, however, also held out the possibility of not patching Java and, instead, just not using it.

This isn’t loose talk. It wasn’t that long ago that the headlines were all about new, critical security holes discovered in Java. Exploits for those vulnerabilities were used in online ‘drive-by download’ and ‘watering hole’ attacks aimed at high-value targets, including employees at companies like Facebook, Apple and Microsoft. The advice back then was to simply turn Java off – and leave it off – when you browse the web.

“Oracle/Java is probably by now one of the most successful charities in the world,”

- Daniel Wesemann

The furor over Java’s vulnerability subsided – even if the attacks and patches didn’t. Eight of the 20 Java vulnerabilities fixed by Oracle were rated 9.0 or higher on a severity scale of 1-10. One of them, CVE-2014-4227, rated a perfect “10.” All the reported vulnerabilities would allow a remote attacker to exploit the vulnerable system without first authenticating (signing in) to it.

The difficulty with Java is that it is integrated into so many devices and applications – web-based and otherwise. Oracle boasts that Java runs on 97% of enterprise desktops and 3 billion mobile phones, as well as countless embedded devices, from “smart” TVs to Blu-ray Disc players.

That makes any exploitable vulnerability in Java worth its weight in gold for cyber criminals or nation-state backed hackers. A Java exploit is the key that will unlock just about every door on the Internet. The cost – to society – is large.

“Oracle/Java is probably by now one of the most successful charities in the world,” wrote Daniel Wesemann on the SANS Internet Storm Center blog. “It continues to do an outstanding job at enabling significant wealth transfer to support poor cyber criminals and their families.”

Java’s time may have come and gone. More than a few of the security experts calling attention to the latest CPU are asking out loud whether it isn’t time to ditch Java altogether. “Patch It or Pitch It” was Mr. Krebs’ headline – which aptly summed up the feelings of many security experts. Like the owners of an old junker, Java users may look at this latest CPU and ask themselves, “is it really worth the trouble to patch?”

Widely adopted programs tend to make for more lucrative mines.

Where does the blame lie? The truth is that technologies that are widely adopted and deployed almost always attract the attention of cyber criminals. ActiveX was popular back in the ‘dotcom’ era. It also became a favorite target of cyber criminals. Over time, that pushed developers and software publishers away from the platform and to alternatives…like Java.

Technologies like Java are so ubiquitous that it can be impossible for anyone – an individual or a business – to know whether a given product uses a vulnerable component until it’s too late.

But while competitors like Microsoft have endeavored to make their software update and patching process transparent, Oracle has opted to keep its security process extremely opaque. The company’s quarterly CPU releases are massive and stretch across scores of disparate products and platforms. Some vulnerabilities affect multiple products, making it hard to know what’s going on. Researchers who dig for details often come away scratching their heads.

For the latest patch, Ross Barrett, a security engineer at the firm Rapid7, points out that the top two patches for Oracle Database 12 fix an issue that Oracle patched in an earlier version of the same product a year ago. That would suggest that Oracle either failed to appreciate the reach of the vulnerability last year, or knew about it and chose to leave Oracle Database 12 customers unprotected. Either is troubling.

In response, Oracle’s management – including Chief Security Officer Mary Ann Davidson – is often combative rather than conciliatory. Ms. Davidson recently penned a derisive blog post, “Those that can’t do audit,” casting doubt on the utility of code audits and suggesting that large companies like Oracle shouldn’t have to bother with third-party audits like the little guys. The message, on security: “trust us.”

As the critical vulnerabilities in Java, MySQL and Oracle’s other products mount, however, trust is getting hard to come by.

Four Steps to Successfully Implementing Security into a Continuous Development Shop

So you live in a continuous deployment shop and you have been told to inject security into the process. Are you afraid? Don’t be. When the world moved from waterfall to agile, did everything go smoothly? Of course not – you experienced setbacks and hiccups, just like everyone else. But, eventually, you worked through the setbacks and lived to tell the tale. As with any new initiative, it will take time to mature. Take baby steps.

Step one: crawl.

Baseline the security of your application by using multiple testing methods. Static, dynamic and manual analysis will let you know exactly where you stand today. Understand that you may be overwhelmed with your results. You can’t fix it all at once, so don’t panic. At least you know what you have to work with. Integration with your SDLC tools is going to be your best friend. It will allow you to measure your progress over time and spot problematic trends early.
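
To illustrate what “measure your progress over time” can look like mechanically, here is a minimal sketch. The JSON results format and file names are assumptions for illustration, not any particular vendor’s output; a real integration would read from your scanner’s actual reporting API.

    import json
    from collections import Counter
    from datetime import date

    def summarize_findings(path):
        # Assumed format: [{"issue": "...", "severity": "high"}, ...]
        with open(path) as f:
            findings = json.load(f)
        return Counter(item["severity"] for item in findings)

    if __name__ == "__main__":
        counts = summarize_findings("scan_results.json")
        # Append one row per scan; charting this file shows your trend line.
        with open("security_baseline.csv", "a") as out:
            out.write("%s,%d,%d,%d\n" % (date.today().isoformat(),
                                         counts.get("high", 0),
                                         counts.get("medium", 0),
                                         counts.get("low", 0)))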

Step two: stand.

Come up with a plan based on your baseline. What has to be fixed now? What won’t we fix? You didn’t get here in a day, and you won’t be able to fix it all in a day. Work with your security team to build your backlog. Prioritize, deprioritize, decompose, repeat. Now would be a great time to introduce a little education into the organization. Take a look at your flaw prevalence and priorities, and train your developers accordingly. If you teach them secure coding practices, they will write more secure code the first time.

Step three: walk.

Stop digging and put the shovels down. We know that we have problems to fix from the old code (security debt). Let’s make sure we don’t add to the pile. Now is the time to institute a security gate. No new code can be merged until it passes your security policy. We’re not talking about the entire application, just the new stuff. Don’t let insecure code come into the system. By finding and addressing the problems before check-ins, you won’t slow your downstream process. This is a good time to make sure your security auditing systems integrate with your software development lifecycle systems (JIRA, Jenkins, etc.). Integrating with these systems will make the processes more seamless.
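
One way to picture the gate: a CI step that diffs the current scan against the stored baseline and fails the build when new high-severity findings appear. The sketch below reuses the assumed results format from the earlier example; a real gate would call your scanner’s actual API and policy engine.

    import json
    import sys

    def high_severity_ids(path):
        # Assumed format: each finding carries a stable "id" and a "severity".
        with open(path) as f:
            return {item["id"] for item in json.load(f)
                    if item["severity"] == "high"}

    def main():
        baseline = high_severity_ids("baseline_results.json")
        current = high_severity_ids("scan_results.json")
        new_flaws = current - baseline
        if new_flaws:
            print("Security gate FAILED: %d new high-severity flaw(s)"
                  % len(new_flaws))
            sys.exit(1)  # a non-zero exit status blocks the merge in CI
        print("Security gate passed: no new high-severity flaws.")

    if __name__ == "__main__":
        main()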

Step four: run!

Now you have a backlog of prioritized work for your team to fix and you’re not allowing the problem to get worse. You’re constantly measuring your security posture and showing continuous improvement. As you pay down your security debt you will have more time for feature development and a team with great secure coding habits.

Integrating a new standard into a system that is already working can be intimidating. But following these four steps will make the task more manageable. Also, once security is integrated, it will become a normal part of the continuous development lifecycle and your software will be better for it.

Introduction, or How Securing the Supply Chain is like “Going Green”

Application security is, as any practitioner will tell you, a hard technical and business problem unlike any other. The best advice for successfully securing software is usually to avoid thinking about it like any other problem — software security testers are not like quality assurance professionals, and many security failures arise when developers think conventionally about use cases rather than abuse cases.

But just because application security is a distinct problem does not mean that we should fail to learn from other fields, when applicable. And one of the opportunities for learning is in what appears at first glance to be a doubly difficult problem: securing the software supply chain. Why is software supply chain security needed? The majority of businesses are not building every application they use; they are turning to third parties like outsourced and commercial software vendors. According to IDG, over 62% of an enterprise’s software portfolio is developed outside the enterprise.

Over 62% of an enterprise’s software portfolio is developed outside the enterprise.

How should these enterprises be thinking about security? Software supply chain security efforts have all the challenges of conventional app sec initiatives, combined with the contractual, legal, and organizational issues of motivating change across organizational boundaries.

But the consequences of ignoring supply chain issues in an application security program are momentous. Most applications are composed of first-party code surrounding libraries and other code sourced from third parties – both commercial libraries and open source projects. Purchased applications deployed on the internet or the internal network may access sensitive customer or corporate data, and must be evaluated and secured just like first-party code, lest a thief steal data through an unlocked virtual door. And increasingly, standards like PCI are holding enterprises responsible for driving security requirements into their suppliers.
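
Even a first pass at that evaluation can be automated. The sketch below scans a requirements.txt-style manifest against a known-vulnerable list; the advisory table is a hard-coded placeholder for illustration, where a real tool would query an advisory feed such as NVD.

    # Placeholder advisory table; a real tool would query an advisory feed.
    VULNERABLE = {
        ("examplelib", "1.2.0"): "CVE-XXXX-YYYY (illustrative placeholder)",
    }

    def check_manifest(path):
        findings = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "==" not in line:
                    continue  # skip comments and unpinned entries
                name, version = line.split("==", 1)
                advisory = VULNERABLE.get((name.lower(), version.strip()))
                if advisory:
                    findings.append((name, version, advisory))
        return findings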

So what are we to do? Fortunately, software security is not the only large, complex initiative that has implications on the supply chain. Software supply chain security initiatives can take inspiration from other supply chain transformation initiatives, including the rollout of RFID in the early 2000s by Walmart and others, and — particularly — the rise of “green” supply chain efforts.

In fact, software security bears close similarity to “green” efforts to reduce CO2 emissions and waste in the supply chain. Both “green” and security have significant societal benefits, but have historically been avoided in favor of projects more directly connected to revenue. Both have recently seen turns where customers have started to demand a higher standard of performance from companies. And both require coordination of efforts across the supply chain to be successful.

This series of blog posts will explore some simple principles for supply chain transformation that can be derived from efforts to implement “green” practices or to drive RFID adoption. The basic building blocks stem from research into green efforts done by the Wharton School of Business and published in 2012, supplemented with lessons learned from RFID. We’ll cover seven principles of supply chain transformation and show you how to apply them to your software supply chain initiative:

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Collaborate to innovate
  4. Use suppliers as force multipliers
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay

I hope you enjoy the series and look forward to the discussion!

Is It Time For Customs To Inspect Software?

The Zombie Zero malware proves that sophisticated attackers are targeting the supply chain. Is it time to think about inspecting imported hardware and software?

The time for securing supply chain software is now.

If you want to import beef, eggs or chicken into the U.S., you need to get your cargo past inspectors from the U.S. Department of Agriculture. Not so hardware and software imported into the U.S. and sold to domestic corporations.

But a spate of stories about products shipping with malicious software raises the question: is it time for random audits to expose compromised supply chains?

Concerns about ‘certified, pre-pwned’ hardware and software are nothing new. In fact, they’ve permeated the boardrooms of technology and defense firms, as well as the halls of power in Washington, D.C., for years.

The U.S. Congress conducted a high-profile investigation of Chinese networking equipment makers Huawei and ZTE in 2012 with the sole purpose of exploring links between the companies and the People’s Liberation Army, and (unfounded) allegations that products sold by the companies were pre-loaded with spyware.

Of course, now we know that such threats are real. And we know because documents leaked by Edward Snowden and released in March showed how the U.S. National Security Agency intercepts networking equipment exported by firms like Cisco and implants spyware and remote access tools on it, before sending it on its way. Presumably, the NSA wasn’t the first state intelligence agency to figure this out.

If backdoors pre-loaded on your Cisco switches and routers aren’t scary enough, this week the firm TrapX issued a report on a piece of malicious software it called “Zombie Zero.” TrapX claims to have found the malware installed on scanners used in shipping and logistics to track packages and other products. The scanners were manufactured in China and sold to companies globally. The factory that manufactured the devices is located close to the Lanxiang Vocational School, an academy that is believed to have played a role in the sophisticated attacks on Google and other western technology firms dubbed “Aurora.” Traffic associated with a command-and-control botnet set up by Zombie Zero was also observed connecting to servers at the same facility – which is suggestive of, but not proof of, the school’s involvement in the attack.

TrapX said that its analysis found that 16 of 64 scanners sold to a shipping and logistics firm it consulted with were infected. The Zombie Zero malware was programmed to exploit access to corporate wireless networks at the target firms in order to attack their finance and ERP systems.

Scanners outfitted with another variant of Zombie Zero were shipped to eight other firms, including what is described as a “major robotics” manufacturer, TrapX claims.

If accurate, TrapX’s Zombie Zero report is the most flagrant example of compromised hardware being used in a targeted attack. It’s significant because it shows how factory-loaded malware on an embedded device (in this case, one running embedded XP) could be used to gain a foothold on the networks of a wide range of companies in a specific vertical.

Prior “malicious supply chain” stories haven’t had that kind of specificity. Dell warned about compromised PowerEdge motherboards back in 2010, but there was no indication that the compromised motherboards were directed to particular kinds of Dell customers. Recent news about Android smartphones pre-loaded with spyware and teapots with wireless “spy chips” seemed more indicative of an undifferentiated cyber criminal operation satisfied to cast a wide net.

Not so Zombie Zero, whose creators seemed intent both on compromising a particular type of firm (by virtue of the kind of device they used as their calling card) and on extracting a particular type of data from those firms – the hallmarks of a sophisticated “APT”-style actor.

There’s really no easy answer to this. Warning U.S. firms away from Chinese products is all well and good, but it’s also a strategy that won’t work, while punishing lots of innocent companies selling quality products. The truth is that any technology product you buy today is almost certain to contain components that were sourced in China. Any of those components could contain malicious software supplied by a compromised or unscrupulous downstream supplier. “Buy American” is even more pointless in the context of technology than it was in the automobile sector in the ’70s and ’80s.

What’s to be done? Security-conscious firms need to take much more interest in the provenance of the hardware and software they buy. Firms like Apple that are big enough to have leverage might consider random audits of equipment and firmware, looking for compromises. They might also insist on reviewing the manufacturing facilities where devices are assembled to see what kinds of quality controls the manufacturer has over the software and hardware installed in its products.
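
At its simplest, a firmware audit can start by checking that the image on a device matches a vendor-published digest. A minimal sketch follows; the file name and reference hash here are placeholders for illustration, not real vendor data.

    import hashlib

    # Placeholder digest for a known-good build, as published by the vendor.
    KNOWN_GOOD = {
        "scanner-fw-2.1.bin": "9f86d081884c7d659a2feaa0c55ad015"
                              "a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def sha256_of(path):
        # Hash the image in chunks so large firmware files fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def audit(path, name):
        expected = KNOWN_GOOD.get(name)
        if expected is None:
            return "no reference hash on file"
        return "match" if sha256_of(path) == expected else "MISMATCH - investigate"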

Beyond that, the U.S. government – via U.S. Customs and Border Protection – and like agencies in other nations could take an interest in the contents and quality of IT products that are imported from China and other countries.

A system of random inspections and audits – akin to the inspections that are done for agricultural and consumer products – could raise the stakes for firms and governments intent on slipping compromised IT equipment and embedded devices into the U.S. market.

PCI Compliance & Secure Coding: Implementing Best Practices from the Beginning

July 15, 2014 · Filed under: Compliance, SDLC

Is your SDLC process built on a shaky foundation?

A lot of the revisions to PCI DSS point toward the realization that security must be built into the development process. The foundation that ultimately controls the success or failure of this process must be built upon knowledge — that means training developers to avoid the common coding flaws that can lead to different types of vulnerabilities being introduced. So let’s take a quick look at one of the common flaws that will become part of the mandate after June 30th, 2015.

PCI 3.0 added “Broken Authentication and Session Management” (OWASP Top 10 category A2) as a category of common coding flaws that developers should protect against during the software development process. Left exposed, this category opens some pretty serious doors for attackers, as accounts, passwords, and session IDs can all be leveraged to hijack an authenticated session and impersonate unsuspecting end users. It’s great that your authentication page itself is secure – that’s your proverbial fortress door – but if an attacker can become one of your users, it doesn’t matter how strong the door was: they got through.
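
To ground what “protecting the session” means in code, here is a minimal Flask-based sketch – illustrative only, not a complete authentication system, and authenticate() is a hypothetical stub – showing two habits that blunt session hijacking: hardened cookie flags and a fresh session issued at login.

    import secrets
    from flask import Flask, request, session

    app = Flask(__name__)
    app.secret_key = secrets.token_bytes(32)  # in production, load from secure config

    # Harden the session cookie: no script access, HTTPS only, CSRF-resistant.
    app.config.update(
        SESSION_COOKIE_HTTPONLY=True,
        SESSION_COOKIE_SECURE=True,
        SESSION_COOKIE_SAMESITE="Lax",
    )

    def authenticate(username, password):
        # Placeholder: look the user up in your real user store and verify the
        # password with a salted, slow hash. Returns a user id or None.
        return None

    @app.route("/login", methods=["POST"])
    def login():
        user_id = authenticate(request.form["user"], request.form["password"])
        if user_id is None:
            return "invalid credentials", 401
        session.clear()  # drop any pre-login (possibly fixated) session state
        session["user_id"] = user_id
        return "ok"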

To have a secure development process aligned to PCI that works, developers must be aware of these types of issues from the beginning. If critical functions aren’t being secured – because they are missing authentication controls, using hard-coded passwords, or failing to limit authentication attempts – you need to evaluate how you got into this predicament in the first place. It all starts with those who design and develop your application(s). For the record, nobody expects them to become security experts, but we do expect them to know what flawed code looks like, and how NOT to introduce it over and over again.
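
Taking one item from that list – failing to limit authentication attempts – a naive in-memory limiter is only a few lines. A real deployment would persist the counters and account for distributed attackers, but this sketch shows the shape of the control.

    import time
    from collections import defaultdict

    MAX_ATTEMPTS = 5           # lockout threshold
    WINDOW_SECONDS = 15 * 60   # rolling window for counting failures

    _failures = defaultdict(list)  # username -> timestamps of failed logins

    def too_many_failures(username):
        # Keep only failures inside the rolling window, then compare.
        now = time.time()
        _failures[username] = [t for t in _failures[username]
                               if now - t < WINDOW_SECONDS]
        return len(_failures[username]) >= MAX_ATTEMPTS

    def record_failure(username):
        _failures[username].append(time.time())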

According to the April 2013 Veracode State of Software Security report, stolen credentials, brute-force attacks, and cross-site scripting (XSS) are among the most common attack methods used by hackers to exploit web applications. The revisions found in PCI DSS 3.0 did a lot to clarify what was originally left open to interpretation; it’s worth noting that redefining what quality assurance (QA) means doesn’t have to rock the world of your developers.

Change is scary – we get that – which is why the output we provide was designed for developers to consume, not just a security team. The number of successful attacks leading to access of critical data and systems via hijacked sessions will never decrease unless we coach our developers on the basics of building security into their development process.

Video Survey: What Would You Do with a Monster in Your Corner?

July 11, 2014 · Filed under: ALL THINGS SECURITY

In our final video survey installment as part of the Future of AppSec series, we talk about the idea of having a “Monster in Your Corner.” Application security often feels like a massive, intractable problem – the sort of problem that requires a really big friend to help you solve it; in our thinking, a monster.

When we talk about having a monster in your corner, what do we mean? Well, we’re talking about the Veracode platform, our automated scanning techniques and the time-saving, massively scalable approach they give you to mountains of code and thousands of applications. But we’re also talking about the brilliant security engineers and minds behind the technology, the same minds that are continuously driving our service to improve and make the world of software safer on a daily basis. And last but certainly not least, there’s our amazing Customer Services team of trained application security experts, on hand to help every customer get the most out of our cloud-based service.

Everyone at Veracode makes up the monster, and we’d love to be in your corner. Help us understand which appsec problems you need help tackling, because this is what we do day in and day out.

If You Had an Application Security Monster in Your Corner, What Problem Would It Attack?

Watch Our Other Video Surveys
