Yo, A Cautionary Tale for the VC Community

By Chris Lynch, Partner, Atlas Venture

The story of Yo will be used as a cautionary tale in the VC community for years to come. Only a few days after receiving a much-talked-about $1.2 million in Series A funding from angel investor and serial entrepreneur Moshe Hogeg, Yo suffered a massive security breach. The breach made more headlines than the funding, and took the wind out of the company's sails – possibly for good.


The jury is still out on the future of the Yo app, but being hacked put the app in the headlines for all the wrong reasons.

How did the breach happen? In the weeks that followed, several journalists offered their analysis, including @VioletBlue ("People invested $1.2 million in an app that had no security"), @mikebutcher ("App allegedly hacked by college students") and @mthwgeek ("Yo been hacked made to play Rick Astley").

While the epic rise and fall of Yo and how Yo was hacked make for an interesting story, as an investor this is not the part of the story that jumped out at me. My question is: how did an experienced investor like Moshe Hogeg (or any investor, for that matter) put money into a technology without learning about its development process? The app was built in about eight hours. What does that indicate about the QA process? What does that say about the security of the software?

Join Chris Lynch and Veracode CEO Bob Brennan for a webinar discussing why you need to be a secure supplier. Register for the webinar here!

The eight-hour development time is impressive, and demonstrates drive on the part of the app's developers. However, I have questions about the security of a product that can be developed in a single standard work day. And Yo's prospective customers – the advertising firms it would inevitably be selling its data to – would have asked the same question.

When I listen to a start-up pitch me on their next-gen/transformational/whatever product, I always question if the technology is truly enterprise-class: is it scalable, reliable, and secure? One or two groups within an enterprise may order a few of your widgets without this, but if you are gunning for the big bucks, you want an enterprise-wide deployment of your technology. This requires you prove that your product is just as reliable and secure as the largest players in the market. Because no one gets fired for buying IBM. People get canned when they purchase software from a cutting-edge start-up that ends up causing a data breach and costing the enterprise millions. Security is just table stakes if you want to play with the big boys. This includes enterprises buying your product and VCs like Atlas Venture backing your company.

When investing in a company, or product, it is essential that I understand everything I can about the technology – including the security of that product. It isn’t enough to scrutinize the need for the technology in the market and the product’s functionality. I must also understand how the product is developed, and if secure development practices are in use. Otherwise I am setting myself up to lose a lot of money in the event of a breach.

As investors in new companies and technologies we are taking risks, and without investors taking these risks we will never see the next Facebook or Instagram. However, these risks we take should be calculated jumps, not leaps of faith. Investing $1.2 million into a company without this level of due diligence is irresponsible – unless you are looking for some sort of revenue loss tax break.

I have a feeling Moshe Hogeg thought he had a winning product when he wrote that check. But he didn't conduct a full due-diligence process, and he is paying dearly for that mistake now. I feel bad for Moshe Hogeg, but I hope his misfortune can serve as a warning to the investment community as a whole and, more broadly, to buyers and users of software – whether they are consumers or businesses. Software security is as important as software functionality, and simply assuming security was a consideration during the development process is no longer good enough. Software development companies need to provide documented proof if they expect to get funding and, ultimately, to generate revenue.

Four Steps to Successfully Implementing Security into a Continuous Development Shop

So you live in a continuous deployment shop and you have been told to inject security into the process. Are you afraid? Don't be. When the world moved from waterfall to agile, did everything go smoothly? Of course not – you experienced setbacks and hiccups, just like everyone else. But eventually you worked through the setbacks and lived to tell the tale. As with any new initiative, it will take time to mature. Take baby steps.

Step one: crawl.

Baseline the security of your application by using multiple testing methods. Static, dynamic and manual analysis will let you know exactly where you stand today. Understand that you may be overwhelmed with your results. You can’t fix it all at once, so don’t panic. At least you know what you have to work with. Integration with your SDLC tools is going to be your best friend. It will allow you to measure your progress over time and spot problematic trends early.
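To make the "measure your progress over time" point concrete, here is a minimal sketch (the scan data and category names are invented for illustration) of spotting a worsening trend across baseline scans:

```python
from collections import defaultdict

def flaw_trends(scans):
    """Given a chronological list of scans, each a dict mapping flaw
    category to count, return the categories whose counts are rising."""
    by_category = defaultdict(list)
    for scan in scans:
        for category, count in scan.items():
            by_category[category].append(count)
    rising = [
        category for category, counts in by_category.items()
        if len(counts) >= 2 and counts[-1] > counts[0]
    ]
    return sorted(rising)

# Example: XSS findings are growing while SQL injection is shrinking.
history = [
    {"xss": 12, "sqli": 9},
    {"xss": 15, "sqli": 6},
    {"xss": 19, "sqli": 4},
]
print(flaw_trends(history))  # ['xss']
```

Even a simple roll-up like this, fed from your SDLC tooling, turns a one-off baseline into an early-warning signal.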

Step two: stand.

Come up with a plan based on your baseline. What has to be fixed now? What won’t we fix? You didn’t get here in a day and you won’t be able to fix it in a day. Work with your security team to build your backlog. Prioritize, deprioritize, decompose, repeat. Now would be a great time to introduce a little education into the organization. Take a look at your flaw prevalence and priorities and train your developers. If you teach them secure coding practices they will write more secure code the first time.

Step three: walk.

Stop digging and put the shovels down. We know we have problems to fix in the old code (security debt). Let's make sure we don't add to the pile. Now is the time to institute a security gate: no new code can be merged until it passes your security policy. We're not talking about the entire application, just the new stuff. Don't let insecure code into the system. By finding and addressing problems before check-in, you won't slow your downstream process. This is a good time to make sure your security auditing systems integrate with your software development lifecycle systems (JIRA, Jenkins, etc.). Integrating with these systems will make the process more seamless.
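A security gate of this kind can be sketched in a few lines of pipeline logic. This is an illustrative policy check, not any vendor's implementation; the severity names, limits, and finding format are assumptions:

```python
# Policy: how many NEW findings of each severity a change may introduce.
POLICY_MAX = {"very_high": 0, "high": 0, "medium": 5}

def gate(new_findings):
    """Return (passed, violations) for findings introduced by this change.
    Only new code is checked; existing security debt is tracked separately."""
    counts = {}
    for finding in new_findings:
        sev = finding["severity"]
        counts[sev] = counts.get(sev, 0) + 1
    violations = [
        (sev, n) for sev, n in counts.items()
        if n > POLICY_MAX.get(sev, float("inf"))
    ]
    return (not violations, violations)

ok, problems = gate([{"severity": "high", "issue": "SQL injection"}])
print(ok)  # False: one high-severity flaw exceeds the limit of zero
```

Wired into the merge step of your CI system, a check like this is what "no new code until it passes policy" looks like in practice.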

Step four: run!

Now you have a backlog of prioritized work for your team to fix and you’re not allowing the problem to get worse. You’re constantly measuring your security posture and showing continuous improvement. As you pay down your security debt you will have more time for feature development and a team with great secure coding habits.

Integrating a new standard into a system that is already working can be intimidating. But following these four steps will make the task more manageable. Also, once security is integrated, it will become a normal part of the continuous development lifecycle and your software will be better for it.

Related Links

PCI Compliance & Secure Coding: Implementing Best Practices from the Beginning

July 15, 2014
Filed under: Compliance, SDLC
Is your SDLC process built on a shaky foundation?

A lot of the revisions to PCI DSS point toward the realization that security must be built into the development process. The foundation that ultimately controls the success or failure of this process must be built upon knowledge — that means training developers to avoid common coding flaws that can lead to different types of vulnerabilities being introduced. So let’s take a quick look at one of the common flaws that will become part of the mandate on July 30th, 2015.

PCI 3.0 added "Broken Authentication and Session Management" (OWASP Top 10 Category A2) as a category of common coding flaws that developers should protect against during the software development process. Left exposed, this category opens some pretty serious doors for attackers, as accounts, passwords, and session IDs can all be leveraged to hijack an authenticated session and impersonate unsuspecting end users. It's great that your authentication page itself is secure – that's your proverbial fortress door – but if an attacker can become one of your users, it doesn't matter how strong those doors were… they got through.
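To ground the A2 category in code: one basic defense is minting a fresh, unguessable session ID at login, so a pre-set or stolen identifier cannot be replayed. A framework-neutral sketch (the in-memory dictionary is a stand-in for a real session store):

```python
import secrets

SESSIONS = {}  # session_id -> username (stand-in for a real session store)

def login(username, old_session_id=None):
    """Authentication happens elsewhere; this shows only session handling.
    Discard any pre-existing session ID and mint a new random token."""
    if old_session_id in SESSIONS:
        del SESSIONS[old_session_id]        # kill the old session (fixation defense)
    session_id = secrets.token_urlsafe(32)  # cryptographically strong, not guessable
    SESSIONS[session_id] = username
    return session_id

sid = login("alice")
assert SESSIONS[sid] == "alice"
```

Sequential or guessable session IDs, or IDs that survive across login, are exactly the kind of flaw this category is meant to catch.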

To have a secure development process aligned to PCI that works, developers must be aware of these types of issues from the beginning. If critical functions aren't secured – because they are missing authentication controls, using hard-coded passwords, or failing to limit authentication attempts – you need to evaluate how you got into this predicament in the first place. It all starts with those who design and develop your application(s). For the record, nobody expects them to become security experts, but we do expect them to know what flawed code looks like, and how NOT to introduce it over and over again.
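To show what avoiding two of those flaws looks like in practice, here is a hedged sketch (the names, environment variable, and attempt limit are all invented) that replaces a hard-coded password with a configured secret and throttles authentication attempts:

```python
import hmac
import os

FAILED = {}        # user -> consecutive failed attempts
MAX_ATTEMPTS = 5   # illustrative limit; a real system would also add delays

def check_admin(user, password):
    """A flawed version would read: if password == "s3cret": ...
    Instead: pull the secret from configuration and limit attempts."""
    if FAILED.get(user, 0) >= MAX_ATTEMPTS:
        return False  # locked out: blunts online brute-force attacks
    expected = os.environ.get("ADMIN_PASSWORD", "")
    ok = bool(expected) and hmac.compare_digest(password, expected)
    if not ok:
        FAILED[user] = FAILED.get(user, 0) + 1
    return ok
```

The constant-time comparison (`hmac.compare_digest`) is a small extra habit that avoids leaking information through timing.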

According to the April 2013 Veracode State of Software Security report, stolen credentials, brute-force attacks, and cross-site scripting (XSS) are among the most common attack methods used by hackers to exploit web applications. The revisions found in PCI DSS 3.0 did a lot to clarify what was originally left open to interpretation; it's worth noting that redefining what quality assurance (QA) means doesn't mean you are going to rock the world of your developers.

Change is scary – we get that – which is why the output we provide was designed for developers to consume, not just a security team. The number of successful attacks leading to access of critical data and systems via hijacked sessions will never decrease unless we coach our developers on the basics of building security into their development process.

Related Links

Secure Agile Q&A: API’s, IDE’s and Environment Integration


A few weeks back, I hosted a webinar called "Secure Agile Through Automated Toolchains: How Veracode R&D Does It", in which I discussed the importance of security testing and how to integrate it into the Agile SDLC. There were so many questions in the open discussion that followed that I'm taking the time to answer them here. Thank you to everyone who attended the live webinar – now on to your questions:

Q: Can you upload non-compiled applications, from the IDE, using the IDE plugins?

A: Yes, you can upload any kind of document through both the Eclipse and Visual Studio IDE plugins. It is also possible to create other plugins using our Integrations SDK.

Q: What other Continuous Integration tools do you have a plugin for?

A: Veracode has the ability to integrate with several Continuous Integration environments. Our Jenkins Plug-In makes it easy to automate uploading to Veracode from your CI environment. In addition, Veracode provides APIs and how-to instructions for automating Veracode upload into Microsoft Team Foundation Server (TFS), Maven and Bamboo CI environments.

Q: Do you have any plugins for Visual Studio which can be integrated with Sandbox and JIRA?


A: The current version of the Visual Studio plugin cannot be integrated with Sandbox, but we plan to provide this functionality in the near future. There is no specific integration between the Visual Studio plugin and JIRA. You can use the Visual Studio plugin to download scan results directly from the Veracode Platform.

Q: My company is a Microsoft shop – when will these tools be ready for Visual Studio/TFS environment?


A: Instructions for integration of the Veracode service with Microsoft Team Foundation Server (TFS) are available today in the Veracode Online Help. We want to develop an end-to-end workflow that follows the process described in the webinar. The goal is to provide it in the second half of the year.

Q: Will you also be providing an IntelliJ IDEA integration SDK?

A: At this point we do not have plans to provide a plugin for IntelliJ IDEA. The goal of the SDK is to assist with integration into environments that are not supported out of the box.

Q: Do you have a reference implementation using TeamCity instead of Jenkins?


A: We do not have a reference implementation for TeamCity. We recommend using our API wrapper to integrate Veracode with TeamCity. Please see our Integrations SDK for more information.

This concludes this first round of Q&A from “Secure Agile Through Automated Toolchains: How Veracode R&D Does It”. Be sure to check out the on-demand webinar if you missed it, and come back here soon for more of this Q&A.


While you wait for part two, you might also be interested in a webinar from my colleagues Chris Eng, Veracode's VP of Research, and Ryan O'Boyle, Senior Security Researcher, titled "Building Security Into the Agile SDLC: View from the Trenches". Chris and Ryan discuss how we've embedded security into our own Agile Scrum processes to rapidly deliver new applications without exposing them to critical vulnerabilities. If you have any more questions regarding anything from the webinar, I would love to hear from you in the comments section below.

Benefits of Binary Static Analysis


1. Coverage, both within applications you build and within your entire application portfolio

One of the primary benefits of binary static analysis is that it allows you to inspect all the code in your application. Mobile apps especially have binary components, but web apps, legacy back-office and desktop apps do too. You don't want to analyze only the code you compile from source, but also the code you link in from components. Because binary analysis doesn't require source code, vendors are comfortable letting you obtain an independent code-level analysis of the software you are purchasing through procurement. This enables code-level security testing of the COTS applications in your organization's portfolio. Binary analysis lets you cover all of the code running in your organization.

2. Tight integration into the build system and continuous integration (CI) environment

If you integrate binary static analysis into your CI environment, you can achieve 100% automation with no manual (developer) steps. The build process can run the binary analysis by calling an API, and results can be automatically brought into a defect ticketing system, also through an API. Code analysis is now transparent and inescapable. Developers see security defects in their normal defect queue, and they can fix security flaws without performing any configuration or testing, saving valuable developer time.
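That pipeline reduces to a few lines of CI glue. The `analysis_api` and `ticket_api` objects below are placeholders for illustration, not real Veracode or JIRA client APIs:

```python
def run_security_stage(build_artifact, analysis_api, ticket_api):
    """Hypothetical CI stage: submit the compiled artifact for binary
    analysis, then file each resulting flaw in the defect tracker so it
    lands in the developers' normal queue."""
    scan_id = analysis_api.submit(build_artifact)     # placeholder call
    flaws = analysis_api.wait_for_results(scan_id)    # placeholder call
    for flaw in flaws:
        ticket_api.create_issue(
            title=f"[security] {flaw['category']} in {flaw['file']}",
            severity=flaw["severity"],
        )
    return len(flaws)
```

The important design point is that the stage produces ordinary defect tickets, so no separate security workflow is needed.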

3. Contextual analysis

Binary static analysis analyzes your code along with all the other components of the application, within the context of the platform it was built for. It can follow tainted source data through the complete data flow to a risky sink function. Partial analysis of pieces of a program misses this context and will be less accurate, producing both false positives and false negatives. Any security expert will tell you context is extremely important: a section of code can be rendered insecure or secure by the code it is called from or the code it calls into.
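Here is the kind of cross-component flow that partial analysis misses. If only the handler's code is analyzed, the tainted value appears to vanish into the helper; with the whole program in view, the source-to-sink path is visible (the code is an invented illustration):

```python
# helper.py -- imagine this lives in a separately built component
def build_query(name):
    # Sink: string-concatenated SQL. Dangerous if `name` is attacker-controlled.
    return "SELECT * FROM users WHERE name = '" + name + "'"

# handler.py
def handler(request_params):
    # Source: untrusted input from the request.
    name = request_params["name"]
    return build_query(name)  # taint crosses the component boundary here

print(handler({"name": "bob' OR '1'='1"}))
# The injected quote survives all the way to the sink.
```

Analyzing `handler` alone would need to guess what `build_query` does with its argument; analyzing both together makes the injection path explicit.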

With a complete program you can perform Software Composition Analysis (SCA) to identify components that have known vulnerabilities in them. A9-Using Components with Known Vulnerabilities is one of the OWASP Top 10 Risks so you want to make sure you can analyze the entire program. Veracode has built SCA into the binary static analysis process.
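At its core, the SCA step is a match of your component inventory against known advisories. A toy sketch with invented component names and version data:

```python
KNOWN_VULNS = {
    # component -> versions with published vulnerabilities (invented data)
    "examplelib": {"1.0.1", "1.0.2"},
    "webtoolkit": {"2.3.0"},
}

def vulnerable_components(inventory):
    """inventory: dict of component -> version detected in the binary."""
    return sorted(
        name for name, version in inventory.items()
        if version in KNOWN_VULNS.get(name, set())
    )

print(vulnerable_components({"examplelib": "1.0.2", "webtoolkit": "2.4.1"}))
# ['examplelib']
```

The hard part in practice is building an accurate inventory from the binary, which is exactly what whole-program analysis provides.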

Veracode's binary static analysis process.

4. Higher fidelity of analysis

Some languages, like C and C++, give the compiler latitude to generate different machine code. Source code analysis is blind to decisions made by the compiler. There are documented cases of both GCC and the Microsoft C/C++ compiler removing security checks and memory-clearing code, opening up security holes. MITRE has categorized this vulnerability as CWE-14: Compiler Removal of Code to Clear Buffers. The paper "WYSINWYX: What You See Is Not What You Execute" by Gogul Balakrishnan describes how "there can be a mismatch between what a programmer intends and what is actually executed on the processor."

More on binary static analysis

Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part II

Welcome to another round of Agile SDLC Q&A. Last week Ryan and I took some time to answer questions from our webinar, “Building Security Into the Agile SDLC: View from the Trenches“; in case you missed it, you can see Part I here. Now on to more of your questions!

Q. What would you recommend as a security process around continuous build?


Chris: It really depends on what the frequency is. If you’re deploying once a day and you have automated security tools as a gating function, it’s possible but probably only if you’ve baked those tools into the build process and minimized human interaction. If you’re deploying more often than that, you’re probably going to start thinking differently about security – taking it out of the critical path but somehow ensuring nothing gets overlooked. We’ve spoken with companies who deploy multiple times a day, and the common theme is that they build very robust monitoring and incident response capabilities, and they look for anomalies. The minute something looks suspect they can react and investigate quickly. And the nice thing is, if they need to hotfix, they can do it insanely fast. This is uncharted territory for us; we’ll let you know when we get there.

Q. What if you only have one security resource to deal with app security – how would you leverage just one resource with this “grooming” process?


Chris: You’d probably want to have that person work with one Scrum team (or a small handful) at a time. As they security groomed with each team, they would want to document as rigorously as possible the criteria that led to them attaching security tasks to a particular story. This will vary from one team to the next because every product has a different threat model. Once the security grooming criteria are documented, you should be able to hand off that part of the process to a team member, ideally a Security Champion type person who would own and take accountability for representing security needs. From time to time, the security SME might want to audit the sprint and make sure that nothing is slipping through the cracks, and if so, revise the guidelines accordingly.

Q. Your “security champion” makes me think to the “security satellite” from BSIMM; do you have an opinion on BSIMM applicability in the context of Agile?


Chris: Yes, the Security Satellite concept maps very well to the Security Champion role. BSIMM is a good framework for considering the different security activities important to an organization, but it’s not particularly prescriptive in the context of Agile.

Q. We are an agile shop with weekly release cycles. The time between when the build is complete and the release is about 24 hours. We are implementing web application vulnerability scans for each release. How can we fix high-risk vulnerabilities before each release? Is it better to delay the release or fix them in the next release?


Chris: One way to approach this is to put a policy in place to determine whether or not the release can ship. For example, “all high and very high severity flaws must be fixed” makes the acceptance criteria very clear. If you think about security acceptance in the same way as feature acceptance, it makes a lot of sense. You wouldn’t push out the release with a new feature only half-working, right? Another approach is to handle each vulnerability on a case-by-case basis. The challenge is, if there is not a strong security culture, the team may face pressure to push the release regardless of the severity.

Q. How do you address findings identified from regular automated scans? Are they added to the next day’s coding activities? Do you ever have a security sprint?


Ryan: Our goal is to address any findings identified within the sprint. This means while it may not be next-day it will be very soon afterwards and prior to release. We have considered dedicated security sprints.

Q. Who does security grooming – the development team or the security team? And what checklist is used in grooming?


Ryan: Security grooming is a joint effort between the teams. In some cases the security representative, Security Architect in our terminology, attends the full team grooming meeting. In the cases where the full team grooming meeting would be too large of a time commitment for the Security Architect, they will hold a separate, shorter security grooming session soon afterwards instead.

Q. How important to your success was working with your release engineering teams?


Chris: Initially not very important, because we didn’t have dedicated release engineering. The development and QA teams were in charge of deploying the release. Even with a release engineering team, though, most of the security work is done well before the final release is cut, so the nature of their work doesn’t change much. Certainly it was helpful to understand the release process – when is feature freeze, code freeze, push night, etc. – and the various procedures surrounding a release, so that you as a security team can understand their perspective.

Q. How do you handle accumulated security debt?


Chris: The first challenge is to measure all of it, particularly debt that accumulated prior to having a real SDLC! Even security debt that you're aware of may never get taken into a sprint, because some feature will always be deemed more important. So far, the way we've been able to chip away at security debt is to advocate directly with product management and the technical leads. This isn't exactly ideal, but it beats not addressing it at all. If your organization ever pushes to reduce tech debt, it's a good opportunity to point out that security debt should be considered part of tech debt.


This now concludes our Q&A. A big thank-you to everyone who attended the webinar and made it such a success. If you have any more questions, we would love to hear from you in the comments section below. If you are interested in learning more about Agile security, you might be interested in an upcoming webinar from Veracode's director of platform engineering, Peter Chestna. On April 17th, Peter will host a webinar entitled "Secure Agile Through An Automated Toolchain: How Veracode R&D Does It", in which he will share how we've leveraged Veracode's cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA) – and why it's become essential to our success. Register now!

Automating Good Practice Into The Development Process

April 7, 2014

I’ve always liked code reviews. Can I make others like them too?


I’ve understood the benefit of code reviews, and enjoyed them, for almost as long as I’ve been developing software. It’s not just the excuse to attack others (although that can be fun), but the learning—looking at solutions other people come up with, hearing suggestions on my code. It’s easy to fall into patterns in coding, and not realize the old patterns aren’t the best approach for the current programming language or aren’t the most efficient approach for the current project.

Dwelling on good code review comments can be a great learning experience. Overlooked patterns can be appreciated, structure and error handling can be improved, and teams can develop more consistency in coding style. Even vague review feedback like "I don't get this" can identify flaws or highlight where code needs to be reworked for clarity and easier maintenance.

But code reviews are rare. Many developers don't like them. Some management teams don't see the value, while other managers claim code reviews are good but don't make room for them in the schedule (or push them out when a project starts to slip). I remember one meeting where the development manager said, "Remember what will be happening after code freeze." He expected us to say "code reviews!", but a couple of members of the team responded "I'm going to Disney World!" Everyone laughed, but the Disney trips were enjoyed while the code reviews never happened.

In many groups and projects, code reviews never happened, except when I dragged people to my cubicle and forced them to look at small pieces of code. I developed a personal strategy which helped somewhat: when I'm about ready to commit a change set, I review the change as though it were headed for a formal code review – "What would someone complain about if they saw this change?" It takes discipline and doesn't have most of the benefits of a real code review, but it has helped improve my code.

The development of interactive code review tools helped the situation. Discussions on changes could be asynchronous instead of requiring a common meeting time, and reviewers could see and riff on each other's comments. It was still hard to encourage good comments and find the time for code reviews (even if "mandated"), but the situation was better.

The next advancement was integrating code review tools into the source control workflow. This required (or at least strongly encouraged depending on configuration) approved code reviews before allowing merges. The integration meant less effort was needed to set up the code reviews. There’s also another hammer to encourage people to review the code: “Please review my code so I can commit my change.”

The barriers to code reviews also exist for security reviews, but the problem can be worse as many developers aren’t trained to find security problems. Security issues are obviously in-scope for code reviews, but the issue of security typically isn’t front of mind for reviewers. Even at Veracode the focus is on making the code work and adjusting the user interface to be understandable for customers.

But we do have access to Veracode’s security platform. We added “run our software on itself” to our release process. We would start a scan, wait for the results, review the flaws found, and tell developers to fix the issues. As with code reviews, security reviews can be easy to put off because it takes time to go through the process steps.

As with code reviews, we have taken steps to integrate security review into the standard workflow. The first step was to automatically run a scan during automated builds. A source update to a release branch causes a build to be run, sending out an email if the build fails. If the build works, the script uses the Veracode APIs to start a static scan of the build. This eliminated the first few manual steps in the security scan process. (With the Veracode 2014.2 release the Veracode upload APIs have an “auto start” feature to start a scan without intervention after a successful pre-scan, making automatic submission of scans easier.)
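The build hook described above reduces to: run the build, and on success hand the artifact to the scanner with no human in the loop. A simplified sketch follows; the scan command is a placeholder standing in for the real Veracode API wrapper invocation:

```python
import subprocess

def build_and_scan(build_cmd, scan_cmd, notify):
    """Run the build; on failure notify the team, on success submit
    the artifact for a static scan with no human in the loop."""
    build = subprocess.run(build_cmd, capture_output=True)
    if build.returncode != 0:
        notify("build failed")
        return False
    # e.g. the API wrapper's upload call with auto-start enabled
    scan = subprocess.run(scan_cmd, capture_output=True)
    if scan.returncode != 0:
        notify("scan submission failed")
    return scan.returncode == 0
```

Because the scan is just another build step, a broken submission surfaces the same way a broken compile does.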

To further reduce the overhead of the security scans, we improved the Veracode JIRA Import Plugin to match our development process. After a scan completes, the Import plugin notices the new results, and imports the significant flaws into JIRA bug reports in the correct JIRA project. Flaws still need to be assigned to developers to fix, but it now happens in the standard triage process used for any reported problem. If a flaw has a mitigation approved, or if a code change eliminates the flaw, the plugin notices the change and marks the JIRA issue as resolved.

The automated security scans aren’t our entire process. We also have security reviews for proposals and designs so developers understand the key security issues before they start to code, and the security experts are always available for consultation in addition to being involved in every stage of development. The main benefit of the automated scans is that they take care of the boring review to catch minor omissions and oversights in coding, leaving more time for the security experts to work on the higher level security strategy instead of closing yet another potential XSS issue.


Veracode’s software engineers understand the challenge of building security into the Agile SDLC. We live and breathe that challenge. We use our own application security technology to scale our security processes so our developers can go further faster. On April 17th, our director of platform engineering, Peter Chestna, will share in a free webinar, how we’ve leveraged our cloud-based platform to integrate application security testing with our Agile development toolchain—and why it’s become essential to our success. Register for Peter’s webinar, “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It” to learn from our experience.

Strategies for Rapid Adoption of a Security Programme within a Large Enterprise

A large-scale deployment of the Veracode static code analysis tool across a large enterprise presents a number of unique challenges, such as understanding your application estate, prioritising your applications for scanning, and communicating with your application owners. This blog post provides some guidance based on my experience of delivering several hundred scanned applications in a 14-month time frame.


Understanding Your Application Estate

The first challenge is to understand the nature of your application estate – where the applications are hosted, where the codebases are, who is responsible for building and maintaining them, what development languages are used, how critical they are to the organisation, and so on. Most enterprise organisations maintain an asset inventory of some sort; you should immediately familiarise yourself with it and determine the extent of the information recorded and what export formats are available. In my experience two problems exist: data accuracy and completeness. In many instances the contact details of application owners were incorrect or missing entirely. In our application repository the programming languages and frameworks are not recorded, and in only a few instances is the source code repository location specified. After my initial attempts to use the application repository as the principal data source, I realised I would need to augment it with my own data gathered during an initial application profiling phase.

Application Profiling and Assigning Criticality

My initial attempts at profiling applications used a crude MS Word questionnaire covering items such as technical contacts (capable of building the binaries), application language and frameworks, source code repository, binary image size, application version, and continuous integration environment. This questionnaire was sent to the registered application owners, and the responses were entered manually into a tracking spreadsheet based on an export from the application repository. It soon became apparent that this method was cumbersome and time-consuming, so I deployed a web-form version of the profiling questionnaire which captured the responses to a backing spreadsheet, enabling easy import into the main spreadsheet. Reviewing the responses, it became apparent that not all applications would be suitable for static code analysis due to factors such as host operating system or language incompatibility. Once those applications were eliminated, it was necessary to prioritise the list to ensure that our licence usage was targeted at the most critical business applications.

In order to ensure you are focused on the most critical applications, consider a number of indicators: does the application require an application penetration test? Is it externally facing? Does it have particular regulatory requirements? Has it been the subject of recent incidents? For example, the Monetary Authority of Singapore (MAS) guidelines mandate a code review process which may be fulfilled in part by static code analysis, so I used MAS compliance as an immediate inclusion criterion. Whatever selection criteria you employ, it is important that you are able to justify them, both in terms of Veracode licence usage and the manpower of the application teams who will be required to perform the upload, scan, and review.

Communication with Application Owners and Teams

Armed with your list of applications, you will now need to gain a mandate from senior management within the application delivery sector of your organisation, supporting your programme and encouraging the participation of application teams with a “carrot and stick” message; for instance, they need to comply with MAS regulations, and Veracode will help them achieve that compliance. It is important that this message come from the upper management of the application teams, and that it stress the value of the programme rather than arriving as an edict from the upper echelons of the security organisation. Our programme failed to achieve an initial foothold due to our lack of a clear mandate, which gave recalcitrant application teams an easy opt-out. In many cases application teams were easily convinced of the value of early flaw detection and engaged with the programme quite willingly; in a number of cases, however, no amount of persuading could convince them to participate. One of the most frequent objections encountered was the perceived workload of onboarding to Veracode. It is important that you make the process of account creation as efficient as possible, and that you have the relevant support in place in terms of documentation, knowledge bases, support e-mails, etc. Many teams were pleasantly surprised at the ease of the process, and it was apparent that this news propagated within the development communities, as we saw reduced friction as the programme progressed.

In order to ensure that our programme was not constrained by team resources, I automated the process of user account and application profile creation on the Veracode platform by leveraging the rich APIs available. The application spreadsheet was used as the data source, and a Microsoft Visual Studio Tools for Office (VSTO) plugin was developed which provided an additional toolbar within Excel (this is the subject of a future blog post). This plugin allowed for the creation, modification, or deletion of accounts on the platform based on the underlying spreadsheet data. Although I invested significant upfront effort in developing the tooling, I reaped the benefits later in the programme when I was able to completely onboard up to a hundred applications in one day. Additionally, I was able to add metadata specific to our organisation (business alignment, investment strategy, software vendor) to the application profiles on the platform, which greatly enriched the reports generated within the platform’s analytics engine.
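For teams scripting this themselves rather than building a VSTO plugin, the per-row mapping might look like the sketch below. The endpoint and parameter names are assumptions modelled on Veracode's legacy XML API (e.g. createapp.do); consult the current API documentation and use proper API credentials before relying on any of them.

```python
# Sketch only: endpoint and parameter names are assumptions modelled
# on Veracode's legacy XML API; verify against current API docs.
API_BASE = "https://analysiscenter.veracode.com/api/5.0"

def createapp_params(row):
    """Map one spreadsheet row to application-creation request parameters."""
    return {
        "app_name": row["name"],
        "business_criticality": row["criticality"],
        # organisation-specific metadata enriches platform reporting;
        # the field name here is an assumption, not a documented parameter
        "custom_field_1": row.get("business_alignment", ""),
    }

rows = [{"name": "payments-api", "criticality": "Very High",
         "business_alignment": "Core Banking"}]
for row in rows:
    params = createapp_params(row)
    print(API_BASE + "/createapp.do", params)
    # a real run would POST each row with authenticated credentials,
    # e.g. requests.post(url, params=params, auth=...)
```

Separating the row-to-parameters mapping from the HTTP call keeps the mapping trivially testable, which matters when a single bad batch could create a hundred misnamed application profiles.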

Within a few weeks of the programme it became apparent that teams were often asking the same questions, so I started capturing these questions and their answers as a set of Frequently Asked Questions available on our internal social media-like platform. Through appropriate tagging and hyperlinking I quickly developed an organisation specific knowledge base which again lowered the barrier to entry for application teams who no longer had to wait for an answer or struggle with a problem. During the midpoint of our programme I identified a few obvious success stories (applications which had performed a number of scans and were showing a clear improvement in security posture) and I asked the teams working on those applications to contribute their experience to our social media platform in order to encourage the participation of other teams.

This brief blog post highlighted some of the challenges facing a new Veracode static code analysis deployment, along with some of the solutions I came across in the process. I hope the approaches described here will ensure that you are soon well underway with analysis. In the process you will discover not only strengths in your team but also flaws during review – the subject of my next blog post.

Read Colin’s earlier blog post: How to Run a Successful Proof of Concept for an Application Security Programme

How to Run a Successful Proof of Concept for an Application Security Programme

So you’ve got upper management buy-in for your application security proof of concept and are ready to start scanning applications: how do you make sure your proof of concept (PoC) is a success and demonstrate the need to progress to a full-scale programme? This article describes some of the lessons learned at the start of our large-scale deployment of Veracode within our organisation.

Socialising the Proof of Concept

The first step is to socialise the PoC internally through word of mouth, discussion forums, and developer communities, driving interest in the availability of a new tool for developers which will assist in the development process and produce better code. Ensure that you are familiar with the platform and the various IDE plugins and can demonstrate their effectiveness on a real-world application (we used the OWASP WebGoat application as our technology demonstrator). The emphasis should be on proactive use of the tool to detect flaws at the point of introduction rather than as a security measurement tool. Key success factors for development teams will be the integration of the tool within common IDEs and the ease of adoption (specifically, no need for in-depth product knowledge or the use of vendor specialists). Once you are familiar with the platform and your toolbox, you will need some applications to scan.

Application Selection

The selection of which applications to scan is a key success factor: it is important that the applications chosen be of strategic significance to the organisation in order to demonstrate the relevance of the findings to senior stakeholders. Much of our difficulty arose in determining suitable applications, since this information may not be readily presented in an application repository (and an application repository may be inaccurate or not exist at all). The forging of informal networks and word of mouth will be essential to success – talk to people in the canteen and be on the lookout for internal events of interest to developers; in our case we had access to an internal social media site, which was an excellent platform for creating interest and awareness. Do resist the temptation to scan applications of low importance simply because they happen to be available; this will reduce the impact of your PoC.

Building and Scanning

Now for the moment of truth: building the code and performing the scans. It is vital that you or your team have a working knowledge of the mechanics of building software – getting access to source code is one thing, but a request that a busy development team perform the requisite debug builds for scanning on your behalf is unlikely to be met favourably. The ability to speak the developers’ language is key to establishing an application security programme, and demonstrating our competence with their environments and toolchains gave us credibility when conducting the initial reviews of the scan results. Be sure to review the findings internally before distributing them to the teams, and ensure that your team is familiar with the nature of the findings and can speak confidently to the risk presented by such flaws. Establishing and maintaining the credibility of your team is vital at this stage.

By this stage there is certain to be a high level of interest in the PoC from within various parts of your organisation and it is important to demonstrate results as soon as possible, and indeed many teams will be eager to see their application’s results. Be sure to manage expectations around expected scan times from Veracode to avoid any possible negative perceptions of the use of a SaaS product; the emphasis should be on the ease of adoption and lack of specialised knowledge.

Common Criticisms

You should be prepared for a good deal of scepticism around your initiative, ranging from out-dated views of the capability of static code analysis tools to a belief that no problems exist within the organisation. A common objection to the use of static code analysis is a high false positive count or a lack of actionable output; ensure that every application scanned in the PoC has a readout call to demonstrate the accuracy of the findings and the specificity of the results in the flaw viewer. Negative feedback from a development team to their management could be disastrous for your future programme.

At the conclusion of the PoC you will need to demonstrate the value of your PoC to senior stakeholders: the key message is the rate of flaw detection achieved, emphasising that such rates could not have been matched by any manual review process. In our case we demonstrated a comparison between a Veracode scan and the traditional approach of an Application Penetration Test, with the benefits in terms of cost (in our case an order of magnitude) and timescales clearly favouring the Veracode analysis. Identifying the so-called “smoking gun” is useful in demonstrating the need for a larger-scale programme; however, be sensitive to the application team concerned and emphasise your team’s advisory role in reducing vulnerabilities.

By now you will have a clearer view of the application estate within your organisation and an appreciation for the challenges you will face in scaling this process into a fully-fledged programme.

Static Testing vs. Dynamic Testing

With reports of website vulnerabilities and data breaches regularly featuring in the news, securing the software development life cycle (SDLC) has never been so important. The enterprise must therefore choose its security techniques carefully. Static and dynamic analyses are two of the most popular types of security test. Before implementation, however, the security-conscious enterprise should examine precisely how both types of test can help to secure the SDLC. Testing, after all, can be considered an investment that should be carefully monitored.


Static and Dynamic Analyses Explained

Static analysis is performed in a non-runtime environment. Typically a static analysis tool will inspect program code for all possible run-time behaviors and seek out coding flaws, back doors, and potentially malicious code. Dynamic analysis adopts the opposite approach and is executed while a program is in operation. A dynamic test will monitor system memory, functional behavior, response time, and overall performance of the system. This method is not wholly dissimilar to the manner in which a malicious third party may interact with an application. Having originated and evolved separately, static and dynamic analysis have, at times, been mistakenly viewed in opposition. There are, however, a number of strengths and weaknesses associated with both approaches to consider.
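The distinction can be illustrated with a toy static check: the snippet below inspects code as data, flagging a SQL query built by string concatenation (a classic injection pattern) without ever executing it – something a dynamic test would only catch if that code path happened to run with a malicious input. The flawed sample function is invented for illustration.

```python
import ast

# Invented sample code containing a deliberate flaw (CWE-89 pattern)
SOURCE = '''
def get_user(cursor, name):
    # user input concatenated directly into SQL
    cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
'''

def find_sql_concat(source):
    """Toy static check: flag .execute() calls whose first argument
    is built by concatenation, by walking the syntax tree - the code
    under inspection is never run."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], ast.BinOp)):
            findings.append(node.lineno)
    return findings

print(find_sql_concat(SOURCE))  # reports the flawed line numbers
```

Real static analysers model far more than this (data flow, taint propagation, framework semantics), but the principle is the same: every possible path through the code is visible on the page, whether or not it would ever execute in a given test run.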

Strengths and Weaknesses of Static and Dynamic Analyses

Static analysis, with its whitebox visibility, is certainly the more thorough approach and may also prove more cost-efficient given its ability to detect bugs at an early phase of the software development life cycle. For example, if an error is spotted at a review meeting or a desk-check – both forms of static analysis – it can be relatively cheap to remedy; had the error become lodged in the system, costs would multiply. Static analysis can also unearth errors that would not emerge in a dynamic test.

Dynamic analysis, on the other hand, is capable of exposing a subtle flaw or vulnerability too complicated for static analysis alone to reveal, and can also be the more expedient method of testing. A dynamic test, however, will only find defects in the parts of the code that are actually executed.

The enterprise must weigh these considerations against the complexities of its own situation. Application type, time, and company resources are some of the primary concerns. The level of technical debt the enterprise is willing to take on may also be a factor: a certain amount may be acceptable if the financial benefits of beating a competitor to market outweigh the potential savings of more rigorously tested code. While both static and dynamic tests have their shortcomings, the enterprise should not have to face an either/or choice; even if static analysis could be considered the more thorough method, it does not follow that it should automatically be chosen over dynamic analysis wherever the choice arises.

When to Automate

While static and dynamic analysis can be performed manually they can also be automated. Used wisely, automated tools can dramatically improve the return on testing investment. Automated testing tools are an ideal option in certain situations. For example, automation may be used to test a system’s reaction to a heavy volume of users or to confirm a bug fix works as expected. It also helps to automate tests that are run on a regular basis during the SDLC.
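A minimal sketch of that last point, assuming a hypothetical truncation helper whose bug has already been fixed: regression tests like these are cheap enough to automate on every build.

```python
# Hypothetical helper with a previously-fixed bug: truncation once
# exceeded the limit because the ellipsis was appended after slicing.
def truncate(text, limit):
    """Shorten text to at most `limit` characters, marking the cut."""
    return text if len(text) <= limit else text[:limit - 1] + "…"

def test_short_input_unchanged():
    assert truncate("yo", 10) == "yo"

def test_long_input_respects_limit():
    # the regression test guarding the original bug fix
    assert len(truncate("a" * 50, 10)) == 10

# run automatically on every build, e.g. via: pytest test_truncate.py
```

Once checked into the build pipeline, a test like this re-verifies the fix on every commit at effectively zero marginal cost, which is exactly where automation pays for itself.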

As the enterprise strives to secure the SDLC, it must be noted that there is no panacea. Neither static nor dynamic testing alone can offer blanket protection. Ideally, an enterprise will perform both static and dynamic analyses. This approach will benefit from the synergistic relationship that exists between static and dynamic testing.
