PCI Compliance & Secure Coding: Implementing Best Practices from the Beginning

July 15, 2014
Filed under: Compliance, SDLC
Is your SDLC process built on a shaky foundation?

A lot of the revisions to PCI DSS point toward the realization that security must be built into the development process. The foundation that ultimately controls the success or failure of this process is knowledge: that means training developers to avoid the common coding flaws that introduce vulnerabilities. So let's take a quick look at one of the common flaws that becomes part of the mandate on June 30th, 2015.

PCI 3.0 added "Broken Authentication and Session Management" (OWASP Top 10 Category A2) to the list of common coding flaws that developers should protect against during the software development process. Left exposed, this category opens some pretty serious doors for attackers: accounts, passwords, and session IDs can all be leveraged to hijack an authenticated session and impersonate unsuspecting end users. It's great that your authentication page itself is secure; that's your proverbial fortress door. But if an attacker can become one of your users, it doesn't matter how strong those doors were...they got through.

To have a secure development process aligned to PCI that works, developers must be aware of these types of issues from the beginning. If critical functions aren't being secured because they are missing authentication controls, use hard-coded passwords, or fail to limit authentication attempts, you need to evaluate how you got into this predicament in the first place. It all starts with those who design and develop your application(s). For the record, nobody expects them to become security experts, but we do expect them to know what flawed code looks like, and how NOT to introduce it over and over again.

According to the April 2013 Veracode State of Software Security report, stolen credentials, brute-force attacks, and cross-site scripting (XSS) are among the most common attack methods used by hackers to exploit web applications. The revisions in PCI DSS 3.0 did a lot to clarify what was originally left open to interpretation, and it's worth noting that redefining what quality assurance (QA) means doesn't have to rock the world of your developers.

Change is scary; we get that. That's why the output we provide was designed for developers to consume, not just a security team. The number of successful attacks that reach critical data and systems via hijacked sessions will never decrease unless we coach our developers on the basics of building security into their development process.

Secure Agile Q&A: APIs, IDEs and Environment Integration

A few weeks back, I hosted a webinar called "Secure Agile Through Automated Toolchains: How Veracode R&D Does It", in which I discussed the importance of security testing and how to integrate it into the Agile SDLC. There were so many questions in the open discussion following the webinar that I am taking the time to follow up on them here. Thank you to everyone who attended the live webinar, and now on to your questions:

Q: Can you upload non-compiled applications from the IDE using the IDE plugins?

A: Yes, you can upload any kind of file through both the Eclipse and Visual Studio IDE plugins. It is also possible to create other plugins using our Integrations SDK.

Q: What other Continuous Integration tools do you have a plugin for?

A: Veracode has the ability to integrate with several Continuous Integration environments. Our Jenkins plugin makes it easy to automate uploading to Veracode from your CI environment. In addition, Veracode provides APIs and how-to instructions for automating Veracode uploads in Microsoft Team Foundation Server (TFS), Maven, and Bamboo CI environments.

Q: Do you have any plugins for Visual Studio which can be integrated with Sandbox and JIRA?

A: The current version of the Visual Studio plugin cannot be integrated with Sandbox, but we plan to provide this functionality in the near future. There is no specific integration between the Visual Studio plugin and JIRA; you can, however, use the Visual Studio plugin to download scan results directly from the Veracode Platform.

Q: My company is a Microsoft shop – when will these tools be ready for the Visual Studio/TFS environment?

A: Instructions for integrating the Veracode service with Microsoft Team Foundation Server (TFS) are available today in the Veracode Online Help. We want to develop an end-to-end workflow that follows the process described in the webinar; the goal is to provide it in the second half of the year.

Q: Will you also be providing an IntelliJ IDEA integration SDK?

A: At this point we do not have plans to provide a plugin for IntelliJ IDEA. The goal of the SDK is to assist with integration into environments that are not supported out of the box.

Q: Do you have a reference implementation using TeamCity instead of Jenkins?

A: We do not have a reference implementation for TeamCity. We recommend using our API wrapper to integrate Veracode with TeamCity. Please see our Integrations SDK for more information.

This concludes the first round of Q&A from "Secure Agile Through Automated Toolchains: How Veracode R&D Does It". Be sure to check out the on-demand webinar if you missed it, and come back here soon for more of this Q&A.

While you wait for part two, you might also be interested in a webinar from my colleagues Chris Eng, Veracode's VP of Research, and Ryan O'Boyle, Senior Security Researcher, titled "Building Security Into the Agile SDLC: View from the Trenches". Chris and Ryan discuss how we've embedded security into our own Agile Scrum processes to rapidly deliver new applications without exposing them to critical vulnerabilities. If you have any more questions regarding anything from the webinar, I would love to hear from you in the comments section below.

Benefits of Binary Static Analysis

1. Coverage, both within applications you build and within your entire application portfolio

One of the primary benefits of binary static analysis is that it allows you to inspect all the code in your application. Mobile apps especially have binary components, but web apps, legacy back-office and desktop apps do too. You want to analyze not only the code you compile from source but also the code you link in from components. Binary analysis also lets vendors feel comfortable submitting their products for an independent code-level analysis, since no source code needs to change hands; this enables you to do code-level security testing of the COTS applications in your organization's portfolio. In short, binary analysis lets you cover all of the code running in your organization.

2. Tight integration into the build system and continuous integration (CI) environment

If you integrate binary static analysis into your CI environment you can build in 100% automation, with no need for manual (developer) steps. The build process can run the binary analysis by calling an API, and results can be automatically brought into a defect ticketing system, also through an API. Code analysis is now transparent and inescapable: developers see security defects in their normal defect queue and fix security flaws without needing to perform any configuration or testing, saving valuable developer time.
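
As a rough sketch of that flow, a post-build step might look like the script below. The endpoints, result schema, and token handling are hypothetical placeholders, not Veracode's actual API; a real integration would use the vendor's documented API or CI plugin.

```python
"""Post-build CI step: upload the binary for static analysis, then file
the findings as defect tickets. Endpoints and schema are hypothetical."""
import time
import requests

SCAN_API = "https://scanner.example.com/api"     # hypothetical scan service
TRACKER_API = "https://tracker.example.com/api"  # hypothetical ticket system
HEADERS = {"Authorization": "Bearer <api-token>"}

def scan_and_file(artifact_path: str) -> None:
    # Upload the compiled artifact and start a static scan.
    with open(artifact_path, "rb") as fh:
        scan = requests.post(f"{SCAN_API}/scans",
                             headers=HEADERS, files={"binary": fh}).json()

    # Poll until analysis completes (a webhook would avoid polling).
    while True:
        status = requests.get(f"{SCAN_API}/scans/{scan['id']}",
                              headers=HEADERS).json()
        if status["state"] == "complete":
            break
        time.sleep(60)

    # File each finding so developers see security defects in their
    # normal defect queue, with no manual security step in between.
    for flaw in status["flaws"]:
        requests.post(f"{TRACKER_API}/issues", headers=HEADERS, json={
            "title": f"{flaw['cwe']}: {flaw['category']}",
            "description": f"{flaw['file']}:{flaw['line']} {flaw['text']}",
            "severity": flaw["severity"],
        })

if __name__ == "__main__":
    scan_and_file("build/app.war")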

3. Contextual analysis

Binary static analysis analyzes your code along with all the other components of the application, within the context of the platform it was built for. It can trace tainted source data through the complete data flow to a risky sink function. Partial analysis of pieces of a program misses this context and will be less accurate on both false positives and false negatives. Any security expert will tell you context is extremely important: a section of code can be rendered insecure or secure by the code it is called from or the code it calls into.
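
To make the source-to-sink idea concrete, here is a toy example of my own (not tied to any particular analyzer): tainted input enters in one function and reaches a risky sink in another, so an analysis that only sees build_query() in isolation cannot tell whether the query is attacker-controlled.

```python
import sqlite3

def read_username() -> str:
    # SOURCE: attacker-controlled input enters the program here.
    return input("username: ")

def build_query(name: str) -> str:
    # The taint propagates through this helper unchanged.
    return "SELECT * FROM users WHERE name = '" + name + "'"

def find_user(db: sqlite3.Connection):
    # SINK: executing a string built from tainted data -> SQL injection.
    return db.execute(build_query(read_username())).fetchall()

def find_user_safe(db: sqlite3.Connection, name: str):
    # Parameterized query: the tainted value never becomes SQL syntax.
    return db.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT)")
    db.execute("INSERT INTO users VALUES ('alice')")
    print(find_user(db))  # try entering: ' OR '1'='1
```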

With a complete program you can also perform Software Composition Analysis (SCA) to identify components that have known vulnerabilities in them. A9-Using Components with Known Vulnerabilities is one of the OWASP Top 10 Risks, so you want to make sure you can analyze the entire program. Veracode has built SCA into the binary static analysis process.
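
Conceptually, SCA is a matching exercise: inventory the components actually present in the built application, then look each one up in a feed of known-vulnerable versions. The component names and advisories below are invented purely to illustrate the shape of that check.

```python
# Toy illustration of SCA: check the components linked into the built
# application against a feed of known-vulnerable versions.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-0001: remote code execution",
    ("oldparser", "0.9.1"): "EXAMPLE-0002: XML external entity injection",
}

def audit(components):
    """Return an advisory for every component with a known flaw."""
    return [f"{name} {version}: {KNOWN_VULNERABLE[(name, version)]}"
            for name, version in components
            if (name, version) in KNOWN_VULNERABLE]

# Components discovered by inspecting the built binary, not just your
# own source tree -- this is why whole-program analysis matters for A9.
app_components = [("examplelib", "1.2.0"), ("jsonutil", "2.4.1")]
for advisory in audit(app_components):
    print(advisory)
```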

Veracode's binary static analysis process.

4. Higher fidelity of analysis

Some languages, like C and C++, give the compiler latitude to generate different machine code from the same source, and source code analysis is blind to decisions made by the compiler. There are documented cases of both GCC and the Microsoft C/C++ compiler removing security checks and memory-clearing operations, which opened up security holes; MITRE has categorized this vulnerability as CWE-14: Compiler Removal of Code to Clear Buffers. For example, a compiler may optimize away a memset() that scrubs a password buffer because the buffer is never read again, leaving the secret in memory. The paper WYSINWYX: What You See Is Not What You Execute by Gogul Balakrishnan describes how "there can be a mismatch between what a programmer intends and what is actually executed on the processor."

More on binary static analysis

Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part II

Welcome to another round of Agile SDLC Q&A. Last week Ryan and I took some time to answer questions from our webinar, "Building Security Into the Agile SDLC: View from the Trenches"; in case you missed it, you can see Part I here. Now on to more of your questions!

Q. What would you recommend as a security process around continuous build?

Chris: It really depends on what the frequency is. If you’re deploying once a day and you have automated security tools as a gating function, it’s possible but probably only if you’ve baked those tools into the build process and minimized human interaction. If you’re deploying more often than that, you’re probably going to start thinking differently about security – taking it out of the critical path but somehow ensuring nothing gets overlooked. We’ve spoken with companies who deploy multiple times a day, and the common theme is that they build very robust monitoring and incident response capabilities, and they look for anomalies. The minute something looks suspect they can react and investigate quickly. And the nice thing is, if they need to hotfix, they can do it insanely fast. This is uncharted territory for us; we’ll let you know when we get there.

Q. What if you only have one security resource to deal with app security – how would you leverage just one resource with this “grooming” process?

Chris: You'd probably want to have that person work with one Scrum team (or a small handful) at a time. As they performed security grooming with each team, they would want to document as rigorously as possible the criteria that led them to attach security tasks to a particular story. These criteria will vary from one team to the next because every product has a different threat model. Once the security grooming criteria are documented, you should be able to hand off that part of the process to a team member, ideally a Security Champion type of person who would own and take accountability for representing security needs. From time to time, the security SME might want to audit the sprint to make sure that nothing is slipping through the cracks and, if anything is, revise the guidelines accordingly.

Q. Your "security champion" makes me think of the "security satellite" from BSIMM; do you have an opinion on BSIMM applicability in the context of Agile?

Chris: Yes, the Security Satellite concept maps very well to the Security Champion role. BSIMM is a good framework for considering the different security activities important to an organization, but it’s not particularly prescriptive in the context of Agile.

Q. We are an agile shop with weekly release cycles. The time between build completion and release is about 24 hours. We are implementing web application vulnerability scans for each release. How can we fix high-risk vulnerabilities before each release? Is it better to delay the release or fix it in the next release?

Chris: One way to approach this is to put a policy in place to determine whether or not the release can ship. For example, “all high and very high severity flaws must be fixed” makes the acceptance criteria very clear. If you think about security acceptance in the same way as feature acceptance, it makes a lot of sense. You wouldn’t push out the release with a new feature only half-working, right? Another approach is to handle each vulnerability on a case-by-case basis. The challenge is, if there is not a strong security culture, the team may face pressure to push the release regardless of the severity.
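
A policy like "all high and very high severity flaws must be fixed" is easy to automate as a release gate. Here is a minimal sketch, assuming the scanner can export results as JSON with numeric severities (the schema below is an assumption, not any real product's format); a non-zero exit code fails the pipeline, exactly like a failed feature acceptance test.

```python
"""Release gate: fail the build if the scan reports any open flaws at or
above the policy threshold. JSON schema and severity scale are assumed."""
import json
import sys

POLICY_THRESHOLD = 4  # e.g. 4 = High, 5 = Very High

def gate(results_path: str) -> int:
    with open(results_path) as fh:
        flaws = json.load(fh)["flaws"]

    blockers = [f for f in flaws
                if f["severity"] >= POLICY_THRESHOLD and f["status"] == "open"]
    for f in blockers:
        print(f"BLOCKER sev={f['severity']} {f['cwe']} at {f['file']}:{f['line']}")

    # A non-zero exit code fails the CI job, and therefore the release.
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(gate("scan_results.json"))
```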

Q. How do you address findings identified from regular automated scans? Are they added to the next day’s coding activities? Do you ever have a security sprint?

Ryan: Our goal is to address any findings identified within the sprint. That means that while it may not be next-day, it will be very soon afterward, and prior to release. We have considered dedicated security sprints.

Q. Who will do security grooming, the development team or the security team? What checklist is included in the grooming?

Ryan: Security grooming is a joint effort between the teams. In some cases the security representative, Security Architect in our terminology, attends the full team grooming meeting. In the cases where the full team grooming meeting would be too large of a time commitment for the Security Architect, they will hold a separate, shorter security grooming session soon afterwards instead.

Q. How important to your success was working with your release engineering teams?

Chris: Initially not very important, because we didn't have dedicated release engineering; the development and QA teams were in charge of deploying the release. Even with a release engineering team, though, most of the security work is done well before the final release is cut, so the nature of their work doesn't change much. Certainly it was helpful to understand the release process (when is feature freeze, code freeze, push night, etc.) and the various procedures surrounding a release, so that we as a security team could understand their perspective.

Q. How do you handle accumulated security debt?

Chris: The first challenge is to measure all of it, particularly debt that accumulated prior to having a real SDLC! Even security debt that you're aware of may never get taken into a sprint, because some feature will always be deemed more important. So far the way we've been able to chip away at security debt is to advocate directly with product management and the technical leads. This isn't exactly ideal, but it beats not addressing it at all. If your organization ever pushes to reduce tech debt, it's a good opportunity to point out that security debt should be considered part of tech debt.

This now concludes our Q&A. A big thank-you to everyone who attended the webinar for making it such a huge success. If you have any more questions, we would love to hear from you in the comments section below. In addition, if you are interested in learning more about Agile security, you might enjoy an upcoming webinar from Veracode's director of platform engineering, Peter Chestna. On April 17th, Peter will host "Secure Agile Through An Automated Toolchain: How Veracode R&D Does It", in which he will share how we've leveraged Veracode's cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA), and why it's become essential to our success. Register now!

Automating Good Practice Into The Development Process

April 7, 2014
Filed under: ALL THINGS SECURITY, SDLC

I’ve always liked code reviews. Can I make others like them too?

I’ve understood the benefit of code reviews, and enjoyed them, for almost as long as I’ve been developing software. It’s not just the excuse to attack others (although that can be fun), but the learning—looking at solutions other people come up with, hearing suggestions on my code. It’s easy to fall into patterns in coding, and not realize the old patterns aren’t the best approach for the current programming language or aren’t the most efficient approach for the current project.

Dwelling on good code review comments can be a great learning experience. Overlooked patterns can be appreciated, structure and error handling can be improved, and teams can develop more consistency in coding style. Even poor review feedback like "I don't get this" can identify flaws or highlight where code needs to be reworked for clarity and improved maintainability.

But code reviews are rare. Many developers don’t like them. Some management teams don’t see the value, while other managers claim code reviews are good but don’t make room in the schedule (or push them out of the schedule when a project starts to slip.) I remember one meeting where the development manager said “remember what will be happening after code freeze.” He expected us to say “Code Reviews!”, but a couple members of the team responded “I’m going to Disney World!” Everyone laughed, but the Disney trips were enjoyed while the code reviews never happened.

In many groups and projects, code reviews never happened, except when I dragged people to my cubicle and forced them to look at small pieces of code. I developed a personal strategy which helped somewhat: When I’m about ready to commit a change set I try to review the change as though it would be sent to a code review. “What would someone complain about if they saw this change?” It takes discipline and doesn’t have most of the benefits of a real code review but it has helped improve my code.

The development of interactive code review tools helped the situation. Discussions on changes could be asynchronous instead of trying to find a common time to schedule a meeting, and reviewers could see and riff on each other's comments. It was still hard to encourage good comments and find the time for code reviews (even if "mandated"), but the situation was better.

The next advancement was integrating code review tools into the source control workflow. This required (or at least strongly encouraged depending on configuration) approved code reviews before allowing merges. The integration meant less effort was needed to set up the code reviews. There’s also another hammer to encourage people to review the code: “Please review my code so I can commit my change.”
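
Tooling aside, a "no merge without an approved review" gate can be as simple as a server-side hook. Here is a minimal sketch, assuming a convention where the review tool adds a Reviewed-by: trailer to approved commits; real integrations use the review system's own gating, and the convention here is ours.

```python
#!/usr/bin/env python3
# Sketch of a server-side Git pre-receive hook: reject pushes whose
# commits lack a Reviewed-by: trailer.
import subprocess
import sys

def commits_between(old: str, new: str):
    out = subprocess.run(["git", "rev-list", f"{old}..{new}"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

ok = True
for line in sys.stdin:                  # one "<old> <new> <ref>" per update
    old, new, ref = line.split()
    if set(old) == {"0"}:               # skip newly created refs for brevity
        continue
    for sha in commits_between(old, new):
        msg = subprocess.run(["git", "log", "-1", "--format=%B", sha],
                             capture_output=True, text=True, check=True).stdout
        if "Reviewed-by:" not in msg:
            print(f"rejecting {sha[:8]} on {ref}: no Reviewed-by: trailer")
            ok = False

sys.exit(0 if ok else 1)
```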

The barriers to code reviews also exist for security reviews, but the problem can be worse as many developers aren’t trained to find security problems. Security issues are obviously in-scope for code reviews, but the issue of security typically isn’t front of mind for reviewers. Even at Veracode the focus is on making the code work and adjusting the user interface to be understandable for customers.

But we do have access to Veracode’s security platform. We added “run our software on itself” to our release process. We would start a scan, wait for the results, review the flaws found, and tell developers to fix the issues. As with code reviews, security reviews can be easy to put off because it takes time to go through the process steps.

As with code reviews, we have taken steps to integrate security review into the standard workflow. The first step was to automatically run a scan during automated builds. A source update to a release branch causes a build to be run, sending out an email if the build fails. If the build works, the script uses the Veracode APIs to start a static scan of the build. This eliminated the first few manual steps in the security scan process. (With the Veracode 2014.2 release the Veracode upload APIs have an “auto start” feature to start a scan without intervention after a successful pre-scan, making automatic submission of scans easier.)

To further reduce the overhead of the security scans, we improved the Veracode JIRA Import Plugin to match our development process. After a scan completes, the Import plugin notices the new results, and imports the significant flaws into JIRA bug reports in the correct JIRA project. Flaws still need to be assigned to developers to fix, but it now happens in the standard triage process used for any reported problem. If a flaw has a mitigation approved, or if a code change eliminates the flaw, the plugin notices the change and marks the JIRA issue as resolved.
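
For readers building something similar themselves, the logic approximates to the sketch below, written against the open-source jira Python client (pip install jira). The flaw feed, project key, and credentials are hypothetical, and this is not the plugin's actual code.

```python
"""Approximation of an automated flaw-to-ticket import using the
open-source `jira` client. All names and credentials are hypothetical."""
from jira import JIRA

def import_flaws(flaws):
    jira = JIRA(server="https://jira.example.com",
                basic_auth=("svc-appsec", "<api-token>"))
    for flaw in flaws:
        # One bug per significant flaw, filed in the team's own project so
        # it enters the same triage process as any other reported problem.
        jira.create_issue(
            project="APP",
            issuetype={"name": "Bug"},
            summary=f"[Security] {flaw['category']} ({flaw['cwe']})",
            description=f"{flaw['file']}:{flaw['line']}\n{flaw['text']}",
            labels=["security"],
        )
```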

The automated security scans aren’t our entire process. We also have security reviews for proposals and designs so developers understand the key security issues before they start to code, and the security experts are always available for consultation in addition to being involved in every stage of development. The main benefit of the automated scans is that they take care of the boring review to catch minor omissions and oversights in coding, leaving more time for the security experts to work on the higher level security strategy instead of closing yet another potential XSS issue.

Veracode's software engineers understand the challenge of building security into the Agile SDLC. We live and breathe that challenge, and we use our own application security technology to scale our security processes so our developers can go further, faster. On April 17th, our director of platform engineering, Peter Chestna, will share in a free webinar how we've leveraged our cloud-based platform to integrate application security testing with our Agile development toolchain, and why it's become essential to our success. Register for Peter's webinar, "Secure Agile Through An Automated Toolchain: How Veracode R&D Does It", to learn from our experience.

Strategies for Rapid Adoption of a Security Programme within a Large Enterprise

A large-scale deployment of the Veracode static code analysis tool across a large enterprise presents a number of unique challenges, such as understanding your application estate, prioritising your applications for scanning, and communicating with your application owners. This blog post provides some guidance based on my experience of delivering several hundred scanned applications in a 14-month time frame.

Understanding Your Application Estate

The first challenge is to understand the nature of your application estate: where are the applications hosted, where are the codebases, who is responsible for building and maintaining them, what development languages are used, how critical are they to the organisation, and so on. Most enterprise organisations will maintain an asset inventory of some sort; you should immediately familiarise yourself with this and determine the extent of the information recorded and what export formats are available. In my experience two problems exist: data accuracy and completeness. In many instances the contact details of application owners were incorrect or missing entirely. In our application repository the programming languages and frameworks were not recorded, and in only a few instances was the source code repository location specified. After my initial attempts to use the application repository as the principal data source, I realised I would need to augment it with my own data gathered during an initial application profiling phase.

Application Profiling and Assigning Criticality

My initial attempts at profiling applications used a crude MS Word questionnaire covering items such as technical contacts (capable of building the binaries), application language and frameworks, source code repository, binary image size, application version, and continuous integration environment. This questionnaire was sent to the registered application owners, and the responses were then entered manually into a tracking spreadsheet based on an export from the application repository. It soon became apparent that this method was cumbersome and time-consuming, so I deployed a web-form version of the profiling questionnaire which captured the responses to a backing spreadsheet, enabling easy import into the main spreadsheet. Reviewing the responses, it became apparent that not all applications would be suitable for static code analysis, due to factors such as host operating system or language incompatibility. Once those applications were eliminated it was necessary to prioritise the list in order to ensure that our licence usage was targeted at the most critical business applications.

In order to ensure you are focused on the most critical applications, consider a number of indicators: does the application require an Application Penetration Test? Is it externally facing? Does it have any particular regulatory requirements? Has it been the subject of recent incidents? For example, the Monetary Authority of Singapore (MAS) guidelines mandate a code review process which may be fulfilled in part by static code analysis, so I used MAS compliance as an immediate inclusion criterion. Whatever selection criteria you employ, it is important that you are able to justify them, both in terms of Veracode licence usage and in terms of the manpower of the application teams who will be required to perform the upload, scan, and review. A simple weighted score makes the selection defensible, as sketched below.
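
The sketch referenced above: a toy scoring scheme that turns those indicators into a sortable priority. The weights are arbitrary examples, not a recommendation.

```python
# Toy prioritisation: turn the inclusion indicators into a sortable score
# so licence usage targets the most critical applications first.
WEIGHTS = {
    "pen_test_required": 3,
    "externally_facing": 3,
    "regulatory_requirement": 4,  # e.g. the MAS code-review mandate
    "recent_incident": 2,
}

def priority(app):
    return sum(weight for key, weight in WEIGHTS.items() if app.get(key))

apps = [
    {"name": "payments-gateway", "externally_facing": True,
     "regulatory_requirement": True},
    {"name": "intranet-wiki", "recent_incident": True},
]
for app in sorted(apps, key=priority, reverse=True):
    print(app["name"], priority(app))
```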

Communication with Application Owners and Teams

Armed with your list of applications, you will now need to gain a mandate from senior management within the application delivery sector of your organisation supporting your programme and encouraging the participation of application teams, using a "carrot and stick" message: for instance, they need to comply with MAS requirements, and Veracode will help them achieve that compliance. It is important that this message come from the upper management of the application teams, and that it stress the value of the programme rather than arriving as an edict from the upper echelons of the security part of the organisation. Our programme initially failed to achieve a foothold because our lack of a clear mandate gave recalcitrant application teams an easy opt-out. In many cases application teams were easily convinced of the value of early flaw detection and engaged with the programme quite willingly; in a number of cases, however, no amount of persuading could convince them to participate. One of the most frequent objections encountered was the perceived workload of onboarding to Veracode. It is important that you make the process of account creation as efficient as possible, and that you have the relevant support in place in terms of documentation, knowledge bases, support e-mails, etc. Many teams were pleasantly surprised at the ease of the process, and it was apparent that this news propagated within the development communities, as we saw reduced friction as the programme progressed.

In order to ensure that our programme was not constrained by team resources, I automated the process of user account and application profile creation on the Veracode platform by leveraging the rich APIs available. The application spreadsheet was used as the data source, and a Microsoft Visual Studio Tools for Office (VSTO) plugin was developed which provided an additional toolbar within Excel (this is the subject of a future blog post). This plugin allowed for the creation, modification, or deletion of accounts on the platform based on the underlying spreadsheet data. Although I invested significant upfront effort in developing the tooling, I reaped the benefits later in the programme when I was able to completely onboard up to a hundred applications in one day. Additionally, I was able to add metadata specific to our organisation (business alignment, investment strategy, software vendor) to the application profiles on the platform, which greatly enriched the reports generated within the platform's analytics engine.
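
The same spreadsheet-driven onboarding could be scripted outside Excel. Here is a minimal sketch, assuming a CSV export and a hypothetical admin REST API; the endpoints and field names are mine, not the Veracode platform's.

```python
"""Spreadsheet-driven bulk onboarding, sketched as CSV + REST rather than
the Excel/VSTO plugin described above. Endpoints are hypothetical."""
import csv
import requests

ADMIN_API = "https://platform.example.com/api/admin"  # hypothetical
HEADERS = {"Authorization": "Bearer <api-token>"}

with open("applications.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        # Create the application profile, including organisation-specific
        # metadata that enriches later reporting.
        requests.post(f"{ADMIN_API}/applications", headers=HEADERS, json={
            "name": row["app_name"],
            "criticality": row["criticality"],
            "custom_fields": {
                "business_alignment": row["business_alignment"],
                "investment_strategy": row["investment_strategy"],
                "software_vendor": row["software_vendor"],
            },
        })
        # Create a platform account for the registered technical contact.
        requests.post(f"{ADMIN_API}/users", headers=HEADERS,
                      json={"email": row["contact_email"], "role": "submitter"})
```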

Within a few weeks of the programme it became apparent that teams were often asking the same questions, so I started capturing these questions and their answers as a set of Frequently Asked Questions on our internal social-media-like platform. Through appropriate tagging and hyperlinking I quickly developed an organisation-specific knowledge base, which again lowered the barrier to entry for application teams, who no longer had to wait for an answer or struggle with a problem. Around the midpoint of our programme I identified a few obvious success stories (applications which had performed a number of scans and were showing a clear improvement in security posture) and asked the teams working on those applications to contribute their experience to our social media platform in order to encourage the participation of other teams.

This brief blog post has highlighted some of the challenges facing a new Veracode static code analysis deployment, along with some solutions that I came across in the process. I hope some of the approaches and solutions I described will ensure that you are soon well underway with analysis. Along the way you will find not only strengths in your team but also flaws during review; that is the subject of my next blog post.

Read Colin’s earlier blog post: How to Run a Successful Proof of Concept for an Application Security Programme

How to Run a Successful Proof of Concept for an Application Security Programme

So you've got upper management buy-in for your application security proof of concept and are ready to start scanning applications: how do you make sure your proof of concept (PoC) is a success and demonstrates the need to progress to a full-scale programme? This article describes some of the lessons learned at the start of our large-scale deployment of Veracode within our organisation.

Socialising the Proof of Concept

The first step is to socialise the PoC internally, through word of mouth, discussion forums, and developer communities, by driving interest in the availability of a new tool for developers which will assist in the development process and produce better code. Ensure that you are familiar with the platform and the various IDE plugins and can demonstrate their effectiveness on a real-world application (we used the OWASP WebGoat application as our technology demonstrator). The emphasis should be proactive use of the tool to detect flaws at the point of introduction, rather than use as a security measurement tool. Key success factors for development teams will be the integration of the tool within common IDEs and the ease of adoption (specifically, no need for in-depth product knowledge or the use of vendor specialists). Once you are familiar with the platform and your toolbox, you will need some applications to scan.

Application Selection

The selection of which applications to scan is a key success factor: the applications chosen should be of strategic significance to the organisation in order to demonstrate the significance of the findings to senior stakeholders. Much of our difficulty arose in determining suitable applications, since this information may not be readily presented in an application repository (and an application repository may be inaccurate or not exist at all). The forging of informal networks and word of mouth will be essential to success: talk to people in the canteen, and be on the lookout for internal events of interest to developers; in our case we had access to an internal social media site which was an excellent platform for creating interest and awareness. Do resist the temptation to scan applications that are of low importance simply because they happen to be available; this will reduce the impact of your PoC.

Building and Scanning

Now for the moment of truth: building the code and performing the scans. It is vital that you or your team have a working knowledge of the mechanics of building software. Getting access to source code is one thing, but expecting a busy development team to help you perform the requisite debug builds for scanning is unlikely to be met favourably. The ability to speak the developers' language is a key objective in establishing an application security programme, and demonstrating our competence with their environments and toolchains gave us credibility when conducting the initial reviews of the scan results. Be sure to review the findings internally before distributing them to the teams, and ensure that your team is familiar with the nature of the findings and can speak confidently to the risk presented by such flaws. Establishing and maintaining the credibility of your team is vital at this stage.

By this stage there is certain to be a high level of interest in the PoC from various parts of your organisation, and it is important to demonstrate results as soon as possible; indeed, many teams will be eager to see their application's results. Be sure to manage expectations around scan turnaround times to avoid any possible negative perceptions of the use of a SaaS product; the emphasis should be on the ease of adoption and the lack of specialised knowledge required.

Common Criticisms

You should be prepared for a fair deal of scepticism around your initiative, ranging from outdated views of the capability of static code analysis tools to a belief that no problems exist within the organisation. A common objection to the use of static code analysis is a high false positive count or a lack of actionable output; ensure that all applications scanned in the PoC have a readout call to demonstrate the accuracy of the findings and the specificity of the results in the flaw viewer. Negative feedback from a development team to their management could be disastrous for your future programme.

At the conclusion of the PoC you will need to demonstrate its value to senior stakeholders: the key message is the rate of flaw detection that has been achieved, emphasising that such rates could not have been achieved by any manual review process. In our case we demonstrated a comparison between a Veracode scan and the traditional approach of an Application Penetration Test, with the benefits in terms of cost (in our case an order of magnitude) and timescales clearly favouring the Veracode analysis. Identifying the so-called "smoking gun" is useful in demonstrating the need for a larger-scale programme; however, be sensitive to the application team concerned and emphasise the advisory role of your team in reducing vulnerabilities.

By now you will have a clearer view of the application estate within your organisation and an appreciation for the challenges you will face in scaling this process to a fully-fledged programme.

Static Testing vs. Dynamic Testing

With reports of website vulnerabilities and data breaches regularly featuring in the news, securing the software development life cycle (SDLC) has never been so important. The enterprise must therefore choose carefully which security techniques to implement. Static and dynamic analyses are two of the most popular types of security tests. Before implementation, however, the security-conscious enterprise should examine precisely how both types of test can help to secure the SDLC. Testing, after all, can be considered an investment that should be carefully monitored.

Static and Dynamic Analyses Explained

Static analysis is performed in a non-runtime environment. Typically a static analysis tool will inspect program code for all possible run-time behaviors and seek out coding flaws, back doors, and potentially malicious code. Dynamic analysis adopts the opposite approach and is executed while a program is in operation. A dynamic test will monitor system memory, functional behavior, response time, and overall performance of the system. This method is not wholly dissimilar to the manner in which a malicious third party may interact with an application. Having originated and evolved separately, static and dynamic analysis have, at times, been mistakenly viewed in opposition. There are, however, a number of strengths and weaknesses associated with both approaches to consider.
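
A deliberately tiny example makes "non-runtime" concrete. The snippet below parses a program's source text and flags calls to eval() without ever executing the code; real static analyzers model control and data flow across the whole program, but the principle of inspecting code rather than running it is the same.

```python
# A minimal static check: parse the program text and flag calls to
# eval() without ever executing the code under analysis.
import ast

SOURCE = """
user_data = input()
result = eval(user_data)   # dangerous sink
"""

for node in ast.walk(ast.parse(SOURCE)):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"):
        print(f"line {node.lineno}: eval() call on possibly tainted data")
```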

Strengths and Weaknesses of Static and Dynamic Analyses

Static analysis, with its whitebox visibility, is certainly the more thorough approach and may also prove more cost-efficient, because it can detect bugs at an early phase of the software development life cycle. For example, if an error is spotted at a review meeting or a desk-check (both forms of static analysis) it can be relatively cheap to remedy; had the error become lodged in the system, costs would multiply. Static analysis can also unearth future errors that would not emerge in a dynamic test.

Dynamic analysis, on the other hand, is capable of exposing a subtle flaw or vulnerability too complicated for static analysis alone to reveal, and can also be the more expedient method of testing. A dynamic test, however, will only find defects in the parts of the code that are actually executed.

The enterprise must weigh these considerations against the complexities of its own situation. Application type, time, and company resources are some of the primary concerns. The level of technical debt the enterprise is willing to take on may also be measured: a certain amount may be acceptable if the financial benefits of beating a competitor to market outweigh the potential savings of more rigorously tested code. While both static and dynamic tests have their shortcomings, the enterprise should not have to face a choice between them. Even if static analysis is considered the more thorough method of testing, it does not follow that it should automatically be chosen over dynamic analysis in every situation where the choice emerges.

When to Automate

While static and dynamic analyses can be performed manually, they can also be automated. Used wisely, automated tools can dramatically improve the return on testing investment. Automated testing tools are an ideal option in certain situations: for example, automation may be used to test a system's reaction to a heavy volume of users, or to confirm that a bug fix works as expected. It also makes sense to automate tests that are run on a regular basis during the SDLC.
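
For instance, a heavy-volume check can be a few lines of scripting. The sketch below fires concurrent requests and tallies the response codes; the URL and volumes are placeholders, and it should only be pointed at a test system you own.

```python
# Illustrative load check: fire concurrent requests at a test endpoint
# and tally the response codes. URL and volumes are placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://staging.example.com/health"

def hit(_):
    with urlopen(URL, timeout=10) as resp:
        return resp.status

with ThreadPoolExecutor(max_workers=50) as pool:
    statuses = list(pool.map(hit, range(500)))

print({code: statuses.count(code) for code in set(statuses)})
```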

As the enterprise strives to secure the SDLC, it must be noted that there is no panacea. Neither static nor dynamic testing alone can offer blanket protection. Ideally, an enterprise will perform both static and dynamic analyses. This approach will benefit from the synergistic relationship that exists between static and dynamic testing.

The AppSec Program Maturity Curve 1 of 4


About the AppSec Program Maturity Curve – a good model to follow…

As information security professionals, we must pursue any opportunity to evolve our approach to application security. Most enterprises with in-house development teams do some kind of ad-hoc appsec testing, usually during the QA process. But maybe you think it's time to do more than that, to get a bit more proactive in confronting the potential threats the organization faces from weak software security. Luckily there is a proven appsec program maturity curve that can help mature your existing effort, following a well-traveled road past the common challenges along the way. Here's the really good news: it's easy to climb a few levels of the curve over a matter of months, not years.

Securing your software is a strategy, not a tactic.

Maybe your organization's approach is not as proactive as it could be. All too often organizations wait for a data breach incident or compliance audit as the triggering event for appsec projects and investment. Veracode found in a recent study that 70% of CIOs already understand the need for application security. However, the majority of them still will not move to increase their investment in securing the software that runs their business without a triggering event, such as a data breach. This raises a simple question: why wait for something bad to happen?

CIOs clearly understand the importance of securing the software supply chain, but have mindsets or limitations that result in inertia and inaction. That’s why understanding some simple ways to move forward, in incremental steps of maturity, is so important. To start, you should be able to recognize at which stage of appsec maturity your particular organization is, and be able to outline a concrete path to get yourselves to the next level and beyond.

The AppSec Program Maturity Curve

The appsec program maturity curve has been validated by Veracode using the real-world results of hundreds of organizations who have followed its path to success with software security. Yes, results may vary by organizational size, staffing constraints, budget, and a host of other factors specific to your situation. Still, there is much to be learned from peer experience. The key to a positive return on investment over time is to start small and scale up with each milestone.

This maturity model has six levels. If your organization is already pursuing an ad-hoc testing approach to manage the security of your software, you are not alone. Most organizations who understand the fundamental importance of appsec start here. However, as the model demonstrates, there are five program stages that are more advanced. While there are serious limitations to an ad-hoc "program" (let's use this term loosely), it is still fundamentally better than an appsec approach of "Do Nothing".

Level 1: Ad-hoc Program

Objective: What are we doing?
Program: Inconsistent testing of applications with poor visibility and no development support.
Time Period: Doomed to repeat or mercifully short… you decide.

Level 2: Blueprint Program

Objective: We know what we need to do.
Program: The foundation of a real program, including an app inventory and governing policy.
Time Period: As quick as 30 days.

Level 3: Baseline Program

Objective: We’re rolling it out.
Program: Test all critical apps, scorecard the results, and onboard development teams.
Time Period: As little as 60 days.

Level 4: Integrated Program

Objective: We’re going big!
Program: Sustainable program scaled across the enterprise with full SDLC integration.
Time Period: About 3 months.

Level 5: Improved Program

Objective: We’re reducing our risk.
Program: Mitigate risk across portfolio with automation, retesting, analysis and ongoing education.
Time Period: About 6 months.

Level 6: Optimized Program

Objective: We’ve achieved excellence!
Program: Center of Excellence addressing all internal applications with high ROI.
Time Period: Ongoing.

As sensitive data continues to migrate out of the organization – whether to the cloud or the tablet – it’s imperative that information security professionals continue to champion a shift in organizational attitudes and priorities toward Application Security. Let’s move our organizations along the curve from proactivity to pre-emption. Maybe it’s time to evolve your organization’s approach to appsec to adapt and survive in a hostile world.

The forthcoming posts in this series will examine the common trajectory others have followed and describe a methodology for success.

  • Post 2: Program Levels 1 to 2 – from Ad-Hoc to Blueprint, Coming on 10/30/13
  • Post 3: Program Levels 3 to 4 – from Baseline to Integrated, Coming on 11/6/13
  • Post 4: Program Levels 5 to 6 – from Improved to Optimized, Coming on 11/13/13
