Threats in Custom App Development: Enterprises’ Lack of Security

September 4, 2014
Filed under: SDLC, Software Development 

Without proper security, custom applications can leave networks open to attack.

Proper App Development Requires Security from the Start

There is no greater threat to information security than the belief that systems are secure when, in fact, they are anything but. The growth in popularity of custom app development over the past few years has created a situation where many enterprises have thousands of applications in production with little or no security testing behind them—and most executives have no idea these security holes even exist. Enterprises concerned about information security must find a way to integrate sufficient testing into the software development life cycle (SDLC) to ensure their systems are as secure as possible.

The Ongoing Threat from Custom Apps

The average enterprise now relies on thousands of custom apps on a daily basis. The problem? Many organizations have no idea how many custom apps they are running, and it’s impossible to secure apps that IT management doesn’t even know exist.

What’s worse, even the best AppSec programs test only about 10 percent of their apps, according to Aspect Security. As a result, 54 percent of breaches come through custom apps, a number that will only increase in the coming years given lax security and the growing number of apps in use.

Custom apps are particularly vulnerable to hackers and thieves because they represent the path of least resistance. They are constantly exposed to the Internet (which makes them easy to probe for vulnerabilities), are developed quickly with little regard for in-development security and are assembled from a variety of code and library sources. In addition, they represent larger attack surfaces than traditional targets.

Securing App Development in an Agile World

In order to tackle the growing issues with custom apps, IT executives have to embrace a holistic approach throughout the entire SDLC while avoiding the additional expenses incurred by extra IT staff, consultants and infrastructure.

Through a discovery process, IT can take an exact inventory of all the apps they need to secure, granting organizations a handle on their existing app pool. From there, a massively parallel scan will quickly identify highly exploitable vulnerabilities, like those in the OWASP Top 10, and allow IT to segregate questionable apps until a deeper scan can be done.

Once IT has a grip on existing apps, an enterprise has to turn its attention to the development process and find a scalable solution that works from early code development right through production, especially in the fast-paced world of agile development. Implementing static application security testing (SAST) early in the development life cycle will allow IT to find many vulnerabilities, such as cross-site scripting or SQL injection issues, as well as coding errors, at a stage when they should be easy to fix. Because SAST works with binary code to find paths through an application, it can also be used with the third-party software and libraries that exist in many modern custom applications. As the app-development phase moves to QA, dynamic application security testing (DAST) should be conducted to ensure that additional vulnerabilities aren’t exposed by the addition of a web interface. Finally, manual penetration testing has to be done on all critical apps, and the test results from every stage of development need to be centralized in a comprehensive solution to minimize false positives and help ensure overall accuracy.
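
To make that concrete, here is the kind of defect a static scan should flag early. This is a generic Java sketch of my own, not any particular scanner’s output: the first method concatenates user input into a query (SQL injection), the second shows the parameterized fix.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Flawed: attacker-controlled input is concatenated into the SQL string,
    // so a name like "x' OR '1'='1" changes the structure of the query itself.
    public ResultSet findUserUnsafe(Connection db, String name) throws SQLException {
        Statement stmt = db.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Fixed: a parameterized query keeps user data separate from SQL structure,
    // which is the standard remediation for this class of flaw.
    public ResultSet findUserSafe(Connection db, String name) throws SQLException {
        PreparedStatement stmt = db.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```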

The threat posed by custom app development is only going to increase in the coming years, as these apps will make up more and more of the overall network infrastructure. Enterprises that are serious about network security have to begin taking steps to mitigate problems today. The right solution has to be comprehensive, scalable through the cloud, and designed to discover both the security issues of today and the potential security flaws that hackers may exploit tomorrow.


Secure Development – One Bathroom Break At A Time

August 25, 2014
Filed under: SDLC, Software Development 

Google went to great lengths to educate their developers about the benefits of security testing – even developing educational materials specifically to be read on the toilet.


There’s enough evidence in favor of security testing throughout the development cycle to make “debates” about it moot. Still, many software development operations lack a comprehensive and consistent approach to testing.

Why? One of the most commonly cited reasons is the development “culture.” That’s a fuzzy term that encompasses a lot of things. Often it boils down to personal resistance on the part of (influential) developers and managers within an organization, or adherence to wrong-headed tradition. “We don’t do that here.” Full stop.

It goes without saying that, if you want to change development practices, you need to change the development culture within your organization. But how?

Mike Bland

Back in June, Mike Bland, a former Google engineer, penned a great blog post that provides something of a roadmap to implementing a culture of secure development and testing. Bland worked at Google from 2005 to 2011 and, for much of that time, was Google’s “Testing Evangelist,” helping to implement a system for thorough application testing and, more importantly, making practices like unit testing of developed code a cultural norm at Google.

Bland’s post is worth printing out and reading. He starts off with an analysis of Apple’s “goto fail” and Heartbleed vulnerabilities and how they’re object lessons in the importance of unit testing. (Veracode tackled some of the same issues regarding goto fail in this blog post.)

But Bland also wraps in a (substantial) guide to developing a culture of secure development and testing. Bland weighs in on the relative importance and merits of various types of tests – integration testing vs. unit testing vs. fuzzing. And he talks about the organizational challenges of growing a culture of testing at Google.

Bland says that, contrary to what you may believe, Google’s engineering-heavy culture was not hospitable soil for the adoption of a culture of unit testing. Despite the company’s considerable resources, Bland notes that “vast pools of resources and talent” within the company often got in the way by “reinforc(ing) the notion that everything is going as well as possible, allowing problems to fester in the long shadows cast by towering success.”

“Google was not able to change its development culture by virtue of being Google,” Bland writes. “Rather, changing Google’s development culture is what helped its development environment and software products continue to scale and to live up to expectations despite the ever-growing ranks of developers and users.”

Bland describes an all-hands approach to bending Google’s engineering practice in the direction of more testing. Resistance, he said, was the byproduct of complex forces: a lack of proper developer education in unit testing practices and “old tools” that strained to scale with the pace of Google’s development.

Bland’s response was to form what he calls a “Testing Grouplet” within Google that served as a support group and community for like-minded folks who wanted to implement unit testing procedures. That group operated like testing guerrillas, developing and driving the adoption of new tools that made unit testing less painful, sharing best practices and spreading testing propaganda within the development organization.

One of the most successful initiatives was “Testing on the Toilet,” a circa 2006 program in which Grouplet members designed short (one page) lessons on a variety of topics (“Don’t Put Logic in Tests,” “Debug IDs”) then plastered hundreds of Google bathroom stalls worldwide with the *ahem* tactical reading material.

The idea here isn’t to shock folks. Rather, it’s to take the role of education in your secure development program seriously. Bland said the bathroom literature was part of a multi-pronged education effort that also included more common elements like brown-bag lunches and a guest speaker series.

Check out Mike’s full post on building a unit testing culture at Google here.

The Rise of Application Security Requirements and What to Do About Them


As an engineering manager, I am challenged to keep pace with ever-expanding expectations for non-functional software requirements. One requirement, application security, has become increasingly critical in recent years, posing new challenges for software engineering teams.

In what manner has security emerged as an application requirement? Are software teams equipped to respond? What can engineering managers do to ensure their teams build secure software applications?

In the ’90s, security was not a visible software requirement. During this time I worked with a team developing an innovative web content management system. We focused on scalability and performance, ensuring that dynamically generated web pages rendered quickly and scaled as the web audience grew. I don’t remember any security requirements nor any conversations about application security. Scary, but true! Our system was deployed by early-adopter media companies racing to deliver real-time online content. IT teams deploying our system may have considered security, but if they did, they focused on infrastructure and didn’t address security with us, their software vendor.

Ten years later, application security requirements began emerging in a limited way, focusing on compliance and process. At the time, I was working at a startup, developing software for financial institutions to uncover fraud through analysis of sensitive financial, customer and employee data. We routinely responded to requests for proposal (RFPs) with security questions about the controls to prevent my company’s employees from stealing data. We described our rigorous release, deployment and services processes. Without much ado, and without changes to our development process, our team simply “checked the box” for security.

A few years later the stakes became higher. New questions began showing up in RFPs: “How does your software architecture support security?” “How do your engineering practices enforce security?” And the most difficult to answer: “Provide independent attestation that your software is secure.”

I faced a sobering realization. The security of our software relied entirely on the technical acumen of our engineering leads (which fortunately was strong), and was not supported in a formal way in the engineering process. Even worse, I was starting from scratch to learn security basics. I needed help, and fast!

Engineering leads at small independent software vendors (ISVs), such as NSFOCUS and Questionmark, face this challenge routinely. Where should they start? What concrete steps can they take to secure their code and establish a process for application security?

Pete Chestna recently posted “Four Steps to Successfully Implementing Security into a Continuous Development Shop.” His approach has worked for us at Veracode and translates well to small and large engineering teams:

  • Learn, hire or consult with experts to understand the threat landscape for your software. Develop an application-security policy aligned with your risk profile (a toy sketch of such a policy follows this list).
  • Baseline the security of your application. Review and prioritize the issues according to your policy.
  • Educate your developers on security fundamentals and assign them to remediate the issues.
  • Update your software development life cycle (SDLC) to embed security practices so that new software is developed securely.
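
A risk-aligned policy doesn’t need to start out elaborate; it can be a small piece of data that your tooling consults. Here is a minimal, hypothetical Java sketch — the risk profiles and severity scale are invented for illustration, not taken from any standard:

```java
import java.util.Map;

// Toy model of an application-security policy aligned to risk profiles.
// Assumed severity scale: 5 = critical, 4 = high, 3 = medium, 2 = low, 1 = info.
public class AppSecPolicy {

    enum RiskProfile { INTERNAL_TOOL, CUSTOMER_FACING, PAYMENT_PROCESSING }

    // The worst open-flaw severity each profile is allowed to ship with.
    private static final Map<RiskProfile, Integer> MAX_SHIPPABLE_SEVERITY = Map.of(
            RiskProfile.INTERNAL_TOOL, 4,        // block only criticals
            RiskProfile.CUSTOMER_FACING, 3,      // block high and critical
            RiskProfile.PAYMENT_PROCESSING, 2);  // block medium and above

    public static boolean mayShip(RiskProfile profile, int worstOpenSeverity) {
        return worstOpenSeverity <= MAX_SHIPPABLE_SEVERITY.get(profile);
    }
}
```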

You will need budget and time to accomplish this. Consult with experts or engage with security services such as Veracode to benefit from their experience and expertise.

Don’t try to wing it. The stakes are too high.

Yo, A Cautionary Tale for the VC Community

By Chris Lynch, Partner, Atlas Venture

The story of Yo will be used as a cautionary tale in the VC community for years to come. Only a few days after receiving a much-talked-about $1.2 million in series A funding from angel investor and serial entrepreneur Moshe Hogeg, Yo suffered a massive security breach. The breach made more headlines than the funding, and took the wind out of the company’s sails, possibly for good.


The jury is still out on the future of the Yo app, but being hacked got the app into headlines for all the wrong reasons.

How did the breach happen? In the weeks that followed, several journalists offered their analysis, including @VioletBlue (“People invested $1.2 million in an app that had no security”), @mikebutcher (“App allegedly hacked by college students”) and @mthwgeek (“Yo been hacked made to play Rick Astley”).

While the epic rise and fall of Yo and how it was hacked make for an interesting story, as an investor, this is not the part that jumped out at me. My question is this: how did an experienced investor like Moshe Hogeg (or any investor, for that matter) put money into a technology without learning about its development process? The app was built in about eight hours. What does that indicate about the QA process? What does that say about the security of the software?


Join Chris Lynch and Veracode CEO Bob Brennan for a webinar discussing: Why You Need to Be a Secure Supplier

The eight-hour development time is impressive and demonstrates drive on the part of the app’s developers. However, I have questions about the security of a product that can be developed during a single standard workday. And Yo’s prospective customers, the advertising firms it would inevitably sell this data to, would have asked the same question.

When I listen to a start-up pitch me on their next-gen/transformational/whatever product, I always question whether the technology is truly enterprise-class: is it scalable, reliable and secure? One or two groups within an enterprise may order a few of your widgets without this, but if you are gunning for the big bucks, you want an enterprise-wide deployment of your technology. That requires you to prove that your product is just as reliable and secure as those of the largest players in the market. Because no one gets fired for buying IBM. People get canned when they purchase software from a cutting-edge start-up that ends up causing a data breach and costing the enterprise millions. Security is just table stakes if you want to play with the big boys; that includes enterprises buying your product and VCs like Atlas Venture backing your company.

When investing in a company, or product, it is essential that I understand everything I can about the technology – including the security of that product. It isn’t enough to scrutinize the need for the technology in the market and the product’s functionality. I must also understand how the product is developed, and if secure development practices are in use. Otherwise I am setting myself up to lose a lot of money in the event of a breach.

As investors in new companies and technologies we are taking risks, and without investors taking these risks we will never see the next Facebook or Instagram. However, these risks we take should be calculated jumps, not leaps of faith. Investing $1.2 million into a company without this level of due diligence is irresponsible – unless you are looking for some sort of revenue loss tax break.

I have a feeling Moshe Hogeg thought he had a winning product when he wrote that check. But he didn’t conduct a full due-diligence process, and he is paying dearly for that mistake now. I feel bad for Moshe Hogeg, but I hope his misfortune can serve as a warning to the investment community as a whole and, more broadly, to buyers and users of software, whether they are consumers or businesses. Software security is as important as software functionality, and simply assuming security was a consideration during the development process is no longer good enough. Documented proof needs to be provided by software development companies if they expect to get funding and, ultimately, to generate revenue.

Four Steps to Successfully Implementing Security into a Continuous Development Shop

So you live in a continuous deployment shop and you have been told to inject security into the process. Are you afraid? Don’t be. When the world moved from waterfall to agile, did everything go smoothly? Of course not – you experienced setbacks and hiccups, just like everyone else. But, eventually you worked through the setbacks and lived to tell the tale. As with any new initiative, it will take time to mature. Take baby steps.

Step one: crawl.

Baseline the security of your application by using multiple testing methods. Static, dynamic and manual analysis will let you know exactly where you stand today. Understand that you may be overwhelmed with your results. You can’t fix it all at once, so don’t panic. At least you know what you have to work with. Integration with your SDLC tools is going to be your best friend. It will allow you to measure your progress over time and spot problematic trends early.
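
What does “measure your progress over time” look like mechanically? One hedged illustration: if each scan’s high-severity flaw count can be exported from your tooling, a few lines of Java are enough to flag the builds where the trend reverses. The build names and counts here are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SecurityTrend {

    // Prints an alert whenever high-severity flaw counts rise between consecutive scans.
    static void report(Map<String, Integer> highSeverityByBuild) {
        String prevBuild = null;
        Integer prevCount = null;
        for (Map.Entry<String, Integer> e : highSeverityByBuild.entrySet()) {
            if (prevCount != null && e.getValue() > prevCount) {
                System.out.printf("Trend alert: %s -> %s went from %d to %d high-severity flaws%n",
                        prevBuild, e.getKey(), prevCount, e.getValue());
            }
            prevBuild = e.getKey();
            prevCount = e.getValue();
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>(); // insertion order = scan order
        counts.put("build-101", 42);
        counts.put("build-102", 38);
        counts.put("build-103", 45); // a regression worth investigating early
        report(counts);
    }
}
```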

Step two: stand.

Come up with a plan based on your baseline. What has to be fixed now? What won’t we fix? You didn’t get here in a day and you won’t be able to fix it in a day. Work with your security team to build your backlog. Prioritize, deprioritize, decompose, repeat. Now would be a great time to introduce a little education into the organization. Take a look at your flaw prevalence and priorities and train your developers. If you teach them secure coding practices they will write more secure code the first time.

Step three: walk.

Stop digging and put the shovels down. We know that we have problems to fix from the old code (security debt). Let’s make sure we don’t add to the pile. Now is the time to institute a security gate. No new code can be merged until it passes your security policy. We’re not talking about the entire application, just the new stuff. Don’t let insecure code come into the system. By finding and addressing the problems before check-ins, you won’t slow your downstream process. This is a good time to make sure your security auditing systems integrate with your software development lifecycle systems (JIRA, Jenkins, etc.). Integrating with these systems will make the processes more seamless.
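
As a sketch of what that gate might look like (the ScanClient interface and the severity threshold are hypothetical stand-ins, not any vendor’s actual API), a CI job can scan just the changed files and return a failing exit code when anything exceeds policy:

```java
import java.util.List;

public class SecurityGate {

    // Hypothetical client for whatever scanning service your CI invokes.
    interface ScanClient {
        List<Finding> scanChangedFiles(List<String> changedFiles);
    }

    record Finding(String file, int severity, String description) {}

    // Policy: block high (4) and critical (5) findings in new code only.
    static final int MAX_ALLOWED_SEVERITY = 3;

    // Returns a nonzero exit code so the CI job (Jenkins, etc.) fails the merge.
    static int gate(ScanClient client, List<String> changedFiles) {
        List<Finding> blocking = client.scanChangedFiles(changedFiles).stream()
                .filter(f -> f.severity() > MAX_ALLOWED_SEVERITY)
                .toList();
        blocking.forEach(f -> System.err.printf("BLOCKED %s (severity %d): %s%n",
                f.file(), f.severity(), f.description()));
        return blocking.isEmpty() ? 0 : 1;
    }
}
```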

Step four: run!

Now you have a backlog of prioritized work for your team to fix and you’re not allowing the problem to get worse. You’re constantly measuring your security posture and showing continuous improvement. As you pay down your security debt you will have more time for feature development and a team with great secure coding habits.

Integrating a new standard into a system that is already working can be intimidating. But following these four steps will make the task more manageable. Also, once security is integrated, it will become a normal part of the continuous development lifecycle and your software will be better for it.


PCI Compliance & Secure Coding: Implementing Best Practices from the Beginning

July 15, 2014
Filed under: Compliance, SDLC 
Is your SDLC process built on a shaky foundation?

A lot of the revisions to PCI DSS point toward the realization that security must be built into the development process. The foundation that ultimately controls the success or failure of this process is knowledge: that means training developers to avoid the common coding flaws that introduce vulnerabilities. So let’s take a quick look at one of the common flaws that will become part of the mandate on July 30th, 2015.

PCI 3.0 added “Broken Authentication and Session Management” (OWASP Top 10 Category A2) as a category of common coding flaws that developers should protect against during the software development process. Left exposed, this category opens some pretty serious doors for attackers: accounts, passwords and session IDs can all be leveraged to hijack an authenticated session and impersonate unsuspecting end users. It’s great that your authentication page itself is secure (that’s your proverbial fortress door), but if an attacker can become one of your users, it doesn’t matter how strong that door was: they got through.
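
One concrete flaw in this category is session fixation: if the session ID issued before login survives authentication, an attacker who planted or captured that ID inherits the victim’s authenticated session. Here is a minimal servlet-era sketch of the standard defense; the authenticate() stub is hypothetical and would be backed by your real user store:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LoginServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String user = req.getParameter("user");
        if (!authenticate(user, req.getParameter("pass"))) {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // Defense: discard any pre-login session so an attacker-supplied ID is never
        // promoted to an authenticated one, then issue a fresh session ID.
        HttpSession old = req.getSession(false);
        if (old != null) {
            old.invalidate();
        }
        HttpSession fresh = req.getSession(true);
        fresh.setAttribute("user", user);
        resp.sendRedirect("/home");
    }

    // Hypothetical stub: check credentials against a salted-hash user store,
    // with attempt throttling (and never a hard-coded password).
    private boolean authenticate(String user, String pass) {
        return false; // replace with a real credential check
    }
}
```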

To have a secure development process aligned to PCI that works, developers must be aware of these types of issues from the beginning. If critical functions aren’t being secured because they are missing authentication controls, using hard-coded passwords or failing to limit authentication attempts, you need to evaluate how you got into this predicament in the first place. It all starts with those who design and develop your application(s). For the record, nobody expects them to become security experts, but we do expect them to know what flawed code looks like, and how NOT to introduce it over and over again.

According to the April 2013 Veracode State of Software Security report, stolen credentials, brute-force attacks and cross-site scripting (XSS) are among the most common attack methods used by hackers to exploit web applications. The revisions found in PCI DSS 3.0 did a lot to clarify what was originally left open to interpretation; it’s worth noting that redefining what quality assurance (QA) means doesn’t have to rock the world of your developers.

Change is scary, we get that, which is why the output we provide was designed and meant for the developers to consume, not a security team. The number of successful attacks leading to access of critical data and systems via hijacked sessions will never decrease unless we coach our developers on the basics of how to build security into their development process.


Secure Agile Q&A: APIs, IDEs and Environment Integration


A few weeks back I hosted a webinar called “Secure Agile Through Automated Toolchains: How Veracode R&D Does It”, in which I discussed the importance of security testing and how to integrate it into the Agile SDLC. There were so many questions in the open discussion that followed that I’m taking the time to answer them here. Thank you to everyone who attended the live webinar. Now, on to your questions:

Q: Can you upload non-compiled applications, from the IDE, using the IDE plugins?

A: Yes, you can upload any kind of document through both the Eclipse and Visual Studio IDE plugins. It is also possible to create other plugins using our Integrations SDK.

Q: What other Continuous Integration tools do you have a plugin for?

A: Veracode has the ability to integrate with several Continuous Integration environments. Our Jenkins Plug-In makes it easy to automate uploading to Veracode from your CI environment. In addition, Veracode provides APIs and how-to instructions for automating Veracode upload into Microsoft Team Foundation Server (TFS), Maven and Bamboo CI environments.

Q: Do you have any plugins for Visual Studio which can be integrated with Sandbox and JIRA?


A: The current version of the Visual Studio plugin cannot be integrated with Sandbox, but we plan to provide this functionality in the near future. There is no specific integration between the Visual Studio plugin and JIRA; you can use the plugin to download scan results directly from the Veracode Platform.

Q: My company is a Microsoft shop – when will these tools be ready for Visual Studio/TFS environment?


A: Instructions for integrating the Veracode service with Microsoft Team Foundation Server (TFS) are available today in the Veracode Online Help. We want to develop an end-to-end workflow that follows the process described in the webinar; the goal is to provide it in the second half of the year.

Q: Will you also be providing an IntelliJ IDEA integration SDK?

A: At this point we do not have plans to provide a plugin for IntelliJ IDEA. The goal of the SDK is to assist with integration into environments that are not supported out of the box.

Q: Do you have a reference implementation using TeamCity instead of Jenkins?


A: We do not have a reference implementation for TeamCity. We recommend using our API wrapper to integrate Veracode with TeamCity. Please see our Integrations SDK for more information.

This concludes the first round of Q&A from “Secure Agile Through Automated Toolchains: How Veracode R&D Does It”. Be sure to check out the on-demand webinar if you missed it, and come back soon for more of this Q&A.


While you wait for part two, you might also be interested in a webinar from my colleagues Chris Eng, Veracode’s VP of Research, and Ryan O’Boyle, Senior Security Researcher, titled “Building Security Into the Agile SDLC: View from the Trenches”. Chris and Ryan discuss how we’ve embedded security into our own Agile Scrum processes to rapidly deliver new applications without exposing them to critical vulnerabilities. If you have any more questions regarding anything from the webinar, I would love to hear from you in the comments section below.

Benefits of Binary Static Analysis


1. Coverage, both within applications you build and within your entire application portfolio

One of the primary benefits of binary static analysis is that it allows you to inspect all the code in your application. Mobile apps especially have binary components, but web apps, legacy back-office and desktop apps do too. You don’t want to analyze only the code you compile from source, but also the code you link in from components. Binary analysis also lets vendors feel comfortable providing an independent code-level analysis of the software you are purchasing through procurement, which enables you to do code-level security testing of the COTS applications in your organization’s portfolio. In short, binary analysis lets you cover all of the code running in your organization.

2. Tight integration into the build system and continuous integration (CI) environment

If you integrate binary static analysis into your CI environment, you can achieve 100% automation with no manual (developer) steps. The build process can run the binary analysis by calling an API, and results can be automatically brought into a defect ticketing system, also through an API. Code analysis is now transparent and inescapable: developers see security defects in their normal defect queue and fix them without needing to perform any configuration or testing, saving valuable developer time.
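
Here is a rough illustration of the shape of that automation, not a working vendor integration: the Scanner interface is a hypothetical stand-in for your analysis vendor’s API, while the ticketing call uses JIRA’s documented REST endpoint (POST /rest/api/2/issue); verify the payload against your JIRA version before relying on it.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class PostBuildScan {

    // Hypothetical stand-in for the vendor API your build calls after compiling.
    interface Scanner {
        List<String> scanAndSummarizeFlaws(String artifactPath);
    }

    static void fileTickets(Scanner scanner, String artifact,
                            String jiraBaseUrl, String basicAuth) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        for (String flaw : scanner.scanAndSummarizeFlaws(artifact)) {
            // One defect ticket per flaw, dropped straight into the developers' queue.
            String json = """
                    {"fields": {"project": {"key": "SEC"},
                                "summary": "%s",
                                "issuetype": {"name": "Bug"}}}"""
                    .formatted(flaw.replace("\"", "'"));
            HttpRequest req = HttpRequest.newBuilder(URI.create(jiraBaseUrl + "/rest/api/2/issue"))
                    .header("Authorization", "Basic " + basicAuth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();
            HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println("JIRA responded: " + resp.statusCode());
        }
    }
}
```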

3. Contextual analysis

Binary static analysis analyzes your code along with all the other components of the application, within the context of the platform it was built for. It can trace tainted data from its source through the complete data flow to a risky sink function. Partial analysis of pieces of a program misses this context and will be less accurate, producing both false positives and false negatives. Any security expert will tell you context is extremely important: a section of code can be rendered insecure or secure by the code it is called from or the code it calls into.
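
A contrived two-class illustration of why that context matters. Scanned on its own, the helper below looks like a harmless utility; only an analysis that sees the whole program connects the untrusted request parameter to the command-execution sink:

```java
import java.io.IOException;

// In isolation this class appears benign: it just archives a file.
class ReportHelper {
    static void archive(String fileName) throws IOException {
        // Risky sink: the argument is spliced into a shell command line.
        Runtime.getRuntime().exec(new String[] {"sh", "-c", "tar czf backup.tgz " + fileName});
    }
}

class ReportController {
    // Untrusted source: imagine this value arriving straight from a web request.
    void handleDownload(String userSuppliedName) throws IOException {
        // Whole-program (contextual) analysis links this source to the exec() sink;
        // analyzing ReportHelper alone would miss it.
        ReportHelper.archive(userSuppliedName);
    }
}
```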

With a complete program you can also perform Software Composition Analysis (SCA) to identify components with known vulnerabilities. A9: Using Components with Known Vulnerabilities is one of the OWASP Top 10 risks, so you want to make sure you can analyze the entire program. Veracode has built SCA into the binary static analysis process.

Veracode’s binary static analysis process.

4. Higher fidelity of analysis

Some languages, like C and C++, give the compiler latitude to generate different machine code. Source code analysis is blind to decisions made by the compiler, and there are documented cases of both the GCC and Microsoft C/C++ compilers removing security checks and memory-clearing operations, opening up security holes. MITRE has categorized this vulnerability as CWE-14: Compiler Removal of Code to Clear Buffers. The paper “WYSINWYX: What You See Is Not What You Execute” by Gogul Balakrishnan and Thomas Reps describes how “there can be a mismatch between what a programmer intends and what is actually executed on the processor.”


Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part II

Welcome to another round of Agile SDLC Q&A. Last week Ryan and I took some time to answer questions from our webinar, “Building Security Into the Agile SDLC: View from the Trenches”; in case you missed it, you can see Part I here. Now on to more of your questions!

Q. What would you recommend as a security process around continuous build?

Chris: It really depends on what the frequency is. If you’re deploying once a day and you have automated security tools as a gating function, it’s possible but probably only if you’ve baked those tools into the build process and minimized human interaction. If you’re deploying more often than that, you’re probably going to start thinking differently about security – taking it out of the critical path but somehow ensuring nothing gets overlooked. We’ve spoken with companies who deploy multiple times a day, and the common theme is that they build very robust monitoring and incident response capabilities, and they look for anomalies. The minute something looks suspect they can react and investigate quickly. And the nice thing is, if they need to hotfix, they can do it insanely fast. This is uncharted territory for us; we’ll let you know when we get there.

Q. What if you only have one security resource to deal with app security – how would you leverage just one resource with this “grooming” process?

Chris: You’d probably want to have that person work with one Scrum team (or a small handful) at a time. As they ran security grooming with each team, they would document as rigorously as possible the criteria that led them to attach security tasks to a particular story. This will vary from one team to the next because every product has a different threat model. Once the security grooming criteria are documented, you should be able to hand off that part of the process to a team member, ideally a Security Champion type who would own and take accountability for representing security needs. From time to time, the security SME might want to audit the sprint to make sure that nothing is slipping through the cracks, and if so, revise the guidelines accordingly.

Q. Your “security champion” makes me think to the “security satellite” from BSIMM; do you have an opinion on BSIMM applicability in the context of Agile?

Chris: Yes, the Security Satellite concept maps very well to the Security Champion role. BSIMM is a good framework for considering the different security activities important to an organization, but it’s not particularly prescriptive in the context of Agile.

Q. We are an agile shop with weekly release cycles. The time between when the build is complete and the release is about 24 hours. We are implementing web application vulnerability scans for each release. How can we fix high-risk vulnerabilities before each release? Is it better to delay the release or fix it in the next release?

Chris: One way to approach this is to put a policy in place to determine whether or not the release can ship. For example, “all high and very high severity flaws must be fixed” makes the acceptance criteria very clear. If you think about security acceptance in the same way as feature acceptance, it makes a lot of sense. You wouldn’t push out the release with a new feature only half-working, right? Another approach is to handle each vulnerability on a case-by-case basis. The challenge is, if there is not a strong security culture, the team may face pressure to push the release regardless of the severity.

Q. How do you address findings identified from regular automated scans? Are they added to the next day’s coding activities? Do you ever have a security sprint?

Ryan: Our goal is to address any findings identified within the sprint. This means that while it may not be next-day, it will be very soon afterward and prior to release. We have considered dedicated security sprints.

Q. Who will do security grooming, the development team or the security team? What checklist is included in the grooming?

Ryan: Security grooming is a joint effort between the teams. In some cases the security representative, Security Architect in our terminology, attends the full team grooming meeting. In the cases where the full team grooming meeting would be too large of a time commitment for the Security Architect, they will hold a separate, shorter security grooming session soon afterwards instead.

Q. How important to your success was working with your release engineering teams?

Chris: Initially not very important, because we didn’t have dedicated release engineering. The development and QA teams were in charge of deploying the release. Even with a release engineering team, though, most of the security work is done well before the final release is cut, so the nature of their work doesn’t change much. Certainly it was helpful to understand the release process – when is feature freeze, code freeze, push night, etc. – and the various procedures surrounding a release, so that you as a security team can understand their perspective.

Q. How do you handle accumulated security debt?

Chris: The first challenge is to measure all of it, particularly debt that accumulated prior to having a real SDLC! Even security debt that you’re aware of may never get taken into a sprint because some feature will always be deemed more important. So far the way we’ve been able to chip away at security debt is to advocate directly with product management and the technical leads. This isn’t exactly ideal, but it beats not addressing it at all. If your organization ever pushes to reduce tech debt, it’s a good opportunity to point out that security debt should be considered part of tech debt.


This now concludes our Q&A. A big thank you to everyone who attended the webinar for making it such a huge success. If you have any more questions, we would love to hear from you in the comments section below. In addition, if you are interested in learning more about Agile security, you might be interested in an upcoming webinar from Veracode’s director of platform engineering, Peter Chestna. On April 17th, Peter will host “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”, in which he will share how we’ve leveraged Veracode’s cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA) and why it’s become essential to our success. Register now!

Automating Good Practice Into The Development Process

April 7, 2014
Filed under: ALL THINGS SECURITY, SDLC 

I’ve always liked code reviews. Can I make others like them too?


I’ve understood the benefit of code reviews, and enjoyed them, for almost as long as I’ve been developing software. It’s not just the excuse to attack others (although that can be fun), but the learning—looking at solutions other people come up with, hearing suggestions on my code. It’s easy to fall into patterns in coding, and not realize the old patterns aren’t the best approach for the current programming language or aren’t the most efficient approach for the current project.

Dwelling on good code review comments can be a great learning experience. Overlooked patterns can be appreciated, structure and error handling can be improved, and teams can develop more consistency in coding style. Even poor review feedback like “I don’t get this” can identify flaws or highlight where code needs to be reworked for clarity and improved maintenance.

But code reviews are rare. Many developers don’t like them. Some management teams don’t see the value, while other managers claim code reviews are good but don’t make room in the schedule (or push them out of the schedule when a project starts to slip). I remember one meeting where the development manager said “remember what will be happening after code freeze.” He expected us to say “Code Reviews!”, but a couple of members of the team responded “I’m going to Disney World!” Everyone laughed, but the Disney trips were enjoyed while the code reviews never happened.

In many groups and projects, code reviews never happened, except when I dragged people to my cubicle and forced them to look at small pieces of code. I developed a personal strategy which helped somewhat: When I’m about ready to commit a change set I try to review the change as though it would be sent to a code review. “What would someone complain about if they saw this change?” It takes discipline and doesn’t have most of the benefits of a real code review but it has helped improve my code.

The development of interactive code review tools helped the situation. Discussions on changes could be asynchronous instead of trying to find a common time to schedule a meeting, and reviewers could see and riff on each other’s comments. It was still hard to encourage good comments and find the time for code reviews (even if “mandated”), but the situation was better.

The next advancement was integrating code review tools into the source control workflow. This required (or at least strongly encouraged, depending on configuration) approved code reviews before allowing merges. The integration meant less effort was needed to set up the code reviews. There’s also another hammer to encourage people to review the code: “Please review my code so I can commit my change.”

The barriers to code reviews also exist for security reviews, but the problem can be worse as many developers aren’t trained to find security problems. Security issues are obviously in-scope for code reviews, but the issue of security typically isn’t front of mind for reviewers. Even at Veracode the focus is on making the code work and adjusting the user interface to be understandable for customers.

But we do have access to Veracode’s security platform. We added “run our software on itself” to our release process. We would start a scan, wait for the results, review the flaws found, and tell developers to fix the issues. As with code reviews, security reviews can be easy to put off because it takes time to go through the process steps.

As with code reviews, we have taken steps to integrate security review into the standard workflow. The first step was to automatically run a scan during automated builds. A source update to a release branch causes a build to be run, sending out an email if the build fails. If the build works, the script uses the Veracode APIs to start a static scan of the build. This eliminated the first few manual steps in the security scan process. (With the Veracode 2014.2 release the Veracode upload APIs have an “auto start” feature to start a scan without intervention after a successful pre-scan, making automatic submission of scans easier.)

To further reduce the overhead of the security scans, we improved the Veracode JIRA Import Plugin to match our development process. After a scan completes, the Import plugin notices the new results, and imports the significant flaws into JIRA bug reports in the correct JIRA project. Flaws still need to be assigned to developers to fix, but it now happens in the standard triage process used for any reported problem. If a flaw has a mitigation approved, or if a code change eliminates the flaw, the plugin notices the change and marks the JIRA issue as resolved.

The automated security scans aren’t our entire process. We also have security reviews for proposals and designs so developers understand the key security issues before they start to code, and the security experts are always available for consultation in addition to being involved in every stage of development. The main benefit of the automated scans is that they take care of the boring review to catch minor omissions and oversights in coding, leaving more time for the security experts to work on the higher level security strategy instead of closing yet another potential XSS issue.


Veracode’s software engineers understand the challenge of building security into the Agile SDLC. We live and breathe that challenge. We use our own application security technology to scale our security processes so our developers can go further faster. On April 17th, our director of platform engineering, Peter Chestna, will share in a free webinar how we’ve leveraged our cloud-based platform to integrate application security testing with our Agile development toolchain, and why it’s become essential to our success. Register for Peter’s webinar, “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”, to learn from our experience.
