Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part II

Welcome to another round of Agile SDLC Q&A. Last week Ryan and I took some time to answer questions from our webinar, “Building Security Into the Agile SDLC: View from the Trenches“; in case you missed it, you can see Part I here. Now on to more of your questions!

Q. What would you recommend as a security process around continuous build?

Chris

Chris: It really depends on what the frequency is. If you’re deploying once a day and you have automated security tools as a gating function, it’s possible but probably only if you’ve baked those tools into the build process and minimized human interaction. If you’re deploying more often than that, you’re probably going to start thinking differently about security – taking it out of the critical path but somehow ensuring nothing gets overlooked. We’ve spoken with companies who deploy multiple times a day, and the common theme is that they build very robust monitoring and incident response capabilities, and they look for anomalies. The minute something looks suspect they can react and investigate quickly. And the nice thing is, if they need to hotfix, they can do it insanely fast. This is uncharted territory for us; we’ll let you know when we get there.
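For teams heading in that direction, here is a minimal sketch of the gating idea, assuming a command-line scanner that signals gating flaws through its exit code (the scanner name and flags below are hypothetical placeholders, not any particular product):

```python
import subprocess
import sys

def security_gate(artifact: str) -> int:
    """Run the security scanner as a gating step in the deploy pipeline.

    The scanner command and flags are hypothetical placeholders; the point
    is that the build inherits the scanner's exit code, so a failed scan
    blocks the deploy with no human in the loop.
    """
    result = subprocess.run(
        ["security-scanner", "--fail-on", "high", artifact],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Security gate failed; blocking the deploy.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate("build/app.war"))
```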

Q. What if you only have one security resource to deal with app security – how would you leverage just one resource with this “grooming” process?

Chris

Chris: You’d probably want to have that person work with one Scrum team (or a small handful) at a time. As they security groomed with each team, they would want to document as rigorously as possible the criteria that led to them attaching security tasks to a particular story. This will vary from one team to the next because every product has a different threat model. Once the security grooming criteria are documented, you should be able to hand off that part of the process to a team member, ideally a Security Champion type person who would own and take accountability for representing security needs. From time to time, the security SME might want to audit the sprint and make sure that nothing is slipping through the cracks, and if so, revise the guidelines accordingly.

Q. Your “security champion” makes me think of the “security satellite” from BSIMM; do you have an opinion on BSIMM applicability in the context of Agile?

Chris

Chris: Yes, the Security Satellite concept maps very well to the Security Champion role. BSIMM is a good framework for considering the different security activities important to an organization, but it’s not particularly prescriptive in the context of Agile.

Q. We are an agile shop with weekly release cycles. The time between when the build is complete and the release is about 24 hours. We are implementing web application vulnerability scans for each release. How can we fix high-risk vulnerabilities before each release? Is it better to delay the release or fix it in the next release?

Chris

Chris: One way to approach this is to put a policy in place to determine whether or not the release can ship. For example, “all high and very high severity flaws must be fixed” makes the acceptance criteria very clear. If you think about security acceptance in the same way as feature acceptance, it makes a lot of sense. You wouldn’t push out the release with a new feature only half-working, right? Another approach is to handle each vulnerability on a case-by-case basis. The challenge is, if there is not a strong security culture, the team may face pressure to push the release regardless of the severity.
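To make the acceptance-criteria framing concrete, here is a tiny illustrative check; the severity labels and the findings format are assumptions, not any particular scanner’s output:

```python
# Ship/no-ship decision mirroring a policy like
# "all high and very high severity flaws must be fixed".
BLOCKING_SEVERITIES = {"very high", "high"}

def release_can_ship(findings) -> bool:
    """findings: iterable of dicts like {"id": "CWE-89", "severity": "High"}."""
    blockers = [f for f in findings if f["severity"].lower() in BLOCKING_SEVERITIES]
    for flaw in blockers:
        print(f"Blocking flaw: {flaw['id']} ({flaw['severity']})")
    return not blockers

# Example: one open high-severity flaw blocks the release.
print(release_can_ship([{"id": "CWE-79", "severity": "Medium"},
                        {"id": "CWE-89", "severity": "High"}]))  # -> False
```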

Q. How do you address findings identified from regular automated scans? Are they added to the next day’s coding activities? Do you ever have a security sprint?

Ryan

Ryan: Our goal is to address any findings identified within the sprint. This means that while it may not be next-day, it will be very soon afterward and prior to release. We have considered dedicated security sprints.

Q. Who will do security grooming? The development team or the security team? What checklist is included in the grooming?

Ryan

Ryan: Security grooming is a joint effort between the teams. In some cases the security representative, Security Architect in our terminology, attends the full team grooming meeting. In the cases where the full team grooming meeting would be too large of a time commitment for the Security Architect, they will hold a separate, shorter security grooming session soon afterwards instead.

Q. How important to your success was working with your release engineering teams?

Chris

Chris: Initially not very important, because we didn’t have dedicated release engineering. The development and QA teams were in charge of deploying the release. Even with a release engineering team, though, most of the security work is done well before the final release is cut, so the nature of their work doesn’t change much. Certainly it was helpful to understand the release process – when feature freeze, code freeze, and push night happen, and so on – and the various procedures surrounding a release, so that we as a security team could understand their perspective.

Q. How do you handle accumulated security debt?

Chris

Chris: The first challenge is to measure all of it, particularly debt that accumulated prior to having a real SDLC! Even security debt that you’re aware of may never get taken into a sprint because some feature will always be deemed more important. So far, the way we’ve been able to chip away at security debt is to advocate directly with product management and the technical leads. This isn’t exactly ideal, but it beats not addressing it at all. If your organization ever pushes to reduce tech debt, it’s a good opportunity to point out that security debt should be considered part of tech debt.


This now concludes our Q&A. A big thank you to everyone who attended the webinar for making it such a huge success. If you have any more questions, we would love to hear from you in the comments section below. If you are interested in learning more about Agile security, you may also want to catch an upcoming webinar from Veracode’s director of platform engineering, Peter Chestna. On April 17th, Peter will host “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”, in which he will share how we’ve leveraged Veracode’s cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA) and why it’s become essential to our success. Register now!

Customer Announcement: Securing Your Applications From Heartbleed

April 12, 2014
Filed under: Customer Success, Vulnerabilities 

If you are a current Veracode customer, we’re delighted to announce that we can help you rapidly address the Heartbleed bug. We are offering our comprehensive capabilities for application vulnerability detection to all our customers, at no charge, to help you respond to this threat.

What is Veracode doing to help our customers?

We have two capabilities in particular to help you determine your risks from Heartbleed. These services will identify potentially vulnerable components in both your application code and public facing websites.

  • Heartbleed Component Analysis: Our software composition analysis engine looks for evidence of the use of OpenSSL in your code (static analysis) and produces a report detailing at-risk applications (a simplified sketch of this kind of check appears after this list).
  • Heartbleed Web Perimeter Analysis: Our massively parallel dynamic analysis Discovery technology detects the use of OpenSSL and produces a report of vulnerable websites.
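As a rough illustration of the component-analysis idea (not a description of how Veracode’s engine actually works), a script can look for OpenSSL version strings embedded in binaries and flag the releases known to be affected by Heartbleed (1.0.1 through 1.0.1f); the scanned directory below is a placeholder:

```python
import re
from pathlib import Path

# OpenSSL 1.0.1 through 1.0.1f are affected by Heartbleed; 1.0.1g is fixed.
VERSION = re.compile(rb"OpenSSL \d+\.\d+\.\d+[a-z]?")
VULNERABLE = re.compile(rb"OpenSSL 1\.0\.1[a-f]?$")

def scan_file(path: Path) -> set:
    """Return the vulnerable OpenSSL version strings found in one binary."""
    data = path.read_bytes()
    return {v for v in VERSION.findall(data) if VULNERABLE.match(v)}

for path in Path("build/libs").rglob("*"):   # placeholder artifact directory
    if path.is_file():
        flagged = scan_file(path)
        if flagged:
            versions = b", ".join(sorted(flagged)).decode()
            print(f"{path}: possibly vulnerable ({versions})")
```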

Learn more about what we’re doing to help our customers here.

Or reach out to us directly to get started with securing your application infrastructure.

Heartbleed And The Curse Of Third-Party Code

The recently disclosed vulnerability in OpenSSL pokes a number of enterprise pain points. Chief among them: the proliferation of vulnerable, third-party code.


By now, a lot has been written about Heartbleed (heartbleed.com), the gaping hole in OpenSSL that laid bare the security of hundreds of thousands of web sites and web based applications globally.

Heartbleed is best understood as a really nasty coding error in a ‘heartbeat’ feature that was added to OpenSSL in March 2012. The heartbeat was designed to prevent OpenSSL connections from timing out – a common problem with always-on web applications that was impacting their performance.

If you haven’t read about Heartbleed, there are some great write-ups available. I’ve covered the problem here. And, if you’re so inclined, there’s a blow-by-blow analysis of the code underlying the Heartbleed flaw here and here. Are you wondering if your web site or web-based application is vulnerable to Heartbleed? Try this site: http://filippo.io/Heartbleed.

This one hurts – there’s no question about it. As the firm IOActive notes, it exposes private encryption keys, allowing encrypted SSL sessions to be revealed. But it also appears to leave data such as user sessions subject to hijacking, and exposes encrypted search queries and passwords used to access major online services – at least until those services are patched. And, because the vulnerable version of OpenSSL has circulated for over two years, it’s apparent that many of these services, and the data that traverses them, have been vulnerable to snooping.

But Heartbleed hurts for other reasons. Notably: it’s a plain reminder of the extent to which modern IT infrastructure has become dependent on the integrity of third-party code that too often proves to be unreliable. In fact, Heartbleed and OpenSSL may end up being the poster child for third-party code audits.

First, the programming error in question was a head-slapper, Johannes Ullrich of the SANS Internet Storm Center told me. Specifically, the TLS heartbeat extension that was added is missing a bounds check when handling requests. The flaw means that a connected client or server can use TLS heartbeat requests to retrieve up to 64KB of memory from the machine running OpenSSL.
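Here is a conceptual sketch of that missing check, written in Python rather than the actual OpenSSL C and greatly simplified: the broken handler trusts the length field in the request instead of the number of payload bytes that actually arrived, so it echoes back whatever happens to sit next to the payload in memory.

```python
# Simulated process memory: the 4-byte heartbeat payload ("bird") sits next
# to unrelated secrets, as it would on the heap of a real TLS server.
MEMORY = bytearray(b"bird" + b"\x00" * 4 + b"SECRET PRIVATE KEY MATERIAL")
PAYLOAD_OFFSET, ACTUAL_PAYLOAD_LEN = 0, 4

def heartbeat_broken(claimed_len: int) -> bytes:
    # Bug: echo back claimed_len bytes without checking how many payload
    # bytes were actually received.
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + claimed_len])

def heartbeat_fixed(claimed_len: int) -> bytes:
    # Fix: silently discard requests whose claimed length exceeds the
    # payload that actually arrived.
    if claimed_len > ACTUAL_PAYLOAD_LEN:
        return b""
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + claimed_len])

print(heartbeat_broken(64))   # leaks the adjacent "secret" bytes
print(heartbeat_fixed(64))    # returns nothing
```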

Second: OpenSSL’s use is so pervasive that even OpenSSL.org, which maintains the software, can’t say for sure where it’s being used. But Ullrich says the list is a long one, and includes ubiquitous tools like OpenVPN, countless mailservers that use SSL, client software including web browsers on PCs and even Android mobile devices.

We’ve talked about the difficulty of securing third-party code before – and often. In our Talking Code video series, Veracode CTO Chris Wysopal said that organizations need to work with their software suppliers – whether they are commercial or open source groups. “The best thing to do is to tell them what issues you found. Ask them questions about their process.”

Josh Corman, now of the firm Sonatype, has called the use and reuse of open source code like OpenSSL a ‘force multiplier’ for vulnerabilities – meaning the impact of any exploitable vulnerability in the platform grows with the popularity of that software.

For firms that want to know not “am I exposed?” (you are) but “how am I exposed?” to problems like Heartbleed, there aren’t easy answers.

Veracode has introduced a couple of products and services to address the kinds of problems raised by Heartbleed. Today, customers can take advantage of a couple of services that make response and recovery easier.

Services like the Software Composition Analysis can find vulnerable, third-party components in an application portfolio. Knowing what components you have in advance makes the job of patching and recovering that much easier.

Also, the Web Application Perimeter Monitoring service will identify public-facing application servers operating in your environment. It’s strange to say, but many organizations don’t have a clear idea of how many public-facing applications they even have, or who is responsible for their management.

Beyond that, some important groups are starting to take notice. The latest OWASP Top 10 added the use of “known vulnerable components” to the list of security issues that most hamper web applications. And, in November, the FS-ISAC added audits of third-party code to their list of recommendations for vendor governance programs.

Fixing Heartbleed will, as its name suggests, be messy and take years. But it will be worthwhile if Heartbleed’s heartburn serves as a wake-up call to organizations to pay more attention to the third-party components at use within their IT environments.

Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part I

April 10, 2014
Filed under: research 

Recently, Ryan O’Boyle and I hosted the webinar “Building Security Into the Agile SDLC: View From the Trenches”. We would like to take a minute to thank all those who attended the live broadcast for submitting questions. There were so many questions from our open discussion following the webinar that we wanted to take the time to follow up and answer them. So without further ado, the Q&A.

Q. Did using JIRA give you greater visibility?

Ryan

Ryan: Standardizing on one tool for tracking development work across all development teams, and using that same tool to track the security reviews, gave both us and the development teams improved visibility.

Q. Was the Kanban team a dedicated security team or was it just a team performing in a different way?

Ryan
Ryan: Just a team performing in a different way. This meant that while we had developed our core process around Scrum teams, we had to find a similar way to integrate with a new team operating with a different process.

Q. Do you recommend we have security training and expect security requirements coming from those writing stories/reqs or would that all be on the SCRUM team?

Ryan
Ryan: In our process, the Security Architect is responsible for working with the Product Owner to define security-related Acceptance Criteria or entire stories. As those participating in security grooming gain familiarity and certain patterns emerge, they can write them as well. I would recommend security training for everyone involved.

Q. Can a Technical Lead/Scrum Master play Security Engineer Role if they have security background?

Chris: Yes, though I think you want to be careful of putting too many responsibilities on the Scrum Master. A Tech Lead can certainly be trained up to pitch in on some subset of the Security Engineer role, such as routine code reviews. This is similar to what we are rolling out with our Security Champions program, except that the Security Champion can be any member of the team. It will take longer for them to develop the expertise and intuition needed to perform tasks like security design reviews or focused penetration testing.

Q. How did you ensure test strategy, test plan, and security considerations are still correct when the stories are constantly being added or modified during the sprints?

Chris: Modifying stories during sprints is a violation of Scrum principles, so if/when this does happen, we try to make sure it is addressed during Retro. Adding stories during sprints can still be challenging in the cases where the story was created on-the-fly. If it was pulled out of backlog, it would already have security criteria attached. However if it was a “just-in-time” story (e.g. acute customer pain point), we ask the Scrum Masters to inform us ASAP so that we can assess the security needs. In the near future it will be the Security Champion’s job to keep an eye out for things like this.

Q. What threat modeling tools do you use? Do you use any risk analysis/assessments to shape how you develop security requirements and their priorities?

Chris: We do not use formal threat modeling tools. At the story level, we are doing light, informal threat modeling focused heavily on protecting against unauthorized access to customer data. We plan to take some steps to formalize this, but we also want to be cautious of creating a bloated process.

Q. Outside of reviewing every user story, how do you ensure you don’t miss things?

Chris: We run automated static and dynamic analyses against each release candidate after code freeze. Every once in a while this picks up an implementation issue that might have been missed during code review, so it serves as a nice additional layer of defense. Additionally, we hire external consulting firms to perform a web app penetration test twice a year. All that being said, we’ll absolutely miss things. Nothing is perfect. When we do become aware of any security issues that have escaped to production, we take a risk-based approach to determining the urgency of the fix. What’s nice is that our deployment process allows us to test and push fixes relatively quickly if an off-cycle patch is needed.

Q. Did you guys make security requirements as part of Definition of Done of user stories?

Ryan: Yes, we consider security a part of our Definition of Done and to that point add and review against specific Acceptance Criteria on stories with security impact.

Q. So for any security testing, are the results ever sent directly back to the contributing developer? Or are the security test results always reviewed first by SMEs to triage/prioritize?

Ryan: Development teams run their own static analysis scans and do the initial review of the results. A security SME will review the results of a later scan that incorporates many developers’ work. Code review or pen test findings that result from an in-sprint security review will be communicated back to the developer immediately so they can be addressed.

Q. Do you see any process changes for security testing?

Ryan
Ryan: Automation, automation, automation.

That is all we have time for at the moment, but check back next week for the second half of our Agile SDLC Q&A. In the meantime, if you found the Agile Security webinar useful, consider registering for the upcoming webinar from Veracode’s director of platform engineering, Peter Chestna: “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”. In this technical webinar, Peter will share how we’ve leveraged Veracode’s cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA) and why it’s become essential to our success.

Beware the Takeout Menu

April 9, 2014
Filed under: Third-Party Software 

When addressing enterprise security, the weakest links – the points of least resistance – should be hardened to prevent breaches.


An illuminating article came out in the New York Times yesterday about the cyber-security risk posed to large enterprises by third-parties.

The article describes a classic, drive-by application-layer attack in which cyber-attackers breached a big oil company by injecting malware into the online menu of a Chinese restaurant that was popular with employees. When the workers browsed the menu, they inadvertently downloaded code that gave the attackers a foothold in the oil company’s network — and presumably, access to all kinds of valuable IP such as the quantity and location of all of the company’s oil discoveries worldwide.

The point of the article is that cyber-attackers are now targeting third-party applications and suppliers — such as the Chinese takeout software used in the watering hole attack and the HVAC company whose credentials were stolen for the Target breach — as the path of least resistance to sensitive enterprise data. One of the sources quoted in the article suggests that third-party suppliers are involved in as many as 70% of breaches.

(Someone posted an amusing comment that “The movie 2001 had it wrong. It won’t be HAL that won’t open the pod bay door but a pimply faced kid in New Jersey hacking into HAL” — but the reality is that it’s more likely to be an organized crime gang in Eastern Europe or foreign military units performing state-sponsored espionage.)

As security teams get better at hardening their networks with next-generation technologies such as Palo Alto and FireEye, attackers are simply getting smarter by looking for weak links at the application layer and in the software supply chain. As the article points out, this is a clever strategy because supply chain vendors are already behind the firewall and “often don’t have the same security standards as their clients.”

The analytics collected by our cloud-based application security platform reinforce that point: 90% of third-party applications uploaded to the platform include at least one OWASP Top 10 vulnerability, such as SQL Injection or Cross-Site Scripting (Enterprise Testing of the Software Supply Chain).

What are the best practices for addressing third-party risk? Start by understanding all aspects of your third-party supply chain: the software you outsource, purchase or use via SaaS; the software you incorporate as components and frameworks in your in-house applications; and the service providers and contractors who have privileged access to your systems. If you aren’t continuously assessing these, you are accepting a much higher level of risk.

Another interesting factoid from the Times article: unlike banks, which spend up to 12% of their IT budgets on security, retailers spend, on average, less than 5% of their budgets on security. To see what leaders in financial services — such as Morgan Stanley, Goldman Sachs, GE Capital and Thomson Reuters — are recommending as three critical controls for managing third-party software risk, see the FS-ISAC whitepaper “Appropriate Software Security Control Types for Third Party Service and Product Providers”.

One of the controls recommended by FS-ISAC is the use of automated binary static analysis to ensure your third-party software is compliant with corporate security policies, based on minimum acceptable levels of risk (e.g., OWASP Top 10, CWE severity levels, etc.). This matches our experience working with hundreds of third-party vendors — enterprises can successfully reduce third-party software risk by creating ongoing, enterprise-wide governance programs with standardized policies and by working directly with their vendors to ensure they’re compliant.

As Target taught us, the security posture of your third-party vendors is also your responsibility. And if they turn out to be the path of least resistance for cyber-attackers, it’s your company and your customers that ultimately suffer.

Automating Good Practice Into The Development Process

April 7, 2014
Filed under: ALL THINGS SECURITY, SDLC 

I’ve always liked code reviews. Can I make others like them too?


I’ve understood the benefit of code reviews, and enjoyed them, for almost as long as I’ve been developing software. It’s not just the excuse to attack others (although that can be fun), but the learning—looking at solutions other people come up with, hearing suggestions on my code. It’s easy to fall into patterns in coding, and not realize the old patterns aren’t the best approach for the current programming language or aren’t the most efficient approach for the current project.

Dwelling on good code review comments can be a great learning experience. Overlooked patterns can be appreciated, structure and error handling can be improved, and teams can develop more consistency in coding style. Even poor review feedback like “I don’t get this” can identify flaws or highlight where code needs to be reworked for clarity and improved maintenance.

But code reviews are rare. Many developers don’t like them. Some management teams don’t see the value, while other managers claim code reviews are good but don’t make room in the schedule (or push them out of the schedule when a project starts to slip.) I remember one meeting where the development manager said “remember what will be happening after code freeze.” He expected us to say “Code Reviews!”, but a couple members of the team responded “I’m going to Disney World!” Everyone laughed, but the Disney trips were enjoyed while the code reviews never happened.

In many groups and projects, code reviews never happened, except when I dragged people to my cubicle and forced them to look at small pieces of code. I developed a personal strategy which helped somewhat: When I’m about ready to commit a change set I try to review the change as though it would be sent to a code review. “What would someone complain about if they saw this change?” It takes discipline and doesn’t have most of the benefits of a real code review but it has helped improve my code.

The development of interactive code review tools helped the situation. Discussions on changes could be asynchronous instead of trying to find a common time to schedule a meeting, and reviewers could see and riff on each other’s comments. It was still hard to encourage good comments and find the time for code reviews (even if “mandated,”) but the situation was better.

The next advancement was integrating code review tools into the source control workflow. This required (or at least strongly encouraged depending on configuration) approved code reviews before allowing merges. The integration meant less effort was needed to set up the code reviews. There’s also another hammer to encourage people to review the code: “Please review my code so I can commit my change.”

The barriers to code reviews also exist for security reviews, but the problem can be worse as many developers aren’t trained to find security problems. Security issues are obviously in-scope for code reviews, but the issue of security typically isn’t front of mind for reviewers. Even at Veracode the focus is on making the code work and adjusting the user interface to be understandable for customers.

But we do have access to Veracode’s security platform. We added “run our software on itself” to our release process. We would start a scan, wait for the results, review the flaws found, and tell developers to fix the issues. As with code reviews, security reviews can be easy to put off because it takes time to go through the process steps.

As with code reviews, we have taken steps to integrate security review into the standard workflow. The first step was to automatically run a scan during automated builds. A source update to a release branch causes a build to be run, sending out an email if the build fails. If the build works, the script uses the Veracode APIs to start a static scan of the build. This eliminated the first few manual steps in the security scan process. (With the Veracode 2014.2 release the Veracode upload APIs have an “auto start” feature to start a scan without intervention after a successful pre-scan, making automatic submission of scans easier.)
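For illustration, a post-build step along these lines can be a short script against the upload APIs mentioned above; the endpoint paths, parameter names, and credentials below are assumptions sketched from memory, not a verified API reference:

```python
import requests

API_BASE = "https://analysiscenter.veracode.com/api/5.0"   # assumed endpoint base
AUTH = ("api-user@example.com", "api-password")            # placeholder credentials

def upload_and_scan(app_id: str, artifact_path: str) -> None:
    """Upload a build artifact, then run a pre-scan that auto-starts the full
    static scan on success (the 'auto start' behavior mentioned above).
    Endpoint and parameter names are assumptions, not a verified API reference."""
    with open(artifact_path, "rb") as artifact:
        requests.post(f"{API_BASE}/uploadfile.do",
                      auth=AUTH,
                      data={"app_id": app_id},
                      files={"file": artifact}).raise_for_status()
    requests.post(f"{API_BASE}/beginprescan.do",
                  auth=AUTH,
                  data={"app_id": app_id, "auto_scan": "true"}).raise_for_status()

# Called by the CI job after a successful release-branch build, e.g.:
# upload_and_scan("12345", "build/app.war")
```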

To further reduce the overhead of the security scans, we improved the Veracode JIRA Import Plugin to match our development process. After a scan completes, the Import plugin notices the new results, and imports the significant flaws into JIRA bug reports in the correct JIRA project. Flaws still need to be assigned to developers to fix, but it now happens in the standard triage process used for any reported problem. If a flaw has a mitigation approved, or if a code change eliminates the flaw, the plugin notices the change and marks the JIRA issue as resolved.
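We can’t show the plugin’s internals here, but the underlying idea maps naturally onto JIRA’s REST API. A rough sketch, where the JIRA URL, credentials, and the shape of the flaw record are illustrative assumptions rather than the plugin’s actual data model:

```python
import requests

JIRA_URL = "https://jira.example.com"      # placeholder JIRA instance
AUTH = ("bot-user", "bot-password")        # placeholder credentials

def file_flaw(project_key: str, flaw: dict) -> str:
    """Create a JIRA issue for one significant flaw from a completed scan,
    using the standard JIRA REST 'create issue' endpoint."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": f"[Security] {flaw['category']} in {flaw['file']}:{flaw['line']}",
            "description": flaw["description"],
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "APPSEC-123"

# file_flaw("APPSEC", {"category": "XSS", "file": "search.jsp",
#                      "line": 42, "description": "Reflected XSS in query parameter."})
```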

The automated security scans aren’t our entire process. We also have security reviews for proposals and designs so developers understand the key security issues before they start to code, and the security experts are always available for consultation in addition to being involved in every stage of development. The main benefit of the automated scans is that they take care of the boring review to catch minor omissions and oversights in coding, leaving more time for the security experts to work on the higher level security strategy instead of closing yet another potential XSS issue.


Veracode’s software engineers understand the challenge of building security into the Agile SDLC. We live and breathe that challenge. We use our own application security technology to scale our security processes so our developers can go further faster. On April 17th, our director of platform engineering, Peter Chestna, will share in a free webinar how we’ve leveraged our cloud-based platform to integrate application security testing with our Agile development toolchain, and why it’s become essential to our success. Register for Peter’s webinar, “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”, to learn from our experience.

CERF: Classified NSA Work Mucked Up Security For Early TCP/IP

April 3, 2014
Filed under: ALL THINGS SECURITY 

Internet pioneer Vint Cerf says that he had access to cutting edge cryptographic technology in the mid 1970s that could have made TCP/IP more secure – too bad the NSA wouldn’t let him!


Did the National Security Agency, way back in the 1970s, allow its own priorities to stand in the way of technology that might have given rise to a more secure Internet? You wouldn’t be crazy to reach that conclusion after hearing an interview with Google Vice President and Internet Evangelist Vint Cerf on Wednesday.

As a graduate student at Stanford in the 1970s, Cerf had a hand in the creation of ARPANet, the world’s first packet-switched network. He later went on to work as a program manager at DARPA, where he funded research into packet network interconnection protocols that led to the creation of the TCP/IP protocol that is the foundation of the modern Internet.

Cerf is a living legend who has received just about every honor a technologist can, including the National Medal of Technology, the Turing Award and the Presidential Medal of Freedom. But he made clear in the Google Hangout with host Leo Laporte that the work he has been decorated for – TCP/IP, the Internet’s lingua franca – was at best intended as a proof of concept, and that only now – with the adoption of IPv6 – is it mature (and secure) enough for what Cerf called “production use.”

Specifically, Cerf said that given the chance to do it over again he would have designed earlier versions of TCP/IP to look and work like IPv6, the latest version of the IP protocol with its integrated network-layer security and massive 128-bit address space. IPv6 is only now beginning to replace the exhausted IPv4 protocol globally.

“If I had in my hands the kinds of cryptographic technology we have today, I would absolutely have used it,” Cerf said. (Check it out here)

Researchers at the time were working on the development of just such a lightweight but powerful cryptosystem. On Stanford’s campus, Cerf noted that Whit Diffie and Martin Hellman had researched and published a paper that described a public key cryptography system. But they didn’t have the algorithms to make it practical. (That task would fall to Ron Rivest, Adi Shamir and Leonard Adleman, who published the RSA algorithm in 1977).

Curiously enough, however, Cerf revealed that he did have access to some really bleeding edge cryptographic technology back then that might have been used to implement strong, protocol-level security into the earliest specifications of TCP/IP. Why weren’t they used, then? The culprit is one that’s well known now: the National Security Agency.

Cerf told host Leo Laporte that the crypto tools were part of a classified project he was working on at Stanford in the mid 1970s to build a secure, classified Internet for the National Security Agency.

“During the mid 1970s while I was still at Stanford and working on this, I also worked with the NSA on a secure version of the Internet, but one that used classified cryptographic technology. At the time I couldn’t share that with my friends,” Cerf said. “So I was leading this kind of schizoid existence for a while.”

Hindsight is 20/20, as the saying goes. Neither Cerf, nor the NSA, nor anyone else could have predicted how much of our economy and that of the globe would come to depend on what was then a government-backed experiment in computer networking. Besides, we don’t know exactly what cryptographic tools Cerf had access to as part of his secure Internet research, or how suitable (and scalable) they would have been.

And who knows, maybe too much security early on would have stifled the growth of the Internet in its infancy – keeping it focused on the defense and research community, but acting as an inhibitor to wider commercial adoption?

But the specter of the NSA acting in its own interest, without any obvious interest in fostering the larger technology sector, is one that has been well documented in recent months, as disclosures by the former NSA contractor Edward Snowden revealed how the NSA worked to undermine cryptographic standards promoted by NIST and the firm RSA.

It’s hard to listen to Cerf lamenting the absence of strong authentication and encryption in the foundational protocol of the Internet, or to think about the myriad of online ills in the past two decades that might have been preempted with a stronger and more secure protocol, and not wonder what might have been.

Lawsuits, Regulations and Third-Party Security

March 28, 2014
Filed under: Compliance 

PCI Compliance is a hot issue for retailers.

Every year the world seems to grow a little more regulated – and punitive. We’re now seeing banks suing retailers and compliance management firms over PCI assessments. And the recent breach in question appears to be related to insufficient controls around third-party suppliers.

According to the Verizon PCI Compliance Report, 84% of organizations that suffered a data breach were out of compliance with application-layer security controls (Requirement 6) — compared to an average of only 47% of all organizations assessed by Verizon QSAs in 2013. This suggests a strong correlation between the likelihood of suffering a data breach and non-compliance with application-layer security.

Security and regulation are inextricably linked – and will continue to be so. A recent survey of security executives conducted by 451 Research showed that the #1 driver for security budgets is compliance – with 38% of respondents saying their budgets increased specifically to address regulatory or legal compliance requirements.

So does compliance actually strengthen security – or does it simply create the illusion of security?

On the one hand, security programs structured solely to meet compliance and audit requirements tend to be implemented as annual “point-in-time” projects. These programs are really once-a-year sprints to appease auditors who are often more “checkbox-oriented” and may not spend the time required to uncover weaknesses in how controls are actually implemented and monitored.

This typically leads companies down the path of doing just enough to comply – in other words, actually lowering enterprise risk may not be the primary goal of these projects.

On the other hand, many enterprises have implemented effective security programs which are structured to implement, improve and mature security practices. Unlike unstructured, ad hoc approaches to enterprise security, these programs are – well – programmatic.

Programmatic approaches are focused on managing risk, accelerating adoption of best practices and using automation to simplify, centralize and standardize controls where possible. In these programs, compliance is merely the reporting output of an ongoing effort, rather than an isolated project.

One interesting thing we’ve seen is that effective, enterprise-wide risk reduction programs often begin as compliance-funded projects.

This is one of the topics our experts will be exploring in next week’s PCI 3.0 webinar – how we’ve seen our customers use existing PCI projects as a springboard for building more structured application security programs with automated testing and centralized policies, metrics and reporting.

PCI 3.0 focuses on third-party software risk for the first time, adding guidance on “custom software developed by a third party.” Similarly, the OWASP Top 10 now includes a requirement to check third-party components and libraries for vulnerabilities, and PCI 3.0 recommends using the OWASP Top 10 guidelines as a best practice.

All of which may now expand the scope of compliance efforts to include controlling risk from third-party software such as packaged and outsourced applications, third-party frameworks and components, and open source code.

If more of your budget increases will be driven by compliance, isn’t it better to spend it on a program that actually reduces your enterprise risk?

Hell is Other Contexts: How Wearables Will Transform Application Development

Wearable technology is in its infancy. But don’t be fooled: the advent of wearables will fundamentally change the job of the application developer. Here’s how.


There’s no doubt about it: wearable technology is picking up steam. But as wearables gain traction with consumers and businesses, application developers will need to tackle a huge, new challenge, namely: context.

What do I mean by ‘context’? It’s the notion – unique to wearable technology – that applications will need to be authored to be aware of and respond to the situation of the wearer. Just received a new email message? Great. But do you want to splash an alert to your user if she’s hurtling down a crowded city street on her bicycle? Text message? Great – but do you want to buzz your user’s watch if the heart rate monitor suggests that he’s asleep?

These kinds of conundrums are a new consideration for application developers accustomed to writing for devices – ‘endpoints’ that are presumed to be objects that are distinct from their owner and, often, stationary.

Google has already called attention to this in its developer previews of Android Wear – that company’s attempt to extend its Android mobile phone OS to wearables. Google has encouraged wearable developers to be “good citizens.” “With great power comes great responsibility,” Google’s Justin Koh reminds would-be developers in a Google video.

“It’s extremely important that you be considerate of when and how you notify a user….” Developers are strongly encouraged to make notifications and other interactions between the wearable device and its wearer as ‘contextually relevant as possible.’ Google has provided APIs (application program interfaces) to help with this. For example, Koh notes that developers can use APIs in Google Play Services to set up a geo-fence that will make sure the wearer is in a specific location (i.e. “home”) before displaying certain information.


Or, motion detection APIs for Wear can be used to front or hide notifications when the wearer is performing certain actions, like bicycling. Google is having fun with that; a promotional video shows a watch prompting its dancing wearer to look up the name of the song she’s dancing to. But it’s likely that the activity detection APIs will be just as important as a safety feature of Android Wear devices.

The problem, of course, is that considerations like these require a much deeper understanding about how humans behave in a much wider range of contexts than just ‘sitting at a desk.’ Anyone who has had the experience of pulling up behind a car whose driver is engaged in a cell phone conversation or (God forbid) texting appreciates the dangers posed by portable devices—the design of which doesn’t take context into consideration.

In the very near future, application design decisions will need to do a much better job of balancing feature development against an almost limitless range of use contexts as well as considerations of personal safety. Sensors will no longer be simply an excuse for extending features – they’ll be the developer’s lifelines to the wearer: a source of real-time information about the context that the user is in. That data will (or should) affect the behavior of the wearable application.

It’s also likely that wearable device makers will need to give some thought to fields such as cognitive science and even sociology in designing their products. Google Glass is a hugely important development: the first commercially available consumer technology that attempts to break down the wall between the device and the wearer. But recent stories about Glass wearers (derisively referred to as “Glassholes”) being harassed and even attacked by irate, privacy minded crowds suggests that maybe the public isn’t ready to embrace the ‘everyone is filming everyone all the time’ model of human social interaction. That matters.

Or, consider that most wearable devices have settled on dings and vibrations to notify users of events (new email, calendar appointment, etc.). But that’s a function of the technology that can be miniaturized and implanted in a small device, not the wearers’ feelings about how best to inform them of something. Shouldn’t we at least start with an idea of what customers want – even if that’s different from what has come before? We’ve learned a lot since the days of Clippy, Microsoft’s hateful talking paperclip. But we haven’t learned everything.

To be clear: wearable tech is still in its infancy. For all the hype, Google Wear is just a platform for relaying alerts and other data from your Android phone to a compatible Android watch. That’s cool – but hardly earth-shattering. But it’s a mistake to discount the movement toward wearable tech as a fad, or wearable devices as the mobile phone’s poor cousin. The migration to wearables will change the way we live, work, and play. But it’s a change that requires some thought and planning by the software development community to get right. It’s far from clear that will happen.
