Shining a Flashlight on Mobile Application Permissions

April 23, 2014
Filed under: application security, Mobile 

Brightest Flashlight App Permissions
The Federal Trade Commission (FTC) recently completed and announced the terms of a settlement with GoldenShore Technologies, a one-man development shop based out of Idaho and creator of the popular “Brightest Flashlight” application for Android. Back in December the FTC, in response to a number of complaints, began investigating the app, which was doing a lot more than turning on your phone’s LED camera flash. Prior to installation, the app requested permission to reach the internet, to access contacts, and even to track real-time geolocation via GPS or IP address. So why does a basic flashlight app need all those permissions? To sell the private data of its 50 to 100 million users to less-than-scrupulous third parties, of course.

Consumers often don’t pay attention to the EULA, allowing developers to slip in all kinds of pernicious language. And lest you think this is just an Android problem, it occurs in Apple mobile applications as well. Because apps like this don’t behave the way traditional malware behaves, they often get through both Android’s and Apple’s vetting processes. It becomes incredibly easy for developers to collect private information on a massive scale and then sell that data to a disreputable party. These types of privacy issues are only amplified in enterprises with weak or no MDM policies. Think about the types of data your employees could be unknowingly transmitting just by clicking “OK” to a set of permissions they didn’t read for some mobile app they thought was innocuous. Pretty scary, huh?

But the FTC just doled out some punishment, right? Well, yes, but it amounts to a slap on the wrist with a wet noodle. GoldenShore Technologies has been ordered to delete all existing geolocation and device-specific data the app has collected. Going forward, the app must make clear to consumers that it is collecting their data and what will happen to it. There are a few other restrictions, but most importantly, there is no financial penalty. The developer won’t even have to remit the profits he made from selling user data. Without a significant monetary penalty it’s unlikely that this type of behavior will be curbed in any way. Developers will continue to profit from exposing consumer and enterprise data, to the detriment of us all.

So the question is, what can enterprises do to mitigate the risks inherent in mobile applications? Our static and dynamic behavioral analysis can pick up on the types of things that Android and even Apple gatekeepers miss. Our dynamic testing simulates the way the end user would deploy an app and then reports exactly what is happening: the internal mechanisms, the network connections made, and the data that is compiled and sent out across those connections. Our partnerships with MDM and MAM vendors help enterprises use the information provided by our APIs to easily enforce BYOD policies by setting up rules that use risk ratings to allow or block apps on the mobile device. That way you can protect your enterprise from applications dangerous to your privacy, your network, and your information – because it’s unlikely that the GoldenShore Technologies settlement will encourage widespread development of less risky mobile apps.
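To make the idea of a risk-rating rule concrete, here is a minimal sketch (in Python, with a hypothetical rating scale and behavior names – not Veracode’s actual API) of the kind of allow-or-block decision an MDM integration might automate:

```python
# Hypothetical sketch: decide whether an app may be installed on a managed
# device, based on a risk rating supplied by an app-vetting service.
RISK_THRESHOLD = 70          # assumed 0-100 scale; higher means riskier
BLOCKED_BEHAVIORS = {"sends_location_offsite", "reads_contacts_without_consent"}

def allow_app(risk_rating, observed_behaviors):
    """Return True if the app may be installed under this BYOD policy."""
    if risk_rating >= RISK_THRESHOLD:
        return False
    if BLOCKED_BEHAVIORS & set(observed_behaviors):
        return False
    return True

print(allow_app(85, ["sends_location_offsite"]))  # False: blocked by policy
print(allow_app(20, []))                          # True: low risk, nothing flagged
```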

Time to Crowdfund Open Source Security?

Will crowd funding bug bounties for OpenSSL solve its security problems? Probably not.


For years, security experts and thought leaders have railed against the concept of “security through obscurity” – the notion that you can keep vulnerable software secure just by preventing others from understanding how it works.

Corporate executives worried about relying on open source software like the Linux operating system – whose underlying source code was managed by volunteers and there for the whole world to see.

The answer to such doubters was the argument that open source was more secure (or, at least, no less secure) than closed source competitors precisely because it was open. Open source packages – especially widely used packages – were managed by a virtual army of volunteers. Dozens or even scores of eyes vetted new features or updates. And anyone was free to audit the code at any time. With such an active community supporting open source projects, developers who submitted sub-par code or, God forbid, introduced a bug, vulnerability or back door would be identified, called to task and banished.

That was a comforting notion. And there are certainly plenty of examples of just this kind of thing happening. (Linux creator Linus Torvalds recently made news by openly castigating key Linux kernel developer Kay Sievers for submitting buggy code and suspending him from further contributions.)

But the discovery of the Heartbleed vulnerability puts the lie to the ‘thousands of eyes’ notion. Some of the earliest reporting on Heartbleed noted that the team supporting the software consisted of just four developers – only one of them full time.

“The limited resources behind the encryption code highlight a challenge for Web developers amid increased concern about hackers and government snoops,” the Wall Street Journal noted. OpenSSL Software Foundation President Steve Marquess was later asked about security audits and replied, “we simply don’t have the funding for that. The funding we have is to support food and rent for people doing the most work on OpenSSL.”

So does Heartbleed mean a shift away from reliance on open source? Is it a final victory of security-through-obscurity? Not so fast. As I noted in my post last week, vulnerabilities aren’t limited to open source components – any third party code might contain potentially damaging code flaws and vulnerabilities that escape detection.

Akamai learned that lesson the hard way this week with proprietary code the company had been using to do memory allocation around SSL keys. The company initially claimed that code provided mitigation against the Heartbleed vulnerability and contributed the patch back to the OpenSSL community. But a quick review found a glaring vulnerability in the patch code that, combined with the Heartbleed vulnerability, would have still left SSL encryption keys vulnerable to snooping.

“Our lesson of the last few days is that proprietary products are not stronger,” Akamai’s CSO Andy Ellis told me in an interview. “So, ‘yes,’ you can move to proprietary code, but whose? And how can you trust it?” Rather than run away from open source, Ellis believes the technology community should ‘lean in’ (my words not his) and pour resources – people and money – into projects like OpenSSL.

But how? Casey Ellis over at the firm BugCrowd has one idea on how to fund improvements to – and a proper audit of – OpenSSL. He launched a crowd-funded project to pay bug bounties for a security audit of OpenSSL.

“Not every Internet user can contribute code or security testing skills to OpenSSL,” Ellis wrote. “But with a very minor donation to the fund, everyone can play a part in making the Internet safer.”

A paid bounty program would mirror efforts by companies like Google, Adobe and Microsoft to attract the attention of the best and brightest security researchers to their platforms. No doubt: bounties will beget bug discoveries, some of them important. But a bounty program isn’t a substitute for a full security audit and, beyond that, a program for managing OpenSSL (or similar projects) over the long term. And, after all, the Heartbleed vulnerability doesn’t just point out a security failing; it raises questions about the growth and complexity of the OpenSSL code base. Bounties won’t make it any easier to address those bigger and more important problems.

As I noted in a recent article over at ITWorld, even companies like Apple, with multi-billion dollar war chests and a heavy reliance on open source software, are reluctant to channel money to organizations like the Apache Software Foundation, Eclipse or the Linux Foundation that help to manage open source projects. This article over at Mashable makes a similar (albeit broader) argument: if companies want to pick the fruit of open source projects, they should water the tree as well.

In the end, there’s no easy solution to the problem. Funding critical open source code is going to require both individuals and corporations to step up and donate money, time and attention – whether through licensing and support agreements, or as part of a concerted effort to provide key projects with the organizational and technical support they need to maintain and expand critical technology platforms like OpenSSL.

Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part II

Welcome to another round of Agile SDLC Q&A. Last week Ryan and I took some time to answer questions from our webinar, “Building Security Into the Agile SDLC: View from the Trenches”; in case you missed it, you can see Part I here. Now on to more of your questions!

Q. What would you recommend as a security process around continuous build?


Chris: It really depends on what the frequency is. If you’re deploying once a day and you have automated security tools as a gating function, it’s possible but probably only if you’ve baked those tools into the build process and minimized human interaction. If you’re deploying more often than that, you’re probably going to start thinking differently about security – taking it out of the critical path but somehow ensuring nothing gets overlooked. We’ve spoken with companies who deploy multiple times a day, and the common theme is that they build very robust monitoring and incident response capabilities, and they look for anomalies. The minute something looks suspect they can react and investigate quickly. And the nice thing is, if they need to hotfix, they can do it insanely fast. This is uncharted territory for us; we’ll let you know when we get there.

Q. What if you only have one security resource to deal with app security – how would you leverage just one resource with this “grooming” process?


Chris: You’d probably want to have that person work with one Scrum team (or a small handful) at a time. As they performed security grooming with each team, they would want to document as rigorously as possible the criteria that led them to attach security tasks to a particular story. This will vary from one team to the next because every product has a different threat model. Once the security grooming criteria are documented, you should be able to hand off that part of the process to a team member, ideally a Security Champion type person who would own and take accountability for representing security needs. From time to time, the security SME might want to audit the sprint and make sure that nothing is slipping through the cracks, and if so, revise the guidelines accordingly.

Q. Your “security champion” makes me think of the “security satellite” from BSIMM; do you have an opinion on BSIMM applicability in the context of Agile?


Chris: Yes, the Security Satellite concept maps very well to the Security Champion role. BSIMM is a good framework for considering the different security activities important to an organization, but it’s not particularly prescriptive in the context of Agile.

Q. We are an agile shop with weekly release cycles. The time between when the build is complete and the release is about 24 hours. We are implementing web application vulnerability scans for each release. How can we fix high risk vulnerabilities before each release? Is it better to delay the release or fix it in the next release?


Chris: One way to approach this is to put a policy in place to determine whether or not the release can ship. For example, “all high and very high severity flaws must be fixed” makes the acceptance criteria very clear. If you think about security acceptance in the same way as feature acceptance, it makes a lot of sense. You wouldn’t push out the release with a new feature only half-working, right? Another approach is to handle each vulnerability on a case-by-case basis. The challenge is, if there is not a strong security culture, the team may face pressure to push the release regardless of the severity.
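As a rough illustration of that first approach, here is a minimal sketch (the severity labels and scan-result format are assumptions, not any particular product’s output) of a release gate that fails the build when the policy isn’t met:

```python
# Minimal sketch of a release gate: fail the build if the latest scan
# reports any flaw at a severity the acceptance policy forbids.
import sys

POLICY_BLOCKS = {"Very High", "High"}   # assumed severity labels

def gate(findings):
    """findings: list of dicts like {"id": 101, "severity": "High"}."""
    blockers = [f for f in findings if f["severity"] in POLICY_BLOCKS]
    for f in blockers:
        print(f"Blocking flaw {f['id']} (severity: {f['severity']})")
    return 1 if blockers else 0

if __name__ == "__main__":
    sample = [{"id": 101, "severity": "High"}, {"id": 102, "severity": "Medium"}]
    sys.exit(gate(sample))   # a non-zero exit code fails the release job
```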

Q. How do you address findings identified from regular automated scans? Are they added to the next day’s coding activities? Do you ever have a security sprint?


Ryan: Our goal is to address any findings identified within the sprint. This means that while it may not be next-day, it will be very soon afterwards and prior to release. We have considered dedicated security sprints.

Q. Who will do security grooming? The development team or the security team? What checklist is included in the grooming?


Ryan: Security grooming is a joint effort between the teams. In some cases the security representative, Security Architect in our terminology, attends the full team grooming meeting. In the cases where the full team grooming meeting would be too large of a time commitment for the Security Architect, they will hold a separate, shorter security grooming session soon afterwards instead.

Q. How important to your success was working with your release engineering teams?


Chris: Initially not very important, because we didn’t have dedicated release engineering. The development and QA teams were in charge of deploying the release. Even with a release engineering team, though, most of the security work is done well before the final release is cut, so the nature of their work doesn’t change much. Certainly it was helpful to understand the release process – when is feature freeze, code freeze, push night, etc. – and the various procedures surrounding a release, so that you as a security team can understand their perspective.

Q. How do you handle accumulated security debt?


Chris: The first challenge is to measure all of it, particularly debt that accumulated prior to having a real SDLC! Even security debt that you’re aware of may never get taken into a sprint because some feature will always be deemed more important. So far the way we’ve been able to chip away at security debt is to advocate directly with product management and the technical leads. This isn’t exactly ideal, but it beats not addressing it at all. If your organization ever pushes to reduce tech debt, it’s a good opportunity to point out that security debt should be considered as part of tech debt.


This now concludes our Q&A. A big thank you to everyone who attended the webinar for making it such a huge success. If you have any more questions, we would love to hear from you in the comments section below. In addition, if you are interested in learning more about Agile security, check out this upcoming webinar from Veracode’s director of platform engineering. On April 17th, Peter Chestna will be hosting a webinar entitled “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”. In this webinar Peter will share how we’ve leveraged Veracode’s cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA) — and why it’s become essential to our success. Register now!

Heartbleed And The Curse Of Third-Party Code

The recently disclosed vulnerability in OpenSSL pokes a number of enterprise pain points. Chief among them: the proliferation of vulnerable, third-party code.


By now, a lot has been written about Heartbleed (heartbleed.com), the gaping hole in OpenSSL that laid bare the security of hundreds of thousands of web sites and web based applications globally.

Heartbleed is best understood as a really nasty coding error in a ‘heartbeat’ feature that was added to OpenSSL in March 2012. The heartbeat was designed to prevent OpenSSL connections from timing out – a common problem with always-on Web applications that was impacting the performance of those applications.

If you haven’t read about Heartbleed, there are some great write-ups available. I’ve covered the problem here. And, if you’re so inclined, there’s a blow-by-blow analysis of the code underlying the Heartbleed flaw here and here. Are you wondering if your web site or web-based application is vulnerable to Heartbleed? Try this site: http://filippo.io/Heartbleed.

This one hurts – there’s no question about it. As the firm IOActive notes, it exposes private encryption keys, allowing encrypted SSL sessions to be revealed. But it also appears to leave data such as user sessions subject to hijacking, and exposes encrypted search queries and passwords used to access major online services – at least until those services are patched. And, because the vulnerable version of OpenSSL has circulated for over two years, it’s apparent that many of these services and the data that traverses them have been vulnerable to snooping.

But Heartbleed hurts for other reasons. Notably: it’s a plain reminder of the extent to which modern IT infrastructure has become dependent on the integrity of third-party code that too often proves to be unreliable. In fact, Heartbleed and OpenSSL may end up being the poster child for third-party code audits.

First, the programming error in question was a head-slapper, Johannes Ullrich of the SANS Internet Storm Center told me. Specifically, the TLS heartbeat extension is missing a bounds check when handling requests. The flaw means that a connected client or server can use TLS heartbeat requests to retrieve up to 64K of memory from the machine running OpenSSL.
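The real code is C, but the class of bug is easy to illustrate. The sketch below is a simplification, not OpenSSL’s actual parsing logic: it shows how trusting a length field supplied by the peer, rather than the number of bytes actually received, echoes back whatever happens to sit beyond the payload.

```python
# Simplified illustration of a heartbeat-style bounds-check bug (not OpenSSL's code).
MEMORY = b"payload-bytes" + b"SECRET-KEY-MATERIAL..."  # adjacent process memory

def heartbeat_vulnerable(claimed_len, payload):
    # Trusts the length claimed by the peer: echoes back claimed_len bytes,
    # which can run past the real payload into adjacent memory.
    return MEMORY[:claimed_len]

def heartbeat_fixed(claimed_len, payload):
    # Bounds check: never echo back more than was actually received.
    if claimed_len > len(payload):
        return b""          # silently discard the malformed request
    return payload[:claimed_len]

print(heartbeat_vulnerable(64, b"payload-bytes"))  # leaks the 'secret' bytes
print(heartbeat_fixed(64, b"payload-bytes"))       # returns nothing
```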

Second: OpenSSL’s use is so pervasive that even OpenSSL.org, which maintains the software, can’t say for sure where it’s being used. But Ullrich says the list is a long one, and includes ubiquitous tools like OpenVPN, countless mailservers that use SSL, client software including web browsers on PCs and even Android mobile devices.

We’ve talked about the difficulty of securing third-party code before – and often. In our Talking Code video series, Veracode CTO Chris Wysopal said that organizations need to work with their software suppliers – whether they are commercial or open source groups. “The best thing to do is to tell them what issues you found. Ask them questions about their process.”

Josh Corman, now of the firm Sonatype, has called the use and reuse of open source code like OpenSSL a ‘force multiplier’ for vulnerabilities – meaning the impact of any exploitable vulnerability in the platform grows with the popularity of that software.

For firms that want to know not “am I exposed?” (you are) but “how am I exposed?” to problems like Heartbleed, there aren’t easy answers.

Veracode has introduced a couple of products and services to address the kinds of problems raised by Heartbleed. Today, customers can take advantage of a couple of services that make response and recovery easier.

Services like Software Composition Analysis can find vulnerable, third-party components in an application portfolio. Knowing what components you have in advance makes the job of patching and recovering that much easier.

Also, the Web Application Perimeter Monitoring service will identify public-facing application servers operating in your environment. It’s strange to say, but many organizations don’t have a clear idea of how many public-facing applications they even have, or who is responsible for their management.

Beyond that, some important groups are starting to take notice. The latest OWASP Top 10 added the use of “known vulnerable components” to the list of security issues that most hamper web applications. And, in November, the FS-ISAC added audits of third-party code to their list of recommendations for vendor governance programs.

Fixing Heartbleed will, as its name suggests, be messy and take years. But it will be worthwhile if Heartbleed’s heartburn serves as a wake-up call to organizations to pay more attention to the third-party components at use within their IT environments.

Hell is Other Contexts: How Wearables Will Transform Application Development

Wearable technology is in its infancy. But don’t be fooled: the advent of wearables will fundamentally change the job of the application developer. Here’s how.


There’s no doubt about it: wearable technology is picking up steam. But as wearables gain traction with consumers and businesses, application developers will need to tackle a huge, new challenge, namely: context.

What do I mean by ‘context’? It’s the notion – unique to wearable technology – that applications will need to be authored to be aware of and respond to the situation of the wearer. Just received a new email message? Great. But do you want to splash an alert to your user if she’s hurtling down a crowded city street on her bicycle? Text message? Great – but do you want to buzz your user’s watch if the heart rate monitor suggests that he’s asleep?

These kinds of conundrums are a new consideration for application developers accustomed to writing for devices – ‘endpoints’ that are presumed to be objects that are distinct from their owner and, often, stationary.

Google has already called attention to this in its developer previews of Android Wear – that company’s attempt to extend its Android mobile phone OS to wearables. Google has encouraged wearable developers to be “good citizens.” “With great power comes great responsibility,” Google’s Justin Koh reminds would-be developers in a Google video.

“It’s extremely important that you be considerate of when and how you notify a user…” Developers are strongly encouraged to make notifications and other interactions between the wearable device and its wearer as ‘contextually relevant as possible.’ Google has provided APIs (application program interfaces) to help with this. For example, Koh notes that developers can use APIs in Google Play Services to set up a geo-fence that will make sure the wearer is in a specific location (i.e. “home”) before displaying certain information.


Or, motion detection APIs for Wear can be used to surface or hide notifications when the wearer is performing certain actions, like bicycling. Google is having fun with that; a promotional video shows a watch prompting its dancing wearer to look up the name of the song she’s dancing to. But it’s likely that the activity detection APIs will be just as important as a safety feature of Android Wear devices.

The problem, of course, is that considerations like these require a much deeper understanding about how humans behave in a much wider range of contexts than just ‘sitting at a desk.’ Anyone who has had the experience of pulling up behind a car whose driver is engaged in a cell phone conversation or (God forbid) texting appreciates the dangers posed by portable devices—the design of which doesn’t take context into consideration.

In the very near future, application design decisions will need to do a much better job of balancing feature development against an almost limitless range of use contexts as well as considerations of personal safety. Sensors will no longer be simply an excuse for extending features – they’ll be the developer’s lifelines to the wearer: a source of real-time information about the context that the user is in. That data will (or should) affect the behavior of the wearable application.
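A platform-agnostic sketch makes the point (the activity names and sensor fields below are hypothetical, not Android Wear’s actual API): the notification layer has to ask what the wearer is doing before it decides whether, and how, to interrupt.

```python
# Hypothetical sketch: decide whether, and how, to surface a notification
# given what the device's sensors say about the wearer's current context.
def deliver(notification, context):
    if context["activity"] == "cycling":
        return "defer"              # don't distract a rider in traffic
    if context["asleep"]:
        return "silent"             # log it, but no buzz
    if context["location"] == "home" and notification["private"]:
        return "show_full"          # safe to show the details
    return "show_summary"           # default: minimal, glanceable alert

ctx = {"activity": "walking", "asleep": False, "location": "home"}
print(deliver({"type": "email", "private": True}, ctx))   # show_full
```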

It’s also likely that wearable device makers will need to give some thought to fields such as cognitive science and even sociology in designing their products. Google Glass is a hugely important development: the first commercially available consumer technology that attempts to break down the wall between the device and the wearer. But recent stories about Glass wearers (derisively referred to as “Glassholes”) being harassed and even attacked by irate, privacy-minded crowds suggest that maybe the public isn’t ready to embrace the ‘everyone is filming everyone all the time’ model of human social interaction. That matters.

Or, consider that most wearable devices have settled on dings and vibrations to notify users of events (new email, calendar appointment, etc.). But that’s a function of the technology that can be miniaturized and implanted in a small device, not of the wearer’s feelings about how best to be informed of something. Shouldn’t we at least start with an idea of what customers want – even if that’s different from what has come before? We’ve learned a lot since the days of Clippy, Microsoft’s hateful talking paperclip. But we haven’t learned everything.

To be clear: wearable tech is still in its infancy. For all the hype, Android Wear is just a platform for relaying alerts and other data from your Android phone to a compatible Android watch. That’s cool – but hardly earth-shattering. But it’s a mistake to discount the movement toward wearable tech as a fad, or wearable devices as the mobile phone’s poor cousin. The migration to wearables will change the way we live, work, and play. But it’s a change that requires some thought and planning by the software development community to get right. It’s far from clear that will happen.

Introducing the iOS Reverse Engineering Toolkit

March 20, 2014
Filed under: application security, Mobile, research 

It should be the goal of every worker to expend less time and energy to achieve a task, while still maintaining, or even increasing, productivity. As an iOS penetration tester, I find myself repeating the same manual tasks for each test: typing out the same commands to run the various tools that are required to help me do my job. And to be honest, it’s completely monotonous. Every time I fat-finger a key, I lose productivity, forcing me to expend more time and energy to achieve the task. I’m a fan of automation. I’m a fan of streamlined innovation that saves me time and still accomplishes, for the most part, the same results. It was this desire to save time, and to reduce my likelihood of suffering from carpal tunnel, that led me to create the iOS Reverse Engineering Toolkit.

What is iRET?

So what is iRET? Well, for lack of a better, more eloquent definition, it’s a toolkit that allows you to automate many of the manual tasks an iOS penetration tester would need to perform in order to analyze and reverse engineer iOS applications. And the bonus is…this can all be performed right on the device. Still sound like an interesting toolkit? Great, read on.

Already sold? Download the toolkit here.

iRET Features

What exactly does iRET do that can help you, an iOS penetration tester, perform your job more efficiently? Below, in Figure #1, is a screenshot of the main landing page of the application. This page lets you know what tools need to be installed, and even tells you if they aren’t. This is also the page where you select the installed application you would like to begin analyzing/reverse engineering.

Figure #1 – Main iRET Page


The tools, listed on the left in the image above, and dependencies required to run iRET are freely available both on the web and within various repositories on Cydia. After selecting an application from the dropdown, the user is redirected into the main iRET functionality page. Below is an overview of each feature associated with the iRET toolkit.

Figure #2 – Binary Analysis Tab


Binary Analysis: The binary analysis tab automates the execution of otool, which is used to extract information about the binary. The displayed data includes binary header information, such as whether PIE is enabled and the targeted architecture. It identifies if the binary is encrypted, if it has stack-smashing protection enabled, and if it has automatic reference counting (ARC) enabled.
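Under the hood, this kind of check amounts to shelling out to otool and scanning its output. The sketch below is a simplified illustration of that automation, not iRET’s actual implementation; it assumes otool is installed on the (jailbroken) device, and the binary path is a placeholder.

```python
# Simplified sketch of automating otool to pull header and protection
# information from an iOS app binary.
import subprocess

def binary_info(binary_path):
    headers = subprocess.run(["otool", "-hv", binary_path],
                             capture_output=True, text=True).stdout
    load_cmds = subprocess.run(["otool", "-l", binary_path],
                               capture_output=True, text=True).stdout
    symbols = subprocess.run(["otool", "-Iv", binary_path],
                             capture_output=True, text=True).stdout
    return {
        "pie_enabled": "PIE" in headers,
        "encrypted": "cryptid 1" in load_cmds,
        "stack_smashing_protection": "stack_chk_guard" in symbols,
        "arc_enabled": "objc_release" in symbols,
    }

print(binary_info("/path/to/MyApp.app/MyApp"))   # placeholder path
```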

Figure #3 – Keychain Analysis Tab


Keychain Analysis: The keychain analysis tab automates the execution of ptoomey’s “keychain_dumper” utility. This utility allows the user to analyze the keychain contents, including passwords, keys, certificates, etc. for any sensitive information.

Figure #4 – Database Analysis Tab


Database Analysis: The database analysis tab automatically populates a dropdown containing all databases (.db, .sqlite, .sqlite3) found within the selected application. Once a database is selected from the dropdown, sqlite3 is run automatically to display the content of the database.
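A stripped-down sketch of that workflow (paths are placeholders, and this uses Python’s built-in sqlite3 module rather than iRET’s own code) looks something like this:

```python
# Stripped-down sketch: find SQLite databases under an app's sandbox and
# dump the rows of every table in each one.
import os
import sqlite3

def find_databases(app_root):
    for dirpath, _, filenames in os.walk(app_root):
        for name in filenames:
            if name.endswith((".db", ".sqlite", ".sqlite3")):
                yield os.path.join(dirpath, name)

def dump_database(db_path):
    conn = sqlite3.connect(db_path)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        print(f"-- {db_path} :: {table}")
        for row in conn.execute(f"SELECT * FROM {table}"):  # local, trusted input
            print(row)
    conn.close()

for db in find_databases("/path/to/ExampleApp"):   # placeholder app directory
    dump_database(db)
```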

Figure #5 – Log Viewer Tab


Log Viewer: The log view tab contains two pieces of functionality. First, it lets the user review the last 100 lines of the system log (syslog) file contained on the device. Second, all identified log and text files associated with the selected application are loaded into a dropdown menu, and when selected, their content is displayed.

Figure #6 – Plist Viewer Tab


Plist Viewer: The plist view tab fills a dropdown with all of the property list files that were found for the selected application. When the user selects a property list file from the dropdown, its content will be displayed to the user.

Figure #7 – Header Files Tab


Header Files Part 1: The header files tab has three pieces of automated functionality. The first function identifies if the binary is encrypted. If the binary is encrypted, then the binary will be automatically decrypted. The second piece of functionality performs a class dump of the unencrypted binary into separate header files. These associated header files are then loaded into a dropdown menu, as seen in Figure #7 above. The third piece of functionality takes place when the user selects a header file from the dropdown menu. Once a header file is selected from the dropdown, the content of this header file is automatically converted to a theos logify format, as seen in Figure #8 below, allowing the user to easily copy/paste the content into the theos tab for quick theos tweak creation.
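That workflow chains together several well-known tools. The sketch below is a rough approximation of the chain, not iRET’s code: the tool names follow the acknowledgments at the end of this post (dumpdecrypted, class_dump_z, theos’ logify.pl), but the exact arguments and paths are assumptions.

```python
# Rough sketch of the header-files workflow: decrypt the binary if needed,
# class-dump it into headers, then turn a chosen header into a theos tweak stub.
import os
import subprocess

def decrypt(binary, workdir):
    # dumpdecrypted is typically injected via DYLD_INSERT_LIBRARIES and writes
    # a ".decrypted" copy of the binary into the working directory.
    env = dict(os.environ, DYLD_INSERT_LIBRARIES="/usr/lib/dumpdecrypted.dylib")
    subprocess.run([binary], env=env, cwd=workdir)
    return os.path.join(workdir, os.path.basename(binary) + ".decrypted")

def dump_headers(decrypted_binary, header_dir):
    # class_dump_z writes one header file per Objective-C class.
    subprocess.run(["class_dump_z", "-H", decrypted_binary, "-o", header_dir])

def logify(header_path, tweak_path):
    # logify.pl converts a header into a Tweak.xm that logs every method call.
    with open(tweak_path, "w") as out:
        subprocess.run(["logify.pl", header_path], stdout=out)
```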

Figure #8 – Headers in Theos Logify Format


The theos tab is multifunctional, and allows the user to create, edit, save and build a theos tweak in just minutes. The first part of the theos tab is the tweak creation process. Here, a form is provided, as seen in Figure #9 below, for the user to enter the information required to create the theos tweak.

Figure #9 – Theos Form


After the theos tweak is created, a dropdown is shown that allows the user to select the “makefile” or “Tweak.xm” file for viewing/editing purposes, as seen in Figure #10 below.

Figure #10 – Theos Files Displayed


Once a user selects one of the files in the dropdown, the file can then be viewed/edited. After making any changes the user can click the “Save” button to save those changes to the selected file, as seen in Figures #11 and #12 below.

Figure #11 – Viewing the Theos makefile


Figure #12 – Viewing the Copy/Pasted Header File into the Tweak.xm File


After the user has made their changes to the tweak and is ready to build it, all they need to do is click the “Build” button, at which point the tweak will be compiled and automatically copied to the /Library/MobileSubstrate/DynamicLibraries directory, as seen in Figure #13 below.

Figure #13 – Building and Installing the Theos Tweak


After the tweak has been installed, the user simply resprings their device and launches the application they have targeted with the theos tweak.

The final tab, and the final piece of functionality in the iRET toolkit, is the screenshot tab.

Figure #14 – Screenshot Tab


Screenshot Tab: This tab allows the user to view the cached screenshot, if any, of the selected application.

The iRET toolkit, like any toolkit, is not a panacea for iOS mobile penetration testing. However, it will allow you to automate many of the tasks that are required in analyzing iOS applications.

Download the iRET toolkit.

Special Thanks:

I would like to give a special thanks to all of the iOS tool/utility creators who make our jobs easier through their tireless research and contributions, including Dustin Howett (theos), Stefan Esser (dumpdecrypted), Patrick Toomey (keychain_dumper), as well as many others. I would like to thank the creators of the iNalyzer tool, which was the inspiration for iRET. I would also like to thank Richard Zuleg, who contributed his time and effort in helping me with the Python portion of this application, Bucky Spires for his assistance in troubleshooting many of the issues I experienced developing this toolkit, and Dan DeCloss for his help beta testing and making sure iRET was ready to be shared with the public. Without the efforts and assistance of those mentioned above, the development of this toolkit would never have been possible…at least not without a lot of caffeine, late nights and frustrated yelling.

Managing Flaw Review with a Large Multi-vendor Application

March 20, 2014
Filed under: application security 

The previous blog post in this series discussed the large-scale deployment of the Veracode static code analysis tool across a large enterprise, focusing on strategies and techniques for ensuring rapid adoption within individual development teams typically responsible for self-contained, homogeneous applications. However, in a large enterprise there are also applications that are developed by multiple vendors and consist of a number of divergent codebases – this blog post discusses techniques for tailoring your Veracode process to accommodate larger multi-vendor applications.


Late Programme Lifecycle Implementation of a Static Tool

A flagship programme within my organisation is a harmonised platform for our European operations; the development timescale spans several years, the programme comprises multiple teams in multiple regions, and the codebase consists of a combination of bespoke application code, third-party software and open source software. The overarching objective for the use of Veracode within this programme was to obtain a summary view of the security posture of the overall application by performing a Veracode scan at a regular cadence as part of the standard build and deploy process. The decision to adopt Veracode analysis was made at a relatively late stage of the programme lifecycle, meaning that existing contractual agreements with vendors did not mandate any kind of code analysis – how were we to get them to cooperate in this process?

Gaining Vendor Cooperation

The most common concern for a vendor unfamiliar with Veracode is the perceived high false positive rates associated with a static code analysis tool. The best way to allay such fears is to provide the vendor access to the platform at the earliest opportunity and to allow them to perform their own scans in order to gain familiarity with the platform and the quality of the findings.

There was a marked change in attitude when vendors saw the number of tracked flaws reduced from several thousand to a few hundred…

The next concern raised was the sheer volume of flaws identified – in one case a single vendor contributed over 8 million lines of source code. The key message to our vendors was that the objective is to identify and remediate exploitable risk, rather than producing a perfect result sheet. To facilitate this, our team worked to produce a mitigation workflow which broke down the various flaw categories (based on CWE) and exploitability (as determined by the Veracode analysis), resulting in a rule set that could be applied to a Veracode result set to produce a filtered set of flaws based on their risk rating. There was a marked change in attitude when vendors saw the number of tracked flaws reduced from several thousand to a few hundred; immediately the response was more proactive, especially when the flaws could be confirmed in readout calls. In summary: establish realistic and achievable targets for remediation, and ensure that you are able to measure and track progress.

Customising Vendor Specific Reports

Being able to provide customised reporting for each vendor from a single Veracode scan result set presented some unique challenges: it was important to provide each vendor with visibility of their flaws (and only their flaws) and to ensure that flaws which were not managed (due to their lower risk rating) were not reported. Additionally, flaws within third-party and open source code were to be identified and tracked, but no action was expected of the vendor; rather, these items were flagged for specific focus during the manual penetration testing phase. The Veracode platform detailed report provides a verbose description of all flaws in XML format; this was imported into Excel and a number of VBA macros were run to assign vendor names (based on module names) and to remove all items which were not tracked. The platform is able to accurately distinguish between flaws which have been newly detected and those that existed previously; a macro flagged new items for review and these were reviewed in vendor-specific readout calls with the Veracode application security experts. A readout call can involve a significant investment in time from several parties; the ability to sharply focus these calls on only new high risk items was important in ensuring active vendor participation.
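We did this with Excel and VBA macros; purely as an illustration of the same filtering idea, a sketch in Python against the detailed-report XML might look like the following (the element and attribute names are assumptions, not the exact Veracode schema, and the module-to-vendor mapping is hypothetical):

```python
# Illustrative sketch: split a detailed-report XML into per-vendor flaw lists,
# keeping only the severities that the remediation policy tracks.
import xml.etree.ElementTree as ET
from collections import defaultdict

TRACKED_SEVERITIES = {"5", "4"}            # e.g. Very High and High
MODULE_TO_VENDOR = {"payments.war": "Vendor A", "portal.ear": "Vendor B"}

def flaws_by_vendor(report_path):
    tree = ET.parse(report_path)
    by_vendor = defaultdict(list)
    for flaw in tree.iter("flaw"):         # assumed element name
        if flaw.get("severity") not in TRACKED_SEVERITIES:
            continue                       # below the tracked risk rating
        vendor = MODULE_TO_VENDOR.get(flaw.get("module"), "Unassigned")
        by_vendor[vendor].append({
            "id": flaw.get("issueid"),
            "cwe": flaw.get("cweid"),
            "module": flaw.get("module"),
        })
    return by_vendor
```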

Communicating Remediation Upstream

Finally the Veracode platform provides an API specifically for the management of flaw remediation and comments. A Microsoft Visual Studio Tools for Office (VSTO) plugin was adapted to use this API to update flaw status on the platform with the mitigation status within the vendor spreadsheet; this was important to ensure that any high level reporting done for the programme governance would report the current overall security position across all vendors with trending information. The ability of the platform to track flaw locations within successive scans meant that these mitigations were then propagated into downstream scans. Several vendors were able to recognise the value of this programme to the quality of their codebase and were pro-active in ensuring the flaw reports were fed back to their upstream development teams, and in some cases tailored readout calls were held to address specific vendor concerns, or issues emerging from their scans.

This post has given a brief overview of some of the challenges faced in our multi-vendor scenario: the key to success is to gain the cooperation of your vendors by demystifying the process, demonstrating actionable findings and showing a pragmatic, risk-based approach to the prioritisation of their remediation effort, while stressing the benefits for both parties.

RSA Perspective: Is It Time For A Cyber Safety Board?

We have government agencies to monitor the safety of cars, roads, bridges and air travel. What’s so special about cyber?


If you caught the headlines last week, you might have read about the developing scandal over a fatal problem with ignition switches in General Motors cars.

The automaker has been forced to recall 1.37 million GM cars containing a faulty part that is believed to be the cause of 31 crashes and 13 fatalities in the last decade. The scandal is that the National Highway Traffic Safety Administration (NHTSA) – the federal agency charged with maintaining vehicle safety – knew about the problem as long as seven years ago, but did not demand a recall of affected vehicles.

It’s a curious story – especially when you learn that it wasn’t as if the NHTSA ignored the problem. On the contrary, the agency ordered three Special Crash Investigations into incidents in 2004, 2005 and 2006 in which a new type of air bags failed to deploy in accidents. It also met with GM in 2007 to discuss the problem believed to be the source of the failures: a flaw that caused ignition switches to slip from “run” into “accessory” mode under the weight of heavy key chains, killing power to airbags and other systems.

But if you’re a software engineer or if you work for a company that makes software or if you’re an enterprise IT professional who is in the position of procuring software and services, you’re probably not shaking your fist at the NHTSA. You may, instead, be thinking “Wow! Talk about accountability!”

After all, no such federal, state or even industry equivalent of the NHTSA exists to oversee the operation of software, hardware and services – even in cases where those products are managing critical infrastructure and lifesaving systems.

And it’s not like the threats are hypothetical. I’ve noted on this blog how software flaws and vulnerabilities can kill people – literally. In just one dramatic example: a serial killer named Charles Cullen manipulated a design and software flaw in a drug dispensing product, Pyxis Medstation, to obtain lethal doses of medications he used to claim his victims. And, in recent months, the FDA has warned medical device makers to take more precautions to thwart cyber attacks on their products.

But would such a model work in the software world? After all, we’ve been trained to believe in a kind of ‘cyber exceptionalism’ – the notion that problems rooted in technology are fundamentally different from other real-world problems that we’ve already solved.

A panel discussion at last month’s RSA Conference posed that very question: whether a ‘National Cyber Safety Board’ was needed to put some teeth into calls for more secure application design and development.

Needless to say: this is a controversial proposal. Since its inception, the software industry has been defined by its agility and creativity. This is a space where brilliant entrepreneurs can spin a “good idea” (say: the spreadsheet, a social network or a chat application) into riches. Imagine having to pass each of those creations through the filter of some government bureaucracy like the Food and Drug Administration.

But the idea has supporters – especially within the software security industry. The panel’s moderator, Veracode’s CTO Chris Wysopal, noted that a Cyber Safety Board could provide much needed expertise to do root-cause analysis of major cyber incidents like attacks and malware outbreaks. This would be akin to the kind of work that the National Transportation Safety Board does investigating airplane and train accidents, or the CDC does for disease outbreaks.

Alex Hutton, the director of operations risk and governance at what he described as a “Too Big To Fail” bank, said that there’s a desperate need to bring more science to bear in cyber, such as measuring the frequency of incidents and their impact. Better data on cyber incidents would help focus investments in controls, so companies at least knew that they were spending money on the right things.

This kind of positive feedback between public sector and private sector is something we take for granted in many other areas of our daily lives. Accident reports filed by local law enforcement and with private insurance companies inform the work of the NHTSA and DOT. Those agencies, in turn, exert pressure on automakers to fix problems and improve the safety of vehicles that consumers buy.

The same dynamics should work in the software world. So far, however, there have been only half steps in this direction.

Panelist Jacob Olcott, a principal in Good Harbor Consulting’s cyber security practice, helped create the Securities and Exchange Commission rule requiring companies to disclose “material” cyber breaches. But critics point out that the interpretation of “material” has given companies lots of wiggle room to not report serious cyber incidents.

And even Olcott is skeptical of direct government regulation of cyber security. The interests of private investors are enough to keep companies honest about cyber incidents, he said. “Investors care about this and have a right to ask,” he told audience members.

After all, panelists observed (rightly) that mandatory reporting of cyber incidents would raise a din of SEC disclosures since “everyone is getting breached.”

But maybe that’s the point? With a strong regulator (say the SEC or some future Cyber Safety Board) mandating disclosure of every material breach and serious (i.e. exploitable) software “defect,” there would be a flood of reports.

That would be scary and overwhelming. But it would also give a shape to a fear and anxiety that, today, is already overwhelming, but also shapeless. Knowing the details and having the data on vulnerabilities and cyber incidents will help us – as a society – understand and begin to manage our risk in the exact same way that we do with other societal ills, from property crime to vehicle accidents.

In the short term, that would be painful. In the long term, it might just save us all a lot of pain.

Reversing Kony JavaScript iOS Applications

February 27, 2014
Filed under: application security, research 

Researched by William Spires and Stephen Jensen.

That Was Then, This is Now

Just five short years ago, if you wanted to create an iOS application, you had to either take a crash course in Objective-C programming or hire someone to create the application for you. It was truly the beginning of a mobile revolution, and people wanted to jump on the bandwagon. However, with limited skills (and funds) how could the average .NET/Java/etc. programmer leverage his skills to create mobile applications? He couldn’t – he was forced to learn a completely new language. He had to learn the syntax and all of the various nuances of that new language and then, maybe, just maybe, within a couple of months, he might be close to actually developing a functional, albeit simple, mobile application.

Fast-forward a couple of years and we now see the mobile development market inundated with a multitude of mobile development platforms, covering the gamut of programming languages. No longer are you confined to Objective-C. Know Ruby? Great – you can develop mobile apps using the Rhodes Mobile framework. Know .NET? Perfect – you can create apps using Xamarin, and the list goes on and on. Enterprises no longer have to specifically hire Objective-C programmers. Now they can utilize the various programming skills of their current development staff to create robust, highly functional mobile applications.

This post will focus on a well-known mobile development framework called Kony. Kony mobile apps can be created in either the Lua or JavaScript scripting languages. We will focus on Kony mobile applications created using JavaScript, and demonstrate how easily, given a certain skill level, these applications can be reverse engineered.

Technically Speaking

On a technical level, it’s the desire of every mobile pentester, and every attacker for that matter, to get to the heart of a mobile application. It’s an adaptive process, as very few mobile applications are engineered exactly the same. The ultimate goal is to pull back the curtain and see what’s going on behind the scenes.

One part of a mobile pentester’s methodology involves file system analysis. This is where we start analyzing all of the application files, resources, dependencies, etc. that are stored locally on the device. We come across various common file types on the system, including property list files, databases, etc. During our assessment we located, via the application’s Info.plist file, a key that identified the application as being built using the Kony mobile framework. The key looked like this:

<key>KonyBundleIndentifier</key>

As we continued looking at the file system we identified, as seen in Figure #1, a “JSScripts” directory that contained a file called “app_script.js”.

Figure #1


Attempting to open the app_script.js file in a normal text editor revealed nothing, as seen in Figure #2 below.

Figure #2

Content of app_script.js file

It was obvious this file was being protected for some unknown reason. Our next step involved reversing the application binary to determine what it was doing with this file. First, we used Doxygen and Graphviz to generate a call-flow graph of all method calls, as seen in Figure #3, so we could get a high level understanding of the inner workings of the Kony app framework.

Figure #3

We then loaded the application into IDA Pro to start narrowing down the methods that were handling the app_script.js file, as seen in Figure #4. Our analysis led us to several methods within the JSScriptLoader class:

Figure #4

Content of JSScriptLoader.h header file

What tipped us off about this class in particular was that the “contentsOfJSScriptsDirectory” method, besides having a method name that was self-explanatory, also contained a call to “contentsOfDirectoryAtPath:error:” from the NSFileManager class. This appears to search the contents of the JSScripts path, which was the parent directory containing our suspicious “app_script.js” file.

Figure #5 shows the disassembly of the “+[JSScriptLoader contentsOfJSScriptsDirectory]” method.

Figure #5

We also used class_dump_z to dump the headers of the iOS binary. After searching through the various methods and header files, we identified a JSScriptLoader.h header file, seen in Figure #6, which contained various script related method calls.

Figure #6

Content of JSScriptLoader.h header file

Having identified these methods within the header file, our next step was to create a Theos mobile substrate tweak, using Theos’ logify.pl script, as seen in Figure #7, which would allow us to log the behavior of these method calls to see exactly what they were doing.

Figure #7

Tweak.xm file created by Theos’ logify.pl script

Once we compiled and installed the mobile substrate tweak, we launched the application and watched the console log the activity. Immediately, we discovered the “evaluateScript” method was dumping what appeared to be huge amounts of JavaScript code. After a couple of hours of trial and error, and a crash course in Objective-C programming, we modified our Tweak.xm mobile substrate file and wrote some custom code, as seen in Figure #8, that would run in conjunction with the original code within the “evaluateScript” method.

Figure #8

The custom code shown above first creates a JSFiles directory within the Documents folder of the application. Next, it utilizes the parameters “script” and “fileName” that are passed to the method to create a JavaScript file, using the “fileName” parameter as the name of the file. The code then writes the value stored within the “script” parameter – the actual executing JavaScript in this case – into the newly created JavaScript file.

After compiling and installing the updated mobile substrate tweak and launching the application, we confirmed the JSFiles directory had been created and within that directory there were 112 newly created JavaScript files, as seen in Figure #9.

Figure #9

New JavaScript files created from custom code

Upon opening these newly created JavaScript files, we were presented with the actual functional code the application executed. In a nutshell, the custom code that we wrote within the mobile substrate tweak dumped the raw executing source code of the application, as seen in Figure #10.

Figure #10

At the end of the assessment, we knew the internal workings of the application, and were able to easily identify potential vulnerabilities within the JavaScript source code.

We did identify other Kony JS mobile applications that contained the “app_script.js” file, as well as the “JSScriptLoader.h” header file. However, the header file did not contain the exact same methods mentioned above. This difference may be the result of a refactoring of the code by the Kony framework. Your mileage in reversing Kony JavaScript mobile applications may differ.

Strategies for Rapid Adoption of a Security Programme within a Large Enterprise

A large-scale deployment of the Veracode static code analysis tool across a large enterprise presents a number of unique challenges, such as understanding your application estate, prioritising your applications for scanning, and communicating with your application owners. This blog post provides some guidance based on my experience of delivering several hundred scanned applications in a 14-month time frame.


Understanding Your Application Estate

The first challenge is to understand the nature of your application estate – where are the applications hosted, where are the codebases, who is responsible for building and maintaining them, what development languages are used, how critical are they to the organisation, and so on. Most enterprise organisations will maintain an asset inventory of some sort; you should immediately familiarise yourself with this and determine the extent of the information recorded, and what export formats are available. In my experience two problems exist: data accuracy and completeness. In many instances the contact details of application owners were incorrect or missing entirely. In our application repository the programming languages or frameworks are not recorded, and in only a few instances is the source code repository location specified. After my initial attempts to use the application repository as the principal data source, I realised I would need to augment this with my own data gathered during an initial application profiling phase.

Application Profiling and Assigning Criticality

My initial attempts at profiling applications used a crude MS Word questionnaire containing questions about technical contacts (capable of building the binaries), application language and frameworks, source code repository, binary image size, application version, and continuous integration environment. This questionnaire was sent to the registered application owners and the responses were then entered manually into a tracking spreadsheet based on an export from the application repository. It soon became apparent that this method was cumbersome and time consuming, so I deployed a web-form-based version of the profiling questionnaire which captured the responses in a backing spreadsheet, enabling easy import into the main spreadsheet. Reviewing the responses, it became apparent that not all applications would be suitable for static code analysis due to factors such as host operating system or language incompatibility. Once those applications were eliminated it was necessary to prioritise the list in order to ensure that our licence usage was targeted at the most critical business applications.

In order to ensure you are focused on the most critical applications, consider a number of indicators: does the application require an Application Penetration Test, is the application externally facing, does the application have any particular regulatory requirements, has the application been the subject of recent incidents? For example, the Monetary Authority of Singapore (MAS) guidelines mandate a code review process which may be fulfilled in part by static code analysis, so I used MAS compliance as an immediate inclusion criterion. It is important that, whatever selection criteria you employ, you are able to justify them both in terms of the Veracode licence usage and the manpower of the application teams who will be required to perform the upload, scan, and review.

Communication with Application Owners and Teams

Armed with your list of applications you will now need to gain a mandate from senior management within the application delivery sector of your organisation supporting your programme and encouraging the participation of application teams, using a “carrot and stick” message; for instance, they need to comply with MAS legislation, and Veracode will help them with that compliance. It is important that this message come from the upper management of the application teams and that the message stress the value of the programme, rather than coming as an edict from the upper echelons of the security part of the organisation. Our programme initially failed to achieve a foothold due to our lack of a clear mandate, which gave recalcitrant application teams an easy opt-out. In many cases application teams were easily convinced of the value of early flaw detection and engaged with the programme quite willingly; however, in a number of cases no amount of persuading could convince them to participate. One of the most frequent objections encountered was the perceived workload in onboarding to Veracode. It is important that you make the process of account creation as efficient as possible, and that you have the relevant support in place in terms of documentation, knowledge bases, support e-mails, etc. Many teams were pleasantly surprised at the ease of the process, and it was apparent that this news propagated within the development communities as we saw reduced friction as the programme progressed.

In order to ensure that our programme was not constrained by the team resources, I automated the process of user account and application profile creation on the Veracode platform by leveraging the rich APIs available. The application spreadsheet was used as the data source and a Microsoft Visual Studio Tools for Office (VSTO) plugin was developed which provided an additional toolbar within Excel (this is the subject of a future blog post). This plugin allowed for the creation, modification or deletion of accounts on the platform based on the underlying spreadsheet data. Although I invested a significant upfront effort in developing the tooling, I reaped the benefits later in the programme when I was able to completely onboard up to a hundred applications in one day. Additionally, I was able to add metadata specific to our organisation (business alignment, investment strategy, software vendor) to the application profiles on the platform, which greatly enriched the reports generated within the platform’s analytics engine.
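I used a VSTO/Excel plugin for this; as a rough illustration of the same bulk-onboarding idea, a Python sketch might look like the following (the endpoint and parameter names are my assumptions about the XML admin APIs of the time, not a verified API reference, and the CSV columns are hypothetical):

```python
# Rough sketch: create platform accounts and application profiles in bulk
# from rows exported from the application-inventory spreadsheet.
import csv
import requests

API_BASE = "https://analysiscenter.veracode.com/api"   # assumed base URL

def onboard(csv_path, auth):
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed endpoint and parameter names for user creation.
            requests.post(f"{API_BASE}/createuser.do", auth=auth, data={
                "first_name": row["first_name"],
                "last_name": row["last_name"],
                "email_address": row["email"],
                "roles": "Creator,Submitter",
            }).raise_for_status()
            # Assumed endpoint and parameter names for application profiles,
            # including the organisation-specific metadata fields.
            requests.post(f"{API_BASE}/createapp.do", auth=auth, data={
                "app_name": row["app_name"],
                "business_criticality": row["criticality"],
                "business_unit": row["business_unit"],
            }).raise_for_status()
```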

Within a few weeks of the programme it became apparent that teams were often asking the same questions, so I started capturing these questions and their answers as a set of Frequently Asked Questions available on our internal social media-like platform. Through appropriate tagging and hyperlinking, I quickly developed an organisation-specific knowledge base which again lowered the barrier to entry for application teams, who no longer had to wait for an answer or struggle with a problem. During the midpoint of our programme I identified a few obvious success stories (applications which had performed a number of scans and were showing a clear improvement in security posture) and I asked the teams working on those applications to contribute their experience to our social media platform in order to encourage the participation of other teams.

This brief blog post has highlighted some of the challenges facing a new Veracode static code analysis deployment, along with some solutions that I have come across in the process. I hope some of the approaches and solutions I described will ensure that you are soon well underway with analysis. In the process you will not only find strengths in your team but flaws during review – this is the subject of my next blog post.

Read Colin’s earlier blog post: How to Run a Successful Proof of Concept for an Application Security Programme
