After slipping a malicious app past Apple's App Store reviewers, security researchers say Apple should strengthen its defenses.

Thomas Claburn, Editor at Large, Enterprise Mobility

August 16, 2013

4 Min Read

Five computer security researchers from the Georgia Institute of Technology have demonstrated that they can create malicious apps capable of evading detection by Apple's app review process.

In "Jekyll on iOS: When Benign Apps Become Evil", a paper presented at the Usenix Security '13 conference, Tielei Wang, Kangjie Lu, Long Lu, Simon Chung, and Wenke Lee describe how they were able to create apps that can be exploited remotely through program paths that did not exist during the app review process. The researchers call these "Jekyll apps," because they conceal their malicious side.

Apple takes justifiable pride in its iOS security regime. Though the company's scrutiny of third-party apps often forces developers to do extra work to satisfy its rules, its oversight has kept malware at bay more effectively than its competitors' efforts have. A 2011 research paper, "A Survey of Mobile Malware in the Wild," identified all known Android, iOS, and Symbian malware that spread between January 2009 and June 2011. Of the 46 instances of mobile malware during this period, only 4 affected iOS, compared with 24 for Symbian and 18 for Android.

Nonetheless, iOS, like any operating system, has flaws that can be identified and exploited. While Apple tends to address such flaws quickly once it becomes aware of them, it can't fix problems it can't identify. Wang and his colleagues show that exploitation can be accomplished without relying on a specific flaw in iOS itself, by concealing malicious attack logic within the app.

"Jekyll apps do not hinge on specific implementation flaws in iOS," the paper explains. "They present an incomplete view of their logic (i.e., control flows) to app reviewers, and obtain the signatures on the code gadgets that remote attackers can freely assemble at runtime by exploiting the planted vulnerabilities to carry out new (malicious) logic."

Assembling malicious logic at runtime avoids detection both by reviewers and by automated static analysis, which examines program code without actually executing it.
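The general idea can be sketched in a few lines of C. The example below is purely illustrative and is not code from the paper: the function that ultimately runs is chosen by data the app fetches from a server at runtime, so a reviewer who exercises the app before that data exists never sees the path that will later be taken.

```c
/*
 * Illustrative sketch only -- not the researchers' code. It shows why
 * control flow decided by runtime data is hard to pin down during review:
 * the path that ultimately executes is chosen by input that does not
 * exist until after the app ships.
 */
#include <stdio.h>
#include <string.h>

static void show_news(void)     { puts("showing news feed"); }
static void show_weather(void)  { puts("showing weather"); }
static void hidden_feature(void){ puts("path never exercised during review"); }

typedef void (*handler_t)(void);

/* The handler is picked by a key the app fetches from its server. While the
 * app is under review, the server only ever sends "news" or "weather", so
 * testing observes nothing but benign behavior. */
static handler_t pick_handler(const char *server_key) {
    if (strcmp(server_key, "news") == 0)    return show_news;
    if (strcmp(server_key, "weather") == 0) return show_weather;
    if (strcmp(server_key, "extra") == 0)   return hidden_feature;
    return NULL;
}

int main(void) {
    const char *key = "news";   /* stand-in for data fetched at runtime */
    handler_t h = pick_handler(key);
    if (h) h();                 /* the call target is just data until this moment */
    return 0;
}
```

The Jekyll apps described in the paper go further: rather than shipping a hidden handler at all, they plant vulnerabilities in their own code and let a remote attacker assemble the malicious logic from already-signed code gadgets at runtime, which is what defeats static analysis.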

To prove the point, the researchers submitted a malicious "Jekyll" app, had it approved by Apple, and downloaded it to their own devices before voluntarily removing it from the iTunes App Store.

The construction of "Jekyll apps" may be more elaborate than necessary to sneak code that violates Apple's rules past app reviewers. Last year, for instance, the iOS app iRandomizer Numbers was found to have an undocumented tethering feature that violated Apple's review guidelines. The app was pulled from the iTunes App Store and AT&T's mobile business did not collapse as a result of unexpected network data traffic. But the incident demonstrates that Apple does not catch every app with undocumented features.

Asked whether it might be simpler to create an app that acts maliciously only against a single targeted victim, or only after several months of use, Wang responded in an email that he and his colleagues assume Apple has complete insight into unexecuted code branches that would lead to malicious behavior when certain conditions are met. Under that assumption, such simple triggers would be caught during review, which is why the Jekyll approach hides the malicious control flow entirely.

The paper argues that it is theoretically difficult and economically prohibitive for Apple to keep vulnerable apps out of its App Store entirely. Nevertheless, it offers a few suggestions for mitigating the risk of "Jekyll" apps through runtime security monitoring mechanisms.

The researchers propose a stricter execution environment, along the lines of the way Google restricts Native Client code, though they express doubts about how easily Apple could accomplish this, given how tightly coupled the public and private frameworks in iOS are. They also advocate finer-grained use of security techniques such as address space layout randomization (ASLR), permission models, and control-flow integrity (CFI). Finally, they suggest that Apple adopt a type-safe programming language like Java to protect against low-level memory errors such as buffer overflows.
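That last suggestion targets a whole class of bugs rather than any single one. As a rough illustration (hypothetical code, not taken from the paper), the C snippet below contrasts an unchecked copy into a fixed-size buffer, the kind of low-level memory error the researchers have in mind, with a bounds-checked version of the sort a memory-safe language enforces automatically.

```c
/*
 * Illustrative only: the class of low-level memory error the paper says a
 * type-safe language would rule out. In C, copying untrusted data into a
 * fixed-size stack buffer without a bounds check can overwrite adjacent
 * memory (a buffer overflow).
 */
#include <stdio.h>
#include <string.h>

void unsafe_copy(const char *input) {
    char buf[16];
    strcpy(buf, input);          /* no bounds check: overflows if input is 16 bytes or more */
    printf("%s\n", buf);
}

void safe_copy(const char *input) {
    char buf[16];
    /* Bounds-checked copy; a memory-safe language imposes this check by construction. */
    snprintf(buf, sizeof buf, "%s", input);
    printf("%s\n", buf);
}

int main(void) {
    const char *untrusted = "data that might be longer than sixteen bytes";
    safe_copy(untrusted);        /* truncates rather than corrupting memory */
    /* unsafe_copy(untrusted);      would invoke undefined behavior */
    return 0;
}
```

Planted vulnerabilities of exactly this flavor are what give a Jekyll app's remote operator something to exploit after the app has passed review.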

About the Author(s)

Thomas Claburn

Editor at Large, Enterprise Mobility

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful master's degree in film production. He wrote the original treatment for 3DO's Killing Time, a short story that appeared in On Spec, and the screenplay for an independent film called The Hanged Man, which he would later direct. He's the author of a science fiction novel, Reflecting Fires, and a sadly neglected blog, Lot 49. His iPhone game, Blocfall, is available through the iTunes App Store. His wife is a talented jazz singer; he does not sing, which is for the best.
