The fact that Apple’s security model has worked so well in the past doesn’t mean it will work well forever. Here’s why.
Apple’s App Store and development ecosystem are often described as a walled garden, thanks to their closed developer model and stringent security review. Since 2008, Apple’s approach has been reasonably successful in preventing malicious iOS apps from making their way into the store and onto users’ iPhones and iPads. But a very public incident in September 2015 highlighted a weakness in Apple’s security model that may spell trouble in the future.
Initially, Apple confirmed that 39 malware-infected apps had been found and removed from its China App Store, including popular messaging and car-hailing services. The tainted apps, which one research firm called “very harmful and dangerous,” contained code that steals sensitive user and device information. Independent researchers soon revised the number of infected apps upwards to 344, then 4,000, saying they may have been downloaded to hundreds of millions of devices.
This incident highlights an important issue: Developers are an attractive target for propagating advanced attacks because the product of their labor is widely distributed downstream and is trusted by end users and organizations.
In this case, legitimate developers in China were enticed into downloading an illicit copy of Apple’s Xcode developer toolkit from a local server rather than using official Apple sources. It’s likely the attackers were exploiting the human tendency to take shortcuts: Due to China’s internet restrictions, downloads from the U.S. take up to three times longer. Unbeknownst to the developers, the illicit tools (dubbed XcodeGhost) inserted malware into their apps. Submitted to the App Store under the developers’ own trusted credentials, the apps were assumed to be safe.
Although Apple moved quickly to repair the damage, the incident is troubling in a number of ways. It tells cybercriminals that Apple security is not invincible and that attacking higher up the value chain, by compromising developer tools and credentials, can be an effective approach.
A failure on two counts
The fact that the breach was discovered by external researchers suggests that Apple’s own security failed on two counts: Apple didn’t detect the developers’ use of the illicit tools and didn’t recognize the presence of malicious apps once they had gotten into the store. Furthermore, Apple is moving toward a similar development-and-deployment paradigm for Mac applications, potentially exposing them to the same risks as iOS apps.
This incident should serve as a warning. The fact that Apple’s security model has worked so well in the past doesn’t mean it will work forever. That’s especially true as the number of iOS devices gets larger—Apple had sold more than a billion as of last January—and the content they transport or store gets juicier. Today’s developer shops run almost exclusively on Macs. A huge number of business executives, government officials, and senior professionals conduct sensitive communications on their iPhones. Given this reality, attackers will persist in looking for new, creative ways to compromise those devices for profit, political gain, or sheer spite.
There’s no indication that Apple plans to overhaul its security regime any time soon. However, the recent insertion of malicious code could have been detected and shut down earlier and with less damage if Apple’s security strategy had employed large-scale, behavioral analytics on the endpoints themselves, alongside other security measures.
On the developer side of such an attack, it’s important to compare the reputation of each developer’s toolset against known good tools. By detecting if and when new versions show up in the environment and where they were downloaded from, it’s possible to quickly identify illicit sources or versions of the developer toolchain. On the end-user side of an attack, Apple would benefit from opening the iOS platform to allow monitoring of a device’s network connections, reputation services, and behavioral detection.
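The toolchain-reputation check described above can be sketched in a few lines: hash each tool binary and compare the digest against a manifest of known-good values. This is a minimal illustration, not Apple’s actual mechanism; the manifest contents, paths, and function names here are hypothetical, and a real deployment would pull digests from vendor-signed release metadata and also record download provenance.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_tool(path: str, known_good: dict) -> bool:
    """Flag any toolchain binary whose digest is absent from, or
    differs from, the known-good manifest."""
    expected = known_good.get(path)
    return expected is not None and sha256_of(path) == expected
```

A tampered Xcode build, like the one at the heart of XcodeGhost, would fail this comparison immediately, because even a one-byte change to the binary produces a completely different digest.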
Historically, Apple’s implicit message about iOS security has been this: “Trust us. We have all these controls in place around tools and credentials, so only secure stuff you can trust gets into the App Store.” But in the wake of this recent attack, enterprises, developers and end users can be forgiven for wondering, “Is that still enough?”
Paul Drapeau is a Principal Security Researcher for Confer, which offers endpoint and server security via an open, threat-based, collaborative platform. Prior to joining Confer he led IT security for a public, global pharmaceutical research-and-development organization.