Industry Reactions to XcodeGhost Malware: Feedback Friday

XcodeGhost, a piece of malware designed to target Apple users, has made a lot of headlines recently after researchers reported finding thousands of infected iOS applications.

Attackers have modified Apple’s Xcode development platform and posted the malicious version on various Chinese websites knowing that many developers in that country prefer obtaining the software from third-party services due to slow download speeds when using Apple’s servers.

The iOS and OS X apps created by developers with the rogue version of Xcode are injected with malicious code that allows attackers to perform various actions, such as collecting information from infected devices and opening arbitrary websites. Initially, researchers spotted only a few dozen XcodeGhost-infected iOS apps, but the latest reports indicate that the actual number of affected applications could be as high as 4,000.

Since many of the infected apps made their way to Apple’s App Store, the company has taken steps to remove the malicious programs and released an advisory containing instructions on how developers can ensure that the version of Xcode they are using is legitimate.

Industry professionals contacted by SecurityWeek have shared thoughts on the sophistication and impact of the XcodeGhost attack, supply chain security, and possible prevention and protection methods.

And the feedback begins...

Ryan Smith, vice president and chief scientist, Optiv:

"XcodeGhost is one of the most interesting developments in malware. Normally malware authors have to write a seemingly benign application and put malicious code inside of the application. Instead, these malware authors wrote malicious code and injected it inside a compiler so that unwitting application developers included the malicious code inside their own programs. This type of attack was theorized at least as far back as 1984 by Ken Thompson of Bell Labs.


There are two attributes of this type of malware that complicate matters for the good guys. The first is that a standard inspection of the source code would reveal no malicious functionality. The malicious functionality isn't added until the source code is translated into machine code, and human review of machine code rarely happens - few people are qualified to do it. The second issue is that since the malicious code is injected by the compiler, it's easier for the malicious part of the code to change every time it's embedded in a program. Many anti-virus solutions work by identifying static portions of malicious code, and if the malicious code changes by even a single bit it can throw off detection algorithms. These two things combined make these types of attacks difficult to combat.


Standard operational practices can easily prevent this type of malware from invading applications that you’re developing. All developers should submit their code to a centralized source code repository. There should be a dedicated build server. That build server should only use authentic versions of the compiler to compile the source code. The server should be inspected on an ongoing basis to ensure that it hasn’t been compromised and isn’t running vulnerable software. Additionally, the server should have extremely restricted access.”
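Smith's recommendation of a dedicated build server with an authentic compiler can be sketched as a pre-build integrity gate: pin the SHA-256 digest of the known-good compiler binary and refuse to build on a mismatch. This is an illustrative sketch, not Apple's or Optiv's procedure; the path and pinned digest below are hypothetical placeholders (the digest shown is that of an empty file).

```shell
#!/bin/sh
# Pre-build integrity gate (sketch): refuse to build unless the compiler
# binary matches a pinned, known-good SHA-256 digest.
# PINNED_DIGEST is a placeholder (SHA-256 of an empty file), not a real
# Xcode/clang digest.
PINNED_DIGEST="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

verify_compiler() {
    path="$1"
    expected="$2"
    # Hash the binary and keep only the digest column.
    actual=$(sha256sum "$path" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "compiler verified"
        return 0
    else
        echo "digest mismatch: refusing to build" >&2
        return 1
    fi
}

# Hypothetical usage on the build server:
# verify_compiler /usr/bin/clang "$PINNED_DIGEST" || exit 1
```

Running the check as the first step of every build job means a tampered toolchain stops the pipeline instead of silently signing its payload into every release.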

Yossi Naar, co-founder, Cybereason:

“This whole situation - how the infection started and spread - underscores the reality that there's no anticipating the path of a compromise. Look at the infiltration path: fooling developers into downloading a modified development environment that injects malicious code into the apps they build. A lot of effort went into building that trap; it's a sophisticated piece of social engineering. Clearly, it's a wide-net attack, as it's almost impossible to reach a particular target using this method. This is consistent with Chinese attacks - they tend to be designed for high-volume data collection rather than any specific operational goal. Data gathering and control are not the means but the end itself, possibly enabling future data extraction from compromised targets.


As far as how to prevent something like this from occurring: I'm not sure this scenario was preventable, given the bigger picture (China has slow download speeds so developers create workarounds) - that is beyond Apple's control. As for protecting the supply chain: Can you really expect to trust everyone in the chain? Can you validate every USB keyboard that every one of your suppliers uses? Every piece of development software? Trusted environments are a myth - we need to assume that anything and everything can be compromised and focus more of our attention on detecting signs that such a compromise has taken place.”

Alex Cox, Director, RSA-FirstWatch Global:

“[In the case of Apple devices], infecting individual phones is extremely difficult, as the vendor controls what can and can’t be installed on a phone. So an attack has to be performed higher in the distribution chain. Mobile malware has been a specter for years now, and it’s logical to see this sort of attack, especially when compared to the maturation of other aspects of the threat environment.

[...]

The other important thing to consider is that these attacks compound themselves. If I'm able to compromise a mobile app development platform and get it widely distributed, with apps hosted in an official app store followed by thousands of downloads, the amount and types of information and credentials I'm able to access spread widely, which leads the attacker to additional information, compromises, attack possibilities and platforms.


Ultimately, detection of this sort of attack is a combined effort with both hard and soft approaches. Mobile device vendors need to increase their inspection capabilities for submitted apps infected with malware, and corporations need to consider the implications of mobile devices on their network, making sure they have the pervasive network visibility to understand when *any* network device is operating outside the norm while on the corporate network. This also needs to tie into a corporate policy that governs the acceptable use of mobile devices for corporate business. Unfortunately, the common consumer is completely at the mercy of the mobile device vendor that controls the app store. If their game is not on, the chance that the consumer will know a problem is occurring is slim to none.”

Lance Cottrell, Chief Scientist for Passages, Ntrepid Corporation:

“The XcodeGhost attack on applications in Apple’s iOS store is impressive for its sophistication. Rather than creating their own malware, the attackers were able to trick developers into incorporating malware into their apps. The big trend now is towards launching attacks upstream of the intended victim. In this case the attack focused on application developers to deliver malware rather than trying to deliver it directly. It is similar to malvertising attacks on small companies providing ads to big ad networks, or the Target attack that came in through an HVAC contractor network.


The attack shows both the vulnerability and strength of the walled garden approach to security. Apple failed to identify the malware before placing the applications into the store. Detection is incredibly difficult, so this should not be a huge shock. Fortunately, once identified, Apple is able to quickly remove the malware from every iOS device on the planet. The window of opportunity for the attackers is minimized.


Mass attacks like this are much less concerning than highly targeted attacks. Integrated attacks looking for generic access and information are typically more of an annoyance than a crisis for businesses. Targeted attacks are much harder to detect and are crafted for maximum benefit and/or damage. The recent $1 Million exploit bounty by Zerodium shows just how much these can be worth. The likely buyer for such an exploit would be a criminal or a government. In neither case would it be used in a mass attack but rather kept secret and used for maximum impact against carefully selected targets.”

Rob Kraus, Director of Security Research and Strategy, Solutionary:

“When approaching both mobile and traditional desktop application development, developers should always verify the source and integrity of application code that is implemented into a product, whether it is a software library, a module or a framework. In the fast-moving world of application development, with a strong focus on reusing or leveraging existing code, it is often convenient to reuse or import existing code to help reduce development timelines. However, this has inherent risks. Organizations often do not review third-party code - or even their own code - for security issues. All too often, developers blindly trust that code will perform as advertised, without a thought for what the security impact could be. What could go wrong there?


In the case of Xcode, the developers may have been able to identify that the Xcode package was not legitimate if they had compared its checksum against those published for the official Xcode downloads. It is probably all too simple a step, but who wants to verify checksums when you can just jump in and start coding, right? Perhaps it would be better to implement security checks and validation of third-party code as the application progresses through the development lifecycle.

[...]

The real question I have for developers today: Has this news changed how you approach your development practices? What are you doing today to make sure your current applications do not have similar issues? And finally, what are you doing to ensure your applications aren't susceptible to these types of attacks in the future?”
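Kraus's checksum advice is straightforward to automate before any tool is installed. A minimal sketch in Python, assuming you have the vendor's published SHA-256 value for the installer; the file name and digest in the usage comment are hypothetical placeholders, not real Apple values.

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so multi-gigabyte installers fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_download(path, published_hex):
    """Return True only if the local file matches the published checksum."""
    return sha256_of(path) == published_hex.strip().lower()


# Hypothetical usage before installing a downloaded toolchain:
# verify_download("Xcode_7.dmg", "ab12...")  # digest from the vendor's site
```

The comparison is only as trustworthy as the channel the published digest came from, so the expected value should be fetched over a verified connection to the vendor, not from the same mirror that served the download.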

Elad Yoran, Executive Chairman, KoolSpan:

"The XcodeGhost attack on Apple's users and App Store reminds us that adversaries are patient and able to hit targets indirectly by hijacking software supply chains. Mobile platforms like iOS have very different attack surfaces compared to PCs, meaning that in some cases targeting developers might offer the fastest way to gain a foothold in coveted app markets and users' devices.


Many CISOs are likely concerned with the widely-varying published figures on the number and types of affected apps. At a time when it's all too easy for users to adopt almost any app for business purposes, there is clearly a scramble underway to confirm which apps are compromised and whether simply deleting them is the quickest fix. The XcodeGhost saga's quiet exploitation and sudden rise offer powerful lessons. Users of phone, messaging and communications apps, in particular - which are entrusted with sensitive behavior and information - should press developers to explain who stands behind the code and their software security practices."

John Prisco, CEO, Triumfant:

“Apple has been brilliant at maintaining the purity of the brand, and until relatively recently, its closed-off development community and rigorous control over applications have been enough to provide additional protection against malicious attacks. However, the genie is out of the iBottle. Hackers are a reality and malware is inevitable. This development is just one example of what will happen when the world’s largest smart device manufacturer won’t let security professionals protect its users.


Google and Android allow a certain level of collaboration where security professionals can conduct analysis where it matters - with operating system-level interrogation and anomaly detection. You cannot do this with Apple iOS. You’re blocked off because Apple does not open its OS up to the security community; thus, there is no way to develop a guardian for the operating system. Apple is no longer infallible, and it’s important that Apple realize this for itself.”

Bill Anderson, Chief Product Officer, OptioLabs:

“Some clever hackers managed to get one level below where security checking is normally done. Instead of hacking the apps after they were published, these folks did something very sneaky: they modified and republished the iPhone development toolset, Xcode, so that it inserted some of the hackers’ code into unsuspecting developers’ iPhone apps.


These developers went on to submit their applications to Apple for publishing in the App Store, and the apps were approved as non-malicious. There are two interesting observations here. First, Apple’s app vetting procedure did not catch this malware. This is of moderate significance: reportedly, very few malicious apps had been published to the App Store until now. It is not because hackers have not tried before, but rather that Apple has been able to detect and deny them access. Something has changed; it appears the new malicious software was able to hide from Apple’s analysis. Second, the unsuspecting developers should have been more discerning about where they got their tools. There was a social engineering win for the bad guys here in that they slipped under the developers’ normal level of caution. Credible app developers did not realize that the bad guys could substitute another toolset for the real one.


This could have happened before and we might not know about it. The incident shows that clever malware can make it through Apple’s current vetting system if it is delivered by a trustworthy app. A particularly clever group of hackers (or an intelligence agency) could have introduced subtle malware to other applications by subverting other development tools. How would the average developer know if his toolset is trustworthy or modified? This is an aspect of software development that deserves further consideration.”

Vikram Phatak, CEO, NSS Labs:

"Don't disable Apple's security. Apple has a superior ecosystem from a security standpoint. The attacks against Apple demonstrate the lengths to which attackers will go to infect an iPhone. The fact that Apple’s App Store was delivering infected apps is only news because, after six years, this is the first time it has ever happened. An Android malware outbreak is a regular occurrence, and therefore not newsworthy.”

Chris Wysopal, CTO and CISO, Veracode:

“Apple has a massive install base, making it an attractive target for attackers. In recent years it has seemed that the problem of mobile malware was bigger for Android than for iOS. The more rigorous testing and developer enrollment required before an iOS app can be published has always been considered the reason for this difference, yet in this case it seems to have fallen short. One very interesting aspect of this incident is that the developers of the apps had no knowledge that their own code was being used to carry malware - it was the modified development environment (Xcode) that introduced the payload.


While enterprises need to quickly adjust their MDM systems to reflect the potential that infected apps are already in their environment, this case highlights the fact that developers need to start paying attention to security. However functionally perfect the code may be, that is no reflection of how secure the resulting app is. It’s critical that developers test what they are actually providing before releasing that app to the world. Analyzing the compiled code for vulnerabilities and malware - using technologies such as binary static assessment and behavioral analysis to detect whether malware has been injected between development and distribution - should be mandatory before apps are ever published.”

Gavin Reid, Vice President of Threat Intelligence, Lancope:

"Before this unfortunate incident, the Apple App Store had an industry-leading track record, releasing more than a million apps with only five known bad ones. This is due to its strong application verification process - contrast that with the open Android policy, which results in daily malware. In this case there is little users can do to protect themselves. The fix for this is better attention to security from application developers and better verification from Apple. Apps like WeChat are used all over the world, and there are people running apps developed in China everywhere.

Due to internet restrictions and longer download times, people in China are used to using local services. This should be a wakeup call for software developers to really pay attention to their source materials. Most US and European developers download Xcode directly from Apple, making a repeat of the same problem unlikely.”

Paco Hope, Software Security Consultant, Cigital:

“Attacks like the Ken Thompson hack (in this case against iOS apps) show how after-the-fact security is very limited in what it can realistically address. Analyzing binaries after they are built, or penetration testing web and cloud apps after they are deployed, provides limited assurance, catching mainly vulnerabilities that are egregious and obvious. Secure software begins earlier, when it is designed and developed. And there are no silver bullets - no tools that simply take care of the problem so that people don’t need to do it themselves. It is important to incorporate security throughout the development process, right down to the provenance and selection of the development toolchain itself.”

Eduard Kovacs (@EduardKovacs) is a contributing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.