Category Archives: Android

SMS Trojan Targets South Korean Android Devices

It’s a common misconception that mobile malware is a problem limited to users in a particular geographical region such as China or Eastern Europe. Last week, McAfee Labs’ mobile research department received a mobile malware sample that targets Android mobile phone users in South Korea. The sample pretends to be a popular coffee shop coupon application, but in fact is an SMS Trojan that posts incoming SMS messages to the attacker’s website.

SMS Trojan targeting South Korean Android devices (screenshot 1)

If a user clicks the familiar application icon, a pop-up message will display the following information:

SMS Trojan targeting South Korean Android devices (screenshot 2)

This is a fake error message reporting that the server is overloaded and unable to process the request. This, together with the icon used for the application, is simply social engineering to fool the victim into believing the application is legitimate but having problems, in the hope that the victim will just quit the application. This malicious app has nothing to do with the popular coffee vendor you may associate with the bogus icon.

While the message is displayed, the application starts a service that runs in the background (and is restarted after the device is rebooted). This service then sends the victim’s phone number to the following URL to “register” the infection.

  • http://it[deleted].com/Android_SMS/installing.php

The following image shows the application’s ability to gather a phone number and send it to the attacker:

SMS Trojan targeting South Korean Android devices (screenshot 3)

Once the application is installed, it monitors any incoming SMS messages. All of these will be sent, together with the phone number of the sending device, to the following URL:

  • http://it[deleted].com/Android_SMS/receiving.php

Furthermore, the malicious application blocks the incoming SMS message as well as the notification, so the victim will never know of the message’s existence.

The following image shows the application code responsible for the incoming message theft:

SMS Trojan targeting South Korean Android devices (screenshot 4)

This malicious application targets only South Korean Android devices by checking for numbers starting with “+82,” the international code for South Korea, as shown in the following:

SMS Trojan targeting South Korean Android devices (screenshot 5)

All intercepted and stolen SMS messages and the originating phone number are posted to the aforementioned URL using “EUC-KR” character encoding, as shown in the following picture:

SMS Trojan targeting South Korean Android devices (screenshot 6)
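
The behavior described above maps onto standard Android APIs. The following is an illustrative sketch only (not the actual malware code; postToServer() is a hypothetical helper, and the exact placement of the “+82” check is an assumption) of how a high-priority broadcast receiver can forward incoming messages and suppress them with abortBroadcast():

// Illustrative sketch only; postToServer() is a hypothetical helper.
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.telephony.SmsMessage;
import android.telephony.TelephonyManager;

public class SmsInterceptor extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        String ownNumber = tm.getLine1Number();
        // only act on South Korean numbers (one plausible placement of the "+82" check)
        if (ownNumber == null || !ownNumber.startsWith("+82")) {
            return;
        }
        Bundle bundle = intent.getExtras();
        if (bundle == null) {
            return;
        }
        for (Object pdu : (Object[]) bundle.get("pdus")) {
            SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdu);
            // forward the sender, body and own number to the remote server
            // postToServer(ownNumber, sms.getOriginatingAddress(), sms.getMessageBody());
        }
        // drop the ordered SMS_RECEIVED broadcast so no notification is shown
        abortBroadcast();
    }
}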

McAfee Mobile Security detects this malware as Android/Smsilence.A.



Secure USB debugging in Android 4.2.2

It seems we somehow managed to let two months slip by without a single post. Time to get back on track, and the recently unveiled Android maintenance release provides a nice opportunity to jump start things. Official release notes for Android 4.2.2 don’t seem to be available at this time, but it made its way into AOSP quite promptly, so you can easily compile your own changelog based on git log messages. Or, you can simply check the now traditional one over at Funky Android. As you can see, there are quite a few changes, and if you want a higher level overview your time would probably be better spent reading some of the related posts by the usual suspects. Deviating from our usually somewhat obscure topics, we will focus on a new security feature that is quite visible and has received a fair bit of attention already. It was even introduced on the official Android Developers Blog, fortunately for us only in brief. As usual, we like to dig a little deeper, so if you are interested in more details about the shiny new secure debugging feature, read on.

Why bother securing debugging?

If you have done development in any programming environment, you know that ‘debugging’ is usually the exact opposite of ‘secure’. Debugging typically involves inspecting (and sometimes even changing) internal program state, dumping encrypted communication data to log files, universal root access and other scary, but necessary activities. It is hard enough without having to bother with security, so why further complicate things by making developers jump through security hoops? As it turns out, Android debugging, as provided by the Android Debug Bridge (ADB), is quite versatile and gives you almost complete control over a device when enabled. This is, of course, very welcome if you are developing or testing an application (or the OS itself), but can also be used for other purposes. Before we give an overview of those, here is a (non-exhaustive) list of things ADB lets you do:
  • debug apps running on the device (using JDWP)
  • install and remove apps
  • copy files to and from the device
  • execute shell commands on the device
  • get the system and apps logs

If debugging is enabled on a device, you can do all of the above and more simply by connecting the device to a computer with a USB cable. If you think that’s not much of a problem because the device is locked, here’s some bad news: you don’t have to unlock the device in order to execute ADB commands. And it gets worse — if the device is rooted (as are many developer devices), you can access and change every single file, including system files and password databases. Of course, that is not the end of it: you don’t actually need a computer with development tools in order to do this: another Android device and an OTG USB cable are sufficient. Security researchers, most notably Kyle Osborn, have built tools (there’s even a GUI) that automate this and try very hard to extract as much data as possible from the device in a very short time. As we mentioned, if the device is rooted all bets are off — it is trivial to lift all of your credentials, disable or crack the device lock and even log into your Google account(s). But even without root, anything on external storage (SD card) is accessible (for example your precious photos), as are your contacts and text messages. See Kyle’s presentations for details and other attack vectors.

By now you should be at least concerned about leaving ADB access wide open, so let’s look at some ways to secure it.

Securing ADB

Despite some innovative attacks, none of the above is particularly new, but it has remained mostly unaddressed, probably because debugging is a developer feature regular users don’t even know about. There have been some third-party solutions though, so let’s briefly review those before introducing the one implemented in the core OS. Two of the more popular apps that allow you to control USB debugging are ADB Toggle and AdbdSecure. They automatically disable ADB debugging when the device is locked or unplugged, and enable it again when you unlock it or plug in the USB cable. This is generally sufficient protection, but has one major drawback — starting and stopping the adbd daemon requires root access. If you want to develop and test apps on a device with stock firmware, you still have to disable debugging manually. Root access typically goes hand-in-hand with running custom firmware — you usually need root access to flash a new ROM version (or at least it makes it much easier) and some of the apps shipping with those ROMs take advantage of root access to give you extra features not available in the stock OS (full backup, tethering, firewalls, etc.). As a result of this, custom ROMs have traditionally shipped with root access enabled (typically in the form of a SUID su binary and an accompanying ‘Superuser’ app). Thus, once you installed your favourite custom ROM you were automatically ‘rooted’. CyanogenMod (which has over a million users and growing) changed this almost a year ago by disabling root access in their ROMs and giving you the option to enable it for apps only, for ADB, or for both. This is not a bad compromise — you can both run root apps and have ADB enabled without exposing your device too much, and it can be used in combination with an app that automates toggling ADB for even more control. Of course, these solutions don’t apply to the majority of Android users — those running stock OS versions.

The first step in making ADB access harder to reach was taken in Android 4.2, which hid the ‘Developer options’ settings screen, requiring you to use a secret knock in order to enable it. While this is mildly annoying for developers, it makes sure that most users cannot enable ADB access by accident. This is, of course, only a stop-gap measure, and once you manage to turn USB debugging on, your device is once again vulnerable. A proper solution was introduced in the 4.2.2 maintenance release with the so-called ‘secure USB debugging’ (it was actually committed almost a year ago, but for some reason didn’t make it into the original JB release). ‘Secure’ here refers to the fact that only hosts explicitly authorized by the user can now connect to the adbd daemon on the device and execute debugging commands. Thus if someone tries to connect a device to another one via USB in order to access ADB, they need to first unlock the target device and authorize access from the debug host by clicking ‘OK’ in the confirmation dialog shown below. You can make your decision persistent by checking the ‘Always allow from this computer’ box, and debugging will work just as before, as long as you are on the same machine. One thing to note is that on tablets with multi-user support the confirmation dialog is only shown to the primary (administrator) user, so you will need to switch to it in order to enable debugging. Naturally this ‘secure debugging’ is only effective if you have a reasonably secure lock screen password in place, but everyone has one of those, right? That’s pretty much all you need to know in order to secure your developer device, but if you are interested in how all of this is implemented under the hood, proceed to the next sections. We will first give a very brief overview of the ADB architecture and then show how it has been extended in order to support authenticated debugging.

ADB overview

The Android Debug Bridge serves two main purposes: it keeps track of all devices (or emulators) connected to a host, and it offers various services to its clients (command line clients, IDEs, etc.). It consists of three main components: the ADB server, the ADB daemon (adbd) and the default command line client (adb). The ADB server runs on the host machine as a background process and decouples clients from the actual devices or emulators. It monitors device connectivity and sets their state appropriately (CONNECTED, OFFLINE, RECOVERY, etc.). The ADB daemon runs on an Android device (or emulator) and provides the actual services clients use. It connects to the ADB server through USB or TCP/IP, and receives and processes commands from it. Finally, adb is the command line client that lets you send commands to a particular device. In practice it is implemented in the same binary as the ADB server and thus shares much of its code.

The client talks to the local ADB server via TCP (typically via localhost:5037) using text-based commands, and receives OK or FAIL responses in return. Some commands, like enumerating devices, port forwarding or daemon restart, are handled by the local ADB server, and some (e.g., shell or log access) naturally require a connection to the target Android device. Device access is generally accomplished by forwarding input and output streams to/from the host. The transport layer that implements this uses simple messages with a 24 byte header and an optional payload to exchange commands and responses. We will not go into further details about those, but will only note the newly added authentication commands in the next section. For more details refer to the protocol description in system/core/adb/protocol.txt and this presentation, which features quite a few helpful diagrams and examples.
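
To make the client protocol a bit more concrete, here is a minimal sketch (assuming a local ADB server is already listening on the default port 5037) that sends a request in the length-prefixed text format and reads back the status word; it is illustrative only and skips error handling:

import java.io.DataInputStream;
import java.io.OutputStream;
import java.net.Socket;

Socket s = new Socket("127.0.0.1", 5037);
OutputStream out = s.getOutputStream();
String request = "host:version";
// each request is prefixed with its length as four hex ASCII digits
out.write(String.format("%04x%s", request.length(), request).getBytes("US-ASCII"));
out.flush();
byte[] status = new byte[4];
new DataInputStream(s.getInputStream()).readFully(status);
System.out.println(new String(status, "US-ASCII")); // "OKAY" or "FAIL"
s.close();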

Secure ADB implementation

The ADB host authentication functionality is enabled by default when the ro.adb.secure system property is set to 1, and there is no way to disable it via the system settings interface (which is a good thing). The device is initially in the OFFLINE state and only goes into the ONLINE state once the host has authenticated. As you may already know, hosts use RSA keys in order to authenticate to the ADB daemon on the device. Authentication is typically a three step process:

  1. After a host tries to connect, the device sends an AUTH message of type TOKEN that includes a 20-byte random value (read from /dev/urandom).
  2. The host responds with a SIGNATURE packet that includes a SHA1withRSA signature of the random token with one of its private keys.
  3. The device tries to verify the received signature, and if signature verification succeeds, it responds with a CONNECT message and goes into the ONLINE state. If verification fails, either because the signature value doesn’t match or because there is no corresponding public key to verify with, the device sends another AUTH TOKEN with a new random value, so that the host can try authenticating again (slowing down if the number of failures goes over a certain threshold).
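
Step 2 can be illustrated with a short sketch. Because the 20-byte token is already the size of a SHA-1 digest, the stock adb implementation appears to sign it directly as a precomputed digest value rather than hashing it again; a rough Java equivalent, assuming the host private key has already been loaded from adbkey (the hostPrivateKey and token variables are placeholders), would be:

// sketch only: sign the AUTH token roughly the way the host adb does
byte[] sha1DigestInfo = {
        0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2b, 0x0e,
        0x03, 0x02, 0x1a, 0x05, 0x00, 0x04, 0x14 };
java.security.Signature signer = java.security.Signature.getInstance("NONEwithRSA");
signer.initSign(hostPrivateKey);   // RSA key parsed from $HOME/.android/adbkey
signer.update(sha1DigestInfo);     // PKCS#1 DigestInfo header for SHA-1
signer.update(token);              // the 20-byte challenge from the AUTH message
byte[] signature = signer.sign();  // sent back in the SIGNATURE packet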

Signature verification typically fails the first time you connect the device to a new host because it doesn’t yet have the host key. In that case the host sends its public key in an AUTH RSAPUBLICKEY message. The device takes the MD5 hash of that key and displays it in the ‘Allow USB debugging’ confirmation dialog. Since adbd is a native daemon, the key needs to be passed to the main Android OS. This is accomplished by simply writing the key to a local socket (aptly named ‘adbd’). When you enable ADB debugging from the developer settings screen, a thread that listens to the ‘adbd’ socket is started. When it receives a message starting with "PK" it treats it as a public key, parses it, calculates the MD5 hash and displays the confirmation dialog (an activity actually, part of the SystemUI package). If you tap ‘OK’, it sends a simple "OK" response and adbd uses the key to verify the authentication message (otherwise it just stays offline). In case you check the ‘Always allow from this computer’ checkbox, the public key is written to disk and automatically used for signature verification the next time you connect to the same host. The allow/deny debugging functionality, along with starting/stopping the adbd daemon, is exposed as public methods of the UsbDeviceManager system service.

We’ve described the ADB authentication protocol in some detail, but haven’t said much about the actual keys used in the process. Those are 2048-bit RSA keys and are generated by the local ADB server. They are typically stored in $HOME/.android as adbkey and adbkey.pub. On Windows that usually translates to %USERPROFILE%\.android, but keys might end up in C:\Windows\System32\config\systemprofile\.android in some cases (see issue 49465). The default key directory can be overridden by setting the ANDROID_SDK_HOME environment variable. If the ADB_VENDOR_KEYS environment variable is set, the directory it points to is also searched for keys. If no keys are found in any of the above locations, a new key pair is generated and saved. On the device, keys are stored in the /data/misc/adb/adb_keys file, and new authorized keys are appended to the same file as you accept them. Read-only ‘vendor keys’ are stored in the /adb_keys file, but it doesn’t seem to exist on current Nexus devices. The private key is in standard OpenSSL PEM format, while the public one consists of the Base64-encoded key followed by a `user@host` user identifier, separated by a space. The user identifier doesn’t seem to be used at the moment and is only meaningful on Unix-based OSes; on Windows it is always ‘unknown@unknown’.
While the USB debugging confirmation dialog helpfully displays a key fingerprint to let you verify you are connected to the expected host, the adb client doesn’t have a handy command to print the fingerprint of the host key. You might think that there is little room for confusion: after all there is only one cable plugged into a single machine, but if you are running a couple of VMs, things can get a little fuzzy. Here’s one way of displaying the host key’s fingerprint in the same format the confirmation dialog uses (run in $HOME/.android or specify the full path to the public key file):

awk '{print $1}' < adbkey.pub|openssl base64 -A -d -a \
|openssl md5 -c|awk '{print $2}'|tr '[:lower:]' '[:upper:]'

We’ve reviewed how secure ADB debugging is implemented and have shown why it is needed, but just to show that all of this solves a real problem, we’ll finish off with a screenshot of what a failed ADB attack against a 4.2.2 device from another Android device looks like:

Summary

Android 4.2.2 finally adds a means to control USB access to the ADB daemon by requiring debug hosts to be explicitly authorized by the user and added to a whitelist. This helps prevent information extraction via USB, which requires only brief physical access and has been demonstrated to be quite effective. While secure debugging is not a feature most users will ever use directly, along with full disk encryption and a good screen lock password, it goes a long way towards making developer devices more secure.

Certificate pinning in Android 4.2

A lot has happened in the Android world since our last post, with new devices being announced and going on and off sale. Most importantly, however, Android 4.2 has been released and made its way to AOSP. It’s an evolutionary upgrade, bringing various improvements and some new user and developer features. This time around, security-related enhancements made it into the what’s new list, and there are quite a lot of them. The most widely publicized one has been, as expected, the one users may actually see — application verification. It recently got an in-depth analysis, so in this post we will look into something less visible, but nevertheless quite important — certificate pinning.

PKI’s trust problems and proposed solutions

In the highly unlikely case that you haven’t heard about it, the trustworthiness of the existing public CA model has been severely compromised in the last couple of years. It has been suspect for a while, but recent high-profile CA security breaches have brought this problem into the spotlight. Attackers managed to issue certificates for a wide range of sites, including Windows Update servers and Gmail. Not all of those were used (or at least not detected) in real attacks, but the incidents showed just how much of current Internet technology depends on certificates. Fraudulent ones can be used for anything from installing malware to spying on Internet communication, and all that while fooling users into believing they are using a secure channel or installing a trusted executable. And better security for CA’s is not really a solution: major CA’s have willingly issued hundreds of certificates for unqualified names such as localhost, webmail and exchange (here is a breakdown, by number of issued certificates). These could enable eavesdropping on internal corporate traffic by using the certificates for a man-in-the-middle (MITM) attack against any internal host accessed using an unqualified name. And of course there is also the matter of compelled certificate creation, where a government agency could compel a CA to issue a false certificate to be used for intercepting secure traffic (and all this may be perfectly legal).
Clearly the current PKI system, which is largely based on a pre-selected set of trusted CA’s (trust anchors), is problematic, but what are some of the actual problems? There are different takes on this one, but for starters, there are too many public CA’s. As this map by the EFF’s SSL Observatory project shows, there are more than 650 public CA’s trusted by major browsers. Recent Android versions ship with over one hundred (140 for 4.2) trusted CA certificates and until ICS the only way to remove a trusted certificate was a vendor-initiated OS OTA. Additionally, there is generally no technical restriction on what certificates CA’s can issue: as the Comodo and DigiNotar attacks have shown, anyone can issue a certificate for *.google.com (name constraints don’t apply to root CA’s and don’t really work for a public CA). Furthermore, since CA’s don’t publicize what certificates they have issued, there is no way for site operators (in this case Google) to know when someone issues a new, possibly fraudulent, certificate for one of their sites and take appropriate action (certificate transparency standards aim to address this). In short, with the current system if any of the built-in trust anchors is compromised, an attacker could issue a certificate for any site, and neither users accessing it, nor the owner of the site would notice. So what are some of the proposed solutions?
Proposed solutions range from radical: scrap the whole PKI idea altogether and replace it with something new and better (DNSSEC is a usual favourite); through moderate: use the current infrastructure but do not implicitly trust CA’s; to evolutionary: maintain compatibility with the current system, but extend it in ways that limit the damage of CA compromise. DNSSEC is still not universally deployed, although the key TLD domains have already been signed. Additionally, it is inherently hierarchical and actually more rigid than PKI, so it doesn’t really fit the bill too well. Other even remotely viable solutions have yet to emerge, so we can safely say that the radical path is currently out of the picture. Moving towards the moderate side, some people suggest the SSH model, in which no sites or CA’s are initially trusted, and users decide what site to trust on first access. Unlike SSH however, the number of sites that you access directly or indirectly (via CDN’s, embedded content, etc.) is virtually unlimited, and user-managed trust is quite unrealistic. In a similar vein, but much more practical, is Moxie Marlinspike’s (of sslstrip and CloudCracker fame) Convergence. It is based on the idea of trust agility, a concept he introduced in his SSL And The Future Of Authenticity talk (and related blog post). It both abolishes the browser (or OS) pre-selected trust anchor set, and recognizes that users cannot possibly independently make trust decisions about all the sites they visit. Trust decisions are delegated to a set of notaries that can vouch for a site by basically confirming that the certificate you receive from a site is one they have seen before. If multiple notaries point out the same certificate as correct, users can be reasonably sure that it is genuine and therefore trustworthy. Convergence is not a formal standard, but was released as actual working code including a Firefox plugin (client) and server-side notary software. While this system is promising, the number of available notaries is currently limited, and Google has publicly stated that it won’t add it to Chrome, and it cannot currently be implemented as an extension either (Chrome lacks the necessary API’s to let plugins override the default certificate validation module).
That leads us to the current evolutionary solutions, which have been deployed to a fairly large user base, mostly courtesy of the Chrome browser. One is certificate blacklisting, which is more of a band-aid solution: in addition to removing compromised CA certificates from the trust anchor set with a browser update, it also explicitly refuses to trust their public keys in order to cover the case where they are manually added to the trust store again. Chrome added blacklisting around the time Comodo was compromised, and Android has this feature since the original Jelly Bean release (4.1). The next one, certificate pinning (more accurately public key pinning), takes the converse approach: it whitelists the keys that are trusted to sign certificates for a particular site. Let’s look at it in a bit more detail.

Certificate pinning

Pinning was introduced in Google Chrome 13 in order to limit the CA’s that can issue certificates for Google properties. It actually helped discover the MITM attack against Gmail, which resulted from the DigiNotar breach. It is implemented by maintaining a list of public keys that are trusted to issue certificates for a particular DNS name. The list is consulted when validating the certificate chain for a host, and if the chain doesn’t include at least one of the whitelisted keys, validation fails. In practice the browser keeps a list of SHA1 hashes of the SubjectPublicKeyInfo (SPKI) field of trusted certificates. Pinning the public keys instead of the actual certificates allows for updating host certificates without breaking validation and requiring pinning information update. You can find the current Chrome list here.
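
For reference, computing such a pin from a certificate takes only a couple of lines; here is a quick sketch (cert stands for an X509Certificate you already have; Chrome of course generates its pins at build time, not at runtime):

// sketch: compute a Chrome-style pin, i.e. the SHA-1 hash of the certificate's SPKI
byte[] spki = cert.getPublicKey().getEncoded();   // DER-encoded SubjectPublicKeyInfo
byte[] pin = java.security.MessageDigest.getInstance("SHA-1").digest(spki);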

As you can see, the list now pins non-Google sites as well, such as twitter.com and lookout.com, and is rather large. Including more sites will only make it larger, and it is quite obvious that hard-coding pins doesn’t really scale. A couple of new Internet standards have been proposed to help solve this scalability problem: Public Key Pinning Extension for HTTP (PKPE) by Google and Trust Assertions for Certificate Keys (TACK) by Moxie Marlinspike. The first one is simpler and proposes a new HTTP header (Public-Key-Pins, PKP) that holds pinning information including public key hashes, pin lifetime and whether to apply pinning to subdomains of the current host. Pinning information (or simply ‘pins’) is cached by the browser and used when making trust decisions until it expires. Pins are required to be delivered over a secure (TLS) connection, and the first connection that includes a PKP header is implicitly trusted (or optionally validated against pins built into the client). The protocol also supports an endpoint to report failed validations to via the report-uri directive and allows for a non-enforcing mode (specified with the Public-Key-Pins-Report-Only header), where validation failures are reported, but connections are still allowed. This makes it possible to notify host administrators about possible MITM attacks against their sites, so that they can take appropriate action. The TACK proposal, on the other hand, is somewhat more complex and defines a new TLS extension (TACK) that carries pinning information signed with a dedicated ‘TACK key’. TLS connections to a pinned hostname require the server to present a ‘tack’ containing the pinned key and a corresponding signature over the TLS server’s public key. Thus both pinning information exchange and validation are carried out at the TLS layer. In contrast, PKPE uses the HTTP layer (over TLS) to send pinning information to clients, but also requires validation to be performed at the TLS layer, dropping the connection if validation against the pins fails. Now that we have an idea how pinning works, let’s see how it’s implemented on Android.

Certificate pinning in Android

As mentioned at the beginning of the post, pinning is one of the many security enhancements introduced in Android 4.2. The OS doesn’t come with any built-in pins, but instead reads them from a file in the /data/misc/keychain directory (where user-added certificates and blacklists are stored). The file is called, you guessed it, simply pins and is in the following format: hostname=enforcing|SPKI SHA512 hash, SPKI SHA512 hash,.... Here enforcing is either true or false and is followed by a list of SPKI hashes (SHA512) separated by commas. Note that there is no validity period, so pins are valid until deleted. The file is used not only by the browser, but system-wide by virtue of pinning being integrated into libcore. In practice this means that the default (and only) system X509TrustManager implementation (TrustManagerImpl) consults the pin list when validating certificate chains. However, there is a twist: the standard checkServerTrusted() method doesn’t consult the pin list. Thus any legacy libraries that do not know about certificate pinning would continue to function exactly as before, regardless of the contents of the pin list. This has probably been done for compatibility reasons, and is something to be aware of: running on 4.2 doesn’t necessarily mean that you get the benefit of system-level certificate pins. The pinning functionality is exposed to third party libraries or SDK apps via the new X509TrustManagerExtensions SDK class. It has a single method, List<X509Certificate> checkServerTrusted(X509Certificate[] chain, String authType, String host), which returns a validated chain on success or throws a CertificateException if validation fails. Note the last parameter, host. This is what the underlying implementation (TrustManagerImpl) uses to search the pin list for matching pins. If one is found, the public keys in the chain being validated will be checked against the hashes in the pin entry for that host. If none of them matches, validation will fail and you will get a CertificateException. So what part of the system uses the new pinning functionality then? The default SSL engine (JSSE provider), namely the client handshake (ClientHandshakeImpl) and SSL socket (OpenSSLSocketImpl) implementations. They check their underlying X509TrustManager and, if it supports pinning, perform additional validation against the pin list. If validation fails, the connection won’t be established, thus implementing pin validation on the TLS layer as required by the standards discussed in the previous section. We now know what the pin list is and who uses it, so let’s find out how it is created and maintained.
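
Before moving on, here is roughly what explicit, pin-aware validation with the new class looks like from an app’s point of view (a sketch: the chain would come from the TLS session, e.g. via SSLSession.getPeerCertificates(), and the host name and authType below are just examples):

// sketch: explicit, pin-aware chain validation on Android 4.2+
TrustManagerFactory tmf =
        TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init((KeyStore) null);   // use the default system trust store
X509TrustManager tm = null;
for (TrustManager t : tmf.getTrustManagers()) {
    if (t instanceof X509TrustManager) {
        tm = (X509TrustManager) t;
        break;
    }
}
X509TrustManagerExtensions tme = new X509TrustManagerExtensions(tm);
List<X509Certificate> validatedChain =
        tme.checkServerTrusted(chain, "RSA", "www.google.com");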

First off, at the time of this writing, Google-managed (on Nexus devices) JB 4.2 installations have an empty pin list (i.e., the pins file doesn’t exist). Thus certificate pinning on Android has not been widely deployed yet. Eventually it will be, but the current state of affairs makes it easier to play with, because restoring to factory state requires simply deleting the pins file and associated metadata (root access required). As you might expect, the pins file is not written directly by the OS. Updating it is triggered by a broadcast (android.intent.action.UPDATE_PINS) that contains the new pins in its extras. The extras contain the path to the new pins file, its new version (stored in /data/misc/keychain/metadata/version), a hash of the current pins and a SHA512withRSA signature over all the above. The receiver of the broadcast (CertPinInstallReceiver) will then verify the version, hash and signature, and if valid, atomically replace the current pins file with the new content (the same procedure is used for updating the premium SMS numbers list). Signing the new pins ensures that they can only be updated by whoever controls the private signing key. The corresponding public key used for validation is stored as a system secure setting under the "config_update_certificate" key (usually in the secure table of the /data/data/com.android.providers.settings/databases/settings.db database). Just like the pins file, this value currently doesn’t exist, so it’s relatively safe to install your own key in order to test how pinning works. Restoring to factory state requires deleting the corresponding row from the secure table. This basically covers the current pinning implementation in Android; it’s now time to actually try it out.

Using certificate pinning

To begin with, if you are considering using pinning in an Android app, you don’t need the latest and greatest OS version. If you are connecting to a server that uses a self-signed or a private CA-issued certificate, chances are you might already be using pinning. Unlike a browser, your Android app doesn’t need to connect to practically every possible host on the Internet, but only to a limited number of servers that you know and have control over (limited control in the case of hosted services). Thus you know in advance who issued your certificates and only need to trust their key(s) in order to establish a secure connection to your server(s). If you are initializing a TrustManagerFactory with your own keystore file that contains the issuing certificate(s) of your server’s SSL certificate, you are already using pinning: since you don’t trust any of the built-in trust anchors (CA certificates), if any of those got compromised your app won’t be affected (unless it also talks to affected public servers as well). If you, for some reason, need to use the default trust anchors as well, you can define pins for your keys and validate them after the default system validation succeeds. For more thoughts on this and some sample code (doesn’t support ICS and later, but there is a pull request with the required changes), refer to this post by Moxie Marlinspike. Update: Moxie has repackaged his sample pinning code in an easy-to-use standalone library. Update 2: His version uses a static, app-specific trust store. Here’s a fork that uses the system trust store, both on pre-ICS (cacerts.bks) and post-ICS (AndroidCAStore) devices.
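
For reference, the custom-keystore approach described above boils down to just a few lines; here is a minimal sketch (the keystore resource, its password and the server URL are placeholders):

// sketch: trust only the CA(s) in our own keystore, effectively pinning their keys
KeyStore ks = KeyStore.getInstance("BKS");
InputStream in = context.getResources().openRawResource(R.raw.myca);   // hypothetical resource
try {
    ks.load(in, "keystore-password".toCharArray());
} finally {
    in.close();
}
TrustManagerFactory tmf =
        TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(ks);
SSLContext sslCtx = SSLContext.getInstance("TLS");
sslCtx.init(null, tmf.getTrustManagers(), null);
HttpsURLConnection conn =
        (HttpsURLConnection) new URL("https://myserver.example.com").openConnection();
conn.setSSLSocketFactory(sslCtx.getSocketFactory());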

Before we (finally!) start using pinning in 4.2 a word of warning: using the sample code presented below both requires root access and modifies core system files. It does have some limited safety checks, but it might break your system. If you decide to run it, make sure you have a full system backup and proceed with caution.

As we have seen, pins are stored in a simple text file, so we can just write one up and place it in the required location. It will be picked up and used by the system TrustManager, but that is not much fun and is not how the system actually works. We will go through the ‘proper’ channel instead by creating and sending a correctly signed update broadcast. To do this, we first need to create and install a signing key. The sample app has one embedded so you can just use that or generate and load a new one using OpenSSL (convert to PKCS#8 format to include in Java code). To install the key we need the WRITE_SECURE_SETTINGS permission, which is only granted to system apps, so we must either sign our test app with the platform key (on a self-built ROM) or copy it to /system/app (on a rooted phone with stock firmware). Once this is done we can install the key by updating the "config_update_certificate" secure setting:

Settings.Secure.putString(ctx.getContentResolver(), "config_update_certificate", 
"MIICqDCCAZAC...");

If this is successful we then proceed to constructing our update request. This requires reading the current pin list version (from /data/misc/keychain/metadata/version) and the current pins file content. Initially both should be empty, so we can just start off with 0 and an empty string. We can then create our pins file, concatenate it with the above and sign the whole thing before sending the UPDATE_PINS broadcast. For updates, things are a bit more tricky since the metadata/version file’s permissions don’t allow for reading by a third party app. We work around this by launching a root shell to get the file contents with cat, so don’t be alarmed if you get a ‘Grant root?’ popup by SuperSU or its brethren. Hashing and signing are pretty straightforward, but creating the new pins file merits some explanation.

To make it easier to test, we create (or append to) the pins file by connecting to the URL specified in the app and pinning the public keys in the host’s certificate chain (we’ll use www.google.com in this example, but any host accessible over HTTPS should do). Note that we don’t actually pin the host’s SSL certificate: this is to allow for the case where the host key is lost or compromised and a new certificate is issued to the host. This is introduced in the PKPE draft as a necessary security trade-off to allow for host certificate updates. Also note that in the case of one (or more) intermediate CA certificates we pin both the issuing certificate’s key(s) and the root certificate’s key. This is to allow for testing more variations, but is not something you might want to do in practice: for a connection to be considered valid, only one of the keys in the pin entry needs to be in the host’s certificate chain. In the case that this is the root certificate’s key, connections to hosts with certificates issued by a compromised intermediary CA will be allowed (think hacked root CA reseller). And above all, getting and creating pins based on certificates you receive from a host on the Internet is obviously pointless if you are already the target of a MITM attack. For the purposes of this test, we assume that this is not the case. Once we have all the data, we fire the update intent, and if it checks out the pins file will be updated (watch the logcat output to confirm). The code for this will look something like this (largely based on pinning unit test code in AOSP). With that, it is time to test if pinning actually works.

URL url = new URL("https://www.google.com");
HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
conn.setRequestMethod("GET");
conn.connect();

Certificate[] chain = conn.getServerCertificates();
// pin the issuing (intermediate) certificate's key rather than the host certificate's
X509Certificate cert = (X509Certificate) chain[1];
String pinEntry = String.format("%s=true|%s", url.getHost(), getFingerprint(cert));
String contentPath = makeTemporaryContentFile(pinEntry);
String version = getNextVersion("/data/misc/keychain/metadata/version");
String currentHash = getHash("/data/misc/keychain/pins");
// sign the new pins content, its version and the hash of the current pins
String signature = createSignature(pinEntry, version, currentHash);

Intent i = new Intent();
i.setAction("android.intent.action.UPDATE_PINS");
i.putExtra("CONTENT_PATH", contentPath);
i.putExtra("VERSION", version);
i.putExtra(REQUIRED_HASH", currentHash);
i.putExtra("SIGNATURE", signature);
sendBroadcast(i);
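
For completeness, here is one plausible implementation of the getFingerprint() helper referenced above, assuming the pins file expects hex-encoded SHA-512 hashes of the certificate’s SubjectPublicKeyInfo (a sketch, not the exact AOSP test code):

// sketch: SHA-512 over the DER-encoded SubjectPublicKeyInfo, hex-encoded
static String getFingerprint(X509Certificate cert) throws Exception {
    byte[] spki = cert.getPublicKey().getEncoded();
    byte[] digest = MessageDigest.getInstance("SHA-512").digest(spki);
    StringBuilder sb = new StringBuilder();
    for (byte b : digest) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}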

We have now pinned www.google.com, but how to test if the connection will actually fail? There are multiple ways to do this, but to make things a bit more realistic we will launch a MITM attack of sorts by using an SSL proxy. We will use the Burp proxy, which works by generating a new temporary (ephemeral) certificate on the fly for each host you connect to (if you prefer a terminal-based solution, try mitmproxy). If you install Burp’s root certificate in Android’s trust store and are not using pinning, browsers and other HTTP clients have no way of distinguishing the ephemeral certificate Burp generates from the real one and will happily allow the connection. This allows Burp to decrypt the secure channel on the fly and enables you to view and manipulate traffic as you wish (strictly for research purposes, of course). Refer to the Getting Started page for help with setting up Burp. Once we have Burp all set up, we need to configure Android to use it. While Android does support HTTP proxies, those are generally only used by the built-in browser and it is not guaranteed that HTTP libraries will use the proxy settings as well. Since Android is after all Linux, we can easily take care of this by setting up a ‘transparent’ proxy that redirects all HTTP traffic to our chosen host by using iptables. If you are not comfortable with iptables syntax or simply prefer an easy to use GUI, there’s an app for that as well: Proxy Droid. After setting up Proxy Droid to forward packets to our Burp instance we should have all Android traffic flowing through our proxy. Open a couple of pages in the browser to confirm before proceeding further (make sure Burp’s ‘Intercept’ button is off if traffic seems stuck).

Finally time to connect! The sample app allows you to test connection with both of Android’s HTTP libraries (HttpURLConnection and Apache’s HttpClient), just press the corresponding ‘Check w/ …’ button. Since validation is done at the TLS layer, the connection shouldn’t be allowed and you should see something like this (the error message may say ‘No peer certificates‘ for HttpClient; this is due to the way it handles validation errors):

If you instead see a message starting with ‘X509TrustManagerExtensions verify result: Error verifying chain…‘, the connection did go through but our additional validation using the X509TrustManagerExtensions class detected the changed certificate and failed. This shouldn’t happen, right? It does, though, because HTTP clients cache connections (SSLSocket instances, which in turn each hold an X509TrustManager instance, which only reads pins when created). The easiest way to make sure pins are picked up is to reboot the phone after you pin your test host. If you try connecting with the Android browser after rebooting (not Chrome!), you will be greeted with this message:

As you can see, the certificate for www.google.com is issued by our Burp CA, but it might as well be from DigiNotar: if the proper public keys are pinned, Android should detect the fraudulent host certificate and show a warning. This works because the Android browser is using the system trust store and pins via the default TrustManager, even though it doesn’t use JSSE SSL sockets. Connecting with Chrome, on the other hand, works fine even though it does have built-in pins for Google sites: Chrome allows manually installed trust anchors to override system pins so that tools such as Burp or Fiddler continue to work (or pinning is not yet enabled on Android, which is somewhat unlikely).

So there you have it: pinning on Android works. If you look at the sample code, you will see that we have created enforcing pins and that is why we get connection errors when connecting through the proxy. If you set the enforcing parameter to false instead, connection will be allowed, but chains that failed validation will still be recorded to the system dropbox (/data/system/dropbox) in cert_pin_failure@timestamp.txt files, one for each validation failure.

Summary

Android adds certificate pinning by keeping a pin list with an entry for each pinned DNS name. Pin entries include a host name, an enforcing parameter and a list of SPKI SHA512 hashes of the keys that are allowed to sign a certificate for that host. The pin list is updated by sending a broadcast with signed update data. Applications using the default HTTP libraries get the benefit of system-level pinning automatically or can explicitly check a certificate chain against the pin list by using the X509TrustManagerExtensions SDK class. Currently the pin list is empty, but the functionality is available now and once pins for major sites are deployed this will add another layer of defense against MITM attacks that follow a CA compromise.

Happy 5th Birthday Android!

From Cupcake to Jelly Bean, the last five years have brought many different flavors of smart phones to Android users. People can’t seem to get enough of these delicious digital treats, with over 500 million active Android devices worldwide, a number expected to reach one billion at some point in 2013.

With its increasing popularity, it’s no surprise that the Android OS is the most attractive target for writers of mobile malware. From April to June 2012, McAfee Labs found that practically all new mobile malware was directed at the Android platform. The attacks included SMS-sending malware, mobile botnets, spyware and destructive Trojans.

Mobile threats continue to evolve as writers of mobile malware become more advanced in their practice. They are looking to steal consumer and business data from unprotected devices ranging from customer lists to personal financial information. These threats are growing in their sophistication and continue to find vulnerabilities through users’ pictures and social media applications.

In the last few months McAfee Labs has uncovered the latest emerging threats that Android users should be watchful for:

“Drive-by downloads:” From April to June 2012 there was an emergence of mobile Android “drive-by downloads.” These drop dangerous malware on a phone when the user visits a malicious site. Once the user is tricked into running the app, it steals the personal data stored on the phone.

Twikabot.A: A new botnet client, Android/Twikabot.A, uses Twitter as a means of controlling and executing attacks. The user unexpectedly downloads the malware onto their phone after clicking on a Twitter picture link or message. Once it is downloaded, the attacker can tweet infected links to followers, install additional malware, delete files and leave the back door open for other attackers.

Stamper.A: Malware authors have evolved Android/Moghava.A into a new Trojan threat known as Android/Stamper.A by simply changing a few lines of the original malicious code. This damages photos by photo-bombing the user’s pictures with an image of a baby. Users looking for a voting app for the Japanese female pop band, AKB48, unknowingly download the Trojan. The baby picture is from an ad campaign originally targeted at male fans of the band, posing the question “if you and a member of AKB48 had a kid, what would it look like?” The ad campaign put together a sumo wrestler and a band member and featured an image of what that ensuing baby would look like. While the Trojan doesn’t change anything except the image and a few strings in the image stamping, users expecting to get results from the pop group’s fan site instead have all their pictures damaged with a photo-bombing baby.

McAfee Labs recommends that all Android users take appropriate precautions to safeguard their devices and personal information. For more information on how Android users can protect their devices visit mcafee.com/us/mms/. You can also download a free guide on mobile security from our Security Advice Center.


Android online account management

Our recent posts covered NFC and the secure element as supported in recent Android versions, including community ones. In this two-part series we will take a completely different direction: managing online user accounts and accessing Web services. We will briefly discuss how Android manages user credentials and then show how to use cached authentication details to log in to most Google sites without requiring additional user input. Most of the functionality we shall discuss is hardly new — it has been available at least since Android 2.0. But while there is ample documentation on how to use it, there doesn’t seem to be a ‘bigger picture’ overview of how the pieces are tied together. This somewhat detailed investigation was prompted by trying to develop an app for a widely used Google service that unfortunately doesn’t have an official API and struggling to find a way to log in to it using cached Google credentials. More on this in the second part; let’s first see how Android manages accounts for online services.

Android account management

Android 2.0 (API Level 5, largely non-existent, because it was quickly succeeded by 2.0.1, Level 6), introduced the concept of centralized account management with a public API. The central piece in the API is the AccountManager class which, quote: ‘provides access to a centralized registry of the user’s online accounts. The user enters credentials (user name and password) once per account, granting applications access to online resources with “one-click” approval.’ You should definitely read the full documentation of the class, which is quite extensive, for more details. Another major feature of the class is that it lets you get an authentication token for supported accounts, allowing third party applications to authenticate to online services without needing to handle the actual user password (more on this later). It also has no fewer than five methods that allow you to get an authentication token, all but one with at least four parameters, so finding the one you need might take some time, and some more to get the parameters right. It might be a good idea to start with the synchronous blockingGetAuthToken() and work your way from there once you have a basic working flow. On some older Android versions, the AccountManager would also monitor your SIM card and wipe cached credentials if you swapped cards, but fortunately this ‘feature’ has been removed in Android 2.3.4.
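
To give an idea of the API, a minimal sketch using the synchronous call mentioned above might look like this (it must run off the main thread; the ‘mail’ token type is only an example, and on this era of Android the GET_ACCOUNTS and USE_CREDENTIALS permissions are required):

// sketch: fetch a token for the first Google account on the device (off the main thread)
AccountManager am = AccountManager.get(context);
Account[] accounts = am.getAccountsByType("com.google");
if (accounts.length > 0) {
    String token = am.blockingGetAuthToken(accounts[0], "mail", true);
    // ... use the token; if the server rejects it, invalidate it and request a new one:
    // am.invalidateAuthToken("com.google", token);
}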

The AccountManager, as most Android system API’s, is just a facade for the AccountManagerService which does the actual work. The service doesn’t provide an implementation for any particular form of authentication though. It only acts as a coordinator for a number of pluggable authenticator modules for different account types (Google, Twitter, Exchange, etc.). The best part is that any application can register an authentication module by implementing an account authenticator and related classes, if needed. Android Training has a tutorial on the subject that covers the implementation details, so we will not discuss them here. Registering a new account type with the system lets you take advantage of a number of Android infrastructure services:

  • centralized credential storage in a system database
  • ability to issue tokens to third party apps
  • ability to take advantage of Android’s automatic background synchronization

One thing to note is that while credentials (usually user names and passwords) are stored in a central database (/data/system/accounts.db or /data/system/user/0/accounts.db on Jelly Bean and later for the first system user) that is only accessible to system applications, the credentials themselves are in no way encrypted — that is left to the authentication module to implement as necessary. If you have a rooted device (or use the emulator), listing the contents of the accounts table might be quite instructive: some of your passwords, especially for the stock Email application, will show up in clear text. While the AccountManager has a getPassword() method, it can only be used by apps with the same UID as the account’s authenticator, i.e., only by classes in the same app (unless you are using sharedUserId, which is not recommended for non-system apps). If you want to allow third party applications to authenticate using your custom accounts, you have to issue some sort of authentication token, accessible via one of the many getAuthToken() methods. Once your account is registered with Android, if you implement an additional sync adapter, you can register to have it called at a specified interval and do background syncing for your app (one- or two-way), without needing to manage scheduling yourself. This is a very powerful feature that you get practically for free, and probably merits its own post. As we now have a basic understanding of authentication modules, let’s see how they are used by the system.

As we mentioned above, account management is coordinated by the AccountManagerService. It is a fairly complex piece of code (about 2500 lines in JB), most of the complexity stemming from the fact that it needs to communicate with services and apps that span multiple processes and threads within each process, and needs to take care of synchronization and delivering results to the right thread. If we abstract out the boilerplate code, what it does on a higher level is actually fairly straightforward:

  • on startup it queries the PackageManager to find out all registered authenticators, and stores references to them in a map, keyed by account type
  • when you add an account of a particular type, it saves its type, username and password to the accounts table
  • if you get, set or reset the password for an account, it accesses or updates the accounts table accordingly
  • if you get or set user data for the account, it is fetched from or saved to the extras table
  • when you request a token for a particular account, things become a bit more interesting:
    • if a token with the specified type has never been issued before, it shows a confirmation activity asking (see screenshot below) the user to approve access for the requesting application. If they accept, the UID of the requesting app and the token type are saved to the grants table.
    • if a grant already exists, it checks the authtokens table for tokens matching the request. If a valid one exists, it is returned.
    • if a matching token is not found, it finds the authenticator for the specified account type in the map and calls its getAuthToken() method to request a token. This usually involves the authenticator fetching the username and password from the accounts table (via the getPassword() method) and calling its respective online service to get a fresh token. When one is returned, it gets cached in the authtokens table and then returned to the requesting app (usually asynchronously via a callback).
  • if you invalidate a token, it gets deleted from the authtokens table
Now that we know how Android’s account management system works, let’s see how it is implemented for the most widely used account type.

Google account management

    Usually the first thing you do when you turn on your brand new (or freshly wiped) ‘Google Experience’ Android device is to add a Google account. Once you authenticate successfully, you are offered the option to sync data from associated online services (GMail, Calendar, Docs, etc.) to your device. What happens behind the scenes is that an account of type ‘com.google’ is added via the AccountManager, and a bunch of Google apps start getting tokens for the services they represent. Of course, all of this works with the help of an authentication provider for Google accounts. Since it plugs into the standard account management framework, it works by registering an authenticator implementation and using it involves the sequence outlined above. However, it is also a little bit special. Three main things make it different:

    • it is not part of any particular app you can install, but is bundled with the system
    • a lot of the actual functionality is implemented on the server side
    • it does not store passwords in plain text on the device

    If you have ever installed a community ROM built off AOSP code, you know that in order to get GMail and other Google apps to work on your device, you need a few bits not found in AOSP. Two of the required pieces are the Google Services Framework (GSF) and the Google Login Service (GLS). The former provides common services to all Google apps such as centralized settings and feature toggle management, while the latter implements the authentication provider for Google accounts and will be the topic of this section.

    Google provides a multitude of online services (not all of which survive for long), and consequently a bunch of different methods to authenticate to those. Android’s Google Login Service, however, doesn’t call those public authentication API’s directly, but via a dedicated online service, which lives at android.clients.google.com. It has endpoints both for authentication and authorization token issuing, as well as data feed (mail, calendar, etc.) synchronization, and more. As we shall see, the supported methods of authentication are somewhat different from those available via other public Google authentication API’s. Additionally, it supports a few ‘special’ token types that greatly simplify some complex authentication flows.

    All of the above is hardly surprising: when you are dealing with online services it is only natural to have as much as possible of the authentication logic on the server side, both for ease of maintenance and to keep it secure. Still, to kick start it you need to store some sort of credentials on the device, especially when you support background syncing for practically everything and you cannot expect people to enter them manually. On-device credential management is one of the services GLS provides, so let’s see how it is implemented. As mentioned above, GLS plugs into the system account framework, so cached credentials, tokens and associated extra data are stored in the system’s accounts.db database, just as for other account types. Inspecting it reveals that Google accounts have a bunch of Base64-encoded strings associated with them. One of the user data entries (in the extras table) is helpfully labeled sha1hash (but does not exist on all Android versions) and the password (in the accounts table) is a long string that takes different formats on different Android versions. Additionally, the GSF database has a google_login_public_key entry, which when decoded suspiciously resembles a 1024-bit RSA public key. Some more experimentation reveals that credential management works differently on pre-ICS and post-ICS devices. On pre-ICS devices, GLS stores an encrypted version of your password and posts it to the server side endpoints both when authenticating for the first time (when you add the account) and when it needs to have a token for a particular service issued. On post-ICS devices, it only posts the encrypted password the first time, and gets a ‘master token’ in exchange, which is then stored on the device (in the password column of the accounts database). Each subsequent token request uses that master token instead of a password.

    Let’s look into the cached credential strings a bit more. The encrypted password is 133 bytes long, so it is a fair bet that it is encrypted with the 1024-bit (128-byte) RSA public key mentioned above, with some extra data prepended. Adding multiple accounts that use the same password produces different password strings (which is a good thing), but the first few bytes are always the same, even on different devices. It turns out those identify the encryption key and are derived by hashing its raw value and taking the leading bytes of the resulting hash. At least from our limited sample of Android devices, it would seem that the RSA public key used is constant both across Android versions and accounts. We can safely assume that its private counterpart lives on the server side and is used to decrypt sent passwords before performing the actual authentication. The padding used is OAEP (with SHA1 and MGF1), which produces random-looking messages and is currently considered secure (at least when used in combination with RSA) against most advanced cryptanalysis techniques. It also has quite a bit of overhead, which in practice means that the GLS encryption scheme can encrypt at most 86 bytes of data. The outlined encryption scheme is not exactly military-grade, and there is the issue of millions of devices most probably using the same key, but recovering the original password should be sufficiently hard to discourage most attackers. However, let’s not forget that we also have a somewhat friendlier SHA1 hash available. It turns out it can be easily reproduced by ‘salting’ the Google account password with the account name (typically a Gmail address) and doing a single round of SHA1. This is considerably easier to brute-force, and it wouldn’t be too hard to precompute a bunch of hashes based on commonly used or likely passwords if you knew the target account name.
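    To make the two schemes more concrete, here is a minimal sketch of both: reproducing the SHA1 hash (the concatenation order, with the account name acting as a prefix ‘salt’, and the UTF-8 encoding are assumptions based on our experiments) and encrypting a password with RSA-OAEP to illustrate the 86-byte limit (the class and method names are hypothetical):

        import java.io.UnsupportedEncodingException;
        import java.security.GeneralSecurityException;
        import java.security.MessageDigest;
        import java.security.PublicKey;

        import javax.crypto.Cipher;

        public class GlsCredentialSketch {

            // Single round of SHA-1 over the account name ('salt') followed by the
            // password. Order and character encoding are assumptions for illustration.
            public static byte[] sha1Hash(String accountName, String password)
                    throws GeneralSecurityException, UnsupportedEncodingException {
                MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
                sha1.update(accountName.getBytes("UTF-8"));
                sha1.update(password.getBytes("UTF-8"));
                return sha1.digest();
            }

            // Illustrates the OAEP overhead: with a 1024-bit key and SHA-1 based OAEP,
            // a single block fits at most 128 - 2*20 - 2 = 86 bytes of plaintext.
            public static byte[] encryptWithOaep(PublicKey loginPublicKey, byte[] plaintext)
                    throws GeneralSecurityException {
                Cipher cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-1AndMGF1Padding");
                cipher.init(Cipher.ENCRYPT_MODE, loginPublicKey);
                // throws if plaintext.length > 86 for a 1024-bit key
                return cipher.doFinal(plaintext);
            }
        }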

    Fortunately, newer versions of Android (4.0 and later) no longer store this hash on the device. Instead of the encrypted password + SHA1 hash combination, they store an opaque ‘master token’ (most probably some form of OAuth token) in the password column and exchange it for authentication tokens for different Google services. It is not clear whether this token ever expires or whether it is updated automatically. You can, however, revoke it manually by going to the security settings of your Google account and revoking access for the ‘Android Login Service’ (and a bunch of other stuff you never use while you are at it). This will force you to re-authenticate on the device the next time it tries to get a Google auth token, so it is also somewhat helpful if you ever lose your device and don’t want people accessing your email, etc. if they manage to unlock it. The service authorization token issuing protocol uses some device-specific data in addition to the master token, so obtaining only the master token should not be enough to authenticate and impersonate a device (it can, however, be used to log into your Google account on the Web; see the second part for details).

    Google Play Services

    Google Play Services (we’ll abbreviate it to GPS, although the actual package is com.google.android.gms, guess where the ‘M’ came from) was announced at this year’s Google I/O as an easy-to-use platform that offers integration with Google products for third-party Android apps. It was actually rolled out only a month ago, so it’s probably not very widely used yet. Currently it provides support for OAuth 2.0 authorization to Google APIs ‘with a good user experience and security’, as well as some Google+ integration (sign-in and the +1 button). Getting OAuth 2.0 tokens via the standard AccountManager interface has been supported for quite some time (though support was considered ‘experimental’) by using the special 'oauth2:scope' token type syntax. However, it didn’t work reliably across different Android builds, which bundle different GLS versions and therefore behave slightly differently. Additionally, the permission grant dialog shown when requesting a token was not particularly user friendly, because in some cases it showed the raw OAuth 2.0 scope, which probably means little to most users (see the screenshot in the first section). While some human-readable aliases for certain scopes were introduced (e.g., ‘Manage your tasks’ for ‘oauth2:https://www.googleapis.com/auth/tasks’), that solution was neither ideal nor universally available. GPS solves this by making token issuing a two-step process (newer GLS versions use the same process; a client-side sketch follows the list below):
    1. the first request is much like before: it includes the account name, the master token (or the encrypted password pre-ICS) and the requested service, in the 'oauth2:scope' format. GPS adds two new parameters: the requesting app’s package name and its signing certificate SHA1 hash (more on this later). The response includes some human-readable details about the requested scope and the requesting application, which GPS shows in a permission grant dialog like the one shown below.
    2. if the user grants the permission, this decision is recorded in the extras table in a proprietary format which includes the requesting app’s package name, signing certificate hash, OAuth 2.0 scope and grant time (note that it is not using the grants table). GPS then resends the authorization request, setting the has_permission parameter to 1. On success this results in an OAuth 2.0 token and its expiry date in the response. Those are cached in the authtokens table in a similar format.
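    From a client app’s point of view, the request still goes through the AccountManager, with the OAuth 2.0 scope encoded in the token type; whether GLS or GPS ultimately services it, and whether the two-step exchange above happens behind the scenes, is transparent. A minimal sketch using the Tasks scope mentioned earlier (requires the USE_CREDENTIALS permission; error handling kept to a minimum):

        import android.accounts.Account;
        import android.accounts.AccountManager;
        import android.accounts.AccountManagerCallback;
        import android.accounts.AccountManagerFuture;
        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class TokenRequester {
            private static final String TAG = "TokenRequester";
            // The OAuth 2.0 scope is passed as the token type, prefixed with 'oauth2:'
            private static final String TOKEN_TYPE =
                    "oauth2:https://www.googleapis.com/auth/tasks";

            public static void requestToken(Activity activity, Account account) {
                AccountManager am = AccountManager.get(activity);
                am.getAuthToken(account, TOKEN_TYPE, null, activity,
                        new AccountManagerCallback<Bundle>() {
                            @Override
                            public void run(AccountManagerFuture<Bundle> future) {
                                try {
                                    Bundle result = future.getResult();
                                    String token = result.getString(AccountManager.KEY_AUTHTOKEN);
                                    Log.d(TAG, "Got OAuth 2.0 token: " + token);
                                } catch (Exception e) {
                                    Log.e(TAG, "Failed to get token", e);
                                }
                            }
                        }, null);
            }
        }

    If a cached token has expired or been revoked, the app needs to call AccountManager.invalidateAuthToken() before requesting a new one, otherwise the stale cached value is returned.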
    To be able to actually use a Google API, you need to register your app’s package name and signing key in Google’s API console. The registration lets services that validate the token query Google about which app the token was issued for, and thus identify the calling app. This has one subtle but important side effect: you don’t have to embed an API key in your app and send it with every request. Of course, for a published third-party app you can easily find out both the package name and the signing certificate, so it is not particularly hard to get a token issued in the name of some other app (not possible via the official API, of course). We can assume that there are some additional checks on the server side that prevent this, but theoretically, if you used such a token you could, for example, exhaust a third-party app’s API request quota by issuing a bunch of requests over a short period of time.
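    The signing certificate hash GPS sends (and the one you register in the API console) can be computed locally from the package’s first signature. A sketch follows; the hex formatting is our choice for illustration, not a documented requirement:

        import android.content.Context;
        import android.content.pm.PackageInfo;
        import android.content.pm.PackageManager;
        import android.content.pm.Signature;

        import java.security.MessageDigest;

        public class SigningCertHash {
            // Computes the SHA-1 hash of an app's signing certificate
            // (only the first signature is considered here).
            public static String getCertSha1(Context context, String packageName)
                    throws Exception {
                PackageInfo info = context.getPackageManager()
                        .getPackageInfo(packageName, PackageManager.GET_SIGNATURES);
                Signature sig = info.signatures[0];
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                byte[] digest = md.digest(sig.toByteArray());
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) {
                    sb.append(String.format("%02X", b));
                }
                return sb.toString();
            }
        }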
    The actual GPS implementation seems to reuse much of the original Google Login Service authentication logic, including the password encryption method, which is still used on pre-ICS devices (the protocol is, after all, mostly the same and it needs to be able to use pre-existing accounts). On top of that it adds better OAuth 2.0 support, a version-specific account selection dialog and prettier, more user-friendly permission grant UIs. The GPS app has the Google apps shared UID, so it can directly interact with other proprietary Google services, including GLS and GSF. This allows it, among other things, to directly read and write Google account credentials and tokens in the accounts database. As can be expected, GPS runs in a remote service that the client library you link into your app talks to.

    The major selling point over the legacy AccountManager API is that while its underlying authenticator modules (GLS and GSF) are part of the system, and as such cannot be updated without an OTA, GPS is a user-installable app that can be easily updated via Google Play. Indeed, it is advertised as auto-updating (much like the Google Play Store client), so app developers presumably won’t have to rely on users to update it if they want to use newer features (unless GPS is disabled altogether, of course). This update mechanism is meant to provide ‘agility in rolling out new platform capabilities’, but considering how much time the initial roll-out took, it remains to be seen how agile the whole thing will turn out to be. Another thing to watch out for is feature bloat: besides OAuth 2.0 support, GPS currently includes Google+ and AdMob related features, and while both are indeed Google-provided services, they are otherwise unrelated. Hopefully, GPS won’t turn into an ‘everything Google plus the kitchen sink’ type of library, delaying releases even more. With all that said, if your app uses OAuth 2.0 tokens to authenticate to Google APIs, which is currently the preferred method (ClientLogin, OAuth 1.0 and AuthSub have been officially deprecated), definitely consider using GPS over ‘raw’ AccountManager access.
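    For comparison with the raw AccountManager flow shown earlier, here is roughly what a token request looks like through the GPS client library, assuming the GoogleAuthUtil entry point available at the time of writing (the call is blocking, so it must run off the main thread):

        import android.app.Activity;
        import android.os.AsyncTask;
        import android.util.Log;

        import com.google.android.gms.auth.GoogleAuthException;
        import com.google.android.gms.auth.GoogleAuthUtil;
        import com.google.android.gms.auth.UserRecoverableAuthException;

        import java.io.IOException;

        public class GpsTokenTask extends AsyncTask<Void, Void, String> {
            private static final String TAG = "GpsTokenTask";
            // Same Tasks scope used in the AccountManager example above
            private static final String SCOPE = "oauth2:https://www.googleapis.com/auth/tasks";

            private final Activity activity;
            private final String accountName;

            public GpsTokenTask(Activity activity, String accountName) {
                this.activity = activity;
                this.accountName = accountName;
            }

            @Override
            protected String doInBackground(Void... params) {
                try {
                    // Blocking call: must not run on the main thread
                    return GoogleAuthUtil.getToken(activity, accountName, SCOPE);
                } catch (UserRecoverableAuthException e) {
                    // The Intent in the exception launches the GPS consent UI; it should be
                    // started from the UI thread (e.g., remembered here and fired in onPostExecute()).
                    Log.w(TAG, "User consent required", e);
                } catch (GoogleAuthException e) {
                    Log.e(TAG, "Unrecoverable auth error", e);
                } catch (IOException e) {
                    Log.e(TAG, "Transient network error", e);
                }
                return null;
            }
        }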

    Summary

    Android provides a centralized registry of user online accounts via the AccountManager class. It lets you get tokens for existing accounts without having to handle the actual credentials, and also lets you register your own account type, if needed. Registering an account type gives you access to powerful system features, such as authentication token caching and automatic background synchronization. ‘Google experience’ devices come with built-in support for Google accounts, which lets third-party apps access Google online services without needing to directly request authentication information from the user. The latest addition to this infrastructure is the recently released Google Play Services app and companion client library, which aim to make it easy to use OAuth 2.0 from third-party applications.
    We’ve now presented an overview of how the account management system works, and the next step is to show how to actually use it to access a real online service. That will be the topic of the second article in the series.