My Federal Government Security Crash Program

In the wake of recent intrusions into government systems, multiple parties have been asking for my recommended courses of action.

In 2007, following public reporting on the 2006 State Department breach, I blogged When FISMA Bites, Initial Thoughts on Digital Security Hearing, and What Should the Feds Do. These posts captured my thoughts on the government’s response to the State Department intrusion.

The situation then was much like the current one: outrage over an intrusion affecting government systems, China suspected as the culprit, and questions regarding why the government’s approach to security does not seem to be working.

Following that breach, the State Department hired a new CISO who pioneered the “continuous monitoring” program, now called “Continuous Diagnostic Monitoring” (CDM). That CISO eventually left State for DHS, and brought CDM to the rest of the Federal government. He is now retired from Federal service, but CDM remains. Years later we’re reading about another breach at the State Department, plus the recent OPM intrusions. CDM is not working.

My last post, Continuous Diagnostic Monitoring Does Not Detect Hackers, explained that although CDM is a necessary part of a security program, it should not be the priority. CDM is at heart a “Find and Fix Flaws Faster” program. We should not prioritize closing and locking doors and windows while there are intruders in the house. Accordingly, I recommend a “Detect and Respond” strategy first and foremost.

To implement that strategy, I recommend the following three-phase approach. All phases can run concurrently.

Phase 1: Compromise Assessment: Assuming the Federal government can muster the motivation, resources, and authority, the Office of Management and Budget (OMB), or another agency such as DHS, should implement a government-wide compromise assessment. The compromise assessment involves deploying teams across government networks to perform point-in-time “hunting” missions to find, and if possible, remove, intruders. I suspect the “remove” part will be more than these teams can handle, given the scope of what I expect they will find. Nevertheless, simply finding all of the intruders, or a decent sample, should inspire additional defensive activities, and give authorities a true “score of the game.”

Phase 2: Improve Network Visibility: The following five points describe actions to gain enhanced, enduring, network-centric visibility on Federal networks. While network-centric approaches are not a panacea, they represent one of the best balances of cost, effectiveness, and minimal disruption to business operations.

1. Accelerate the deployment of Einstein 3A, to instrument all Federal network gateways. Einstein is not the platform to solve the Federal government’s network visibility problem, but given the current situation, some visibility is better than no visibility. If the inline, “intrusion prevention system” (IPS) nature of Einstein 3A is being used as an excuse for slowly deploying the platform, then the IPS capability should be disabled and the “intrusion detection system” (IDS) mode should be the default. Waiting until the end of 2016 is not acceptable. Equivalent technology should have been deployed in the late 1990s.

2. Ensure DHS and US-CERT have the authority to provide centralized monitoring of all deployed Einstein sensors. I imagine bureaucratic turf battles may have slowed Einstein deployment. “Who can see the data” is probably foremost among agency worries. DHS and US-CERT should be the home for centralized analysis of Einstein data. Monitored agencies should also be given access to the data, and DHS, US-CERT, and the agencies should begin a dialogue on who should have ultimate responsibility for acting on Einstein discoveries.

3. Ensure DHS and US-CERT are appropriately staffed to operate and utilize Einstein. Collected security data is of marginal value if no one is able to analyze, escalate, and respond to the data. DHS and US-CERT should set expectations for the amount of time that should elapse from the time of collection to the time of analysis, and staff the IR team to meet those requirements.

4. Conduct hunting operations to identify and remove threat actors already present in Federal networks. Now we arrive at the heart of the counter-intrusion operation. The purpose of improving network visibility with Einstein (for lack of an alternative at the moment) is to find intruders and eliminate them. This operation should be conducted in a coordinated manner, not in a whack-a-mole fashion that facilitates adversary persistence. This should be coordinated with the “hunt” mission in Phase 1.

5. Collect metrics on the nature of the counter-intrusion campaign and devise follow-on actions based on lessons learned. This operation will teach Federal network owners lessons about adversary campaigns and the unfortunate realities of the state of their enterprise. They must learn how to improve the speed, accuracy, and effectiveness of their defensive campaign, and how to prioritize countermeasures that have the greatest impact on the opponent. I expect they would begin considering additional detection and response technologies and processes, such as enterprise log management, host-based sweeping, modern inspection platforms with virtual execution and detonation chambers, and related approaches.

Phase 3: Continuous Diagnostic Monitoring, and Related Ongoing Efforts: You may be surprised to see that I am not calling for an end to CDM. Rather, CDM should not be the focus of Federal security measures. It is important to improve Federal security through CDM practices, such that it becomes more difficult for adversaries to gain access to government computers. I am also a fan of the Trusted Internet Connection program, whereby the government is reducing the number of gateways to the Internet.

Note: I recommend anyone interested in details on this matter see my latest book, The Practice of Network Security Monitoring, especially chapter 9. In that chapter I describe how to run a network security monitoring operation, based on my experiences since the late 1990s.

Continuous Diagnostic Monitoring Does Not Detect Hackers

There is a dangerous misconception coloring the digital security debate in the Federal government. During the last week, in the wake of the breach at the Office of Personnel Management (OPM), I have been discussing countermeasures with many parties. Concerned officials, staffers, and media have asked me about the Einstein and Continuous Diagnostic Monitoring (CDM) programs. It has become abundantly clear to me that there is a fundamental misunderstanding about the nature of CDM. This post seeks to remedy that problem.

The story Federal cyber protection knocked as outdated, behind schedule by Cory Bennett unfortunately encapsulates the misunderstanding about Einstein and CDM:

The main system used by the federal government to protect sensitive data from hacks has been plagued by delays and criticism that it is already outdated — months before it is even fully implemented.

The Einstein system is intended to repel cyberattacks like the one revealed last week by the Office of Personnel Management (OPM)…

Critics say Einstein has been a multibillion-dollar boondoggle that is diverting attention away from the security overhaul that is needed…

To offset those shortcomings, officials in recent years started rolling out a Continuous Diagnostics and Mitigation (CDM) program, which searches for nefarious actors once they’re already in the networks. It’s meant to complement and eventually integrate with Einstein. (emphasis added)

The section I bolded and underlined is 100% false. CDM does not “search” for “nefarious actors” “in the networks.” CDM is a vulnerability management program, consisting of six phases:

  1. Install/update “sensors.” (More on this shortly)
  2. Automated search for flaws.
  3. Collect results from departments and agencies.
  4. Triage and analyze results.
  5. Fix worst flaws.
  6. Report progress.

CDM searches for flaws (i.e., vulnerabilities), and Federal IT workers are then supposed to fix them. The “sensors” mentioned in step 1 are vulnerability management and discovery platforms. They are not searching for intruders. You could be forgiven for misunderstanding what “sensor” means. Consider the following from the DHS CDM page:

The CDM program enables government entities to expand their continuous diagnostic capabilities by increasing their network sensor capacity, automating sensor collections, and prioritizing risk alerts.

Again, “sensor” here does not mean “sensing” to find intruders. The next paragraph says:

CDM offers commercial off-the-shelf (COTS) tools, with robust terms for technical modernization as threats change. First, agency-installed sensors perform an automated search for known cyber flaws. Results feed into a local dashboard that produces customized reports, alerting network managers to their worst and most critical cyber risks based on standardized and weighted risk scores. Prioritized alerts enable agencies to efficiently allocate resources based on the severity of the risk. Progress reports track results, which can be used to compare security posture among department/agency networks. Summary information can feed into an enterprise-level dashboard to inform and provide situational awareness into cybersecurity risk posture across the federal government.

The “situational awareness” here means configuration and patch status, not intrusion status.

This six-phase breakdown comes from US-CERT’s Continuous Diagnostic Monitoring program overview (pdf) and also appears on the DHS CDM page. The US-CERT program Web page lists the core tools used for CDM as the following:

  • Intro to Hardware Asset Management (HWAM)
  • Intro to Software Asset Management (SWAM)
  • Intro to Vulnerability Management (VUL)
  • Intro to Configuration Settings Management (CSM)

As you can see, CDM is about managing infrastructure, not detecting and responding to intruders. Don’t be fooled by the “monitoring” in the term CDM; “monitoring” here means looking for flaws.

In contrast, Einstein is an intrusion detection and prevention platform. It is a network-based system that uses threat signatures to identify indications of compromise observable in network traffic. Einstein 1 and 2 were more like traditional IDS technologies, while Einstein 3 and 3 Accelerated (3A) are more like IPS technologies.

Critics of my characterization might say “CDM is more than faster patching.” According to the GSA page on CDM, CDM as I described earlier is only phase 1:
Endpoint Integrity
  • HWAM – Hardware Asset Management
  • SWAM – Software Asset Management
  • CSM – Configuration Settings Management
  • VUL – Vulnerability Management

Phase 2 will include the following:
Least Privilege and Infrastructure Integrity
  • TRUST – Access Control Management (Trust in People Granted Access)
  • BEHAVE – Security-Related Behavior Management
  • CRED – Credentials and Authentication Management
  • PRIV – Privileges

Phase 3 will include the following:
Boundary Protection and Event Management for Managing the Security Lifecycle
  • Plan for Events
  • Respond to Events
  • Generic Audit/Monitoring
  • Document Requirements, Policy, etc.
  • Quality Management
  • Risk Management
  • Boundary Protection – Network, Physical, Virtual

What do you not see listed in any of these phases? Aside from “respond to events,” which does not appear to mean responding to intrusions, I still see no strong focus on detecting and responding to intruders. CDM beyond phase 1 still deals only with “cyber hygiene.” Unfortunately, even the President does not have the proper strategic focus. As reported by The Hill:
President Obama acknowledged that one of the United States’s problems is that it has a “very old system.”

“What we are doing is going agency by agency and figuring out what can we fix with better practices and better computer hygiene by personnel, and where do we need new systems and new infrastructure in order to protect information,”

Don’t misunderstand my criticism of CDM as praise for Einstein. At the very least, Einstein, or a technology like it, should have been deployed across the Federal government while I was still in uniform, 15 years ago. We had equivalent technology in the Air Force 20 years ago. (See the foreword for my latest book online for history.)

Furthermore, I’m not saying that CDM is a bad approach. All of the CDM phases are needed. I understand that intruders are going to have an easy time getting back into a poorly secured network.

My goal with this post is to show that CDM is either being sold as, or misunderstood as, a way to detect intruders. CDM is not an intrusion detection program; CDM is a vulnerability management program, a method to Find and Fix Flaws Faster. CDM should have been called “F^4, F4, or 4F” to capture this strategic approach.

The focus on CDM has meant intruders already present in Federal networks are left to steal and fortify their positions, while scarce IT resources are devoted to patching. The Feds are identifying and locking doors and windows while intruders are inside the house.

It’s time for a new (yet ideologically very old) strategy: find the intruders in the network, remove them, and then conduct counter-intrusion campaigns to stop them from accomplishing their mission when they inevitably return. CDM is the real “multibillion-dollar boondoggle that is diverting attention away from the security overhaul that is needed.” The OPM breach is only the latest consequence of the misguided CDM-centric strategy.

Selling Your Mobile Device? Don’t Sell Your Data Along With It

It’s been a couple of years and your phone’s showing signs of age – scuffs on its once clear exterior, battery life that wanes in mere minutes, and a substantial wait time when powering on and off.

So, you start to ask yourself, “Is it time for an upgrade?”

Getting a new mobile device can be an exciting time for multiple reasons. Not only do you get a new toy to play with and configure, but also if you’ve kept your old phone in good condition, hopefully you’ll be able to sell it for a bit of a price break.

However, just as you wouldn’t move from one house to the next without first securing all your belongings, you don’t want to sell your mobile device without first removing all your data. In the wrong hands, it’s possible your data could become exposed.

Android’s factory-reset option has traditionally been a good go-to for those looking to rid their phones of leftover data and prime them for sale. Unfortunately, as recent research shows, a factory reset just isn’t enough to wipe everything off your Android phone.

This issue stems from a limitation of the phone’s flash memory, which restricts how often data can be overwritten. To prolong the life of your device’s flash storage, the factory reset essentially skips over certain pieces of information, categorizing them as logically deleted, meaning they are not actually overwritten or removed from your phone.

As security researchers found, this leaves sensitive data such as passwords, login tokens and texts discoverable by someone with the right knowhow and the wrong intentions. Login tokens are especially concerning, as they are a prime target for thieves looking to compromise all of the apps on your mobile device – and the data they hold.

Luckily, there is a simple and quick fix for this issue – encryption!

If you plan to get rid of your Android phone or resell it, head to your security settings and opt to encrypt your device prior to performing a factory reset. This will ensure that any un-erased data is scrambled, and rendered useless to outside eyes.
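
If you want to double-check the encryption step, an Android app can also query the device’s encryption state programmatically. Here is a minimal sketch (the activity name and log tag are hypothetical) using the standard DevicePolicyManager API:

import android.app.Activity;
import android.app.admin.DevicePolicyManager;
import android.content.Context;
import android.os.Bundle;
import android.util.Log;

public class EncryptionCheckActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        DevicePolicyManager dpm =
                (DevicePolicyManager) getSystemService(Context.DEVICE_POLICY_SERVICE);
        // Reports whether full-device encryption is currently active.
        if (dpm.getStorageEncryptionStatus()
                == DevicePolicyManager.ENCRYPTION_STATUS_ACTIVE) {
            Log.i("EncryptionCheck", "Storage is encrypted; safe to factory-reset.");
        } else {
            Log.w("EncryptionCheck", "Encrypt the device (Settings > Security) first.");
        }
    }
}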

Another great way to protect sensitive data and ensure your personal information stays safe? Installing comprehensive security software on your mobile device. McAfee® Mobile Security is free for both Android and iOS-based phones, and offers a variety of protections against the latest mobile threats.

And as always, to keep up with the latest security threats, make sure to follow @IntelSec_Home on Twitter and like us on Facebook.

Keystore redesign in Android M

Android M has been announced, and the first preview builds and documentation are now available. The most visible security-related change is, of course, runtime permissions, which impacts almost all applications, and may require significant app redesign in some cases. Permissions are getting more than enough coverage, so this post will look into a less obvious, but still quite significant security change in Android M — the redesigned keystore (credential storage) and related APIs. (The Android keystore has been somewhat of a recurring topic on this blog, so you might want to check older posts for some perspective.)

New keystore APIs

Android M officially introduces several new keystore features into the framework API, but the underlying work to support them has been going on for quite a while in the AOSP master branch. The most visible new feature is support for generating and using symmetric keys that are protected by the system keystore. Storing symmetric keys has been possible in previous versions too, but required using private (hidden) keystore APIs, and was thus not guaranteed to be portable across versions. Android M introduces a keystore-backed symmetric KeyGenerator, and adds support for the KeyStore.SecretKeyEntry JCA class, which allows storing and retrieving symmetric keys via the standard java.security.KeyStore JCA API. To support this, Android-specific key parameter classes and associated builders have been added to the Android SDK.

Here’s how generating and retrieving a 256-bit AES key looks when using the new M APIs:

import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;

// key generation: a 256-bit AES key bound to CBC/PKCS7, requiring
// user authentication no older than five minutes at the time of use
KeyGenParameterSpec keySpec = new KeyGenParameterSpec.Builder("key1",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setKeySize(256)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setRandomizedEncryptionRequired(true)
        .setUserAuthenticationRequired(true)
        .setUserAuthenticationValidityDurationSeconds(5 * 60)
        .build();
KeyGenerator kg = KeyGenerator.getInstance("AES", "AndroidKeyStore");
kg.init(keySpec);
SecretKey key = kg.generateKey();

// key retrieval via the standard JCA KeyStore API
KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
ks.load(null);

KeyStore.SecretKeyEntry entry = (KeyStore.SecretKeyEntry) ks.getEntry("key1", null);
key = entry.getSecretKey();

This is pretty standard JCA code, and is in fact very similar to how asymmetric keys (RSA and ECDSA) are handled in previous Android versions. What is new here is that there are many more parameters you can set when generating (or importing) a key. Along with basic properties such as key size and alias, you can now specify the supported key usages (encryption/decryption or signing/verification), block modes, padding, etc. Those properties are stored along with the key, and the system will disallow key usage that doesn’t match the key’s attributes. This allows insecure key usage patterns (ECB mode, or a constant IV for CBC mode, for example) to be explicitly forbidden, as well as constraining certain keys to a particular purpose, which is important in a multi-key cryptosystem or protocol. Key validity periods (separate for encryption/signing and decryption/verification) can also be specified.
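
Here is a minimal sketch of that enforcement (assuming the “key1” entry generated above and a recently authenticated user, since that key requires authentication):

import java.security.KeyStore;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;

KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
ks.load(null);
SecretKey key = ((KeyStore.SecretKeyEntry) ks.getEntry("key1", null)).getSecretKey();

// CBC/PKCS7 matches the key's authorization set, so init() succeeds
// (the keystore generates the IV, since randomized encryption is required).
Cipher ok = Cipher.getInstance("AES/CBC/PKCS7Padding");
ok.init(Cipher.ENCRYPT_MODE, key);

// ECB was not authorized at generation time, so this init() is expected
// to be rejected (e.g., with an InvalidKeyException).
Cipher bad = Cipher.getInstance("AES/ECB/PKCS7Padding");
bad.init(Cipher.ENCRYPT_MODE, key);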

Another major new feature is requiring user authentication before allowing a particular key to be used, and specifying the authentication validity period. Thus, a key that protects sensitive data can require user authentication on each use (e.g., decryption), while a different key may require only a single authentication per session (say, every 10 minutes).
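
A minimal sketch of both policies follows (the key aliases are hypothetical; in the M preview, omitting the validity duration appears to require authentication for every use of the key):

import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;

// Usable for 10 minutes after each successful lockscreen authentication.
// (Block modes, paddings, etc. omitted for brevity.)
KeyGenParameterSpec perSession = new KeyGenParameterSpec.Builder("session_key",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setUserAuthenticationRequired(true)
        .setUserAuthenticationValidityDurationSeconds(10 * 60)
        .build();

// No validity duration: the user must authenticate for each use of the key.
KeyGenParameterSpec perUse = new KeyGenParameterSpec.Builder("per_use_key",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setUserAuthenticationRequired(true)
        .build();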

The newly introduced key properties are available for both symmetric and asymmetric keys. An interesting detail is that trying to use a key is now apparently the official way (cf. the Confirm Credential sample and related video) to check whether a user has authenticated within a given time period. This is quite a roundabout way to verify user presence, especially if your app doesn’t make use of cryptography in the first place. The newly introduced FingerprintManager authentication APIs also make use of cryptographic objects, so this may be part of a larger picture, which we have yet to see.
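
The Confirm Credential pattern boils down to attempting a cryptographic operation and falling back to the lockscreen if the authentication window has expired. A minimal sketch (the key alias and request code are hypothetical, and the method is shown outside a class for brevity):

import java.security.KeyStore;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;

import android.app.Activity;
import android.app.KeyguardManager;
import android.content.Context;
import android.content.Intent;
import android.security.keystore.UserNotAuthenticatedException;

static final int REQUEST_CODE_CONFIRM = 1; // hypothetical request code

void encryptWithAuthCheck(Activity activity) throws Exception {
    KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
    ks.load(null);
    SecretKey key = ((KeyStore.SecretKeyEntry) ks.getEntry("session_key", null))
            .getSecretKey();
    try {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key); // fails if the auth window expired
    } catch (UserNotAuthenticatedException e) {
        // Ask the user to re-authenticate with the lockscreen credential,
        // then retry the operation from onActivityResult().
        KeyguardManager km =
                (KeyguardManager) activity.getSystemService(Context.KEYGUARD_SERVICE);
        // May return null if no lockscreen is set up.
        Intent intent = km.createConfirmDeviceCredentialIntent(null, null);
        activity.startActivityForResult(intent, REQUEST_CODE_CONFIRM);
    }
}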

Keystore and keymaster implementation

On a high level, key generation and storage work the same as in previous versions: the system keystore daemon provides an AIDL interface, which framework classes and system services use to generate and manage keys. The keystore AIDL has gained some new, more generic methods, as well as support for a ‘fallback’ implementation, but is mostly unchanged.

The keymaster HAL and its reference implementation have, however, been completely redesigned. The ‘old’ keymaster HAL is retained for backward compatibility as version 0.3, while the Android M version has been bumped to 1.0, and offers a completely different interface. The new interface allows for setting fine-grained key properties (also called ‘key characteristics’ internally), and supports breaking up cryptographic operations that manipulate data of unknown or large size into multiple steps using the familiar begin/update/finish pattern. Key properties are stored as a series of tags along with the key, and form an authorization set when combined. AOSP includes a pure software reference keymaster implementation which implements cryptographic operations using OpenSSL and encrypts key blobs using a provided master key. Let’s take a more detailed look at how the software implementation handles key blobs.
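
At the framework level, the begin/update/finish model maps naturally onto the standard JCA streaming API. A minimal sketch (a hypothetical helper, assuming a keystore-backed AES key authorized for CBC/PKCS7) of processing data of unknown size in chunks:

import java.io.InputStream;
import java.io.OutputStream;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;

byte[] encryptStream(SecretKey key, InputStream in, OutputStream out)
        throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding");
    cipher.init(Cipher.ENCRYPT_MODE, key);           // keymaster begin()
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        byte[] chunk = cipher.update(buf, 0, n);     // keymaster update()
        if (chunk != null) out.write(chunk);
    }
    out.write(cipher.doFinal());                     // keymaster finish()
    return cipher.getIV(); // keystore-generated IV, needed for decryption
}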

Key blobs

Keymaster v1.0 key blobs are wrapped inside keystore blobs, which are in turn stored as files in /data/misc/keystore/user_X, as before (where X is the Android user ID, starting with 0 for the primary user). Keymaster blobs are variable size and employ a tag-length-value (TLV) format internally. They include a version byte, a nonce, encrypted key material, a tag for authenticating the encrypted key, as well as two authorization sets (enforced and unenforced), which contain the key’s properties. Key material is encrypted using AES in OCB mode, which automatically authenticates the cipher text and produces an authentication tag upon completion. Each key blob is encrypted with a dedicated key encryption key (KEK), which is derived by hashing a binary tag representing the key’s root of trust (hardware or software), concatenated with the key’s authorization sets. Finally, the resulting hash value is encrypted with the master key to derive the blob’s KEK. The current software implementation deliberately uses a 128-bit AES zero key, and employs a constant, all-zero nonce for all keys. It is expected that the final implementation will either use a hardware-backed master key, or be completely TEE-based, and thus not directly accessible from Android.
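
In rough pseudocode form, the derivation described above looks like the sketch below (the hash algorithm, the cipher mode, and the serialization of the authorization sets are all assumptions, not the exact AOSP code):

import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// KEK = E(masterKey, H(rootOfTrustTag || enforcedSet || unenforcedSet))
byte[] deriveKek(byte[] masterKey, byte[] rootOfTrustTag,
                 byte[] enforcedSet, byte[] unenforcedSet) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-256"); // assumed hash
    md.update(rootOfTrustTag);
    md.update(enforcedSet);
    md.update(unenforcedSet);
    byte[] digest = md.digest();

    // Encrypt the hash with the master key to obtain the blob's KEK.
    Cipher c = Cipher.getInstance("AES/ECB/NoPadding"); // assumed mode
    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(masterKey, "AES"));
    return c.doFinal(digest);
}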

The current format is quite easy to decrypt, and while this will likely change in the final M version, in the meantime you can decrypt keymaster v1.0 blobs using the keystore-decryptor tool. The program also supports key blobs generated by previous Android versions, and will try to parse (but not decrypt) encrypted RSA blobs on Qualcomm devices. Note that the tool may not work on devices that use custom key blob formats or otherwise customize the keystore implementation. keystore-decryptor takes as input the keystore’s .masterkey file, the key blob to decrypt, and a PIN/password, which is the same as the device’s lockscreen credential. Here’s a sample invocation:

$ java -jar ksdecryptor-all.jar .masterkey 10092_USRPKEY_ec_key4 1234
master key: d6c70396df7bfdd8b47913485dc0a885

EC key:
s: 22c18a15163ad13f3bbeace52c361150 (254)
params: 1.2.840.10045.3.1.7
key size: 256
key algorithm: EC
authorizations:

Hidden tags:
tag=900002C0 TAG_KM_BYTES bytes: 5357 (2)

Enforced tags:

Unenforced tags:
tag=20000001 TAG_KM_ENUM_REP 00000003
tag=60000191 TAG_KM_DATE 000002DDFEB9EAF0: Sun Nov 24 11:10:25 JST 2069
tag=10000002 TAG_KM_ENUM 00000003
tag=30000003 TAG_KM_INT 00000100
tag=700001F4 TAG_KM_BOOL 1
tag=20000005 TAG_KM_ENUM_REP 00000000
tag=20000006 TAG_KM_ENUM_REP 00000001
tag=700001F7 TAG_KM_BOOL 1
tag=600002BD TAG_KM_DATE FFFFFFFFBD84BAF0: Fri Dec 19 11:10:25 JST 1969
tag=100002BE TAG_KM_ENUM 00000000

Per-key authorization

As discussed in the ‘New keystore APIs’ section, the setUserAuthenticationRequired() method of the key parameter builder allows you to require that the user authenticates before they are authorized to use a certain key (not unlike iOS’s Keychain). While this is not a new concept (system-wide credentials in Android 4.x require access to be granted per-key), the interesting part is how it is implemented in Android M. The system keystore service now holds an authentication token table, and a key operation is only authorized if the table contains a matching token. Tokens include an HMAC and thus can provide a strong guarantee that a user has actually authenticated at a given time, using a particular authentication method.

Authentication tokens are now part of Android’s HAL, and currently support two authentication methods: password and fingerprint. Here’s how tokens are defined:

typedef enum {
    HW_AUTH_NONE = 0,
    HW_AUTH_PASSWORD = 1 << 0,
    HW_AUTH_FINGERPRINT = 1 << 1,
    HW_AUTH_ANY = UINT32_MAX,
} hw_authenticator_type_t;

typedef struct __attribute__((__packed__)) {
    uint8_t version;              // Current version is 0
    uint64_t challenge;
    uint64_t user_id;             // secure user ID, not Android user ID
    uint64_t authenticator_id;    // secure authenticator ID
    uint32_t authenticator_type;  // hw_authenticator_type_t, in network order
    uint64_t timestamp;           // in network order
    uint8_t hmac[32];
} hw_auth_token_t;

Tokens are generated by a newly introduced system component, called the gatekeeper. The gatekeeper issues a token after it verifies the user-entered password against a previously enrolled one. Unfortunately, the current AOSP master branch does not include the actual code that creates these tokens, but there is a base class which shows how a typical gatekeeper might be implemented: it computes an HMAC over all fields of the hw_auth_token_t structure up to hmac using a dedicated key, and stores it in the hmac field. The serialized hw_auth_token_t structure then serves as an authentication token, and can be passed to other components that need to verify whether the user is authenticated. Management of the token generation key is implementation-dependent, but it is expected to be securely stored, and inaccessible to other system applications. In the final gatekeeper implementation the HMAC key will likely be backed by hardware, and the gatekeeper module could execute entirely within the TEE, and thus be inaccessible to Android. The low-level gatekeeper interface is part of Android M’s HAL and is defined in hardware/gatekeeper.h.
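
For illustration, a gatekeeper-style token HMAC might be computed as in the sketch below (HMAC-SHA256 is an assumption suggested by the 32-byte hmac field, as is the byte order of the fields not marked as network order):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// HMAC over the packed hw_auth_token_t fields up to (not including) hmac.
byte[] computeTokenHmac(byte[] hmacKey, long challenge, long userId,
                        long authenticatorId, int authenticatorType,
                        long timestamp) throws Exception {
    ByteBuffer buf = ByteBuffer.allocate(1 + 8 + 8 + 8 + 4 + 8);
    buf.order(ByteOrder.LITTLE_ENDIAN);  // assumed native order
    buf.put((byte) 0);                   // version 0
    buf.putLong(challenge);
    buf.putLong(userId);                 // secure user ID
    buf.putLong(authenticatorId);        // secure authenticator ID
    buf.order(ByteOrder.BIG_ENDIAN);     // network order, per the HAL comments
    buf.putInt(authenticatorType);
    buf.putLong(timestamp);

    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(hmacKey, "HmacSHA256"));
    return mac.doFinal(buf.array());
}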

As can be expected, the current Android M builds do indeed include a gatekeeper binary, which is declared as follows in init.rc:

...
service gatekeeperd /system/bin/gatekeeperd /data/misc/gatekeeper
    class main
    user system
...

While the framework code that makes use of the gatekeeper daemon is not yet available, it is expected that the Android M keyguard (lockscreen) implementation interacts with the gatekeeper in order to obtain a token upon user authentication, and passes it to the system’s keystore service via its addAuthToken() method. The fingerprint authentication module (possibly an alternative keyguard implementation) likely works the same way, but compares fingerprint scans against a previously enrolled fingerprint template instead of passwords.

Summary

Android M includes a redesigned keystore implementation which allows for fine-grained key usage control, and supports per-key authorization. The new keystore supports both symmetric and asymmetric keys, which are stored on disk as key blobs. Key blobs include encrypted key material, as well as a set of key tags, forming an authorization set. Key material is encrypted with a per-blob KEK, derived from the key’s properties and a common master key. The final keystore implementation is expected to use a hardware-backed master key, and run entirely within the confines of the TEE.

Android M also includes a new system component, called the gatekeeper, which can issue signed tokens to attest that a particular user has authenticated at a particular time. The gatekeeper has been integrated with the current PIN, pattern or password-based lockscreen, and is expected to integrate with fingerprint-based authentication in the final Android M version on supported devices.