The Tragedy of the Bloomberg Code Issue

Last week I tweeted about the Bloomberg “code” issue, saying I didn’t know how to think about it. The issue is a 28,000+ word document, long enough to qualify as a book, that has been covered by news outlets like the Huffington Post.

I approached the document with an open mind. When I opened my mailbox last week, I didn’t expect to find a 112-page magazine devoted to explaining the importance of software to non-technical people. It was a welcome surprise.

This morning I decided to try to read some of the issue. (It’s been a busy week.) I opened the table of contents. It took me a moment, but I realized none of the article titles mentioned security.

Next I visited the online edition, which contains the entire print version and adds additional content. I searched the text for the word “security.” These are the results:

Security research specialists love to party.

“I have been asked if I was physical security (despite security wearing very distinctive uniforms),” wrote Erica Joy Baker on Medium.com, who has worked, among other places, at Google.

Can we not rathole on Mailinator before we talk overall security?

We didn’t talk about password length, the number of letters and symbols necessary for passwords to be secure, or whether our password strategy on this site will fit in with the overall security profile of the company, which is the responsibility of a different division. 

Ditto many of the security concerns that arise when building websites, the typical abuses people perpetrate.

“First, I needed to pass everything through the security team, which was five months of review,” TMitTB says, “and then it took me weeks to get a working development environment, so I had my developers sneaking out to Starbucks to check in their code. …”

In Fortran, and I ask to see your security clearance.

If you’re counting, that’s eight instances of “security” in seven sentences. There’s no mention of “software security.” There’s a brief discussion of “e-mail validation,” but it appears only to show how broken software development meetings can be.

Searching for “hack” yields two references to “Hacker News” and this sentence talking about the perils of the PHP programming language:

Everything was always broken, and people were always hacking into my sites.

There is one result for “breach,” but it has nothing to do with security incidents. The only time the word “incident” appears is in a sentence talking about programming conference attendees behaving badly.

In brief, a 112-page magazine devoted to the importance of software has absolutely nothing useful to say about software security. Arguably, it says absolutely nothing about software security at all.

When someone communicates, what he or she doesn’t say can be as important as what he or she does say.

In the case of this magazine, it’s clear that software security was not on the mind of the professional programmer who wrote the issue. It was also not a concern of the editor or anyone else on the team that contributed to it.

From what I have seen, that neglect is not unique to Bloomberg.

That is the tragedy of the Bloomberg code issue, and it remains a contributing factor to the decades of breaches we have been suffering.

Air Force Enlisted Ratings Remain Dysfunctional

I just read Firewall 5s are history: Quotas for top ratings announced in Air Force Times. It describes an effort to eliminate the so-called “firewall 5” policy with a new “forced distribution” approach:

The Air Force’s old enlisted promotion system was heavily criticized by airmen for out-of-control grade inflation that came with its five-point numerical rating system. There were no limits on how many airmen could get the maximum: five out of five points [aka “firewall 5”]. As a result nearly everyone got a 5 rating.

As more and more raters gave their airmen 5s on their EPR [Enlisted Performance Report], the firewall 5 became a common occurrence received by some 90 percent of airmen. And this meant the old EPR was effectively useless at trying to differentiate between levels of performance…

Under the new system, [Brig. Gen. Brian Kelly, director of military force management policy] said in a June 12 interview at the Pentagon, the numerical ratings are gone — and firewall 5s will be impossible…

The quotas — or as the Air Force calls them, “forced distribution” — will be one of the final elements to be put in place in the service’s massive overhaul of its enlisted promotion process, which has been in the works for three years…

Only the top 5 percent, at most, of senior airmen, staff sergeants and technical sergeants who are up for promotion to the next rank will be deemed “promote now” and get the full 250 EPR points…

The quotas for the next tier of airmen — who will be deemed “must promote” and will get 220 out of 250 EPR points — will differ based on their rank. Kelly said that up to 15 percent of senior airmen who are eligible for promotion to staff sergeant can receive a “must promote” rating, and up to 10 percent of staff sergeants and tech sergeants up for promotion to technical and master sergeant can get that rating, and the accompanying 220 points.

The next three ratings — “promote,” “not ready now” and “do not promote” — will each earn airmen 200, 150 and 50 points, respectively. But there will be no limit on how many airmen can get those ratings. (emphasis added)

I am not an expert on the enlisted performance rating system. In some ways, I think the EPR is superior to the corresponding system for officers, because enlisted personnel take tests whose scores influence their promotion potential.

However, reading this story reminded me of my 2012 post How to Kill Teams Through “Stack Ranking”, which cited a Vanity Fair article about Microsoft’s old promotion system:

[Author Kurt] Eichenwald’s conversations reveal that a management system known as “stack ranking” — a program that forces every unit to declare a certain percentage of employees as top performers, good performers, average, and poor — effectively crippled Microsoft’s ability to innovate.

“Every current and former Microsoft employee I interviewed — every one — cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees,” Eichenwald writes.

This sounds uncomfortably like the new Air Force enlisted “forced distribution” system.

I was also reminded of another of my 2012 posts, Bejtlich’s Thoughts on “Why Our Best Officers Are Leaving”, which stressed the finding that

[V]eterans were shocked to look back at how “archaic and arbitrary” talent management was in the armed forces. Unlike industrial-era firms, and unlike the military, successful companies in the knowledge economy understand that nearly all value is embedded in their human capital. (emphasis added)

I am sure the Air Force is doing what it thinks is right by changing the EPR system. However, it’s equivalent to making changes in a centrally planned economy, without abandoning central planning.

It’s time the Air Force, and the rest of the military, discard their centrally-planned, promote-the-paper (instead of the person), involuntary assignment process.

In its place I recommend one that openly and competitively advertises and offers positions; gives pay, hiring, and firing authority to the local manager; and adopts similar aspects of sound private sector personnel management.

Today’s knowledge economy demands that military personnel be treated as unique individuals, not industrial age interchangeable parts. Our military talent is one of the few competitive advantages we possess over peer rivals. We must not squander it with dysfunctional promotion systems.

How to Avoid Bad Apps

If you think there are a million apps out there, that’s not much of an exaggeration. There are certainly more than you can imagine, which makes it easy to believe that many of them come with security problems.

In fact, 18 of the top 25 most popular apps recently failed a security test from McAfee Labs.

App creators put convenience and allure ahead of security. This is why so many apps lack secure connections, creating welcome mats for hackers, who get into your smartphone and grab your passwords, usernames, and other sensitive information.

Joe Hacker knows all about this pervasive weakness in the app world. You can count on hackers using toolkits in their quest to break into your mobile device. One common toolkit-enabled technique is the man-in-the-middle attack.

The “man” gets your passwords, credit card number, Facebook login information, etc. Once the hacker gets all this information, he could do just about anything, including obtaining a credit line in your name and maxing it out, or altering your Facebook information.

You probably didn’t know that smartphone hacks are becoming increasingly widespread.


So what can you do?

  • Stay current – Know that mobile malware is growing and is transmitted via malicious apps.
  • Do your homework – Research apps, read reviews, and check app ratings before you download.
  • Check your sources – Only download apps from well-known, reputable app stores.
  • Watch the permissions – Check what info each app is accessing on your mobile devices and make sure you are comfortable with that.
  • Protect your phone – Install comprehensive security on your mobile devices to keep them protected from harmful apps.

Robert Siciliano is an Online Safety Expert to Intel Security. He is the author of 99 Things You Wish You Knew Before Your Mobile was Hacked!

Redefining Breach Recovery

For too long, the definition of “breach recovery” has focused on returning information systems to a trustworthy state. The purpose of an incident response operation was to scope the extent of a compromise, remove the intruder if still present, and return the business information systems to pre-breach status. This is completely acceptable from the point of view of the computing architecture.

During the last ten years we have witnessed an evolution in thinking about the likelihood of breaches. When I published my first book in 2004, critics complained that my “assumption of breach” paradigm was defeatist and unrealistic. “Of course you could keep intruders out of the network, if you combined the right controls and technology,” they claimed. A decade of massive breaches has demonstrated that preventing all intrusions is impossible, given the right combination of adversary skill and persistence, and lack of proper defensive strategy and operations.

We need to now move beyond the arena of breach recovery as a technical and computing problem. Every organization needs to think about how to recover the interests of its constituents, should the organization lose their data to an adversary. Data custodians need to change their business practices such that breaches are survivable from the perspective of the constituent. (By constituent I mean customers, employees, partners, vendors — anyone dependent upon the practices of the data custodian.)

Compare the following scenarios.

If an intruder compromises your credit card, recovery is fairly painless for a consumer. The financial penalty is $50 or less. The bank or credit card company handles replacing the card. Credit monitoring and related services are generally adequate for limiting damage. Your new credit card is as functional as the old one.

If an intruder compromises your Social Security number, recovery may not be possible. The financial penalties are unbounded. There is no way to replace a stolen SSN. Credit monitoring and related services can only alert citizens to derivative misuse, and the victim must do most of the work to recover — if possible. The citizen is at risk wherever other data custodians rely on SSNs for authentication purposes.

This SSN situation, and others, must change. All organizations who act as data custodians must evaluate the data in their control, and work to improve the breach recovery status for their constituents. For SSNs, this means eliminating their secrecy as a means of authentication. This will be a massive undertaking, but it is necessary.

It’s time to redefine what it means to recover from a breach, and put constituent benefit at the heart of the matter, where it belongs.

Decrypting Android M adopted storage

One of the new features Android M introduces is adoptable storage. This feature allows external storage devices such as SD cards or USB drives to be ‘adopted’ and used in the same manner as internal storage. What this means in practice is that both apps and their private data can be moved to the adopted storage device. In other words, this is another take on everyone’s (except for widget authors…) favorite 2010 feature — AppsOnSD. There are, of course, a few differences, the major one being that while AppsOnSD (just like Android 4.1 app encryption) creates per-app encrypted containers, adoptable storage encrypts the whole device. This short post will look at how adoptable storage encryption is implemented, and show how to decrypt and use adopted drives on any Linux machine.

Adopting a USB drive

In order to enable adoptable storage for devices connected via USB, you need to execute the following command in the Android shell (presumably this is not needed if your device has an internal SD card slot; however, no such devices run Android M at present):

$ adb shell sm set-force-adoptable true

Now, if you connect a USB drive to the device’s micro USB slot (you can also use a USB OTG cable), Android will give you the option to set it up as ‘internal’ storage, which requires reformatting and encryption. ‘Portable’ storage is formatted using VFAT, as before.

After the drive is formatted, it shows up under Device storage in the Storage screen of system Settings. You can now migrate media and application data to the newly added drive, but it appears that there is no option in the system UI that allows you to move applications (APKs).

Adopted devices are mounted via Linux’s device-mapper under /mnt/expand/ as can be seen below, and can be directly accessed only by system apps.

$ mount
rootfs / rootfs ro,seclabel,relatime 0 0
...
/dev/block/dm-1 /mnt/expand/a16653c3-... ext4 rw,dirsync,seclabel,nosuid,nodev,noatime 0 0
/dev/block/dm-2 /mnt/expand/0fd7f1a0-... ext4 rw,dirsync,seclabel,nosuid,nodev,noatime 0 0

You can safely eject an adopted drive by tapping on it in the Storage screen, and then choosing Eject from the overflow menu. Android will show a persistent notification prompting you to reinsert the device once it’s removed. Alternatively, you can also ‘forget’ the drive, which removes it from the system and should presumably delete the associated encryption key (which doesn’t seem to be the case in the current preview build).

Inspecting the drive

Once you’ve ejected the drive, you can connect it to any Linux box in order to inspect it. Somewhat surprisingly, the drive will be automatically mounted on most modern Linux distributions, which suggests that there is at least one readable partition. If you look at the partition table with fdisk or a similar tool, you may see something like this:

# fdisk /dev/sdb
Disk /dev/sdb: 7811 MB, 7811891200 bytes, 15257600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


#      Start        End   Size  Type     Name
 1      2048      34815    16M  unknown  android_meta
 2     34816   15257566   7.3G  unknown  android_expand

As you can see, there is a tiny android_meta partition, but the bulk of the device has been assigned to the android_expand partition. Unfortunately, the full source code of Android M is not available, so we cannot be sure exactly how this partition table is created, or what the contents of each partition are. However, we know that most of Android’s storage management functionality is implemented in the vold system daemon, so we check whether there is any mention of android_expand inside vold with the following command:

$ strings vold|grep -i expand
--change-name=0:android_expand
%s/expand_%s.key
/mnt/expand/%s

Here expand_%s.key looks suspiciously like a key filename template, and we already know that adopted drives are encrypted, so our next step is to look for any matching files in the device’s /data partition (you’ll need a custom recovery or root access for this). Unsurprisingly, there is a matching file in /data/misc/vold which looks like this:

# ls /data/misc/vold
bench
expand_8838e738a18746b6e435bb0d04c15ccd.key

# ls -l expand_8838e738a18746b6e435bb0d04c15ccd.key
-rw------- 1 root root 16 expand_8838e738a18746b6e435bb0d04c15ccd.key

# od -t x1 expand_8838e738a18746b6e435bb0d04c15ccd.key
0000000 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f
0000020

Decrypting the drive

That’s exactly 16 bytes, enough for a 128-bit key. As we know, Android’s FDE implementation uses a 128-bit AES key, so it’s a good bet that adoptable storage uses a similar (or identical) implementation. Looking at the start and end of our android_expand partition doesn’t reveal any readable info, nor is it similar to Android’s crypto footer, or LUKS’s header. Therefore, we need to guess the encryption mode and/or any related parameters. Looking once again at Android’s FDE implementation (which is based on the dm-crypt target of Linux’s device-mapper), we see that the encryption mode used is aes-cbc-essiv:sha256. After consulting dm-crypt’s mapping table reference, we see that the remaining parameters we need are the IV offset and the starting offset of encrypted data. Since the IV offset is usually zero, and most probably the entire android_expand partition (offset 0) is encrypted, the command we need to map the encrypted partition becomes the following:

# dmsetup create crypt1 --table "0 `blockdev --getsize /dev/sdb2` crypt \
aes-cbc-essiv:sha256 000102030405060708090a0b0c0d0e0f 0 /dev/sdb2 0"
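
The hex key string in the dmsetup table can be derived directly from the pulled key file rather than typed by hand. A minimal sketch (the printf line merely recreates the example key bytes from the od listing above; the filename expand_example.key is hypothetical, standing in for the real file pulled from /data/misc/vold):

```shell
# Recreate the 16 example key bytes seen in the od output above
# (on a real device you would pull the actual key file instead).
printf '\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f' \
    > expand_example.key

# od prints the bytes as hex pairs; tr strips spacing and newlines,
# yielding the single 32-character key string dm-crypt expects.
KEYHEX=$(od -An -tx1 expand_example.key | tr -d ' \n')
echo "$KEYHEX"   # prints 000102030405060708090a0b0c0d0e0f
```

The resulting $KEYHEX value can then be substituted into the dmsetup table string, avoiding transcription errors in the 32 hex digits.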

If it completes without error, we can now try to mount the mapped device, again guessing that the file system is most probably ext4 (or you can inspect the mapped device and find the superblock first, if you want to be extra diligent).

# mount -t ext4 /dev/mapper/crypt1 /mnt/1/
# cd /mnt/1
# find ./ -type d
./
./lost+found
./app
./user
./media
./local
./local/tmp
./misc
./misc/vold
./misc/vold/bench

This reveals a very familiar Android /data layout, and you should see any media and app data you’ve moved to the adopted device. If you copy any files to the mounted device, they should be visible when you mount the drive again in Android.

Storage manager commands

Back in Android, you can use the sm command (probably short for ‘storage manager’) we showed in the first section to list disks and volumes as shown below:

$ sm list-disks
disk:8,16
disk:8,0

$ sm list-volumes
emulated:8,2 unmounted null
private mounted null
private:8,18 mounted 0fd7f1a0-2d27-48f9-8702-a484cb894a92
emulated:8,18 mounted null
emulated unmounted null
private:8,2 mounted a16653c3-6f5e-455c-bb03-70c8a74b109e

If you have root access, you can also partition, format, mount, unmount and forget disks/volumes. The full list of supported commands is shown in the following listing.

$ sm
usage: sm list-disks
sm list-volumes [public|private|emulated|all]
sm has-adoptable
sm get-primary-storage-uuid
sm set-force-adoptable [true|false]

sm partition DISK [public|private|mixed] [ratio]
sm mount VOLUME
sm unmount VOLUME
sm format VOLUME

sm forget [UUID|all]

Most features are also available from the system UI, but sm allows you to customize the ratio of the android_meta and android_expand partitions, as well as to create ‘mixed’ volumes.
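
As a hedged illustration of that last point (this must run as root on the device, and the exact semantics of the ratio argument are my assumption based on the usage text, not documented behavior in the preview build), partitioning the second disk from the earlier list-disks output as a mixed volume might look like:

# sm partition disk:8,0 mixed 50

This would presumably format roughly half the disk as adopted (encrypted) storage and leave the rest as portable storage.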

Summary

Android M allows for adoptable storage, which is implemented similarly to internal storage FDE — using dm-crypt with a per-volume, static 128-bit AES key, stored in /data/misc/vold/. Once the key is extracted from the device, adopted storage can be mounted and read/written on any Linux machine. Adoptable storage encryption is done purely in software (at least in the current preview build), so its performance is likely comparable to encrypted internal storage on devices that don’t support hardware-accelerated FDE.