Android code signing

We covered a new security feature introduced in the last Jelly Bean maintenance release in our previous post and, before you know it, a new tag has already popped up in AOSP. Google I/O is just around the corner, and some interesting bits and pieces are trickling into the AOSP master branch, so it’s probably time for a new post. There are plenty of places where you can get your rumour fix regarding I/O 2013, and it looks like build JDQ39E is going to be somewhat boring, so we will explore something different instead: code signing. This particular aspect of Android has remained virtually unchanged since the first public release, and is so central to the platform that it is pretty much taken for granted. While neither Java code signing nor its Android implementation is particularly new, some of the finer details are not widely known, so we’ll try to shed some more light on them. The first post of the series concentrates on the signature formats used, while the next one will look into how code signing fits into Android’s security model.

Java code signing

As we all know, Android applications are coded (mostly) in Java, and Android application package files (APKs) are just weird-looking JARs, so it pays to understand how JAR signing works first. 
First off, a few words about code signing in general. Why would anyone want to sign code? For the usual reasons: integrity and authenticity. Basically, before executing any third-party program you want to make sure that it hasn’t been tampered with (integrity) and that it was actually created by the entity that it claims to come from (authenticity). Those features are usually implemented by some digital signature scheme, which guarantees that only the entity owning the signing key can produce a valid code signature. The signature verification process verifies both that the code has not been tampered with and that the signature was produced with the expected key. One problem that code signing doesn’t solve directly is whether the code signer (software publisher) can be trusted. The usual way trust is handled is by requiring the code signer to hold a digital certificate, which they attach to the signed code. Verifiers decide whether to trust the certificate either based on some trust model (e.g., PKI or web of trust), or on a case-by-case basis. Another problem that code signing does not solve (or even attempts to) is whether the signed code is safe to run. As we have seen, code that has been signed (or appears to be) by a trusted third party is not necessarily safe (e.g., Flame or pwdump7).

Java’s native code packaging format is the JAR file, which is essentially a ZIP file bundling together code (.class files or classes.dex in Android), some metadata about the package (.MF manifest files in the META-INF/ directory) and, optionally, resources the code uses. The main manifest file (MANIFEST.MF) has entries with the file name and digest value of each file in the archive. The start of the manifest file of a typical APK file is shown below (we’ll use APKs instead of actual JARs for all examples).

Manifest-Version: 1.0
Created-By: 1.0 (Android)

Name: res/drawable-xhdpi/ic_launcher.png
SHA1-Digest: K/0Rd/lt0qSlgDD/9DY7aCNlBvU=

Name: res/menu/main.xml
SHA1-Digest: kG8WDil9ur0f+F2AxgcSSKDhjn0=

Name: ...

Java code signing is implemented at the JAR file level by adding another manifest file, called a signature file (.SF) which contains the data to be signed, and a digital signature over it (called a ‘signature block file’, .RSA, .DSA or .EC). The signature file is very similar to the manifest, and contains the digest of the whole manifest file (SHA1-Digest-Manifest), as well as digests for each of the individual entries in MANIFEST.MF.

Signature-Version: 1.0
SHA1-Digest-Manifest-Main-Attributes: ZKXxNW/3Rg7JA1r0+RlbJIP6IMA=
Created-By: 1.6.0_45 (Sun Microsystems Inc.)
SHA1-Digest-Manifest: zb0XjEhVBxE0z2ZC+B4OW25WBxo=

Name: res/drawable-xhdpi/ic_launcher.png
SHA1-Digest: jTeE2Y5L3uBdQ2g40PB2n72L3dE=

Name: res/menu/main.xml
SHA1-Digest: kSQDLtTE07cLhTH/cY54UjbbNBo=

Name: ...
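The two-level digest scheme can be sketched in Python (a simplification that assumes SHA-1 digests and ignores the 72-byte line wrapping the JAR specification applies to long manifest lines; the helper names are ours):

```python
import base64
import hashlib

def b64_sha1(data):
    # JAR digests are SHA-1 hashes, Base64-encoded.
    return base64.b64encode(hashlib.sha1(data).digest()).decode()

def sf_manifest_digest(manifest_bytes):
    # SHA1-Digest-Manifest: digest over the entire MANIFEST.MF.
    return b64_sha1(manifest_bytes)

def sf_entry_digest(name, manifest_digest_value):
    # A per-entry digest in the .SF file covers that entry's chunk of
    # MANIFEST.MF: its lines plus the blank line terminating it (CRLF endings).
    chunk = 'Name: %s\r\nSHA1-Digest: %s\r\n\r\n' % (name, manifest_digest_value)
    return b64_sha1(chunk.encode('utf-8'))
```

Note that the per-entry digests in the .SF file are digests of manifest chunks, not of the files themselves; only MANIFEST.MF digests the actual archive contents.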

The digests in the signature file can easily be verified by using the following OpenSSL commands:

$ openssl sha1 -binary MANIFEST.MF | openssl base64
$ echo -en "Name: res/drawable-xhdpi/ic_launcher.png\r\nSHA1-Digest: K/0Rd/lt0qSlgDD/9DY7aCNlBvU=\r\n\r\n" | openssl sha1 -binary | openssl base64

The first one takes the SHA1 digest of the entire manifest file and encodes it to Base64 to produce the SHA1-Digest-Manifest value, and the second one simulates how the digest of a single manifest entry is calculated. The actual digital signature is in binary PKCS#7 (or, more generally, CMS) format and includes the signature value and signing certificate. Signature block files produced using the RSA algorithm are saved with the extension .RSA; those generated with DSA or EC keys use the .DSA or .EC extensions, respectively. Multiple signatures can be performed, resulting in multiple .SF and .RSA/.DSA/.EC files in the JAR file’s META-INF/ directory. The CMS format is rather involved, allowing not only for signing, but for encryption as well, both with different algorithms and parameters, and is extensible via custom signed or unsigned attributes. A thorough discussion is beyond the scope of this post, but as used for JAR signing it basically contains the digest algorithm, signing certificate and signature value. Optionally, the signed data can be included in the SignedData CMS structure (attached signature), but JAR signatures don’t include it (detached signature). Here’s what an RSA signature block file looks like when parsed into ASN.1 (certificate info trimmed):

$ openssl asn1parse -i -inform DER -in CERT.RSA
0:d=0 hl=4 l= 888 cons: SEQUENCE
4:d=1 hl=2 l= 9 prim: OBJECT :pkcs7-signedData
15:d=1 hl=4 l= 873 cons: cont [ 0 ]
19:d=2 hl=4 l= 869 cons: SEQUENCE
23:d=3 hl=2 l= 1 prim: INTEGER :01
26:d=3 hl=2 l= 11 cons: SET
28:d=4 hl=2 l= 9 cons: SEQUENCE
30:d=5 hl=2 l= 5 prim: OBJECT :sha1
37:d=5 hl=2 l= 0 prim: NULL
39:d=3 hl=2 l= 11 cons: SEQUENCE
41:d=4 hl=2 l= 9 prim: OBJECT :pkcs7-data
52:d=3 hl=4 l= 607 cons: cont [ 0 ]
56:d=4 hl=4 l= 603 cons: SEQUENCE
60:d=5 hl=4 l= 452 cons: SEQUENCE
64:d=6 hl=2 l= 3 cons: cont [ 0 ]
66:d=7 hl=2 l= 1 prim: INTEGER :02
69:d=6 hl=2 l= 1 prim: INTEGER :04
72:d=6 hl=2 l= 13 cons: SEQUENCE
74:d=7 hl=2 l= 9 prim: OBJECT :sha1WithRSAEncryption
85:d=7 hl=2 l= 0 prim: NULL
87:d=6 hl=2 l= 56 cons: SEQUENCE
89:d=7 hl=2 l= 11 cons: SET
91:d=8 hl=2 l= 9 cons: SEQUENCE
93:d=9 hl=2 l= 3 prim: OBJECT :countryName
98:d=9 hl=2 l= 2 prim: PRINTABLESTRING :JP
735:d=5 hl=2 l= 9 cons: SEQUENCE
737:d=6 hl=2 l= 5 prim: OBJECT :sha1
744:d=6 hl=2 l= 0 prim: NULL
746:d=5 hl=2 l= 13 cons: SEQUENCE
748:d=6 hl=2 l= 9 prim: OBJECT :rsaEncryption
759:d=6 hl=2 l= 0 prim: NULL
761:d=5 hl=3 l= 128 prim: OCTET STRING [HEX DUMP]:892744D30DCEDF74933007...

If we extract the contents of a JAR file, we can use the OpenSSL smime (CMS is the basis of S/MIME) command to verify its signature by specifying the signature file as the content (signed data). It will print the signed data and the verification result:

$ openssl smime -verify -in CERT.RSA -inform DER -content CERT.SF signing-cert.pem
Signature-Version: 1.0
SHA1-Digest-Manifest-Main-Attributes: ZKXxNW/3Rg7JA1r0+RlbJIP6IMA=
Created-By: 1.6.0_43 (Sun Microsystems Inc.)
SHA1-Digest-Manifest: zb0XjEhVBxE0z2ZC+B4OW25WBxo=

Name: res/drawable-xhdpi/ic_launcher.png
SHA1-Digest: jTeE2Y5L3uBdQ2g40PB2n72L3dE=

Verification successful

The official tools for JAR signing and verification are the jarsigner and keytool commands from the JDK. Since Java 5.0, jarsigner also supports timestamping the signature by a TSA (time stamping authority), which can be quite useful when you need to ascertain the time of signing (e.g., before or after the signing certificate expired), but this feature is not widely used. Using the jarsigner command, a JAR file is signed by specifying a keystore file, the alias of the key to use for signing (used as the base name for the signature block file) and, optionally, a signature algorithm. One thing to note is that since Java 7 the default algorithm has changed to SHA256withRSA, so you need to explicitly specify it if you want to use SHA1. Verification is performed in a similar fashion, but the keystore file is used to search for trusted certificates, if specified (again using an APK file instead of an actual JAR):

$ jarsigner -keystore debug.keystore -sigalg SHA1withRSA test.apk androiddebugkey
$ jarsigner -keystore debug.keystore -verify -verbose -certs test.apk

smk 965 Mon Apr 08 23:55:34 JST 2013 res/drawable-xxhdpi/ic_launcher.png

X.509, CN=Android Debug, O=Android, C=US (androiddebugkey)
[certificate is valid from 6/18/11 7:31 PM to 6/10/41 7:31 PM]

smk 458072 Tue Apr 09 01:16:18 JST 2013 classes.dex

X.509, CN=Android Debug, O=Android, C=US (androiddebugkey)
[certificate is valid from 6/18/11 7:31 PM to 6/10/41 7:31 PM]

903 Tue Apr 09 01:16:18 JST 2013 META-INF/MANIFEST.MF
956 Tue Apr 09 01:16:18 JST 2013 META-INF/CERT.SF
776 Tue Apr 09 01:16:18 JST 2013 META-INF/CERT.RSA

s = signature was verified
m = entry is listed in manifest
k = at least one certificate was found in keystore
i = at least one certificate was found in identity scope

jar verified.

The last command verifies the signature block and signing certificate, ensuring that the signature file has not been tampered with. It then verifies that each digest in the signature file (CERT.SF) matches its corresponding section in the manifest file (MANIFEST.MF). One thing to note is that the number of entries in the signature file does not necessarily have to match those in the manifest file. Files can be added to a signed JAR without invalidating its signature: as long as none of the original files have been changed, verification succeeds. Finally, jarsigner reads each manifest entry and checks that the file digest matches the actual file contents. Optionally, it checks whether the signing certificate is present in the specified key store (if any). As of Java 7 there is a new -strict option that will perform additional certificate validations. Validation errors are treated as warnings and reflected in the exit code of the jarsigner command. As you can see, it prints certificate details for each entry, even though they are the same for all entries. A slightly better way to view signer info when using Java 7 is to specify the -verbose:summary or -verbose:grouped options, or alternatively to use the keytool command:

$ keytool -list -printcert -jarfile test.apk
Signer #1:


Owner: CN=Android Debug, O=Android, C=US
Issuer: CN=Android Debug, O=Android, C=US
Serial number: 4dfc7e9a
Valid from: Sat Jun 18 19:31:54 JST 2011 until: Mon Jun 10 19:31:54 JST 2041
Certificate fingerprints:
MD5: E8:93:6E:43:99:61:C8:37:E1:30:36:14:CF:71:C2:32
SHA1: 08:53:74:41:50:26:07:E7:8F:A5:5F:56:4B:11:62:52:06:54:83:BE
Signature algorithm name: SHA1withRSA
Version: 3
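The per-entry digest check that jarsigner performs last can be made concrete with a hypothetical Python sketch: it reads each MANIFEST.MF entry and compares the recorded SHA1-Digest against a digest computed over the actual file contents (a simplification that ignores manifest line wrapping, other digest algorithms, and the signature/certificate checks performed earlier):

```python
import base64
import hashlib
import zipfile

def verify_manifest_digests(jar_file):
    """Check each MANIFEST.MF SHA1-Digest against the actual file contents.

    Returns a dict mapping entry names to True/False. This mirrors only
    jarsigner's final verification step; the signature block and signature
    file are assumed to have been verified already.
    """
    with zipfile.ZipFile(jar_file) as zf:
        manifest = zf.read('META-INF/MANIFEST.MF').decode('utf-8')
        digests = {}
        name = None
        for line in manifest.splitlines():
            if line.startswith('Name: '):
                name = line[len('Name: '):]
            elif line.startswith('SHA1-Digest: ') and name is not None:
                digests[name] = line[len('SHA1-Digest: '):]
                name = None
        results = {}
        for entry, expected in digests.items():
            actual = base64.b64encode(
                hashlib.sha1(zf.read(entry)).digest()).decode()
            results[entry] = (actual == expected)
        return results
```

Running this against a signed APK should report True for every entry; a single tampered file flips only that entry to False, which is why extra files can be added without invalidating existing signatures.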

Once you know the signature block file name (by listing the archive contents, for example), you can also use OpenSSL in combination with the unzip command to easily extract the signing certificate to a file:

$ unzip -q -c test.apk META-INF/CERT.RSA|openssl pkcs7 -inform DER -print_certs -out cert.pem
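The fingerprints keytool prints are simply hashes over the DER-encoded certificate, so after converting the extracted certificate back to DER (e.g., with openssl x509 -in cert.pem -outform DER -out cert.der) they can be reproduced with a few lines of Python (the helper name is ours):

```python
import hashlib

def cert_fingerprint(der_bytes, algorithm='sha1'):
    # keytool-style fingerprint: a hash of the DER-encoded certificate,
    # rendered as colon-separated upper-case hex pairs.
    digest = hashlib.new(algorithm, der_bytes).hexdigest().upper()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

Passing algorithm='md5' or 'sha256' reproduces the other fingerprint lines keytool displays.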

Android code signing

As evident from the examples above, Android code signing is based on Java JAR signing, and you can use the regular JDK tools to sign or verify APKs. Besides those, there is an Android-specific tool in the AOSP build/ directory, aptly named signapk. It performs pretty much the same task as jarsigner in signing mode, but there are a few notable differences. To start with, while jarsigner requires keys to be stored in a compatible key store file, signapk takes separate signing key (in PKCS#8 format) and certificate (in DER format) files as input. While it does appear to have some support for reading DSA keys, it can only produce signatures with the SHA1withRSA mechanism. Raw private keys in PKCS#8 format are somewhat hard to come by, but you can easily generate a test key pair and a self-signed certificate using the make_key script found in development/tools. If you have existing OpenSSL keys, you cannot use them as is, however; you will need to convert them using OpenSSL’s pkcs8 command:

$ echo "keypwd"|openssl pkcs8 -in mykey.pem -topk8 -outform DER -out mykey.pk8 -passout stdin

Once you have the needed keys, you can sign an APK like this:

$ java -jar signapk.jar cert.cer key.pk8 test.apk test-signed.apk

Nothing new so far, except the somewhat exotic (but easily parsable by JCE classes) key format. However, signapk has an extra ‘sign whole file’ mode, enabled with the -w option. In this mode, in addition to signing each individual JAR entry, the tool generates a signature over the whole archive as well. This mode is not supported by jarsigner and is specific to Android. So why sign the whole archive when each of the individual files is already signed? In order to support over-the-air (OTA) updates, naturally :). If you have ever flashed a custom ROM, or been impatient and updated your device manually before it picked up the official update broadcast, you know that OTA packages are ZIP files containing the updated files and scripts to apply them. It turns out, however, that they are a lot more like JAR files on the inside. They come with a META-INF/ directory, manifests and a signature block, plus a few other extras. One of those is the /META-INF/com/android/otacert file, which contains the update signing certificate (in PEM format). Before booting into recovery to actually apply the update, Android will verify the package signature, then check that the signing certificate is one that is trusted to sign updates. OTA trusted certificates are completely separate from the ‘regular’ system trust store, and reside in, you guessed it, a ZIP file, usually stored under /system/etc/security/ On a production device it will typically contain a single file, likely named releasekey.x509.pem.
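Since the OTA package carries its own signing certificate, extracting it for inspection takes only a couple of lines with Python’s zipfile module (the function name is ours; the entry path is the OTA layout described above):

```python
import zipfile

def read_ota_cert(ota_file):
    # The update signing certificate ships inside the OTA package itself,
    # in PEM format, at a well-known location in META-INF/.
    with zipfile.ZipFile(ota_file) as zf:
        return zf.read('META-INF/com/android/otacert')
```

The recovery system compares this embedded certificate against the device’s OTA trust store before applying the update.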

Going back to the original question, if OTA files are JAR files, and JAR files don’t support whole-file signatures, where does the signature go? The Android signapk tool slightly abuses the ZIP format by adding a null-terminated string comment in the ZIP comment section, followed by the binary signature block and a 6-byte final record, containing the signature offset and the size of the entire comment section. This makes it easy to verify the package by first reading and verifying the signature block from the end of the file, and only reading the rest of the file (which for a major upgrade might be in the hundreds of MBs) if the signature checks out. If you want to manually verify the package signature with OpenSSL, you can separate the signed data and the signature block with a script like the one below, where the second argument is the signature block file, and the third one is the signed ZIP file (without the comments section) to write:

#!/usr/bin/env python

import os
import struct
import sys

file_name = sys.argv[1]
file_size = os.stat(file_name).st_size

f = open(file_name, 'rb')
# read the 6-byte final record, which holds the signature offset
# (from the end of the file) and the size of the comment section
f.seek(file_size - 6)
footer = f.read(6)

sig_offset = struct.unpack('<H', footer[0:2])
sig_start = file_size - sig_offset[0]
sig_size = sig_offset[0] - 6

# extract the signature block
f.seek(sig_start)
sig = f.read(sig_size)

# 2 bytes comment length + 18 bytes string comment
f.seek(0)
sd = f.read(file_size - sig_offset[0] - 2 - 18)
f.close()

sf = open(sys.argv[2], 'wb')
sf.write(sig)
sf.close()

zf = open(sys.argv[3], 'wb')
zf.write(sd)
zf.close()


Android relies heavily on the Java JAR format, both for application packages (APKs) and for system updates (OTA packages). APK signing uses a subset of the JAR signing specification as is, while OTA packages use a custom format that generates a signature over the whole file. Standalone package verification can be performed with standard JDK tools or OpenSSL (after some preprocessing). The Android OS and recovery system follow the same verification procedures before installing APKs or applying system updates. In the next article we will explore how the OS uses package signatures and how they fit into Android’s security model. 

Fake Vertu App Infects Korean and Japanese Android Users

A new threat has surfaced targeting users in Korea and Japan, but this attack, unlike others making the news, is not one motivated by political or ideological dogma. Instead, this one is based purely on old-fashioned greed. Vertu phone owners or those looking for a localized Vertu theme in Korean or Japanese for an Android phone had better think twice before downloading something. McAfee Mobile Research has identified a new variant of Android/Smsilence distributed under the guise of a Vertu upgrade/theme that is targeting Japanese and Korean users.

Fake Vertu app in Japanese.

On installation, Android/Smsilence.C attempts to display a loading screen while, in the background, registering the device’s phone number with an external server [XXX.XX.24.134] via an HTTP POST. The malware then registers an Internet filter on the local device so that any incoming messages are handled first by the Trojan and then forwarded to the same server. The loading screen eventually stops with a message in Japanese or Korean reporting that the service is unavailable, asking the user to please try again.

Threat Details

McAfee’s research into the control management system used by this threat has shown that multiple domains (pointing to the same server) were used, in addition to multiple guises to spread the threat. Around 20 fake branded apps–from coffee to fast-food chains, including an antivirus product from Korea that was uploaded to and revoked from Google Play–were used. Despite a lack of sophistication compared with other mobile botnets, Android/Smsilence was still able to infect between 50,000 and 60,000 mobile users, according to our analysis.

Fake Vertu app in Korean.

The new variant now extends to Japanese victims. Most other threats targeting Japan this year have been minor variations of one-click fraud (also called scareware), which has been around in one form or another since 2004. Devices infected with Android/Smsilence.C are capable of sending back a lot more information, in addition to downloading additional spyware to the infected device.

Because carriers in Japan use the CMAIL protocol for text messaging, attempting to control and maintain a mobile botnet from outside of Japan is not easy (due to the security features implemented by Japanese carriers). We wonder if there was a local accomplice facilitating the spread or control of infected devices. This would also explain the function of a secondary package that is downloaded to an infected device only on demand by the botnet controller, and contains additional spyware functionality not limited to text messaging.

The most bizarre aspect of this new strain remains to be explained, and highlights a limitation of the antimalware research field. Regardless of whether we analyze an Android Trojan or a complex threat like Stuxnet, given enough time we can reverse-engineer any piece of code into its basic building blocks. Nonetheless, there are sometimes aspects of a case in which, no matter how much time is spent investigating, we have no idea what the malware authors were thinking. In this case we discovered a file inside the malware that changes the package hash; that’s an evasive technique dubbed server-side polymorphism, which attempts to avoid detection by antimalware vendors. But it was not the technique that was confusing, even though this is the first time we have seen it used outside of an Eastern European threat family. The chosen file, the key component in the evasion technique, was a picture of London Mayor Boris Johnson.

Among the image files discovered in the package, the malware authors included an image of London Mayor Boris Johnson.

The post Fake Vertu App Infects Korean and Japanese Android Users appeared first on McAfee Blogs.

One-Click Fraud Variant on Google Play in Japan Steals User Data

Last week McAfee Labs reported a series of “one-click fraud” malware on Google Play in Japan. We have been monitoring this fraudulent activity and have found more than 120 additional variants on Google Play since the previous report. The malicious developers upload five or six applications per account using three to five accounts every night, even though almost all of the applications are quickly deleted from Google Play. In some cases the fraudsters upload the applications with few or no modifications to the previous ones, and in other cases they substantially modify images and descriptions. But the final behavior is always the same.

Most of the variants of this malware have the same functionality, with only slight differences in their implementation code. They simply show the fraudulent web pages on the in-application web component or the device’s browser.

McAfee has also found a variant of this family of malware with more dangerous features. This variant retrieves the device user’s Google account name–the email address–as well as the phone number, and sends the information to the attacker’s remote server.


Fig.1 The application description page on Google Play.


This application, tv.maniax.p_urapane1, is a 16-piece slider-puzzle game consisting of pornographic images. It also plays movie files when the user completes the game.

Unlike previous variants from this family of fraudulent malware, this application requires several permissions at installation that are usually unnecessary for this type of game:

  • android.permission.READ_PHONE_STATE
  • android.permission.GET_ACCOUNTS


Fig.2 The malware’s list of required permissions.


Behind the scenes, the malware retrieves the user’s data using these permissions and sends it to a remote server by opening the URL http://man**** It stores the data in a MySQL database server using the Java Database Connectivity (JDBC) API via a database-driver library bundled in the application.


Fig.3 Malware application screens.


Fig.4 Google account name and phone number data sent to the attacker’s server.


This application also displays some “advertisement” links at the bottom of the screen. The application’s description page on Google Play says that the developer does not guarantee the safety of these linked advertisements, implying that they are not aware of the contents of the ads. In fact, however, the application simply displays the image files bundled in the application package and invokes the browser with the hard-coded URL http://pr**.*obi/?neosp_nontop_eropne01, which is the fraudulent web page often used in other variants of this one-click-fraud family of malware.


Fig.5 Fraudulent web pages.


The stolen Google account name and phone number are not directly used in the fraudulent page opened from this application. However, we expect the attacker will try to use this information for future malicious activities.

Fortunately, this application was deleted from Google Play within a day after it was added, and so the number of victims should be small. But the appearance of this variant indicates that the attackers are determined to collect personal information from their victims and that they are capable of developing variants with more advanced features than previous ones.

McAfee Mobile Security detects this application as Android/OneClickFraud, and will continue to monitor for more fraudulent activities from this family in Japan.

The post One-Click Fraud Variant on Google Play in Japan Steals User Data appeared first on McAfee Blogs.

Ongoing Google Play Attacks Plague Japanese with Variation on One-Click Fraud

In what may be the biggest security-related incident on Google Play this year, multiple Trojans targeting Japanese users were discovered carrying a strain of Android one-click fraud. McAfee Mobile Research has already identified multiple developer accounts that were used to spread the malware and confirmed that more than 80 applications of this type existed on Google Play as of this writing. We have also reported additional developer accounts to Google Play Security for investigation and revocation.


Our investigation into the apps has shown that new variants of one-click fraud have been altered so that the fraud is not immediately identifiable unless the victim interacts with the apps–in effect making the apps “two-click fraud” or even “three-click fraud”–and making the automated screening and scanning process difficult.

In fact, these applications simply invoke the web browser on the device or the web-view component inside the application to load the web contents. This extra step before any fraudulent activity occurs makes automated detection of this type of malware more difficult.


One-click fraud is a threat vector unique to Japan that has been around for more than a decade on PCs, but the aggressive tactics of the past year show that the criminals behind this scam are committed to exploiting mobile devices.

By using two or more clicks to commit fraud, an attacker can more easily trick users into believing that they are actually registered in the fraudulent service. Victims are more likely to pay money or give detailed personal information to the attacker.

In the current fraud, the attacker used multiple developer accounts on Google Play, as well as almost the same description of the applications across these separate accounts. This indicates that this type of fraudulent application variant is easily created and distributed. Actually, the attacker created new developer accounts soon after old accounts were banned due to malware reporting and published almost the same applications with minor changes under these new accounts.

What is worse, the essential part of this fraud occurs on the websites rather than inside the Android application, so there are still risks that the number of victims will increase via web browsing even if these applications are removed from Google Play.

McAfee detects this malware family as Android/OneClickFraud. We also detect and block the web accesses to the URLs used in this series of online fraud to protect users when they encounter the malicious fraud sites using their browsers. Make sure to keep your McAfee security products updated and stay tuned to McAfee Labs blogs for additional information as we continue our investigation.

The post Ongoing Google Play Attacks Plague Japanese with Variation on One-Click Fraud appeared first on McAfee Blogs.

Wearable Technology: Utterly Fantastic or the Next Privacy Fiasco?

You’ve felt it. That tiny nagging feeling making you doubt for a second whether or not you should post what you’re doing on Twitter, share that picture of your new car (license plate included, I might add) on Facebook, or tag your location in an Instagram photo. But that’s just the beginning! As we adopt the next generation of mobile devices – also known as wearable technology – that nagging feeling will be amplified across your entire body, because you’ll have mobile devices strapped to your wrist, worn as eyewear, wrapped around your neck, and even embedded into your shoes.

We are living in a generation of oversharing and overusing. Our mindsets have shifted from “what’s safe” to “what’s next” and that gut feeling that normally tells us to be more cautious about how public we make our daily lives is being replaced by excitement over the hottest new gadgets. Wearable technology is fascinating and designed to make our lives easier, but what people forget is that every connected device we use becomes a new entry point or backdoor for cybercriminals to enter.

But they’re only glasses that record video, why would I need protection?

A fair question; however, when Google created their Google Glass, the intended use was for far more than just recording. Along with taking pictures and recording videos, these glasses allow consumers to use GPS and maps apps, send messages to saved contacts, ask Google questions, store your schedules, and of course, share all of this information to your networks – in real-time. It sounds like a dream until you realize that each of these activities is a gateway for hackers. You could potentially be sharing every aspect of your life and opening up your most private doors – from where you live, what you’re doing tomorrow at 3 p.m., to what your children look like – all to strangers who may be looking to hurt you and your loved ones.

Google Glass is not the only wearable technology becoming available to consumers. There is the rumored iWatch from Apple, which would sync to all your Apple products – an outlet that could leave your data at risk. There is also the FitBit Flex, the wristband that tracks all your daily movements, including your sleep patterns, leaving room for others to put together a snapshot of your daily life.

Google Glass

Every single day, a new idea emerges, and every single day, a new device is created that provides leeway for cyberscammers to go after unsuspecting consumers. Yesterday it was smartphones and tablets, today it’s wearable technologies, and tomorrow it will be something else.

By no means should you switch your smartphone out for a flip phone, delete your Facebook account, or end your obsession with taking Instagram photos – but you should be aware of the security and privacy risks that each new connected device brings and make sure that you have the basics covered. Taking an extra second to ensure that you have your passwords and other data protected on your phone, PC and tablet could be the difference between taking a picture of what you are eating and having a stranger know where you are eating dinner.

Protect yourself. Secure your data. Love your technology. And welcome to keeping up with the 22nd century.

The post Wearable Technology: Utterly Fantastic or the Next Privacy Fiasco? appeared first on McAfee Blogs.