Illumio ASAP eliminates a weak link in data center infrastructures

Image: iStock/stockxchng

Two years ago, security startup Illumio, backed by $42 million (US) in venture capital, came out of stealth mode by announcing its Adaptive Security Platform (ASP). Then and now, the company’s goal has been to eliminate weak links in distributed data centers: environments where workloads can move between servers and scale up or down at a moment’s notice.


Cofounders Andrew Rubin (CEO) and PJ Kirner (CTO) believe security must be as agile as the workloads it protects under those conditions. Fast forward to April 2016: with another $100 million (US) in venture backing raised along the way, Illumio has launched a new product called the Attack Surface Assessment Program (ASAP).

As for what “attack surface” means, Nathaniel Gleicher, head of cybersecurity strategy at Illumio and former director for cybersecurity policy on the White House National Security Council, tells eWeek’s Sean Michael Kerner that an attack surface is an open communication pathway between servers in a data center or a company’s digital infrastructure. Gleicher also notes that reducing attack surfaces has been a cornerstone of all Illumio products. He adds, “The new ASAP effort is an outgrowth of the visibility that Illumio offers its customers as a way to understand what the attack surface is within a data center.”

The company’s April 2016 press release announcing ASAP adds:

“ASAP gives enterprises MRI-like visibility inside the data center and cloud by providing a map of high-value assets and open communication pathways between applications. It then enables organizations to understand — and radically reduce — the attack surface of their high-value assets.”

SEE: Software defined data centers: The smart person’s guide

How ASAP works

In this age of automating everything, Illumio ASAP is surprisingly hands-on. Experts from Illumio work with the client to determine the best way to ensure the company’s communication links are as secure as possible. Gleicher describes the process in this video.

Image: Illumio

Visualize the client’s infrastructure

The first step, according to Gleicher, is understanding the client’s digital environment by cataloging all of its applications, platforms, and communication links. More specifically, the Illumio consultants (a rough sketch of this analysis follows the list):

  • determine working communication links between different environments or applications;
  • detect patterns of traffic moving from low-value environments to high-value environments (pathways that most interest intruders); and
  • highlight the most connected workloads (the most talked-to servers).
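
To make the mapping concrete, here is a minimal sketch of this kind of analysis, assuming flow records and environment-value labels are available as inputs. The hostnames, labels, and counts are invented; this is not Illumio’s implementation.

```python
# Hypothetical sketch of the traffic mapping described above; not
# Illumio's implementation. Hostnames and value labels are invented.
from collections import Counter

# Each flow record is a (source_host, destination_host) pair.
flows = [("web-01", "db-01"), ("web-01", "db-01"),
         ("dev-03", "db-01"), ("db-01", "backup-01")]
value = {"web-01": "low", "dev-03": "low",
         "db-01": "high", "backup-01": "high"}

links = Counter(flows)  # working communication links and their frequency

# Pathways from low-value to high-value environments interest intruders most.
risky = [(src, dst) for (src, dst) in links
         if value.get(src) == "low" and value.get(dst) == "high"]

# Most-connected workloads: count the distinct links touching each host.
degree = Counter()
for src, dst in links:
    degree[src] += 1
    degree[dst] += 1

print("Low-to-high pathways:", risky)
print("Most talked-to servers:", degree.most_common(2))
```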

Analyze the communication links

The next step is to determine whether each communication link is carrying malicious traffic, traffic the client needs, or benign but unneeded traffic. That is accomplished by showing the client a network visualization of all the communication links.

“When the client’s security team sees the map for the first time, they say, ‘Wait a minute, why is that server talking to this server?'” says Gleicher, who is not surprised when the client’s security personnel admit they were unaware of the undocumented communication links.

This is when the discussion turns to whether those communication links actually need to be open. If not, members of the in-house security team disable them. If the links are needed, they are flagged as open, potential attack surfaces that get priority when security measures are applied.
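
As a toy illustration of that triage, the decision rule might look like the following; the link names are invented, and the needed/unneeded verdicts are assumed to come from the client’s own review.

```python
# Toy triage of discovered links, echoing the workflow described above.
# Verdicts come from the client's review; hostnames are invented.
links = {
    ("web-01", "db-01"): "needed",
    ("dev-03", "db-01"): "unneeded",    # undocumented link: disable it
    ("kiosk-02", "db-01"): "unneeded",
}

for (src, dst), verdict in links.items():
    if verdict == "unneeded":
        print(f"disable {src} -> {dst}")
    else:
        # Needed links stay open but are flagged as attack surface that
        # gets priority when security measures are applied.
        print(f"flag {src} -> {dst} as an open attack surface")
```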

Provide a long-term security strategy

Gleicher suggests it is nearly impossible to protect everything, so companies should safeguard only high-value environments. With that in mind, the Illumio security team creates a step-by-step segmentation strategy to reduce attack surfaces and explains how to implement it with the least cost, complexity, and effort. He adds, “We make sure the client gets the most bang for their buck.”

On a slightly different note, privacy is a concern for the people at Illumio, as they deal with data from their clients that attackers would love to steal. The press release advises that all data is transferred to Illumio across an encrypted channel and stored in an isolated, encrypted repository; client data is anonymized before being analyzed by Illumio; and data shared with Illumio is deleted two weeks after presenting the final report.
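
The press release does not detail the anonymization scheme. One common approach, sketched below purely as an assumption, is a keyed hash that replaces identifiers with stable pseudonyms so traffic patterns can still be analyzed without exposing real names.

```python
# Illustrative only: keyed-hash pseudonymization in the spirit of what
# the press release describes. The key and hostname are invented;
# Illumio's actual scheme is not public at this level of detail.
import hashlib
import hmac

SECRET_KEY = b"per-engagement-secret"   # assumed per-client key

def anonymize(hostname: str) -> str:
    """Replace a hostname with a stable, non-reversible pseudonym."""
    digest = hmac.new(SECRET_KEY, hostname.encode(), hashlib.sha256)
    return "host-" + digest.hexdigest()[:12]

# The same input always maps to the same pseudonym, so communication
# patterns survive anonymization while real names do not.
print(anonymize("payroll-db-01.corp.example"))
```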

What differentiates ASAP from the competition?

The press release suggests, “ASAP visualizes the relationships between your servers, analyzes your attack surface, and helps you quickly take steps to harden your data center against lateral movement.”


IoT and liability: Who pays when things go wrong?

Image: iStock/a-image

Product liability is about to meet the Internet of Things (IoT), and what that means is anyone’s guess. Case in point: Who is liable if bad guys hack a home’s smart thermostat, turn off the heat in the dead of winter while the owners are vacationing in sunny Florida for two weeks, and the water pipes freeze, flooding the newly remodeled lower level?

SEE: Security researchers’ smart home findings may keep you up at night

Product liability is no panacea

Historically, product liability has been a can of worms. The blog post What is Product Liability? on Thomson Reuters’ FindLaw website suggests one reason is the lack of federal product-liability laws. The authors add, “Product liability claims are based on state laws and brought under the theories of negligence, strict liability, or breach of warranty.”

It’s not hard to see that ensuring products meet liability regulations in every state would be quite an undertaking. That may also be why most definitions of product liability are confusing at best, even one of the better descriptions, championed by the FindLaw website:

“Product liability refers to a manufacturer or seller being held liable for placing a defective product into the hands of a consumer. Responsibility for a product defect that causes injury lies with all sellers of the product who are in the distribution chain.”

IoT complicates product liability

If the definition sounds nebulous, more than a few attorneys who specialize in product liability would agree, especially when devices associated with IoT are considered.

Lucas Amodio, intellectual property attorney at Armstrong Teasdale LLP, in his post Is the Internet of Things Ripe for Product Liability Law?, brings the problem to the forefront, saying, “With hacks in the past where data was compromised, the damage was intangible and hard to quantify. However, it’s easier to determine monetary damages when you have real physical damages.”


As one might expect, when monetary values can be assigned to liability claims, the blame game gets serious. “The question becomes who is ultimately responsible for the interactions of the product,” asks Amodio. “And more importantly to the people in the cybersecurity field, who is responsible if a hacker breaches the security to the device and causes damages in the real world?”

The Mason, Hayes, and Curran blog post Untangling the Web of Liability in the Internet of Things raises yet another complication caused by IoT. “Manufacturers of IoT devices, IoT network providers, and IoT software developers need to be aware users may bring claims against one or all of them following a device malfunction or security breach,” mentions the post. “It is not clear if the aggrieved IoT user will be required to prove they have suffered damage as a result of an IoT player’s actions or if the courts and lawmakers will adopt a ‘strict liability’ approach.”

The post’s authors suggest an alternative: courts might apportion liability among all concerned parties, including IoT device manufacturers, network providers, and even hackers, when they are within reach of the law and the courts.

SEE: Internet of Things: The Security Challenges (ZDNet/TechRepublic)

Criminal, civil, or both?

As to if and when a liability might be considered a criminal offense instead of a civil one, the Mason, Hayes, and Curran post suggests that depends on the severity of the harm. For example, the authors imply that malfunctioning automated traffic lights causing a serious accident could raise claims of criminal liability.

The authors also suggest that, in the near future, a liability case may contain both criminal and civil elements. Their example deals with an automated car crashing into an oncoming vehicle because the car’s system was not compatible with the city’s smart traffic lights. “A situation like this could raise claims of criminal liability,” mentions the Mason, Hayes, and Curran post. “However, it appears unfair to hold the car owner/driver responsible for causing injury when the culprit was in part the malfunctioning traffic lights and in part the malfunctioning car.”

SEE: Tesla driver dies in first fatality with Autopilot: What it means for the future of driverless cars

Food for thought

The Mason, Hayes, and Curran post concludes that IoT is going to create new risks and a surge in liability litigation. The authors wisely suggest that IoT manufacturers and developers avoid waiting for new liability regulations, and continue to refine IoT security standards and protocols. Companies following that advice will have a competitive edge, while improving user confidence in their IoT products.


SSHCure: Flow-based attack detection that cuts through the chaff

Image: iStock

In March 2012, Gartner analyst Neil MacDonald wrote a report about security and big-data analytics, in which he states, “Information security is becoming a big data analytics problem, where massive amounts of data will be correlated, analyzed, and mined for meaningful patterns.”

MacDonald goes on to mention that traditional security products are unable to scale to the volume of data being captured, thus missing information related to potential attacks or those already in progress. In his report, MacDonald suggests data analytics specifically designed for information security will become necessary.

Fast-forward to today, and MacDonald’s prediction appears to be spot-on. Nearly every business involved in information security is touting how its big-data analytics can provide actionable intelligence on suspicious patterns and threats. However, judging by current internet attack predictions, the outlook is as bleak as it was back in 2012, if not bleaker.

One possible reason

Rick Hofstede, a Ph.D. researcher at the University of Twente’s Centre for Telematics and Information Technology, believes he understands why current security measures are not succeeding. In the paper Unveiling SSHCure 3.0: Flow-based SSH Compromise Detection (PDF), coauthored by Hofstede and Luuk Hendriks, the researchers write:


“Due to the sheer and ever-increasing number of attacks on the internet, Computer Security Incident Response Teams (CSIRT) are overwhelmed with attack reports. For that reason, there is a need for the detection of compromises rather than compromise attempts, since those incidents are the ones that have to be taken care of.”

Put simply, Hofstede and Hendriks feel the traditional security systems and data-analytics platforms currently in use are not prepared for the exponential growth of connected devices and the vast amount of data (potential compromise attempts) being created. That, in turn, opens the door for warnings of an actual attack (a real compromise) to be missed or lost.

SEE: IoT hidden security risks: How businesses and telecommuters can protect themselves

Flow-based detection with SSHCure

One area where “real compromise” is of utmost concern is Secure Shell (SSH), a network protocol that provides secure authentication and encrypted data communications between computers connecting over an insecure network such as the internet.

Hofstede and Hendriks, focusing specifically on SSH, came up with a solution to the data overload challenge: SSHCure. The University of Twente press release Internet Attacks, Find that Effective One, mentions, “Hofstede chose a ‘flow based’ approach. He looks at the data flow from a higher level and detects patterns; just like you can recognize advertisement mailings without actually checking the content of the brochures.”

The two authors attribute SSHCure’s advantages to its command and control taking place at a central location and to the system’s ability to scale easily. The press release adds, “By zooming in on attacks that lead to a ‘compromise’ and require action, Hofstede further narrows his analysis.”

To accomplish this, Hofstede and Hendriks explain that SSHCure’s detection algorithms classify SSH attacks into the following phases (a rough sketch of such a classifier follows the list):

  • Scan: Attackers perform a horizontal network scan to identify active SSH daemons. This phase features a minuscule number of packets per flow, mostly TCP SYN packets.
  • Brute-force: Attackers perform a brute-force attack by issuing many authentication requests against one or more targets (i.e., SSH daemons), typically done by means of dictionaries. The traffic in this phase consists of high-intensity TCP connections commonly referred to as flat traffic.
  • Compromise: Attackers have gained access to a target host by using correct login credentials. This phase typically features either small flows in case of idle connections, or large flows in case the compromised target is being actively misused.
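
A loose re-creation of that three-phase heuristic might look like the sketch below. The packets-per-flow thresholds and IP address are illustrative assumptions, not SSHCure’s published parameters.

```python
# Hypothetical re-creation of the scan / brute-force / compromise
# heuristic described above; the thresholds are illustrative guesses.
from collections import defaultdict

def classify_flow(packets: int) -> str:
    if packets <= 2:
        return "scan"          # little more than TCP SYN probing
    if packets <= 14:
        return "brute-force"   # flat traffic: repeated failed logins
    return "compromise"        # a larger flow: a session that got in

# flows: (attacker_ip, packets_in_flow) records from a flow exporter
flows = [("198.51.100.7", 1)] * 50 \
      + [("198.51.100.7", 12)] * 30 \
      + [("198.51.100.7", 400)]

phases_seen = defaultdict(set)
for attacker, packets in flows:
    phases_seen[attacker].add(classify_flow(packets))

# An attacker observed in all three phases warrants an incident report,
# matching SSHCure's focus on compromises rather than mere attempts.
for attacker, phases in phases_seen.items():
    if {"scan", "brute-force", "compromise"} <= phases:
        print(attacker, "- likely compromise, raise an incident")
```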

SSHCure needed some revising

In his recent Ph.D. defense, Hofstede mentions that earlier versions of SSHCure were far and away the best flow-based brute-force attack detection tools of their time. That said, SSHCure was not ready for production environments because it yielded too many false positives.

Hofstede believes the false positives occurred because SSHCure’s detection algorithm assumed brute-force traffic was flat, meaning all connections have a similar number of packets, number of bytes, and duration. However, the researchers discovered that brute-force attacks from a remote site do not always appear flat.
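
One simple way to express that flatness assumption, offered here only as a sketch with an assumed threshold, is to require a low coefficient of variation across packets, bytes, and duration for one attacker’s connections.

```python
# Sketch of a "flat traffic" test in the spirit described above; the
# 10 percent threshold is an assumption, not the published algorithm.
from statistics import mean, pstdev

def is_flat(conns, max_cv=0.10):
    """conns: list of (packets, bytes, duration) tuples for one attacker."""
    for metric in zip(*conns):              # each of the three dimensions
        avg = mean(metric)
        if avg == 0 or pstdev(metric) / avg > max_cv:
            return False                    # too much variation: not flat
    return True

print(is_flat([(11, 1980, 2.1), (11, 2004, 2.0), (12, 1991, 2.2)]))  # True
```

A remote attack that loses packets or interleaves retransmissions fails a test like this, which is exactly the non-flat case the revised algorithm had to handle.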

Hofstede, in his Ph.D. defense, points out that the latest version of SSHCure uses a new compromise-detection algorithm that can distinguish non-flat transmissions from remote sources. “[Hofstede’s] method proves to be effective, and diminishes the number of incidents, with detection accuracies up to 100 percent—depending on the actual application and the type of network,” adds the press release. “Future, more powerful routers will be able to perform the detection themselves, without needing extra equipment.”

The researchers note that version 3.0 is currently available but has stability issues; version 2.4 remains available as well.


Tor may remain anonymous, thanks to Selfrando

Image: Tor

Kilton Public Library in Lebanon, New Hampshire, the “Live Free or Die” state, is the first library in the nation to make use of Tor, an anonymizing technology. That did not sit well with the Department of Homeland Security: the agency (via the local police) asked the Kilton Public Library board to shut down its Tor server, and the board members said no.

In the Phys.org article, Browse free or die? New Hampshire library is at privacy fore, Lynne Tuohy quotes Alison Macrina, founder and director of the Library Freedom Project, as saying, “Kilton’s really committed as a library to the values of intellectual privacy. In New Hampshire, there’s a lot of activism fighting surveillance. It’s the ‘Live Free or Die’ place, and they really mean it.”

SEE: The undercover war on your internet secrets: How online surveillance cracked our trust in the web

All for naught


However, the Library Freedom Project’s pilot at Kilton appears to be an exercise in futility. The FBI has proven very capable of finding ways to bypass the anonymity provided by the Tor network. Case in point: the FBI is fighting a court order to explain how the agency is able to sidestep Tor.

As to how FBI agents are able to compromise Tor, researchers at Technische Universitat Darmstadt have a pretty good idea. According to the team’s research paper Selfrando: Securing the Tor Browser against De-anonymization Exploits (PDF), the FBI, on its own or with help from a third party, found and exploited a weakness in the Tor Browser (the software used to access the Tor network).

The vulnerability exists because Address Space Layout Randomization (ASLR), a technique for preventing the exploitation of memory-corruption vulnerabilities, was not incorporated into the Tor Browser. As to why, the authors explain that existing ASLR implementations suffer from one or more drawbacks, so the technique was not used.

Selfrando to the rescue

From the title of the research team’s paper, one can surmise that Selfrando has something to do with the solution. The paper mentions, “Selfrando is an enhanced and practical load-time randomization technique for the Tor Browser that defends against exploits, such as the one FBI allegedly used against Tor users.”

Image: The report’s authors and Technische Universitat Darmstadt

The research team believes that Selfrando improves security over ASLR while preserving the features credited to ASLR. The accompanying slide represents the usual workflow from C/C++ source code to a running program, with and without Selfrando.

“While technically challenging, our use of load-time function layout permutation ensures that the attack surface changes from one run to another,” write the authors. “Load-time randomization also ensures compatibility with code signing and distribution mechanisms that use caching to efficiently serve millions of users.”
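
Selfrando itself operates on native binaries, but the idea of load-time function layout permutation can be caricatured in a few lines. The function names and sizes below are invented; this is a conceptual toy, not the tool’s mechanism.

```python
# Conceptual toy of load-time function layout permutation: "function"
# addresses change on every run, unlike a single ASLR base shift.
# Names and sizes are invented for illustration.
import random

FUNCTION_SIZES = {"parse_input": 0x240, "decrypt": 0x1a0,
                  "render": 0x400, "update_cache": 0x120}

def load_with_permutation(base: int = 0x400000) -> dict:
    order = list(FUNCTION_SIZES)
    random.shuffle(order)          # fresh permutation at every load
    table, addr = {}, base
    for fn in order:
        table[fn] = addr           # each address depends on the shuffle
        addr += FUNCTION_SIZES[fn]
    return table

print({fn: hex(addr) for fn, addr in load_with_permutation().items()})
```

Because the permutation happens at load time, the binary on disk is unchanged, which is why the technique remains compatible with code signing and cached distribution, as the quote above notes.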

Besides the above advantages, the researchers suggest that Selfrando contributes the following:

  • Practical randomization framework: To its credit, Selfrando can be directly applied to the Tor Browser without any changes to the source code. “To the best of our knowledge, Selfrando is the first approach that avoids risky binary rewriting or the need to use a custom compiler, and works with existing build tools,” mention the authors. “Moreover, it is fully compatible with ASan, which required additional implementation effort since the randomization interferes with ASan.”
  • Increased entropy and leakage resilience: Selfrando reduces the impact of information leakage vulnerabilities and increases entropy relative to ASLR, making Selfrando more effective against guessing attacks. The researchers add, “Use of load-time randomization mitigates threats from attackers observing binaries during download or after the executable files have been stored on disk.”
  • Low overhead: As busy as Tor Networks are, the researchers were careful to ensure that Selfrando’s startup and performance overheads are negligible.

Integrating Selfrando

The researchers mention that Tor Project personnel are looking at a number of different opportunities to produce hardened builds of the Tor Browser. “We worked closely with their developers in order to make it easy to integrate Selfrando in the Tor Browser,” write the paper’s authors. “They [Tor Project personnel] plan to release a hardened version that includes Selfrando and to evaluate the inclusion of Selfrando in the normal version.”

If the Selfrando development team has its way, people at the Kilton Public Library will be able to “Live Free” and “Browse Free.”

The research team and report authors consisted of: Mauro Conti, Stephen Crane, Tommaso Frassetto, Andrei Homescu, Georg Koppen, Per Larsen, Christopher Liebchen, Mike Perry, and Ahmad-Reza Sadeghi.

Selfrando is available for use in other open-source projects on GitHub.


8 Bad Ass Tools Coming Out Of Black Hat

Penetration testing, reverse engineering, and other security tools that will be explained and released at Black Hat 2016.


Image Source: Adobe Stock

Amid the parties, the deal making, and the overall catching up between security compatriots at the annual Black Hat pow wow in Las Vegas, there’s a body of seriously good work that comes out of the show. Beyond the big vulnerability revelations, some of the most lasting contributions to ongoing security research and protection work come in the form of new open source tools released by presenters at the show. This year’s show will be no different. Researchers are pulling out all the stops yet again, using Black Hat as a platform to explain, release, and/or promote a ton of great tools for pen testers and security operations experts. Here are some of the highlights.

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
