Yes, it is possible to orchestrate security across multiple clouds without creating performance hurdles. Here’s how.
In my experience, conversations with customer security teams often begin with: “I just found out that one of our business owners built infrastructure in the public cloud and it is hosting a critical business process.” Or, “we can’t afford the tech refresh in my current datacenter, and I have been directed to manage a multi-year migration plan to the public cloud.” Hence the headline of this piece: “…whether you want to or not.”
Hybrid cloud security management is a popular industry trend, and it has produced a plethora of service offerings, all claiming to provide the “single pane of glass” that visualizes security posture across datacenters. However, without a plan for orchestrating security across multiple clouds, that pane of glass ends up displaying a collection of disparate data that is difficult to manage.
For those fortunate enough to have a centralized datacenter managed in-house, the reality is that those days are most likely numbered, but it is not too late to embrace this new reality and plan accordingly. Before succumbing to the “want to or not” category, know that developing a roadmap for managing multi-cloud security on one’s own terms is not as difficult as one might think.
The first step in managing this divergent landscape is to classify datacenter environments into low, medium and high risk. This allows for proactive management of the type of data to send to each cloud option. The breakdown should include:
- Low-risk environments (e.g. static marketing webpages) are great candidates for public cloud offerings, with a few security controls in place to detect a compromised server.
- Medium-risk environments (e.g. development environments, collaboration systems) require protection, but will likely benefit from the agility and cost of public cloud options. This classification should have a managed security solution that pulls security telemetry into a Security Information and Event Management (SIEM) tool or a third-party security monitoring tool.
- High-risk environments (e.g. payment card data, personal healthcare information) require auditable security controls that are continuously maintained. These workloads can run in the public cloud; however, most organizations choose to keep them on internal IT infrastructure or host them in a secure private cloud. This is normally the last environment to move to the cloud and is usually the bottleneck that keeps organizations from consolidating all environments in one location.
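The three tiers above can be made concrete by encoding them as an explicit placement policy, so every workload classification yields a deliberate decision about where it runs and which controls it needs. This is a minimal sketch; the tier names, control labels, and placements are illustrative assumptions, not a prescribed taxonomy.

```python
# Hypothetical sketch of a risk-tier placement policy.
# Tier names, control lists, and placement strings are illustrative only.
RISK_POLICY = {
    "low": {  # e.g. static marketing webpages
        "placement": "public-cloud",
        "required_controls": ["host-integrity-monitoring"],
    },
    "medium": {  # e.g. development environments, collaboration systems
        "placement": "public-cloud",
        "required_controls": ["managed-security", "siem-telemetry"],
    },
    "high": {  # e.g. payment card data, personal healthcare information
        "placement": "private-cloud-or-on-prem",
        "required_controls": ["auditable-controls", "segmentation", "siem-telemetry"],
    },
}

def placement_for(workload_risk: str) -> dict:
    """Return the placement decision for a workload's risk tier."""
    try:
        return RISK_POLICY[workload_risk]
    except KeyError:
        # Unclassified workloads default to the most restrictive handling.
        return RISK_POLICY["high"]
```

Defaulting unknown tiers to the high-risk handling reflects the proactive stance the article recommends: a workload nobody classified should not silently land in the public cloud.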
Setting the Security Framework
Once an approach to distribute workloads to various cloud options is determined, the next step is to define the security framework and tools to leverage in each environment. It is advisable to organize security controls into high-level buckets in accordance with the compliance frameworks being used (e.g. NIST 800-53, ISO, PCI), then try to standardize the tools you use to implement those controls.
For example, consider the tools for network inspection (Layer 3/Layer 4), application inspection (Layer 7), network segmentation, configuration control, and endpoint detection/remediation. Whenever possible, the same tools should be used across multiple clouds. If this isn’t reasonable, try to ensure the log output from those tools can be consumed and visualized with a correlation tool or SIEM.
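One way to act on the “standardize tools where possible” advice is to inventory which tool implements each control bucket in each environment and flag the buckets where tooling diverges, since those are exactly the places whose logs will need normalization before the SIEM can correlate them. A minimal sketch, with entirely hypothetical tool and cloud names:

```python
# Hypothetical inventory: control bucket -> tool used per environment.
# Tool and environment names are placeholders, not product recommendations.
CONTROL_TOOLING = {
    "network-inspection-l3l4": {"cloud-a": "fw-x", "cloud-b": "fw-x", "on-prem": "fw-x"},
    "app-inspection-l7":       {"cloud-a": "waf-y", "cloud-b": "waf-z", "on-prem": "waf-y"},
    "endpoint-detection":      {"cloud-a": "edr-q", "cloud-b": "edr-q", "on-prem": "edr-q"},
}

def divergent_buckets(tooling: dict) -> list:
    """Return the control buckets implemented by different tools in different environments."""
    return [bucket for bucket, per_env in tooling.items()
            if len(set(per_env.values())) > 1]
```

Running `divergent_buckets(CONTROL_TOOLING)` on the sample inventory singles out the Layer 7 inspection bucket, telling the team where log-format conversion effort will be concentrated.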
The final step to building a multi-cloud security platform is to create the logging infrastructure that allows all of the information to flow into the proverbial “single pane of glass” for managing multi-cloud security. The critical aspect here is to settle on a single logging standard (e.g. Syslog, JSON), then convert where necessary to integrate with a visualization tool or correlation engine. This is where many security teams choose to use a third-party tool or management portal to offload this demanding architecture design task to an outside group.
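The “settle on one standard, then convert” step can be sketched as a small normalizer that turns a classic BSD-style (RFC 3164) Syslog line into a JSON event tagged with its cloud of origin. This is an illustrative assumption about the pipeline, not a production collector; the regex and field names are simplified, and real collectors handle many more message shapes.

```python
# Hypothetical sketch: normalize an RFC 3164-style syslog line into JSON so
# events from different clouds reach the SIEM in a single format.
import json
import re

SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d+)>(?P<timestamp>\w{3} +\d+ [\d:]+) "
    r"(?P<host>\S+) (?P<app>[\w\-/]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$"
)

def syslog_to_json(line: str, source_cloud: str) -> str:
    """Convert one syslog line to a JSON event, tagged with its cloud of origin."""
    m = SYSLOG_RE.match(line)
    if m is None:
        # Preserve unparseable lines rather than dropping telemetry.
        return json.dumps({"cloud": source_cloud, "raw": line})
    event = m.groupdict()
    event["cloud"] = source_cloud
    return json.dumps(event)

line = "<34>Oct 11 22:14:15 web01 sshd[4710]: Failed password for root"
print(syslog_to_json(line, "cloud-a"))
```

Keeping unparseable lines as raw events, rather than discarding them, matters in a security context: a malformed log is often exactly the event worth investigating.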
Once these steps are in place, building a sound roadmap to a secure multi-cloud environment becomes far more manageable. A solid plan will also be able to sustain additional growth and ensure that the ROI that the cloud offers is fully realized. The ultimate goal is to have seamless security without creating performance hurdles. Being proactive and thinking big picture is a huge first step.
Jeff Schilling, a retired U.S. Army colonel, is Armor’s chief of operations and security. He is responsible for the cyber and physical security programs for the corporate environment and customer-focused capabilities.