The adage "When everyone's in charge, no one's in charge" applies all too well to private cloudified networks.

Richard Dreger, Contributor

November 21, 2011


As network boundaries blur and longstanding design paradigms fall by the wayside, how do we assign accountability for security? It's a pressing question: Because virtualization gives us so much power and flexibility, we're moving ahead at a breakneck pace, often without looking closely at whether security-assurance levels hold up as the service delivery model morphs.

Whether adding virtualization will break security depends on how you do IT. A unified organization, where network, storage, application, and security groups work well together, communicate openly, and follow a documented security program, can take the added complexity of multisite virtualization in stride. Sure, processes will need to be expanded and new standards developed, but on the whole, the team approach will extend to the new environment.

But what if your IT "department" comprises independent silos that not only don't integrate, but have clear, perhaps formally designated, boundaries? How does that work in a highly virtualized environment, where you can easily have dozens of complete ecosystems residing within a single rack of equipment? It doesn't. Not only can we no longer physically examine system perimeters, but the whole concept of providing adequate segmentation or defining an accreditation boundary now demands work from multiple teams. If one group fails or even makes a simple configuration error, the whole system could become unreachable or open to unauthorized access.

Say you have three major customers, all requiring different security-assurance levels. Maybe one's a large retailer, another is in healthcare, and another is a federal contractor. To make the most of your hardware investment and maximize performance, you use a shared SAN that connects back to multiple blade servers. Since things are virtualized, you've configured the appropriate virtual networks and provided connectivity out of the virtual world via high-speed links to core network equipment and beyond. The goal: to ensure that each customer gets the resources it needs while maintaining an audit-ready security posture.
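To ground the scenario, here's a minimal sketch, in Python, of those per-customer assurance requirements captured as data. The compliance regimes are the obvious ones for each vertical (PCI DSS for the retailer, HIPAA for healthcare, FISMA for the federal contractor); the field names and control lists are invented for illustration.

    # Sketch: per-customer assurance requirements as data. The regimes are
    # the usual ones for these verticals; field names and control lists
    # are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Customer:
        name: str
        regime: str                                   # governing compliance regime
        controls: list = field(default_factory=list)  # required controls

    CUSTOMERS = [
        Customer("retailer",   "PCI DSS", ["segmentation", "DLP", "logging"]),
        Customer("healthcare", "HIPAA",   ["encryption at rest", "access audit"]),
        Customer("fed",        "FISMA",   ["NIST 800-53 baseline", "IPS"]),
    ]

    for c in CUSTOMERS:
        print(f"{c.name}: {c.regime} -> {', '.join(c.controls)}")

The point isn't the data structure; it's that every downstream configuration decision, from storage to network to firewall, should be traceable back to a record like this.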

To provide segmentation, you need the physical hardware team, and maybe the systems team, to configure the SAN disk arrays to balance performance, storage, and access requirements. Sure, you could physically carve up the disks and give different slices to each customer to provide a physical boundary, but this concept is anathema to performance-minded shops and the private cloud model. Storage, however we choose to divvy it up, is then made available to our system infrastructure, in which we create various securely configured virtual machines for each customer. These VMs are then provided with network connectivity, access controls, and perhaps firewalling to permit approved communication with other resources. Communication will ultimately terminate on a strong segmentation boundary, such as a next-generation application firewall with intrusion prevention, data loss prevention, and the like, to block intercustomer traffic.
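As a gut check on what that boundary actually has to enforce, here's a minimal sketch that generates default-deny rules between every pair of customers. The subnets and the iptables-style rule format are hypothetical, chosen purely for illustration; a real deployment would express the same intent in whatever policy language the firewall speaks.

    # Sketch: emit "deny everything between customers" rules for the
    # segmentation boundary. Subnets and names are hypothetical.
    from itertools import permutations

    SUBNETS = {
        "retailer":   "10.10.0.0/16",
        "healthcare": "10.20.0.0/16",
        "fed":        "10.30.0.0/16",
    }

    def boundary_rules(subnets):
        """One DROP rule per ordered customer pair; intracustomer
        traffic never matches and falls through to normal policy."""
        return [
            f"-A FORWARD -s {src} -d {dst} -j DROP  # {a} -> {b}"
            for (a, src), (b, dst) in permutations(subnets.items(), 2)
        ]

    print("\n".join(boundary_rules(SUBNETS)))

With three customers that's only six rules, but note the quadratic growth as tenants are added: the rule count alone is a good argument for a default-deny posture rather than enumerated blocks.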

Even this relatively simple setup requires the skills of system, project, network, and security teams, at a minimum, plus careful coordination, planning, and documentation (there, we said it) to ensure proper client isolation. If audits are required, the due-diligence bar is raised even higher, as segmentation must be demonstrated all the way through the layers, from physical disk partitioning up through multiple network and application access levels.
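Since "show me the segmentation" is ultimately a question about mappings, the verification step can be automated. Here's a minimal sketch, assuming you can export each layer's customer-to-resource assignments (LUNs, VLANs, firewall zones) into simple mappings; the data below is invented, and in practice you'd pull it from the SAN, virtual switch, and firewall configurations.

    # Sketch: flag any resource mapped to more than one customer, at any
    # layer. Each hit is a broken segmentation claim. Data is invented.
    from collections import defaultdict

    ASSIGNMENTS = {
        # layer -> customer -> resources assigned at that layer
        "lun":  {"retailer": {"lun1", "lun2"}, "healthcare": {"lun3"}, "fed": {"lun4"}},
        "vlan": {"retailer": {110}, "healthcare": {120}, "fed": {130}},
        "zone": {"retailer": {"z-retail"}, "healthcare": {"z-health"}, "fed": {"z-fed"}},
    }

    def shared_resources(assignments):
        findings = []
        for layer, by_customer in assignments.items():
            owners = defaultdict(set)  # resource -> customers holding it
            for customer, resources in by_customer.items():
                for res in resources:
                    owners[res].add(customer)
            findings += [(layer, res, sorted(custs))
                         for res, custs in owners.items() if len(custs) > 1]
        return findings

    findings = shared_resources(ASSIGNMENTS)
    for layer, res, custs in findings:
        print(f"FAIL [{layer}] {res} shared by {', '.join(custs)}")
    print("PASS" if not findings else f"{len(findings)} finding(s)")

Run a check like this on every change, not just before the audit; the goal is to catch the "simple configuration error" from earlier before it becomes a shared LUN.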

Clearly, even a monolithic security team can't perform all of these duties effectively. It's tempting to just say, "Security is now everyone's responsibility," but that's not the whole story, either. You need to do some restructuring. The InformationWeek Virtualization Management Survey asked about this: Of the 396 IT pros from larger companies who responded, 33% have reorganized or are in the process, and another 18% see the need to.

So what's the right structure?

The security team, led by a chief security officer or chief information security officer, still bears ultimate accountability for ensuring data protection, defining the security program vision, and managing various security resources. In a private cloud, everyone has a security job to do, but no one has free rein. Rather, the security team, whether an army of one or a larger group, must liaise with the other teams to craft a multitiered strategy. In this model, security systems are developed jointly, with the security team responsible for the overarching assurance requirements and the appropriate technical teams handling the controls they know best. A CISSP will never be as good at Active Directory architecture design as the Microsoft guru, so don't try to be. Instead, work to guide the AD architecture design and ensure that it provides sufficient protections and can stand up to objective scrutiny--an audit by an outside firm. A decentralized but interlinked organizational structure will extend well into the virtual environment.

Consider a few take-home guidelines as you segue into a heavily virtualized world:

-- Define your requirements: I find myself repeating this like a mantra. If you haven't defined your requirements, then you don't know if your team can do what you need it to do. Virtualization comes with a slew of new tools, toys, and technologies to choose from, and requirements, once defined, should lead you to select the correct controls and products--not vice versa. It's easy to get lost in technical minutiae and forget what the goals are or where the "minimum sufficient" level is. Clear requirements help define and achieve success in complex projects.

-- Communicate openly and often: Hard as it can be, teams must really communicate, not just talk at one another. As we discussed earlier, each group is responsible not only for excelling in its own area of expertise; those skills must also be guided and coordinated to ensure that the IT organization operates as an effective whole. The only way to do that is to sanity-check the virtualization plan as a whole against business and security requirements. Midlevel IT managers, I'm looking at you to make this happen. You know your teams best and can communicate both at the individual technical level and up the chain to management.

-- Someone must still be in charge: Remember how we started this out? If no one person or group has ultimate accountability for a client or resource, then nobody does. Even with each layer of the IT team supporting our security initiatives, the security team (and CISO) must be confident that risk has been properly managed and that the appropriate controls are deployed and regularly checked. Trust, but verify.

Richard Dreger is president of WaveGard, a vendor-neutral security consulting firm.

InformationWeek Analytics has published a report on backing up VM disk files and building a resilient infrastructure that can tolerate hardware and software failures. After all, what's the point of constructing a virtualized infrastructure without a plan to keep systems up and running in case of a glitch--or outright disaster? Download the report now. (Free registration required.)

About the Author

Richard Dreger

Contributor

Rick is co-founder and president of WaveGard. With nearly 20 years of experience in cybersecurity and related enterprise technology fields, Rick enjoys solving complex business IT problems. He holds an EE/BME degree from Duke University and a master's in Computer Engineering from Villanova University.

