Commentary by Mark Peters, 10/17/2012 12:26 PM

Virtualization And The Case For Universal Grid Architectures

Today, virtualization can make one system act like 10, but making 10 physical boxes look and act like one is far more valuable. And that is where we are heading.

Essentially since the beginning of the industrial computing era, systems have been designed in a monolithic fashion--that is, as effectively self-contained compute, memory, and I/O systems in a box. While such boxes can be extended (internally and/or externally), and the capabilities of all the elements and connections have ridden each successive wave of technology advances, they fundamentally continue to operate the same way: data lives on some sort of storage device (itself, of course, a system that contains processing), is fed into memory where it is processed, and then the pattern is reversed.

Thus, in order to accommodate more of anything--more users on the system, more data to process, more transactions, faster processing--the industry has responded by constantly developing bigger, faster, more capable systems ... systems that continue to remain largely monolithic.

In the meantime, as IT systems became more and more critical to the operation of various business functions, secondary--or redundant, highly available--systems were required. This introduced the era of clustering, in which one monolithic system can take over for another monolithic system if that other system fails. Clusters have grown in sophistication and size (as have their monolithic components), but they remain comparatively small and confined next to the alternative approach: grid.

Moore's Law has meant we have been able to effectively double our capabilities (processing and capacity, anyway--not actual I/O) roughly every 18 months, which has, by and large, kept up with the lion's share of overall demand from the commercial computing buying community.
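
To put that compounding in perspective, here is a quick back-of-the-envelope calculation in Python (illustrative arithmetic only; the 18-month doubling period is simply the figure cited above):

    # Doubling every 18 months means a 2**(t / 1.5) multiplier after t years.
    for years in (3, 6, 9, 15):
        factor = 2 ** (years / 1.5)
        print(f"After {years:>2} years: ~{factor:,.0f}x the original capability")

Call it a thousandfold gain in 15 years, which is how monolithic designs managed to keep pace for as long as they did.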

Until now, that is.

Monolithic architectures, whether clustered or standalone, have historically been finite and static. This means that, in order to execute an application, you have to run that application on that particular system. The overall system is configured with an operating system (the overall stack controller) and applications running on top of that OS. Those applications execute under rigid, specific conditions that are tied directly to that OS and that stack of infrastructure.

Clustering in that situation is normally relegated to simply having System A take over the application workload of System B if/when System B goes down, for whatever reason. There are many variations and subtly different ways this happens, but basically that's it. Sometimes we have more than a 1:1 cluster relationship--sometimes 4:1 or even 8:1--but we never have 1,000:1 or more. As long as the IT world has been comfortable knowing that an application could only execute within those physical parameters, clustering has been fine.
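
To make the pattern concrete, here is a minimal Python sketch of that 1:1 active-passive arrangement; the class names and the heartbeat timeout are hypothetical stand-ins, not any vendor's clustering product:

    import time

    HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a node is declared dead

    class Node:
        """A stand-in for one monolithic system in the cluster."""
        def __init__(self, name):
            self.name = name
            self.last_heartbeat = time.monotonic()

        def beat(self):
            self.last_heartbeat = time.monotonic()

        def is_alive(self):
            return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT

    def who_runs_the_app(primary, standby):
        # All or nothing: the standby acts only when the primary stops
        # responding. The workload never spreads across both nodes.
        return primary if primary.is_alive() else standby

    system_b, system_a = Node("System B"), Node("System A")
    print(who_runs_the_app(system_b, system_a).name)  # "System B" while healthy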

But now, virtualization changes all of that.

Server virtualization has allowed IT departments to make one physical stack of hardware appear to the OS/application environments as many individual stacks, allowing much better hardware utilization, efficiency, etc. That's great.

Building an N-node cluster of individual hardware stacks with high availability is great, and it enables much improved operating efficiency, because users can often eliminate many of their previous, smaller stacks of equipment and push all of their application environments onto virtual machines running on far less hardware. But the applications themselves reap no additional benefit--and indeed can often lose some--in the process. Users save on hardware and operations, but their applications perform no better, and are no more available or scalable, on virtual hardware than they would be on their own dedicated hardware.

This is reality. That does not make it bad; it simply is what it is.

As great as virtualization 1.0 is, and as difficult as the problems it creates are, we truly are at the easy phase; things are going to get much more difficult. Fundamentally, you might say that we're in 1972. Back then, the mainframe of the day was a single big box with a ton of resources, and it allowed us to create virtual machine instances by carving out some of those resources and dedicating them to a specific virtual machine. If one VM/application environment needed more of anything, we could give it what was needed--presuming, of course, that we had more to give. This is essentially the same as what we do today, except that back then we could even do it with I/O, to some degree.
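
A minimal sketch of that carve-out model, assuming a single hypothetical host with a fixed pool of cores and memory (all names here are illustrative):

    class SingleBigBox:
        """The one physical machine; every VM is carved from its pool."""
        def __init__(self, cores, memory_gb):
            self.free_cores = cores
            self.free_memory_gb = memory_gb
            self.vms = {}

        def grow_vm(self, vm_name, cores, memory_gb):
            # The monolithic constraint: a VM can grow only if this one
            # box still has more to give. There is no second box to tap.
            if cores > self.free_cores or memory_gb > self.free_memory_gb:
                raise RuntimeError(f"{vm_name}: no headroom left in the box")
            self.free_cores -= cores
            self.free_memory_gb -= memory_gb
            old_cores, old_mem = self.vms.get(vm_name, (0, 0))
            self.vms[vm_name] = (old_cores + cores, old_mem + memory_gb)

    box = SingleBigBox(cores=32, memory_gb=256)
    box.grow_vm("erp", cores=8, memory_gb=64)   # fine: the pool has headroom
    box.grow_vm("erp", cores=32, memory_gb=16)  # raises: the box is the ceiling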

The situation can be summarized thus: Being able to make one physical box look like 10 is very interesting and compelling, but making 10 physical boxes look and act like one is far more valuable. And that is where we are heading.

Today, if you run out of processing capability on your VM, you can either give it more cores within your physical machine (if you have them), or move the VM to a bigger, more powerful physical machine and let it run on those cores. Ironically, this is the very definition of monolithic computing. And yet, tomorrow you will have the ability to distribute--or federate--your application processing across cores, across systems, and across boxes as you need (and also to shrink back accordingly), all without an application knowing or caring.
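
A minimal sketch of that direction, simulating the many-boxes-as-one grid with local worker processes (purely illustrative; a real grid federates across machines, not just across the cores of one):

    from concurrent.futures import ProcessPoolExecutor

    def process_record(record_id):
        # Stand-in for one unit of application work.
        return record_id * record_id

    if __name__ == "__main__":
        # The application submits work to one logical pool; how many workers
        # (or, in a true grid, machines) sit behind it can grow or shrink
        # without the application knowing or caring.
        with ProcessPoolExecutor(max_workers=8) as grid:
            results = list(grid.map(process_record, range(1000)))
        print(f"{len(results)} records processed by one logical system")

Raise max_workers and the same application simply runs wider; lower it and the application shrinks back, which is exactly the elasticity the monolithic model cannot offer.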
