Cloud, SSD Make Servers Fight For Survival

Two trends are hitting the server market hard. One is life extension from SSD and in-memory operations. The other is a systemic decline caused by the cloud.

Jim O'Reilly, Consultant

November 15, 2013

4 Min Read

The cloud is starting to gain traction as a way to provide cheap, scalable IT capability. Many companies are experimenting with the cloud at various levels of commitment ranging from full-blown cloud operation to hybrid approaches where non-critical work is offloaded.

In addition, archiving and backup are migrating to the cloud, which provides low-cost remote storage and multi-site archiving for maximum data integrity.

Public clouds such as Amazon (the market leader with a 7 percent share), Google, and Microsoft use inexpensive servers purchased directly from Chinese original design manufacturers (ODMs) such as Quanta. These are minimalist, low-cost machines built from Intel reference designs.

Nearly 45 percent of total servers worldwide are purchased directly from these ODMs by the major cloud service providers, and, according to GigaOM, this has put Quanta on track to be one of the largest server providers in the world. Traditional vendors are feeling the pinch.

IBM wants out of its x86 server business but failed to negotiate a price with Lenovo, which may reflect Lenovo's sense that the market is falling fast. It is worth noting that when Lenovo backed out, IDC was preparing quarterly revenue numbers that showed a bloodbath for the traditional vendors.

Traditional server vendors on the ropes
Dell has gone private amid comments showing internal consternation at the state of the market, and the company plans to move its focus to software and services. Dell has shelved its public cloud efforts and plans to focus on private clouds.

Hewlett-Packard, perhaps the hardest hit, appears unconcerned and is putting a brave face on the future, but its cloud story is muddled by the blade-, micro-, and rack-server teams each claiming to have the best solution for the cloud.

A major study by Lawrence Berkeley National Lab, which contains well-researched IT demographic data, suggests that a full implementation of a virtualized x86-based cloud to replace traditional servers would result in saving power equal to that used by Greater Los Angeles.

The study aimed at the "green" aspects of the cloud, but buried just below the surface was what created the savings. Clouds use servers at least as efficiently as the enterprise does; in fact, because they aggregate many workloads, they typically do better. The result is that a cloud solution will use less gear than the systems it replaces. This effect is multiplied by Moore's Law, since new "systems" are at least 2x faster every couple of years.

I chose the word "systems" carefully here, since Moore's Law covers CPUs, and historically system performance hasn't done anywhere near as well because of slow storage and interconnects. But that problem is now fixed, and system performance will outgrow CPU performance for a while.
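The compounding effect of that 2x-every-couple-of-years curve can be made concrete with a small sketch. All figures below are illustrative assumptions, not data from the article:

```python
def servers_needed(baseline_servers, years, doubling_period=2):
    """Servers required for a fixed workload after `years`, if per-system
    performance doubles every `doubling_period` years (Moore's-Law-style
    growth applied to whole systems, not just CPUs)."""
    speedup = 2 ** (years / doubling_period)
    return baseline_servers / speedup

# A workload that takes 1,000 servers today...
for y in (0, 2, 4, 6):
    print(y, round(servers_needed(1000, y)))
# ...needs only ~125 servers six years out: 0 1000, 2 500, 4 250, 6 125
```

Even before any cloud consolidation, the replacement fleet is a fraction of the one it retires, which is exactly why unit counts keep shrinking.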

Moore’s Law now applies to systems
The performance issue means that still fewer new systems are required. For the SMB and smaller enterprise, the cloud replacement rate is likely to be below 50 percent, and even enterprises will need fewer machines.

The sum of these impacts is bad news for traditional vendors. As private and public clouds using x86 servers become mainstream, new server unit count will continue to shrink and sourcing will move to the ODMs. This leaves the traditional vendors with a growing revenue hole, exacerbated by low unit revenue and margin for x86 servers.

Micro-servers and blade-servers are an attempt to upsell clouds on proprietary designs with properties prized in the conventional datacenter: more robust, less maintenance, smaller footprint, lower power, and so on.

TCO calculations favor the ODM servers, since cloud robustness techniques solve the issues of downtime and data integrity as well as any proprietary design does. This is true of both private and public clouds, with the latter having the added advantage that 24/7 operation is mainly "someone else's problem."
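A toy TCO model shows why the argument tips toward ODM gear once cloud-style redundancy absorbs the downtime risk that premium hardware is priced to avoid. Every number here is a hypothetical assumption for illustration only:

```python
def three_year_tco(unit_price, annual_power_cost, annual_maint_cost,
                   expected_downtime_cost=0.0, years=3):
    """Simple additive TCO: purchase price plus recurring annual costs."""
    return unit_price + years * (annual_power_cost + annual_maint_cost
                                 + expected_downtime_cost)

# Proprietary blade: higher price, with low failure exposure baked in.
proprietary = three_year_tco(6000, 400, 500)
# ODM unit: cheaper up front; cloud replication makes a node failure
# cheap to absorb, so we charge only a small expected downtime cost.
odm = three_year_tco(2500, 400, 150, expected_downtime_cost=100)

print(proprietary, odm)  # 8700.0 4450.0 in this toy model
```

The proprietary premium only pays off if a node failure is catastrophic, and in a replicated cloud it simply isn't.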

The threat of a vanishing business also hangs over server wannabes Cisco and EMC. All of the majors will chase a declining market, resulting in price wars and even lower margins. That's not a good time to be entering a market, when even the traditional vendors are already feeling pain.

Add some future technologies, and the story looks even worse. We expect ARM+GPU hybrids in 2015, offering lower costs and still more performance. By 2016 we'll have SoC solutions with a complete server, DRAM, and flash all on one module, and these will increase "system" performance by much more than 2x.

To offset the decline, server vendors will need new consumers of compute power. The Internet of Everything and big data will help, but more is needed. Breakthroughs in server-based voice recognition will contribute, for instance, as will devices like Google Glass, but the next few years will see vendors fighting for survival.


About the Author

Jim O'Reilly

Consultant

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
