Facebook's hardware design chief previews his Interop keynote by explaining why he decided to open source the company's data center designs.

Charles Babcock, Editor at Large, Cloud

May 6, 2013


Facebook's Futuristic Data Center: Inside Tour

Maybe it's the full beard beneath the bald dome. Whether the subject is energy consumption or hardware design, Frank Frankovsky gives the impression that he's got it in a bear hug. He's going to tell you about the whole thing; no detail will escape his attention. And he does so with a combination of gravity and geniality that makes the process highly palatable.

Perhaps it's not surprising, then, that this Ursa Major at Facebook is the founder of the Open Compute Project and has ended up as chairman of the Open Compute Foundation's board of directors. He's the de facto leader of the world's first open-source hardware project.

His official title at Facebook is VP of hardware design and supply chain operations, which means he's responsible for ensuring that Facebook has all the hardware it needs when it needs it. That's no small order, and over the last several years it has forced a rethinking of what data center builders were doing.

Anyone who has been to Facebook's data center complex in Prineville, Ore., can see the first phase of Frankovsky's work. Server motherboards sit on open sleds that slide in and out of the rack for easier maintenance; components are arranged in channels that allow continuous airflow down the rack. Temperatures run a little higher than in a conventional data center because Facebook's servers are designed to operate at 85 degrees Fahrenheit and the facility doesn't use giant air-conditioning chillers to cool the air. The result of the revamped server and data center design is a facility that is 38% more energy efficient and 24% less expensive than its predecessors, said Frankovsky in a recent interview at Facebook's offices in Menlo Park, Calif.

[ Learn more about Facebook's energy-efficient data center design. See Facebook's Data Center: Where Likes Live. ]

"Once we got them into production, we thought, why not make this open source," he said. Facebook as a company had been a frequent adopter of open-source software. But there were no open-source hardware projects. On the contrary, Google and Amazon.com, as they built their leading-edge data centers, kept their designs a trade secret.

Frankovsky had spent years in hardware design, production and product management, first for Compaq, then 14 years with Dell. He had come to admire the collaborative nature and pace of innovation of the Linux kernel development process and other open-source projects. "At Facebook, we were very active in open source. It was in our DNA. But in data center design, there was no comparable pace of innovation," he noted.

Data center builders tended to take what the manufacturers gave them, which usually included what Frankovsky called "vanity hardware," where the manufacturer builds up the front of the sheet metal case with a molding and bulky brand symbol. Those embellishments restrict the airflow into the server and reduce the effectiveness of its cooling fans, Frankovsky said.

As his team worked on data center servers at Facebook, "there was a passion among our peers to take over the technical design -- take control from the suppliers" and strip away all the useless decoration and interfering components. Hardware evolution, he argued, should move more like software's. "We wanted to spur the industry to move faster. We thought, 'Wow, there would be an aggregated impact on the environment if everyone did it this way.'"

The design team got its wish in April 2011 when Facebook CEO Mark Zuckerberg and two partners announced the Open Compute Project, the first open-source hardware project. Heading up an open-hardware project "was a natural evolution for me personally. I've always been a customer advocate," said Frankovsky.

Facebook's Prineville complex has two finished data center buildings that reflect the first and second phases of Open Compute's server-hardware designs. Any Open Compute member is welcome to implement the designs in its own facility, but Facebook remains a production test bed for the evolving ideas behind Open Compute. As proof of the concept, the complex has recorded some of the best power usage effectiveness (PUE) figures reported so far: 1.07 and 1.08 over successive quarters of 2011. That means roughly 93% of the electricity brought to the complex is used by computing equipment; in a more typical data center, the share is closer to 50%.
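For readers who want the arithmetic behind those percentages: PUE is the ratio of total facility power to the power drawn by the IT equipment, so the share that reaches the computing gear is simply 1/PUE. The short Python sketch below is illustrative only, working from the figures reported above; it is not Facebook's code.

    # Illustrative PUE arithmetic: PUE = total facility power / IT equipment power,
    # so the fraction of electricity reaching the computing gear is 1 / PUE.
    for pue in (1.07, 1.08, 2.0):  # Prineville's reported quarters, plus a typical facility
        print(f"PUE {pue:.2f}: {1 / pue:.0%} of facility power reaches the IT equipment")
    # Prints roughly 93%, 93% and 50%, matching the figures cited above.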

The project has taken several new turns recently. "It's surprising how fast people are thinking differently" about hardware design, he said. Open Compute servers now populate the project's standard Open Racks, and a variation, a storage server originated by Hyve Solutions, combines disk drives with an extra serial-attached SCSI (SAS) connection. The SAS connector usually ties storage devices together; here it ties the storage to a simple server, converting the Open Rack into "a poor man's NetApp file server," said Frankovsky. "It's a simple way to combine storage with a server in a single rack," one that more sophisticated storage designers hadn't conceived of, he said.

Another recent innovation is the "group hug board," a server motherboard that allows Intel Atom and an ARM vendor's chips to reside side by side and work together. The project set a specification for the size and shape of the pin connectors for chips used by the board. Competitors produced chips that met the specification, and the Open Compute Project produced the first motherboard with no vendor lock-in. Theoretically, one vendor's chips can be replaced with another vendor's.

Facebook and the other members of the Open Compute Project have a unique position as large buyers of data center equipment, said Frankovsky. "We can bring competing vendors together. Only the (large Web-based) data center operators can do this," he said.

Frankovsky arrived at Facebook with a rare combination of experiences. He had been a product manager for Dell's PowerEdge servers and had also been among the founding executives of Dell's Data Center Solutions unit. Data Center Solutions studied the server needs of the largest Web data center builders, then figured out how to produce thousands of them at the lowest possible cost. Data Center Solutions designs went into some search engine and Microsoft Azure data centers.

For Frankovsky, who is slated to deliver a keynote address Wednesday at the Interop Conference in Las Vegas, the Open Compute Project has become "a virtuous cycle of development," where ideas don't respect vendor boundaries. "It's awesome to see how far it's come" in its first two years, he said.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he earned a bachelor's degree in journalism. He joined InformationWeek in 2003.
