Facebook is open sourcing the hardware design for the servers it uses to train artificial intelligence software.

Larry Loeb, Blogger, InformationWeek

December 12, 2015

3 Min Read
(Image: ymgerman/iStockphoto)


Facebook announced Thursday that it will open source its latest artificial intelligence (AI) server designs. The move continues a course the company began in 2011 when it launched the Open Compute Project to let companies share designs for new hardware.

The server, codenamed Big Sur, is designed specifically to train the newest class of AI algorithms, called deep learning, which mimic the neural pathways found in the human brain.

Google uses this kind of AI technique to recognize spoken words, translate from one language to another, improve Internet search results, and perform other tasks.

Not coincidentally, last month Google released its machine learning library TensorFlow under the open source Apache 2.0 license. According to the TensorFlow website, it is an "open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs (graphics processing units) in a desktop, server, or mobile device with a single API."
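To make the data-flow-graph idea concrete, here is a minimal sketch in the graph-and-session style of TensorFlow's original 2015 release. The values and names are illustrative, not taken from Google's documentation:

```python
# A tiny TensorFlow data flow graph: nodes are operations,
# edges are the tensors passed between them.
import tensorflow as tf

# Two constant nodes feeding a matrix-multiplication node.
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])      # 2x2 tensor
b = tf.constant([[1.0],
                 [0.5]])           # 2x1 tensor
product = tf.matmul(a, b)          # a graph node; nothing has run yet

# Launching the graph in a session executes the operations on
# whatever device is available, CPU or GPU, through the same API.
with tf.Session() as sess:
    print(sess.run(product))       # [[2.], [5.]]
```

Building the graph and running it are separate steps, which is what lets the same program be deployed on a CPU, a GPU, or a mobile device.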

Here is where the Facebook AI servers come into play. Google open sourced the software but not the hardware TensorFlow runs on, and without powerful hardware a deep learning engine is limited in what it can do. Facebook's move to open source its AI server design helps fill that gap.

Big Sur, which Facebook is contributing to the Open Compute Project, includes eight GPU boards, each consisting of many chips yet consuming only about 300 watts of power. The hardware is Open Rack-compatible.
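Taken at face value, those figures put the GPU complement's power draw in the neighborhood of 8 × 300 W, or about 2.4 kW per server. That is a back-of-the-envelope estimate rather than a number Facebook has published, but it suggests why fitting the design into the Open Rack power and cooling envelope matters.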

It was built with the Nvidia Tesla M40 in mind, but is qualified to support a wide range of PCI-e cards, according to Facebook.

GPUs were originally designed to render images for games and other graphics-intensive applications, but they have proven adept at deep learning.

Traditional CPUs are still present in these machines, but neural network training is far more efficient when much of the computational load is shifted onto GPUs, which generally deliver more computational throughput per dollar than CPUs.
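As a rough illustration of that division of labor, TensorFlow's 2015-era API let a programmer pin the heavy matrix math to a GPU explicitly. The sizes and device names below are illustrative assumptions, not Facebook's or Google's code:

```python
# Sketch: shifting the compute-heavy part of a neural network layer
# onto a GPU with TensorFlow's device placement (2015-era API).
import numpy as np
import tensorflow as tf

with tf.device('/cpu:0'):
    # Input handling stays on the CPU.
    x = tf.placeholder(tf.float32, shape=[None, 4096])

with tf.device('/gpu:0'):
    # The dense matrix multiply, where most of the FLOPs are, runs on the GPU.
    w = tf.Variable(tf.random_normal([4096, 4096]))
    y = tf.nn.relu(tf.matmul(x, w))

# allow_soft_placement falls back to the CPU if no GPU is present.
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.initialize_all_variables())
    out = sess.run(y, feed_dict={x: np.random.rand(8, 4096).astype(np.float32)})
    print(out.shape)  # (8, 4096)
```

A dense 4096 × 4096 multiply like this is exactly the kind of operation where a GPU's thousands of parallel cores outrun a conventional CPU.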

According to a blog post by Facebook engineers Kevin Lee and Serkan Piantino, Big Sur is twice as fast as the systems that Facebook previously used for training AI software.

[Read Machine Learning: How Algorithms Get You Clicking.]

Facebook has previously used off-the-shelf components in its self-designed machines. For Big Sur, it partnered with Taiwanese manufacturer Quanta Computer and built on Nvidia's Tesla Accelerated Computing Platform. This arrangement cuts out middlemen in the server's design and manufacture.

Facebook says it open sourced its AI hardware for altruistic reasons.

As Lee and Piantino wrote in their blog, "We want to make it a lot easier for AI researchers to share techniques and technologies. As with all hardware systems that are released into the open, it's our hope that others will be able to work with us to improve it. We believe that this open collaboration helps foster innovation for future designs, putting us all one step closer to building complex AI systems that bring this kind of innovation to our users and, ultimately, help us build a more open and connected world."


About the Author

Larry Loeb

Blogger, InformationWeek

Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek. He has written a book on the Secure Electronic Transaction Internet protocol. His latest book has the commercially obligatory title of Hack Proofing XML. He's been online since uucp "bang" addressing (where the world existed relative to !decvax), serving as editor of the Macintosh Exchange on BIX and the VARBusiness Exchange. His first Mac had 128 KB of memory, which was a big step up from his first 1130, which had 4 KB, as did his first 1401. You can e-mail him at [email protected].

