Intel is targeting the nascent market for artificial intelligence gear with a new version of the Xeon Phi chip code-named "Knights Mill."

Eric Zeman, Contributor

August 18, 2016

3 Min Read
(Image: Jason Doiy/iStockphoto)


Intel has revealed a new weapon in the battle to dominate the artificial intelligence market: the latest Xeon Phi chip, which aims to get machines thinking on their own. The chipmaker can't afford to let the AI market, where Nvidia already has a head start, slip through its fingers the way the mobile phone market did.

Code-named "Knights Mill," the third-generation Xeon Phi announced on Wednesday at the Intel Developer Forum is a server processor made specifically to tackle artificial intelligence.

It features improved floating point performance, which speeds up machine learning. Machine learning allows computers to learn from data on their own, without step-by-step instructions from developers.

"While less than 10% of servers worldwide were deployed in support of machine learning last year, machine learning is the fastest growing field of AI and a key computational method for expanding the field of AI," explained Intel.

Intel says it can take weeks to train machines to recognize patterns and connections in complex data. This long learning curve means they are unable to make real-time decisions. Boosting floating point performance in the Xeon Phi improves how machines handle the algorithms needed to make accurate and useful decisions in a more realistic time frame.
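To make that concrete, here is a minimal, purely illustrative sketch in plain Python (not Intel code, and the toy model and numbers are invented) of what "training" means at this level: the machine repeatedly adjusts numeric weights using floating point multiply-and-add operations. Real models run loops like this over millions of parameters and billions of examples, which is why per-chip floating point throughput translates directly into training time.

```python
# Illustrative only: a tiny "learning" loop that fits y = 2x + 1 from examples,
# with no developer telling it the rule. Every step is floating point math.

data = [(x, 2.0 * x + 1.0) for x in range(-10, 11)]  # (input, target) pairs

w, b = 0.0, 0.0            # parameters the machine "learns"
learning_rate = 0.01

for step in range(1000):               # learning happens by iteration
    grad_w = grad_b = 0.0
    for x, y in data:
        pred = w * x + b               # floating point multiply-add
        err = pred - y
        grad_w += err * x              # accumulate gradients
        grad_b += err
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(f"learned w={w:.2f}, b={b:.2f}  (target: w=2, b=1)")
```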

More important, the Xeon Phi can target deep learning, a branch of machine learning that uses neural networks to handle random and complex bits of data for image and speech recognition, natural language processing, and other tasks. Deep learning "emulates neurons and synapses in the brain, learning through iteration and the formation of complex pathways in the neural network," according to Intel.
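Intel's "neurons and synapses" description maps onto fairly simple arithmetic. The sketch below (a generic toy example, not Intel's software, with made-up layer sizes) shows the structure: "neurons" are numeric activations, "synapses" are the weight matrices connecting layers, and learning means iteratively nudging those weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy network: 4 inputs -> 8 hidden neurons -> 2 outputs.
w1 = rng.standard_normal((4, 8)) * 0.1   # "synapses": input -> hidden layer
w2 = rng.standard_normal((8, 2)) * 0.1   # "synapses": hidden -> output layer

def forward(x):
    hidden = np.maximum(0.0, x @ w1)     # each hidden "neuron" fires (ReLU)
    return hidden @ w2                   # output layer combines the neurons

x = rng.standard_normal(4)               # stand-in for a slice of image or audio data
print(forward(x))                        # the network's current guess; training
                                         # would repeatedly adjust w1 and w2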

Right now, the market is largely owned by Nvidia. The company's GPUs handle many calculations at once in a process called parallel computing. Intel argues the use of GPUs won't work over the long haul.

[Read Intel CEO's Manifesto Details Cloud, IoT Strategy.]

The Intel Xeon Phi processor family can offer up to 1.38 times better scaling than GPUs, the company claims. The big issue here is that GPUs are add-ons to CPUs. It takes time, however little, to send data from the CPU to the GPU and back. Xeon Phi handles the calculations itself, without offloading them to a GPU, which provides a speed boost.
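A rough back-of-the-envelope model shows the argument Intel is making. All of the numbers below are invented for illustration (the 12 GB/s transfer rate and the compute times are hypothetical, not Intel or Nvidia benchmarks): an offload accelerator pays a transfer cost in both directions, while a host processor works on data already in its own memory.

```python
# Illustrative cost model, not a benchmark: hypothetical numbers throughout.

def offload_time(data_gb, gpu_compute_s, link_gb_per_s=12.0):
    transfer = 2 * data_gb / link_gb_per_s   # copy data to the GPU and back
    return transfer + gpu_compute_s

def host_time(host_compute_s):
    return host_compute_s                    # data is already in host memory

# Hypothetical workload: 8 GB of data. The GPU computes each pass faster...
print("offload:", offload_time(8, gpu_compute_s=1.0), "s")
# ...but a self-hosted chip like Xeon Phi skips the copies entirely.
print("host:   ", host_time(host_compute_s=1.4), "s")
```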

Intel plans to push Xeon Phi further once it finalizes its acquisition of Nervana Systems, which it announced earlier this month.

"Nervana's Engine and silicon expertise will advance Intel's AI portfolio and enhance the deep learning performance and total cost of ownership of Intel Xeon and Intel Xeon Phi processors," Intel said in a statement about the acquisition.

Intel says it expects to ship the latest Xeon Phi in 2017.

About the Author(s)

Eric Zeman

Contributor

Eric is a freelance writer for InformationWeek specializing in mobile technologies.
