Nvidia has released a series of hardware and software tools aimed at the high-performance computing market and at systems dedicated to AI and machine learning. In addition to the Tesla P100 GPU accelerator, Nvidia released a set of new software tools for its deep learning platform.

Nathan Eddy, Freelance Writer

June 20, 2016

4 Min Read
Tesla P100 GPU accelerator.

Nvidia is releasing a number of new hardware and software tools designed for the high-performance computing market and for systems dedicated to advancing the capabilities of neural networks, artificial intelligence, and machine learning.

On June 19, the company announced the Tesla P100 GPU accelerator, as well as a series of upgrades to its deep learning software platform, including three new software tools: Nvidia DIGITS 4, CUDA Deep Neural Network Library (cuDNN) 5.1, and the new GPU Inference Engine (GIE).

The Tesla P100 plugs into the PCI Express slots of the servers that power HPC systems and supercomputers. The accelerator will be available in the fourth quarter of this year. Companies such as Cray, Dell, Hewlett Packard Enterprise, IBM, and SGI are all expected to use the Tesla P100 in different systems.

Nvidia's Tesla P100 is built on its Pascal architecture. It creates what the company calls "super nodes" that can offer the same amount of throughput in HPC systems as 32 traditional CPUs. The company also boasts that the accelerator can offer 4.7 teraflops of double-precision performance, as well as 9.3 teraflops of single-precision performance. (A teraflop is one trillion floating-point operations per second.)
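
As a rough back-of-the-envelope illustration of what those figures mean (the workload size below is hypothetical, not an Nvidia number), dividing an operation count by the quoted peak rate gives a lower-bound runtime estimate:

```python
# Back-of-the-envelope estimate: time = floating-point operations / rate.
# The TFLOPS values are Nvidia's quoted peak figures for the Tesla P100;
# the 10^15-operation workload is a made-up example for illustration only.

P100_DOUBLE_PRECISION_TFLOPS = 4.7   # trillion FP64 operations per second
P100_SINGLE_PRECISION_TFLOPS = 9.3   # trillion FP32 operations per second

def seconds_for(ops, tflops):
    """Lower-bound runtime assuming the GPU sustains its peak rate."""
    return ops / (tflops * 1e12)

workload_ops = 1e15  # hypothetical: one quadrillion floating-point operations

print(f"FP64: {seconds_for(workload_ops, P100_DOUBLE_PRECISION_TFLOPS):.1f} s")
print(f"FP32: {seconds_for(workload_ops, P100_SINGLE_PRECISION_TFLOPS):.1f} s")
```

In practice, sustained throughput falls short of peak, so these numbers are best read as a floor on runtime rather than a prediction.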

In addition to the Tesla P100, Nvidia released three new software tools.

The first is DIGITS 4, which can automatically train neural networks across a range of tuning parameters. It introduces a new object detection workflow, which is meant to help data scientists train deep neural networks to find faces, pedestrians, traffic signs, vehicles, and other objects in a sea of images.
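
DIGITS itself is driven through a web interface, so the snippet below is not its API. It is only a minimal sketch of the underlying idea of training the same network across a range of tuning parameters, with a stand-in train_and_evaluate function invented for illustration:

```python
from itertools import product

# Hypothetical stand-in for a framework's training call; DIGITS runs this
# kind of sweep from its web UI rather than through a function like this.
def train_and_evaluate(learning_rate, batch_size):
    ...  # train a network with these settings, return validation accuracy
    return 0.0

learning_rates = [0.01, 0.001, 0.0001]
batch_sizes = [32, 64, 128]

results = {}
for lr, bs in product(learning_rates, batch_sizes):
    results[(lr, bs)] = train_and_evaluate(lr, bs)

best = max(results, key=results.get)
print("Best settings (learning rate, batch size):", best)
```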

The new tool can also be applied to help track objects from satellite imagery and assist in security and surveillance. It is also meant for applications in advanced driver assistance systems and medical diagnostic screening.

Nvidia is making the DIGITS 4 release candidate available this week as a free download for members of its developer program.

The company also announced the GPU Inference Engine (GIE) -- a neural network inference engine for production deployment of deep learning applications.

The GIE platform optimizes trained neural networks for runtime performance and delivers GPU-accelerated services for data center, embedded, and automotive applications. It will be available as part of the Nvidia Deep Learning software development kit.
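
The article does not describe GIE's programming interface, so the following is only a schematic sketch of the optimize-then-serve pattern such an engine supports. Every function name here is a placeholder invented for illustration, not a GIE call:

```python
# Schematic inference-deployment pattern; all names below are placeholders
# for illustration and are not part of GIE or any other real API.

def load_trained_network(path):
    """Load a network that was trained offline in a deep learning framework."""
    ...

def optimize_for_inference(network, precision="fp16"):
    """Fuse layers, select kernels, and fix batch size for fast runtime use."""
    ...

def serve(engine, request_batch):
    """Run the optimized engine on a batch of incoming inference requests."""
    ...

network = load_trained_network("model_trained_in_digits")  # hypothetical path
engine = optimize_for_inference(network)
# serve(engine, incoming_batch)  # invoked per request batch in production
```

The key point is the separation of concerns: training happens once, offline, while the optimized engine handles high-volume inference in the data center, embedded device, or vehicle.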

The deep learning platform is part of the broader Nvidia SDK, which brings together tools for three major computing technologies: artificial intelligence, virtual reality, and parallel computing.

Finally, cuDNN provides high-performance building blocks used by a variety of deep learning frameworks, and version 5.1 delivers accelerated training for deep neural networks.
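
cuDNN itself is a C library called from inside frameworks rather than directly by most users. As a rough illustration of the kind of primitive it accelerates (not of cuDNN's API), here is a deliberately naive 2-D convolution in NumPy:

```python
import numpy as np

def conv2d_naive(image, kernel):
    """Valid-mode 2-D cross-correlation, one of the core primitives cuDNN
    accelerates.

    This is a slow reference version; cuDNN supplies tuned GPU kernels for
    the same mathematical operation, plus pooling, activations, and other
    deep learning building blocks.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
print(conv2d_naive(image, kernel).shape)  # (6, 6)
```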

Nvidia is offering the Version 5.1 release candidate as a free download for members of the Nvidia developer program.

Deep learning and AI capabilities could have far-reaching implications across a variety of industries, from data analysis to autonomous vehicle development. A recent report from analytics firm IHS indicates AI developers are increasingly focused on the autonomous vehicle space.

The IHS report estimated that unit shipments of AI systems used in infotainment and advanced driver assistance systems (ADAS) are expected to rise from only 7 million in 2015 to 122 million by 2025.

Nvidia's Drive PX autonomous car platform combines deep learning, sensor fusion, and surround vision. It can fuse data from 12 cameras, as well as lidar, radar, and ultrasonic sensors.

The platform is built around deep learning and includes a framework (Caffe) to run DNN models designed and trained on the company's DIGITS platform. Drive PX also includes an advanced computer vision (CV) library and primitives.
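
Caffe's Python bindings (pycaffe) are one way to load and test such a trained model outside the vehicle; the file names below are placeholders, and the article does not describe the exact deployment path on Drive PX:

```python
import numpy as np
import caffe  # pycaffe, Caffe's Python bindings

caffe.set_mode_gpu()

# Placeholder file names: a DIGITS training job exports a deploy prototxt
# and a .caffemodel weights snapshot, which can be loaded like this.
net = caffe.Net("deploy.prototxt", "snapshot.caffemodel", caffe.TEST)

# Feed one correctly shaped input and run a forward pass.
input_name = net.inputs[0]
net.blobs[input_name].data[...] = np.random.rand(*net.blobs[input_name].data.shape)
output = net.forward()
print({name: blob.shape for name, blob in output.items()})
```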

Nvidia isn't the only company expanding its focus on deep learning and AI capabilities. Google recently opened a dedicated machine learning operation in its Zurich office called the Google Research Europe center.

The center will focus on three areas: machine intelligence, natural language processing and understanding, and machine perception. The research center aims to deliver machine learning that can be put into practical use, to improve machine learning infrastructure, and to assist the research community overall.

About the Author

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
