OREANDA-NEWS. April 06, 2016. As artificial intelligence sweeps across the technology landscape, NVIDIA unveiled today at its annual GPU Technology Conference a series of new products and technologies focused on deep learning, virtual reality and self-driving cars.

Before a record crowd of more than 5,000 scientists, engineers, entrepreneurs and global press, NVIDIA CEO and Co-founder Jen-Hsun Huang unveiled the world’s first deep-learning supercomputer in a box — a single integrated system with the computing throughput of 250 servers. The NVIDIA DGX-1, with 170 teraflops of half precision performance, can speed up training times by over 12x.

Unlocking AI’s Powers

“Artificial intelligence is the most far-reaching technological advancement in our lifetime,” Jen-Hsun told the audience. “It changes every industry, every company, everything. It will open up markets to benefit everyone. The DGX-1 is easy to deploy and was created for one purpose: to unlock the powers of AI and bring superhuman capabilities to problems that were once unsolvable.”

Over the course of a two-hour talk, he described the current state of AI, pointing to a wide range of ways it's being deployed. He noted more than 20 cloud-services giants — from Alibaba to Yelp, and Amazon to Twitter — that generate vast amounts of data in their hyperscale data centers and use NVIDIA GPUs for tasks such as photo processing, speech recognition and image classification.

Five Miracles

Underpinning DGX-1 is a revolutionary new processor, the NVIDIA Tesla P100 GPU — the most advanced accelerator ever built, and the first to be based on the company's new Pascal architecture.

Based on five breakthrough technologies — which Jen-Hsun smilingly called “miracles” — the Tesla P100 enables a new class of servers that can deliver the performance of hundreds of CPU server nodes.

A group of AI industry leaders — including Facebook’s Yann LeCun and IBM’s John Kelly — have already voiced support for the product. “AI computers are like space rockets: the bigger the better,” said Baidu Chief Scientist Andrew Ng. “Pascal’s throughput and interconnect will make the biggest rocket we’ve seen.”

A key early customer for DGX-1 is Massachusetts General Hospital. It has set up a clinical data center, with NVIDIA as a founding partner, that will use AI to help diagnose disease, starting in the fields of radiology and pathology. Mass General, the nation's largest research hospital, has an archive of some 10 billion medical images — a perfect target for deep learning.

Showing Our Software Side

While NVIDIA hardware has long made headlines, our software is key to advancing the state of the art in GPU-accelerated computing. We call this body of work the NVIDIA SDK, and Jen-Hsun described a series of major updates that it’s getting. These touch on everything from deep learning to self-driving cars and embedded computing (see “NVIDIA Details Major Software Updates to Sharpen GPU Computing’s Cutting Edge”).

Our goal with these updates: Make more of our capabilities available to more developers. A million-plus developers have already downloaded our CUDA toolkit. And there are more than 400 commercially available GPU-accelerated applications that benefit from our software libraries, as well as hundreds more game titles.

Bringing Reality to Virtual Reality

The keynote's visual highlight was a VR experience, built on NASA research, that transports visitors to Mars. The Mars 2030 VR experience, developed with FUSION Media and with advice from NASA, was demoed by personal computing pioneer Steve Wozniak.

Jen-Hsun upped the stakes and showed how our Iray technology can create interactive, virtual 3D worlds with unparalleled fidelity. These Iray VR capabilities let users strap on a headset and prowl around photorealistic virtual environments, such as a building not yet constructed.

To show what VR can do, Google sent along 5,000 Google Cardboard VR viewers. We passed them out after the keynote so GTC attendees could experience NVIDIA Iray VR technology on their phones.

Driving Towards Smarter Cars

Continuing our efforts to help build autonomous vehicles with superhuman levels of perception, we also introduced an end-to-end mapping platform for self-driving cars.

It’s designed to help automakers, map companies and startups rapidly create HD maps and keep them updated, using the compute power of NVIDIA DRIVE PX 2 in the car and NVIDIA Tesla GPUs in the data center.

Maps are a key component for self-driving cars. Automakers will need to equip vehicles with powerful on-board supercomputers capable of processing inputs from multiple sensors to precisely understand their environments. Adding detailed maps to this equation simplifies the problem — by giving the car a better sense of where it is, and what’s coming next.

Of course, you never really know what your technology can do without a little competition. So not only will the ROBORACE Championship — the first global autonomous motor sports competition — be powered by our DRIVE PX 2 AI supercomputer, we’ll also be the first team to enter the championship series.