NYU Pushes Boundaries of Deep Learning as Newest GPU Center of Excellence
OREANDA-NEWS. November 12, 2015. New Yorkers are fond of saying “Let’s talk.” But these days communication is increasingly being mastered by computers, thanks to advances in speech recognition, image understanding and language translation.
New York University’s Center for Data Science is among the institutions helping make these once-esoteric deep learning tasks mainstream. It’s doing so by advancing two key areas of computer science: machine learning, and parallel and distributed systems. Together, these enable application programmers to handle massive datasets easily and efficiently.
Founded by deep learning pioneer Yann LeCun, who’s also director of AI Research at Facebook, the Center for Data Science has forged a strong alliance with NVIDIA as we work to advance GPU-based deep learning, parallel computing research and education.
These efforts are a big part of the reason why we’ve just named the center a GPU Center of Excellence.
“We’re thrilled to join some of our peer institutions as a GPU Center of Excellence and thankful for the support from NVIDIA,” said LeCun. “This will help us to continue our path-breaking work in deep learning and to develop the next generation of data scientists.”
Unblocking the Bottleneck
As the size of datasets expands and algorithms become more sophisticated, scientists face bottlenecks in their research because of the limited amount of compute and memory that can be put on a single machine.
To push deep neural network research to the limits means finding ways to break the single-machine barrier. That’s where GPUs come in. Working with the Center for Data Science, we’ll focus on the development of both scalable learning algorithms and distributed training systems.
These algorithms and systems help computers recognize and identify things that previously only humans could. GPUs reduce the time it takes to train deep neural networks to identify patterns and objects, while requiring less data center infrastructure.
Working with the Center for Data Science, we’ll also develop new deep neural network architectures that use the scalable memory and computational resources provided by the distributed training system. At least four research projects that tackle large-scale learning challenges are planned as joint projects.
Opportunities for Collaboration
The goal of our collaboration with the center is to drive discovery in data science and research, with scientists trained to use CUDA, the parallel programming model for tapping the computing performance of GPU-accelerated systems.
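To give a sense of what CUDA programming looks like, here is a minimal, self-contained vector-addition example; it is an illustrative sketch of the programming model only, not code drawn from NYU’s coursework or research.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements -- the core idea of
// CUDA's data-parallel programming model.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;            // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float* h_a = (float*)malloc(bytes);
    float* h_b = (float*)malloc(bytes);
    float* h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void**)&d_a, bytes);
    cudaMalloc((void**)&d_b, bytes);
    cudaMalloc((void**)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The same kernel function runs across thousands of lightweight GPU threads at once, which is what makes CUDA a natural fit for the dense linear algebra at the heart of deep neural network training.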
We’ll also work to advance Torch, a machine learning framework with modules for deep neural networks and optimization (among hundreds of others), originally developed at NYU. Torch will benefit from these advances in algorithms and distributed training systems, enabling researchers to push the boundaries of deep learning.
As one of the world’s 23 GPU Centers of Excellence, NYU’s Center for Data Science will use equipment and grants provided by NVIDIA to support research and academic programs. Researchers have already deployed our GPU hardware and software toolkit and used it in courses offered by the NYU Courant Institute of Mathematical Sciences as well as the Center for Data Science.
Other collaboration opportunities under consideration also focus on large-scale machine learning, including a deep learning hackathon series and a summer school on machine learning.