28.08.2017, 21:53
Intel Unveils Neural Compute Engine in Movidius Myriad X VPU to Unleash AI at the Edge
Source: Intel
OREANDA-NEWS. Intel today introduced its new Movidius™ Myriad™ X vision processing unit (VPU), advancing Intel’s end-to-end portfolio of artificial intelligence (AI) solutions to deliver more autonomous capabilities across a wide range of product categories including drones, robotics, smart cameras and virtual reality.
Myriad X is the world’s first system-on-chip (SoC) shipping with a dedicated Neural Compute Engine for accelerating deep learning inferences at the edge. The Neural Compute Engine is an on-chip hardware block specifically designed to run deep neural networks at high speed and low power without compromising accuracy, enabling devices to see, understand and respond to their environments in real time. With the introduction of the Neural Compute Engine, the Myriad X architecture is capable of 1 TOPS of compute performance on deep neural network inferences.
"We’re on the cusp of computer vision and deep learning becoming standard requirements for the billions of devices surrounding us every day," said Remi El-Ouazzane, vice president and general manager of Movidius, Intel New Technology Group. "Enabling devices with humanlike visual intelligence represents the next leap forward in computing. With Myriad X, we are redefining what a VPU means when it comes to delivering as much AI and vision compute power as possible, all within the unique energy and thermal constraints of modern untethered devices."
Capable of delivering more than 4 TOPS of total performance, Myriad X combines a tiny form factor with on-board processing, making it ideal for autonomous device solutions. In addition to its Neural Compute Engine, Myriad X uniquely combines imaging, visual processing and deep learning inference in real time with:
Programmable 128-bit VLIW Vector Processors: Run multiple imaging and vision application pipelines simultaneously with the flexibility of 16 vector processors optimized for computer vision workloads.
Increased Configurable MIPI Lanes: Connect up to eight HD-resolution RGB cameras directly to Myriad X through the 16 MIPI lanes in its rich set of interfaces, supporting up to 700 million pixels per second of image signal processing throughput.
Enhanced Vision Accelerators: Utilize over 20 hardware accelerators to perform tasks such as optical flow and stereo depth without introducing additional compute overhead.
2.5 MB of Homogeneous On-Chip Memory: The centralized on-chip memory architecture allows for up to 450 GB per second of internal bandwidth, minimizing latency and reducing power consumption by cutting off-chip data transfer.
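As a rough sanity check, the quoted 700 million pixels per second of ISP throughput is consistent with the eight-camera figure. The sketch below assumes 720p ("HD") frames at 60 fps; the frame rate and exact resolution are illustrative assumptions, not figures from the announcement:

```python
# Back-of-envelope check: can 8 HD cameras fit in a 700 Mpix/s ISP budget?
cameras = 8
width, height = 1280, 720   # 720p "HD" resolution (assumption)
fps = 60                    # assumed frame rate, not quoted in the release

pixels_per_second = cameras * width * height * fps
print(f"Aggregate input: {pixels_per_second / 1e6:.0f} Mpix/s")  # ~442 Mpix/s

budget = 700e6  # 700 million pixels per second, per the spec list above
print(f"Within budget: {pixels_per_second <= budget}")
```

Under these assumptions, eight 720p/60 streams consume roughly 442 Mpix/s, leaving headroom within the stated 700 Mpix/s budget; higher resolutions or frame rates would change the arithmetic accordingly.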
Myriad X is the newest generation in a lineage of Movidius™ VPUs, which are purpose-built for embedded visual intelligence and inference. Movidius VPUs achieve significant performance at low power by merging three architectural elements that provide sustained high performance on deep learning and computer vision workloads: an array of programmable VLIW vector processors with an instruction set tuned to computer vision and deep learning workloads; a collection of hardware accelerators supporting image signal processing, computer vision, and deep learning inferences; and a commonly accessible intelligent memory fabric that minimizes data movement on chip.