OREANDA-NEWS. November 20, 2015. At a press event in San Francisco earlier this month, Stereolabs showed off the real-time 3D SLAM — or simultaneous localization and mapping — features of its Stereolabs ZED camera, the world’s first consumer-accessible high-definition stereo 3D camera for depth sensing. The camera promises to make creating detailed three-dimensional maps as simple as tossing a drone into the air.

The San Francisco-based startup’s not-so-secret weapon: Jetson TX1. Introduced last week, Jetson TX1 is a credit-card-sized module that harnesses the power of machine learning to enable a new generation of smart, autonomous machines.

Using NVIDIA’s tools, the Stereolabs team was able to port their code directly from a PC to Jetson TX1. Our tools also helped them improve the performance of their 3D mapping software by 50 percent, allowing them to map 3D environments in real time.

“When the Jetson TK1 came out it was a revolution for us, because everything we could do on our laptops we could do on our embedded platform,” says Stereolabs CEO Cecile Schmollgruber.

Now, with the Jetson TX1, Stereolabs was able to mount its ZED 3D camera on a drone and fly it over an old French chateau, creating a 3D map of the historic estate as it flew.

3D mapping is just one of a host of next-generation applications for Jetson TX1. We built Jetson TX1 to power a new wave of millions of smart devices. Jetson TX1’s advanced GPU allows it to incorporate capabilities such as machine learning, computer vision, and navigation.

"In addition to depth sensing…we’re able to run tracking and mapping, in real time, aboard the drone thanks to Jetson TX1 and Zed,” says Stereolabs CTO Edwin Azzam 

The demonstration hints at a host of applications for Stereolabs ZED. Real estate agents can use it to create immersive digital recreations of their listings. Game developers can use it to put gamers in the middle of iconic settings. Map makers can use it to chart tough terrain quickly and easily.

“What we’re showing here is how we capture high-quality depth maps, track the camera, and fuse the 3D reconstructions together to create the 3D mesh of the environment,” says Azzam. “The resulting mesh can be imported into any 3D software, and has really simplified the process of creating and editing a 3D model.”
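The fusion step Azzam describes — transforming each frame’s depth points by the tracked camera pose and merging them into one world-frame model — can be sketched in a few lines. This is an illustrative outline only, not Stereolabs’ implementation; the poses and points below are made-up toy values, and a real pipeline would also filter, deduplicate, and mesh the fused cloud.

```python
# Illustrative sketch of map fusion: each frame's 3D points, expressed in
# the camera's local frame, are transformed by that frame's tracked pose
# (rotation R, translation t) into world coordinates and merged together.

def transform(pose, point):
    """Apply a rigid-body pose (3x3 rotation R, translation t) to a 3D point."""
    R, t = pose
    x, y, z = point
    return tuple(
        R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i]
        for i in range(3)
    )

def fuse_frames(frames):
    """Merge per-frame point clouds into a single world-frame cloud.

    frames: list of (pose, points) pairs, where pose = (R, t).
    """
    world_cloud = []
    for pose, points in frames:
        world_cloud.extend(transform(pose, p) for p in points)
    return world_cloud

# Two toy frames: identity pose, then the camera moved 1 m along x.
identity = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0, 0, 0])
shifted = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 0, 0])
cloud = fuse_frames([(identity, [(0, 0, 2)]), (shifted, [(0, 0, 2)])])
# cloud == [(0, 0, 2), (1, 0, 2)]
```

The same point seen from two camera positions lands at two world coordinates, which is why accurate camera tracking is the prerequisite for a clean fused map.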