News

ams OSRAM’s VCSEL Laser Technology Takes Aim to Bring 3D Vision to Robots

February 03, 2022 by Jake Hertz

As the world strives to bring vision to different areas, from drones to vehicles, ams OSRAM and Luxonis have teamed up to develop robotic vision systems with vertical-cavity surface-emitting laser (VCSEL) technology.

Often, when we hear of the significant advances occurring in vision systems and computer vision, we attribute these innovations to software.

However, at the heart of this developing field lies innovation in vision hardware.

One of these innovative imaging hardware technologies is the VCSEL (pronounced "vixel"). 

 


VCSEL directionality versus LEDs and EELs. Image used courtesy of Radiant Vision Systems

 

Earlier this week, imaging hardware company ams OSRAM announced a collaboration with 3D vision systems company Luxonis to use VCSEL technology for future autonomous robots and vehicles.

This article will look at VCSEL technology, what makes it innovative, and what ams OSRAM and Luxonis hope to achieve together.

 

VCSEL Technology

VCSEL technology is a form of semiconductor laser that has become increasingly popular.

Unlike other common semiconductor light sources, such as LEDs, which emit light omnidirectionally, and edge-emitting lasers (EELs), which emit from the side of the chip, a VCSEL is unique in that it emits its light vertically from the chip's top surface.

By emitting light directly from its top surface, a VCSEL offers a light source with an extremely high level of directionality, meaning that its beam can be precisely controlled and directed. 

 


Cross-section of a VCSEL. Image used courtesy of II-VI Incorporated

 

Often, VCSELs are not used as standalone devices but are instead integrated as arrays of thousands of emitters on a single chip. 

In the context of vision systems, these VCSEL arrays can be used in time-of-flight (ToF) distance sensors, yielding thousands of individual distance measurements. Users can then utilize these ToF data points to create detailed and accurate 3D maps of an environment. 

This concept is precisely the working principle of LiDAR (light detection and ranging) sensors, which have become critically important in recent years for environmental mapping in autonomous vehicles.
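
To make the arithmetic concrete, the sketch below (illustrative only, not vendor code) shows how a single round-trip time measured by a ToF pixel converts to a distance using the speed of light.

```python
# Minimal sketch of the time-of-flight principle: each emitter/detector pair
# measures the round-trip time of an emitted light pulse, and the distance
# follows from the speed of light. Illustrative only; not vendor code.

C = 299_792_458  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time (seconds) into a distance (meters)."""
    return C * round_trip_time_s / 2  # divide by 2: the pulse travels out and back

# Example: a return detected 20 ns after emission corresponds to ~3 m.
print(f"{tof_distance(20e-9):.2f} m")  # -> 3.00 m
```

A VCSEL array repeats this measurement across thousands of points at once, which is what turns a single distance reading into a 3D map.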

The applications of VCSEL-based sensors like LiDAR extend far beyond autonomous vehicles, however, and include autonomous guided robots, drones, and more.

 

ams OSRAM and Luxonis Team Up

Earlier this week, ams OSRAM, a designer of VCSEL technology, announced a collaboration with Luxonis, a provider of 3D vision solutions.

The companies specifically cite that the goal of their collaboration is to develop vision systems for applications including autonomous guided vehicles, robots, and drones. 

From Luxonis' point of view, the most attractive aspect of the collaboration was access to ams OSRAM's BELAGO1.1 Dot Projector technology.

 


The BELAGO1.1 Dot Projector (right) and its block diagram (left). Image used courtesy of ams OSRAM

 

The BELAGO1.1 is a unique infrared VCSEL dot projector designed to create a high-contrast dot pattern, making it particularly well suited for active stereo vision applications. 

In active stereo vision solutions, two infrared cameras simultaneously image the high-contrast dot pattern projected by the VCSEL source. Software then compares the images acquired by both cameras to calculate depth and output a 3D map of the environment. 

In the context of robotics or autonomous vehicles, this could provide the device with depth perception and a detailed understanding of its environment, allowing it to navigate and avoid obstacles successfully.
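
As a rough illustration of the depth-from-disparity step described above, the following sketch runs OpenCV's stereo block matcher on a pair of infrared frames. The file names, focal length, and baseline are placeholder values, not parameters of the ams OSRAM or Luxonis hardware.

```python
# Minimal sketch of active stereo depth estimation with OpenCV. The projected
# dot pattern matters because textureless surfaces (blank walls, floors) give
# the matcher nothing to correlate; the dots add artificial texture everywhere.
import cv2
import numpy as np

left = cv2.imread("ir_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file
right = cv2.imread("ir_right.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

# Semi-global block matching finds, for each pixel, how far the dot pattern
# shifts between the two views (the disparity, in pixels).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

# Depth follows from similar triangles: Z = f * B / disparity.
FOCAL_PX = 450.0    # focal length in pixels (placeholder calibration value)
BASELINE_M = 0.075  # distance between the two cameras in meters (placeholder)
depth_m = np.zeros_like(disparity)
valid = disparity > 0
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```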

 


The OAK-D Pro-PoE Spatial AI camera leverages ams’ VCSEL tech. Image used courtesy of Luxonis

 

Already, Luxonis has leveraged ams OSRAM's VCSEL expertise to create its OAK-D Pro-PoE Spatial AI camera for robotics applications. 

The camera integrates the BELAGO1.1 dot projector with a flood illuminator and 4 TOPS of onboard processing to perform spatial AI computing at the edge.
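
Luxonis provides a Python SDK, DepthAI, for its OAK cameras. The sketch below follows the pattern of its publicly documented stereo-depth examples to stream depth frames and enable the IR dot projector; node and method names can differ between depthai versions, so treat it as an approximation rather than verified vendor code.

```python
# Sketch of reading a depth stream from an OAK-D Pro-style camera with the
# DepthAI Python API. Based on Luxonis' documented examples; exact node and
# method names may vary by depthai version.
import depthai as dai

pipeline = dai.Pipeline()
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")

mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    # On Pro models, turn on the IR dot projector (the BELAGO1.1-based emitter).
    device.setIrLaserDotProjectorBrightness(765)  # drive current in mA
    queue = device.getOutputQueue("depth", maxSize=4, blocking=False)
    depth_frame = queue.get().getFrame()  # 16-bit depth image in millimeters
```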

 

Pushing Vision Hardware Forward

As the world searches for newer and better ways to build computer vision and vision systems, finding a partner and making use of each other's skill sets and technology can sometimes be the best route to take.

Through this collaboration, the two companies have leveraged each other's expertise to bring more 3D sensing solutions to market. As time passes, we can expect this partnership, and possibly others like it, to yield even more solutions for applications including autonomous vehicles, robots, and drones.