3D Imaging and Machine Vision

Today, 2D vision devices are widely used to recognize and measure objects, for example on a conveyor belt. But this works only when the product's size and type are known in advance: the product's height is treated as a fixed value, so once the device detects the object, that height is simply assumed. What happens, though, when several different products share the conveyor belt? In that case a 2D device cannot determine each product's height, and the measurement fails.
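To see why a single 2D camera depends on that fixed-height assumption, consider the sketch below. It uses a simple pinhole-camera relationship to convert an apparent width in pixels into millimetres; the focal length, camera height and pixel measurements are illustrative assumptions, not values from any particular system.

```python
# Minimal sketch of the fixed-height assumption in 2D measurement.
# Assumes a simple pinhole-camera model; all numbers are illustrative.

def object_width_mm(pixel_width: float, distance_mm: float, focal_px: float) -> float:
    """Convert an apparent width in pixels to millimetres,
    given the distance from the camera to the object's top surface."""
    return pixel_width * distance_mm / focal_px

FOCAL_PX = 1000.0          # focal length in pixels (illustrative)
CAMERA_HEIGHT_MM = 800.0   # camera mounted 800 mm above the belt (illustrative)
PIXEL_WIDTH = 250.0        # measured width of the object in the image

# If every product is known to be 100 mm tall, the distance to its top
# surface is fixed, and the conversion is unambiguous:
known_distance = CAMERA_HEIGHT_MM - 100.0
print(object_width_mm(PIXEL_WIDTH, known_distance, FOCAL_PX))  # 175.0 mm

# With a mix of products of unknown height, the same pixel width maps to
# very different real-world sizes, so the 2D measurement becomes ambiguous:
for product_height in (50.0, 100.0, 200.0):
    d = CAMERA_HEIGHT_MM - product_height
    print(product_height, "mm tall ->", object_width_mm(PIXEL_WIDTH, d, FOCAL_PX), "mm wide")
```

The same image therefore corresponds to different real-world sizes depending on how tall the product is, which is exactly the information a 2D device cannot recover on its own.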

This is why the advent of embedded software and algorithms that can detect, measure and process 3D pixels (voxels) is such an exciting development. According to embedded developers familiar with the technology, we will soon see 3D-vision-enabled robots in factories that can detect and sort products under complicated conditions, such as unknown product size and type.

3D machine vision is achieved through a number of techniques such as point-cloud processing, stereo vision and 3D triangulation. Stereo vision works much like human vision: the brain uses the disparity, the displacement between the images captured by the two eyes, to build a sense of depth and to judge distances. Such a system certainly needs more processing time, but we now have a range of multicore processors that can meet this requirement.
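As a rough illustration of the stereo approach, the sketch below uses OpenCV's block-matching stereo to turn a pair of rectified left/right images into a disparity map, then converts disparity to depth by triangulation (Z = f * B / d). The file names, focal length and baseline are assumptions for the example only.

```python
# Minimal sketch of depth from stereo disparity, using OpenCV's block matcher.
# The image file names, focal length and baseline are placeholder values.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image (assumed to exist)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image (assumed to exist)

# Block matching finds, for each pixel, how far it shifted between the two views.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

FOCAL_PX = 700.0     # focal length in pixels, from camera calibration (illustrative)
BASELINE_MM = 60.0   # distance between the two camera centres (illustrative)

# Triangulation: depth is inversely proportional to disparity, Z = f * B / d.
valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = FOCAL_PX * BASELINE_MM / disparity[valid]

print("median scene depth:", np.median(depth_mm[valid]), "mm")
```

The per-pixel matching is what makes stereo computationally heavy, and it parallelizes well across image regions, which is why multicore embedded processors are a natural fit for this workload.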