Vision guided robotics (VGR)

Deep learning models trained extensively on synthetic image data achieve higher precision and reliability in object detection, positioning, and gripping tasks.

For reasons of flexibility and adaptability, the bin-picking segment is moving from rule-based, classical computer vision (CV) to computer vision based on deep learning. To identify and segment objects robustly and reliably, and ultimately grip them successfully, deep learning algorithms require large datasets that represent variations in objects, textures, and environmental conditions.
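One common way to produce such variation in synthetic data is domain randomization: every rendered scene samples object poses, textures, and lighting from broad ranges. The sketch below illustrates the idea under assumed parameter names and ranges; it is not tied to any specific rendering tool.

```python
import random

def randomize_scene_params(num_objects, seed=None):
    """Sample one domain-randomized scene configuration.

    Returns pose, texture, and lighting variations for a bin of objects,
    mimicking the variability a deep learning model must learn to handle.
    All parameter names and ranges here are illustrative assumptions.
    """
    rng = random.Random(seed)
    scene = {
        # Global environment variations
        "light_intensity": rng.uniform(0.3, 1.5),
        "camera_height_m": rng.uniform(0.8, 1.2),
        "objects": [],
    }
    for _ in range(num_objects):
        scene["objects"].append({
            # Per-object pose and appearance variations
            "position_xy": (rng.uniform(-0.3, 0.3), rng.uniform(-0.3, 0.3)),
            "rotation_deg": rng.uniform(0.0, 360.0),
            "texture_id": rng.randrange(10),
        })
    return scene
```

Generating thousands of such configurations and rendering each one yields a dataset whose variation would be impractical to capture with a physical camera setup.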

Capturing real data with 3D cameras/sensors in a lab before the algorithms are deployed in a manufacturing or logistics environment, as well as training on a production site, is a time-, effort-, and cost-intensive process. This is especially significant for organic objects, for object categories in which every unit has a unique look and shape, and for generic robotic vision applications, which need to be trained on hundreds to thousands of different objects.

By using synthetic image data, the process of collecting and annotating 3D image data can be avoided, shortening data acquisition from years to weeks. Objects can easily be imported or selected from a library, then physically simulated, placed, or distributed as needed. Variations in the environment where the objects are located can also be simulated. The synthetic image data always comes pre-annotated, with all the image channels and information that an industrial 3D sensor/camera would provide.
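To make "pre-annotated" concrete, the sketch below models one synthetic frame as it might be delivered for training: sensor-like channels (color, depth) alongside ground-truth labels (instance mask, object poses). The field names are illustrative assumptions, not any specific product's data format.

```python
from dataclasses import dataclass, field

@dataclass
class SyntheticSample:
    """One pre-annotated synthetic frame: sensor-like image channels
    plus the ground-truth labels that would otherwise require manual
    annotation. Field names here are illustrative assumptions."""
    rgb: list            # H x W x 3 color image
    depth: list          # H x W depth map in metres
    instance_mask: list  # H x W per-pixel object instance IDs (0 = background)
    poses: dict = field(default_factory=dict)  # instance ID -> 6-DoF pose tuple

def annotation_complete(sample: SyntheticSample) -> bool:
    """Check that every object visible in the mask has a ground-truth pose."""
    visible_ids = {pid for row in sample.instance_mask for pid in row if pid != 0}
    return visible_ids <= set(sample.poses)
```

Because the renderer knows the exact state of every simulated object, labels like these are generated automatically and are pixel-perfect, with no manual annotation step.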


Haven’t found your use case?

If you are facing challenges with data acquisition and are considering the use of synthetic images for an application not listed here, contact us – we would be glad to work together towards a solution.