Vision guided robotics (VGR)
Deep learning models trained extensively on synthetic image data achieve higher precision and reliability in object detection, positioning, and gripping tasks.
Capturing real data with 3D cameras or sensors in a lab before the algorithms are deployed in a manufacturing or logistics environment, as well as training on a production site, is a time-, effort- and cost-intensive process. This is especially significant for organic objects, for object categories in which every unit has a unique look and shape, and for generic robotic vision applications that must be trained on hundreds to thousands of different objects.
Synthetic image data removes the need to collect and annotate 3D image data, shortening the data acquisition process from years to weeks. Objects can easily be imported or selected from a library, then physically simulated, placed, or distributed according to your needs. Variations in the environment surrounding the objects can also be simulated. The synthetic image data always comes pre-annotated, with all the image channels and information that an industrial 3D sensor or camera would provide.
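To illustrate the idea of pre-annotated synthetic scenes, here is a minimal, self-contained Python sketch. The function name, the object IDs, and the listed sensor channels are all hypothetical placeholders, not part of any real product API; a real pipeline would run a physics simulation and render actual image channels, while this stub only mirrors the output structure (random 6-DoF poses plus per-object labels, emitted alongside the channel list with no manual annotation step).

```python
import json
import random

def generate_synthetic_scene(object_ids, seed=None):
    """Place objects at random poses and return a pre-annotated scene record.

    Hypothetical stand-in for a simulation/rendering pipeline: the channel
    names are placeholders for what an industrial 3D sensor would output,
    and annotations are produced automatically alongside them.
    """
    rng = random.Random(seed)
    annotations = []
    for obj in object_ids:
        annotations.append({
            "object_id": obj,
            # 6-DoF pose: position in meters, orientation as Euler angles in radians
            "position": [rng.uniform(-0.5, 0.5),
                         rng.uniform(-0.5, 0.5),
                         rng.uniform(0.0, 0.3)],
            "rotation": [rng.uniform(-3.14, 3.14) for _ in range(3)],
        })
    return {
        # typical channels an industrial 3D sensor/camera would provide
        "channels": ["rgb", "depth", "normals", "instance_mask"],
        "annotations": annotations,
    }

scene = generate_synthetic_scene(["bolt", "nut", "washer"], seed=42)
print(json.dumps(scene["annotations"][0], indent=2))
```

Because every pose is generated rather than measured, the ground-truth labels come for free and the random seed makes each dataset reproducible.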

Why use synthetic image data for vision guided robotics?
Haven’t found your use case?
If you are facing challenges with data acquisition and are considering synthetic images for an application not listed here, contact us – we would be glad to work with you towards a solution.