Planning, design and development
FlexSight designs and implements artificial vision and autonomous perception solutions, with a special focus on embedded systems.
Such solutions mainly address industrial automation and logistics processes, in particular where anthropomorphic robots and autonomous vehicles are used in environments shared with human operators, or in cooperative robotics applications.
To this end, our solutions are designed to guarantee a high level of integration and safety for the operators involved in the processes.
FlexSight's strengths lie in its solid scientific background in artificial vision and autonomous robotics, and in the quality of its in-house software modules, which allow it to provide customers with highly reliable products at an affordable cost.
All-In-One devices for intelligent perception: from data to result
FlexSight develops integrated sensor systems capable of providing high-level perception skills to manipulators or autonomous mobile platforms.
These skills include:
detecting and locating objects and obstacles
estimating motion in the working environment
building 3D reconstructions of the surrounding environment
Typical use cases include pick & place and object-handling applications in environments populated by human personnel.
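To give a flavour of the first of these skills, here is a minimal sketch of depth-based obstacle detection: flagging cells of a depth map that fall closer than a safety range. The grid, the threshold, and the function name are illustrative assumptions, not FlexSight's actual API; a real device works on calibrated RGB-D data with far more robust algorithms.

```python
# Minimal sketch: flag obstacle cells in a depth map (values in metres).
# The data layout, threshold and function name are illustrative
# assumptions, not FlexSight's actual interface.

def detect_obstacles(depth_map, max_range=1.0):
    """Return (row, col) cells closer than max_range metres."""
    hits = []
    for r, row in enumerate(depth_map):
        for c, d in enumerate(row):
            if 0.0 < d < max_range:  # 0.0 means invalid / no return
                hits.append((r, c))
    return hits

depth = [
    [2.5, 2.4, 0.8],
    [2.6, 0.7, 0.6],
    [0.0, 2.2, 2.3],
]
print(detect_obstacles(depth))  # → [(0, 2), (1, 1), (1, 2)]
```

In practice the resulting mask would be clustered into connected regions and projected into the robot's workspace before triggering any safety behaviour.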
The FlexSight C1 device was the first prototype developed in this direction. It integrates in a single chassis an RGB-D acquisition system, providing both images and the 3D structure of the environment, and a powerful mini computer capable of running advanced algorithms for 3D reconstruction and object localization.
These algorithms ship bundled with the device: they are preinstalled, ready to use, and easily configurable through a web interface accessible from any device with a web browser. The FlexSight C1 can be connected directly to the robot to be controlled during the production cycle, and it only requires the 3D CAD model of the objects to be localized as input.
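To illustrate what CAD-model-based localization involves, the following is a heavily simplified sketch: recovering the translation of an object by aligning the centroid of points sampled from the CAD model with the centroid of the observed scene points. Real pipelines estimate the full 6-DoF pose (rotation and translation) with far more sophisticated registration methods; every name here is illustrative.

```python
# Heavily simplified sketch of model-based localization: recover the
# translation of an object by matching the centroid of the CAD model's
# sample points against the centroid of the observed scene points.
# Real systems estimate full 6-DoF pose; all names are illustrative.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def estimate_translation(model_pts, scene_pts):
    """Translation mapping the model centroid onto the scene centroid."""
    cm, cs = centroid(model_pts), centroid(scene_pts)
    return tuple(cs[i] - cm[i] for i in range(3))

model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
scene = [(x + 0.5, y - 0.25, z + 2.0) for (x, y, z) in model]  # object moved
print(estimate_translation(model, scene))  # → (0.5, -0.25, 2.0)
```

Centroid alignment is only the coarse first step of a registration pipeline; refinement stages such as ICP would then iterate point correspondences to recover the full rigid transform.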
The FlexSight C2 is the natural evolution of the C1 device: it retains the capabilities of the previous version while adding important updates on both the hardware and software sides.
The hardware has been upgraded and the data-acquisition stage improved, allowing reliable operation even in suboptimal lighting conditions.
New perception algorithms based on deep-learning technologies have also been developed and integrated.