Coming soon – robots with human-like precision

by Nigel Smith

Manufacturers are seeking to do more with robots, demanding greater flexibility and innovation. Purchasers will expect smaller, more flexible designs that fit easily into existing production lines, while existing robots are more easily repurposed and reassigned to new tasks. In these environments, robots will increasingly be used to pick and move products in warehouses or around the production line.

One famous example is Amazon’s Kiva robots: mobile drive units that carry shelves of products across the warehouse to stationary workers. Other growth areas will include industrial robots that operate and tend CNC machines, and there are also increasing possibilities in welding applications.

But one feature will be crucial if robots are to perform these tasks successfully: 2D and 3D vision systems. Unlike “blind” robots ― those without vision systems ― which complete simple repetitive tasks, robots with machine vision react intuitively to their surroundings.

Greater precision

In the case of 2D vision systems, the robot is equipped with a single camera. This approach is better suited to applications where reading colours or textures is important, like barcode detection.
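As a rough illustration (not taken from any specific vendor’s system), the core of single-camera barcode reading is a one-dimensional problem: take a scanline of pixel intensities, threshold it, and run-length encode the dark and light bars. The function below is a minimal sketch of that step; production readers add calibration, ratio decoding and error correction.

```python
def decode_scanline(pixels, threshold=128):
    """Return (is_dark, run_length) pairs for one row of grayscale pixels."""
    runs = []
    current = pixels[0] < threshold  # True = dark bar
    length = 0
    for p in pixels:
        dark = p < threshold
        if dark == current:
            length += 1
        else:
            runs.append((current, length))
            current, length = dark, 1
    runs.append((current, length))
    return runs

# A synthetic scanline: 3 dark pixels, 2 light, 4 dark
scanline = [0, 0, 0, 255, 255, 10, 20, 5, 15]
print(decode_scanline(scanline))  # → [(True, 3), (False, 2), (True, 4)]
```

A real decoder would then map the ratios between run lengths to symbol digits according to the barcode standard in use.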

3D systems, on the other hand, evolved from spatial computing research at the Massachusetts Institute of Technology (MIT) in 2003. Multiple cameras create a 3D model of the target object, making these systems especially suited to any task where shape or position is important. That includes precision bin picking, one of the most sought-after tasks for robots, which we’ll look at in more detail below.
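To see how multiple cameras yield a 3D model, consider the basic building block: triangulating one point seen by two calibrated cameras. The sketch below uses the standard linear (DLT) method with made-up projection matrices and image points; it is a simplified assumption of how such a system works, not a description of any particular product.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two cameras.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null-space vector = homogeneous 3D point
    return X[:3] / X[3]   # dehomogenize

# Two pinhole cameras with identity intrinsics, 1 unit apart along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Image observations of a point at (1, 2, 10) in the first camera's frame
x1 = (0.1, 0.2)   # (1/10, 2/10)
x2 = (0.0, 0.2)   # ((1-1)/10, 2/10)
print(triangulate(P1, P2, x1, x2))  # ≈ [1. 2. 10.]
```

Repeating this for many matched image features, across more than two views, is what builds up the dense 3D model the robot reasons about.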

Both 2D and 3D vision systems have a lot to offer. 3D systems, in particular, can overcome some of the errors 2D-equipped robots encounter when executing physical tasks, errors that would otherwise leave human workers to diagnose and resolve the malfunction or the resulting bottleneck.

Going forward, robots equipped with 3D vision systems have potential for reading barcodes, checking for defects in parts such as engine components or wood surfaces, inspecting packaging, checking the orientation of components and more.
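One of those inspection tasks, checking the orientation of a component, has a classic solution worth sketching: compute the angle of the part’s major axis from second-order image moments of its binary silhouette. The example below is a minimal, hypothetical version using a synthetic mask, not code from any vision product.

```python
import numpy as np

def part_orientation(mask):
    """Angle (radians) of the major axis of a binary part mask,
    computed from second-order central image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# A 2x6 horizontal bar: its major axis lies along x, so the angle is ~0
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 1:7] = True
print(np.degrees(part_orientation(mask)))  # ≈ 0.0
```

An inspection cell would compare this measured angle against the expected orientation and flag (or re-grip) any part outside tolerance.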

Human-like reliability

Wendy Tan White, CEO of the robotics software company Intrinsic, mentions “cheaper sensing, and more abundant data”. In other words, we can expect to see the focus of robotics shift beyond sensor device hardware towards building AI that helps optimize the use of these sensors, and ultimately improves robot performance.

That will include combinations of machine vision with learning capabilities. Take precision bin picking applications, for instance. With previous robot systems, professional computer-aided design (CAD) programming was needed to ensure the robot could recognize shapes. While these CAD systems could identify any given item in a bin, the system would run into issues if ― for example ― items appeared in random order during a bin picking task.

Instead, advanced vision systems use passive imaging, where the image is formed from photons naturally emitted or reflected by the object itself. The robot can then detect items automatically, whatever their shape or the order in which they appear.
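Order-independent detection of that kind typically rests on segmenting the image into separate blobs, one per item, with no prior model of where each item should be. The sketch below shows the idea with a simple 4-connected flood fill over a synthetic binary image; it is an illustrative assumption, not the algorithm of any particular system.

```python
import numpy as np
from collections import deque

def count_items(mask):
    """Count separate items in a binary image via 4-connected flood fill,
    so detection does not depend on item order or shape."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new, unvisited blob
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:                    # flood-fill this blob
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return count

# Three separate "items" placed at arbitrary positions in the bin image
bin_image = np.zeros((10, 10), dtype=bool)
bin_image[1:3, 1:3] = True
bin_image[5:8, 4:6] = True
bin_image[0:2, 7:10] = True
print(count_items(bin_image))  # → 3
```

Each labelled blob can then be passed on for pose estimation and grasp planning, regardless of how the items happened to land in the bin.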


Nigel Smith is CEO of TM Robotics