MIT’s New AI Tech Helps Robots Understand Objects They Have Not Seen Before

Imagine letting a robot clean your house while you are at work, or even clear your table. That’s just what a new robot developed by researchers at MIT can do.

A team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a novel system that lets robots inspect random objects and visually understand them well enough to accomplish specific tasks, without ever having seen them before.

The system, dubbed Dense Object Nets (DON), looks at objects as collections of points that serve as visual roadmaps of sorts.

This approach lets robots better understand and manipulate items, and even allows them to pick up a specific object from among a clutter of similar objects – a valuable skill for the kinds of machines that companies such as Amazon and Walmart use in their warehouses, the researchers said.

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said Lucas Manuelli, a doctoral student at CSAIL.

“For example, existing algorithms would be unable to grab a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side,” Manuelli added.

The DON system essentially creates a series of coordinates on a given object, which serve as a sort of visual roadmap, giving the robot a better understanding of what it needs to grasp, and where.

It is “self-supervised” and doesn’t require any human annotations.
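To make the “visual roadmap” idea more concrete, here is a minimal, hypothetical sketch of how dense per-pixel descriptors can be used to match a point across two views of an object: a trained network (not shown here) assigns each pixel a descriptor vector, and a chosen point in one image is located in another image by finding the pixel with the nearest descriptor. The function name, array shapes, and random stand-in descriptors below are illustrative assumptions, not the researchers’ actual code.

```python
import numpy as np

def find_correspondence(desc_a, desc_b, pixel_a):
    """Find the pixel in image B whose descriptor best matches
    the descriptor at `pixel_a` in image A.

    desc_a, desc_b: dense descriptor images of shape (H, W, D),
                    one D-dimensional vector per pixel.
    pixel_a:        (row, col) of the query point in image A.
    """
    query = desc_a[pixel_a[0], pixel_a[1]]          # (D,) descriptor of the query point
    diff = desc_b - query                           # broadcast over every pixel of B
    dist = np.linalg.norm(diff, axis=-1)            # (H, W) descriptor-space distances
    best = np.unravel_index(np.argmin(dist), dist.shape)
    return best, dist[best]

# Toy usage with random descriptors standing in for a trained network's output.
rng = np.random.default_rng(0)
desc_a = rng.standard_normal((480, 640, 3)).astype(np.float32)
desc_b = rng.standard_normal((480, 640, 3)).astype(np.float32)
pixel_b, distance = find_correspondence(desc_a, desc_b, (120, 300))
print(pixel_b, distance)
```

In this sketch, the same nearest-descriptor lookup would let a robot re-identify a point it was shown once (say, a mug handle or a toy’s ear) in a new image, even if the object has moved or rotated.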

In one set of tests in the study, performed on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy’s right ear from a range of different configurations.

This showed that, among other things, the system can distinguish left from right on symmetrical objects.

“In factories robots often need complex part feeders to work reliably,” Manuelli said. “But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.”

The team will present their paper on the system at the upcoming Conference on Robot Learning in Zurich, Switzerland.