Researchers at the University of California, Berkeley are working on a system that lets robots “imagine the future of their actions” so that they can interact with things they’ve never seen before. The technology is called visual foresight, and it allows “robots to predict what their cameras will see if they perform a particular sequence of actions.”
“In the same way that we can imagine how our actions will move the things in our environment, this method can allow a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
The researchers write:
These robotic imaginations are still relatively simple for now – predictions made only several seconds into the future – but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised exploration, where the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.
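To make the planning idea concrete, here is a minimal, hypothetical sketch in Python of how a learned video predictor might be used for control: sample random candidate action sequences, “imagine” their outcomes with the model, and keep the sequence whose predicted image best matches a goal. The function names, the 4-D action space, and the goal-image cost are illustrative assumptions only; the published system instead scores where user-designated pixels are predicted to move.

```python
import numpy as np

def plan_actions(predict_frames, frame, goal_image, horizon=5, samples=100):
    """Random-shooting planner sketch (not the Berkeley implementation).

    predict_frames(frame, actions) is assumed to be the learned model and to
    return the imagined image after executing the action sequence.
    """
    best_actions, best_cost = None, np.inf
    for _ in range(samples):
        # Candidate sequence of 4-D gripper motions (an illustrative action space).
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, 4))
        imagined = predict_frames(frame, actions)
        # Cost: how far the imagined outcome is from the desired scene.
        cost = np.mean((imagined - goal_image) ** 2)
        if cost < best_cost:
            best_actions, best_cost = actions, cost
    return best_actions
```

Because the model only has to be rolled forward a few seconds, even this simple sampling loop is enough to choose pushes that rearrange objects without disturbing obstacles.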
The system uses convolutional recurrent video prediction, also called dynamic neural advection (DNA), to “predict how pixels in an image will move from one frame to another based on the robot’s actions.” This means that it can play out scenarios before it starts touching or moving objects.
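As a rough illustration of that idea (not the researchers’ actual architecture), the sketch below shows an action-conditioned convolutional recurrent step in PyTorch: for every pixel it predicts a distribution over small local shifts and uses it to warp the current frame into the next one. All layer sizes, names, and the 4-D action are assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelMotionPredictor(nn.Module):
    """Toy action-conditioned pixel-motion predictor (illustrative only)."""

    def __init__(self, action_dim=4, hidden=32, kernel=5):
        super().__init__()
        self.kernel = kernel
        # Encode the frame plus a spatially broadcast action vector.
        self.encode = nn.Conv2d(3 + action_dim, hidden, 3, padding=1)
        # Simple convolutional recurrence over the hidden state.
        self.recur = nn.Conv2d(hidden * 2, hidden, 3, padding=1)
        # For every pixel, logits over kernel*kernel candidate local shifts.
        self.to_kernels = nn.Conv2d(hidden, kernel * kernel, 3, padding=1)

    def forward(self, frame, action, state):
        b, _, h, w = frame.shape
        act = action.view(b, -1, 1, 1).expand(b, action.shape[1], h, w)
        feat = torch.relu(self.encode(torch.cat([frame, act], dim=1)))
        state = torch.tanh(self.recur(torch.cat([feat, state], dim=1)))
        # Per-pixel distribution over local shifts.
        kernels = F.softmax(self.to_kernels(state), dim=1)
        # Gather shifted copies of the frame and blend them per pixel.
        patches = F.unfold(frame, self.kernel, padding=self.kernel // 2)
        patches = patches.view(b, 3, self.kernel * self.kernel, h, w)
        next_frame = (patches * kernels.unsqueeze(1)).sum(dim=2)
        return next_frame, state

# Rolling the model forward several steps yields the "imagined" future frames.
model = PixelMotionPredictor()
frame = torch.rand(1, 3, 64, 64)
state = torch.zeros(1, 32, 64, 64)
for action in torch.rand(5, 1, 4):  # a candidate 5-step action sequence
    frame, state = model(frame, action, state)
```

Predicting pixel motion rather than raw pixel values is what lets a model like this generalize to objects it has never seen, since it only has to learn how image content moves in response to the robot’s actions.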
“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model. The robot needs no special information about its surroundings or any special sensors. A camera is used to observe the scene, and the robot then acts accordingly, much as we can predict what will happen if we push objects on a table into each other.
“Children can learn about their world by playing with toys, moving them around, grasping them, and so on. Our aim with this research is to allow a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to anticipate complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”