A robot created by Columbia Engineers learns to understand itself rather than the environment around it.
Our perception of our bodies is not always correct or realistic, as any athlete or fashion-conscious person knows, but it’s a crucial factor in how we behave in society. Whether you are playing ball or getting dressed, your brain is continuously planning ahead so that you can move your body without bumping, tripping, or falling.
Humans develop their body models as infants, and robots are starting to do the same. A team at Columbia Engineering revealed today that they have developed a robot that, for the first time, can learn a model of its whole body from scratch without any human aid. In a recent paper published in Science Robotics, the researchers explain how their robot built a kinematic model of itself, and how it used that model to plan movements, accomplish objectives, and avoid obstacles in a range of scenarios. It even detected and compensated for damage to its own body automatically.
The researchers placed a robotic arm within a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment.
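The paper's actual architecture and training details are not reproduced here, but the overall setup — random motor babbling, camera-derived occupancy labels, and a learned mapping from (motor command, spatial point) to "is that point inside the robot's body?" — can be sketched for a toy two-link planar arm. Everything below is a hypothetical illustration: the link lengths, arm thickness, and dataset format are invented, and the analytic `occupied` function stands in for the labels the real system would extract from its five cameras (and for what the deep network would ultimately learn to predict).

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 1.0     # hypothetical link lengths
ARM_RADIUS = 0.05     # hypothetical link thickness

def joint_positions(q):
    """Forward kinematics of a 2-link planar arm: base, elbow, tip."""
    elbow = np.array([L1 * np.cos(q[0]), L1 * np.sin(q[0])])
    tip = elbow + np.array([L2 * np.cos(q[0] + q[1]),
                            L2 * np.sin(q[0] + q[1])])
    return np.zeros(2), elbow, tip

def point_segment_dist(p, a, b):
    """Distance from point p to the line segment from a to b."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def occupied(q, p):
    """Ground-truth label the cameras would supply:
    does point p fall inside the arm at joint configuration q?"""
    base, elbow, tip = joint_positions(q)
    return min(point_segment_dist(p, base, elbow),
               point_segment_dist(p, elbow, tip)) <= ARM_RADIUS

def collect_dataset(n):
    """Self-observation phase: random motor commands ('babbling')
    paired with random query points in the workspace."""
    Q = rng.uniform(-np.pi, np.pi, size=(n, 2))   # motor commands
    P = rng.uniform(-2.2, 2.2, size=(n, 2))       # query points
    y = np.array([occupied(q, p) for q, p in zip(Q, P)], dtype=float)
    X = np.hstack([Q, P])                         # network input: (q, p)
    return X, y

X, y = collect_dataset(2000)
```

In this framing, the robot's "self-model" is whatever function a deep network fits to `(X, y)`: given a motor command and a point in space, it predicts whether the robot's body occupies that point, with no explicit kinematic equations supplied by a human.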
“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.