Researchers at DeepMind have taken baby steps in making artificial intelligence reason like humans.
In two papers, “A simple neural network module for relational reasoning” and “Visual Interaction Networks,” published on arXiv, DeepMind researchers showed that artificial intelligence is starting to reason the way humans do.
Even as AIs have surpassed humans at games such as Go and poker, defeating top players in both, their current level of intelligence – particularly in reasoning – remains inferior to that of humans.
“Physical reasoning is a core domain of human knowledge and among the earliest topics in AI,” DeepMind researchers said, “However, we still do not have a system for physical reasoning that can approach the abilities of even a young child.”
In the paper “A simple neural network module for relational reasoning,” DeepMind researchers developed an artificial intelligence model called “Relation Networks” (RN) – a model that can be plugged into broader deep learning architectures to significantly improve performance on tasks that require relational reasoning.
When the researchers tested the RN-augmented deep learning architectures, the new model answered visual relational questions correctly 96% of the time – a superhuman score compared with humans’ respectable 92%. An example of a relational question the new AI model answered correctly: “What size is the cylinder that is left of the brown metal thing that is left of the big sphere?”
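The paper defines the RN as a composite function that applies a learned relation function to every pair of objects and sums the results before a final readout. The sketch below illustrates that structure only; the `g` and `f` stand-ins are toy placeholders for the trained MLPs (g_theta and f_phi in the paper), not DeepMind's actual networks.

```python
import numpy as np

def relation_network(objects, g, f):
    """Minimal Relation Network sketch: RN(O) = f(sum over all
    object pairs of g(o_i, o_j)). Here g scores one pair of
    object feature vectors and f maps the aggregate to an answer."""
    pair_sum = sum(g(np.concatenate([oi, oj]))
                   for oi in objects for oj in objects)
    return f(pair_sum)

# Toy stand-ins for the learned MLPs (hypothetical, for illustration).
g = lambda pair: pair.mean(keepdims=True)  # "relation" feature of a pair
f = lambda x: x                            # identity answer head

objs = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
out = relation_network(objs, g, f)
```

Because the sum runs over all ordered pairs, the module considers every object relation without needing them to be specified in advance, which is what lets it plug into a larger architecture.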
In the paper “Visual Interaction Networks,” DeepMind researchers developed another model, the “Visual Interaction Network” (VIN) – a general-purpose model for predicting future physical states from video data.
“The VIN is learnable and can be trained from supervised data sequences which consist of input image frames and target object state values,” DeepMind researchers wrote. “It can learn to approximate a range of different physical systems which involve interacting entities by implicitly internalizing the rules necessary for simulating their dynamics and interactions.”
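The idea of simulating interacting entities by rolling a dynamics core forward can be illustrated with a toy sketch. The hand-coded pairwise attraction below is a hypothetical stand-in for the VIN's learned interaction network (which, per the paper, is trained from image frames and target object states rather than written by hand).

```python
import numpy as np

def interaction_step(states, dt=0.1):
    """One step of a toy pairwise dynamics core. states is an
    (n, 4) array of [x, y, vx, vy] per object; each object feels
    a softened inverse-square attraction toward every other."""
    pos, vel = states[:, :2], states[:, 2:]
    acc = np.zeros_like(pos)
    for i in range(len(states)):
        for j in range(len(states)):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r2 = (d ** 2).sum() + 1e-6  # softening avoids divide-by-zero
            acc[i] += d / (r2 * np.sqrt(r2))
    new_vel = vel + dt * acc
    new_pos = pos + dt * new_vel
    return np.hstack([new_pos, new_vel])

def rollout(states, steps):
    """Predict future states by applying the core repeatedly."""
    traj = [states]
    for _ in range(steps):
        traj.append(interaction_step(traj[-1]))
    return np.stack(traj)

start = np.array([[0.0, 0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0, 0.0]])
traj = rollout(start, steps=3)  # (4, 2, 4): 4 frames, 2 objects
```

The VIN replaces both the hand-coded physics and the hand-specified object states with learned components: a visual encoder reads states from pixels, and the learned core is applied recurrently, as above, to predict future frames' states.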
Adam Santoro, lead author of the paper “A simple neural network module for relational reasoning,” told New Scientist that AIs capable of reasoning like humans are still a long way off. Initially, he said, the new model could be used for computer vision. “You can imagine an application that automatically describes what is happening in a particular image, or even video for a visually impaired person,” Santoro said.