MIT CSAIL Researchers Develop an AI Voice Assistant That Understands Contextual Commands

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an artificial intelligence voice assistant that can understand a wide range of commands that require contextual knowledge.

The MIT CSAIL researchers call their AI voice assistant “ComText,” which stands for “commands in context.” According to the researchers, ComText can understand contextual commands because it retains two types of memory: semantic memory and episodic memory.

The researchers describe semantic memories as those based on general facts, for instance, “the sky is blue.” Episodic memories, meanwhile, are based on personal experience, such as remembering what happened at a party. Drawing on these memories, ComText can reason about, comprehend, and respond to commands.
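To make the distinction concrete, here is a minimal, hypothetical Python sketch of how the two memory stores might be kept separate. The class and method names are illustrative assumptions for this article, not the researchers’ actual mathematical formulation, which is probabilistic.

```python
from dataclasses import dataclass, field
import time

@dataclass
class MemoryStore:
    # Semantic memory: general, timeless facts ("the sky is blue").
    semantic: dict = field(default_factory=dict)
    # Episodic memory: time-stamped observations of specific events.
    episodic: list = field(default_factory=list)

    def learn_fact(self, subject: str, fact: str) -> None:
        """Store a general fact in semantic memory."""
        self.semantic[subject] = fact

    def observe(self, event: str, **attributes) -> None:
        """Record a specific, time-stamped event in episodic memory."""
        self.episodic.append({"time": time.time(), "event": event, **attributes})

    def recall(self, event: str):
        """Return the most recent episode matching the event description."""
        matches = [e for e in self.episodic if e["event"] == event]
        return matches[-1] if matches else None


memory = MemoryStore()
memory.learn_fact("sky", "blue")                      # semantic: a general fact
memory.observe("put_down", obj="wrench", owner="me")  # episodic: a specific event
print(memory.recall("put_down"))                      # -> the wrench episode
```

The design choice the sketch highlights is that the two stores answer different questions: semantic memory answers “what is generally true,” while episodic memory answers “what happened, and when.”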

“The main contribution is this idea that robots should have different kinds of memory, just like people,” Andrei Barbu, co-lead of the ComText project, told MIT News. “We have the first mathematical formulation to address this issue, and we’re exploring how these two types of memory play and work off of each other.”

The researchers tested ComText on Baxter, a two-armed humanoid robot developed by former CSAIL director Rodney Brooks. Through Baxter, ComText was able to pick the correct tool from a toolbox after being told and shown “the tool I put down is my tool,” followed by the command “pick it up.” In this case, ComText’s episodic memory stored the object’s size, shape, position, type, and ownership.
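The following is a simplified, hypothetical Python sketch of how a command like “pick it up” might be grounded against episodic memory. The function and attribute names are assumptions made for illustration, not ComText’s actual implementation.

```python
# Hypothetical sketch of grounding "pick it up": resolve the pronoun
# "it" to the most recently observed object, using ownership recorded
# when the robot was told "the tool I put down is my tool".

episodic_memory = [
    # Each episode captures the attributes the article lists:
    # size, shape, position, type, and ownership.
    {"object": "wrench", "size": "small", "shape": "L-shaped",
     "position": (0.4, 0.1), "type": "tool", "owner": None},
]

def note_ownership(owner: str) -> None:
    """'The tool I put down is my tool' -> tag the latest episode."""
    episodic_memory[-1]["owner"] = owner

def resolve_it() -> dict:
    """'Pick it up' -> 'it' refers to the most recent object in memory."""
    return episodic_memory[-1]

note_ownership("speaker")
target = resolve_it()
print(f"Picking up the {target['object']} at {target['position']} "
      f"(owner: {target['owner']})")
```

Even in this toy form, the pronoun “it” cannot be resolved from the command alone; it only makes sense against a record of what was just seen, which is the role episodic memory plays.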

In the future, the researchers plan to enable the AI to understand more complicated information, such as multi-step commands and the intent behind actions. Understanding such commands would make artificial intelligence useful in self-driving cars and robotic household helpers.