Researchers at Cornell University are developing a system that could one day enable robots to communicate with each other for the purpose of performing large-scale surveillance.
In the project called “Convolutional-Features Analysis and Control for Mobile Visual Scene Perception,” Cornell researchers are developing a system that will enable multiple robots to work as a single surveillance unit, sharing information and interpreting what they see. This project is supported by a four-year, $1.7 million grant from the U.S. Office of Naval Research.
Principal investigators of the project – Silvia Ferrari, Mark Campbell, and Kilian Weinberger – wrote that effective surveillance agents must understand the context of a scene in order to detect suspicious persons or activities that might otherwise be ignored. For example, a person running through a secured area warrants closer scrutiny than a person running across a college campus.
“Because in any surveillance problem a large amount of redundant and task-irrelevant data is obtained, agents must be capable of extracting scene features at a level of detail suited to the accuracy required by the task, but manageable, in order to operate in real time,” the Cornell researchers said.
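To illustrate the kind of data reduction the researchers describe, the sketch below uses simple average pooling to summarize a video frame at a coarser level of detail. This is a hypothetical, minimal example, not the Cornell system's actual method; the function name and pooling factor are illustrative assumptions.

```python
import numpy as np

def pool_features(frame, factor=8):
    """Illustrative only: shrink a frame by average pooling, a simple
    stand-in for extracting coarser scene features so an agent can
    keep up with a real-time video stream."""
    h, w = frame.shape
    h2, w2 = h - h % factor, w - w % factor  # trim to a multiple of factor
    return (frame[:h2, :w2]
            .reshape(h2 // factor, factor, w2 // factor, factor)
            .mean(axis=(1, 3)))

frame = np.random.rand(480, 640)           # a mock grayscale video frame
coarse = pool_features(frame, factor=8)    # 60x80 summary of the scene
print(frame.size / coarse.size)            # 64.0 -> 64x less data to share
```

Coarser summaries like this are cheaper for multiple agents to exchange, at the cost of fine detail that some tasks may still require.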
The Cornell researchers added that this automated surveillance project will use the most recent developments in the areas of artificial intelligence, computer vision, and decentralized sensor planning and estimation.
According to the Cornell Chronicle, while the U.S. Navy might deploy this automated surveillance system with drone aircraft or other autonomous vehicles, the Cornell researchers themselves will not be involved in any direct application of the technology.