DARPA Wants Tech That Can Trick Computer Vision
Neural networks can adapt and learn to identify and classify objects, but they can also be stumped by cleverly placed pixels.
The Pentagon’s research arm is looking for cutting-edge techniques to disrupt, fool or undermine the systems that help computers “see”—to ultimately fuel future improvements.
According to a recently unveiled Artificial Intelligence Exploration Opportunity, the Defense Advanced Research Projects Agency wants proposals for innovative technical research concepts that can throw off neural network-based machine vision technology without any insight into how the systems were trained or built.
“Development and exploration of universal disruption techniques, including the scientific phenomena that enables their success, will enhance our understanding of the inherent nature of neural net architectures and inform more robust approaches,” officials wrote in their announcement.
Loosely based on the biological neural networks that make up human and animal brains, deep neural nets can be trained to perform a range of classification and prediction tasks, “learning” and adapting along the way. The agency notes that Convolutional Neural Nets, or CNNs, initially boosted the utility of computer image recognition, and that over roughly the last decade, artificial intelligence-infused machine vision “has improved and progressed, achieving superhuman performance with real-time executable codes that can detect, classify and segment within a complicated image.” The CNN paradigm involves a multi-layer network of computational nodes trained on massive amounts of labeled images to produce highly accurate object detection and scene classification capabilities.
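To make the paradigm concrete, here is a minimal sketch of such a network in PyTorch. The layer sizes, the 32-by-32 input resolution and the 10-class output are illustrative assumptions, not details drawn from DARPA’s solicitation.

```python
# Minimal illustrative CNN image classifier (PyTorch).
# Layer sizes, input resolution (3x32x32) and the 10-class output
# are assumptions for this sketch, not details from the solicitation.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stacked convolutional layers extract increasingly abstract features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        # A final linear layer maps the pooled features to class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
logits = model(torch.randn(1, 3, 32, 32))  # one fake RGB image
print(logits.argmax(dim=1))                # predicted class index
```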
Although deep neural net architectures have accelerated progress in machine vision applications over recent years, DARPA’s program manager for the project, Gregory Avicola, told Nextgov Tuesday that a large body of work now exists and is evolving “in the art of deceiving machine vision systems with techniques that have no impact on a human or animal machine vision system.”
In the solicitation, the agency highlights several such efforts, including pixel-based modifications to images, or “perturbations.” While people can’t easily differentiate the slight changes with their own eyes, “the classifier generates a different output, e.g., it classifies the image as containing a ‘gibbon’ instead of a ‘panda.’” The fact that such minor image alterations can disrupt many CNNs without fooling human observers, according to DARPA, “inspires questions about the fundamental nature of neural nets as currently implemented.”
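The panda-to-gibbon example traces back to research on the fast gradient sign method, or FGSM, which nudges every pixel slightly in the direction that most increases the classifier’s error. A minimal sketch, reusing the TinyCNN model above with an assumed epsilon, follows. Note that FGSM needs gradient access to the model, exactly the kind of inside knowledge DARPA wants attackers to do without, so it illustrates the perturbation phenomenon rather than the black-box setting the agency is after.

```python
# Fast gradient sign method (FGSM) sketch: a tiny, nearly invisible
# perturbation that can flip a classifier's output. Reuses the TinyCNN
# sketch above; epsilon and the label are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

model = TinyCNN()                   # classifier class from the sketch above
image = torch.rand(1, 3, 32, 32)    # stand-in for a real photo
label = torch.tensor([3])           # stand-in for the true class
adv = fgsm_perturb(model, image, label)
print(model(image).argmax(1), model(adv).argmax(1))  # outputs may now disagree
```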
“You may have seen examples of the work where people put small patches on objects to make them either invisible to the machine vision system or to deliberately create false answers,” Avicola added. “The focus of this research is to explore how the structure used in deep neural nets creates mathematical vulnerabilities to such attacks.”
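A rough sketch of how such a patch might be trained, again against the illustrative TinyCNN above: a small square of learnable pixels is pasted onto training images and optimized so the classifier reports an attacker-chosen class. The patch size, placement and step count here are assumptions for illustration only.

```python
# Adversarial patch sketch: optimize a small square of pixels that,
# pasted onto an input, pushes the classifier toward a chosen target
# class. Patch size, placement and step count are illustrative assumptions.
import torch
import torch.nn.functional as F

model = TinyCNN()                 # classifier class from the sketch above
for p in model.parameters():      # freeze the model; only the patch trains
    p.requires_grad_(False)

patch = torch.rand(3, 8, 8, requires_grad=True)  # 8x8 learnable patch
optimizer = torch.optim.Adam([patch], lr=0.05)
target = torch.tensor([7])                       # attacker-chosen output class

for _ in range(100):
    image = torch.rand(1, 3, 32, 32)             # stand-in for varied scene images
    patched = image.clone()
    patched[:, :, :8, :8] = patch.clamp(0, 1)    # paste the patch in a corner
    loss = F.cross_entropy(model(patched), target)  # reward the target class
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```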
More broadly, experts in the field are already pursuing a range of questions related to the new research opportunity, Avicola said, noting that “this is a basic research effort intended to push the research community.”
“The research ultimately could lead to improved neural net architectures with more robust decision boundaries, and thus, increased ‘trust’ in the outputs of such systems,” he said.
Throughout the solicitation, DARPA repeatedly stresses that the to-be-developed techniques must not require any knowledge of, or access to, the actual images used to train the systems, or any details of the training algorithms or architectures that underpin them. DARPA wants the resulting technology to be as “universal” as possible, which the agency said means it should be effective across all sorts of images with various content, scales and resolutions. Attacks and techniques created or enhanced through the initiative should also hold potential when applied to video frames, and they’ll be expected to work across multiple neural net architectures, such as recurrent neural networks or generative adversarial networks.
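Read concretely, the “universal” requirement suggests an evaluation along the lines of the following sketch, in which one fixed perturbation is applied to a varied set of images and scored against several independent models. The stand-in models and random data are assumptions; DARPA’s actual targets would span architectures like the recurrent and generative adversarial networks named in the solicitation.

```python
# Sketch of measuring how "universal" a perturbation is: the same fixed
# noise is added to every image and scored against several models.
# Both models here are stand-ins for genuinely distinct architectures.
import torch

models = [TinyCNN(), TinyCNN()]        # stand-ins for different architectures
delta = 0.02 * torch.randn(3, 32, 32)  # one fixed, image-agnostic perturbation
images = torch.rand(64, 3, 32, 32)     # stand-in for a varied image set

for m in models:
    m.eval()
    with torch.no_grad():
        clean = m(images).argmax(1)
        fooled = m((images + delta).clamp(0, 1)).argmax(1)
    # Fraction of predictions flipped by the same perturbation:
    print((clean != fooled).float().mean().item())
```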
The program will be broken into two phases. The first “will focus on the development of universal attack algorithms and demonstrate algorithm success on at least three networks with assessment of extensibility to additional networks,” DARPA said. In the second, optional phase, researchers will test the developed techniques further and identify and characterize the core principles underlying the algorithms’ effectiveness. Together, the two phases are expected to last no more than 18 months.
The agency aims to award up to $1 million for a prototype through an other transaction agreement. Comprehensive proposals from interested participants are due May 14.