New Microchip Could Increase Military Intelligence Powers
A military-funded breakthrough in microchips opens the door to portable deep learning.
A new microchip could change life on the battlefield for U.S. troops by bringing the massive data crunching power of multicomputer neural networks — a dream of the 1970s and '80s — into handheld devices. The chip, announced by a team of researchers from MIT and funded by the Defense Advanced Research Projects Agency, or DARPA, could enable a smartphone-sized device to perform deep-learning functions.
What can the military do with deep learning? Effectively executing complex operations in places like Syria, Iraq, and Afghanistan is no longer just a matter of guts and glory. It’s also dependent on accessing and processing information in real time. The military has an abundance of data but always claims a shortage of useful intelligence. Consider that in 2011, during the height of the Iraq and Afghanistan Wars, the U.S. Air Force was processing 1,500 hours of full-motion video and 1,500 still images taken from aerial drones every day.
When satellites or drones collect high-resolution photographs or video, it’s human operators who have to classify all the objects in that footage. Did someone just move a missile launcher within range of a forward operating base, or is that just a strangely shaped pile of debris? Is that white van the same one that was on that street during last month’s IED attack, or a different one? Is that bearded insurgent Abu Bakr al-Baghdadi or just a regular radical?
“Full exploitation of this information is a major challenge,” officials at DARPA wrote in a 2009 announcement on deep learning. “Human observation and analysis of [intelligence, surveillance and reconnaissance] assets is essential, but the training of humans is both expensive and time-consuming. Human performance also varies due to individuals’ capabilities and training, fatigue, boredom, and human attentional capacity.”
The promise of mobile deep learning for the military is in shrinking large “neural networks” into the palm of a soldier’s hand. Neural networks are a method of information processing, inspired by organic central nervous systems, that emerged in the 1970s to great hype and fanfare. Nodes link to other nodes, and the pattern and strength of those connections encode information, much the way the synaptic connections in your brain do.
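To make the analogy concrete, here is a minimal sketch in Python (purely illustrative, not drawn from the MIT work) of a tiny network whose “knowledge” lives entirely in the weights of the connections between its nodes:

```python
import numpy as np

# A tiny two-layer neural network. The "knowledge" lives entirely in
# the weight matrices, which play the role of synaptic connections.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights from 3 inputs to 4 hidden nodes
W2 = rng.normal(size=(1, 4))   # weights from 4 hidden nodes to 1 output

def forward(x):
    """Propagate a signal through the network, node to node."""
    hidden = np.tanh(W1 @ x)      # each hidden node sums its weighted inputs
    return np.tanh(W2 @ hidden)   # the output node does the same

print(forward(np.array([0.5, -0.2, 0.9])))
```

Training such a network means nudging those weights until the outputs become useful; deep learning does this across many stacked layers, which is what makes it so computationally hungry.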
Fitting neural networks (of the sort that perform deep learning) into smaller platforms could enable drones to do that kind of object recognition on board, without sending imagery back to an overworked human analyst or a data-processing center halfway around the world. It could also enable a team of special operators to do the same with their own drones, portable cameras, or other devices, making positive identifications of people or objects without relying on faraway human analysts to review the footage.
Imagine a special operator getting a push notification the moment that a small camera on the other side of town detects, and correctly identifies, a particular person walking into a particular house. That sort of capability would require computers small enough to be inconspicuous in the places soldiers operate but that can also learn to recognize different people or objects. Those are among the many military applications for deep learning and neural networks.
In the seventies and throughout the eighties, the processing power didn’t exist to turn the concept of computer neural networks into anything practically useful. Neural networks have since re-emerged thanks to the efforts of Google (and researchers like Andrew Ng at Stanford), who put them back on the map in 2012 with the announcement that they had used neural networks and deep learning to improve the ability of artificial intelligence to correctly recognize objects by 70 percent.
But neural networks remain energy-intensive and relatively inefficient. To do deep learning right, you currently need computational resources in the form of servers or large computers. That means that if you want to access deep-learning processes on your smartphone, it probably has to be connected over the Internet to a different, more powerful computer. That’s not ideal for anyone working from a forward operating base or disaster zone.
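In practice, that dependence looks like a network round trip: the device ships its data to a remote model and waits for an answer. A rough sketch of the pattern (the endpoint and response format here are hypothetical, invented for illustration):

```python
import json
import urllib.request

# Hypothetical remote-inference round trip: the device cannot run the
# model itself, so every classification costs a network request.
# The URL and the JSON response shape are invented for illustration.
def classify_remotely(image_bytes: bytes) -> str:
    req = urllib.request.Request(
        "https://inference.example.com/classify",  # stand-in endpoint
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    # Fails outright with no connectivity, and every call pays
    # the transmission latency of the round trip.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["label"]
```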
“Right now, the networks are pretty complex and are mostly run on high-power [graphics processing units]. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications,” MIT’s Vivienne Sze explained in a press release. It would also allow sensitive intelligence or mission information to be compartmentalized from wider dissemination.
Here’s how it works: the MIT researchers’ breakthrough microchip, dubbed “Eyeriss,” minimizes the number of times that the chip’s 168 cores have to access a shared memory bank, a process that eats away at energy efficiency in conventional graphics processing units, or GPUs. Every core in Eyeriss has its own memory. In effect, it’s like creating the functionality of 168 chips on a wafer where there was just one. That could lead to a pocket-sized device that can perform deep-learning functions independently, potentially bringing a lot more brains into the devices that soldiers carry with them into the precision-guided counterterrorism battles of today and tomorrow.
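The energy win is architectural. A back-of-the-envelope sketch of the idea (the operation counts below are invented; only the 168-core figure comes from the announcement) shows why giving each core its own memory cuts traffic to the expensive shared bank:

```python
# Toy model of the memory-traffic idea behind a chip like Eyeriss.
# We count trips to a shared memory bank, the kind of access that
# dominates energy cost on a conventional GPU. Numbers are made up.

NUM_CORES = 168        # from the MIT announcement
OPS_PER_CORE = 1_000   # invented workload for illustration

# Conventional design: every operation fetches its data from the
# shared bank, so energy-hungry accesses scale with total work.
shared_bank_accesses = NUM_CORES * OPS_PER_CORE

# Eyeriss-style design: each core copies its slice of data into
# local memory once, then reuses it for every subsequent operation.
local_design_bank_accesses = NUM_CORES * 1           # one bulk fetch per core
local_memory_accesses = NUM_CORES * OPS_PER_CORE     # cheap, stays on-core

print(f"shared-bank trips, conventional:    {shared_bank_accesses:,}")
print(f"shared-bank trips, per-core memory: {local_design_bank_accesses:,}")
```

The total work doesn’t shrink; what shrinks is the number of energy-hungry trips to shared memory, which is exactly the cost the MIT team targeted.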