Apple’s First Research Paper Tries to Solve a Problem Facing Every Company Working on AI
A few days before Christmas and Hanukkah festivities began, Apple gave a little something to the artificial intelligence research community: its first research paper.
The paper, authored by six Apple researchers, doesn’t focus on AI that someone with an iPhone might interact with, but rather on how to create enough data to train it effectively. Specifically, the research focuses on making realistic fake images—mostly of humans—to train facial recognition AI. It addresses a core problem: training a machine takes a huge amount of data.
Moreover, training a machine on matters like faces and body language can take a ton of personal data. The ability to manufacture this kind of training data and still achieve strong results could allow Apple to build AI that understands how humans function (the way we move our hands or look around a screen) without needing any user data while building the software.
Apple’s published research focuses on two such tasks: identifying hand gestures and detecting where people are looking. Both are basic image-recognition problems that could be applied to anything from tracking user behavior to a wave-to-unlock iPhone feature.
In both cases, the researchers took established datasets of synthetic images and used a neural network, trained with the help of real images, to refine them to look more realistic. A second part of the system then compares each refined image with a real one, tries to decide which is real, and both parts update based on those judgments, so the refined images become progressively harder to tell apart from real photos.
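For readers who want to see what that loop looks like in practice, below is a minimal sketch of this kind of adversarial refinement written in PyTorch. It is not Apple’s code: the tiny network sizes, the 32x32 single-channel image shape, the optimizer settings, and the regularization weight are all illustrative assumptions. Only the overall structure, a refiner trying to fool a discriminator while staying close to the original synthetic image, mirrors the process described above.

```python
# Illustrative sketch only, not Apple's code. A "refiner" network edits
# synthetic images until a "discriminator" network, which also sees real
# images, can no longer tell which is which. Sizes and weights are assumptions.
import torch
import torch.nn as nn

IMG = 1 * 32 * 32  # assumed image size: 32x32 pixels, one channel

refiner = nn.Sequential(           # maps a synthetic image to a "refined" one
    nn.Flatten(), nn.Linear(IMG, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(     # guesses whether an image is real or refined
    nn.Flatten(), nn.Linear(IMG, 256), nn.ReLU(),
    nn.Linear(256, 1),             # one logit: higher means "looks real"
)

opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(synthetic, real, reg_weight=0.1):
    """One adversarial update on a batch of synthetic and real images."""
    refined = refiner(synthetic).view_as(synthetic)

    # Discriminator update: label real images 1 and refined images 0.
    d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1))
              + bce(discriminator(refined.detach()), torch.zeros(synthetic.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Refiner update: try to fool the discriminator, while staying close to
    # the original synthetic image so its labels (e.g. gaze direction) still hold.
    r_loss = (bce(discriminator(refined), torch.ones(synthetic.size(0), 1))
              + reg_weight * (refined - synthetic).abs().mean())
    opt_r.zero_grad()
    r_loss.backward()
    opt_r.step()
    return d_loss.item(), r_loss.item()

# Stand-in random batches of 16 "images" just to show the call.
print(training_step(torch.rand(16, 1, 32, 32), torch.rand(16, 1, 32, 32)))
```

Apple’s actual networks are more sophisticated, and the refined images are then used to train the downstream gaze or gesture model; the sketch shows only the refinement step.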
As the researchers write, the end result is “state-of-the-art results without any labeled real data.”
The work Apple decided to present first is interesting. It’s not speech recognition for Siri, or a PR stunt for some new Maps feature. Rather, it’s research that very much falls in line with an established trend of 2016: using neural networks to generate new data instead of just identifying it.
The research also nods toward user data security, a drum that Apple beats loudly and often. While companies like Google and Facebook use vast quantities of user data to train their algorithms, Apple’s entire pitch has been that nobody has access to what’s on an iPhone but the iPhone’s owner. This kind of work signals that Apple intends to keep up with other tech companies while still honoring the privacy it has promised users.
The researchers write that a possible next avenue of research is applying the same technique to video.