You No Longer Own Your Face
Students were recorded for research—and then became part of a data set that lives forever online, potentially accessible to anyone.
If 20 people are in a coffee shop, then there are at least 21 cameras: one embedded in each person’s phone and, usually, one tucked high in the corner. What you say may be overheard and tweeted; you might even appear in the background of another patron’s selfie or Skype session. But that doesn’t stop even the most private people from entering coffee shops. They accept the risk inherent in entering a public place.
This notion—of a “reasonable” expectation of privacy—guides researchers hoping to observe subjects in public. But the very idea of what’s reasonable is a complicated one. Faculty at three universities—Duke, Stanford, and the University of Colorado at Colorado Springs—are facing backlash after creating databases built from surveillance footage of students as they walked through cafés and across college campuses. You might reasonably expect to be overheard in a coffee shop, but that’s different from suddenly becoming a research subject, part of a data set that can live forever.
Ethics boards approved all three research projects, which used student data to refine machine-learning algorithms. The Duke University researcher Carlo Tomasi declined an interview with The Atlantic, but said in a statement to the Duke Chronicle that he “genuinely thought” he was following Institutional Review Board guidelines. For their research, he and his colleagues placed posters at all entrances to public areas, telling people they were being recorded, and providing contact information should they want their data erased. No one reached out, Tomasi told the Chronicle.
But when the parameters of his research changed, Tomasi admits he didn’t inform the IRB. For minor changes, that’s allowed. But Tomasi’s changes weren’t minor: he had permission to record indoors, not outdoors. And more significantly, he promised to allow access to the database only upon request. Instead, he opened it to anyone to download, he admitted to the Chronicle. “IRB is not to be blamed, as I failed to consult them at critical junctures. I take full responsibility for my mistakes, and I apologize to all the people who were recorded and to Duke for their consequences,” his statement reads.
Duke ultimately decided to delete the data set related to the research. Stanford did the same with a similarly derived data set its researchers created from patrons filmed at a San Francisco café. At UCCS, where researchers recorded students to test identification software, the lead researcher says the team never collected individually identifying information. Researchers for the Stanford and UCCS projects didn’t respond to requests for comment. In separate statements, each university reiterated that ethics boards had approved all research and underscored its commitment to student privacy.
But the problem is that university ethics boards are inherently limited in their scope. They oversee certain narrow aspects of how research is conducted, but not always where it ends up. And in the information age, the majority of academic research goes online, and what’s online lives forever. Other researchers, unbound by IRB standards, could download the data set and use it however they wish, introducing all manner of consequences for people with no way of being informed or offering consent.
Those consequences can reach far beyond what researchers imagine. Adam Harvey, a countersurveillance expert in Germany, found more than 100 machine-learning projects across the globe that cited Duke’s data set. He created a map charting the data set’s spread around the world like a flight tracker, with long blue lines extending from Duke University in every direction. Universities, start-ups, and institutions worldwide used the data set, including SenseTime and Megvii, Chinese surveillance firms linked to the state repression of Muslim minorities in China.
Every time a data set is accessed for a new project, the intention, scope, and potential for harm change. The portability and pliability of data meet the speed of the internet, massively expanding the possibilities of any one research project and scaling the risk far beyond what any one university can be held accountable for. For better or worse, universities can regulate only the intentions of the original researchers.
The federal government’s Office for Human Research Protections explicitly asks board members not to consider “possible long-range effects of applying knowledge gained in the research.” Instead, they’re asked to focus only on the subjects directly involved in a study. And if those subjects are largely anonymous people briefly idling in a public space, there’s no reason to believe they’ve been explicitly harmed.
“It’s just not what [the IRB] was designed to do,” says Michelle Meyer, a bioethicist who chairs the IRB Leadership Committee at Geisinger, a major health-care provider in Pennsylvania. As she explains, the IRB’s main privacy concern for publicly observed research is whether subjects are individually identified, and whether being identified places them at risk of financial or medical harm. “In theory, if you were creating a nuclear bomb and … [conducting research that] involved surveying or interviewing human subjects,” she says, “the risks that the IRB would be considering would be the risks to people immediately involved in the project, not the risk of nuclear annihilation downstream.”
Opening up data sets to other researchers increases those downstream risks. But the IRB may not have much jurisdiction here: data sharing, fundamentally, is not research, and neither is the after-the-fact application of data, so it’s “sort of in this weird regulatory twilight zone,” Meyer explains.
Casey Fiesler, an assistant professor in the information-science department at the University of Colorado at Boulder, writes on the ethics of using public data in research. She has proposed a system for scrutinizing data-set access modeled on copyright’s fair-use doctrine. Fair-use determinations are subjective, she notes, but they rest on standards about how the requester plans to use the material.
“Having some kind of gatekeeper for these data sets is a good idea,” she says, “because [requesters] can have access if you tell us what you’ll do with it.” Similar rules are in place for open-source software and Creative Commons licensing: permission-based systems in which people may use material only under the stated terms, such as for noncommercial work that builds on the original rather than simply copying it, and are liable if they lie about or misrepresent their intentions. Those are subjective standards that don’t immediately jibe with the highly bureaucratized academic landscape, but they can be useful, at least, in trying to imagine how to cut off downstream harm. “This isn’t to suggest [burdensome] rules, but it suggests a way that you should take certain contextual factors into account when you’re making decisions about what you’re going to do,” Fiesler says.