A Massive AI Partnership Is Tapping Civil Rights and Economic Experts to Keep AI Safe
When the Partnership on Artificial Intelligence to Benefit People and Society was announced in September, its stated goals were to educate the public on artificial intelligence, study AI’s potential impact on the world, and establish industry best practices. Now, how those goals will actually be achieved is becoming clearer.
This week, the Partnership brought on new members that include representatives from the American Civil Liberties Union, the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.
The organizations themselves are not officially affiliated yet—that process is still underway—but the Partnership’s board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also a director at Microsoft Research. The Partnership also added Apple as a “founding member,” putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board.
The Partnership is now the most high-profile, comprehensive, and mainstream organization considering how AI will shape our future. It not only has representatives from nearly every major tech company heavily invested in machine-learning research, but also backing from organizations that routinely study the impact of technology and bias on modern society. Succeeding in its mission would mean organizing a nascent field, one still uncertain and fragmented in its views of how AI should be implemented, while establishing guidelines that live up to its members’ lofty rhetoric.
The Partnership’s most recent additions suggest it is particularly concerned with understanding AI’s ability to create (or reproduce) disparity.
“In its most ideal form, [the Partnership] puts on the agenda the idea of human rights and civil liberties in the science and data science community,” says Carol Rose, the executive director of the ACLU of Massachusetts who is joining the Partnership’s board. “[This is] so the people who are developing machine intelligence are aware, mindful, and cognizant of the impact of their choices, because they’re not neutral choices.
“There’s a decision on, are you going to use artificial intelligence to perpetuate biases that exist in our human society? Is artificial intelligence going to be developed in a way that serves the 1% but not the 99%? Or instead, can artificial intelligence and [machine learning] be used to address deep issues of global climate change or poverty? Those are fundamental ethical and moral issues it’s important that the scientific community engage in. What they do isn’t apolitical, it’s deeply political.”
The ACLU has been doing similar work with universities like MIT and Harvard as a part of its Technology for Liberty initiative. Rose says that if the Partnership turns out to be toothless, she’ll back out and stop contributing.
The Partnership has also tapped Jason Furman, a veteran of the Obama administration’s efforts to draw attention to the economic impact of AI and automation. Previously chairman of the Council of Economic Advisers, Furman is joining the Partnership to guide its consideration of AI’s economic benefits.
“I think we’ve had insufficient productivity growth in the United States,” Furman told Quartz. “I think AI is very promising in terms of the future of our economy. We need more [AI] than we’ve had to date, but we’re only going to have it more if people are comfortable with it—if we’re getting the positives and not the negatives. I don’t think that’s going to happen automatically; some of that will require best practices, and some of that will require public policy.”
Also joining up is Eric Sears of the MacArthur Foundation, which awards grants in areas such as climate change and nuclear risk. “While there will be many benefits from AI, it is important to ensure that challenges such as protecting and advancing civil rights, civil liberties, and security are accounted for,” Sears says. “The Partnership is being set up to do just that.”
In a blog post on the MacArthur Foundation website, Sears also wrote that “the public interest challenges these technologies present are unlikely to be adequately addressed without philanthropy’s help.”
Other new members include OpenAI, a nonprofit backed by Elon Musk, Sam Altman, and Peter Thiel that aims to research technical questions and provide open access to knowledge in the AI community. OpenAI’s representative in the Partnership, Dario Amodei, previously contributed to research on AI safety at Google Brain.
The Partnership plans to fund research through grants that will separate its participants’ work from their respective corporate umbrellas. Founding companies will contribute funding on a multi-year basis, though the amounts have yet to be disclosed. The Partnership also has yet to appoint an executive director; that role and the first round of research will likely be announced soon after its board of directors meets for the first time on Feb. 3 in San Francisco.
Among major tech outfits, Apple is a late addition, despite working with the Partnership from the beginning, according to Horvitz. The company will be represented in the Partnership by Tom Gruber, who leads Apple’s Siri Advanced Development team. Google will be represented by director of augmented intelligence research Greg Corrado; Facebook by its director of AI research, Yann LeCun; Amazon by its director of machine learning, Ralf Herbrich; Microsoft by the director of its research lab, Horvitz; and IBM by a research scientist at its Thomas J. Watson Research Center, Francesca Rossi.