Experts Sound Off on AI’s ‘Spider-Man’ Problem
AI's potential is growing, but so is its potential for problems and pitfalls.
In the Spider-Man comics, Uncle Ben tells a young Peter Parker that “with great power comes great responsibility.” Technology experts offered similar advice to government agencies and industries adopting artificial intelligence tools in a wide-ranging report published Wednesday.
Advancements in AI could boost the economy, increase productivity and help people make smarter decisions, but policymakers must overcome a handful of technical, legal and ethical barriers before that can happen, technologists told a congressional watchdog in the sweeping report.
If policymakers don’t exercise caution, AI could run amok: widening socioeconomic inequality, handing data-hoarding companies outsized influence and leaving systems to make decisions humans don’t understand.
Based on interviews with nearly 60 experts from academia, government and industry, the Government Accountability Office’s report essentially outlines AI’s Spider-Man problem. It echoes lawmakers’ calls for more technical research, but also highlights the need to investigate the long-term impacts of AI on society at large.
A lack of data sharing could limit the benefits of artificial intelligence in many areas where it could otherwise make a significant impact, such as criminal justice, GAO found. Machine-learning technology could potentially help law enforcement allocate resources, identify criminals and inform decisions about punishment, but developing those tools would require troves of data that don’t exist today.
“As AI moves from the laboratory into human spaces, and as the problems we ask AI to solve grow in complexity, so too will the data needed to effectively train and test that AI,” GAO wrote, but in many fields that information remains unstandardized and siloed within different jurisdictions and companies.
Experts also worry this environment could create “data monopolies,” firms that own a disproportionate amount of information on a specific topic and thus corner AI technology in that space. In addition to decreasing competition, that system would concentrate data where malicious hackers could wreak havoc by stealing or manipulating it.
Artificial intelligence might also not fit well into the copyright and patent framework that's on the books today, experts told GAO.
Companies could use current laws to protect algorithms and AI tools for decades, which could slow innovation tremendously and create monopolies within the industry, GAO found. To address the issue, experts suggested shortening the length of time companies can hold exclusive rights to the AI products they produce.
Similar patent and copyright fixes have been suggested for other internet technologies.
Transparency will also be key when applying AI to areas that have a significant impact on human lives, like self-driving cars and criminal justice, but current copyright law doesn’t require software developers to reveal much about how their tech works.
The issue of transparency plays into experts' broader call for so-called “explainable AI,” or tools that show how they arrived at a given answer. Policymakers have long advocated for transparency to keep self-driving cars and other autonomous products safe, but GAO said such standards would also address many of the ethical concerns about the technology and increase public trust in it.
If artificial intelligence became more widespread in the criminal justice field, for example, it would be crucial to monitor its decision-making process to rule out any racial or demographic biases.
“It is a grave misconception to believe that the algorithms used in AI are inherently neutral and trustworthy,” one expert told GAO.
“Before humans will understand, appropriately trust, and be able to effectively manage AI, an AI application or system needs to explain why it took certain actions and why it valued certain variables more than others," the watchdog wrote.