Agencies Should Watch Out for Unethical AI, Report Warns
AI will figure prominently in government services, a report from Harvard's Ash Center predicts.
Myriad government operations could be automated using artificial intelligence, but agencies need to make sure the technology is implemented ethically, a new report warns.
The report, from Harvard's Ash Center for Democratic Governance and Innovation, says agencies, sometimes strapped for cash and short on personnel, are an ideal landscape for technology like artificial intelligence, which could take on tasks such as poring through massive amounts of data. But the report recommends that machines, at least for now, should not be making key decisions.
Specifically, AI systems are good at rapidly scanning large amounts of structured data, answering simple and binary questions, and predicting patterns based on historical trends, among other capabilities, the report says. Currently, many AI applications being tested across government are related to simple customer service tasks, such as "answering questions, filling out and searching documents, routing requests, translation, and drafting documents."
Automating those simple tasks could free up government employees to "build relationships and solve problems face-to-face with citizens," the report argues. "Until machine learning techniques improve, though, AI should only be used for analysis and process improvement, not decision support, and human oversight should remain prevalent."
Government agencies must consult citizens about which services they'd like to see automated, the report says. And if an automated system handles citizens' personal information, the report states, those citizens should fully understand the privacy laws surrounding that information and how the government will use it to make decisions.
"There may be fewer privacy concerns if the only data being used is already provided to the government by citizens (such as IRS data)," the report says.
Machine-learning algorithms, which are often trained to make predictions based on past data, can also adopt biases if the data they are trained on is skewed. To mitigate this risk, agencies should "involve multidisciplinary and diverse teams, in addition to ethicists, in all AI efforts."
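To see how that kind of skew can creep in, consider a minimal sketch, not drawn from the report: the data, the "approval" scenario and the scikit-learn model below are purely illustrative assumptions. A classifier trained on hypothetical historical records in which one group was approved far more often than another, even at identical qualification scores, will reproduce that disparity in its own predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical records: a qualification score distributed
# identically for both groups...
score = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B

# ...but past approvals were skewed: at the same score, group A was
# approved far more often than group B.
logit = score + np.where(group == 0, 1.0, -1.0)
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train a model on the skewed history.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, approved)

# The model inherits the disparity: applicants with the same score get
# different predicted approval rates depending only on group membership.
same_score = np.zeros(1000)
rate_a = model.predict_proba(np.column_stack([same_score, np.zeros(1000)]))[:, 1].mean()
rate_b = model.predict_proba(np.column_stack([same_score, np.ones(1000)]))[:, 1].mean()
print(f"Predicted approval rate at identical scores: group A {rate_a:.2f}, group B {rate_b:.2f}")
```

Catching that sort of pattern before a system is deployed is part of what the multidisciplinary review the report calls for is meant to accomplish.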
This might require a new position focusing on data science ethics, Matt Chessen, a State Department AI researcher, notes in the report. Agencies and governments might also create a common set of ethical rules: for instance, that AI systems "should not be tasked with making critical government decisions about citizens," like deciding criminal sentences.
A recent effort within the General Services Administration aims to walk agencies through the process of building their own automated systems, which could include voice-controlled assistants similar to Apple's Siri or Amazon's Alexa. The effort is still in its very early stages, but prototypes have already helped citizens apply for licenses from the Small Business Administration and obtain park permits.