Is government ready for AI?
Dozens of agencies are at least dabbling in artificial intelligence, but big concerns remain about transparency and the impact on the human workforce.
Artificial intelligence is helping the Army keep its Stryker armored vehicles in fighting shape.
Army officials are using IBM’s Watson AI system in combination with onboard sensor data, repair manuals and 15 years of maintenance data to predict mechanical problems before they happen. IBM and the Army’s Redstone Arsenal post in Alabama demonstrated Watson’s abilities on 350 Stryker vehicles during a field test that began in mid-2016.
The Army is now reviewing the results of that test to evaluate Watson’s ability to assist human mechanics, and the early insights are encouraging.
The Watson AI enabled the pilot program’s leaders to create the equivalent of a “personalized medicine” plan for each of the vehicles tested, said Sam Gordy, general manager of IBM U.S. Federal. Watson was able to tell mechanics that “you need to go replace this [part] now because if you don’t, it’s going to break when this vehicle is out on patrol,” he added.
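The approach Gordy describes boils down to learning failure patterns from historical maintenance records and sensor readings, then flagging a vehicle before the part actually breaks. The sketch below is a generic, hypothetical illustration of that technique using Python and scikit-learn; the file name, sensor columns and alert threshold are invented for illustration and do not reflect IBM's or the Army's actual Watson pipeline.

```python
# Hypothetical predictive-maintenance sketch -- not IBM's Watson pipeline,
# just a generic illustration of the technique described above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Assumed columns: onboard sensor readings plus maintenance-history features,
# with a label marking whether the part failed within the next 30 days.
history = pd.read_csv("stryker_maintenance_history.csv")  # hypothetical file
features = ["engine_temp", "vibration_rms", "miles_since_overhaul", "fault_code_count"]
X = history[features]
y = history["part_failed_within_30_days"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Flag vehicles whose predicted failure probability exceeds a maintenance threshold.
at_risk = X_test[model.predict_proba(X_test)[:, 1] > 0.7]
print(classification_report(y_test, model.predict(X_test)))
```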
The Army is one of a handful of early adopters in the federal government, and several other agencies are looking into using AI, machine learning and related technologies. AI experts cite dozens of potential government uses, including cognitive chatbots that answer common questions from the public and complex AIs that search for patterns that could signal Medicaid fraud, tax cheating or criminal activity.
“There are, for a lack of a better number, a gazillion sweet spots” for AI in government, said Daniel Enthoven, business development manager at Domino Data Lab, a vendor of AI and data science collaboration tools.
Still, many agencies will need to answer some difficult questions before they embrace AI, machine learning and autonomous systems. For instance, how will the agencies audit decisions made by intelligent systems? How will they gather data from often disparate sources to fuel intelligent decisions? And how will agencies manage their employees when AI systems take over tasks previously performed by humans?
A more intelligent census
Intelligence agencies are using Watson to comb through piles of data and provide predictive analysis, and the Census Bureau is considering using the supercomputer-powered AI as a first-line call center that would answer people’s questions about the 2020 census, Gordy said.
A Census Bureau spokesperson added that the AI virtual assistant could improve response times and enhance caller interactions.
Using AI should save the bureau money because “you have a computer doing this instead of people,” Gordy said. And if trained correctly, the system will provide more accurate answers than a group of call-center workers could.
“You train Watson once, and it understands everything,” he said. “You’re getting a very consistent answer, time after time after time.”
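For illustration only, the toy example below shows one common way to build that kind of "train once, answer consistently" assistant: match a caller's question against a small set of vetted answers and hand off to a person when the match is weak. It is a generic retrieval sketch with made-up questions and answers, not a description of how Watson or any Census Bureau system actually works.

```python
# Minimal retrieval-style FAQ assistant -- a stand-in for the kind of first-line
# call-center agent described above, not the Census Bureau's or IBM's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical question/answer pairs the assistant is "trained" on once.
faq = {
    "When is the census taken?": "The next decennial census is conducted in 2020.",
    "Is my census response confidential?": "Yes, responses are confidential under federal law.",
    "Can I respond to the census online?": "Yes, online response is planned for 2020.",
}
questions = list(faq.keys())

vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_question: str) -> str:
    """Return the canned answer whose FAQ entry best matches the caller's question."""
    scores = cosine_similarity(vectorizer.transform([user_question]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] < 0.2:  # low confidence: hand off to a human agent
        return "Let me transfer you to a representative."
    return faq[questions[best]]

print(answer("Can I fill out the census on the internet?"))
```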
For many agencies, however, it’s still early in the AI adoption cycle. Use of the technology is “very, very nascent in government,” said William Eggers, executive director of Deloitte’s Center for Government Insights and co-author of a recent study on AI in government. “If it was a nine-inning [baseball] game, we’re probably in the first inning right now.”
He added that over the next couple of years, agencies can expect to see AI-like functionality incorporated into the software products marketed to them.
Chatbots and telephone agents
The first step for many civilian agencies appears to be using AI as a chatbot or telephone agent. Daniel Castro, vice president of the Information Technology and Innovation Foundation, said intelligent agents should be able to answer about 90 percent of the questions agencies receive, and the people asking those questions aren’t likely to miss having a human response.
“It’s not like people are expecting to know their IRS agents when they call them up with a question,” he said.
The General Services Administration’s Emerging Citizen Technology program launched an open-source pilot project in April to help federal agencies make their information available to intelligent personal assistants such as Amazon’s Alexa, Google’s Assistant and Microsoft’s Cortana. More than two dozen agencies — including the departments of Energy, Homeland Security and Transportation — are participating.
Many vendors and other technology experts see huge opportunities for AI inside and outside government. In June, an IDC study sponsored by Salesforce predicted that AI adoption will ramp up quickly in the next four years. AI-powered customer relationship management activities will add $1.1 trillion to business revenue and create more than 800,000 jobs from 2017 to 2021, the study states.
In the federal government, using AI to automate tasks now performed by employees would save at least 96.7 million working hours a year, a cost savings of $3.3 billion, according to the Deloitte study. Based on the high end of Deloitte’s estimates, AI adoption could save as many as 1.2 billion working hours — and $41.1 billion — every year.
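As a back-of-envelope reading of those figures (not a calculation from the study itself), the low and high estimates are consistent with each other: both imply a value of roughly $34 per working hour saved.

```python
# Back-of-envelope check of the published figures -- not from the Deloitte study itself.
low_hours, low_savings = 96.7e6, 3.3e9
high_hours, high_savings = 1.2e9, 41.1e9
print(f"Implied value per hour: ${low_savings / low_hours:.2f} (low estimate), "
      f"${high_savings / high_hours:.2f} (high estimate)")  # both about $34/hour
```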
“AI-based applications can reduce backlogs, cut costs, overcome resource constraints, free workers from mundane tasks, improve the accuracy of projections, inject intelligence into scores of processes and systems, and handle many other tasks humans can’t easily do on our own, such as sifting through millions of documents in real time for the most relevant content,” the report states.
AI vs. the workforce
Although some might fear a robot takeover, Eggers said federal workers should not worry about their jobs in the near term. There will likely be pressure from lawmakers to use AI to reduce the government’s headcount, but agencies should look at AI as a way to supplement employees’ work and allow them to focus on more creative and difficult tasks, he added.
The goal is to ask: “How do we automate the menial, the dull, the repetitive tasks, so you can free up labor to do more important things?” Eggers said. “There are always going to be more things for government to do than we have resources for.”
Agencies should think of AI as a new digital labor force “so we can make better decisions, so we can make faster decisions, and we can serve citizens better,” he added. “Then you can look at getting a lot of value out of this, as opposed to doing it in kind of a random way.”
Castro said AI promises to change the nature of government work, and he added that employees will “be left with the good stuff. If you can take away the pain of government bureaucracy — and AI can do a lot of that — you can change the culture of government.”
Meagan Metzger, founder and CEO of government-focused IT accelerator Dcode42, said agencies that want to adopt AI for customer-facing systems must prepare their employees for the changes. “You’re not replacing staff. You just need to change their skill set,” she said. “They’re accomplishing different things.”
She added that she and her colleagues at Dcode42, which offered an AI-in-government program for vendors this year, are seeing huge interest in AI, machine learning and related technologies from government agencies. Many, however, are still trying to understand the technologies, their effects and their ethical boundaries, she said.
How to audit a robot
Beyond employee management issues, one of the biggest obstacles to using AI in government is the auditability of the decisions intelligent systems make.
In the European Union, regulators want people to be able to demand an explanation when an intelligent system makes a decision that affects them. A version of that right to an explanation could be part of the EU’s General Data Protection Regulation, which takes effect in 2018.
A regulated right to an explanation hasn’t gained traction in the U.S., but AI experts say intelligent systems must deliver repeatable results and provide documentation that backs up their recommendations. That’s especially important when AI systems make major decisions that affect people’s lives, they say.
IBM’s Gordy said that when intelligence agencies deploy the Watson AI system, they receive the documentation the system used to come up with its answers.
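One practical pattern for keeping automated recommendations auditable is to log, for every decision, the inputs the model saw, the model version and the score behind the recommendation, so a reviewer can later reconstruct how the answer was reached. The sketch below is a hypothetical illustration of that pattern, not IBM's mechanism; the field names and values are invented.

```python
# Hypothetical audit-trail pattern: record the inputs, model version and score
# behind each automated recommendation so a human reviewer can reconstruct it later.
import json
from datetime import datetime, timezone

def log_decision(case_id: str, features: dict, score: float, recommendation: str,
                 model_version: str, log_path: str = "decision_audit.log") -> None:
    """Append one reviewable record per automated recommendation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_version": model_version,
        "inputs": features,            # the exact data the model saw
        "score": score,                # the model's confidence or risk score
        "recommendation": recommendation,
        "final_decision_by": None,     # filled in by the human reviewer
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("claim-1042", {"reported_income": 52000, "prior_flags": 2},
             score=0.87, recommendation="refer for manual review",
             model_version="fraud-screen-v3")
```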
Other experts say that in most cases, agencies should view AI as a tool for augmenting rather than replacing human decisions.
In many situations, the technology should not be making the final decision for agencies, said Aron Ezra, CEO of OfferCraft, a vendor of machine learning-powered marketing tools. In almost all cases, a human should review the AI system’s recommendation, whether the technology is approving an applicant for a government program or flagging tax fraud.
“The fear that, all of the sudden, everyone’s going to be sitting back and…letting computers make all the decisions for us is something that I don’t see happening for some time, if ever,” he said.
Furthermore, Enthoven said AI results improve substantially when the organization has an expert managing what’s going into the system.
“There’s this one approach where you throw all the data in a big hopper, turn the crank, see what comes out of the bottom, and it’s got to be right,” he said. “You’re going to have better luck and better accuracy if you don’t just turn the crank on the machine but actually understand what you’re doing.”
As with most systems, the garbage-in, garbage-out rule applies to AI. Although machine learning allows such systems to become more intelligent, it’s important to take the time to train the system and feed the proper data into it.
Prepping data
Data preparation, especially for agencies that have massive stores of information, is a time-consuming step before deploying an AI system, Enthoven said.
“Data might be spread all over the place, and it might be in formats that are hard to use,” he added. “There may be data you need but don’t yet have. In some cases, data scientists go upstream to ask for data and even reengineer processes to get the data they need.”
Agencies need good data scientists who will work with the team to “understand how they can get the data they need to fulfill the objective,” Enthoven said. After the team collects the data, it will need to be cleaned up or normalized.
A big part of data preparation often involves dealing with duplication, he added. “If you have hundreds of analysts spread across multiple locations and they’re all working in the same problem area, you can bet that many are looking at the same datasets [and] doing the same cleaning,” he said. “It’s hard for any one person to know what has already been looked at and what data has been deemed useful or non-usable.”
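To make that concrete, the snippet below sketches the kind of consolidation, normalization and de-duplication Enthoven describes, using pandas. The file names, columns and sources are hypothetical, chosen only to illustrate the steps.

```python
# Illustrative data-prep step -- hypothetical files and columns, sketching the
# consolidation, normalization and de-duplication described above.
import pandas as pd

# Pull the same kind of records from two differently formatted sources.
site_a = pd.read_csv("site_a_cases.csv", parse_dates=["opened"])
site_b = pd.read_excel("site_b_cases.xlsx", parse_dates=["open_date"])

# Normalize the two layouts to a common schema before combining them.
site_b = site_b.rename(columns={"open_date": "opened", "case_no": "case_id"})
combined = pd.concat([site_a, site_b], ignore_index=True)

# Standardize messy text fields, then drop records that duplicate one another.
combined["agency"] = combined["agency"].str.strip().str.upper()
deduped = combined.drop_duplicates(subset=["case_id", "opened"])
print(f"{len(combined) - len(deduped)} duplicate records removed")
```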
Some companies, such as Tamr, have begun to offer tools to clean up and consolidate data, and most AI systems will offer services to fix data problems, Metzger said. But data scrubbing is still a major activity that must happen before AI technology is deployed.
“Of the AI tools we have worked with, the biggest question for their customers is always ‘what state is the data in?’” she said. “Often the data is messy or needs cleansing before these tools can work effectively.”
The way the federal government categorizes its spending data is another example of the problems agencies face, said Peter Viechnicki, a data scientist at Deloitte’s Center for Government Insights and co-author of the AI paper with Eggers. Money spent on contracts is stored in one format while money spent on grants is stored in another, he said, adding that agencies have been making progress on standardizing their approach to spending data.
It “seems like a no-brainer that citizens should be able to see where and how our tax dollars are being spent, but legacy IT systems make this more challenging,” Viechnicki said.
Still, many AI experts believe the widespread use of the technology in government is inevitable. And Metzger is among those who recommend that agencies start educating themselves now.
“Because there are real problems that can be solved with AI today, don’t think of it as ‘we’re adopting artificial intelligence,’” she said. “Just think of it as you would any other IT modernization effort. It’s going to happen, so you need to figure out how to embrace it.”