Artificial Intelligence in Government and the Presidential Transition: Building on a Solid Foundation
Here are the key steps that the incoming Biden administration should take to make the federal government AI-ready.
Artificial intelligence allows computerized systems to perform tasks traditionally requiring human intelligence: analytics, decision support, visual perception, and language translation. AI and robotic process automation (RPA) have the potential to spur economic growth, enhance national security, and improve the quality of life. In a world of “Big Data” and “Thick Data,” AI tools can process huge amounts of data in seconds, automating tasks that would take days or longer for human beings to perform. The public sector in the United States is only at the beginning of a long-term journey to develop and harness these tools.
The National Academy of Public Administration identified Making Government AI Ready as one of the Grand Challenges in Public Administration. I chaired the Academy’s Election 2020 Project Working Group on AI. Our report—released in August 2020—contained a series of practical nonpartisan recommendations for how the administration in 2021 should address this Grand Challenge.
Clearly, our nation is deeply divided, and many citizens are dismissive of science and technology. If citizens don’t trust one another, might there be a day when they trust machines more? What will the promises of AI bring, and why is this important? And given that a large portion of today’s society seems unable to accept facts, can AI one day be used to curtail the spread of conspiracy theories?
Despite the divisiveness of today’s political landscape, it is reassuring to note that a cadre of highly dedicated and knowledgeable career public managers has traditionally passed the torch of technology innovation from one administration to another. I expect this to happen again over the next couple of months, even amid the current political turmoil.
What are key steps that the incoming Biden administration should take to make the federal government AI-ready? First, it can build upon the progress made on AI during the Trump administration. Of particular importance was the AI executive order issued in February 2019. This order directed the federal government to pursue five goals: invest in AI research and development, unleash AI resources, remove barriers to AI innovation, train an AI-ready workforce, and promote an international environment supportive of American AI innovation and responsible use. Federal agencies were also directed to identify ways that they can enable the use of cloud computing for AI R&D.
Other recommendations include:
- Build trustworthy AI by establishing a single, authoritative, and recognized federal entity that focuses on AI’s social, cultural, and political effects, and leverages existing investments to create guidance and solutions.
- Use ethical frameworks to identify and reduce bias in AI by demonstrating a federal government commitment to ethical principles and standards in AI development and use.
- Build intergovernmental partnerships and knowledge sharing around public sector uses of AI by developing an interagency and intergovernmental mechanism that addresses the need to share practices between different levels of government, incentivizes and stimulates broader AI adoption, and addresses gaps in readiness to build an AI workforce for all levels of government.
- Increase investments in AI research and the translation of research to practice by increasing public access to federal government data, increasing investment in unclassified AI research by at least 50 percent, ensuring the protection of privacy at the individual level, and removing biases from programming to ensure equitable treatment.
- Build an AI-ready workforce by providing funding to support the growth of an AI-competent federal workforce, developing policies and funding incentives that encourage AI R&D to use multidisciplinary teams, and supporting studies to increase understanding of current and future national workforce needs for AI R&D.
It is especially critical for the incoming administration to build a trustworthy AI environment. Even amid public skepticism, a majority of Americans recognize the need to carefully manage AI, with the greatest importance placed on safeguarding data privacy; protecting against AI-enhanced cyberattacks, surveillance, and data manipulation; and ensuring the safety of autonomous vehicles, the accuracy and transparency of disease diagnosis, and the alignment of AI with human values.
And building trust will require an ethical framework. Today we recognize that AI, when coupled with huge amounts of quality data, can be highly useful in identifying patterns, seeking out anomalies, making real-time recommendations based on data inputs, communicating both verbally and in writing, and continually learning and improving. But what happens if the data turns out to be flawed, or if unintended bias creeps into increasingly complex algorithms?
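To make that concern concrete, here is a minimal, illustrative sketch of one way an agency reviewer might screen a system’s outputs for unintended bias. The group labels, decisions, and metric are hypothetical and not drawn from any specific federal program; a real audit would involve far richer fairness and data-quality checks.

```python
# Illustrative sketch only: a simple "demographic parity" check that compares
# how often an AI system approves cases for different groups. A large gap is
# one warning sign that flawed data or a biased algorithm needs review.

from collections import defaultdict

# Hypothetical (group, decision) pairs produced by some automated system,
# where 1 means "approved" and 0 means "denied".
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

# Approval rate per group, and the gap between the best- and worst-treated groups.
rates = {group: approvals[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print("Approval rate by group:", rates)
print("Demographic parity gap:", gap)
```

Run as written, the sketch reports a 0.75 approval rate for one group and 0.25 for the other, a gap that would prompt a human reviewer to examine the underlying data and model before trusting its recommendations.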
Implementing these recommendations will require sustained leadership commitment and steadfast focus, sufficient funding, and both interagency and intergovernmental coordination. I have every reason to believe the great work that started in 2019 will continue for many years to come. And for anyone seeking a solid example of what AI can do, the recent string of breakthrough COVID-19 vaccine announcements was made possible in part by applying AI to analyze the genetic sequence of the virus itself. Through massive simulation of candidate combinations and known interactions, promising vaccines emerged in a mere six months instead of six years.
This is the promise of AI. The incoming administration can build on recent successes and ensure that AI is used to the benefit of all Americans.
Dr. Alan R. Shark is executive director of CompTIA’s Public Technology Institute and an associate professor at Schar School of Policy and Government at George Mason University. He is a Fellow of the National Academy of Public Administration, where he is Chair of the Standing Panel on Technology Leadership.