Agriculture Department is taking a ‘cautious’ approach to generative AI
The agency’s data “also needs to be in a condition… to support those kinds of tools,” according to USDA chief data officer and responsible AI officer Chris Alvares.
The Agriculture Department is approaching generative artificial intelligence cautiously as it wades through questions about the technology’s efficacy, as well as potential impacts on privacy, equity and security, Chris Alvares, the department’s responsible AI officer and chief data officer, said during a Nextgov/FCW event on Wednesday.
“I do see a lot of opportunities, a lot of potential, for generative AI to help USDA be more efficient, deliver programs more effectively, understand the complexity of our organization and be able to describe that better than we can today without those tools,” he said. “But there’s a lot of uncertainty around how to use those tools… in a federal agency and how to do that safely and securely, so we need to take a cautious approach.”
Alvares’s comments echo those of USDA chief information officer Gary Washington, who, speaking about AI more broadly, told Nextgov/FCW in July that “we have to be very cautious.”
The remarks give insight into how one part of the federal government is approaching generative AI. Washington himself said that some agencies “have been proactive about dipping their toes into [AI], but I just felt like we need to put some guardrails around it because that could potentially be dangerous.”
Generative AI has received increasing attention since OpenAI released ChatGPT in late 2022. Microsoft opened up access to ChatGPT and other generative AI tools to government customers in June.
The technology has the potential to help government agencies solve data problems, rewrite jargon and improve chatbots.
But, as Alvares said, generative AI is also the subject of a long list of questions about security, bias, civil rights and privacy implications — especially given the large amounts of data needed to train algorithms — and even the accuracy of the tools’ outputs.
The White House is currently drafting policy guidance on the use of AI in the federal government. But for now, existing frameworks, namely the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework, are voluntary.
“Having some broad, federal-wide guidance on how to approach some of this will help us be more coordinated as a federal government as a whole,” said Alvares, noting that visibility into AI use is important, as is monitoring whether the tools are working as intended.
For now, the department is exploring potential generative AI use cases.
“How do people want to apply these? What types of analyses do they want these generative AI tools to do?” Alvares said. “How are we going to monitor them to make sure they’re producing the kind of output we expect, that they’re trustworthy, that we have the ability to maybe override it with human decisions if we need to? There’s a lot for us to figure out there.”
Regarding AI more broadly, the department listed more than 30 use cases in its public inventory for 2023.
“We’ve been doing things like analyzing satellite imagery to understand forest health and wildfire risks,” Alvares said.
“We’ve been using artificial intelligence to understand the benefits of conservation practices, or to analyze orchard health based on data that we can gather about an orchard, for example, or to predict how an invasive species might spread if it inadvertently gets introduced into the United States. We’ve even done some more basic things around extracting data from documents,” he said. “Those are all types of artificial intelligence that have been happening for many years at USDA.”
Asked about the challenges of using AI, Alvares said that “the conversations around AI that we’re having, to me, are just a reminder that our data also needs to be in a condition… to support those kinds of tools.
“We need to be addressing things like the quality of the data,” he added. “We need to be aware that some of our data has biases in it, historical ways that we’ve done things that we don’t want to continue in the future. And we need to have a good awareness of what those biases in the data are and how do we approach this in a way that rectifies that.”
Jim Barham, assistant chief data officer and division director for the Data Analytics Division at the department’s Rural Development Innovation Center, said that Rural Development, one of the department’s eight mission areas, has been doing substantial foundational work on data that has historically been fragmented and disparate.
“We’ve been just sort of crawling for a long time and we finally stood up and we’re starting to walk,” he said. “For us to go from our walking position to a flat-out run with AI — it’s probably not prudent.
“We just have to be cautious as a federal agency. We have tremendous responsibility over our customers and their data,” he said, “so yes, we’re not going to be leaders in AI, right? This isn’t how it works with government. We’re going to take a more cautious approach, but ultimately one that I think will reap benefits for not just the federal government, but of course for all of our customers.”