Companies turn to risk mitigation tools to monitor AI absent federal law, study finds
While Congress debates AI regulation, Gartner has found that a majority of businesses are turning to new IT tools to self-regulate their use of AI and manage its risks.
Last week was a big one for artificial intelligence, as Congress continued to invite tech executives to talk about how AI works, what it can do and how they feel it should be regulated. The many meetings and hearings are occurring as generative AI continues to develop at lightning speed, with new AI models like GPT-4, Google Bard, the new Bing, Stable Diffusion and many others recently released or updated. And there are many more AI models in development, each attempting to be smarter and more useful than those that came before it.
But there are also security concerns, as AI models could be put to nefarious uses well beyond their intended purposes. People are also rightly concerned about being discriminated against by AI models that reach incorrect or even prejudicial conclusions based on faulty training data, which could negatively affect everything from their healthcare to their financial well-being. Yet even with those dangers, there is still greater concern that overly stringent regulations could stifle AI development and harm the nascent industry while it works on creating what is potentially one of the most important technologies of the near future. With that in mind, many industry leaders are urging Congress to adopt a light touch when it comes to AI regulations so that the United States can maintain its competitive edge in the field.
In the meantime, many businesses are adopting new IT tools to help self-regulate and protect their use of AI, ensuring that whatever systems they employ are not able to harm their business or negatively impact their customers. That is according to a just-released Gartner study, which found that business leaders feel AI can provide a competitive edge — but that it also comes with potential dangers.
Gartner conducted the survey among 150 members of its Peer Community, who are generally IT and security leaders at the world’s top companies. When asked about the risks associated with using AI in their organizations, the majority of respondents were concerned. Specifically, 58% worried about incorrect or biased outputs from their AIs, while 57% were concerned about their AIs leaking secrets or divulging more information than they should.
According to Avivah Litan, the Gartner analyst who worked on the project, the survey respondents were well aware of both AI’s potential and the risks that come with it.
“Organizations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, can cause human or property damage,” said Litan. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organizations to make poor business decisions.”
To compensate for some of the potential negative aspects of the technology, many of the survey respondents are using newly created IT tools designed to watch over AIs and help guide their performance. The so-called AI TRiSM — trust, risk and security management — tools are relatively new and were designed to help self-regulate AIs by, for example, examining the datasets used to train a model for signs of bias, or monitoring AI responses to ensure they comply with existing regulations or company guidelines. Theoretically, if Congress were to pass laws regarding the use of AI, the TRiSM tools could also be used to help enforce them.
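To make the first of those ideas concrete, here is a minimal sketch of the kind of training-data check such a tool might automate: measuring whether one group in a dataset receives favorable outcomes far less often than another. The column names ("group", "label") and the toy data are hypothetical illustrations, not drawn from any particular product.

```python
# A minimal sketch of one kind of check an AI TRiSM tool might run:
# scanning a training dataset for outcome disparity across a sensitive
# attribute. Column names ("group", "label") are hypothetical.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str = "group",
                           label_col: str = "label",
                           privileged: str = "A") -> float:
    """Ratio of favorable-outcome rates: unprivileged vs. privileged group.
    Values well below 1.0 (a common rule of thumb is below 0.8) suggest the
    training data may encode bias that a model could learn."""
    rates = df.groupby(group_col)[label_col].mean()
    unprivileged_rate = rates.drop(privileged).mean()
    return unprivileged_rate / rates[privileged]

# Toy example: group "B" receives favorable outcomes far less often than "A".
data = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "label": [1] * 40 + [0] * 10 + [1] * 15 + [0] * 35,
})
print(f"Disparate impact ratio: {disparate_impact_ratio(data):.2f}")
```

Commercial TRiSM products run far more sophisticated statistical tests than this, but the underlying idea is the same: surface skew in the training data before a model learns it.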
“IT and security and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI TRiSM,” said Litan. “AI TRiSM manages data and process flows between users and companies who host generative AI foundation models, and [it] must be a continuous effort, not a one-off exercise, to continuously protect an organization.”
According to the survey, 56% of respondents said their companies are investigating the use of the new AI-watchdog tools, while 36% said they were already using them. The tools employed ranged from privacy-enhancing technologies to ones that examine AI training data to ones that directly monitor the behavior of the AI itself.
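That last category — directly monitoring model behavior — maps onto the "leaking secrets" concern respondents raised. Below is a minimal sketch of such a monitor, assuming a hypothetical call_model() function that returns text from a hosted generative AI service; the blocked patterns, function names and audit message are illustrative assumptions rather than features of any real TRiSM product.

```python
# A minimal sketch of a response-monitoring wrapper: it screens generative
# AI output for patterns an organization does not want leaving its systems
# (here, email addresses and API-key-like strings) before a user sees it.
# call_model is a hypothetical stand-in for a hosted foundation model API.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible API key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screened_completion(prompt: str, call_model) -> str:
    """Send the prompt to the model, then withhold risky output."""
    response = call_model(prompt)
    for name, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(response):
            # Record the violation for review and return a safe message.
            print(f"[audit] response withheld: contained {name}")
            return "Response withheld pending compliance review."
    return response

# Usage with a stand-in model function:
fake_model = lambda p: "Contact admin@example.com for key sk-ABCDEF1234567890XYZ"
print(screened_completion("Who do I ask about access?", fake_model))
```

Real monitoring tools layer policy engines, logging and human review on top of this basic intercept-and-check flow, but the architecture — sitting between the user and the hosted model — is what Litan describes above.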
Until Congress acts on AI regulations, self-regulating tools may be the only guardrails organizations can employ to protect their AI data. Last year, the White House unveiled its Blueprint for an AI Bill of Rights, which among other things called for the responsible use of AI — and for protections such as letting people opt out of AI-driven decision-making by requesting a human evaluation instead. NIST has also introduced a framework outlining responsible practices for creating and deploying AI. However, both of those documents are only guidelines for AI use, with no regulatory or enforcement power backing them up.
Several bills have also been introduced in Congress to try to regulate AI — most recently the Algorithmic Accountability Act of 2022 — but none of them has come close to working its way through a divided Congress.
Until well-defined AI regulations are put in place, the new AI TRiSM tools can give both government agencies and private-sector companies a way to monitor and control their use of AI, letting them capture many of the technology’s benefits without as many of the associated risks.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys