AI concerns continue as governments look for the right mix of regulations and protections
The technology is starting to rack up an impressive portfolio of success stories, but there could be dangers and downsides as well.
There is little doubt that the emerging science of artificial intelligence is continuing to advance and grow, both in its capabilities and in the number of things AI is being tasked with doing. And this is happening at lightning speed, despite there being little to no regulation in place to govern its use.
In the case of AI, and especially new generative AI models like GPT-4, the reason both agencies and the private sector are proceeding, even without guardrails, is likely that the potential benefits of game-changing AI seem to outweigh the associated risks. And AI is also starting to rack up an impressive portfolio of success stories.
For example, NASA and the National Oceanic and Atmospheric Administration recently tasked AI with predicting potentially deadly solar storms, and the AI is now able to give warnings about those events up to 30 minutes before a storm even forms on the surface of the sun. And in November, emergency managers from around the country will meet to discuss tasking AI with predicting storms and other natural disasters that originate right here on Earth, potentially giving more time for evacuations or preparations and possibly saving a lot of lives. Meanwhile, over in the military, unmanned aerial vehicles and drones are being paired up with AI in order to help generate better situational awareness, or even to fight on the battlefields of tomorrow, keeping humans out of harm’s way as much as possible. The list goes on and on.
But there could be dangers and downsides to AI as well, a fact that those who work with the technology are increasingly aware of. This week, Bitwarden and Propeller Insights released the results of a survey of more than 600 software developers from across the public and private sectors, many of them working on projects involving AI. A full 78% of respondents said that the use of generative AI would make security more challenging. In fact, 38% said that AI would become the top threat to cybersecurity over the next five years, making it the most popular answer. The second biggest threat predicted by the developer community, ransomware, was cited by just 19% of participants.
Self-regulation reigns
Although no laws yet exist in the United States for regulating AI, there are an increasing number of guidelines and frameworks to help provide direction on how to develop so-called ethical AI. One of the most detailed was recently unveiled by the Government Accountability Office. Called the AI Accountability Framework for Federal Agencies, it provides guidance for agencies that are building, selecting or implementing AI systems.
According to those from the GAO and the educational institutions that helped to draft the framework, the most responsible uses of AI in government should be centered on four complementary principles — governance, data, performance and monitoring — all of which are covered in detail within the GAO framework.
Another framework that has gotten a lot of attention, although it has no legal power, is the White House Office of Science and Technology Policy's AI Bill of Rights. The framework does not give specific advice but instead lays out general rules about how AI should be employed and how its interactions with humans should be allowed or restricted. For example, it states that people should not face discrimination based on the decision of an algorithm or an AI. The framework also asserts that people should know when an AI is being used to make a decision about them. So, if someone is being considered for a loan, the bank they are applying to should disclose whether a human or an AI will make the final decision. And if an AI is doing it, people should be able to opt out of that process and instead have their application reviewed by real people.
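To make the disclosure and opt-out ideas concrete, here is a minimal sketch, in Python, of how a lender's decision pipeline might record whether a determination was automated and honor a request for human review. The names used here (LoanApplication, decide, the stub scoring functions) are hypothetical illustrations, not something prescribed by the AI Bill of Rights or drawn from any real lending system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoanApplication:
    applicant_id: str
    # The applicant can ask up front that no automated system decide the outcome.
    opted_out_of_ai: bool = False
    decision: Optional[str] = None      # "approved" or "denied"
    decided_by: Optional[str] = None    # "ai_model" or "human_reviewer"

def ai_score(application: LoanApplication) -> str:
    # Stub standing in for a real underwriting model.
    return "approved"

def human_review(application: LoanApplication) -> str:
    # Stub standing in for routing the file to a loan officer's queue.
    return "approved"

def decide(application: LoanApplication) -> LoanApplication:
    """Decide the application and record who made the call, so the
    disclosure can be shown to the applicant."""
    if application.opted_out_of_ai:
        application.decision = human_review(application)
        application.decided_by = "human_reviewer"
    else:
        application.decision = ai_score(application)
        application.decided_by = "ai_model"
    return application

app = decide(LoanApplication(applicant_id="A-1001", opted_out_of_ai=True))
print(f"Decision: {app.decision} (made by: {app.decided_by})")
```

The point of the sketch is simply that the disclosure comes almost for free once the system records who, or what, made each decision.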
Even though the AI Bill of Rights is merely a guideline, there have been calls for the government to make it binding, at least in how it applies to federal agencies. In Europe, that kind of legal action may soon happen. If it does, it will affect all entities working with AI within the European Union, not just government agencies. The proposed regulations were put forward as part of the Artificial Intelligence Act, which was first introduced in April 2021.
Unlike the more high-level guidance detailed in the AI Bill of Rights, the Artificial Intelligence Act more carefully defines which AI activities would be allowed, which would be heavily regulated, and which would be banned outright for carrying an unacceptable level of risk. For example, activities that would be illegal under the AI Act include having AI negatively manipulate children, such as an AI-powered toy that encourages bad behavior. Anything that uses AI to score or classify people based on personal characteristics, socio-economic status or behavior would also be illegal.
High-risk activities — like the use of AI in education and training, law enforcement, assistance in legal actions, the management of critical infrastructure and other similar areas — would be allowed but heavily regulated. There is even an entire section of the AI Act that applies to generative AI, allowing the technology but requiring users to disclose whenever content is AI-generated. Model owners would also need to disclose any copyrighted materials that went into a model's creation, and would be prevented from generating illegal content.
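For readers who think in code, one rough way to picture the Act's tiered structure is as a mapping from use case to risk tier, loosely following the categories described above. The tier labels and example use cases below are illustrative assumptions, not the Act's legal language.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright"
    HIGH_RISK = "allowed but heavily regulated"
    LIMITED_RISK = "allowed with transparency obligations"
    MINIMAL_RISK = "largely unregulated"

# Illustrative pairings only; the Act defines these categories in legal detail.
EXAMPLE_USE_CASES = {
    "toy that encourages bad behavior in children": RiskTier.PROHIBITED,
    "social scoring based on personal characteristics": RiskTier.PROHIBITED,
    "AI used in education or training": RiskTier.HIGH_RISK,
    "AI assisting law enforcement": RiskTier.HIGH_RISK,
    "management of critical infrastructure": RiskTier.HIGH_RISK,
    "generative AI that produces content": RiskTier.LIMITED_RISK,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case, defaulting to minimal risk when it is not listed."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL_RISK)

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case}: {tier.value}")
```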
Finding a path forward
A highly regulated approach to AI development, like the European model, could help keep people safe, but it could also hinder innovation in the countries that adopt the new standard, which EU officials have said they want in place by the end of the year. That is why many industry leaders are urging Congress to take a lighter touch with AI regulation in the United States. They argue that the United States is currently the world's leader in AI innovation, and that strict regulations would severely hinder that lead.
Plus, an emerging class of AI TRiSM tools, short for trust, risk and security management, is just now being deployed and could help companies self-regulate their AIs. TRiSM tools do that by examining the datasets used to train AI models for signs of bias, by monitoring AI responses to make sure they comply with existing regulations or guidelines, or by helping to train the AI to act appropriately.
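As a rough illustration of the response-monitoring piece, the sketch below wraps a model call in a simple compliance check that withholds outputs matching a deny-list and writes an audit record for each request. The deny-list terms, the generate stub and the audit format are assumptions made for the example; real TRiSM products apply far more sophisticated policies.

```python
import json
import time

# Hypothetical deny-list standing in for a real compliance policy.
BLOCKED_TERMS = {"social security number", "medical record"}

def generate(prompt: str) -> str:
    # Stub standing in for a call to an actual generative model.
    return f"Draft response to: {prompt}"

def monitored_generate(prompt: str, audit_log: list) -> str:
    """Run the model, check the output against the policy, and record the result."""
    response = generate(prompt)
    violations = [term for term in BLOCKED_TERMS if term in response.lower()]
    audit_log.append({
        "timestamp": time.time(),
        "prompt": prompt,
        "violations": violations,
        "released": not violations,
    })
    if violations:
        return "[response withheld pending human review]"
    return response

audit_log: list = []
print(monitored_generate("Summarize our new leave policy", audit_log))
print(json.dumps(audit_log[-1], indent=2))
```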
Whether it takes the form of a strict approach like the European model, a lighter set of guidelines like those currently in use in the United States, or self-regulation by the companies programming and crafting new AIs, it's clear that some form of regulation or guidance is needed. Even the developers working on AI projects acknowledge that the technology could prove dangerous under certain circumstances, especially as it continues to advance and improve over the next few years. The question then becomes which of those approaches is best, and it may be a long time before that question can be definitively answered.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys