It's too soon to regulate AI
COMMENTARY | While restrictions and export controls related to specific compute and hardware platforms are possible, it is virtually impossible to monitor and enforce how these widely available computing platforms are actually used.
As the undisputed top technology trend of the year, artificial intelligence has given rise to an array of concerns and speculation. Some policymakers, government officials and AI researchers are calling for swift restrictions on the technology. However, we must approach any regulatory regime thoughtfully and carefully: hastily enacted regulation could have numerous unintended consequences and harm American businesses, national security, and the American people.
First, we must look at what it is we are seeking to regulate. Instead of applying restrictions to the technology itself, we should continue to evolve common-sense regulation around how AI is applied. For example, privacy regulations at the state level and any future national privacy legislation should be adjusted to account for AI, since many concerns about AI are actually concerns about privacy. Focusing on how the technology is applied, rather than restricting the models and research themselves, is an important distinction that will allow us both to foster innovation in the field and to ensure safe, responsible use of the technology.
Ensuring safety is not new to cybersecurity practitioners. For decades, the industry has invested in creating technology to defend against advanced cyberattacks perpetrated by both cybercriminals and nation-states. Building an effective defense requires the best technology available, and in today’s cyber battleground, that technology is AI.
Bad actors use AI to craft increasingly targeted and destructive attacks, and the cyber defense industry uses the latest AI to detect and defend against them. This cat-and-mouse game plays out at lightning speed, making it imperative that defenders have the freedom to leverage emerging technology in a responsible and ethical way. Cybercriminals and adversarial nations, by nature, are not inclined to follow U.S. regulations or laws. Any regulation that slows the adoption of new technology in cyber defense will hand bad actors an advantage and increase cyber risk for consumers, businesses, and U.S. national security.
Looking beyond the specific case of cybersecurity, it’s important to recognize that any regulation and oversight in this area will, by default, favor larger businesses and corporations with the funds to become “compliant” or “licensed.” Heading down this path would edge out small businesses, startups, and entrepreneurs. Google, Apple, and Microsoft didn’t emerge from large corporations; they started small, in garages and dorm rooms, unencumbered by overarching regulatory restrictions and the financial burdens that come with them. Overly burdensome and unnecessary regulation will prevent smaller companies and startups from competing with well-established players that can afford to work through compliance issues.
We also need to consider the logistics of implementing AI-specific regulations. Rules in this field cannot be enforced as rigidly as they can in traditional technological areas. Nuclear technology, for example, can be controlled by limiting access to precursor materials such as uranium and plutonium. The precursor for advanced AI, by contrast, is access to large amounts of computing power, which is broadly available from cloud providers worldwide and used nearly ubiquitously across U.S. corporations, small businesses, and government.
Further complicating implementation is that the underlying building blocks of AI have no physical presence; they are ultimately mathematical algorithms executed by computers. While restrictions and export controls related to specific compute and hardware platforms are possible, it is virtually impossible to monitor and enforce how these widely available computing platforms are actually used. Similarly, the nature of AI allows flexibility in modeling approaches, meaning implementations can and will be created that sidestep AI-specific regulations. The result would be an advantage for bad actors and needless friction for legitimate researchers and organizations.
Yet without regulation, how do we ensure the responsible use of AI technology? There is a middle ground: focus on developing frameworks that encourage the responsible and ethical use and development of AI. The Biden Administration and 15 AI companies have already agreed on voluntary guidelines for AI. Industry leaders should continue working together to develop frameworks and best practices that model the ethical use and implementation of AI while also providing a guide for smaller businesses and innovators to follow. We should also understand how AI can and will be applied across industries and facets of American life, and evolve existing regulations accordingly.
Furthermore, government has sometimes tried to spur behavior in the private sector by focusing on federal or state entities first. This is the case in cybersecurity, for instance, where a comprehensive executive order requires agencies to implement specific practices and technologies, with the expectation that businesses and organizations will follow suit. A similar flow-down approach could prove successful for AI.
We are at a critical inflection point: AI has radically changed our world and will continue to do so in myriad ways, both positive and negative. However, we cannot let fear or uncertainty drive policy that hinders positive innovation. Working through these nuanced discussions will take time, and we should not rush to implement restrictions and gates that may ultimately do more harm than good.