SEC chair: Existing financial law can be applied to AI regulatory debate
The head of the Securities and Exchange Commission discussed the potential pitfalls of leveraging AI in investment banking, underscoring the need to prioritize consumers.
The Securities and Exchange Commission could have an avenue for regulating artificial intelligence based on existing securities law, particularly surrounding the usage of AI-based financial tools and how brokers may need to navigate an automated trading environment, according to Chair Gary Gensler.
Speaking during a forum at Yale, Gensler discussed his agency’s potential role in providing oversight of firms that use retail investing technologies.
“We at the SEC take investor education very seriously,” he said. “The robo-advising platforms and brokerage platforms affirmatively put educational material on their site. What their obligation is, is to make sure that when they do that, it's accurate and not misleading.”
He clarified that, under existing securities law, it is likely not within the SEC’s purview to require financial firms to offer education if they choose not to. But should investment firms use AI and machine learning models to aid in certain decisions, Gensler said they should abide by basic disclosure requirements with clients.
“Investor protection requires that the humans who deploy a model …put in place appropriate guardrails,” he said. “If you deploy a model…you’ve got to make sure that it complies with the law.”
A specific guardrail Gensler mentioned was testing an AI model to ensure it minimizes common risks, such as hallucinations, and prohibiting illegal investment strategies like frontrunning and spoofing, which use algorithms to manipulate trading activity and financial markets.
Despite the emerging nature of AI in certain industries, Gensler maintained that disclosure obligations regarding the use of AI systems still broadly apply, encompassing both risks and benefits.
“Companies should ask themselves sort of the basic questions: ‘Am I discussing AI in earnings calls? Am I discussing something consequential or extensive about it with my board? Then maybe it’s material. Maybe I should tell the public about it,’” he said.
One potential conflict comes in the form of using predictive data analytics created by certain algorithms to optimize financial gains for either the investor or a given broker. He said the problem arises when the AI system prioritizes the platform or brokerage ahead of the customer.
“If the predictive data analytics is optimizing on our investor interest and still putting the investor first before the adviser, the broker, okay,” Gensler said. “But if you put in the optimization function…that you're also optimizing for the revenues or profits or interest of the advisor or broker dealer, then therein lies a conflict.”