The head of the U.S. Securities and Exchange Commission (SEC), Gary Gensler, has expressed concerns over the unchecked influence of Artificial Intelligence (AI) in the financial sector. He emphasized the need for regulators to be vigilant and proactive in addressing the potential risks posed by AI.
AI's Growing Influence in Finance
Gensler believes that the increasing concentration of data managed by AI-driven platforms poses significant risks to the financial system. “It's frankly a hard challenge,” Gensler commented in an interview with the Financial Times. He pointed out the complexity of regulating AI, especially when many institutions might rely on similar base models or data aggregators.
While the SEC chairman acknowledges the need for regulation, he also recognizes the challenges in crafting a comprehensive framework for AI in the U.S., since many of the solutions being developed by tech firms may not fall directly under the SEC's jurisdiction. Gensler has been vocal about supporting only positive AI trends and has consistently called on Congress to back the commission's stance.
Taking Steps to Regulate AI
The rise of AI into the mainstream has been relatively sudden, with 2023 the year in which the technology became prominent. As generative AI grows more powerful and widely available, regulators are scrambling to craft laws to govern it.
In the UK, the Competition and Markets Authority (CMA) has taken a significant step in the realm of artificial intelligence by unveiling a comprehensive set of principles aimed at guiding the development and deployment of AI foundation models.
In the US, the White House has announced that eight more tech companies, namely Adobe, IBM, Nvidia, Cohere, Palantir, Salesforce, Scale AI, and Stability AI, have pledged their commitment to the development of safe, secure, and trustworthy artificial intelligence (AI). This move builds upon the Biden-Harris Administration's efforts to manage AI risks and harness its benefits.
In July, when the initiative was launched, leading U.S. tech companies, including OpenAI, Google, Microsoft, Amazon, Anthropic, Inflection AI, and Meta, agreed to the voluntary safeguards. The commitments are divided into three categories, Safety, Security, and Trust, and apply to generative models that surpass the current industry frontier.