
SEC Chairman Gary Gensler Warns AI Might Cause Financial Crisis

Gary Gensler believes that the increasing concentration of data managed by AI-driven platforms poses significant risks to the financial system.


The head of the U.S. Securities and Exchange Commission (SEC), Gary Gensler, has expressed concerns over the unchecked influence of artificial intelligence (AI) in the financial sector. He emphasized the need for regulators to be vigilant and proactive in addressing the potential risks posed by AI.

AI's Growing Influence in Finance

Gensler believes that the increasing concentration of data managed by AI-driven platforms poses significant risks to the financial system. “It's frankly a hard challenge,” Gensler commented in an interview with the Financial Times. He pointed out the complexity of regulating AI, especially when many institutions might rely on similar base models or data aggregators.

While the SEC chairman acknowledges the need for regulation, he also recognizes the challenges in crafting a comprehensive framework for AI in the U.S. This is due to the diverse solutions being developed by tech firms that might not fall directly under the SEC's jurisdiction. Gensler has been vocal about supporting only positive AI trends and has consistently called on Congress to back the commission's stance.

Taking Steps to Regulate AI

The rise of AI into the mainstream has been relatively sudden, with 2023 being the year in which AI tools became prominent. As AI grows more powerful and widely available, regulators are scrambling to create laws to govern this new technology.

In the UK, the Competition and Markets Authority (CMA) has taken a significant step in the realm of artificial intelligence by unveiling a comprehensive set of principles aimed at guiding the development and deployment of AI foundation models.

In the US, the White House has announced that eight more companies, including Adobe, IBM, Nvidia, Cohere, Palantir, Salesforce, Scale AI, and Stability AI, have pledged their commitment to the development of safe, secure, and trustworthy artificial intelligence (AI). This move builds upon the Biden-Harris Administration's efforts to manage AI risks and harness its benefits.

In July, when the initiative was launched, leading U.S. AI companies, including Anthropic, Inflection AI, and Meta, agreed to the voluntary safeguards. The commitments are divided into three categories: Safety, Security, and Trust, and apply to models that surpass the current industry frontier.

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
