
Godfather of AI Geoffrey Hinton Doubts Good AI Will Triumph over Bad AI


Geoffrey Hinton, often called the ‘Godfather of AI’ and a professor at the University of Toronto, expressed his concerns about the rapid development of artificial intelligence (AI) at the 2023 Collision tech conference in Toronto. Hinton, who recently left Google so he could speak freely about the field he helped pioneer, warned about the unchecked acceleration of AI development, particularly in generative AI tools such as ChatGPT and Bing Chat. He expressed doubt that good AI will triumph over bad AI and suggested that the ethical adoption of AI may come at a steep cost.

“AI Is Only as Good as Its Creators”

Hinton argued that AI is only as good as its creators and that bad tech could still win. He expressed concerns about the potential for AI to be used in warfare and the possibility of AI exacerbating wealth inequality. He also reiterated his view that AI could pose an existential risk to humanity, stating that if AI becomes smarter than humans, there is no guarantee that humans will remain in control.

Bias and Misinformation in AI

Hinton highlighted existing problems with AI, including bias and discrimination due to skewed training data, and the creation of echo chambers that reinforce misinformation. He also expressed concern about AI spreading misinformation beyond these echo chambers and emphasized the importance of marking fake content as such.

AI’s Potential and the Need for Regulation

Despite his concerns, Hinton acknowledged the potential benefits of AI but warned that their realization might come at a high price. He suggested that humans might need to conduct empirical work to understand how AI could go wrong and prevent it from taking control. He also called for more balance in AI development efforts, with more focus on understanding and managing the risks of AI.

While Hinton expressed concerns, other industry figures at the Collision conference were more optimistic. Adam Selipsky, CEO of Amazon Web Services, discussed Amazon’s ongoing work on its own large language models, set to be released later this year. Google DeepMind’s business chief, Colin Murdoch, and Roblox Chief Scientist Morgan McGuire also expressed hope for AI’s potential to solve global challenges and enhance creativity, respectively.

Hinton’s comments reflect a broader debate within the tech industry about the future of AI. While AI has the potential to revolutionize many aspects of society, its rapid development also raises significant ethical and practical concerns. Balancing these risks and rewards will be a key challenge for policymakers, tech companies, and society as a whole in the coming years.

AI Regulation: Recent Developments

  1. EU Businesses Express Concern Over AI Act (June 30, 2023): Over 150 executives from top European companies, including Renault, Heineken, Airbus, and Siemens, have signed an open letter expressing concerns about the EU’s AI Act. They argue that the Act’s strict regulations could stifle AI innovation and discourage companies from using AI to create new products and services. The executives suggest a more flexible, risk-based approach that focuses on AI’s actual use cases rather than the underlying technology.

  2. OpenAI’s Lobbying Efforts Against the AI Act (June 20, 2023): OpenAI has been lobbying European officials to water down the EU’s proposed AI Act. The company argues that its general-purpose AI systems, such as GPT-4, should not be considered “high risk” and should therefore be exempt from the Act’s regulations. OpenAI’s lobbying efforts have been somewhat successful, as the current draft of the AI Act does not include GPT-4 or other general-purpose AI systems among the list of high-risk AI systems.

  3. EU Parliament Approves AI Act (June 14, 2023): The European Union Parliament has given the green light to the European AI Act. This legislation is designed to regulate AI usage, focusing on systems that pose a high level of risk, such as predictive policing tools and social scoring systems. The Act introduces new restrictions on high-risk AI systems that could potentially manipulate voters or endanger health, and it outlines new rules for generative AI, requiring AI-generated content to be clearly labeled and summaries of copyrighted training data to be published. The Act’s implications are so significant that OpenAI, the maker of ChatGPT, may consider exiting the European market. The Act is still under negotiation with the European Council.

  4. EU Plans to Label AI-Generated Content (June 5, 2023): The European Union has urged companies like Google, Facebook, and Microsoft to start labeling all AI-generated content, such as deepfakes and synthetic media. The EU’s Digital Services Act (DSA) requires online platforms to take measures to prevent the spread of harmful content, including AI-generated content that could be used to mislead or deceive users. The regulation is set to come into force in 2024.

  5. Australia Plans Regulatory Framework for AI (June 5, 2023): The Australian government, under the leadership of Industry and Science Minister Ed Husic, has launched a comprehensive review of AI in response to global concerns. The review, which is set to last eight weeks, aims to establish a new regulatory framework for AI, with a particular focus on high-risk areas such as facial recognition. The review will explore the possibility of strengthening existing regulations, introducing new AI-specific legislation, or a combination of both.

  6. Microsoft President Calls for Generative AI Regulations (May 31, 2023): Microsoft’s President, Brad Smith, has voiced his support for generative AI regulations. He emphasized the need for a framework that ensures the responsible use of AI technologies, adding his voice to the growing chorus of advocates for AI regulation.

  7. Microsoft Publishes Governance Blueprint for Future Development (May 26, 2023): Microsoft has shared a blueprint outlining its vision for AI governance. The report, titled “Governing AI: A Blueprint for the Future”, presents five key principles that Microsoft believes should guide AI development and usage. The company’s proposed five-step blueprint for public AI governance includes the implementation of government-led AI safety frameworks, the establishment of a new federal agency dedicated to AI policy, and the promotion of responsible AI practices across sectors.

  8. G-7 Leaders Initiate ‘Hiroshima Process’ (May 21, 2023): The leaders of the G-7 countries have agreed to establish a governance protocol named the ‘Hiroshima Process’ in response to the rapid advancement of generative AI. This agreement aims to ensure that AI development and deployment align with the shared democratic values of the G-7 nations.

  9. OpenAI CEO Calls for Urgent AI Regulation (May 17, 2023): Sam Altman, the CEO of OpenAI, has testified before a US Senate subcommittee, advocating for the regulation of rapidly advancing AI technologies. Altman suggested the establishment of an agency that would issue licenses for the development of large-scale AI models, enforce safety regulations, and require AI models to pass tests before public release.

Last Updated on November 8, 2024 12:32 pm CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
