
Google DeepMind Unveils New AI Safety and Alignment Organization

Google forms AI Safety and Alignment team to tackle risks of powerful AI like Gemini, including misinformation and bias.


Google's AI research division, DeepMind, has announced a new organization named AI Safety and Alignment, integrating existing teams with a mission to strengthen artificial intelligence (AI) safety measures. DeepMind is hiring for the new organization at a time when Google aims to mitigate the emerging risks associated with generative AI (GenAI) models, including its flagship model, Gemini, which is known to generate deceptive content under specific prompts. The organization will significantly expand its focus to encompass new domains of safety around artificial general intelligence (AGI), addressing the broad spectrum of AI applications with the potential to perform tasks at a human level.

Strategic Approach towards AGI and Safety Measures

The new organization is not only a response to increasing scrutiny from policymakers concerned with the misuse of GenAI tools but also a proactive step towards addressing the technical challenges of controlling superintelligent AI systems. AI Safety and Alignment widens its purview to include a new team dedicated to AGI safety, which operates alongside DeepMind's London-based Scalable Alignment team to devise solutions for the uncharted territory of superintelligent AI safety. Positioned as a parallel effort to rival OpenAI's Superalignment division, the initiative signifies Google's commitment to maintaining a responsible trajectory in AI development amidst competitive pressures.

Anca Dragan, with her rich background in AI safety systems from her tenure at Waymo and as a computer science professor at UC Berkeley, leads the team. She brings to the table a firm grasp of the nuances involved in aligning AI systems with human values and preferences, emphasizing the importance of models that can robustly interpret human intent and safeguard against adversarial manipulations. Dragan's dual role, spanning her lab at UC Berkeley and DeepMind, leverages her extensive research experience in human-AI and human-robot interactions to pioneer safety mechanisms in anticipation of advancing AI capabilities.

Navigating Challenges and Setting Future Directions

The establishment of the AI Safety and Alignment organization reflects Google DeepMind's acknowledgment of the complex challenges GenAI tools pose, specifically in the realms of misinformation and bias. Surveys reveal substantial public concern over the misuse of AI in spreading false information, notably in the context of the U.S. presidential elections. The organization places a premium on developing robust safety mechanisms, ranging from preventing misleading medical advice to addressing the amplification of bias and injustice.

DeepMind is not promising an infallible model; instead, it is investing in frameworks to adequately evaluate GenAI safety risks. Dragan outlines a vision in which models account for data-driven biases, invoke real-time monitoring to catch failures, and employ confirmation dialogues for crucial decisions. The ultimate goal is a continuous improvement trajectory for Google's AI models, leading to safer and more reliable user experiences.

With AI technology rapidly advancing, Google DeepMind's strategic pivot emphasizes the importance of safety and ethical considerations in the AI development landscape, aiming to secure a balanced progression that harnesses AI benefits while effectively mitigating associated risks.

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
