Google is pushing forward in the race against OpenAI with a new AI model designed to improve reasoning capabilities. With a clear focus on handling more sophisticated tasks like programming and advanced mathematics, this effort is part of Google DeepMind’s broader attempt to outperform OpenAI’s o1 model.
According to reports from Bloomberg, Google’s development teams have made strides recently in building AI tools centered on reasoning, a move that closely mirrors OpenAI’s approach.
By utilizing what’s known as “chain-of-thought” prompting, Google’s AI models are being trained to break down complicated questions into smaller, manageable steps before arriving at a solution. The technique allows the AI to analyze multiple potential answers, choosing the most accurate one. OpenAI’s o1 model uses a similar method, with the added capability of addressing tasks in science, coding, and math.
Enhancing AI’s Thought Process with New Techniques
Google’s emphasis on the “chain-of-thought” technique means its AI models can now handle multi-step problems in ways previous models couldn’t. Rather than generating rapid, straightforward answers, the AI pauses to consider each step of a problem, ensuring a more thorough analysis.
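The idea of sampling several step-by-step chains and keeping the most common answer can be sketched in a few lines of Python. The model call below is a stand-in, not anyone's actual system: a real implementation would sample completions from a language model at non-zero temperature, and the hard-coded arithmetic decomposition and simulated 20% error rate are purely illustrative assumptions.

```python
import random
from collections import Counter

def sample_reasoning_chain(question: str) -> str:
    """Stand-in for one sampled chain-of-thought completion.

    A real system would ask an LLM to reason step by step; here we
    hard-code the intermediate steps for one arithmetic question and
    simulate occasional reasoning slips.
    """
    # Step 1: split 17 * 24 into easier sub-problems.
    partial = 17 * 20       # 340
    remainder = 17 * 4      # 68
    # Step 2: combine the intermediate results.
    answer = partial + remainder
    # Simulate a faulty chain roughly 20% of the time.
    if random.random() < 0.2:
        answer += random.choice([-10, 10])
    return str(answer)

def self_consistency(question: str, n_samples: int = 25) -> str:
    """Sample several chains and keep the most common final answer."""
    answers = [sample_reasoning_chain(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 17 * 24?"))  # almost always "408"
```

Even though any single chain can go wrong, the majority vote over many chains is far more reliable, which is the intuition behind letting the model consider multiple potential answers before committing to one.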
OpenAI’s o1 model, which operates similarly, has set the bar for performance in complex problem-solving. Although still in its preview phase, it is already showing progress in more advanced areas of reasoning. Unlike earlier versions of ChatGPT, however, the o1 preview lacks certain features, such as web browsing and file uploads.
However, OpenAI’s focus remains on deepening the AI’s ability to tackle more challenging tasks. Google’s competing models are following a similar path, with plans to roll out additional improvements to their own AI platform, including features designed to enhance math and coding problem-solving abilities.
The Role of Computing Power in AI Progress
To further boost the performance of its reasoning models, Google is also exploring how additional computational resources during AI inference—the process of generating responses—can improve overall results. A recently released research paper from the Google DeepMind division highlights efforts to optimize this process.
The team has devised methods to scale the compute devoted to more complex tasks, adapting dynamically to the difficulty of each prompt. Google’s approach has already led to more efficient use of resources, improving performance more than fourfold compared with standard models.
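A minimal sketch of dynamically scaling inference compute, with every name hypothetical: the verifier score and the length-based difficulty estimate below are crude stand-ins for whatever learned components a production system would use, not a description of DeepMind's actual method.

```python
import random

def solve_once(prompt: str) -> tuple[str, float]:
    """Stand-in for one model completion plus a verifier score.

    Hypothetical: the score is simulated as noise around a value that
    drops as the (proxy) difficulty of the prompt rises.
    """
    difficulty = min(len(prompt) / 100, 1.0)
    score = random.gauss(1.0 - difficulty, 0.2)
    return f"candidate(score={score:.2f})", score

def budget_for(prompt: str, base: int = 2, max_samples: int = 16) -> int:
    """Allocate more inference compute to prompts judged harder.

    Prompt length is a deliberately crude stand-in for a real
    difficulty estimator.
    """
    difficulty = min(len(prompt) / 100, 1.0)
    return min(max_samples, base + int(difficulty * (max_samples - base)))

def best_of_n(prompt: str) -> str:
    """Sample up to the budget and keep the highest-scoring candidate."""
    candidates = [solve_once(prompt) for _ in range(budget_for(prompt))]
    return max(candidates, key=lambda c: c[1])[0]

print(budget_for("Add 2 and 2."))             # small budget for an easy prompt
print(budget_for("Prove that " + "x" * 150))  # full budget for a long one
```

The point of the sketch is the shape of the trade-off: easy prompts get a couple of samples, hard ones get many, so total compute grows with difficulty rather than being spent uniformly.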
An increase in computing efficiency could help Google stay ahead in the AI arms race, as it allows models to be scaled in ways that go beyond simply adding more data. OpenAI has similarly invested in scaling its models, but benchmarking the two systems will only be possible once they are fully available for public testing.
AI Models with Specialized Skills in Math and Geometry
Google has also been developing specialized AI models aimed at solving complex problems in specific fields. In July, the company introduced AlphaProof and AlphaGeometry 2, both part of Google’s effort to refine AI-driven problem-solving in mathematical and geometric reasoning. These models solved several problems from the International Mathematical Olympiad, a competition designed to challenge the world’s brightest high school students.
By combining elements from large language models with more traditional search algorithms, Google has made considerable progress in fields where precision is key. As these models continue to evolve, Google’s strategy includes scaling them up to tackle more demanding and specialized tasks in the near future.
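The combination of model proposals with traditional search can be illustrated with a toy best-first search. The proposal function here is a stand-in for the language-model component (in a real prover it would suggest candidate proof tactics), and the search over integer states is purely illustrative, not a depiction of AlphaProof.

```python
import heapq

def propose_steps(state: int) -> list[int]:
    """Stand-in for a model proposing next steps.

    In a neuro-symbolic prover this would be an LLM suggesting
    tactics; here each 'step' just transforms an integer state.
    """
    return [state + 3, state * 2, state - 1]

def best_first_search(start: int, goal: int, max_expansions: int = 1000):
    """Classical best-first search guided by the proposals above."""
    # Frontier ordered by a distance-to-goal heuristic.
    frontier = [(abs(goal - start), start, [start])]
    seen = {start}
    for _ in range(max_expansions):
        if not frontier:
            return None
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in propose_steps(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (abs(goal - nxt), nxt, path + [nxt]))
    return None

print(best_first_search(2, 25))  # [2, 5, 10, 20, 23, 26, 25]
```

The division of labor mirrors the hybrid approach: the proposal step supplies plausible moves, while the exhaustive, verifiable search provides the precision that pure language-model generation lacks.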
Last Updated on October 14, 2024 12:42 pm CEST