Update: Sam Altman has denied rumors of a December release for Orion, the company’s upcoming AI model, dismissing them as “fake news out of control.”
OpenAI’s highly anticipated AI model, codenamed “Orion,” is slated for release by December, although initial access will be limited to a select group of corporate partners. Microsoft, a significant backer of OpenAI, is preparing its Azure platform to host the model, with potential early access as soon as November.
Orion is positioned as a major step beyond the capabilities of OpenAI’s GPT-4 model, with insiders hinting it could bring OpenAI closer to developing artificial general intelligence (AGI), a form of AI that can perform complex, human-like reasoning across various tasks.
Exclusive Partner Access Reflects Shift in Release Strategy
Unlike previous models, Orion won’t see an immediate public release through ChatGPT but will instead be accessible first to trusted partners who can tailor the model to specific applications. The selective rollout, sources say, marks OpenAI’s response to growing financial pressures, including a projected $5 billion loss for 2024.
By focusing on partners and controlled environments, OpenAI aims to refine Orion’s capabilities before a potential public release. OpenAI’s push to make its models more customizable began in August, when it added fine-tuning for GPT-4o. In a somewhat cryptic tweet, OpenAI CEO Sam Altman teased a winter launch for Orion.
i love being home in the midwest.
the night sky is so beautiful.
excited for the winter constellations to rise soon; they are so great.
— Sam Altman (@sama) September 14, 2024
Fine-tuning lets developers adapt a model’s responses using custom datasets, making it more specialized in areas like software development and creative content generation. OpenAI charges $25 per million training tokens for GPT-4o fine-tuning, a feature that has already seen adoption across sectors needing high precision and adaptability.
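For context, this is roughly what that fine-tuning flow looks like with the OpenAI Python SDK, a minimal sketch assuming a prepared chat-format JSONL dataset; the file name and base-model identifier below are illustrative placeholders, not details from this report.

```python
# Minimal sketch of the GPT-4o fine-tuning flow via the OpenAI Python SDK.
# The dataset file name and model ID are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the training data (one JSON chat example per line).
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against a fine-tunable base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)

# 3. Check on the job; when it finishes, it yields a custom model ID
#    that can be used with the chat completions endpoint.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```

Once the job completes, the resulting custom model ID can be passed to the chat completions endpoint like any other model name.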
Legal and Financial Issues Create Challenges for Orion’s Release
As Orion’s debut nears, OpenAI faces legal scrutiny regarding its data practices. Former employee Suchir Balaji recently raised concerns about OpenAI’s use of copyrighted content in AI training, alleging that such practices could harm content creators.
The New York Times filed a lawsuit in late 2023 accusing OpenAI of using its articles without authorization. OpenAI defends its practices as fair use, but experts suggest that upcoming court decisions could reshape how AI companies collect training data.
To mitigate such disputes, OpenAI has entered into a $250 million licensing agreement with News Corp, gaining access to a wealth of licensed content. The growing tension between AI firms and content creators reflects a broader industry trend, with companies like Perplexity AI also facing similar lawsuits. News Corp’s ongoing case against Perplexity, for instance, underscores media companies’ frustrations with how AI developers use content without direct compensation.
Financially, OpenAI’s heavy reliance on Microsoft’s Azure for computational power is straining resources. Despite Microsoft’s $13 billion investment, employees at OpenAI have reportedly voiced concerns about limited access to computing resources needed for new projects like Orion.
To reduce this dependency, OpenAI secured an additional $6.6 billion in a funding round led by Thrive Capital. Microsoft, meanwhile, wary of OpenAI’s rising costs, is diversifying its own AI strategy, notably by bringing in Inflection AI co-founder Mustafa Suleyman and much of his team.
Staffing Changes and Restructuring
Amid these challenges, OpenAI is also undergoing major staffing changes. Last month, Chief Technology Officer Mira Murati departed to pursue her own AI venture, reportedly seeking $100 million in funding. OpenAI CEO Sam Altman also announced that Chief Research Officer Bob McGrew and Research Vice President Barret Zoph would be leaving the company.
Murati’s decision comes amid a flurry of high-profile departures and boardroom drama over the past year. The turmoil began in November 2023, when OpenAI’s board abruptly fired Altman. Over the following week of controversy and confusion, Altman appeared set to lead a new AI team at Microsoft, but he was ultimately reinstated as the head of OpenAI, with major investor Microsoft taking a non-voting seat on the board, a seat it has since given up.
In 2024, a number of key executives and employees have left OpenAI. Last month, I reported that OpenAI co-founder John Schulman was moving to Anthropic, the rival AI company behind the Claude models. His exit followed co-founder Greg Brockman’s extended leave of absence and the departure of product lead Peter Deng.
The corporate shuffle came a few months after the disbanding of OpenAI’s superalignment team, previously spearheaded by Jan Leike and Ilya Sutskever. The research team had been tasked with solving the technical challenges of controlling superintelligent AI within a four-year timeframe.
In a strategic move, OpenAI hired Aaron Chatterji, a former White House economist, as its first Chief Economist. Chatterji’s role centers on assessing AI’s potential impact on job markets and economic systems, a key consideration as OpenAI navigates its growth amid increased public scrutiny.
Technical Challenges: Concerns Over Logical Reasoning
Concerns over the accuracy and reasoning abilities of OpenAI’s recent models have also emerged. Apple researchers recently tested OpenAI’s o1 and GPT-4o models with a new benchmark, GSM-Symbolic, and found that slight changes in task phrasing significantly affected accuracy. Minor wording adjustments reduced accuracy by up to 10%, raising questions about the models’ reliability in applications that demand logical consistency, such as healthcare.
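To illustrate the methodology, not Apple’s actual tool, here is a toy harness in the same spirit: the same math problem is re-rendered from a template with different names and numbers, and accuracy is measured across the variants. The template and ask_model function are hypothetical; ask_model is stubbed so the script runs end to end.

```python
# Toy illustration of GSM-Symbolic-style perturbation testing: render one
# problem with varying surface details and measure answer consistency.
import random
import re

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have in total?")
NAMES = ["Sara", "Liam", "Priya", "Tom"]

def ask_model(question: str) -> int:
    # Hypothetical stand-in for a real model call, stubbed so the harness
    # runs: it parses the numbers and occasionally answers incorrectly,
    # mimicking the phrasing sensitivity the Apple study measured.
    a, b = map(int, re.findall(r"\d+", question))
    return a + b + random.choice([0, 0, 0, 1])  # occasional off-by-one

def accuracy(n_trials: int = 200) -> float:
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(2, 9), random.randint(2, 9)
        question = TEMPLATE.format(name=random.choice(NAMES), a=a, b=b)
        correct += ask_model(question) == a + b
    return correct / n_trials

if __name__ == "__main__":
    random.seed(0)
    print(f"accuracy over perturbed variants: {accuracy():.0%}")
```

A model with robust reasoning should score the same on every rendering of the template; a drop across variants is exactly the brittleness the study reported.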
Neurosymbolic AI, a hybrid approach combining neural networks and symbolic reasoning, has been suggested as a solution to these reasoning limitations. Critics argue that without incorporating symbolic reasoning, neural network models may continue to fall short in tasks requiring abstract reasoning and logic.
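As a rough sketch of that hybrid pattern, one common framing has a neural model propose an answer in structured form while a symbolic layer verifies it exactly rather than statistically. The propose_expression function below is a hypothetical stand-in for an LLM call, with the sympy library handling the symbolic check.

```python
# Toy illustration of the neurosymbolic pattern: a neural component proposes,
# a symbolic component verifies before the answer is accepted.
import sympy as sp

def propose_expression() -> str:
    # Hypothetical neural step: the model emits a candidate claim as text,
    # here that the expression simplifies to zero.
    return "2*x + 6 - 2*(x + 3)"

def symbolic_check(expr_text: str) -> bool:
    # Symbolic step: sympy verifies the claim exactly.
    return sp.simplify(sp.sympify(expr_text)) == 0

candidate = propose_expression()
print("verified" if symbolic_check(candidate) else "rejected")
```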
Last Updated on November 7, 2024 2:20 pm CET