OpenAI is engaging in discussions with the US Food and Drug Administration (FDA) regarding the use of artificial intelligence to potentially streamline the complex drug evaluation process, a move that could reshape parts of regulatory science, WIRED reported.
These conversations, confirmed by sources familiar with the matter, involve OpenAI personnel, FDA officials including the agency’s inaugural AI officer, Jeremy Walsh, and individuals associated with Elon Musk’s Department of Government Efficiency (DOGE). A specific project possibly named ‘cderGPT’ is reportedly part of the talks, although no formal agreement has been finalized.
The initiative aligns with recent statements from FDA Commissioner Marty Makary, who highlighted the agency’s push towards modernization. “Why does it take over 10 years for a new drug to come to market? Why are we not modernized with AI and other things? We’ve just completed our first AI-assisted scientific review for a product, and that’s just the beginning,” Makary stated on X, following remarks at an American Hospital Association meeting about AI’s potential in approving treatments for conditions like diabetes and cancer.
Leading the discussions for the agency is Jeremy Walsh, the FDA’s recently appointed, first-ever AI officer. Walsh has also conferred with HHS’s acting chief AI officer, Peter Bowman-Davis, who, according to Politico, is linked to Andreessen Horowitz’s American Dynamism team.
Modernizing Drug Review
The prospect of using AI in drug reviews aims to tackle parts of a notoriously lengthy development timeline, which often exceeds a decade. However, experts and former officials caution that AI’s role, while promising, faces hurdles.
Former FDA Commissioner Robert Califf noted that the agency’s review teams have utilized AI for years. “It will be interesting to hear the details of which parts of the review were ‘AI assisted’ and what that means,” he said, adding that “final reviews for approval are only one part of a much larger opportunity.” The challenge lies in ensuring the reliability and appropriate application of these powerful tools.
Experts acknowledge AI’s potential but urge caution. Rafael Rosengarten, CEO of Genialis, stressed the need for careful training and validation: “These machines are incredibly adept at learning information, but they have to be trained in a way so they’re learning what we want them to learn.” He suggested AI could initially handle ‘low-hanging fruit’ like checking application completeness.
“Something as trivial as that could expedite the return of feedback to the submitters based on things that need to be addressed to make the application complete,” he said. Concerns persist about the reliability of large language models, which are known to generate convincing but inaccurate information. “Who knows how robust the platform will be for these reviewers’ tasks,” an ex-FDA employee commented to WIRED.
Industry groups like PhRMA also advocate caution. “Ensuring medicines can be reviewed for safety and effectiveness in a timely manner to address patient needs is critical,” stated spokesperson Andrew Powaleny. “While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the center.”
The FDA itself is actively researching AI applications, advertising a fellowship in late 2023 focused on developing LLMs for regulatory science, precision medicine, and drug development. The agency already employs several existing mechanisms like ‘fast track’ and ‘breakthrough therapy’ designations to accelerate reviews for promising drugs.
OpenAI’s Expanding Reach
The talks coincide with OpenAI’s broader push into regulated sectors. In January 2025, the company announced ChatGPT Gov, a self-hosted chatbot designed for government compliance, and is pursuing FedRAMP accreditation to handle sensitive federal data.
This isn’t OpenAI’s first venture into bioscience. The company previously collaborated with Retro Biosciences on GPT-4b Micro, an AI model designed to dramatically improve stem cell reprogramming efficiency by optimizing Yamanaka factors.
That project, detailed by MIT Technology Review, marked a significant step for OpenAI into biological research, distinct from efforts like Google DeepMind’s AlphaFold which focuses on protein structure prediction.
OpenAI CEO Sam Altman, a personal investor in Retro Biosciences, has been a strong advocate for AI’s potential in scientific acceleration. “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own,” he previously stated.
The Broader AI-Science Ecosystem
The OpenAI-FDA talks are part of a larger trend where AI is increasingly applied to scientific discovery and healthcare. Google DeepMind, besides AlphaFold, has developed an AI Co-Scientist capable of generating novel hypotheses and partnered with BioNTech on AI lab assistants. Google Cloud also offers AI suites for drug discovery. Microsoft, too, is active with initiatives like DeepSpeed4Science and models like BioEmu-1 designed for protein dynamics.
However, as AI labs push boundaries, the tension between open research and commercial strategy grows. Google DeepMind, for instance, has increased restrictions on publishing research to protect its competitive edge, a move that has frustrated some researchers. The approach OpenAI takes with any potential FDA collaboration, whether its tools remain proprietary or become more accessible (perhaps like Google’s open-source TxGemma toolkit), will be closely watched as AI’s role in regulated fields like medicine continues to evolve.