Stanford Research Reveals AI Transparency Rankings

Stanford University has launched the Foundation Model Transparency Index to chart the openness of AI models such as GPT-4 and Llama 2.

Stanford researchers have emphasized the importance of transparency in AI development while unveiling a new tool intended to shed light on how little is known about the operations of large-scale AI language models from OpenAI, Google, and Meta. These organizations seldom disclose details about the training data, hardware, capabilities, and safety testing of their AI systems. Even though numerous AI models have been released as open source, the public remains largely in the dark about how they were built and how they are used after release.

The Foundation Model Transparency Index

The new scoring system, the Foundation Model Transparency Index, aims to fill this information void by rating 10 significant AI language models, or “foundation models,” on their transparency levels. High-profile models like OpenAI’s GPT-4, Google’s PaLM 2, and Meta’s Llama 2, alongside less prominent models like Amazon’s Titan Text and Inflection AI’s Inflection-1, are evaluated on 100 criteria. These criteria cover the disclosure of training data sources, hardware details, the labor involved, and other relevant facts. The ranking also looks beyond how a model was produced, including “downstream indicators” that capture how the models are utilized post-release.
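For readers curious how a score like this might be tallied, here is a minimal sketch, assuming each criterion is treated as a simple pass/fail indicator and a model's score is the percentage of criteria it satisfies; the criteria names and values below are illustrative placeholders, not data from the index itself.

```python
# Minimal sketch: aggregate pass/fail transparency criteria into a percentage.
# Assumption: each of the 100 criteria is scored as satisfied (True) or not (False).

def transparency_score(indicators: dict[str, bool]) -> float:
    """Return the share of satisfied criteria as a percentage."""
    if not indicators:
        return 0.0
    return 100 * sum(indicators.values()) / len(indicators)

# Hypothetical example with four placeholder criteria (the real index uses 100).
example_model = {
    "discloses training data sources": True,
    "discloses hardware used": False,
    "discloses labor involved": False,
    "reports downstream usage": True,
}

print(f"{transparency_score(example_model):.0f}%")  # prints 50%
```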

The Rankings and Their Implications

According to the newly developed index, Meta's Llama 2 secured the highest transparency score at 54 percent, while GPT-4 and PaLM 2 are tied for third place at 40 percent. Percy Liang, who leads Stanford's Center for Research on Foundation Models, stressed the necessity of transparency in the AI industry: as AI systems become more powerful and more embedded in everyday life, understanding how they work becomes crucial for regulators, researchers, and users.

While AI companies shy away from sharing more information, citing concerns about lawsuits, competition, and safety, the Stanford researchers argue that AI firms should disclose far more. They emphasized the need to understand how the models operate, where their limitations lie, and what dangers they pose, warning that AI's impact keeps growing even as transparency declines.

Last Updated on November 8, 2024 10:31 am CET

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
