
Stanford Research Reveals AI Transparency Rankings

Stanford University has launched the Foundation Model Transparency Index to chart the openness of AI models such as GPT-4 and Llama 2.


Stanford researchers have emphasized the importance of transparency in the AI industry while unveiling a new tool intended to shed light on how little information is available about the operations of large-scale AI language models from OpenAI, Google, and Meta. These organizations seldom disclose details about the training data, hardware, capabilities, and safety tests of their AI systems. Although numerous AI models have been released as open source, the public remains unsure about how they were created and how they are applied after release.

The Foundation Model Transparency Index

The Foundation Model Transparency Index is a new scoring system that aims to fill this information void by rating 10 significant AI language models, or “foundation models,” on their transparency. High-profile models like OpenAI's GPT-4, Google's PaLM 2, and Meta's LLaMA 2, alongside less prominent models like Amazon's Titan Text and Inflection AI's Inflection-1, are evaluated on 100 criteria. These criteria cover the disclosure of training data sources, hardware details, the labor involved, and other relevant facts. Beyond information about how the models were produced, the ranking also includes “downstream indicators,” which track how the models are utilized after release.

The Rankings and Their Implications

According to the newly developed index, Meta's LLaMA 2 secured the highest transparency score at 54 percent, while Google's PaLM 2 scored 40 percent. Percy Liang, who leads Stanford's Center for Research on Foundation Models, stressed the necessity of transparency in the AI industry. As AI systems become more powerful and commonplace in everyday life, knowledge of how they work becomes crucial for regulators, researchers, and users.

While AI companies shy away from sharing more information due to concerns about lawsuits, competition, and safety, the Stanford researchers argue that AI firms should disclose far more. They emphasized the need to understand how models operate, where they fall short, and what dangers they pose, at a time when AI's impact is escalating even as transparency declines.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
