
LLaMA AI Under Fire: What Meta Isn’t Telling You About “Open Source” Models

Meta's practice of calling its LLaMA AI models "open-source" is under scrutiny. Critics argue the license's limitations don't meet the open-source definition.


Meta Platforms has found itself at the center of a debate over its claims about LLaMA, its family of large language AI models. According to Stefano Maffulli, head of the Open Source Initiative (OSI), speaking to the Financial Times, Meta's decision to call these models "open-source" is misleading and confusing.

Maffulli suggests that the term is being misused and could lead to misunderstandings about what truly qualifies as open-source technology, especially in the realm of AI. The LLaMA models, though popular, have sparked concern about their licensing terms, which limit how users can engage with and modify the AI system.

LLaMA: Available But Not Fully Open

Meta’s LLaMA models, downloaded hundreds of millions of times, offer developers the ability to work with large-scale AI systems outside of proprietary ecosystems like OpenAI’s GPT-4. However, the issue arises with how Meta has labeled this as open-source when, in reality, the models come with limitations—restrictions on commercial use, for example.

Critics argue that such a license doesn’t meet the open-source definition created by OSI, which requires unrestricted access and use. Meta’s licensing agreements, on the other hand, impose conditions on what users can do, particularly when it comes to commercial endeavors.

For Maffulli, this isn’t just a technical oversight. He’s warned that this type of language misuse could ultimately harm the broader AI development ecosystem, slowing down progress toward more accessible, user-controlled AI systems. It is worth noting that this is not a new issue for OSI.

"Unfortunately, the tech giant has created the misunderstanding that LLaMa 2 is 'open source' – it is not," Maffulli wrote in July 2023, pointing out that Meta's licensing negates its open-source claims. "OSI does not question Meta's desire to limit the use of Llama for competitive purposes, but doing so takes the license out of the category of 'Open Source.'"

Are ‘Open-Weight’ Models Enough?

While Meta’s LLaMA models are referred to as “open,” some experts within the AI community, such as Ali Farhadi from the Allen Institute for AI, suggest a different term: “open-weight.” These models, although allowing developers access to certain components (like model weights), don’t provide the full transparency needed for independent development.

Developers can’t freely adapt or improve the models, limiting the kind of experimentation that was once possible with open-source software. Farhadi argues that AI systems need to go beyond merely offering pieces of the puzzle. For the AI world to evolve, full transparency in how these models are built and trained is essential.

Meta’s Big Play for Open Source Credibility

It is likely that Meta would refute OSI's criticism, not least because the company has made a big deal of the open source nature of its AI models. Meta's strategy of open-sourcing its models has garnered significant attention and praise from the developer community. "Open Source" leads the marketing of new AI releases, but the company's commitment to open source has been questioned.

Meta claims to be open about its AI, offering a community license for LLaMA. The company argues that openness leads to more innovation and safety in AI, and it invites the community to test Code Llama, find problems, and fix them. In fact, when pivoting to an AI-first company, CEO Mark Zuckerberg was eager to discuss Meta's open source credentials.

However, a July 2023 study found that Meta and other companies are not truly open about their AI models. The study, conducted by AI researchers at Radboud University in Nijmegen, Netherlands, shows that some of the strongest LLMs remain hidden from the public because the code used to train them is not shared.

Regulation and Open-Source Standards in AI

This debate goes beyond developer access. Regulatory bodies like the European Commission have pushed for open technologies, hoping to establish standards that support user autonomy and transparency. If companies like Meta redefine what “open-source” means in AI, these regulatory efforts could be undermined.

According to Maffulli, the risk is that companies might use this as an opportunity to push revenue-generating models into what are supposed to be open standards. Meta’s licensing approach has sparked a conversation about whether current open-source definitions are sufficient in today’s rapidly changing tech world.

Meta itself has argued that AI systems are far more complex than software from previous decades, and therefore, open-source definitions should evolve. But for critics, the fear is that redefining these terms to suit corporate interests could lead to a future where fewer AI models are genuinely accessible to all.

Last Updated on November 7, 2024 2:29 pm CET

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
