Silo AI Introduces Poro, an Open-Source AI Model, to Empower European Languages

Finland's Silo AI unveils Poro, a breakthrough open-source large language model for European languages.

Silo AI, a Finland-based artificial intelligence startup and Europe's answer to OpenAI, has announced the launch of Poro, a groundbreaking large language model (LLM) with a special focus on European languages. Poro stands out as an open-source initiative that aims to significantly extend multilingual AI capabilities. Named after the Finnish word for "reindeer," Poro signals a pioneering stride, with plans to cover all 24 official European Union languages.

A Collaboration for Innovation

Poro is the brainchild of SiloGen, the division of Silo AI established in late 2022, in collaboration with the University of Turku's TurkuNLP research group. The partnership combines applied AI expertise with academic research excellence. Spearheaded by Silo AI CEO Peter Sarlin, the initiative underscores the imperative of "digital sovereignty," aiming to retain value creation within Europe.

Poro's Technical Prowess

The Poro 34B model employs a BLOOM transformer architecture combined with ALiBi embeddings, a pairing designed to amplify its language processing capabilities. BLOOM is a large language model that can generate text in 46 natural languages and 13 programming languages. It was created by a collaboration of over 1,000 AI researchers from more than 70 countries and 250 institutions, using the Jean Zay supercomputer in France.
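To make the ALiBi idea concrete: instead of learned positional embeddings, ALiBi adds a fixed, head-specific linear penalty to attention scores based on query-key distance. The sketch below shows the standard slope schedule and bias matrix for a power-of-two head count; the function names are illustrative and this is not taken from Poro's actual codebase.

```python
def alibi_slopes(n_heads: int) -> list[float]:
    # Standard ALiBi schedule for a power-of-two head count:
    # a geometric sequence 2^(-8/n), 2^(-16/n), ..., 2^(-8).
    return [2 ** (-8 * (h + 1) / n_heads) for h in range(n_heads)]

def alibi_bias(seq_len: int, slope: float) -> list[list[float]]:
    # Bias added to raw attention scores for one head: each query
    # position q penalizes key position k by slope * (q - k), so
    # more distant tokens receive a larger negative bias.
    # (Causal masking of future positions is handled separately.)
    return [[-slope * (q - k) for k in range(seq_len)] for q in range(seq_len)]
```

Because the bias depends only on relative distance, models trained this way tend to extrapolate to sequence lengths longer than those seen in training, which is one reason BLOOM-style architectures adopt it.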

Trained on a multilingual dataset of 1 trillion tokens, Poro 34B covers English, Finnish, and programming languages including Python and Java. Remarkably, the model is trained on LUMI, Europe's fastest supercomputer, drawing on 74 petaflops of power through 512 AMD Instinct MI250X GPUs.

The approach adopted by Poro addresses the crucial challenge of developing effective natural language models for languages that traditionally have fewer resources, such as Finnish. By capitalizing on cross-lingual training methodologies, Poro is designed to let lower-resource languages benefit from what the model learns on data-rich ones.
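One common ingredient in this kind of multilingual training is temperature-based sampling, which upweights low-resource languages in the data mixture so they are not drowned out by English. The sketch below illustrates that general technique under assumed token counts; it is not confirmed to be Poro's exact data recipe.

```python
def sampling_weights(token_counts: dict[str, int],
                     temperature: float = 0.3) -> dict[str, float]:
    # Raise each language's share of the corpus to the power
    # `temperature` (values < 1 flatten the distribution, boosting
    # low-resource languages), then renormalize to sum to 1.
    total = sum(token_counts.values())
    scaled = {lang: (count / total) ** temperature
              for lang, count in token_counts.items()}
    norm = sum(scaled.values())
    return {lang: w / norm for lang, w in scaled.items()}
```

With a 9:1 English-to-Finnish corpus and a temperature of 0.3, Finnish's sampling share roughly triples relative to its raw proportion, while English still dominates, which is the balance cross-lingual transfer relies on.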

Poro's accomplishments are apparent even at just 30% of training completed. It has not only outperformed existing Finnish-specific models but is also rapidly closing the gap with benchmarks set by English-language models. Sarlin notes the dual achievement of Poro: excellence in low-resource languages while matching English-language benchmarks.

Transparency and Open Source Commitment

In the spirit of transparency, SiloGen has launched the Poro Research Checkpoints program, offering unprecedented access to the model's training progression. According to Sarlin, such initiatives are rare, signaling a new era of transparency in model training. Poro's developmental benchmarks will be shared regularly, enriching the AI community with invaluable resources and insights.

As for the project's vision, Sarlin emphasizes the importance of open-source AI models, positioning Poro as an ethical and transparent alternative to models developed by large tech companies. Asserting the EU's intent to nurture value within its own borders, he envisions Poro catalyzing a future where European enterprises leverage their own proprietary models to foster value creation.

Silo AI's extended plan involves releasing checkpoints periodically until the model's training concludes. The ultimate objective is to assemble a suite of open-source models capable of serving all European languages. Poro's initial results are promising, indicating potential competition for established players in the realm of AI.