September 2, 2025

With love, from Switzerland

Imanol Schlag's team just launched Apertus, one of the most powerful open-source language models ever released by a public institution.


On September 2, 2025, the Swiss AI Initiative—a collaboration between EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS)—released Apertus, the country's first large-scale open language model. Built with Swiss values in mind—transparency, multilingual capabilities, and public service—Apertus is a fully open-source foundation model released in two versions (8B and 70B) under the Apache 2.0 license.

It is now one of the most powerful multilingual LLMs ever released by a public institution—and it's available to everyone through the Public AI Inference Utility.

Public AI is proud to be the official international deployment partner for Apertus. To support the launch, we've allocated over 115,000 GPU-hours across 20 clusters in 5+ countries for the month of September alone. For reference, that's a Geneva-sized amount of inference: roughly what a city of Geneva's size spends on consumer compute in a month. This was made possible by our many inference partners, led by Amazon Web Services, Exoscale, AI Singapore, Cudo Compute, the Swiss National Supercomputing Centre (CSCS), and the National Computational Infrastructure (NCI) Australia.

Openness by design

The name "Apertus" comes from Latin, meaning open. Everything about the model is transparent and reproducible:

  • the training architecture, datasets, and recipes
  • the model weights, including intermediate checkpoints
  • the source code, logs, and deployment guides

Unlike models that offer partial access, Apertus is fully inspectable and modifiable, giving developers, researchers, and institutions complete visibility into how the model was built and how it behaves.
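For example, because the weights and intermediate checkpoints are published, anyone can download them for inspection. The sketch below uses the huggingface_hub library; the repository name and checkpoint revision are assumptions, not official identifiers, so check the release materials for the real ones.

```python
# Sketch: fetching the open weights (and, if exposed as revisions, an
# intermediate checkpoint) with huggingface_hub.
# The repo_id and revision below are assumed placeholders.
from huggingface_hub import snapshot_download

# Final released weights (assumed repository name)
weights_dir = snapshot_download(repo_id="swiss-ai/Apertus-8B")

# A hypothetical intermediate training checkpoint, if published as a revision
checkpoint_dir = snapshot_download(
    repo_id="swiss-ai/Apertus-8B",
    revision="checkpoint-500000",  # placeholder revision name
)

print(weights_dir, checkpoint_dir)
```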

"Apertus is built for the public good," says Imanol Schlag, technical lead of Apertus and a research scientist at ETH Zurich. "It stands among the few fully open LLMs at this scale and is the first of its kind to embody multilingualism, transparency, and compliance as foundational design principles."

Multilingual and inclusive

Apertus is trained on 15 trillion tokens spanning more than 1,500 languages, with 40% of the training data in non-English languages. That includes Swiss languages such as Romansh and Swiss German, as well as many others historically underrepresented in mainstream, proprietary LLMs.

This is a significant step forward in linguistic diversity. Apertus isn't just another English-first model with some multilingual additions—it's designed from the ground up to serve users and communities across linguistic boundaries.

Research-grade, industry-ready

Apertus is designed for both research and real-world applications. The 70B model delivers cutting-edge performance and is ideal for deployment at scale. The 8B version offers fast performance and lower resource requirements, suitable for fine-tuning or local use. Under the Apache 2.0 license, both models can be used for:

  • Research and education
  • Commercial applications
  • Translation, summarization, chatbots, tutoring systems, and more
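For local use of the 8B version, a standard Hugging Face transformers workflow is enough. The sketch below assumes a hypothetical model identifier and a GPU with room for bfloat16 weights; check the official Apertus release for the actual repository name.

```python
# Minimal local-inference sketch for the 8B model using Hugging Face transformers.
# The model identifier below is an assumption; verify it against the official release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B-Instruct"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fits on a single large GPU; quantize otherwise
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Apertus release in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint can also serve as a starting point for fine-tuning with any standard training framework.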

Try Apertus now

If you're in Switzerland, you can try Apertus through Swisscom or during Swiss {ai} Weeks events. If you're outside Switzerland—or just curious—you can try Apertus right here, through the Public AI Inference Utility.
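If the Utility exposes an OpenAI-compatible endpoint, as many hosted inference services do, a request could look like the sketch below. The base URL, model name, and API-key handling are placeholders rather than documented values; consult the Public AI Inference Utility for the real ones.

```python
# Sketch of calling Apertus through an OpenAI-compatible chat-completions endpoint.
# Base URL, model name, and API key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-utility.org/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                               # placeholder credential
)

response = client.chat.completions.create(
    model="apertus-70b",  # placeholder model name
    messages=[
        {"role": "user", "content": "Greet me in Romansh and translate it to English."}
    ],
)
print(response.choices[0].message.content)
```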

Where it goes next

This is just the beginning. The Swiss AI Initiative will continue to make regular updates to Apertus, exploring domain-specific models in areas like law, climate, health, and education. Future releases will expand capabilities while maintaining the same core values: openness, excellence, and public purpose.

"This release isn't a final step; it's a beginning," says Antoine Bosselut, co-lead of the Swiss AI Initiative. "We're building toward a long-term commitment to sovereign, open AI foundations that serve the public good, worldwide."

The Public AI Inference Utility will continue to bring Apertus to the world. We look forward to supporting more Apertus-scale efforts. We extend an open invitation to other public institutions, research communities, and forward-looking companies to join us in shaping the next chapter of AI.