Revision as of 16:14, 15 February 2024
Norwegian Large-scale Language Models
Welcome to the emerging collection of large-scale contextualized and generative language models for the Norwegian language. NorLM (or, more recently, NORA.LLM) originated as a joint initiative of the projects EOSC-Nordic (European Open Science Cloud), SANT (Sentiment Analysis for Norwegian), and HPLT (High-Performance Language Technologies), in collaboration with the AI Laboratory of the National Library of Norway and the National e-Infrastructure Services, coordinated by the Language Technology Group (LTG) at the University of Oslo.
We are working to provide these models and supporting tools for researchers and developers in Natural Language Processing (NLP) for the Norwegian language. We do so in the hope of facilitating scientific experimentation with, and practical application of, state-of-the-art NLP architectures, and of enabling others to develop their own large-scale models, for example for domain- or application-specific tasks, language variants, or even languages other than Norwegian.
Under the auspices of the NLPL use case in EOSC-Nordic, we are coordinating with colleagues in Denmark, Finland, and Sweden on a collection of large contextualized language models for the Nordic languages, including language variants or related groups of languages, as linguistically or technologically appropriate.
Available Models
At this stage of development, Norwegian models for four architecture variants are available:
- NorELMo: LSTM-Based Architectures
- NorBERT: Transformer-Based Architectures
- NorT5: Combined Encoder–Decoder Architecture
- NorMistral & NorBLOOM: Generative Language Models
We emphatically welcome all kinds of user feedback, including of course suggestions for improvement or for additional types of Norwegian contextualized language models and associated tools. Please contact us via the Nor(aL)LM technical coordinator, Andrey Kutuzov.
License and Access
All Norwegian language models from the NorLM initiative are publicly available for download from the NLPL Vectors Repository under a CC BY 4.0 license. The NorBERT model is also available through the Hugging Face Transformers library.
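As a minimal sketch of the Hugging Face route, NorBERT can be loaded with the standard Transformers auto classes. The model identifier `ltg/norbert` is an assumption here; check the LTG organisation page on the Hugging Face Hub for the exact name of the model version you need.

```python
# Sketch: load NorBERT through the Hugging Face Transformers library.
# The identifier "ltg/norbert" is an assumption; newer versions
# (e.g. NorBERT 2/3) are published under different names.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ltg/norbert")
model = AutoModelForMaskedLM.from_pretrained("ltg/norbert")

# Encode a Norwegian sentence and run a forward pass.
inputs = tokenizer("Oslo er hovedstaden i Norge.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence length, vocabulary size)
```

The same two calls work for the other Hub-hosted models on this page (NorT5, NorMistral, NorBLOOM), substituting the appropriate auto class (e.g. `AutoModelForSeq2SeqLM` or `AutoModelForCausalLM`) and model identifier.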
To receive announcements of updates and availability of additional models, please self-subscribe to our very low-traffic NorLM mailing list.
Related Work
Our paper "Large-Scale Contextualised Language Modelling for Norwegian" was presented at the NoDaLiDa 2021 conference.
Acknowledgements
The NorLM resources are being developed on the Norwegian national supercomputing services operated by UNINETT Sigma2, the National Infrastructure for High Performance Computing and Data Storage in Norway. Software provisioning was financially supported through the European EOSC-Nordic project; data preparation and evaluation were supported by the Norwegian SANT and the Horizon Europe HPLT projects. We are indebted to all funding agencies involved, the University of Oslo, and the Norwegian taxpayer.