
Meltemi: A large foundation Language Model for the Greek language

Meltemi 7B v1.5 is a follow-up version to Meltemi 7B v1, trained by the Institute for Language and Speech Processing at Athena Research & Innovation Center. Meltemi is built on top of Mistral 7B, extending its capabilities for Greek through continual pretraining on a large corpus of high-quality and locally relevant Greek texts. We present Meltemi 7B v1.5, as well as an instruction-fine-tuned version, Meltemi 7B Instruct v1.5.


Model Information

  • Vocabulary extension of the Mistral 7B tokenizer with Greek tokens for lower costs and faster inference (1.52 vs. 6.80 tokens/word for Greek)
  • 8192 context length
  • We extend the pretraining of Mistral 7B with added proficiency for the Greek language by utilizing a large corpus consisting of approximately 55 billion tokens.
    • This corpus includes 43.3 billion monolingual Greek tokens, constructed from publicly available resources. Additionally, to mitigate catastrophic forgetting and ensure that the model has bilingual capabilities, we use additional sub-corpora with monolingual English texts (10.5 billion tokens) and Greek-English parallel data (600 million tokens).
    • This corpus has been processed, filtered, and deduplicated to ensure data quality (a detailed description of our data processing pipeline will be published in our upcoming paper) and is outlined below:

In the table below, we list the number and percentage of tokens used for pretraining Meltemi 7B v1.5, with the respective values for Meltemi 7B v1 in parentheses.

| Sub-corpus | # Tokens                          | Percentage    |
|------------|-----------------------------------|---------------|
| Greek      | 43,383,244,502 (28,555,902,360)   | 79.5% (72.0%) |
| English    | 10,538,413,259 (10,478,414,033)   | 19.3% (26.4%) |
| Parallel   | 633,816,023 (633,816,023)         | 1.2% (1.6%)   |
| **Total**  | 54,555,473,784 (39,668,132,416)   | 100%          |
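As a quick sanity check, the percentages in the table follow directly from the token counts. A minimal sketch reproducing the v1.5 column:

```python
# Token counts for the Meltemi 7B v1.5 pretraining corpus (from the table above).
corpus = {
    "Greek": 43_383_244_502,
    "English": 10_538_413_259,
    "Parallel": 633_816_023,
}

total = sum(corpus.values())  # 54,555,473,784 tokens in total

# Share of the total corpus contributed by each sub-corpus, to one decimal place.
shares = {name: round(100 * tokens / total, 1) for name, tokens in corpus.items()}
print(shares)  # {'Greek': 79.5, 'English': 19.3, 'Parallel': 1.2}
```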

Meltemi 7B v1.5 was trained for fewer than two-thirds of the training steps of Meltemi 7B v1.
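The tokens-per-word figure quoted above is the tokenizer's fertility, and it can be measured for any tokenizer. A minimal sketch, where `tokenize` stands in for the tokenizer under test and whitespace word-splitting is a simplifying assumption (the toy `chunk_tokenize` below is purely illustrative):

```python
def fertility(tokenize, texts):
    """Average number of tokens produced per whitespace-delimited word.

    Lower is better: fertility close to 1 means most words map to a single
    token, which lowers inference cost and latency.
    """
    total_tokens = sum(len(tokenize(t)) for t in texts)
    total_words = sum(len(t.split()) for t in texts)
    return total_tokens / total_words

# Toy stand-in tokenizer: splits every word into 3-character chunks.
def chunk_tokenize(text, n=3):
    return [w[i:i + n] for w in text.split() for i in range(0, len(w), n)]

print(fertility(chunk_tokenize, ["hello world"]))  # 2.0  (4 tokens / 2 words)
```

Replacing `chunk_tokenize` with a real tokenizer's encode function over a representative Greek corpus yields figures comparable to the 1.52 vs. 6.80 tokens/word reported above.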

Usage

Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks.
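A minimal sketch of guarding against a missing BOS token; the helper name and `bos_id=1` are illustrative assumptions, and with Hugging Face transformers you would compare against `tokenizer.bos_token_id` instead:

```python
def ensure_bos(input_ids, bos_id=1):
    """Prepend the BOS token id if the tokenized prompt does not start with it."""
    if not input_ids or input_ids[0] != bos_id:
        return [bos_id] + list(input_ids)
    return list(input_ids)

print(ensure_bos([5, 6, 7]))     # [1, 5, 6, 7]
print(ensure_bos([1, 5, 6, 7]))  # [1, 5, 6, 7]  (already present, unchanged)
```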

Evaluation

The evaluation suite we created includes 6 test sets and has been implemented based on a fork of the lighteval framework.

**Note:** The differences between the Meltemi 7B v1 scores reported here and previously published ones can be attributed to a different, better-optimized evaluation setup for Greek (lighteval vs. lm-eval-harness).

Our evaluation suite includes the six Greek test sets listed in the results table below. Our evaluation is performed in a few-shot setting, consistent with the settings of the Open LLM Leaderboard.

We can see that our new training procedure enhances performance across all Greek test sets, with an average improvement of +16.3 percentage points over Mistral 7B. The results for the Greek test sets are shown in the following table:

| Model           | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|-----------------|---------------------------|----------------------|------------------------|----------------------------|----------------------------|------------------|---------|
| Mistral 7B      | 29.8%                     | 45.0%                | 36.5%                  | 27.1%                      | 45.8%                      | 35.0%            | 36.5%   |
| Meltemi 7B v1   | 46.3%                     | 68.5%                | 63.3%                  | 43.6%                      | 44.6%                      | 42.4%            | 51.4%   |
| Meltemi 7B v1.5 | 48.1%                     | 68.6%                | 65.7%                  | 47.1%                      | 45.1%                      | 42.4%            | 52.8%   |
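The averages and the quoted +16.3-point gap follow from the per-task scores in the table; a quick arithmetic check:

```python
# Per-task scores from the table above (percent).
mistral = [29.8, 45.0, 36.5, 27.1, 45.8, 35.0]
meltemi_v15 = [48.1, 68.6, 65.7, 47.1, 45.1, 42.4]

def avg(xs):
    return sum(xs) / len(xs)

print(round(avg(mistral), 1))                      # 36.5
print(round(avg(meltemi_v15), 1))                  # 52.8
print(round(avg(meltemi_v15) - avg(mistral), 1))   # 16.3
```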

Ethical Considerations

This model has not been aligned with human preferences and might therefore generate misleading, harmful, or toxic content.

Acknowledgements

The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the OCRE Cloud framework, providing Amazon Web Services for the Greek Academic and Research Community.

Citation

@misc{voukoutis2024meltemiopenlargelanguage,
      title={Meltemi: The first open Large Language Model for Greek}, 
      author={Leon Voukoutis and Dimitris Roussis and Georgios Paraskevopoulos and Sokratis Sofianopoulos and Prokopis Prokopidis and Vassilis Papavasileiou and Athanasios Katsamanis and Stelios Piperidis and Vassilis Katsouros},
      year={2024},
      eprint={2407.20743},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.20743}, 
}
Model size: 7.48B parameters (BF16, safetensors)