---
license: cc-by-nc-4.0
language:
  - tr
---

# Model Card for Metin/gemma-2b-tr

gemma-2b fine-tuned for Turkish text generation.

## Model Details

### Model Description

- **Language(s) (NLP):** Turkish, English
- **License:** Creative Commons Attribution Non Commercial 4.0 (chosen due to the use of restricted/gated datasets)
- **Finetuned from model:** [gemma-2b](https://huggingface.co/google/gemma-2b)

## Uses

The model is specifically designed for Turkish text generation. It is not suitable for instruction-following or question-answering tasks.

## Restrictions

Gemma is provided under and subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/terms). Please review the [Gemma use restrictions](https://ai.google.dev/gemma/terms#3.2-use) before using the model.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr")

prompt = "Bugün sinemaya gidemedim çünkü"  # "I couldn't go to the cinema today because"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a continuation of the prompt.
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
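For open-ended generation, sampling usually produces more varied continuations than the greedy default. Below is a minimal sketch using standard `transformers` generation arguments; the specific parameter values are illustrative assumptions, not settings from this model card:

```python
# Sampling-based generation (illustrative settings, not from the model card).
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,   # sample from the distribution instead of greedy decoding
    temperature=0.7,  # soften the token distribution
    top_p=0.9,        # nucleus sampling: restrict to the top 90% probability mass
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```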

## Training Details

### Training Data

- **Dataset size:** ~190 million tokens (~100K documents)
- **Dataset content:** web crawl data

### Training Procedure

#### Training Hyperparameters

- **Adapter:** QLoRA (see the configuration sketch after this list)
- **Epochs:** 1
- **Context length:** 1024
- **LoRA Rank:** 32
- **LoRA Alpha:** 32
- **LoRA Dropout:** 0.05
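
The card lists the adapter hyperparameters but not the training code itself. As a rough illustration of how such a QLoRA setup is typically expressed with the `peft` and `bitsandbytes` libraries, here is a minimal sketch; the 4-bit quantization settings and `target_modules` are assumptions, not taken from the card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization for QLoRA (assumed settings; the card does not specify them).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    quantization_config=bnb_config,
)

# LoRA hyperparameters from the card: rank 32, alpha 32, dropout 0.05.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption, not from the card
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```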