
tFINE-900m-e16-d32-1024ctx

A T5 model pretrained with nanoT5:

  • ~900m parameters: 16 encoder layers, 32 decoder layers
  • SentencePiece tokenizer with a 48k vocab & byte-pair fallback
    • handles whitespace etc. correctly (unlike the standard T5 tokenizer)
  • 1024-token context length during pretraining
  • relative_attention_num_buckets increased from the standard 32 to 48 to support the longer context (see the usage sketch below)
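
A minimal usage sketch: it assumes the checkpoint loads through the standard transformers seq2seq auto classes and that the tokenizer keeps T5-style <extra_id_*> sentinel tokens; treat it as illustrative rather than official usage from this card.

```python
# Minimal sketch (assumption, not from the card): load the checkpoint with
# the standard transformers seq2seq classes and run a span-denoising prompt.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "pszemraj/tFINE-900m-e16-d32-1024ctx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# confirm the upscaled relative-position buckets (48 instead of T5's default 32)
print(model.config.relative_attention_num_buckets)

# T5-style span-corruption prompt; the model generates text for the sentinel
inputs = tokenizer("The capital of France is <extra_id_0>.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

Since this is a pretrained (not instruction-tuned) checkpoint, span-denoising prompts like the one above are the expected input format; downstream use generally requires fine-tuning.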

Experiment logs

Training consisted of two phases:

  • phase one: ~30k steps at a context length of 512
  • phase two: 20k steps at a context length of 1024

Model size: 887M params (safetensors, F32)
