
HelpingAI-Lite-2x1B

Subscribe to my YouTube channel

HelpingAI-Lite-2x1B is a Mixture of Experts (MoE) model that surpasses HelpingAI-Lite in accuracy, though it runs somewhat slower. This trade-off makes it a good choice when higher accuracy is worth a slightly longer processing time.
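
A minimal usage sketch with the Hugging Face transformers library follows. The repository id OEvortex/HelpingAI-Lite-2x1B is as listed on this page; the prompt and generation settings are illustrative, not recommended values.

```python
# Minimal sketch: load HelpingAI-Lite-2x1B and generate a short completion.
# Assumes the repository ships standard config/tokenizer files loadable
# through the transformers auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite-2x1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what a Mixture of Experts model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```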

Language

The model supports English.

Model details

Model size: 1.86B params (Safetensors)
Tensor type: F32
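
To confirm the parameter count and tensor type locally, a minimal sketch (assuming the repository loads through the standard transformers API):

```python
# Load the model, then count parameters and inspect the weight dtype.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-Lite-2x1B")
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e9:.2f}B parameters")   # expected ~1.86B
print(next(model.parameters()).dtype)     # expected torch.float32 (F32)
```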
