
Reflection-Llama-3.1-70B-GGUF

Original Model

mattshumer/ref_70_e3

  • The recommended system prompt for this model:

    You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.

  • Tips for performance (a usage sketch follows this list):

    • Recommended temperature: 0.5
    • Recommended top_p: 0.95
    • For increased accuracy, append "Think carefully." to the end of your messages.
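
A minimal local-inference sketch with these settings, using llama-cpp-python (which supports GGUF files and a Llama 3 chat format). The GGUF filename, GPU-offload setting, and the reduced context window are illustrative assumptions, not values taken from this card.

```python
# Sketch only: adjust model_path, n_ctx, and n_gpu_layers for your hardware.
from llama_cpp import Llama

SYSTEM_PROMPT = (
    "You are a world-class AI system, capable of complex reasoning and reflection. "
    "Reason through the query inside <thinking> tags, and then provide your final "
    "response inside <output> tags. If you detect that you made a mistake in your "
    "reasoning at any point, correct yourself inside <reflection> tags."
)

llm = Llama(
    model_path="Reflection-Llama-3.1-70B-Q4_K_M.gguf",  # hypothetical filename; pick any quant from this repo
    n_ctx=8192,           # the model supports a much larger context; kept small here to limit memory use
    n_gpu_layers=-1,      # offload all layers if a GPU build of llama.cpp is available
    chat_format="llama-3",
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # Appending "Think carefully." follows the accuracy tip above.
        {"role": "user", "content": "Which is larger, 9.11 or 9.9? Think carefully."},
    ],
    temperature=0.5,
    top_p=0.95,
    max_tokens=1024,
)
print(result["choices"][0]["message"]["content"])
```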

Run with GaiaNet

  • Prompt template

    prompt template: llama-3-chat

  • Context size

    chat_ctx_size: 128000

  • Run with GaiaNet: follow the GaiaNet node quick-start guide to serve this model from your own node. An API sketch follows this list.
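
A GaiaNet node serving this model exposes an OpenAI-compatible chat API, so any OpenAI client can talk to it. A minimal sketch, assuming a local node at http://localhost:8080/v1 and a placeholder model name; replace the base URL, API key, and model name with whatever your node actually reports.

```python
# Sketch only: base_url, api_key, and model are placeholders for your own node.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a world-class AI system, capable of complex reasoning and reflection. "
    "Reason through the query inside <thinking> tags, and then provide your final "
    "response inside <output> tags. If you detect that you made a mistake in your "
    "reasoning at any point, correct yourself inside <reflection> tags."
)

client = OpenAI(base_url="http://localhost:8080/v1", api_key="no-key-needed")

response = client.chat.completions.create(
    model="Reflection-Llama-3.1-70B",  # placeholder; use the name listed by your node's /v1/models
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Plan a three-day reading schedule for a dense textbook. Think carefully."},
    ],
    temperature=0.5,
    top_p=0.95,
)
print(response.choices[0].message.content)
```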

Quantized with llama.cpp b3664

GGUF

  • Model size: 70.6B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
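
To fetch a specific quantization level, the file can be pulled with huggingface_hub. The filename below is an assumed example of the usual <model>-<quant>.gguf naming; check the repository's file list for the exact names.

```python
# Sketch only: the filename is assumed; browse the repo's file list for exact names.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="gaianet/Reflection-Llama-3.1-70B-GGUF",
    filename="Reflection-Llama-3.1-70B-Q4_K_M.gguf",  # 4-bit example; other bit widths listed above
)
print(local_path)
```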
