afrideva committed
Commit ac9ba68
1 Parent(s): ab8e4cd

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +51 -0
README.md ADDED
@@ -0,0 +1,51 @@
---
base_model: euclaise/Echo-3B
datasets:
- pankajmathur/lima_unchained_v1
- CheshireAI/guanaco-unchained
- totally-not-an-llm/sharegpt-hyperfiltered-3k
- totally-not-an-llm/EverythingLM-data-V3
- LDJnr/Verified-Camel
- CollectiveCognition/chats-data-2023-10-16
- Norquinal/claude_multiround_chat_30k
- euclaise/WritingPromptsX
- euirim/goodwiki
- euclaise/MiniCoT
- euclaise/SciCoT
- euclaise/symtune_mini
- euclaise/mathoverflow-accepted
- lemonilia/LimaRP
inference: false
model_creator: euclaise
model_name: Echo-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# euclaise/Echo-3B-GGUF

Quantized GGUF model files for [Echo-3B](https://huggingface.co/euclaise/Echo-3B) from [euclaise](https://huggingface.co/euclaise).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [echo-3b.fp16.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.fp16.gguf) | fp16 | 5.59 GB |
| [echo-3b.q2_k.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q2_k.gguf) | q2_k | 1.20 GB |
| [echo-3b.q3_k_m.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q3_k_m.gguf) | q3_k_m | 1.39 GB |
| [echo-3b.q4_k_m.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q4_k_m.gguf) | q4_k_m | 1.71 GB |
| [echo-3b.q5_k_m.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q5_k_m.gguf) | q5_k_m | 1.99 GB |
| [echo-3b.q6_k.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q6_k.gguf) | q6_k | 2.30 GB |
| [echo-3b.q8_0.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q8_0.gguf) | q8_0 | 2.97 GB |
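As a rough guide for choosing a quant, the effective bits per weight can be estimated from the file sizes in the table above. The sketch below infers the parameter count from the fp16 file (2 bytes per weight), so the figures are approximations that ignore GGUF metadata and any tensors kept at higher precision; nothing here beyond the table sizes comes from this repository.

```python
# Approximate bits-per-weight for each quant, derived only from the
# file sizes listed in the table above.
SIZES_GB = {
    "fp16": 5.59,
    "q2_k": 1.20,
    "q3_k_m": 1.39,
    "q4_k_m": 1.71,
    "q5_k_m": 1.99,
    "q6_k": 2.30,
    "q8_0": 2.97,
}

# Estimated weight count, assuming the fp16 file stores 2 bytes per weight.
N_PARAMS = SIZES_GB["fp16"] * 1e9 / 2  # ~2.8e9 weights

def bits_per_weight(size_gb: float) -> float:
    """Average bits per weight implied by a file of `size_gb` gigabytes."""
    return size_gb * 1e9 * 8 / N_PARAMS

for name, size in SIZES_GB.items():
    print(f"{name:8s} ~{bits_per_weight(size):.1f} bits/weight")
```

By construction fp16 comes out at 16.0 bits/weight, and the k-quants land near their nominal bit widths (q4_k_m at roughly 4.9, q8_0 at roughly 8.5), the overhead coming from block scales and mixed-precision layers.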


## Original Model Card: