Cognitive-Machines-Lab committed on
Commit 875da94
1 Parent(s): ad48a03

Create README.md

Files changed (1)
  1. README.md +53 -0
README.md ADDED
@@ -0,0 +1,53 @@
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- mistral
- llama-2
- agi
- problem solving
- biology
- reasoning
---

<h1 style="font-size: 36px;">Ursidae-11b-Mini</h1>

<center>
<img src="https://i.imgur.com/y35DlPz.png" alt="logo" width="10%" style="min-width:20px; display:block;">
</center>

## Upcoming Models:

**Ursidae-11b-Mini**
A model focused on complex, multi-step chain-of-thought problem solving while remaining deployable on edge systems. Now better at reasoning!

## Main Goals:

Ursidae was designed to address specific issues found in other chat models:

- Overcome limitations in logical reasoning found in other chat models.
- Efficiently solve complex, multi-step problems.
- Provide better decision-making assistance by enhancing the model's ability to reason and think critically.
- Remove restrictions, allowing the model to gain a true understanding of reality and greatly improving overall results.

By focusing on these goals, Ursidae-11b-Mini aims to provide a more sophisticated AI system that excels at critical thinking and problem-solving tasks requiring advanced logical reasoning. Its compact design makes it an efficient choice for applications that demand strong reasoning abilities without excessive compute.
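
As a rough, hedged sketch of running the model on constrained hardware with Hugging Face Transformers (the library declared in the metadata), the snippet below loads it in 4-bit via bitsandbytes. The repository id `Cognitive-Machines-Lab/Ursidae-11b-Mini` and the quantization choice are assumptions for illustration, not documented details of this release.

```python
# Loading sketch (assumptions: repo id and 4-bit quantization; adjust to your setup).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Cognitive-Machines-Lab/Ursidae-11b-Mini"  # hypothetical repo id

# 4-bit quantization keeps an ~11B-parameter model within a single consumer GPU's memory.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype="bfloat16")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```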

## Recommended Settings:

**Defaults:**

```
min_p: 0.074
top_k: 40
repetition_penalty: 1.12
temp: 1.18
context: 8192
```
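
As a hedged sketch, these defaults might map onto Transformers' `generate()` roughly as below. Note that `temp` corresponds to `temperature`, `context: 8192` describes the context window rather than a `generate()` argument, `min_p` sampling is only available in newer Transformers releases, and the repository id is an assumption.

```python
# Sketch: applying the recommended sampling defaults with transformers' generate().
# Assumptions: repo id, plain-text prompting, and min_p support (recent Transformers versions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cognitive-Machines-Lab/Ursidae-11b-Mini"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Explain, step by step, why the sky appears blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.18,         # temp
    top_k=40,
    min_p=0.074,              # requires a Transformers version with min_p sampling
    repetition_penalty=1.12,
    max_new_tokens=512,       # keep prompt + output within the 8192-token context
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```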

## Benchmarks:
PENDING FULL EVAL