TroyDoesAI committed
Commit 67ece5e
1 Parent(s): 21d28eb

Update README.md

Files changed (1): README.md (+110 -0)

README.md CHANGED

https://colab.research.google.com/drive/1gkmMOVQ_P-NGIRuK3Kj3gWJat33MHNi8#scroll
# YouTube:
https://www.youtube.com/watch?v=IiVlO4JBZaU
---

### Basic Context-Obedient Prompt that works great for RAG

Note: It's pretty PG when it comes to its responses, but a quick dataset rinse with something toxic could change that right up.

Example Video:
https://imgur.com/LGuC1I0

Example Video 2: Further testing with more key/value pairs
https://imgur.com/xYyYRgz

```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
date: 2024-05-03
url: https://web.site.thisshitsbadouthereboys/123
ENDCONTEXT
Pandemic Warning Notice: there has been a huge issue with zombie humans that are passing on a new disease that appears similar to the symptoms of covid, but when a host dies they reanimate as a zombie corpse.
ENDINPUT
BEGININSTRUCTION
What is the pandemic about? Cite your sources.
ENDINSTRUCTION

### Contextual Response:
```
---

## Overview

This model is meant to enhance adherence to provided context (e.g., for RAG applications) and reduce hallucinations, inspired by the airoboros context-obedient question-answer format.

The format for a contextual prompt is as follows:
```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
[... other metadata, e.g. character Mood: Scared, Tone of the scene: Spooky; anything that enhances your RAG experience, even small Mermaid knowledge graphs stored as core memories/events]
ENDCONTEXT
[insert your text blocks here; this is where RAG content goes]
ENDINPUT
[add as many other input blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s); the model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.
- `Contextual-Request:` - denotes the type of request pattern the model is to follow, for consistency
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - insert whatever text you want for the input block, as many paragraphs as can fit in the context
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set
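
Since the format is rigidly delimited, it is straightforward to assemble programmatically. Here is a minimal sketch; the `build_contextual_prompt` helper name and its signature are my own, not part of the model or its dataset:

```python
def build_contextual_prompt(blocks, instruction):
    """Assemble a Contextual-Request prompt.

    blocks: list of (metadata_dict, text) tuples, one per input block.
    instruction: the question(s) to ask about the blocks above.
    """
    lines = ["Contextual-Request:"]
    for metadata, text in blocks:
        lines.append("BEGININPUT")
        lines.append("BEGINCONTEXT")
        for key, value in metadata.items():
            lines.append(f"{key}: {value}")
        lines.append("ENDCONTEXT")
        lines.append(text)
        lines.append("ENDINPUT")
    lines.append("BEGININSTRUCTION")
    lines.append(instruction)
    lines.append("ENDINSTRUCTION")
    return "\n".join(lines)


prompt = build_contextual_prompt(
    blocks=[({"date": "2021-01-01", "url": "https://web.site/123"},
             "In a shocking turn of events, blueberries are now green.")],
    instruction="What color are blueberries? Source?",
)
print(prompt)
```

Each retrieved chunk becomes one input block, so the metadata stays attached to the text it describes.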

Here's a trivial, but important, example to prove the point:
```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the expected response:
```
### Contextual Response:
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
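
If you consume these responses programmatically, the source metadata can be split back out with a small parser. This is a sketch under the assumption that the model always follows the `Source:` key/value layout shown above, which is not guaranteed:

```python
def parse_contextual_response(response):
    """Split a Contextual Response into answer text and source metadata.

    Assumes the layout above: answer lines, then a 'Source:' line,
    then 'key: value' metadata lines. Returns (answer, metadata_dict).
    """
    answer_lines, metadata = [], {}
    in_source = False
    for line in response.splitlines():
        line = line.strip()
        if line == "### Contextual Response:" or not line:
            continue
        if line == "Source:":
            in_source = True
            continue
        if in_source and ": " in line:
            key, _, value = line.partition(": ")
            metadata[key] = value
        else:
            answer_lines.append(line)
    return " ".join(answer_lines), metadata


response = """### Contextual Response:
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123"""

answer, source = parse_contextual_response(response)
```

In practice you would want a fallback path for responses that omit or reword the `Source:` block.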

### References in response

As shown in the example, the dataset includes many examples that add source details to the response when the question asks for a source/citation/reference.
117
+
118
+ Why do this? Well, the R in RAG seems to be the weakest link in the chain.
119
+ Retrieval accuracy, depending on many factors including the overall dataset size, can be quite low.
120
+ This accuracy increases when retrieving more documents, but then you have the issue of actually using
121
+ the retrieved documents in prompts. If you use one prompt per document (or document chunk), you know
122
+ exactly which document the answer came from, so there's no issue. If, however, you include multiple
123
+ chunks in a single prompt, it's useful to include the specific reference chunk(s) used to generate the
124
+ response, rather than naively including references to all of the chunks included in the prompt.
125
+
126
+ For example, suppose I have two documents:
127
+ ```
128
+ url: http://foo.bar/1
129
+ Strawberries are tasty.
130
+
131
+ url: http://bar.foo/2
132
+ The cat is blue.
133
+ ```
134
+
135
+ If the question being asked is `What color is the cat?`, I would only expect the 2nd document to be referenced in the response, as the other link is irrelevant.
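
Wrapping both chunks in the format above keeps each URL attached to its own text, which is what lets the model cite only the chunk it actually used. A minimal sketch of the assembly (plain string building; the variable names are mine):

```python
# Two retrieved chunks, each paired with its own metadata, so the model
# can attribute the answer to the correct source.
documents = [
    ({"url": "http://foo.bar/1"}, "Strawberries are tasty."),
    ({"url": "http://bar.foo/2"}, "The cat is blue."),
]

parts = ["Contextual-Request:"]
for metadata, text in documents:
    parts += ["BEGININPUT", "BEGINCONTEXT"]
    parts += [f"{key}: {value}" for key, value in metadata.items()]
    parts += ["ENDCONTEXT", text, "ENDINPUT"]
parts += ["BEGININSTRUCTION", "What color is the cat? Source?", "ENDINSTRUCTION"]

prompt = "\n".join(parts)
print(prompt)
```

Given this prompt, the expected response would cite only `http://bar.foo/2`, since the strawberry chunk is irrelevant to the question.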