task_categories:
- text-generation
- text-classification
- question-answering
---

# ScienceWikiSmallChunk

Processed version of millawell/wikipedia_field_of_science, prepared for use in small-context-length RAG systems. Chunk length is tokenizer dependent, but each chunk should be around 512 tokens. Longer Wikipedia pages have been split into smaller entries, with the page title added as a prefix.

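For a quick look at the data, the snippet below is a minimal sketch of loading the dataset and checking approximate chunk lengths; the repository id `Laz4rz/ScienceWikiSmallChunk`, the `train` split, the `text` column, and the tokenizer choice are assumptions, so adjust them to the dataset card.

```python
# Minimal sketch: load the chunked dataset and inspect per-chunk token counts.
# Repo id, split, column name, and tokenizer are assumptions, not part of this README.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("Laz4rz/ScienceWikiSmallChunk", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

sample = dataset.select(range(100))  # small sample for a quick check
lengths = [len(tokenizer.encode(row["text"])) for row in sample]
print(f"mean tokens per chunk: {sum(lengths) / len(lengths):.1f}")
```
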
If you wish to prepare chunks of a different length:
- use millawell/wikipedia_field_of_science
- adapt the chunker function below:
```python
import re

def chunker_clean(results, example, length=512, approx_token=3, prefix=""):
    # On the first call, collapse whitespace/newlines and strip the prefix from the raw text
    if len(results) == 0:
        regex_pattern = r'[\n\s]*\n[\n\s]*'
        example = re.sub(regex_pattern, " ", example).strip().replace(prefix, "")
    # Character budget: roughly approx_token characters per token
    chunk_length = length * approx_token
    if len(example) > chunk_length:
        # Cut at the last sentence boundary inside the budget, falling back to a hard cut
        first = example[:chunk_length]
        chunk = ".".join(first.split(".")[:-1])
        if len(chunk) == 0:
            chunk = first
        rest = example[len(chunk)+1:]
        results.append(prefix+chunk.strip())
        # Recurse on the remainder, or emit it as the final chunk
        if len(rest) > chunk_length:
            chunker_clean(results, rest.strip(), length=length, approx_token=approx_token, prefix=prefix)
        else:
            results.append(prefix+rest.strip())
    else:
        results.append(prefix+example.strip())
    return results
```
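
As a usage sketch, the function above can be run over the source dataset to regenerate chunks at a different target length; the `title` and `text` column names for millawell/wikipedia_field_of_science are assumptions here, so check them against that dataset's schema.

```python
# Sketch of rebuilding chunks with a smaller target length (e.g. ~256 tokens).
# Column names "title" and "text" are assumptions; verify them on the source dataset.
from datasets import load_dataset

source = load_dataset("millawell/wikipedia_field_of_science", split="train")

chunks = []
for record in source.select(range(1000)):  # limit for a quick run
    prefix = record["title"] + " "         # title is prepended to every chunk
    chunker_clean(chunks, record["text"], length=256, approx_token=3, prefix=prefix)

print(len(chunks), "chunks; first chunk preview:", chunks[0][:120])
```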