---
language:
- en
pretty_name: ScienceWikiSmallChunk
tags:
- RAG
- Retrieval Augmented Generation
- Small Chunks
- Wikipedia
- Science
- Scientific
- Scientific Wikipedia
- Science Wikipedia
- 512 tokens
license: cc-by-sa-3.0
task_categories:
- text-generation
- text-classification
- question-answering
---

# ScienceWikiSmallChunk

A processed version of millawell/wikipedia_field_of_science, prepared for use in small-context RAG systems. Chunk length is tokenizer dependent, but each chunk should be around 512 tokens. Longer Wikipedia pages have been split into smaller entries, with the page title added as a prefix to each chunk.
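The dataset can be loaded with the `datasets` library. A minimal sketch follows; the repo id and the `text` column name are assumptions (mirroring the naming of the 256-token variant linked below), so verify them on the dataset page:

```python
# Minimal loading sketch. The repo id and the "text" column are
# assumptions based on this card's naming; verify on the dataset page.
from datasets import load_dataset

ds = load_dataset("Laz4rz/wikipedia_science_chunked_small_rag_512", split="train")
print(ds[0]["text"][:200])  # each entry is a ~512-token chunk, title-prefixed
```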

A 256-token version is also available: Laz4rz/wikipedia_science_chunked_small_rag_256

If you wish to prepare a different chunk length:
1. use millawell/wikipedia_field_of_science
2. adapt the chunker function below (a usage sketch follows the function):
```python
import re

def chunker_clean(results, example, length=512, approx_token=3, prefix=""):
    # On the first call, collapse newline/whitespace runs into single
    # spaces and strip any existing prefix occurrences from the text.
    if len(results) == 0:
        regex_pattern = r'[\n\s]*\n[\n\s]*'
        example = re.sub(regex_pattern, " ", example).strip().replace(prefix, "")
    # Approximate character budget: tokens * average characters per token.
    chunk_length = length * approx_token
    if len(example) > chunk_length:
        # Cut at the budget, then back up to the last full sentence;
        # fall back to the raw cut if no sentence boundary was found.
        first = example[:chunk_length]
        chunk = ".".join(first.split(".")[:-1])
        if len(chunk) == 0:
            chunk = first
        rest = example[len(chunk)+1:]
        results.append(prefix+chunk.strip())
        # Recurse while the remainder is still over budget.
        if len(rest) > chunk_length:
            chunker_clean(results, rest.strip(), length=length, approx_token=approx_token, prefix=prefix)
        else:
            results.append(prefix+rest.strip())
    else:
        results.append(prefix+example.strip())
    return results
```
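A hypothetical end-to-end run over the source dataset might look like the following. The `title` and `text` column names are assumptions, so check the actual schema of millawell/wikipedia_field_of_science before running:

```python
# Hypothetical usage sketch: re-chunk the source dataset to ~256 tokens.
# Column names "title" and "text" are assumptions; verify the schema.
from datasets import load_dataset

source = load_dataset("millawell/wikipedia_field_of_science", split="train")

all_chunks = []
for row in source.select(range(10)):  # small sample for illustration
    prefix = row["title"] + " "       # title is prepended to every chunk
    all_chunks.extend(
        chunker_clean([], row["text"], length=256, approx_token=3, prefix=prefix)
    )
print(len(all_chunks), all_chunks[0][:120])
```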