---
license: cc-by-sa-3.0
task_categories:
- question-answering
tags:
- wikipedia
- usearch
- distilbert
- msmarco
- text
size_categories:
- 100K<n<1M
---
# wikiembed.py
text embeddings for wikipedia articles
## artifacts
**sqlite3 db** with an `article_sections` table containing cleaned wikipedia article contents by section (a read-back sketch follows the field list)
- `id` - unique section id, generated by sqlite
- `article_id` - wikipedia’s provided id for the article to which the section belongs
- `title` - the article title
- `url` - the article url
- `sequence_id` - the per-article section order number
- `section_name` - the section heading from wikipedia, or 'Lead' for an article's lead section, which has no heading
- `text` - the cleaned text contents for the section
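for example, the sections of one article can be read back in order with python's built-in `sqlite3`. the db filename and article title below are placeholders, not names shipped with the dataset:
```python
import sqlite3

# placeholder db path; column names follow the field list above
con = sqlite3.connect("simplewiki.db")
rows = con.execute(
    "SELECT sequence_id, section_name, text FROM article_sections "
    "WHERE title = ? ORDER BY sequence_id",
    ("Alan Turing",),
).fetchall()
for sequence_id, section_name, text in rows:
    print(sequence_id, section_name, text[:60])
```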
[usearch](https://unum-cloud.github.io/usearch/) **index for semantic search** of wikipedia section contents
- keyed on the section `id` from the sqlite `article_sections` table
- vector embeddings generated with [msmarco distilbert](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) model in [coreml](https://huggingface.co/ZachNagengast/coreml-msmarco-bert-base-dot-v5)
* 768 dimensions
* f32 precision
* 512 token limit
- section contents chunked & embedded with page title and section name in a chunk header
- indexed with the following parameters (a build sketch follows this list):
* [inner product](https://zilliz.com/blog/similarity-metrics-for-vector-search#Inner-Product) distance metric
* connectivity (`M` in HNSW terms) of 16
- tested 16, 32, 64, & 200; 16 produced the smallest index on disk with identical recall accuracy on test queries
* <details><summary><i>index sizes for various connectivity × precision config pairs</i></summary>
```
❯ du -sh ./20240720/connectivity-*/*
979M ./20240720/connectivity-16/simplewiki-20240720.f16.index
1.8G ./20240720/connectivity-16/simplewiki-20240720.f32.index
1.8G ./20240720/connectivity-200/simplewiki-20240720.f16.index
1.8G ./20240720/connectivity-200/simplewiki-20240720.f32.index
1.0G ./20240720/connectivity-32/simplewiki-20240720.f16.index
1.9G ./20240720/connectivity-32/simplewiki-20240720.f32.index
1.2G ./20240720/connectivity-64/simplewiki-20240720.f16.index
2.0G ./20240720/connectivity-64/simplewiki-20240720.f32.index
```
</details>
* index expansion add (`efConstruction` in HNSW terms) of 128 (usearch default)
* quantizing embedding vectors from f32 to f16
- identical recall accuracy on test queries (i8 performed poorly by contrast)
* `multi` key support enabled (*so more than one chunk can refer to the same section id*)
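putting those parameters together, here is a minimal sketch of building an equivalent index with usearch's python API. the `embed` placeholder and the sample section are assumptions standing in for the coreml msmarco-distilbert pipeline, not the exact code in `wikiembed.py`:
```python
import numpy as np
from usearch.index import Index

# parameters as listed above: inner-product metric, connectivity 16,
# expansion_add 128, f16 storage, multi-key support
index = Index(
    ndim=768,
    metric="ip",
    dtype="f16",
    connectivity=16,
    expansion_add=128,
    multi=True,  # several chunks may map to the same section id
)

def embed(text: str) -> np.ndarray:
    # placeholder for the coreml msmarco-distilbert encoder;
    # a random unit vector stands in for a real 768-d embedding
    v = np.random.rand(768).astype(np.float32)
    return v / np.linalg.norm(v)

# hypothetical section row: sqlite id plus title, section name, and text
section_id, title, section_name = 42, "Alan Turing", "Early life"
chunks = ["Turing was born in Maida Vale, London ..."]  # split to fit 512 tokens

for chunk in chunks:
    # each chunk is embedded with a header of page title and section name
    vector = embed(f"{title} - {section_name}\n{chunk}")
    index.add(section_id, vector)

index.save("example.index")
```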
the **original [simple english wikipedia dump](https://dumps.wikimedia.org/simplewiki/20240801/)**, dated 2024-08-01, that was used to generate these artifacts
### modifiable reproduction script `wikiembed.py`
* *requires [msmarco distilbert tas b coreml](https://huggingface.co/ZachNagengast/coreml-msmarco-bert-base-dot-v5) model as written*
* [coremltools](https://github.com/apple/coremltools) prediction, which was used to generate the vector embeddings for the semantic index, only works on macOS[*](https://apple.github.io/coremltools/docs-guides/source/model-prediction.html). for cross-platform coreml prediction, check out [tvm](https://tvm.apache.org/docs/how_to/compile_models/from_coreml.html#load-pretrained-coreml-model)
* other dependencies described in `requirements.txt`:
```sh
pip3 install -r requirements.txt
```
* make any desired changes in the script, e.g. to index parameters, wikipedia dump language or date, or the embedding model
* run the script to download, clean, persist & index the dump contents
```sh
chmod +x wikiembed.py
./wikiembed.py
```
---
released under [creative commons license (CC BY-SA 3.0 unported)](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
data cleaning adapted from [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia)
by [britt lewis](https://bl3.dev)