wikiembed.py

text embeddings for wikipedia articles

artifacts

sqlite3 db with an article_sections table, containing cleaned wikipedia article contents by section (see the example query after the field list below)

  • id - unique section id, generated by sqlite
  • article_id - wikipedia’s provided id for the article to which the section belongs
  • title - the article title
  • url - the article url
  • sequence_id - the per-article section order number
  • section_name - the section heading from Wikipedia, or 'Lead' for the lead section, which has no heading
  • text - the cleaned text contents for the section
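
for example, here's a minimal sketch of reading sections back out of the db with python's built-in sqlite3 module (the db filename and the article title are assumptions; substitute the path of the downloaded artifact):

  import sqlite3

  # filename is an assumption; point this at the downloaded sqlite artifact
  con = sqlite3.connect("simplewiki-20240801.db")
  con.row_factory = sqlite3.Row

  # fetch the first few sections of one article, in reading order
  rows = con.execute(
      """
      SELECT id, article_id, title, url, sequence_id, section_name, text
      FROM article_sections
      WHERE title = ?
      ORDER BY sequence_id
      LIMIT 3
      """,
      ("Alan Turing",),
  ).fetchall()

  for row in rows:
      print(row["sequence_id"], row["section_name"], row["text"][:80])

  con.close()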

usearch index for semantic search of wikipedia section contents

  • keyed on the section id from the sqlite article_sections table
  • vector embeddings generated with msmarco distilbert model in coreml
    • 768 dimensions
    • f32 precision
    • 512 token limit
  • section contents chunked & embedded with page title and section name in a chunk header
  • indexed with the following parameters (a construction sketch follows this list):
    • inner product distance metric
    • connectivity (M in HNSW terms) of 16
      • tested 16, 32, 64, & 200; 16 produced the smallest index on disk with identical recall accuracy on test queries
        • index sizes with various connectivity<>precision config pairs
          ❯ du -sh ./20240720/connectivity-*/*
          979M	./20240720/connectivity-16/simplewiki-20240720.f16.index
          1.8G	./20240720/connectivity-16/simplewiki-20240720.f32.index
          1.8G	./20240720/connectivity-200/simplewiki-20240720.f16.index
          1.8G	./20240720/connectivity-200/simplewiki-20240720.f32.index
          1.0G	./20240720/connectivity-32/simplewiki-20240720.f16.index
          1.9G	./20240720/connectivity-32/simplewiki-20240720.f32.index
          1.2G	./20240720/connectivity-64/simplewiki-20240720.f16.index
          2.0G	./20240720/connectivity-64/simplewiki-20240720.f32.index
          
    • index expansion add (efConstruction in HNSW terms) of 128 (usearch default)
    • quantizing embedding vectors from f32 to f16
      • identical recall accuracy on test queries (i8 performed poorly by contrast)
    • multi key support enabled (so more than one chunk can refer to the same section id)
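
as a concrete illustration of these parameters, here's a minimal sketch of building and querying an index with the usearch python package; the random vectors and the file name are placeholders/assumptions, and the real embeddings come from the coreml model via wikiembed.py:

  import numpy as np
  from usearch.index import Index

  # index parameters as documented above
  index = Index(
      ndim=768,           # msmarco distilbert embedding dimensions
      metric="ip",        # inner product distance metric
      dtype="f16",        # embeddings quantized from f32 to f16
      connectivity=16,    # M in HNSW terms
      expansion_add=128,  # efConstruction in HNSW terms (usearch default)
      multi=True,         # multiple chunks may share one section id
  )

  # add chunk embeddings keyed on the sqlite section id (random placeholders here)
  section_id = 42
  for chunk_vector in np.random.rand(2, 768).astype(np.float32):
      index.add(section_id, chunk_vector)

  index.save("simplewiki.f16.index")  # file name is an assumption

  # semantic search: embed the query the same way, then find the nearest section ids
  query_vector = np.random.rand(768).astype(np.float32)
  matches = index.search(query_vector, 10)
  print(matches.keys, matches.distances)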

the original simple english wikipedia dump that was used to generate these artifacts, dated 2024-08-01

modifiable reproduction script wikiembed.py

  • requires msmarco distilbert tas b coreml model as written
  • coremltools prediction only works on macOS, which was used to generate the vector embeddings for the semantic index. for cross-platform coreml prediction, check out tvm (a rough embedding sketch follows this list)
  • other dependencies described in requirements.txt:
    pip3 install -r requirements.txt
    
  • make any desired changes in the script, e.g. to index parameters, wikipedia dump language or date, or the embeddings model
  • run the script to download, clean, persist & index the dump contents
    chmod +x wikiembed.py
    ./wikiembed.py
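
for reference, a rough sketch of what the embedding step can look like with coremltools on macOS; the model path, tokenizer, input/output feature names, and chunk header format below are assumptions, so check wikiembed.py for the actual implementation:

  import coremltools as ct
  import numpy as np
  from transformers import AutoTokenizer

  # model path and tokenizer are assumptions; the script defines the real ones
  model = ct.models.MLModel("msmarco-distilbert-base-tas-b.mlpackage")
  tokenizer = AutoTokenizer.from_pretrained(
      "sentence-transformers/msmarco-distilbert-base-tas-b"
  )

  # chunk header format is an assumption: page title and section name, then the text
  chunk = "Alan Turing - Early life\n\nTuring was born in Maida Vale, London..."

  # tokenize up to the 512 token limit noted above
  encoded = tokenizer(
      chunk, padding="max_length", truncation=True, max_length=512, return_tensors="np"
  )

  # input/output names and dtypes are assumptions; inspect model.get_spec() to confirm
  prediction = model.predict({
      "input_ids": encoded["input_ids"].astype(np.float32),
      "attention_mask": encoded["attention_mask"].astype(np.float32),
  })
  embedding = np.asarray(next(iter(prediction.values()))).reshape(-1)
  print(embedding.shape)  # expect (768,), matching the index dimensions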
    

released under a creative commons license (CC BY-SA 3.0 unported)

data cleaning adapted from olm/wikipedia

by britt lewis
