---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: labels
    dtype: int64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 31586681
    num_examples: 187788
  - name: validation
    num_bytes: 6664342
    num_examples: 40241
  - name: test
    num_bytes: 6715925
    num_examples: 40240
  - name: train_nopreprocessing
    num_bytes: 30766778
    num_examples: 174191
  - name: validation_nopreprocessing
    num_bytes: 6460819
    num_examples: 37209
  - name: test_nopreprocessing
    num_bytes: 6534563
    num_examples: 37300
  download_size: 57115183
  dataset_size: 88729108
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: train_nopreprocessing
    path: data/train_nopreprocessing-*
  - split: validation_nopreprocessing
    path: data/validation_nopreprocessing-*
  - split: test_nopreprocessing
    path: data/test_nopreprocessing-*
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- english
- nsfw
- safety
- diffusers
- transformers
- inappropriate
size_categories:
- 100K<n<1M
---
# Dataset Card
This dataset was created to develop filters that block NSFW (Not Safe For Work) text content. It was assembled by scraping data from the web and by drawing on existing open-source datasets. A significant portion of the dataset consists of descriptions of images and scenes. Its primary objective is to keep text-to-image diffusion models from generating NSFW content, but it can also be used for other moderation purposes.
- **Shared by:** CentraleSupélec students
- **Language:** English
- **License:** apache-2.0
## Uses
The dataset can be used to fine-tune transformer models for the classification of NSFW prompts. It was used to develop a DistilBERT filter that achieves an F1 score of 0.973 on the validation split. The filter is available at https://huggingface.co/eliasalbouzidi/distilbert-nsfw-text-classifier.
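A minimal fine-tuning sketch is shown below, assuming the dataset is hosted at `eliasalbouzidi/NSFW-Safe-Dataset` (inferred from this repository) and using the Hugging Face `datasets` and `transformers` libraries. The hyperparameters are illustrative placeholders, not the settings used to train the released filter.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed repository id for this dataset card; adjust if it differs.
dataset = load_dataset("eliasalbouzidi/NSFW-Safe-Dataset")

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # "text" holds the prompt/description, "labels" the integer class id.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="nsfw-text-classifier",   # illustrative output path
        num_train_epochs=1,                  # placeholder hyperparameters
        per_device_train_batch_size=32,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,                     # enables dynamic padding
)
trainer.train()
```

For inference only, the released filter linked above can be loaded directly, e.g. `pipeline("text-classification", model="eliasalbouzidi/distilbert-nsfw-text-classifier")`.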
## Dataset Structure
The dataset contains six splits. The first three (`train`, `validation`, `test`) have undergone the preprocessing described below; the last three (`train_nopreprocessing`, `validation_nopreprocessing`, `test_nopreprocessing`) are the corresponding splits without any preprocessing.
### Preprocessing
To reduce noise in the data and minimize biases, we preprocessed the dataset by (an illustrative code sketch follows the list):
1. Normalizing case
2. Removing numbers
3. Removing punctuation and brackets
4. Removing URLs
5. Removing HTML tags
6. Removing Twitter mentions
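The sketch below is a rough approximation of these steps using Python's `re` and `string` modules; the exact regular expressions used to build the dataset are not published here, and URL/HTML/mention removal is applied before punctuation stripping so those patterns can still be matched.

```python
import re
import string

def preprocess(text: str) -> str:
    """Approximate the preprocessing steps listed above (illustrative only)."""
    text = text.lower()                                  # 1. normalize case
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # 4. remove URLs
    text = re.sub(r"<[^>]+>", " ", text)                 # 5. remove HTML tags
    text = re.sub(r"@\w+", " ", text)                    # 6. remove Twitter mentions
    text = re.sub(r"\d+", " ", text)                     # 2. remove numbers
    text = text.translate(str.maketrans("", "", string.punctuation))  # 3. punctuation/brackets
    return re.sub(r"\s+", " ", text).strip()             # collapse leftover whitespace

print(preprocess("Check https://example.com <b>NOW</b>, @user!! 123"))  # -> "check now"
```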
## Contact
Please reach out to [email protected] if you have any questions or feedback.