---
language:
- en
pretty_name: CANNOT
---
# Dataset Card for CANNOT

## Dataset Description

- **Homepage:** https://github.com/dmlls/cannot-dataset
- **Repository:** https://github.com/dmlls/cannot-dataset
- **Paper:** tba

### Dataset Summary

**CANNOT** is a dataset that focuses on negated textual pairs. It currently
contains **77,376 samples**, of which roughly half are negated pairs of
sentences, and the other half are not (they are paraphrased versions of each
other).

The most frequent negation in the dataset is verbal negation (e.g.,
will → won't), although it also contains pairs with antonyms (e.g., cold → hot).

### Languages

CANNOT contains texts exclusively in **English**.

## Dataset Structure

The dataset is given as a
[`.tsv`](https://en.wikipedia.org/wiki/Tab-separated_values) file with the
following structure:

| premise     | hypothesis                                         | label |
|:------------|:---------------------------------------------------|:-----:|
| A sentence. | An equivalent, non-negated sentence (paraphrased). |   0   |
| A sentence. | The sentence negated.                              |   1   |

The dataset can easily be loaded into a Pandas DataFrame by running:

```python
import pandas as pd

# The file is tab-separated, hence sep='\t'.
dataset = pd.read_csv('negation_dataset_v1.0.tsv', sep='\t')
```
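
The `label` column can then be used to separate the two kinds of pairs, for
example (a minimal sketch, assuming the snippet above has been run):

```python
# Label semantics follow the table above: 0 = paraphrase, 1 = negation.
paraphrased = dataset[dataset['label'] == 0]
negated = dataset[dataset['label'] == 1]

print(f'{len(negated)} negated and {len(paraphrased)} paraphrased pairs')
```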

## Dataset Creation

The dataset has been created by cleaning up and merging the following datasets:

1. _Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal
   Negation_ (see
   [`datasets/nan-nli`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/nan-nli)).

2. _GLUE Diagnostic Dataset_ (see
   [`datasets/glue-diagnostic`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/glue-diagnostic)).

3. _Automated Fact-Checking of Claims from Wikipedia_ (see
   [`datasets/wikifactcheck-english`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/wikifactcheck-english)).

4. _From Group to Individual Labels Using Deep Features_ (see
   [`datasets/sentiment-labelled-sentences`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/sentiment-labelled-sentences)).
   In this case, the negated sentences were obtained with the Python module
   [`negate`](https://github.com/dmlls/negate), as sketched after this list.
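
Negating a sentence with `negate` looks roughly as follows (a minimal sketch;
the exact API may differ across versions of the library):

```python
from negate import Negator

negator = Negator()
# e.g., "The food was great." -> "The food wasn't great."
negated_sentence = negator.negate_sentence("The food was great.")
```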

Additionally, for each of the negated samples, another pair of non-negated
sentences has been added by paraphrasing them with the pre-trained model
[`🤗tuner007/pegasus_paraphrase`](https://huggingface.co/tuner007/pegasus_paraphrase).
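
Generating such a paraphrase could look along these lines (a minimal sketch
using the `transformers` library; the generation parameters are illustrative,
not necessarily the ones used to build the dataset):

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'tuner007/pegasus_paraphrase'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer(['The food was great.'], truncation=True,
                  padding='longest', return_tensors='pt')
# Beam search returning a single paraphrase.
outputs = model.generate(**batch, max_length=60, num_beams=5)
paraphrase = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```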

Furthermore, the dataset from _It Is Not Easy To Detect Paraphrases: Analysing
Semantic Similarity With Antonyms and Negation Using the New SemAntoNeg
Benchmark_ (see
[`datasets/antonym-substitution`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/antonym-substitution))
has also been included. This dataset already provides both the paraphrased and
negated version of each premise, so no further processing was needed.

Finally, the swapped version of each pair (premise ⇋ hypothesis) has also been
included, and any duplicates have been removed, as sketched below.
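
In pandas terms, this last step amounts to something like the following
(a minimal sketch; `dataset` and the column names follow the loading snippet
and the table above):

```python
# Swap premise and hypothesis to obtain the mirrored pairs.
swapped = dataset.rename(columns={'premise': 'hypothesis',
                                  'hypothesis': 'premise'})

# Concatenate originals and swaps, then drop exact duplicates.
full = (pd.concat([dataset, swapped], ignore_index=True)
          .drop_duplicates(ignore_index=True))
```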

The contribution of each of these individual datasets to the final CANNOT
dataset is:

| Dataset                                             |    Samples |
|:----------------------------------------------------|-----------:|
| Not another Negation Benchmark                      |        118 |
| GLUE Diagnostic Dataset                             |        154 |
| Automated Fact-Checking of Claims from Wikipedia    |     14,970 |
| From Group to Individual Labels Using Deep Features |      2,110 |
| It Is Not Easy To Detect Paraphrases                |      8,597 |
| <p align="right"><b>Total</b></p>                   | **25,949** |

_Note_: The numbers above include only the original samples drawn from each
dataset, i.e., before adding the paraphrased and swapped pairs.

## Additional Information

### Licensing Information

TODO

### Citation Information

tba

### Contributions

Contributions to the dataset can be submitted through the [project repository](https://github.com/dmlls/cannot-dataset).