orieg committed
Commit
82e8c1e
1 Parent(s): e9a94d1

add base code for elsevier-oa-cc-by corpus

Files changed (3)
  1. LICENCE.md +3 -0
  2. README.md +300 -1
  3. elsevier-oa-cc-by.py +160 -0
LICENCE.md ADDED
@@ -0,0 +1,3 @@
+ CC BY 4.0
+
+ You can share, copy and modify this dataset so long as you give appropriate credit, provide a link to the CC BY license, and indicate if changes were made, but you may not do so in a way that suggests the rights holder has endorsed you or your use of the dataset. Note that further permission may be required for any content within the dataset that is identified as belonging to a third party.
README.md CHANGED
@@ -1,3 +1,302 @@
  ---
- license: cc-by-4.0
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: Elsevier OA CC-By Corpus
+ paperswithcode_id: elsevier-oa-cc-by-corpus
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - fill-mask
+ - summarization
+ - text-classification
+ task_ids:
+ - masked-language-modeling
+ - news-articles-summarization
+ - news-articles-headline-generation
  ---
27
+
28
+ # Dataset Card for [Dataset Name]
29
+
30
+ ## Table of Contents
31
+ - [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
32
+ - [Table of Contents](#table-of-contents)
33
+ - [Dataset Description](#dataset-description)
34
+ - [Dataset Summary](#dataset-summary)
35
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
36
+ - [Languages](#languages)
37
+ - [Dataset Structure](#dataset-structure)
38
+ - [Data Instances](#data-instances)
39
+ - [Data Fields](#data-fields)
40
+ - [Data Splits](#data-splits)
41
+ - [Dataset Creation](#dataset-creation)
42
+ - [Curation Rationale](#curation-rationale)
43
+ - [Source Data](#source-data)
44
+ - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
45
+ - [Who are the source language producers?](#who-are-the-source-language-producers)
46
+ - [Annotations](#annotations)
47
+ - [Annotation process](#annotation-process)
48
+ - [Who are the annotators?](#who-are-the-annotators)
49
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
50
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
51
+ - [Social Impact of Dataset](#social-impact-of-dataset)
52
+ - [Discussion of Biases](#discussion-of-biases)
53
+ - [Other Known Limitations](#other-known-limitations)
54
+ - [Additional Information](#additional-information)
55
+ - [Dataset Curators](#dataset-curators)
56
+ - [Licensing Information](#licensing-information)
57
+ - [Citation Information](#citation-information)
58
+ - [Contributions](#contributions)
59
+
+ ## Dataset Description
+
+ - **Homepage:** https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs
+ - **Repository:** https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs
+ - **Paper:** https://arxiv.org/abs/2008.00774
+ - **Leaderboard:**
+ - **Point of Contact:** [@orieg](https://huggingface.co/orieg)
+
+ ### Dataset Summary
+
+ The Elsevier OA CC-By Corpus is a corpus of 40,091 open access (OA) CC-BY articles from across Elsevier’s journals,
+ representing a large-scale, cross-discipline set of research data to support NLP and ML research.
+
+ ***docId*** The docId is the unique identifier of the document. It can be resolved into a URL for the document by
+ appending it to https://www.sciencedirect.com/science/pii/.
+
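+ For example, a docId can be turned into a link with a couple of lines of Python (the identifier value below is hypothetical):
+
+ ```python
+ doc_id = "S0001457517302615"  # hypothetical docId (PII-style identifier)
+ url = f"https://www.sciencedirect.com/science/pii/{doc_id}"
+ ```
+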
+ ***abstract*** This is the author-provided abstract for the document.
+
+ ***body_text*** The full text for the document. The text has been split on sentence boundaries, making it easier to
+ use across research projects. Each sentence carries the title (and ID) of the section it belongs to, along with the
+ titles (and IDs) of its parent sections. The highest-level section takes index 0 in the parents array; if the array
+ is empty, the sentence's own section title is the highest-level section title. This allows the article structure to
+ be reconstructed. References have been extracted from the sentences. The IDs of the extracted references and their
+ respective offsets within the sentence can be found in the “refoffsets” field. The complete list of references can
+ be found in the “bib_entries” field, along with each reference’s metadata. Some references will be missing, as only
+ ‘clean’ sentences are kept.
+
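+ As a sketch, the full section path of a sentence can be rebuilt like this (assuming `sentence` is one parsed entry from `body_text`):
+
+ ```python
+ def section_path(sentence):
+     # parents[0] is the highest-level section; the sentence's own section title comes last.
+     titles = [p["title"] for p in sentence.get("parents", [])]
+     titles.append(sentence["title"])
+     return " > ".join(titles)
+ ```
+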
+ ***bib_entries*** All the references from within the document can be found in this section. If the metadata for a
+ reference is available, it has been added against the key for that reference. Where possible, information such as
+ the document title, authors, and relevant identifiers (DOI and PMID) is included. The key for each reference can be
+ found in the sentence where the reference is used, together with the start and end offsets of where in the sentence
+ the reference occurs.
+
+ ***metadata*** Metadata includes additional information about the article, such as the list of authors and relevant
+ IDs (DOI and PMID), along with a number of classification schemes such as ASJC and Subject Classification.
+
+ ***author_highlights*** Author highlights are included in the corpus where the author(s) have provided them; the
+ coverage is 61% of all articles. The author highlights, consisting of 4 to 6 sentences, are provided by the author
+ with the aim of summarising the core findings and results of the article.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ English (`en`).
+
+ ## Dataset Structure
+
+ * ***title***: The author-provided title for the document. 100% coverage.
+ * ***abstract***: The author-provided abstract for the document. 99.25% coverage.
+ * ***keywords***: The author- and publisher-provided keywords for the document. 100% coverage.
+ * ***asjc***: The disciplines for the document, as represented by 334 ASJC (All Science Journal Classification) codes. 100% coverage.
+ * ***subjareas***: The Subject Classification for the document, as represented by 27 ASJC top-level subject classifications. 100% coverage.
+ * ***body_text***: The full text for the document. 100% coverage.
+ * ***author_highlights***: The author-provided highlights for the document. 61.31% coverage.
+
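+ A minimal usage sketch with the `datasets` library (the Hub path below assumes this repository is published as `orieg/elsevier-oa-cc-by`):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the default "mendeley" configuration.
+ dataset = load_dataset("orieg/elsevier-oa-cc-by", "mendeley")
+ print(dataset["train"][0]["title"])
+ ```
+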
+ ### Data Instances
+
+ The original dataset was published with the following JSON structure:
+
+ ```
+ {
+     "docId": <str>,
+     "metadata": {
+         "title": <str>,
+         "authors": [
+             {
+                 "first": <str>,
+                 "initial": <str>,
+                 "last": <str>,
+                 "email": <str>
+             },
+             ...
+         ],
+         "issn": <str>,
+         "volume": <str>,
+         "firstpage": <str>,
+         "lastpage": <str>,
+         "pub_year": <int>,
+         "doi": <str>,
+         "pmid": <str>,
+         "openaccess": "Full",
+         "subjareas": [<str>],
+         "keywords": [<str>],
+         "asjc": [<int>]
+     },
+     "abstract": [
+         {
+             "sentence": <str>,
+             "startOffset": <int>,
+             "endOffset": <int>
+         },
+         ...
+     ],
+     "bib_entries": {
+         "BIBREF0": {
+             "title": <str>,
+             "authors": [
+                 {
+                     "last": <str>,
+                     "initial": <str>,
+                     "first": <str>
+                 },
+                 ...
+             ],
+             "issn": <str>,
+             "volume": <str>,
+             "firstpage": <str>,
+             "lastpage": <str>,
+             "pub_year": <int>,
+             "doi": <str>,
+             "pmid": <str>
+         },
+         ...
+     },
+     "body_text": [
+         {
+             "sentence": <str>,
+             "secId": <str>,
+             "startOffset": <int>,
+             "endOffset": <int>,
+             "title": <str>,
+             "refoffsets": {
+                 <str>: {
+                     "endOffset": <int>,
+                     "startOffset": <int>
+                 }
+             },
+             "parents": [
+                 {
+                     "id": <str>,
+                     "title": <str>
+                 },
+                 ...
+             ]
+         },
+         ...
+     ]
+ }
+ ```
+
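+ The "refoffsets" spans link each sentence to its bibliography entries. A sketch of resolving them, assuming `paper` is one parsed document and that offsets are relative to the sentence as described above:
+
+ ```python
+ for sent in paper["body_text"]:
+     for ref_id, span in sent.get("refoffsets", {}).items():
+         entry = paper["bib_entries"].get(ref_id, {})
+         cited_text = sent["sentence"][span["startOffset"]:span["endOffset"]]
+         print(cited_text, "->", entry.get("title"))
+ ```
+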
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ The loading script in this repository partitions the corpus into train, test, and validation splits of roughly 80%/10%/10% (32,072 / 4,010 / 4,009 documents).
+
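+ The split sizes can be checked after loading (assuming the Hub path used in the sketch above):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("orieg/elsevier-oa-cc-by", "mendeley")
+ print({split: ds[split].num_rows for split in ds})
+ ```
+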
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ See `3.1 Data Sampling` in the [original paper](https://doi.org/10.48550/arXiv.2008.00774).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ Date the data was collected: 2020-06-25T11:00:00.000Z
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
+
+ ### Citation Information
+
+ ```
+ @article{Kershaw2020ElsevierOC,
+     title = {Elsevier OA CC-By Corpus},
+     author = {Daniel James Kershaw and R. Koeling},
+     journal = {ArXiv},
+     year = {2020},
+     volume = {abs/2008.00774},
+     doi = {10.48550/arXiv.2008.00774},
+     url = {https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs},
+     keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
+     abstract = {We introduce the Elsevier OA CC-BY corpus. This is the first open
+                 corpus of Scientific Research papers which has a representative sample
+                 from across scientific disciplines. This corpus not only includes the
+                 full text of the article, but also the metadata of the documents,
+                 along with the bibliographic information for each reference.}
+ }
+ ```
+
+ ```
+ @dataset{Kershaw2020ElsevierDataset,
+     doi = {10.17632/zm33cdndxs.3},
+     url = {https://data.mendeley.com/datasets/zm33cdndxs/3},
+     author = {Daniel Kershaw and Rob Koeling},
+     keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
+     title = {Elsevier OA CC-BY Corpus},
+     publisher = {Mendeley},
+     year = {2020},
+     month = {sep}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@orieg](https://github.com/orieg) for adding this dataset.
elsevier-oa-cc-by.py ADDED
@@ -0,0 +1,160 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """Elsevier OA CC-By Corpus Dataset."""
+
+
+ import glob
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """
+ @article{Kershaw2020ElsevierOC,
+     title = {Elsevier OA CC-By Corpus},
+     author = {Daniel James Kershaw and R. Koeling},
+     journal = {ArXiv},
+     year = {2020},
+     volume = {abs/2008.00774},
+     doi = {10.48550/arXiv.2008.00774},
+     url = {https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs},
+     keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
+     abstract = {We introduce the Elsevier OA CC-BY corpus. This is the first open
+                 corpus of Scientific Research papers which has a representative sample
+                 from across scientific disciplines. This corpus not only includes the
+                 full text of the article, but also the metadata of the documents,
+                 along with the bibliographic information for each reference.}
+ }
+ """
+
+ _DESCRIPTION = """
+ Elsevier OA CC-By is a corpus of 40,091 open access (OA) CC-BY articles
+ from across Elsevier's journals. It includes the full text of each article, the
+ metadata, the bibliographic information for each reference, and author highlights.
+ """
+
+ _HOMEPAGE = "https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs/3"
+
+ _LICENSE = "CC-BY-4.0"
+
+ _URLS = {
+     "mendeley": "https://data.mendeley.com/public-files/datasets/zm33cdndxs/files/4e03ae48-04a7-44d4-b103-ce73e548679c/file_downloaded"
+ }
+
+
+ class ElsevierOaCcBy(datasets.GeneratorBasedBuilder):
+     """Elsevier OA CC-By Dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="mendeley", version=VERSION, description="Official Mendeley dataset"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "mendeley"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "title": datasets.Value("string"),
+                 "abstract": datasets.Value("string"),
+                 "subjareas": datasets.Sequence(datasets.Value("string")),
+                 "keywords": datasets.Sequence(datasets.Value("string")),
+                 "asjc": datasets.Sequence(datasets.Value("string")),
+                 "body_text": datasets.Sequence(datasets.Value("string")),
+                 "author_highlights": datasets.Sequence(datasets.Value("string")),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # If there's a common (input, target) tuple from the features, uncomment the
+             # supervised_keys line below and specify them. They'll be used if
+             # as_supervised=True in builder.as_dataset.
+             # supervised_keys=("sentence", "label"),
+             # Homepage of the dataset for documentation.
+             homepage=_HOMEPAGE,
+             # License for the dataset if available.
+             license=_LICENSE,
+             # Citation for the dataset.
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # dl_manager is a datasets.download.DownloadManager that can be used to download
+         # and extract URLs. It accepts any type or nested list/dict and gives back the
+         # same structure with the URLs replaced by paths to local files. By default,
+         # archives are extracted and a path to the cached extraction folder is returned
+         # instead of the archive.
+         urls = _URLS[self.config.name]
+         data_dir = dl_manager.download_and_extract(urls)
+
+         corpus_path = os.path.join(data_dir, "json")
+
+         # The corpus is partitioned into roughly 80%/10%/10% train/test/validation splits.
+         # Range ends are exclusive (Python slice semantics), so consecutive ranges share a
+         # boundary index; otherwise one document would be skipped at each boundary.
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={
+                     "filepath": corpus_path,
+                     "split": "train",
+                     "split_range": [0, 32072],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": corpus_path,
+                     "split": "test",
+                     "split_range": [32072, 36082],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": corpus_path,
+                     "split": "validation",
+                     "split_range": [36082, 40091],
+                 },
+             ),
+         ]
+
+     # Method parameters are unpacked from `gen_kwargs` as given in `_split_generators`.
+     def _generate_examples(self, filepath, split, split_range):
+         # The example key is for legacy reasons (tfds) and is not important in itself,
+         # but it must be unique for each example; the docId satisfies this.
+         # Sort the file list so the split ranges are deterministic across platforms.
+         json_files = sorted(glob.glob(f"{filepath}/*.json"))
+         for doc in json_files[split_range[0]:split_range[1]]:
+             with open(doc, encoding="utf-8") as f:
+                 paper = json.load(f)
+             # Yield examples as (key, example) tuples.
+             yield paper["docId"], {
+                 "title": paper["metadata"]["title"],
+                 "subjareas": paper["metadata"].get("subjareas", []),
+                 "keywords": paper["metadata"].get("keywords", []),
+                 # ASJC codes are integers in the source JSON; cast them to strings to
+                 # match the feature type declared in `_info`.
+                 "asjc": [str(code) for code in paper["metadata"].get("asjc", [])],
+                 # The raw abstract is a list of sentence objects; join the sentences into
+                 # a single string to match the declared `string` feature type.
+                 "abstract": " ".join(
+                     s["sentence"]
+                     for s in sorted(paper.get("abstract", []), key=lambda i: i["startOffset"])
+                 ),
+                 "body_text": [
+                     s["sentence"]
+                     for s in sorted(paper["body_text"], key=lambda i: (i["secId"], i["startOffset"]))
+                 ],
+                 "author_highlights": [
+                     s["sentence"]
+                     for s in sorted(paper.get("author_highlights", []), key=lambda i: i["startOffset"])
+                 ],
+             }
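+
+
+ # Example usage (illustrative; assumes the `datasets` library is installed):
+ #
+ #     from datasets import load_dataset
+ #     ds = load_dataset("path/to/elsevier-oa-cc-by.py", "mendeley", split="train")
+ #     print(ds[0]["title"])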