Molbap HF staff committed on
Commit 1198455
1 Parent(s): 74b08db

Update README.md

Files changed (1)
  1. README.md +86 -8
README.md CHANGED

---
license: other
license_name: pdfa-eng-train

### Dataset Summary
The PDFA dataset is a document dataset filtered from the SafeDocs corpus, aka CC-MAIN-2021-31-PDF-UNTRUNCATED. The original corpus was built for comprehensive file format analysis; this subset serves a different purpose, as the focus is on making the dataset machine-learning-ready.

### Usage
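
A minimal loading sketch with the `datasets` library in streaming mode; the `pixparse/pdfa-eng-wds` repository id below is an assumption, so adjust it to the actual dataset name:

```python
from datasets import load_dataset

# Streaming avoids downloading all shards up front; the repository id is
# assumed here for illustration.
dataset = load_dataset("pixparse/pdfa-eng-wds", streaming=True)

# Each sample pairs a pdf with its json annotation (OCR words/lines and metadata).
print(next(iter(dataset["train"])).keys())
```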
 
 
Further, a metadata file `_pdfa-english-train-info-minimal.json` contains the list of samples per shard, with the same basename and `.json` or `.pdf` extension, as well as the count of files per shard.

#### Words and lines document metadata

Initially, we started from the readily available ~11 TB of zip files in the corpus's initial [data release](https://digitalcorpora.org/corpora/file-corpora/cc-main-2021-31-pdf-untruncated/).
From the digital pdf files, we extracted the words, their bounding boxes, and the image bounding boxes available in each pdf. This information is then reshaped into lines organized in reading order, under the key `lines`. We keep the non-reshaped word and bounding box information under the `word` key, should users want to apply their own heuristic.

We obtain an approximate reading order simply by looking at the frequency peaks of the leftmost word x-coordinate. A frequency peak means that a large number of lines start from the same x position; we then keep track of the x-coordinate of each such identified column. If no peaks are found, the document is assumed to be readable as plain, single-column text.

```python
import numpy as np
import scipy.ndimage
import scipy.signal


def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=1):
    """
    Identifies the x-coordinates that best separate columns by analyzing the derivative of a histogram
    of the 'left' values (xmin) of bounding boxes.

    Args:
        page (dict): Page data with 'bbox' containing bounding boxes of words.
        min_prominence (float): The required prominence of peaks in the histogram.
        num_bins (int): Number of bins to use for the histogram.
        kernel_width (int): The width of the Gaussian kernel used for smoothing the histogram.

    Returns:
        separators (list): The x-coordinates that separate the columns, if any.
    """
    try:
        left_values = [b[0] for b in page['bbox']]
        hist, bin_edges = np.histogram(left_values, bins=num_bins)
        hist = scipy.ndimage.gaussian_filter1d(hist, kernel_width)
        # Pad both ends of the histogram so peaks at the borders can be detected.
        min_val = min(hist)
        hist = np.insert(hist, [0, len(hist)], min_val)
        bin_width = bin_edges[1] - bin_edges[0]
        bin_edges = np.insert(bin_edges, [0, len(bin_edges)], [bin_edges[0] - bin_width, bin_edges[-1] + bin_width])

        peaks, _ = scipy.signal.find_peaks(hist, prominence=min_prominence * np.max(hist))
        derivatives = np.diff(hist)

        separators = []
        if len(peaks) > 1:
            # Between two consecutive peaks, take the point of maximum positive slope
            # (the rise after the trough) as the separator between columns.
            for i in range(len(peaks) - 1):
                peak_left = peaks[i]
                peak_right = peaks[i + 1]
                max_deriv_index = np.argmax(derivatives[peak_left:peak_right]) + peak_left
                separator_x = bin_edges[max_deriv_index + 1]
                separators.append(separator_x)
    except Exception:
        separators = []
    return separators
```
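
To illustrate how these separators could be consumed, a hedged sketch that buckets word indices into columns before sorting them into reading order; the per-word `bbox` layout of `[xmin, ymin, xmax, ymax]` boxes and the helper name are assumptions, and the released `lines` key may have been built with additional heuristics:

```python
import numpy as np

def split_into_columns(page, separators):
    """Bucket word indices into columns using each word's left edge (xmin).

    `page['bbox']` is assumed to hold per-word [xmin, ymin, xmax, ymax] boxes,
    and `separators` is the output of get_columnar_separators(page).
    """
    columns = [[] for _ in range(len(separators) + 1)]
    for idx, box in enumerate(page["bbox"]):
        col = int(np.searchsorted(separators, box[0]))  # 0 = leftmost column
        columns[col].append(idx)
    return columns
```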
For each pdf document, we store statistics on the file size, the number of words (as characters separated by spaces), the number of pages, as well as the rendering time of each page at a given dpi.

#### Filtering process

File size and page rendering time are used to set thresholds for the final dataset: the goal is to remove files that are larger than 100 MB, or that take more than 500 ms to render on a modern machine, in order to optimize dataloading at scale. "Too large" or "too slow" files would burden large-scale training pipelines, so we filter them out in the current release. Finally, a full pass is done over the dataset, trying to open the pdf bytestream of each remaining document to check that it is readable.
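
A hedged sketch of such a filter; the 100 MB and 500 ms thresholds come from the description above, while `pypdfium2` as the pdf backend and the per-page timing format are assumptions:

```python
import pypdfium2 as pdfium

MAX_FILE_SIZE_BYTES = 100 * 1024 * 1024  # 100 MB
MAX_RENDER_TIME_MS = 500                 # per page, on a modern machine

def keep_document(pdf_bytes, page_render_times_ms):
    """Return True if the document passes the size, render-time and validity checks."""
    if len(pdf_bytes) > MAX_FILE_SIZE_BYTES:
        return False
    if max(page_render_times_ms) > MAX_RENDER_TIME_MS:
        return False
    try:
        # Validity pass: simply try to open the pdf bytestream.
        pdfium.PdfDocument(pdf_bytes)
    except Exception:
        return False
    return True
```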
We end up with 48 million pages kept as valid samples.

As a last step, we use an XLM-RoBERTa language classifier, specifically `papluca/xlm-roberta-base-language-detection`, on the first 512 words of the first page of each document to restrict the dataset to an English subset.
Be aware that some documents may have several languages embedded in them, and that some predictions may be inaccurate.
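
A minimal sketch of this language check with the `transformers` pipeline; keeping only samples whose top prediction is `en` is an assumption about the exact decision rule:

```python
from transformers import pipeline

lang_id = pipeline("text-classification", model="papluca/xlm-roberta-base-language-detection")

def is_english(first_page_words):
    """Predict the language from the first 512 words of the first page."""
    text = " ".join(first_page_words[:512])
    prediction = lang_id(text, truncation=True)[0]
    return prediction["label"] == "en"
```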
At the end, each document exists as a pairing of a pdf and a json file containing extensive OCR annotation as well as metadata about rendering times. The filtering and packaging in webdataset format are tailored towards multimodal machine learning at scale, specifically image-to-text tasks.
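
The pairs can also be read directly with the `webdataset` library; a sketch assuming locally downloaded shards, with an illustrative brace pattern:

```python
import json
import webdataset as wds

# Illustrative shard pattern; point it at the downloaded tar files.
shards = "pdfa-eng-train-{000000..000099}.tar"

for sample in wds.WebDataset(shards):
    pdf_bytes = sample["pdf"]                # raw pdf bytes
    annotation = json.loads(sample["json"])  # OCR words/lines and rendering metadata
    break
```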
### Dataset statistics

In this dataset, an additional filtering has been done to restrict documents to the English language, yielding 18.6 million pages over 2.16 million documents. This filtering has been done using the XLM-RoBERTa language classifier described above.

Further, the metadata for each document has been formatted in this way:

TODO add formatting

Such a formatting follows the multimodal dataset from the Industry Documents Library, `https://huggingface.co/datasets/pixparse/IDL-wds`.

### Data Splits

#### Train
* `pdfa-eng-train-*.tar`
 
Pablo Montalvo, Ross Wightman

### Disclaimer

This dataset, as a corpus, does not represent the intent and purpose of CC-MAIN-2021-31-PDF-UNTRUNCATED.
TODO add disclaimer on biases of using that dataset as a faithful representation of existing documents on the web

### Licensing Information

Data has been filtered from the original corpus. As a consequence, users should note [Common Crawl's license and terms of use](https://commoncrawl.org/terms-of-use) and the [Digital Corpora project's Terms of Use](https://digitalcorpora.org/about-digitalcorpora/terms-of-use/).

### Citation Information
??