---
dataset_info:
  features:
  - name: query
    dtype: string
  - name: image_filename
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: answer_type
    dtype: string
  - name: page
    dtype: string
  - name: model
    dtype: string
  - name: prompt
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: test
    num_bytes: 774039186.125
    num_examples: 1663
  download_size: 136066416
  dataset_size: 774039186.125
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-4.0
task_categories:
- visual-question-answering
- question-answering
language:
- en
tags:
- Document Retrieval
- VisualQA
- QA
size_categories:
- 1K<n<10K
---

## Dataset Description

This is the test set of the [TAT-DQA dataset](https://nextplusplus.github.io/TAT-DQA/), a large-scale Document VQA dataset constructed from publicly available real-world financial reports. It focuses on rich tabular and textual content that requires numerical reasoning. Questions and answers were manually annotated by human experts in finance.

An example of the data can be seen in the dataset viewer.

### Data Curation
Unlike for other, more 'academic' datasets, we kept the full test set, as it closely represents our target use case of document retrieval. The set contains 1,663 image-query pairs.

### Load the dataset 

```python
from datasets import load_dataset

# Load the test split (the only split) from the Hugging Face Hub
ds = load_dataset("vidore/tatdqa_test", split="test")
```
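
A single instance can then be inspected directly. This is a minimal sketch: the field names below (`query`, `answer`, `image`) follow the features declared in the metadata above, and the `image` feature is decoded by `datasets` as a `PIL.Image`.

```python
# Inspect one of the 1,663 image-query pairs
example = ds[0]

print(example["query"])              # the question asked about the document page
print(example["answer"])             # the annotated answer string
example["image"].save("page_0.png")  # the document page rendered as an image

print(len(ds))  # 1663 examples in the test split
```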

### Dataset Structure

Here is the structure of a dataset instance:

```yaml
features:
  - name: questionId
    dtype: string
  - name: query
    dtype: string
  - name: question_types
    dtype: 'null'
  - name: image
    dtype: image
  - name: docId
    dtype: int64
  - name: image_filename
    dtype: string
  - name: page
    dtype: string
  - name: answer
    dtype: 'null'
  - name: data_split
    dtype: string
  - name: source
    dtype: string
```
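
For a retrieval-style use of these pairs, a minimal sketch is given below. It relies only on the `query`, `image`, and `image_filename` fields listed above; `score_fn` is a hypothetical placeholder for your own query-page similarity model and is not part of the dataset.

```python
from datasets import load_dataset

ds = load_dataset("vidore/tatdqa_test", split="test")

# Candidate corpus: every page image, keyed by filename so each page is indexed once.
# Note: this decodes all page images into memory.
corpus = {ex["image_filename"]: ex["image"] for ex in ds}

# Relevance labels: each query should retrieve the page it was annotated on
qrels = [(ex["query"], ex["image_filename"]) for ex in ds]

def top_k_accuracy(score_fn, k=5):
    """Fraction of queries whose gold page appears in the top-k ranked pages.

    `score_fn(query: str, image) -> float` is a placeholder for a retriever.
    """
    hits = 0
    for query, gold in qrels:
        ranked = sorted(corpus, key=lambda name: score_fn(query, corpus[name]), reverse=True)
        hits += gold in ranked[:k]
    return hits / len(qrels)
```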

## Citation Information

If you use this dataset in your research, please cite the original dataset as follows:

```latex
@inproceedings{zhu-etal-2021-tat,
  title = "{TAT}-{QA}: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance",
  author = "Zhu, Fengbin and
    Lei, Wenqiang and
    Huang, Youcheng and
    Wang, Chao and
    Zhang, Shuo and
    Lv, Jiancheng and
    Feng, Fuli and
    Chua, Tat-Seng",
  booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
  month = aug,
  year = "2021",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2021.acl-long.254",
  doi = "10.18653/v1/2021.acl-long.254",
  pages = "3277--3287"
}

@inproceedings{zhu2022towards,
  title = {Towards complex document understanding by discrete reasoning},
  author = {Zhu, Fengbin and Lei, Wenqiang and Feng, Fuli and Wang, Chao and Zhang, Haozhou and Chua, Tat-Seng},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
  pages = {4857--4866},
  year = {2022}
}
```