---
dataset_info:
  features:
    - name: query
      dtype: string
    - name: image_filename
      dtype: string
    - name: image
      dtype: image
    - name: answer
      dtype: string
    - name: answer_type
      dtype: string
    - name: page
      dtype: string
    - name: model
      dtype: string
    - name: prompt
      dtype: string
    - name: source
      dtype: string
  splits:
    - name: test
      num_bytes: 774039186.125
      num_examples: 1663
  download_size: 136066416
  dataset_size: 774039186.125
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: cc-by-4.0
task_categories:
  - visual-question-answering
  - question-answering
language:
  - en
tags:
  - Document Retrieval
  - VisualQA
  - QA
size_categories:
  - 1K<n<10K
---

## Dataset Description

This is the test set taken from the TAT-DQA dataset, a large-scale Document VQA dataset constructed from publicly available real-world financial reports. It focuses on rich tabular and textual content requiring numerical reasoning. Questions and answers were manually annotated by human experts in finance.

An example of the data can be browsed in the dataset viewer.

## Data Curation

Unlike other 'academic' datasets, we kept the full test set, as this dataset closely represents our use case of document retrieval. There are 1,663 image-query pairs.

## Load the dataset

```python
from datasets import load_dataset

ds = load_dataset("vidore/tatdqa_test", split="test")
```
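As a quick sanity check, here is a minimal sketch (field names are taken from the metadata above; the count should match the 1,663 image-query pairs mentioned in Data Curation):

```python
# Minimal sketch: inspect the loaded test split.
# Field names (query, answer, answer_type, image, image_filename) follow the metadata above.
print(len(ds))  # expected: 1663 image-query pairs

example = ds[0]
print(example["query"])           # the question text
print(example["answer"])          # the gold answer string
print(example["answer_type"])     # answer type annotation from the original dataset
print(example["image_filename"])  # filename of the source page image
print(example["image"].size)      # the page image decoded as a PIL image
```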

## Dataset Structure

Here is an example of a dataset instance structure:

```yaml
features:
  - name: questionId
    dtype: string
  - name: query
    dtype: string
  - name: question_types
    dtype: 'null'
  - name: image
    dtype: image
  - name: docId
    dtype: int64
  - name: image_filename
    dtype: string
  - name: page
    dtype: string
  - name: answer
    dtype: 'null'
  - name: data_split
    dtype: string
  - name: source
    dtype: string
```
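Since several questions may refer to the same page, here is a hedged sketch (assuming the `query` and `image_filename` fields listed on this card) for grouping questions by page image, as one might do when preparing a retrieval corpus:

```python
from collections import defaultdict

# Group queries by the page image they refer to.
# Field names (image_filename, query) are assumed from the schemas on this card.
# Column access (ds["..."]) avoids decoding the page images themselves.
queries_per_page = defaultdict(list)
for filename, query in zip(ds["image_filename"], ds["query"]):
    queries_per_page[filename].append(query)

print(len(queries_per_page))  # number of unique page images
```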

## Citation Information

If you use this dataset in your research, please cite the original dataset as follows:

```bibtex
@inproceedings{zhu-etal-2021-tat,
  title = "{TAT}-{QA}: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance",
  author = "Zhu, Fengbin and
    Lei, Wenqiang and
    Huang, Youcheng and
    Wang, Chao and
    Zhang, Shuo and
    Lv, Jiancheng and
    Feng, Fuli and
    Chua, Tat-Seng",
  booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
  month = aug,
  year = "2021",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2021.acl-long.254",
  doi = "10.18653/v1/2021.acl-long.254",
  pages = "3277--3287"
}

@inproceedings{zhu2022towards,
  title = {Towards complex document understanding by discrete reasoning},
  author = {Zhu, Fengbin and Lei, Wenqiang and Feng, Fuli and Wang, Chao and Zhang, Haozhou and Chua, Tat-Seng},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
  pages = {4857--4866},
  year = {2022}
}
```