NJUyued committed
Commit c9e0a9f
1 Parent(s): 70a3349

Update README.md

Files changed (1):
  1. README.md +5 -7
README.md CHANGED
@@ -24,11 +24,6 @@ tags:
  - json
  size_categories:
  - 100K<n<1M
- homepage: projectnumina.ai
- repository: https://github.com/alipay/PC2-NoiseofWeb
- point-of-contact: Yue Duan
- size-of-downloaded-dataset-files: 3.53 GB
- number-of-rows: 100,000
  ---

  # PC2-NoiseofWeb
@@ -38,7 +33,7 @@ This repo releases data introduced in our paper
  > ***PC2: Pseudo-Classification Based Pseudo-Captioning for Noisy Correspondence Learning in Cross-Modal Retrieval***
  > ***Authors**: Yue Duan, Zhangxuan Gu, Zhenzhe Ying, Lei Qi, Changhua Meng and Yinghuan Shi*

- Quick links: [[arXiv (coming soon)]() | [Published paper (coming soon)]() | [Poster (coming soon)]() | [Zhihu (coming soon)]() | [Code download]() | [Dataset download](https://huggingface.co/datasets/NJUyued/NoW/resolve/main/NoW.zip?download=true)]
+ Quick links: [[Repo](https://github.com/alipay/PC2-NoiseofWeb) | [arXiv (coming soon)]() | [Published paper (coming soon)]() | [Poster (coming soon)]() | [Zhihu (coming soon)]() | [Dataset download](https://huggingface.co/datasets/NJUyued/NoW/resolve/main/NoW.zip?download=true)]

  ## Data Collection
  We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K** cross-modal pairs consisting of **website images** and **multilingual website meta-descriptions** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**). NoW has two main characteristics: *it requires no human annotation, and its noisy pairs are captured naturally*. The source images of NoW are obtained by taking screenshots of web pages rendered on a mobile user interface (MUI) at 720 $\times$ 1280 resolution, and we parse the meta-description field of the HTML source code as the captions. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), each image in every dataset was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, each encoded as a 2048-dimensional feature. Thus, following NCR, we release precomputed features instead of raw images for fair comparison. However, we cannot simply use a detector such as Faster R-CNN to extract image features, since it is trained on real-world animals and objects in MS-COCO. To tackle this, we adapt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as our detection model, since it is trained on MUI data, and capture 768-dimensional features for the top 36 objects in each image. Because the data collection process is automated and not human-curated, the noise in NoW is highly authentic and intrinsic. **The estimated noise ratio of this dataset is nearly 70%**.
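As a quick sanity check after downloading, here is a minimal sketch for loading the released image features, assuming the `h5100k_precomp/` layout shown in the next hunk and that each `*_ims.npy` stores one 36 × 768 feature matrix per pair (inferred from the 36 objects and 768-dimensional APT features described above, not a documented guarantee):

```python
# Minimal sketch: load the precomputed APT image features and check shapes.
# The path and the (N, 36, 768) layout are assumptions inferred from the
# README description; verify them against the actual release.
import numpy as np

train_feats = np.load("h5100k_precomp/train_ims.npy")
print(train_feats.shape)  # expected: (98000, 36, 768) for the training split
print(train_feats.dtype)
```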
@@ -56,14 +51,17 @@ We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100
  |-- h5100k_precomp
  | |-- dev_caps_bpe.txt
  | |-- dev_caps_bert.txt
+ | |-- dev_caps_jieba.txt
  | |-- dev_ids.txt
  | |-- dev_ims.npy
  | |-- test_caps_bpe.txt
  | |-- test_caps_bert.txt
+ | |-- test_caps_jieba.txt
  | |-- test_ids.txt
  | |-- test_ims.npy
  | |-- train_caps_bpe.txt
  | |-- train_caps_bert.txt
+ | |-- train_caps_jieba.txt
  | |-- train_ids.txt
  | |-- train_ims.npy
  |-- vocab
@@ -73,4 +71,4 @@ We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100

  ```

- Please note that, since our raw data contains some sensitive business data, we only provide the **encoded image features** (\*_ims.npy) and the **token ids of the tokenized text**. For tokenization, we use [Tokenizers](https://github.com/huggingface/tokenizers) with [BPE](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE) to produce \*_caps_bpe.txt and [BertTokenizer](https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#berttokenizer) with the [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) pre-trained model to produce \*_caps_bert.txt. **The vocabulary size of the BPE tokenizer is 10,000 and that of BertTokenizer is 32,702**. \*_ids.txt records the indices of the data in the original 500k dataset. In the future, we may process and release the original dataset.
+ Please note that, since our raw data contains some sensitive business data, we only provide the **encoded image features** (\*_ims.npy) and the **token ids of the tokenized text**. For tokenization, we provide [Tokenizers](https://github.com/huggingface/tokenizers) with [BPE](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE) to produce \*_caps_bpe.txt, [BertTokenizer](https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#berttokenizer) with the [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) pre-trained model to produce \*_caps_bert.txt, and [Jieba](https://github.com/fxsjy/jieba) to produce \*_caps_jieba.txt. **The vocabulary size of the BPE tokenizer is 10,000, while BertTokenizer and JiebaTokenizer have vocabulary sizes of 32,702 and 56,271, respectively** (recorded in now100k_precomp_vocab\_\*.txt). \*_ids.txt records the indices of the data in the original 500k dataset. In the future, we may process and release the original dataset.
 
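To turn the released token ids back into readable text, here is a minimal sketch using the Hugging Face `BertTokenizer`. It assumes each line of a `*_caps_bert.txt` file holds one caption as space-separated token ids; since the reported 32,702-token vocabulary is smaller than the stock `bert-base-multilingual-cased` vocabulary, the ids may be re-indexed via now100k_precomp_vocab\_\*.txt, in which case an extra mapping step would be needed:

```python
# Minimal sketch: decode the first dev caption from BERT token ids to text.
# The file format (one caption per line, space-separated ids) and direct
# compatibility with the stock multilingual vocabulary are assumptions.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

with open("h5100k_precomp/dev_caps_bert.txt", encoding="utf-8") as f:
    ids = [int(tok) for tok in f.readline().split()]

# skip_special_tokens drops [CLS]/[SEP]/[PAD] if they were stored
print(tokenizer.decode(ids, skip_special_tokens=True))
```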