---
license: cc-by-nc-4.0
task_categories:
- text-to-image
- image-to-text
- text-retrieval
language:
- zh
- en
- ja
- ru
tags:
- image-text retrieval
- noisy correspondence learning
- NCL-specific benchmark
- realistic
- industry
- mobile user interface
- image-text matching
- image
- text
- npy
- txt
- json
size_categories:
- 100K<n<1M
---

# PC2-NoiseofWeb

This repo releases the dataset introduced in our accepted paper:

> **PC2: Pseudo-Classification Based Pseudo-Captioning for Noisy Correspondence Learning in Cross-Modal Retrieval**  
> **Authors**: **[Yue Duan](https://njuyued.github.io/)**, Zhangxuan Gu, Zhenzhe Ying, Lei Qi, Changhua Meng and Yinghuan Shi
  
- **Quick links:** [[Code](https://github.com/alipay/PC2-NoiseofWeb) | [PDF](https://arxiv.org/pdf/2408.01349)/[Abs](https://arxiv.org/abs/2408.01349)-arXiv | PDF/Abs-Published (coming soon) | Slides/Video (coming soon) | [Article explanation - Zhihu (in Chinese)](https://zhuanlan.zhihu.com/p/711149124) | [Video explanation - bilibili (in Chinese)](https://www.bilibili.com/video/BV1zppMezEQe/) | [Dataset download](https://huggingface.co/datasets/NJUyued/NoW/resolve/main/NoW.zip?download=true)]

- 📰 **Latest news:**

   - We provide a **video presentation (in Chinese)** of this work on [bilibili](https://www.bilibili.com/video/BV1zppMezEQe/).
   - We write a **detailed explanation (in Chinese)** of this work on [Zhihu](https://zhuanlan.zhihu.com/p/711149124).
   - Our paper has been accepted by the **ACM International Conference on Multimedia (ACM MM) 2024** 🎉🎉. Thanks to all users.

## Data Collection
We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K image-text pairs** consisting of **website images** and **multilingual website meta-descriptions** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**). NoW has two main characteristics: *it requires no human annotation, and its noisy pairs are naturally captured*. The source images of NoW are screenshots taken when accessing web pages on a mobile user interface (MUI) at 720 × 1280 resolution, and the captions are parsed from the meta-description field of the HTML source code. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), each image in every dataset was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, each encoded as a 2048-dimensional feature. Following NCR, we therefore release precomputed features instead of raw images for fair comparison. However, we cannot simply use a detector such as Faster R-CNN to extract our image features, since it is trained on real-world animals and objects from MS-COCO. To tackle this, we adopt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as the detection model, since it is trained on MUI data, and extract 768-dimensional features for the top 36 detected objects in each image. Because the collection process is automated and not human-curated, the noise in NoW is highly authentic and intrinsic. **The estimated noise ratio of this dataset is nearly 70%.**

<div align=center>

<img width="750px" src="NoW.jpg"> 
 
</div>
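As a quick sanity check after downloading, the released visual features can be inspected directly. Below is a minimal sketch, assuming the archive has been extracted into a local `h5100k_precomp/` directory; the array shape is our expectation from the description above (36 objects × 768 dimensions per image), not a documented output:

```python
import numpy as np

# Each *_ims.npy stores the precomputed APT features:
# 36 detected objects x 768 dimensions per image.
train_ims = np.load("h5100k_precomp/train_ims.npy")

print(train_ims.shape)  # expected: (98000, 36, 768) for the training split
print(train_ims.dtype)
```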

## Data Structure

```
|-- h5100k_precomp
|   |-- dev_caps_bpe.txt
|   |-- dev_caps_bert.txt
|   |-- dev_caps_jieba.txt
|   |-- dev_ids.txt
|   |-- dev_ims.npy
|   |-- test_caps_bpe.txt
|   |-- test_caps_bert.txt
|   |-- test_caps_jieba.txt
|   |-- test_ids.txt
|   |-- test_ims.npy
|   |-- train_caps_bpe.txt
|   |-- train_caps_bert.txt
|   |-- train_caps_jieba.txt
|   |-- train_ids.txt
|   |-- train_ims.npy
|-- vocab
|   |-- now100k_precomp_vocab_bert.json
|   |-- now100k_precomp_vocab_bpe.json
|   |-- now100k_precomp_vocab_jieba.json
```

Please note that, since our raw data contains some sensitive business data, we only provide the **encoded image features** (\*_ims.npy) and the **token IDs of the tokenized texts** (\*_caps_\*.txt). For tokenization, we use [Tokenizers](https://github.com/huggingface/tokenizers) with [BPE](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE) to produce \*_caps_bpe.txt, [BertTokenizer](https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#berttokenizer) with the [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) pre-trained model to produce \*_caps_bert.txt, and [Jieba](https://github.com/fxsjy/jieba) to produce \*_caps_jieba.txt. **The vocabulary size of the BPE tokenizer is 10,000, while the Bert and Jieba tokenizers have vocabulary sizes of 32,702 and 56,271, respectively** (recorded in now100k_precomp_vocab\_\*.json). \*_ids.txt records the data indices in the original 500K dataset, which we may process and release in the future.
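The usage snippet below expects a callable `vocab` object that maps special tokens such as `<start>` and `<end>` to IDs. Here is a minimal sketch of such a wrapper; it assumes the released JSON files store a plain token-to-index mapping, which is not documented here, so adapt the loading logic to the actual schema if it differs:

```python
import json

class Vocabulary:
    """Minimal callable wrapper around a now100k_precomp_vocab_*.json file.

    Assumes the JSON stores a {token: index} mapping; if the released
    files use a different schema (e.g. a nested "word2idx" dict),
    adjust the constructor accordingly.
    """

    def __init__(self, vocab_path):
        with open(vocab_path, encoding="utf-8") as f:
            self.word2idx = json.load(f)

    def __call__(self, token):
        return self.word2idx[token]

    def __len__(self):
        return len(self.word2idx)

# e.g. vocab = Vocabulary("vocab/now100k_precomp_vocab_bert.json")
```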

## Usage

```
import os

import numpy as np

# data_path: path to the h5100k_precomp directory
# data_split: one of {train, dev, test}
# tokenizer: one of {bpe, bert, jieba}
# vocab: a callable mapping tokens to IDs; the vocabulary sizes of
#        {bpe, bert, jieba} are {10000, 32702, 56271}

def load_split(data_path, data_split, tokenizer, vocab):
    # captions: each line stores the comma-separated token IDs of one caption
    captions = []
    with open(os.path.join(data_path, "{}_caps_{}.txt".format(data_split, tokenizer))) as f:
        for line in f:
            captions.append(line.strip())

    captions_token = []
    for index in range(len(captions)):
        tokens = captions[index].split(',')
        caption = [vocab("<start>")]
        caption.extend(int(token) for token in tokens if token)
        caption.append(vocab("<end>"))
        captions_token.append(caption)

    # images: precomputed APT features, one (36, 768) array per image
    images = np.load(os.path.join(data_path, "%s_ims.npy" % data_split))

    return captions_token, images
```
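A brief usage sketch of the loader above (the local paths and the `Vocabulary` wrapper from the previous section are illustrative assumptions; the actual data pipeline lives in PC2's repo):

```python
vocab = Vocabulary("vocab/now100k_precomp_vocab_bert.json")
captions_token, images = load_split("h5100k_precomp", "train", "bert", vocab)

print(len(captions_token), images.shape)  # expected: 98000 and (98000, 36, 768)
```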
Additionally, you can search for code snippets containing the string `now100k_precomp` in `co_train.py`, `data.py`, `evaluation.py`, and `run.py` in [PC2's repo](https://github.com/alipay/PC2-NoiseofWeb) and refer to them when adapting the NoW dataset to your own code.