---
license: cc-by-nc-4.0
task_categories:
- text-to-image
- image-to-text
- text-retrieval
language:
- zh
- en
- ja
- ru
tags:
- image-text retrieval
- noisy correspondence learning
- NCL-specific benchmark
- realistic
- industry
- mobile user interface
- image-text matching
- image
- text
- npy
- txt
- json
size_categories:
- 100K<n<1M
---

> ***PC2: Pseudo-Classification Based Pseudo-Captioning for Noisy Correspondence Learning in Cross-Modal Retrieval***
>
> ***Authors**: Yue Duan, Zhangxuan Gu, Zhenzhe Ying, Lei Qi, Changhua Meng and Yinghuan Shi*

Quick links: [[Repo](https://github.com/alipay/PC2-NoiseofWeb) | [arXiv (coming soon)]() | [Published paper (coming soon)]() | [Poster (coming soon)]() | [Zhihu (coming soon)]() | [Dataset download](https://huggingface.co/datasets/NJUyued/NoW/resolve/main/NoW.zip?download=true)]

## Data Collection

We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K** cross-modal pairs consisting of **website images** and **multilingual website meta-descriptions** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**). NoW has two main characteristics: *it requires no human annotation, and its noisy pairs are captured naturally*. The source images of NoW are obtained by taking screenshots of web pages rendered on a mobile user interface (MUI) at 720 $\times$ 1280 resolution, and we parse the meta-description field of the HTML source code as the captions. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), each image in every dataset was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, each encoded as a 2048-dimensional feature. Thus, following NCR, we release the extracted features instead of raw images for a fair comparison. However, we cannot simply use a detector such as Faster R-CNN to extract image features, since it is trained on real-world animals and objects from MS-COCO. To tackle this, we adopt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as the detection model, since it is trained on MUI data. We then extract 768-dimensional features for the top 36 detected objects in each image. Because the data collection process is automated and involves no human curation, the noise in NoW is highly authentic and intrinsic. **The estimated noise ratio of this dataset is nearly 70%**.
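As a quick sanity check, the released image features can be inspected with NumPy alone. The sketch below assumes the `*_ims.npy` layout described in the Data Structure section (one 36 $\times$ 768 feature matrix per image); the paths and expected shapes are illustrative, not guaranteed.

```python
import numpy as np

# Illustrative path; adjust to wherever NoW.zip was extracted.
train_ims = np.load("h5100k_precomp/train_ims.npy")

# Each image is assumed to be represented by its top 36 detected MUI
# objects, each encoded as a 768-dimensional APT feature vector.
print(train_ims.shape)  # expected: (98000, 36, 768) if one matrix per image
print(train_ims.dtype)

first_image = train_ims[0]  # feature matrix of the first training image
print(first_image.shape)    # expected: (36, 768)
```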
## Data Structure

```
|-- h5100k_precomp
|   |-- dev_caps_bpe.txt
|   |-- dev_caps_bert.txt
|   |-- dev_caps_jieba.txt
|   |-- dev_ids.txt
|   |-- dev_ims.npy
|   |-- test_caps_bpe.txt
|   |-- test_caps_bert.txt
|   |-- test_caps_jieba.txt
|   |-- test_ids.txt
|   |-- test_ims.npy
|   |-- train_caps_bpe.txt
|   |-- train_caps_bert.txt
|   |-- train_caps_jieba.txt
|   |-- train_ids.txt
|   |-- train_ims.npy
|-- vocab
|   |-- now100k_precomp_vocab_bert.json
|   |-- now100k_precomp_vocab_bpe.json
|   |-- now100k_precomp_vocab_jieba.json
```

Please note that since our raw data contains some sensitive business data, we only provide the **encoded image features** (\*_ims.npy) and the **token IDs of the tokenized text** (\*_caps_\*.txt). For tokenization, we use [Tokenizers](https://github.com/huggingface/tokenizers) with [BPE](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE) to produce \*_caps_bpe.txt, [BertTokenizer](https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#berttokenizer) with the [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) pre-trained model to produce \*_caps_bert.txt, and [Jieba](https://github.com/fxsjy/jieba) to produce \*_caps_jieba.txt. **The vocabulary size of the BPE tokenizer is 10,000, while the Bert and Jieba tokenizers have vocabulary sizes of 32,702 and 56,271, respectively** (recorded in now100k_precomp_vocab\_\*.json). \*_ids.txt records the data indices in the original 500K dataset. In the future, we may process and make the original dataset public.
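A minimal sketch of how one might read these files is shown below. It assumes that each line of a \*_caps_\*.txt file holds the space-separated token IDs of one caption, aligned by line index with the corresponding \*_ims.npy, and that the vocabulary JSON is a flat token-to-ID mapping; the actual schema may differ, and all paths are illustrative.

```python
import json
import numpy as np

DATA_DIR = "h5100k_precomp"                       # illustrative path
VOCAB_PATH = "vocab/now100k_precomp_vocab_bpe.json"

# Load the BPE vocabulary (assumed token -> id); invert it so token
# IDs can be mapped back to tokens for inspection.
with open(VOCAB_PATH, encoding="utf-8") as f:
    vocab = json.load(f)
id_to_token = {idx: tok for tok, idx in vocab.items()}

# Each line is assumed to contain the token IDs of one caption.
with open(f"{DATA_DIR}/dev_caps_bpe.txt", encoding="utf-8") as f:
    captions = [list(map(int, line.split())) for line in f if line.strip()]

# Image features for the same split, paired with captions by index.
dev_ims = np.load(f"{DATA_DIR}/dev_ims.npy")

print(len(captions), dev_ims.shape)               # 1,000 validation pairs
print([id_to_token.get(i, "<unk>") for i in captions[0][:10]])
```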