Captioning
#2 by alfredplpl - opened

I'm going to caption the images with Florence-2.
What do you plan to do?

@alfredplpl That would be great! Honestly, I am hoping someone else will caption Megalith-10m so I don't have to 😅

Florence-2 captions looked okay in my initial test:

[screenshot: sample Florence-2 captions]
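For reference, a minimal sketch of this kind of Florence-2 captioning test, following the usage shown on the microsoft/Florence-2-large model card (the image path and generation settings are assumptions):

import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Florence-2 ships custom modeling code, hence trust_remote_code=True
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large", torch_dtype=dtype, trust_remote_code=True
).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")  # hypothetical local image
task = "<MORE_DETAILED_CAPTION>"  # Florence-2 task prompt for dense captions

inputs = processor(text=task, images=image, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
caption = processor.post_process_generation(raw, task=task, image_size=(image.width, image.height))[task]
print(caption)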

Some other captioners that might be worth trying:

  1. SD3 used CogVLM
  2. CommonCanvas used BLIP-2
  3. AuraDiffusion seems to be using MoonDream2
  4. PixArt started with LLaVA then switched to Share-Captioner

I was curious and did a quick comparison of the different captioners here: https://gist.github.com/madebyollin/ea2a80be58bd4e910a2cd73e20e5ee0c. Based on my initial impressions, Florence-2 is good, but CogVLM and Moondream2 seemed a bit better (slightly more detail, slightly fewer hallucinations on the three images I tried).

I can caption the entire dataset with llava-next 8b in about a week if you forward it to me; I mostly use that model because it's fast. However, the dataset is missing metadata that is valuable for captioning, i.e. the image titles and descriptions. I can write a BeautifulSoup script to fetch all that, but it will take a bit. If you already have the images downloaded and could share them privately, that would help too.
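A rough sketch of how such a metadata fetch could look (assuming Flickr photo pages expose OpenGraph meta tags; the helper name and User-Agent string are made up, and a real crawler would need throttling and retries):

import requests
from bs4 import BeautifulSoup

def fetch_flickr_metadata(photo_page_url: str) -> dict:
    # Single-page fetch for illustration; a real crawler needs rate limiting
    html = requests.get(photo_page_url, timeout=30,
                        headers={"User-Agent": "megalith-metadata-fetcher"}).text
    soup = BeautifulSoup(html, "html.parser")
    meta = {}
    for key in ("og:title", "og:description"):
        tag = soup.find("meta", property=key)
        if tag and tag.get("content"):
            meta[key] = tag["content"]
    return meta

# Example (hypothetical photo page URL, e.g. from the url_source column):
# fetch_flickr_metadata("https://www.flickr.com/photos/someuser/12345678901/")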

In general, Florence-2 does poorly at integrating that kind of metadata, because it is a tiny model and lacks most context aside from identifying things with bounding boxes. CogVLM2 and InternVL are better but slow. I can also produce multi-label classifier output for Open Images and booru tags.

Let me know if you're interested! Communication here is fine.

Thank you for the discussion.

But I couldn't download the images from Flickr because of the rate limit.

Could you upload the images to Hugging Face?
If you do so, I can download the images and caption them.

@animepfp Thanks for your interest! The image titles / tags / descriptions on Flickr seemed very challenging to use when I looked at them (mostly empty / useless, highly multilingual & non-standard formats, often irrelevant or misleading), but maybe there's some signal there. If you can fetch album names as well you might be able to get better results.
[screenshot: examples of Flickr titles / tags / descriptions]

@alfredplpl Flickr will rate-limit you if you try to download too many images at once from one machine, but if you lower the number of parallel download threads, I think the dataset should download in a day or two.

I don't have the full-resolution dataset downloaded myself (apparently it's around 2TB), but I do have low-resolution (256x256) jpgs saved. Would that be enough for your captioning script?

Unfortunately that is probably a little small for the LLaVA-NeXT architecture. I can try to download it independently. Thank you.

@animepfp Sounds good! If you upload results anywhere (raw images, captions, or both), let me know and I can add a link to the README 🤝

@madebyollin I also think the 256x256 images are too small for dense captioning. But the 256x256 images are useful for first-stage text-to-image training. I'll try to caption the 256x256 images if you upload them.

It is hard for me to download the full-size images.
Someone also said:
https://x.com/IMG_5955/status/1812331380657070521

Thank you for your cooperation.

@alfredplpl Regarding download performance, I did a quick test downloading the first 1m images with 4 threads at 512px resolution, and it seems to take < 1 day per million images.

img2dataset --url_list megalith-00001-of-00010.parquet --input_format "parquet" \
   --url_col "url_highres" --output_format files \
   --save_additional_columns '["url_source"]' \
   --output_folder megalith-10m-00001-of-00010 --processes_count 1 --thread_count 4 --image_size 512 \
   --resize_mode keep_ratio --resize_only_if_bigger true --min_image_size 512


I don't remember what the rate limit is, so you might even be able to increase this to 8 or 16 threads and download faster.

Regarding the tweet:

Megalith-10M images are resized to 1024px on the long edge, making them unsuitable for HD image generation model training. Full-size images are available via Flickr API, but strict limits make it extremely difficult to obtain 10M images this way...

This is true; b (1024px) is the largest size that works with the shared secret (https://www.flickr.com/services/api/misc.urls.html), so downloading higher resolutions probably requires more work (I haven't attempted it yet, but it might require having an API key at download time or something).
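For reference, a small sketch of the size-suffix URL scheme from that page (the server/id/secret values below are made up, for illustration only):

def flickr_photo_url(server: str, photo_id: str, secret: str, suffix: str = "b") -> str:
    # URL pattern from https://www.flickr.com/services/api/misc.urls.html
    # "b" = 1024px on the longest edge; larger sizes need different secrets or the API.
    return f"https://live.staticflickr.com/{server}/{photo_id}_{secret}_{suffix}.jpg"

# Hypothetical values:
print(flickr_photo_url("65535", "52259221868", "abc123def4"))
# -> https://live.staticflickr.com/65535/52259221868_abc123def4_b.jpg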

Thank you for your comment.

I tried the script and got the next 1m images.

$ img2dataset --url_list megalith-00002-of-00010.parquet --input_format "parquet" \
   --url_col "url_highres" --output_format files \
   --save_additional_columns '["url_source"]' \
   --output_folder megalith-10m-00002-of-00010 --processes_count 2 --thread_count 8 --image_size 512 \
   --resize_mode keep_ratio --resize_only_if_bigger true --min_image_size 512
92it [26:35:48, 1040.74s/it]
worker  - success: 0.965 - failed to download: 0.013 - failed to resize: 0.022 - images per sec: 5 - count: 10000
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       197G  143G   47G  76% /

So I will try to caption them, because the size is enough for dense captioning.

I have captioned 1m images with Florence-2 on 4x T4 GPUs. All of the captions will be released next week by my company.

@alfredplpl That's great news!

InternVL2 short and long captions should be done by the end of next month.

Thanks all! Some updates:

  • @liuliu87 from DrawThings.ai has uploaded some preliminary captions made with ShareCaptioner here, as well as archived raw images, which should make Megalith-10m captioning more convenient.

  • I've started a section in the README for linking to the Megalith-10m captioning efforts I'm aware of

Thanks, @liuliu87 .

I'll continue captioning the images so that we can avoid overfitting.

According to the NeurIPS paper [1], using multiple captions per image helps mitigate overfitting (memorization).


[1] https://arxiv.org/abs/2305.20086
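A minimal sketch of how multiple captions per image are typically used during training, per the mitigation discussed in [1]: sample a different caption each time an image is seen (the record format here is hypothetical):

import random
from PIL import Image
from torch.utils.data import Dataset

class MultiCaptionDataset(Dataset):
    # Each record: {"image_path": str, "captions": [str, ...]} (hypothetical format)
    def __init__(self, records):
        self.records = records

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        image = Image.open(rec["image_path"]).convert("RGB")
        # Key idea from [1]: a fresh random caption per access reduces
        # memorization of any single image-text pair.
        caption = random.choice(rec["captions"])
        return image, caption  # real training would also transform the image to a tensor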

@alfredplpl Yes, having multiple captions available would be ideal! Both to reduce memorization of images, and to reduce the model's reliance on any specific captioning style/format.

No worries. I can grab the Florence and ShareGPT4V captions afterwards, make shortened versions of each, and then put them in my repo for a total of 6 captions per image, linking/crediting all the other repos.
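One naive way to make the shortened versions, purely as a sketch (the actual shortening method isn't specified above): keep just the first sentence of each dense caption.

def shorten_caption(caption: str) -> str:
    # Crude first-sentence truncation; a summarization model would do better.
    first = caption.split(". ")[0].strip()
    return first if first.endswith(".") else first + "."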

InternVL2 is a little slow, but the results so far are pretty good.

Awesome, thanks! I've added a link + example images from megalith-10m-florence2 to the README as well.

Great work @animepfp ! Added a flickr-megalith-10m-internvl2-multi-caption link and sample images to the README.
