In this repository I use CoMM to perform the Zero-Shot Composed Image Retrieval (ZS-CIR) task.
I built the CoMM backbone on OpenAI's CLIP text and image encoders. I then trained CoMM on the CC3M dataset, aiming for strong recall@1, @5, and @10 in both image-to-text and text-to-image retrieval. Finally, I applied the trained CoMM model to perform zero-shot composed image retrieval on the CIRCO benchmark dataset.
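For illustration, here is a minimal sketch of how a single composed query (reference image plus modification text) can be scored against an image index with a trained CoMM-style model. The `encode_multimodal` and `encode_image` method names and the tensor shapes are assumptions made for this sketch, not necessarily the actual API of this repository:

```python
# Hypothetical sketch: scoring one composed query against an image gallery.
# `encode_multimodal` / `encode_image` are assumed method names, not the real API.
import torch
import torch.nn.functional as F

@torch.no_grad()
def rank_candidates(model, ref_image, mod_text, index_images):
    """Rank gallery images for one (reference image, modification text) query."""
    # Fuse the reference image and the relative caption into one multimodal query.
    query = model.encode_multimodal(ref_image.unsqueeze(0), [mod_text])   # (1, d)
    # Embed every candidate image of the retrieval index.
    gallery = model.encode_image(index_images)                            # (N, d)
    # Cosine similarity between the fused query and each candidate.
    query, gallery = F.normalize(query, dim=-1), F.normalize(gallery, dim=-1)
    sims = (query @ gallery.T).squeeze(0)                                 # (N,)
    return sims.argsort(descending=True)                                  # ranked gallery indices
```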
Unfortunately, the results obtained fall short of the state of the art.
Validation metrics on CC3M during CoMM training (recall@K is computed as in the sketch after the list):
- Recall@1_text2img: 0.0998
- Recall@5_text2img: 0.2510
- Recall@10_text2img: 0.3432
- Recall@1_img2text: 0.0917
- Recall@5_img2text: 0.2347
- Recall@10_img2text: 0.3239
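These are the standard recall@K values: a query counts as a hit at K if its ground-truth match appears among the top-K retrieved items. A minimal sketch, assuming a square similarity matrix in which the correct candidate for query i is candidate i:

```python
import torch

def recall_at_k(sim: torch.Tensor, ks=(1, 5, 10)) -> dict:
    """Recall@K from a (Q, Q) similarity matrix where sim[i, j] scores query i
    against candidate j and the correct candidate for query i is candidate i."""
    ranks = sim.argsort(dim=1, descending=True)           # (Q, Q) ranked candidate ids
    targets = torch.arange(sim.size(0)).unsqueeze(1)      # (Q, 1) ground-truth ids
    # Position of the correct candidate in each query's ranking.
    hit_rank = (ranks == targets).float().argmax(dim=1)   # (Q,)
    return {f"Recall@{k}": (hit_rank < k).float().mean().item() for k in ks}

# text-to-image uses sim(text, image); image-to-text uses its transpose.
```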
Validation metrics on CIRCO using CoMM (see the mAP@K sketch after the list):
- circo_map_at5: 3.00
- circo_map_at10: 3.50
- circo_map_at25: 4.16
- circo_map_at50: 4.47
- circo_recall_at5: 10.45
- circo_recall_at10: 15.91
- circo_recall_at25: 22.73
- circo_recall_at50: 29.55
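The CIRCO numbers are mean Average Precision and recall at several cutoffs, where each query can have multiple ground-truth target images. A minimal mAP@K sketch under that assumption (the exact normalization used by the official CIRCO evaluation may differ slightly):

```python
def map_at_k(ranked_ids, gt_ids, k=5):
    """mAP@K when each query may have several ground-truth images.
    ranked_ids[q]: ranked list of retrieved image ids for query q.
    gt_ids[q]: set of ground-truth image ids for query q."""
    aps = []
    for retrieved, gts in zip(ranked_ids, gt_ids):
        hits, precisions = 0, []
        for rank, img_id in enumerate(retrieved[:k], start=1):
            if img_id in gts:                  # relevant item at this rank
                hits += 1
                precisions.append(hits / rank)
        denom = min(len(gts), k)               # normalize by the reachable maximum
        aps.append(sum(precisions) / denom if denom else 0.0)
    return 100.0 * sum(aps) / len(aps)         # percentage, as in the list above
```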
Most of the code comes from the original paper "What to align in multimodal contrastive learning?". Reference: Benoit Dufumier*, Javiera Castillo Navarro*, Devis Tuia, Jean-Philippe Thiran.
Abstract: Humans perceive the world through multisensory integration, blending the information of different modalities to adapt their behavior. Alignment through contrastive learning offers an appealing solution for multimodal self-supervised learning. Indeed, by considering each modality as a different view of the same entity, it learns to align features of different modalities in a shared representation space. However, this approach is intrinsically limited as it only learns shared or redundant information between modalities, while multimodal interactions can arise in other ways. In this work, we introduce CoMM, a Contrastive MultiModal learning strategy that enables the communication between modalities in a single multimodal space. Instead of imposing cross- or intra-modality constraints, we propose to align multimodal representations by maximizing the mutual information between augmented versions of these multimodal features. Our theoretical analysis shows that shared, synergistic and unique terms of information naturally emerge from this formulation, allowing to estimate multimodal interactions beyond redundancy. We test CoMM both in a controlled and in a series of real-world settings: in the former, we demonstrate that CoMM effectively captures redundant, unique and synergistic information between modalities. In the latter, we show that CoMM learns complex multimodal interactions and achieves state-of-the-art results on seven multimodal tasks.
You can install all the packages required to run CoMM with conda:
```bash
git clone https://github.com/Duplums/CoMM && cd CoMM
conda env create -f environment.yml
conda activate multimodal
```