Few-Shot Semantic Segmentation meets Explainability
Installation • Quick Start • Datasets • Reproduction • Examples
AffinityExplainer is a framework for interpreting matching-based few-shot semantic segmentation models. By extracting and visualizing the pixel-level contributions of support images to each prediction, it makes these models' decision-making process transparent.
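To make the idea concrete, here is a minimal, illustrative sketch of pixel-level attribution in a matching-based model. The function name, shapes, and cosine-similarity matching are assumptions for illustration only, not the actual AffinityExplainer API:

```python
import numpy as np

def support_attribution(query_feats, support_feats, support_mask):
    """Illustrative sketch: how much each support pixel contributes
    to each query pixel's foreground score.

    query_feats:   (Hq*Wq, C) query feature vectors
    support_feats: (Hs*Ws, C) support feature vectors
    support_mask:  (Hs*Ws,)   binary foreground mask of the support image
    """
    # Cosine-similarity affinity between every query/support pixel pair.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    affinity = q @ s.T  # (Hq*Wq, Hs*Ws)

    # Softmax over support pixels: each row is a distribution telling
    # which support pixels a query pixel attends to.
    w = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)

    # Masked weights give per-support-pixel contributions; summing them
    # yields the query pixel's foreground score.
    contribution = w * support_mask        # (Hq*Wq, Hs*Ws)
    fg_score = contribution.sum(axis=1)    # (Hq*Wq,)
    return contribution, fg_score
```

Because the prediction is literally a sum over support-pixel terms, the contribution matrix is an exact decomposition of the output rather than a post-hoc approximation, which is the sense in which such models are interpretable by design.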
This repository accompanies our paper:
"Matching-Based Few-Shot Semantic Segmentation Models Are Interpretable by Design"
- 🔍 Pixel-Level Attribution: Extract the contribution of each support pixel to final predictions
- 📊 Interactive Visualizations: Comprehensive tools for analyzing model behavior
- ⚡ One-Line Deployment: Run demos instantly with minimal setup
- 🎯 Reproducible Research: Complete scripts for all paper experiments
Experience AffinityExplainer instantly without any installation:
uvx --from https://github.com/pasqualedem/AffinityExplainer app

💡 Requirements: Only uv is needed to run this command.
This launches an interactive web application where you can explore the interpretability capabilities of matching-based few-shot segmentation models.
You can also run the demo locally after installation:
streamlit run affex/app.py

We use uv for fast and reliable dependency management.
Ensure you have uv installed:
curl -LsSf https://astral.sh/uv/install.sh | sh

Clone the repository and install dependencies:
git clone https://github.com/pasqualedem/AffinityExplainer.git
cd AffinityExplainer
uv sync
source .venv/bin/activate

AffinityExplainer supports the PASCAL VOC12 and COCO datasets for few-shot semantic segmentation experiments.
bash scripts/download_pascal.sh
bash scripts/download_coco.sh

The datasets will be automatically organized into the directory structure expected by the framework.
AffinityExplainer supports DCAMA and DMTNet few-shot segmentation models. Download pre-trained weights using the scripts below:
bash scripts/download_dcama.sh
bash scripts/download_dmtnet.sh

All experiments and ablation studies from the paper can be reproduced using the provided scripts in the scripts/ directory.
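Since each line of a script like scripts/experiments.sh is one complete command, the whole list can be driven programmatically. The helper below is a hypothetical sketch (run_experiment_lines and dry_run are illustrative names, not part of the repository):

```python
import shlex
import subprocess
from pathlib import Path

def run_experiment_lines(script_path, dry_run=True):
    """Collect the experiment commands in a script where each
    non-comment, non-blank line is one full command; optionally run them."""
    commands = []
    for raw in Path(script_path).read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        commands.append(line)
        if not dry_run:
            # Execute the command, failing fast on a non-zero exit code.
            subprocess.run(shlex.split(line), check=True)
    return commands
```

With dry_run=True this only lists the commands, which is handy for checking which experiments a script will launch before committing GPU time.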
Each line in scripts/experiments.sh corresponds to a specific experiment configuration:
# Example: Run COCO 1-shot experiment
python main.py grid --parameters parameters/coco/cut_iauc_miou_N1K1.yaml

Below are example interpretability visualizations generated by AffinityExplainer on the DCAMA model:
These visualizations demonstrate how support pixels contribute to query segmentation, revealing the matching patterns learned by few-shot models.
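As an illustrative sketch (not the repository's actual API), per-support-pixel contributions can be collapsed into a displayable heatmap over the support image. The function name and the (num_query_pixels, num_support_pixels) contribution layout are assumptions:

```python
import numpy as np

def support_heatmap(contribution, support_shape, query_idx=None):
    """Turn a (num_query_pixels, num_support_pixels) contribution matrix
    into an (Hs, Ws) heatmap over the support image.

    query_idx=None aggregates over all query pixels; an integer index
    shows what a single query pixel attended to.
    """
    hs, ws = support_shape
    if query_idx is None:
        per_support = contribution.sum(axis=0)
    else:
        per_support = contribution[query_idx]
    heat = per_support.reshape(hs, ws)
    # Min-max normalize to [0, 1] for display.
    span = heat.max() - heat.min()
    if span > 0:
        heat = (heat - heat.min()) / span
    return heat
```

The resulting array can be overlaid on the support image with any plotting library to reproduce the kind of matching-pattern visualization shown above.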
If you find AffinityExplainer useful in your research, please consider citing our paper:
@misc{marinisMatchingBasedFewShotSemantic2025,
title = {Matching-{Based} {Few}-{Shot} {Semantic} {Segmentation} {Models} {Are} {Interpretable} by {Design}},
url = {http://arxiv.org/abs/2511.18163},
doi = {10.48550/arXiv.2511.18163},
publisher = {arXiv},
author = {Marinis, Pasquale De and Kaymak, Uzay and Brussee, Rogier and Vessio, Gennaro and Castellano, Giovanna},
year = {2025},
}

This project is licensed under the MIT License - see the LICENSE file for details.
We thank the authors of the few-shot semantic segmentation models used in this work for making their code publicly available.
For questions, issues, or collaborations, please:
- Open an issue on GitHub
- Contact the authors via email
Made with ❤️ for Interpretable AI