PRIMA: Multi-Image Vision-Language Models for Reasoning Segmentation
- Authors: Muntasir Wahed, Kiet A. Nguyen, Adheesh Sunil Juvekar, Xinzhuo Li, Xiaona Zhou, Vedant Shah, Tianjiao Yu, Pinar Yanardag, Ismini Lourentzou
- Affiliations: University of Illinois Urbana-Champaign, Virginia Tech
Despite significant advancements in Large Vision-Language Models (LVLMs), existing pixel-grounding models operate in single-image settings, limiting their ability to perform detailed, fine-grained comparisons across multiple images. Conversely, current multi-image understanding models lack pixel-level grounding. Our work addresses this gap by introducing the task of multi-image pixel-grounded reasoning segmentation, and PRIMA, a novel LVLM that integrates pixel-level grounding with robust multi-image reasoning capabilities to produce contextually rich, pixel-grounded explanations. Central to PRIMA is an efficient vision module that queries fine-grained visual representations across multiple images, reducing TFLOPs by $25.3\%$. To support training and evaluation, we curate $M^4Seg$, a new reasoning segmentation benchmark consisting of $\sim$224K question-answer pairs that require fine-grained visual understanding across multiple images. Experimental results demonstrate that PRIMA outperforms state-of-the-art baselines.
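To make the idea of "querying fine-grained visual representations across multiple images" concrete, the following is a minimal sketch of one plausible design: a small set of learnable queries cross-attends to patch features pooled from several images. The module name `MultiImageQueryModule`, the dimensions, and the overall layout are illustrative assumptions, not the actual PRIMA architecture.

```python
import torch
import torch.nn as nn


class MultiImageQueryModule(nn.Module):
    """Hypothetical sketch: learnable queries cross-attend to patch features
    gathered from multiple images, yielding a compact visual summary that a
    downstream LVLM could consume. Not the paper's actual implementation."""

    def __init__(self, feat_dim: int = 1024, num_queries: int = 32, num_heads: int = 8):
        super().__init__()
        # Learnable query tokens shared across all input images.
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        # image_feats: (batch, num_images, num_patches, feat_dim)
        b, n, p, d = image_feats.shape
        # Flatten every image's patch tokens into one key/value sequence per sample,
        # so the queries can attend across all images jointly.
        kv = image_feats.reshape(b, n * p, d)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        pooled, _ = self.cross_attn(q, kv, kv)
        return self.norm(pooled)  # (batch, num_queries, feat_dim)


# Usage: 2 samples, 4 images each, 256 patches per image, 1024-dim features.
feats = torch.randn(2, 4, 256, 1024)
summary = MultiImageQueryModule()(feats)
print(summary.shape)  # torch.Size([2, 32, 1024])
```

Because the queries attend to all images at once rather than processing each image's full token grid separately, a design along these lines can reduce the number of visual tokens (and hence compute) passed to the language model, which is the kind of efficiency gain the abstract attributes to PRIMA's vision module.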