
Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models

<font size=7><div align='center'><b>Lumen</b>: <b>L</b>arge m<b>u</b>ltimodal <b>m</b>odel with versatile vision-centric capabilities <b>en</b>hancement</div></font>

News

Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models [Paper] <br /> Yang Jiao, Shaoxiang Chen, Zequn Jie, Jingjing Chen, Lin Ma, Yu-Gang Jiang<br />

Abstract

Large Multimodal Models (LMMs) are a hot research topic in computer vision and have demonstrated remarkable potential across multiple disciplines. A recent trend is to further extend and enhance the perception capabilities of LMMs. Current methods follow the paradigm of adapting visual task outputs to the format of the language model, which is the main component of an LMM. This adaptation allows such LMMs to be developed with minimal modifications; however, it overlooks the intrinsic characteristics of diverse visual tasks and hinders the learning of perception capabilities. To address this issue, we propose a novel LMM architecture named Lumen, a Large multimodal model with versatile vision-centric capability enhancement. We decouple the LMM's learning of perception capabilities into task-agnostic and task-specific stages. Lumen first promotes fine-grained vision-language concept alignment, the fundamental capability underlying various visual tasks; the output of this task-agnostic stage is a single representation shared by all the tasks addressed in this paper. Task-specific decoding is then carried out by flexibly routing the shared representation to lightweight task decoders with negligible training effort.
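To make the decoupling concrete, here is a minimal PyTorch sketch of the two-stage idea. All names, dimensions, and decoder heads (`TaskAgnosticStage`, `LumenSketch`, the 14x14 token grid, the per-task linear heads) are illustrative assumptions for this README, not the actual Lumen implementation; see the paper for the real architecture.

```python
# A minimal sketch of the two-stage design described above.
# Module names, shapes, and decoder heads are illustrative assumptions,
# not the authors' actual implementation.
import torch
import torch.nn as nn


class TaskAgnosticStage(nn.Module):
    """Stage 1: fine-grained vision-language alignment shared by all tasks."""

    def __init__(self, vis_dim=256, txt_dim=256, shared_dim=256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, shared_dim)
        self.txt_proj = nn.Linear(txt_dim, shared_dim)

    def forward(self, vis_tokens, txt_tokens):
        # Match every visual token against the pooled language query;
        # the per-location matching scores act as the shared representation.
        v = self.vis_proj(vis_tokens)               # (B, N, D)
        t = self.txt_proj(txt_tokens).mean(dim=1)   # (B, D)
        return torch.einsum("bnd,bd->bn", v, t)     # (B, N)


class LumenSketch(nn.Module):
    """Stage 2: route the shared representation to lightweight task decoders."""

    def __init__(self, num_locations=196):
        super().__init__()
        self.stage1 = TaskAgnosticStage()
        # One small decoder per task; each maps the shared representation
        # to a task-specific output format (hypothetical output sizes).
        self.decoders = nn.ModuleDict({
            "detection": nn.Linear(num_locations, 4),             # box coords
            "segmentation": nn.Linear(num_locations, num_locations),
            "pose": nn.Linear(num_locations, 17 * 2),             # keypoints
        })

    def forward(self, vis_tokens, txt_tokens, task):
        shared = self.stage1(vis_tokens, txt_tokens)
        return self.decoders[task](shared)  # flexible task routing


# Usage: the same shared representation feeds three different decoders.
model = LumenSketch()
vis = torch.randn(2, 196, 256)   # e.g. a 14x14 grid of visual tokens
txt = torch.randn(2, 8, 256)     # tokenized instruction
boxes = model(vis, txt, task="detection")   # (2, 4)
keypoints = model(vis, txt, task="pose")    # (2, 34)
```

The design point this sketch illustrates: stage 1 carries all the heavy, task-agnostic learning, so supporting an additional visual task only requires attaching another lightweight decoder head.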

Performance

Object Detection

| Type | Model | Input Size | mAP | AP50 | AP75 |
|---|---|---|---|---|---|
| Specialists | Faster R-CNN-R50 | 1333×800 | 40.3 | 61.0 | 44.0 |
| Specialists | DETR-DC5 | 1333×800 | 43.3 | 63.1 | 45.9 |
| Vision Generalists | Pix2Seq-v2 | 1024×1024 | 46.5 | - | - |
| Vision Generalists | UniPerceiver-v2 | 1600×1400 | 58.6 | - | - |
| LMM Generalists | Griffon-13B | 448×448 | 24.8 | 40.6 | 25.1 |
| LMM Generalists | Lumen-7B | 448×448 | 33.9 | 51.2 | 34.2 |
| LMM Generalists | Lumen-7B-v1.5 | 448×448 | 35.3 | 53.2 | 35.8 |

Instance Segmentation

| Type | Model | mAP | AP50 | AP75 |
|---|---|---|---|---|
| Specialists | Mask R-CNN-R50 | 37.1 | 58.4 | 40.1 |
| Specialists | PolarMask | 30.5 | 52.0 | 31.1 |
| Vision Generalists | Pix2Seq-v2 | 38.2 | - | - |
| Vision Generalists | UniPerceiver-v2 | 50.6 | - | - |
| LMM Generalists | Lumen-7B | 29.1 | 47.5 | 29.6 |
| LMM Generalists | Lumen-7B-v1.5 | 30.4 | 49.8 | 31.0 |

Pose Estimation

| Type | Model | mAP | AP50 | AP75 |
|---|---|---|---|---|
| Specialists | CPM | 62.7 | 86.2 | 70.9 |
| Specialists | RTMPose | 68.2 | 88.3 | 75.9 |
| Vision Generalists | Pix2Seq-v2 | 64.8 | - | - |
| LMM Generalists | Lumen-7B | 65.4 | 90.4 | 72.2 |
| LMM Generalists | Lumen-7B-v1.5 | 67.2 | 90.4 | 75.6 |

VQA

| Model | MMBench-DEV (EN) | SEED-IMG | MME (Perception/Cognition) | MMMU-VAL | MathVista |
|---|---|---|---|---|---|
| InstructBLIP | 36.0 | 58.8 | 1213/292 | 32.9 | 25.3 |
| MiniGPT-4 | 24.3 | 47.4 | 582/144 | - | 23.1 |
| Shikra | 58.8 | - | - | - | - |
| Qwen-VL-Chat | 60.6 | 58.2 | 1488/361 | 35.9 | - |
| LLaVA-v1.5 | 64.3 | 66.1 | 1511/296 | 35.6 | 23.5 |
| Lumen-7B-v1.5 | 64.9 | 65.8 | 1426/332 | 35.2 | 24.6 |

Citation

If you find this project useful in your research, please consider citing:

@article{jiao2024lumen,
  title={Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models},
  author={Jiao, Yang and Chen, Shaoxiang and Jie, Zequn and Chen, Jingjing and Ma, Lin and Jiang, Yu-Gang},
  journal={arXiv preprint arXiv:2403.07304},
  year={2024}
}

Acknowledgement