
# <img width="60" alt="image" src="https://github.com/OpenGVLab/InternVL/assets/8529570/5aa4cda8-b453-40a0-9336-17012b430ae8"> Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed

This repository contains our customized mmcv/mmsegmentation/mmdetection code, integrated with DeepSpeed, which can be used for training large-scale object detection and semantic segmentation models.


## What is InternVL?


InternVL scales up the ViT to 6B parameters and aligns it with an LLM.

It is the largest open-source vision/vision-language foundation model (14B) to date, achieving state-of-the-art results on 32 benchmarks spanning a wide range of tasks, such as visual perception, cross-modal retrieval, and multimodal dialogue.


## Performance

## Installation

> **Note:** this codebase requires a lower-version environment (i.e., `torch==1.12.0`), which differs from the environment of our main repository.
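As a rough sketch, an environment matching that requirement might be created as follows. Only `torch==1.12.0` is stated by this README; the Python version, the `torchvision` pairing, and the DeepSpeed install are illustrative assumptions, not pinned by this repository.

```shell
# Create an isolated environment (conda assumed; the env name and
# Python version are illustrative choices, not repository requirements).
conda create -n internvl-det python=3.9 -y
conda activate internvl-det

# The pinned PyTorch version this codebase requires; torchvision 0.13.0
# is the release that pairs with torch 1.12.0 (an assumption here).
pip install torch==1.12.0 torchvision==0.13.0

# DeepSpeed for large-scale training; no version is pinned by this note.
pip install deepspeed
```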

## How to use?

The usage is largely consistent with that of standard mmsegmentation and mmdetection.

Please see the README in the corresponding folder for details.
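Since usage follows standard mmsegmentation/mmdetection conventions, a training run would typically go through their usual distributed launcher. The sketch below assumes the stock `tools/dist_train.sh` script; the folder name, config file, and GPU count are placeholders, not files guaranteed by this repository.

```shell
# Hypothetical distributed training launch via mmsegmentation's
# standard script (config path and GPU count are placeholders).
cd segmentation
sh tools/dist_train.sh configs/internvit_6b_example.py 8
```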

## Schedule

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
```