
<img src="figures/trol_emoji.png" style="vertical-align: -5px;" height="50px" width="50px"> TroL: Traversal of Layers for Large Language and Vision Models [ArXiv]


📰 News

Thanks to the Hugging Face staff, every user can access a free ZeroGPU (NVIDIA A100). However, queries are limited, so if inference gets stuck, please wait a few minutes. (A local demo runs much faster than this online GPU space.)

This is the official PyTorch implementation of the technical part of Traversal of Layers (TroL), which improves performance on numerous vision-language benchmarks with an efficient model size. The code is developed from scratch, and I have been trying to improve its readability and simplicity compared with LLaVA, whose codebase is structured relatively complexly.
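The core idea — traversing a layer means running the same transformer layer again on its own output, and the TroL-Mixer blends the traversed output with the normal output via a mixing weight — can be sketched as follows. This is a simplified, illustrative NumPy sketch, not the actual model code: the stand-in `layer` is just a linear map with ReLU, and the gate is passed in directly, whereas the real TroL-Mixer computes the mixing ratio from the hidden states.

```python
import numpy as np

def layer(x, W):
    """Stand-in for a transformer layer: here just a linear map + ReLU."""
    return np.maximum(x @ W, 0.0)

def trol_layer(x, W, gate):
    """Layer traversing: run the same layer twice and mix the two outputs.

    gate is a mixing weight in [0, 1]; gate=0 recovers the plain layer,
    gate=1 uses only the traversed (reused) forward pass.
    """
    once = layer(x, W)       # normal forward pass
    twice = layer(once, W)   # traversal: reuse the same layer weights
    return gate * twice + (1.0 - gate) * once

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # 4 tokens, hidden size 8
W = rng.standard_normal((8, 8)) * 0.1
y = trol_layer(x, W, gate=0.5)
print(y.shape)  # (4, 8): same shape and no extra layer parameters
```

The point the sketch makes is that traversal adds forward-pass depth without adding parameters: the weights `W` are reused, so only the mixing mechanism is new.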

💡 Highlighted Images

<img src="figures/figure1.jpg" width="192" height="300"/>

Figure 1. TroL Layer. New Propagation.

<img src="figures/figure2.jpg" width="355" height="300">

Figure 2. Structure of TroL-Mixer.

<img src="figures/figure3.jpg" width="819" height="300">

Figure 3. Performances across numerous model sizes.

<img src="figures/figure4.jpg" width="599" height="300">

Figure 4. Comparison with Closed-source LLVMs.

<img src="figures/figure5.jpg" width="1008" height="300">

Figure 5. Investigating where layer traversing (reusing layers) mostly happens.

📊 Results

Open-source LLVMs with Standard Model Size

| LLVMs | SQA-IMG | POPE | MME | MMB | MathVista | SEED-IMG | MM-Vet | LLaVA-W |
|---|---|---|---|---|---|---|---|---|
| Yi-VL-6B | 71.7 | 82.5 | 1915 | 64.2 | 29.7 | 67.5 | 32.1 | 51.9 |
| LLaVA-NeXT-7B | 70.1 | 86.5 | 1851 | 69.6 | 34.6 | 70.2 | 43.9 | 72.3 |
| MM1-7B | 72.6 | 86.6 | 1858 | 72.3 | 35.9 | 70.9 | 42.1 | - |
| TroL-1.8B | 87.5 | 88.6 | 2038 | 76.1 | 45.4 | 69.0 | 45.1 | 69.7 |
| TroL-3.8B | 90.8 | 86.5 | 1980 | 79.2 | 55.1 | 70.5 | 51.1 | 76.6 |
| TroL-7B | 92.8 | 87.8 | 2308 | 83.5 | 51.8 | 75.3 | 54.7 | 92.8 |

Open-source LLVMs with Large Model Sizes

| LLVMs | AI2D | ChartQA | MME | MMB | MathVista | MM-Vet | LLaVA-W |
|---|---|---|---|---|---|---|---|
| InternVL1.5-40B | 79.0 | 68.0 | 2175 | 82.2 | 47.7 | 48.9 | - |
| InternVL1.5-26B | 80.7 | 83.8 | 2188 | 82.2 | 53.5 | 62.8 | - |
| MM1-30B | - | - | 2069 | 75.1 | 39.4 | 48.7 | - |
| MiniGemini-34B | - | - | 2105 | 79.6 | 38.9 | 53.0 | - |
| MiniGemini-HD-34B | - | - | 2141 | 80.6 | 43.3 | 59.3 | - |
| LLaVA-NeXT-34B | 74.9 | 68.7 | 2030 | 79.3 | 46.0 | 57.4 | 88.8 |
| LLaVA-NeXT-8B | 71.6 | 69.5 | 1972 | 72.1 | 37.5 | - | 80.1 |
| LLaVA-NeXT-72B | 77.4 | 77.0 | 2159 | 80.5 | 46.6 | - | 89.2 |
| LLaVA-NeXT-110B | 80.4 | 80.4 | 2201 | 80.5 | 49.0 | - | 90.4 |
| TroL-1.8B | 68.9 | 64.0 | 2038 | 76.1 | 45.4 | 45.1 | 69.7 |
| TroL-3.8B | 73.6 | 73.8 | 1980 | 79.2 | 55.1 | 51.1 | 76.6 |
| TroL-7B | 78.5 | 71.2 | 2308 | 83.5 | 51.8 | 54.7 | 92.8 |

Closed-source LLVMs

| LLVMs | SQA-IMG | AI2D | ChartQA | MME | MMB | MathVista | SEED-IMG | MMStar |
|---|---|---|---|---|---|---|---|---|
| Qwen-VL-Plus | 71.6 | 75.9 | 78.1 | 2183 | 67.0 | 43.3 | 72.7 | 39.7 |
| Gemini-Pro | 80.1 | 73.9 | 74.1 | 1933 | 73.6 | 45.2 | 70.7 | 41.6 |
| GPT-4V | 84.6 | 78.2 | 78.5 | 1927 | 77.0 | 49.9 | 69.1 | 46.1 |
| TroL-1.8B | 87.5 | 68.9 | 64.0 | 2038 | 76.1 | 45.4 | 69.0 | 45.5 |
| TroL-3.8B | 90.8 | 73.6 | 73.8 | 1980 | 79.2 | 55.1 | 70.5 | 46.5 |
| TroL-7B | 92.8 | 78.5 | 71.2 | 2308 | 83.5 | 51.8 | 75.3 | 51.3 |

📋 Visual Instruction Tuning Dataset Description for <img src="figures/trol_emoji.png" style="vertical-align: -5px;" height="50px" width="50px"> TroL

Total: 2,273,830 (2.3M)

------------------------------
* Real-World Image: 755k
* Real-World Text: 143k
* Document & Chart & Diagram & Sign & Symbol: 627k
* Math: 747k
    - Math with Vision: 180k
    - Math with Text only: 566k
------------------------------

- ShareGPT4V-Caption [without SAM] (91021, 91k)
- ShareGPT4V-Instruction [Without few samples of OCR-VQA] (664703, 664k)
- ALLAVA4V-Text (143000, 143k)
- MiniGemini-Instruction [DocVQA, ChartQA, DVQA, AI2D] (27670, 27k)
- DocDownstream (574268, 574k)
- DocReason (25877, 25k)
- GLLaVA-Align (60252, 60k)
- GLLaVA-QA (117205, 117k)
- MathVision (3040, 3k)
- MathInstruct [TextOnlyDataset] (262040, 262k)
- MathPlus [TextOnlyDataset] (304754, 304k)

We gather the above eleven datasets. For MiniGemini, we selectively use data samples only from DocVQA, ChartQA, DVQA, and AI2D, so there is no need to download all of the MiniGemini data samples.
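The per-dataset sample counts listed above do add up to the stated total; a quick sanity check (counts copied verbatim from the list):

```python
# Sample counts from the dataset list above, keyed by dataset name.
counts = {
    "ShareGPT4V-Caption": 91021,
    "ShareGPT4V-Instruction": 664703,
    "ALLAVA4V-Text": 143000,
    "MiniGemini-Instruction": 27670,
    "DocDownstream": 574268,
    "DocReason": 25877,
    "GLLaVA-Align": 60252,
    "GLLaVA-QA": 117205,
    "MathVision": 3040,
    "MathInstruct": 262040,
    "MathPlus": 304754,
}

total = sum(counts.values())
print(total)  # 2273830, i.e. the 2.3M stated above
```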

Gathered Dataset Layout

TroL_Dataset_Path
├── llava                                                       # ShareGPT4V
│   └── llava_pretrain
│       └── images
├── coco                                                        # ShareGPT4V
│   └── train2017
├── sam                                                         # ShareGPT4V
│   └── images
├── gqa                                                         # ShareGPT4V
│   └── images
├── ocr_vqa                                                     # ShareGPT4V
│   └── images
├── textvqa                                                     # ShareGPT4V
│   └── train_images
├── vg                                                          # ShareGPT4V
│   ├── VG_100K
│   └── VG_100K_2
├── share_textvqa                                               # ShareGPT4V
│   └── images
├── web-celebrity                                               # ShareGPT4V
│   └── images
├── web-landmark                                                # ShareGPT4V
│   └── images
├── wikiart                                                     # ShareGPT4V
│   └── images
├── docvqa                                                      # MiniGemini
│   └── images
├── chartqa                                                     # MiniGemini
│   └── train
│       └── images
├── dvqa                                                        # MiniGemini
│   └── images
├── ai2d                                                        # MiniGemini
│   └── images
├── imgs                                                        # DocDownstream & DocReason
│   ├── ChartQA
│   ├── DUE_Benchmark
│   │   ├── DeepForm
│   │   ├── DocVQA
│   │   ├── InfographicsVQA
│   │   ├── KleisterCharity
│   │   ├── TabFact
│   │   └── WikiTableQuestions
│   ├── TextCaps
│   ├── TextVQA
│   └── VisualMRC
├── geo3k                                                       # GLLaVA
│   └── train
├── geoqa_plus                                                  # GLLaVA
├── images                                                      # MathVision
│
├── sharegpt4v_instruct_gpt4-vision_cap100k.json                # ShareGPT4V-Caption
├── sharegpt4v_mix665k_cap23k_coco-ap9k_lcs3k_sam9k_div2k.json  # ShareGPT4V-Instruction
├── Evol-Instruct-GPT4-Turbo-143K.json                          # ALLAVA4V-Text
├── train.jsonl                                                 # DocDownstream
├── detailed_explanation.jsonl                                  # DocReason
├── minigemini_instruction.json                                 # MiniGemini-Instruction
├── gllava_align.parquet                                        # GLLaVA-Align
├── gllava_qa.parquet                                           # GLLaVA-QA
├── mathvision.parquet                                          # MathVision
├── MathInstruct.json                                           # MathInstruct
└── mathplus.parquet                                            # MathPlus

📂 Evaluation Benchmarks

This is the list of evaluation datasets. Once you have downloaded them all, the datasets should be placed in the folder according to the directory layout below.

Evaluation Dataset Directory Layout

Evaluation_Dataset_Path
├── LLVisionQA-QBench               # Q-Bench
├── ScienceQA                       # SQA-IMG
├── ai2d                            # AI2D
├── chartqa                         # ChartQA
├── SEED-Bench                      # SEED-IMG
├── POPE                            # POPE
├── HallusionBench                  # HallusionBench
├── MME_Benchmark_release_version   # MME
├── MathVista                       # MathVista
├── MMBench                         # MMB
├── mm-vet                          # MM-Vet
├── llava-bench-in-the-wild         # LLaVA Bench in the Wild
├── MMStar                          # MMStar
├── MathVerse                       # MathVerse
└── VisualWebBench                  # VisualWebBench