# LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture

<p align="center"> 📃 <a href="https://arxiv.org/abs/2409.02889" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/LongLLaVA-53B-A13B" target="_blank">LongLLaVA-53B-A13B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/LongLLaVA-9B" target="_blank">LongLLaVA-9B</a> </p>

*(Figure: efficiency comparison)*

## 🌈 Update

## Architecture

<details> <summary>Click to view the architecture image</summary>

*(architecture diagram)*

</details>

## Results

<details> <summary>Click to view the results</summary> </details>

## Results Reproduction

### 1. Environment Setup

```bash
pip install -r requirements.txt
```
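
As a quick sanity check after installation, you can confirm that PyTorch sees your GPU. This is a minimal sketch; the exact package set pinned in `requirements.txt` is an assumption here, based on what LLaVA-style repos typically require:

```python
# Minimal environment check (illustrative; assumes torch is among the
# pinned requirements, which is typical for LLaVA-style repos).
import torch

print(f"torch {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
```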

### 2. Data Download and Construction

<details> <summary>Dataset Taxonomy</summary>

*(dataset taxonomy diagram)*

</details>

### 3. Training

### 4. Evaluation
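
Before evaluating, make sure a checkpoint is available locally. A minimal sketch using the standard `huggingface_hub` client; the `path-to-longllava` directory name simply mirrors the placeholder used below, and you can swap in `FreedomIntelligence/LongLLaVA-53B-A13B` for the larger model:

```python
from huggingface_hub import snapshot_download

# Fetch the released 9B checkpoint from the Hugging Face Hub.
# local_dir matches the placeholder path used by cli.py below.
snapshot_download(
    repo_id="FreedomIntelligence/LongLLaVA-9B",
    local_dir="path-to-longllava",
)
```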

Run the command-line demo:

```bash
python cli.py --model_dir path-to-longllava
```

Or call the model directly from Python:

```python
from cli import Chatbot

query = 'What does the picture show?'
image_paths = ['image_path1']  # image or video paths

bot = Chatbot('path-to-longllava')  # directory of the downloaded checkpoint
output = bot.chat(query, image_paths)
print(output)  # prints the model's response
```
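
Since `image_paths` is a plain list that accepts image or video paths, the same interface extends to the multi-image inputs the paper targets. A short illustrative sketch (the frame file names are placeholders):

```python
from cli import Chatbot

bot = Chatbot('path-to-longllava')

# Multiple images (e.g., sampled video frames) go in a single list.
frames = [f'frames/frame_{i:04d}.png' for i in range(0, 64, 8)]
print(bot.chat('Summarize what happens across these frames.', frames))
```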
To run the benchmark evaluation:

```bash
bash Eval.sh
```

### 5. Reproducing Other Results in the Paper

```bash
python ./utils/cal_flops.py                     # FLOPs calculation
python ./benchmarks/Efficiency/evaluate.py      # efficiency evaluation
python ./benchmarks/Efficiency/evaluatevllm.py  # efficiency evaluation with vLLM
python ./utils/dense_downcycling.py             # dense down-cycling
```
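
For a rough standalone latency probe outside the provided scripts, something like the sketch below can be used. Note this is generic: loading LongLLaVA through `AutoModelForCausalLM` is an assumption (the model may require its own loader, as in `cli.py`), and the scripts above remain the authoritative way to reproduce the paper's efficiency numbers:

```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic latency probe; the loading path is an assumption, since LongLLaVA
# may need its own loader (see cli.py / benchmarks/Efficiency).
model_dir = "path-to-longllava"
tok = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map="auto"
)

def sync():
    # Only synchronize when a CUDA device is actually present.
    if torch.cuda.is_available():
        torch.cuda.synchronize()

inputs = tok("What does the picture show?", return_tensors="pt").to(model.device)

sync()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=64)
sync()

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens} tokens in {time.perf_counter() - start:.2f}s")
```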

## TO DO

## Acknowledgement

## Citation

```bibtex
@misc{wang2024longllavascalingmultimodalllms,
      title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture},
      author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
      year={2024},
      eprint={2409.02889},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02889},
}
```