# LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture

<p align="center"> 📃 <a href="https://arxiv.org/abs/2409.02889" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/LongLLaVA" target="_blank">LongLLaVA</a> </p>

*Efficiency comparison figure (see the paper).*

## 🌈 Update

## Architecture

<details>
<summary>Click to view the architecture image</summary>

*Architecture diagram (see the paper).*

</details>

## Results

<details>
<summary>Click to view the results</summary>

*Benchmark results (see the paper).*

</details>

## Results Reproduction

### 1. Environment Setup

```bash
pip install -r requirements.txt
```

### 2. Data Download and Construction

<details>
<summary>Dataset Taxonomy</summary>

*Dataset taxonomy diagram (see the paper).*

</details>
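The repository does not spell out the training-data schema in this section. As a rough illustration, LLaVA-style corpora are commonly stored as a JSON list of entries like the one below; the field names and values here are assumptions for illustration, not LongLLaVA's confirmed format:

```python
import json

# Hypothetical LLaVA-style sample; LongLLaVA's actual schema may differ.
sample = {
    "id": "000001",
    "image": ["image_path1", "image_path2"],  # one or more image (or video frame) paths
    "conversations": [
        {"from": "human", "value": "<image>\nWhat does the picture show?"},
        {"from": "gpt", "value": "A short description of the picture."},
    ],
}

# Write a one-sample dataset file for inspection.
with open("train_sample.json", "w") as f:
    json.dump([sample], f, indent=2)
```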

### 3. Training

### 4. Evaluation

Run the command-line demo:

```bash
python cli.py --model_dir path-to-longllava
```

Or call the model from Python:

```python
from cli import Chatbot

query = 'What does the picture show?'
image_paths = ['image_path1']  # image or video paths

bot = Chatbot('path-to-longllava')
output = bot.inference(query, image_paths)
print(output)  # prints the model's output
```

To run the benchmark evaluation:

```bash
bash Eval.sh
```
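Because LongLLaVA is built for many-image inputs, the same call should extend to multiple paths. This is a minimal sketch assuming the `Chatbot.inference` interface shown above accepts a list of paths; the frame file names are hypothetical:

```python
from cli import Chatbot

bot = Chatbot('path-to-longllava')

# Query over many frames at once (file names are placeholders).
frames = [f'frames/frame_{i:04d}.jpg' for i in range(8)]
answer = bot.inference('Summarize what happens across these frames.', frames)
print(answer)
```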

### 5. Reproduce Other Results in the Paper

```bash
python ./utils/cal_flops.py
python ./benchmarks/Efficiency/evaluate.py
python ./benchmarks/Efficiency/evaluatevllm.py
```
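For a quick sanity check outside these scripts, you can time a single inference call directly. This is a minimal sketch assuming the `Chatbot` interface from step 4; it is not part of the repository's benchmarking code:

```python
import time

from cli import Chatbot

bot = Chatbot('path-to-longllava')

# Coarse wall-clock latency of one inference call.
start = time.perf_counter()
output = bot.inference('What does the picture show?', ['image_path1'])
elapsed = time.perf_counter() - start

print(f'Latency: {elapsed:.2f} s')
print(output)
```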

## TO DO

## Acknowledgement

## Citation

```bibtex
@misc{wang2024longllavascalingmultimodalllms,
      title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture},
      author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
      year={2024},
      eprint={2409.02889},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02889},
}
```