
LMDeploy-Jetson Community

Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function independently without continuous internet access.

[中文] | [English]

This project focuses on adapting LMDeploy for use with NVIDIA Jetson series edge computing cards, facilitating the implementation of InternLM series LLMs for Offline Embodied Intelligence (OEI).

Latest News🎉

Community Recruitment

Verified model/platform

| Models | InternLM-7B | InternLM-20B | InternLM2-1.8B | InternLM2-7B | InternLM2-20B |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Orin AGX(32G)<br>Jetpack 5.1 | Mem:??/??<br>14.68 token/s | Mem:??/??<br>5.82 token/s | Mem:??/??<br>56.57 token/s | Mem:??/??<br>14.56 token/s | Mem:??/??<br>6.16 token/s |
| Orin NX(16G)<br>Jetpack 5.1 | Mem:8.6G/16G<br>7.39 token/s | Mem:14.7G/16G<br>3.08 token/s | Mem:5.6G/16G<br>22.96 token/s | Mem:9.2G/16G<br>7.48 token/s | Mem:14.8G/16G<br>3.19 token/s |
| Xavier NX(8G)<br>Jetpack 5.1 | - | - | Mem:4.35G/8G<br>28.36 token/s | - | - |

If you have more Jetson series boards, feel free to run benchmarks and submit the results via Pull Requests (PR) to become one of the community contributors!
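Before benchmarking a new board, a back-of-the-envelope estimate of the W4A16 weight footprint helps judge which models can plausibly fit. This is only a lower bound on memory use (the Mem column above also includes the KV cache and runtime buffers), and the parameter counts below are taken from the model names:

```python
def awq_weight_gib(n_params_billion, w_bits=4):
    """Rough lower bound on quantized weight size in GiB:
    parameters x bits-per-weight / 8 bits-per-byte."""
    return n_params_billion * 1e9 * w_bits / 8 / 2**30

# Weight-only estimates for the InternLM2 family on an 8 GB board.
for name, n in [("InternLM2-1.8B", 1.8), ("InternLM2-7B", 7.0), ("InternLM2-20B", 20.0)]:
    gib = awq_weight_gib(n)
    print(f"{name}: ~{gib:.2f} GiB of 4-bit weights")
```

Consistent with the table, only the 1.8B model leaves comfortable headroom on an 8 GB Xavier NX once the KV cache and runtime overhead are added on top of the weights.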

Future Work

Tutorial

S1. Quantize the model on a server with W4A16
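This step can be sketched with LMDeploy's AWQ tooling. The model id and work directory below are placeholders, and the flag set reflects the lmdeploy 0.2.x CLI; check `lmdeploy lite auto_awq --help` for your installed version:

```shell
# Run on an x86 GPU server, not on the Jetson itself.
# Quantize weights to 4-bit (W4A16) with AWQ; the output goes to --work-dir.
lmdeploy lite auto_awq internlm/internlm2-chat-7b \
    --w-bits 4 \
    --w-group-size 128 \
    --work-dir ./internlm2-chat-7b-4bit
```

Copy the resulting work directory to the Jetson once quantization finishes.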

S2. Install Miniconda on Jetson
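A minimal sketch of this step, assuming the standard Anaconda download location; Jetson boards are aarch64, so the aarch64 installer is required (Python 3.8 is an assumption matching the JetPack 5.x default):

```shell
# Download and run the aarch64 Miniconda installer.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
bash Miniconda3-latest-Linux-aarch64.sh

# Re-open the shell, then create a dedicated environment for LMDeploy.
conda create -n lmdeploy python=3.8 -y
conda activate lmdeploy
```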

S3. Install CMake 3.29.0 on Jetson
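One way to do this, assuming Kitware's usual release-asset naming: use the prebuilt aarch64 binary rather than compiling CMake on-device:

```shell
# Fetch the prebuilt aarch64 CMake release and unpack it under /opt.
wget https://github.com/Kitware/CMake/releases/download/v3.29.0/cmake-3.29.0-linux-aarch64.tar.gz
tar -xzf cmake-3.29.0-linux-aarch64.tar.gz
sudo mv cmake-3.29.0-linux-aarch64 /opt/cmake-3.29.0

# Put the new cmake first on PATH and verify the version.
echo 'export PATH=/opt/cmake-3.29.0/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
cmake --version
```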

S4. Install RapidJSON on Jetson
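RapidJSON is a header-only library, so installation amounts to copying headers into a system include path. A sketch using the upstream repository:

```shell
# Clone RapidJSON and install its headers (no compilation of the library itself).
git clone https://github.com/Tencent/rapidjson.git
cd rapidjson
mkdir build && cd build
cmake .. -DRAPIDJSON_BUILD_DOC=OFF \
         -DRAPIDJSON_BUILD_EXAMPLES=OFF \
         -DRAPIDJSON_BUILD_TESTS=OFF
sudo make install   # copies headers to /usr/local/include
```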

S5. Install PyTorch 2.1.0 on Jetson

S6. Port LMDeploy-0.2.5 to Jetson

S7. Run InternLM offline on Jetson
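Once the previous steps are done, inference needs no network access. A sketch of an interactive session, assuming the lmdeploy 0.2.x CLI shape and the placeholder work directory from S1 (the exact subcommand and flags vary between LMDeploy releases):

```shell
# Chat with the quantized model entirely on-device.
# The path is the W4A16 workspace produced in S1.
lmdeploy chat turbomind ./internlm2-chat-7b-4bit --model-format awq
```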

Appendix

Community Projects

Citation

If this project is helpful to your work, please cite it using the following format:

@misc{2024lmdeployjetson,
    title={LMDeploy-Jetson: Opening a new era of Offline Embodied Intelligence},
    author={LMDeploy-Jetson Community},
    url={https://github.com/BestAnHongjun/LMDeploy-Jetson},
    year={2024}
}

Acknowledgements