Image Animation Turbo Boost

<img src="https://img.shields.io/badge/python-3-lightgrey"> <img src="https://img.shields.io/badge/c%2B%2B-11-blue"> <img src="https://img.shields.io/badge/onnxruntime-1.9-orange"> <img src="https://img.shields.io/badge/openvino-2021.4-green"> <img src="https://img.shields.io/badge/tensorrt-8-yellowgreen">

This project aims to accelerate image-animation-model inference through inference frameworks such as ONNX Runtime, TensorRT, and OpenVINO.


https://github.com/TalkUHulk/Image-Animation-Turbo-Boost/assets/18474163/a6cbb56c-10b3-4328-b9ce-ccece2e92aad

https://github.com/TalkUHulk/Image-Animation-Turbo-Boost/assets/18474163/49ddfd0c-1f4d-4837-914e-4f1efaed1e92

FOMM

The model is taken from FOMM.

Convert

python export_onnx.py --output-name-kp kp_detector.onnx --output-name-fomm fomm.onnx --config config/vox-adv-256.yaml --ckpt ./checkpoints/vox-adv-cpk.pth.tar
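
The export produces two ONNX graphs. As an optional sanity check (not part of the repo scripts), they can be validated with the `onnx` package before converting them further; a minimal sketch, assuming the output names from the command above:

```python
# Optional sanity check of the exported graphs (helper sketch, not part of the repo).
import onnx

for path in ["kp_detector.onnx", "fomm.onnx"]:
    model = onnx.load(path)
    onnx.checker.check_model(model)  # raises if the graph is structurally invalid
    print(path, "inputs:", [inp.name for inp in model.graph.input])
```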

dev environment: docker pull chaoyiyuan/tensorrt8:latest

Run:

onnx2trt fomm.onnx -o fomm.trt
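
If the conversion succeeds, the resulting engine can be deserialized with the TensorRT 8 Python API to confirm it loads; a minimal sketch, assuming a working CUDA/TensorRT setup:

```python
# Verify the generated engine deserializes (sketch; assumes TensorRT 8 with CUDA available).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("fomm.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List the engine bindings (inputs and outputs).
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))
```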

Demo


TPSMM

The model is taken from TPSMM.

Convert

python export_onnx.py --output-name-kp kp_detector.onnx --output-name-tpsmm tpsmm.onnx --config config/vox-256.yaml --ckpt ./checkpoints/vox.pth.tar

dev environment: docker pull openvino/ubuntu18_dev:2021.4.2_src

python3 mo.py --input_model ./tpsmm.onnx  --output_dir ./openvino --data_type FP32
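
Model Optimizer writes an IR pair (`.xml`/`.bin`) into `./openvino`. A minimal sketch for loading it with the OpenVINO 2021.4 Inference Engine API, assuming the default output names `tpsmm.xml`/`tpsmm.bin`:

```python
# Load the converted IR with the 2021.4 Inference Engine API (sketch; file names assumed).
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="./openvino/tpsmm.xml", weights="./openvino/tpsmm.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
print("inputs:", list(net.input_info.keys()), "outputs:", list(net.outputs.keys()))
```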

Demo

ONNXRuntime

To test the Python demo, run:

python demo/ONNXRuntime/python/demo.py --source ../assets/source.png --driving ../assets/driving.mp4 --onnx-file-tpsmm tpsmm.onnx --onnx-file-kp kp_detector.onnx
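
Under the hood the demo is standard ONNX Runtime session usage; a minimal sketch of creating a session and inspecting its inputs (the dummy input shape is an assumption, check `get_inputs()` for the real one):

```python
# Minimal ONNX Runtime usage sketch (not the full demo; input shape is an assumption).
import numpy as np
import onnxruntime as ort

kp_sess = ort.InferenceSession("kp_detector.onnx", providers=["CPUExecutionProvider"])

# Inspect the real input names/shapes instead of guessing them.
for inp in kp_sess.get_inputs():
    print(inp.name, inp.shape, inp.type)

# Feed a dummy 1x3x256x256 frame; the demo feeds the source image and driving frames.
dummy = np.random.rand(1, 3, 256, 256).astype(np.float32)
outputs = kp_sess.run(None, {kp_sess.get_inputs()[0].name: dummy})
```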

To test the C++ demo, run:

mkdir build && cd build
cmake ..
make -j8
./onnx_demo xxx/tpsmm.onnx xxx/kp_detector.onnx xxx/source.png xxx/driving.mp4 ./generated_onnx.mp4

OpenVINO

To test the Python demo, run:

python demo/OpenVINO/python/demo.py --source ../assets/source.png --driving ../assets/driving.mp4 --xml-kp xxxx/kp_detector_sim.xml --xml-tpsmm xxx/tpsmm_sim.xml --bin-kp xxx/kp_detector_sim.bin --bin-tpsmm xxx/tpsmm_sim.bin
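
The OpenVINO demo follows the same Inference Engine pattern shown above, with a synchronous `infer` call per frame; a minimal sketch using the `_sim` file names from the command (input name and shape are assumptions):

```python
# Per-frame inference sketch with the 2021.4 API (file and input names are assumptions).
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="kp_detector_sim.xml", weights="kp_detector_sim.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))           # first declared input
frame = np.random.rand(1, 3, 256, 256).astype(np.float32)
result = exec_net.infer(inputs={input_name: frame})
print({k: v.shape for k, v in result.items()})
```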

To test the C++ demo, run:

mkdir build && cd build
cmake ..
make -j8
./openvino_demo xxx/tpsmm.xml xxx/tpsmm.bin xxx/kp_detector.xml xxx/kp_detector.bin xxx/source.png xxx/driving.mp4 ./generated_onnx.mp4

Result

| Framework     | Elapsed (s) | Language |
|---------------|-------------|----------|
| PyTorch (CPU) | 6           | Python   |
| ONNXRuntime   | ~1.2        | Python   |
| ONNXRuntime   | ~1.6        | C++      |
| OpenVINO      | ~0.6        | Python   |
| OpenVINO      | ~0.6        | C++      |

The ONNX Runtime C++ demo is slower than the Python one; this may be related to the ONNX Runtime libraries I compiled myself.

<p align="center"> <img src="images/generated_py_onnx.gif" width="640px"/> <br> Generated by Python ONNX Runtime.</p>
<p align="center"> <img src="images/generated_py_opv.gif" width="640px"/> <br> Generated by Python OpenVINO.</p>
<p align="center"> <img src="images/generated_cpp_onnx.gif" width="640px"/> <br> Generated by C++ ONNX Runtime.</p>
<p align="center"> <img src="images/generated_cpp_opv.gif" width="640px"/> <br> Generated by C++ OpenVINO.</p>

To Do

Conversion to TensorRT currently fails, possibly because the scatter ops are not supported. According to the related issues, this should be fixed in TensorRT 8.4 GA.

Pretrained Models

Please download the pre-trained models from the following links.

| Path           | Description                        |
|----------------|------------------------------------|
| FOMM           | Original pretrained PyTorch model. |
| TPSMM          | Original pretrained PyTorch model. |
| FOMM Onnx      | ONNX model of FOMM.                |
| FOMM TensorRT  | TensorRT model of FOMM.            |
| TPSMM Onnx     | ONNX model of TPSMM.               |
| TPSMM OpenVINO | OpenVINO model of TPSMM.           |

Acknowledgments

FOMM is AliaksandrSiarohin's work.

TPSMM is yoyo-nb's work.

Thanks for their excellent work!

My work modifies part of the network and enables the models to be converted to ONNX, OpenVINO, or TensorRT.