# Image Animation Turbo Boost
<img src="https://img.shields.io/badge/python-3-lightgrey"> <img src="https://img.shields.io/badge/c%2B%2B-11-blue"> <img src="https://img.shields.io/badge/onnxruntime-1.9-orange"> <img src="https://img.shields.io/badge/openvino-2021.4-green"> <img src="https://img.shields.io/badge/tensorrt-8-yellowgreen">
This project aims to accelerate image-animation-model inference through inference frameworks such as ONNX Runtime, TensorRT, and OpenVINO.
## FOMM
The model comes from FOMM.
### Convert
- Convert to onnx:

  ```shell
  python export_onnx.py --output-name-kp kp_detector.onnx --output-name-fomm fomm.onnx --config config/vox-adv-256.yaml --ckpt ./checkpoints/vox-adv-cpk.pth.tar
  ```
- Convert to trt:

  Dev environment: `docker pull chaoyiyuan/tensorrt8:latest`

  Run:

  ```shell
  onnx2trt fomm.onnx -o fomm.trt
  ```
### Demo
## TPSMM
The model comes from TPSMM.
### Convert
- Convert to onnx:

  ```shell
  python export_onnx.py --output-name-kp kp_detector.onnx --output-name-tpsmm tpsmm.onnx --config config/vox-256.yaml --ckpt ./checkpoints/vox.pth.tar
  ```
- Convert to openvino:

  Dev environment: `docker pull openvino/ubuntu18_dev:2021.4.2_src`

  ```shell
  python3 mo.py --input_model ./tpsmm.onnx --output_dir ./openvino --data_type FP32
  ```
### Demo
#### ONNXRuntime
To test the Python demo, run:

```shell
python demo/ONNXRuntime/python/demo.py --source ../assets/source.png --driving ../assets/driving.mp4 --onnx-file-tpsmm tpsmm.onnx --onnx-file-kp kp_detector.onnx
```
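The demos feed the source image and driving frames to the networks as normalized NCHW tensors. A minimal sketch of that preprocessing, assuming a 256×256 input in [0, 1] (the function name here is illustrative, not the repo's actual helper):

```python
import numpy as np

def preprocess(frame_hwc_uint8):
    """Turn an HxWx3 uint8 frame into a 1x3xHxW float32 tensor in [0, 1].

    This mirrors the input layout typically expected by the exported
    kp_detector/tpsmm ONNX models (an assumption, not verified here).
    """
    img = frame_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    img = np.transpose(img, (2, 0, 1))                # HWC -> CHW
    return img[np.newaxis]                            # add batch dim -> NCHW

# Example with a dummy 256x256 RGB frame:
frame = np.zeros((256, 256, 3), dtype=np.uint8)
tensor = preprocess(frame)
print(tensor.shape)  # (1, 3, 256, 256)
```

The resulting array can be passed directly as an input to an `onnxruntime.InferenceSession.run` call.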
To test the C++ demo:

- Build:

  ```shell
  mkdir build && cd build
  cmake ..
  make -j8
  ```

- Run:

  ```shell
  ./onnx_demo xxx/tpsmm.onnx xxx/kp_detector.onnx xxx/source.png xxx/driving.mp4 ./generated_onnx.mp4
  ```
#### OpenVINO
To test the Python demo, run:

```shell
python demo/OpenVINO/python/demo.py --source ../assets/source.png --driving ../assets/driving.mp4 --xml-kp xxxx/kp_detector_sim.xml --xml-tpsmm xxx/tpsmm_sim.xml --bin-kp xxx/kp_detector_sim.bin --bin-tpsmm xxx/tpsmm_sim.bin
```
To test the C++ demo:

- Build:

  ```shell
  mkdir build && cd build
  cmake ..
  make -j8
  ```

- Run:

  ```shell
  ./openvino_demo xxx/tpsmm.xml xxx/tpsmm.bin xxx/kp_detector.xml xxx/kp_detector.bin xxx/source.png xxx/driving.mp4 ./generated_onnx.mp4
  ```
## Result
| Framework | Elapsed (s) | Language |
| --- | --- | --- |
| PyTorch (CPU) | 6 | Python |
| ONNXRuntime | ~1.2 | Python |
| ONNXRuntime | ~1.6 | C++ |
| OpenVINO | ~0.6 | Python |
| OpenVINO | ~0.6 | C++ |
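From the table, OpenVINO runs roughly 10× faster than PyTorch on CPU (6 s vs. ~0.6 s), and the Python ONNXRuntime demo roughly 5× faster. A quick check of that arithmetic:

```python
# Elapsed times in seconds, taken from the results table above.
elapsed = {
    "PyTorch (CPU)": 6.0,
    "ONNXRuntime (Python)": 1.2,
    "OpenVINO (Python)": 0.6,
}

baseline = elapsed["PyTorch (CPU)"]
# Speedup factor of each framework relative to the PyTorch CPU baseline.
speedups = {name: baseline / t for name, t in elapsed.items()}

for name, s in speedups.items():
    print(f"{name}: {s:.1f}x vs PyTorch (CPU)")
```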
<p align="center"> <img src="images/generated_py_onnx.gif" width="640px"/> <br> Generated by Python ONNXRuntime.</p>
<p align="center"> <img src="images/generated_py_opv.gif" width="640px"/> <br> Generated by Python OpenVINO.</p>
<p align="center"> <img src="images/generated_cpp_onnx.gif" width="640px"/> <br> Generated by C++ ONNXRuntime.</p>
<p align="center"> <img src="images/generated_cpp_opv.gif" width="640px"/> <br> Generated by C++ OpenVINO.</p>

The ONNXRuntime C++ demo is slower than the Python one; this may be related to the libraries I compiled myself.
## To Do
Conversion to TensorRT fails, possibly because the scatter ops are not supported. According to the related issues, this should be fixed in TensorRT 8.4 GA.
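For context, a scatter op writes update values into a tensor at given indices, and this indexed-write pattern is what the TensorRT parser rejects here. A minimal NumPy sketch of the semantics (illustrative only; which exact scatter variant appears in the exported graph is not confirmed here):

```python
import numpy as np

# Scatter semantics: out[indices[i]] = updates[i] for each i.
data = np.zeros(5, dtype=np.float32)
indices = np.array([0, 2, 4])
updates = np.array([1.0, 2.0, 3.0], dtype=np.float32)

out = data.copy()
out[indices] = updates  # indexed write, the core of a scatter op
print(out)  # [1. 0. 2. 0. 3.]
```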
## Pretrained Models
Please download the pre-trained models from the following links.
| Path | Description |
| --- | --- |
| FOMM | Original pretrained PyTorch model. |
| TPSMM | Original pretrained PyTorch model. |
| FOMM Onnx | ONNX model of FOMM. |
| FOMM TensorRT | TensorRT model of FOMM. |
| TPSMM Onnx | ONNX model of TPSMM. |
| TPSMM OpenVINO | OpenVINO model of TPSMM. |
## Acknowledgments
FOMM is AliaksandrSiarohin's work.
TPSMM is yoyo-nb's work.
Thanks for their excellent work!

My contribution is modifying part of the networks so that the models can be converted to ONNX, OpenVINO, or TensorRT.