
<div id="lite.ai.toolkit-Introduction"></div>


<div align='center'> <img src=https://img.shields.io/badge/Linux-pass-brightgreen.svg > <img src=https://img.shields.io/badge/Device-GPU/CPU-yellow.svg > <img src=https://img.shields.io/badge/ONNXRuntime-1.17.1-turquoise.svg > <img src=https://img.shields.io/badge/MNN-2.8.2-hotpink.svg > <img src=https://img.shields.io/github/stars/DefTruth/lite.ai.toolkit.svg?style=social > <img src=https://img.shields.io/github/downloads/DefTruth/lite.ai.toolkit/total?color=ccf&label=downloads&logo=github&logoColor=lightgrey > </div>

πŸ› Lite.Ai.ToolKit: A lite C++ toolkit of awesome AI models, such as Object Detection, Face Detection, Face Recognition, Segmentation, Matting, etc. See Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub.

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/5b28aed1-e207-4256-b3ea-3b52f9e68aed' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/28274741-8745-4665-abff-3a384b75f7fa' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/c802858c-6899-4246-8839-5721c43faffe' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/20a18d56-297c-4c72-8153-76d4380fc9ec' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/f4dd5263-8514-4bb0-a0dd-dbe532481aff' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/b6a431d2-225b-416b-8a1e-cf9617d79a63' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/84d3ed6a-b711-4c0a-8e92-a2da05a0d04e' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/157b9e11-fc92-445b-ae0d-0d859c8663ee' height="80px" width="80px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/ef0eeabe-6dbe-4837-9aad-b806a8398697' height="80px" width="80px"> </div>

News πŸ‘‡πŸ‘‡

Most of my time now is focused on LLM/VLM Inference. Please check πŸ“–Awesome-LLM-Inference , πŸ“–Awesome-SD-Inference and πŸ“–CUDA-Learn-Notes for more details. Now, lite.ai.toolkit is mainly maintained by πŸŽ‰@wangzijian1010.

Citations πŸŽ‰πŸŽ‰

@misc{lite.ai.toolkit@2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={DefTruth and wangzijian1010 and others},
  year={2021}
}

Features πŸ‘πŸ‘‹

Build πŸ‘‡πŸ‘‡

Download the prebuilt lite.ai.toolkit library from tag/v0.2.0, or build it from source:

git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest
cd lite.ai.toolkit && sh ./build.sh # >= 0.2.0, support Linux only, tested on Ubuntu 20.04.6 LTS

Quick Start 🌟🌟

<div id="lite.ai.toolkit-Quick-Start"></div>

Example0: Object Detection using YOLOv5. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  delete yolov5;
  return 0;
}

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

export LITE_AI_TAG_URL=https://github.com/DefTruth/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz

Quick Setup πŸ‘€

To quickly set up lite.ai.toolkit, you can follow the CMakeLists.txt listed below. πŸ‘‡πŸ‘€

set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})

Mixed with MNN or ONNXRuntime πŸ‘‡πŸ‘‡

lite.ai.toolkit does not aim to build an abstraction on top of MNN and ONNXRuntime, so you can mix it with MNN (-DENABLE_MNN=ON, default OFF) or ONNXRuntime (-DENABLE_ONNXRUNTIME=ON, default ON) directly. The lite.ai.toolkit installation package ships the complete MNN and ONNXRuntime libraries. The workflow might look like:

#include "lite/lite.h"
// 0. use yolov5 from lite.ai.toolkit to detect objs.
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
// 1. use OnnxRuntime or MNN to implement your own classfier.
interpreter = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(mnn_path));
// or: session = new Ort::Session(ort_env, onnx_path, session_options);
classfier = interpreter->createSession(schedule_config);
// 2. then, classify the detected objs use your own classfier ...

The included headers of MNN and ONNXRuntime can be found at mnn_config.h and ort_config.h.
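Putting the pieces together, a minimal sketch of the detect-then-classify workflow could look like the snippet below. The run_my_classifier hook is hypothetical (plug in your own MNN or ONNXRuntime model there), and the Boxf corner fields x1/y1/x2/y2 are assumptions — check lite/types.h for the actual layout.

#include "lite/lite.h"
#include <opencv2/opencv.hpp>

// Hypothetical hook: wrap your own MNN/ONNXRuntime classifier here.
static int run_my_classifier(const cv::Mat &crop) { (void) crop; return -1; }

static void detect_then_classify(const std::string &onnx_path,
                                 const std::string &img_path) {
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(img_path);
  yolov5->detect(img_bgr, detected_boxes);

  for (const auto &box : detected_boxes) {
    // Assumed Boxf layout: (x1, y1) top-left corner, (x2, y2) bottom-right.
    cv::Rect roi(static_cast<int>(box.x1), static_cast<int>(box.y1),
                 static_cast<int>(box.x2 - box.x1),
                 static_cast<int>(box.y2 - box.y1));
    roi &= cv::Rect(0, 0, img_bgr.cols, img_bgr.rows); // clamp to image bounds
    if (roi.area() <= 0) continue;
    int label = run_my_classifier(img_bgr(roi).clone()); // classify the crop
    (void) label; // use the predicted label as needed
  }
  delete yolov5;
}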

<details> <summary> πŸ”‘οΈ Check the detailed Quick Start!Click here! </summary>

Download resources

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

export LITE_AI_TAG_URL=https://github.com/DefTruth/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz

Write test code

Write the YOLOv5 example code and save it as test_lite_yolov5.cpp:

#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  delete yolov5;
  return 0;
}

Setup CMakeLists.txt

cmake_minimum_required(VERSION 3.10)
project(lite_yolov5)
set(CMAKE_CXX_STANDARD 17)

set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
if (lite.ai.toolkit_FOUND)
    message(STATUS "lite.ai.toolkit_INCLUDE_DIRS: ${lite.ai.toolkit_INCLUDE_DIRS}")
    message(STATUS "        lite.ai.toolkit_LIBS: ${lite.ai.toolkit_LIBS}")
    message(STATUS "   lite.ai.toolkit_LIBS_DIRS: ${lite.ai.toolkit_LIBS_DIRS}")
endif()
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})

Build example

mkdir build && cd build && cmake .. && make -j1

Then, export the library paths listed in lite.ai.toolkit_LIBS_DIRS to LD_LIBRARY_PATH.

export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/opencv/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/onnxruntime/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/MNN/lib:$LD_LIBRARY_PATH # if -DENABLE_MNN=ON

Run the binary:

cp ../yolov5s.onnx ../test_yolov5.jpg .
./lite_yolov5

The output logs:

LITEORT_DEBUG LogId: ../examples/hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
Name: images
Dims: 1
Dims: 3
Dims: 640
Dims: 640
=============== Output-Dims ==============
Output: 0 Name: pred Dim: 0 :1
Output: 0 Name: pred Dim: 1 :25200
Output: 0 Name: pred Dim: 2 :85
Output: 1 Name: output2 Dim: 0 :1
......
Output: 3 Name: output4 Dim: 1 :3
Output: 3 Name: output4 Dim: 2 :20
Output: 3 Name: output4 Dim: 3 :20
Output: 3 Name: output4 Dim: 4 :85
========================================
detected num_anchors: 25200
generate_bboxes num: 48

Here 25200 = 3 × (80² + 40² + 20²) anchor predictions for the 640×640 input, and each prediction carries 85 values: 4 box coordinates, 1 objectness score, and 80 COCO class scores.
</details> <div id="lite.ai.toolkit-Supported-Models-Matrix"></div> <!-- <details> <summary> πŸ”‘οΈ Supported Models Matrix!Click here! </summary> -->

Supported Models Matrix

| Class | Size | Type | Demo | ONNXRuntime | MNN | NCNN | TNN | Linux | MacOS | Windows | Android |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YoloV5 | 28M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV3 | 236M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| TinyYoloV3 | 33M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloV4 | 176M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSD | 76M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSDMobileNetV1 | 27M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloX | 3.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| TinyYoloV4VOC | 22M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| TinyYoloV4COCO | 22M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloR | 39M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ScaledYoloV4 | 270M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDet | 15M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDetD7 | 220M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDetD8 | 322M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YOLOP | 30M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDet | 1.1M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDetPlus | 4.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDetEffi... | 12M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloX_V_0_1_1 | 3.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV5_V_6_0 | 7.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GlintArcFace | 92M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GlintCosFace | 92M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| GlintPartialFC | 170M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FaceNet | 89M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FocalArcFace | 166M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FocalAsiaArcFace | 166M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| TencentCurricularFace | 249M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| TencentCifpFace | 130M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| CenterLossFace | 280M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SphereFace | 80M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| PoseRobustFace | 92M | faceid | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| NaivePoseRobustFace | 43M | faceid | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileFaceNet | 3.8M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| CavaGhostArcFace | 15M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| CavaCombinedFace | 250M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileSEFocalFace | 4.5M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| RobustVideoMatting | 14M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MGMatting | 113M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MODNet | 24M | matting | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MODNetDyn | 24M | matting | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| BackgroundMattingV2 | 20M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| BackgroundMattingV2Dyn | 20M | matting | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| UltraFace | 1.1M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| RetinaFace | 1.6M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceBoxes | 3.8M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceBoxesV2 | 3.8M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| SCRFD | 2.5M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YOLO5Face | 4.8M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD | 1.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD98 | 4.8M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileNetV268 | 9.4M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileNetV2SE68 | 11M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD68 | 2.8M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceLandmark1000 | 2.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet98 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet68 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet29 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet19 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FSANet | 1.2M | face::pose | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| AgeGoogleNet | 23M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GenderGoogleNet | 23M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EmotionFerPlus | 33M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| VGG16Age | 514M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| VGG16Gender | 512M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSRNet | 190K | face::attr | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EfficientEmotion7 | 15M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EfficientEmotion8 | 15M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileEmotion7 | 13M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ReXNetEmotion7 | 30M | face::attr | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientNetLite4 | 49M | classification | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| ShuffleNetV2 | 8.7M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| DenseNet121 | 30.7M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| GhostNet | 20M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| HdrDNet | 13M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| IBNNet | 97M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileNetV2 | 13M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ResNet | 44M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| ResNeXt | 95M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| DeepLabV3ResNet101 | 232M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FCNResNet101 | 207M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FastStyleTransfer | 6.4M | style | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| Colorizer | 123M | colorization | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SubPixelCNN | 234K | resolution | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| InsectDet | 27M | detection | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| InsectID | 22M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | βœ”οΈ |
| PlantID | 30M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | βœ”οΈ |
| YOLOv5BlazeFace | 3.4M | face::detect | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV5_V_6_1 | 7.5M | detection | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| HeadSeg | 31M | segmentation | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FemalePhoto2Cartoon | 15M | style | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FastPortraitSeg | 400k | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PortraitSegSINet | 380k | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PortraitSegExtremeC3Net | 180k | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceHairSeg | 18M | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| HairSeg | 18M | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileHumanMatting | 3M | matting | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileHairSeg | 14M | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YOLOv6 | 17M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceParsingBiSeNet | 50M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceParsingBiSeNetDyn | 50M | segmentation | demo | βœ… | / | / | / | / | βœ”οΈ | βœ”οΈ | ❔ |
<!-- </details> --> <div id="lite.ai.toolkit-Model-Zoo"></div> <details> <summary> πŸ”‘οΈ Model Zoo! Click here! </summary>

Model Zoo.

<div id="lite.ai.toolkit-2"></div>

Lite.Ai.ToolKit now contains 100+ AI models with 500+ frozen pretrained files; most of the files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found at Examples for Lite.Ai.ToolKit. Note that I cannot upload all of the *.onnx files to Google Drive because of its storage limit (15G).
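As a quick illustration of that naming convention, the minimal sketch below instantiates three classes from the matrix above; the class names and model files come from this document, while the local paths are placeholders.

#include "lite/lite.h"

int main() {
  // Every model follows the lite::cv::<Type>::<Class> pattern.
  auto *detector   = new lite::cv::detection::YoloV5("yolov5s.onnx");
  auto *recognizer = new lite::cv::faceid::GlintArcFace("ms1mv3_arcface_r100.onnx");
  auto *segmenter  = new lite::cv::segmentation::HeadSeg("minivision_head_seg.onnx");
  delete detector;
  delete recognizer;
  delete segmenter;
  return 0;
}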

| File | Baidu Drive | Google Drive | Docker Hub | Hub (Docs) |
| --- | --- | --- | --- | --- |
| ONNX | Baidu Drive (code: 8gin) | Google Drive | ONNX Docker v0.1.22.01.08 (28G), v0.1.22.02.02 (400M) | ONNX Hub |
| MNN | Baidu Drive (code: 9v63) | ❔ | MNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (213M) | MNN Hub |
| NCNN | Baidu Drive (code: sc7f) | ❔ | NCNN Docker v0.1.22.01.08 (9G), v0.1.22.02.02 (197M) | NCNN Hub |
| TNN | Baidu Drive (code: 6o6k) | ❔ | TNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (217M) | TNN Hub |
  docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08  # (28G)
  docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08   # (11G)
  docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08  # (9G)
  docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08   # (11G)
  docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.02.02  # (400M) + YOLO5Face
  docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.02.02   # (213M) + YOLO5Face
  docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.02.02  # (197M) + YOLO5Face
  docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02   # (217M) + YOLO5Face

πŸ”‘οΈ How to download Model Zoo from Docker Hub?

Model Hubs

The pretrained and converted ONNX files provided by lite.ai.toolkit are listed as follows. Also, see Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub for more details.

</details> <div id="lite.ai.toolkit-Examples-for-Lite.AI.ToolKit"></div> <details> <summary> πŸ”‘οΈ More Examples! Click here! </summary>

πŸ”‘οΈ More Examples.

More examples can be found at examples.

<div id="lite.ai.toolkit-object-detection"></div>

Example0: Object Detection using YOLOv5. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolov5;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/44dbf4ac-0f38-41b6-930b-55b032b3c2ee' height="256px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/23aca3df-76a6-45c0-a48b-7968b4d4b9d8' height="256px"> </div>

Or you can use the newest πŸ”₯πŸ”₯ YOLO-series detectors, YOLOX or YoloR, which achieve similar results.

More classes for general object detection (80 classes, COCO).

auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5(onnx_path); 
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path); 
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDet(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path); 
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetPlus(onnx_path); // Super fast and tiny! 2021/12/25
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::YoloV5_V_6_0(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5_V_6_1(onnx_path); 
auto *detector = new lite::cv::detection::YoloX_V_0_1_1(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YOLOv6(onnx_path);  // Newest 2022 YOLO detector !!!

<div id="lite.ai.toolkit-matting"></div>

Example1: Video Matting using RobustVideoMatting (2021) πŸ”₯πŸ”₯πŸ”₯. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../examples/logs/test_lite_rvm_0.mp4";
  std::string background_path = "../../../examples/lite/resources/test_lite_matting_bgr.jpg";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  cv::Mat background = cv::imread(background_path);
  rvm->detect_video(video_path, output_path, contents, false, 0.4f,
                    20, true, true, background);
  
  delete rvm;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/a6da4814-6643-4dfc-89ce-57f140c999fc' height="150px" width="150px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/9e4f060e-3de8-44c4-a20f-74a0ff3943bb' height="150px" width="150px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/42bb2991-333a-4524-b874-6ab6156b3425' height="150px" width="150px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/f8d65d8c-2a3d-4634-9169-3bc36452d997' height="150px" width="150px"> <br> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/c1411bb7-5537-4d6e-81f7-c902c2256a72' height="150px" width="150px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/6344f307-15e3-4593-9866-50f5ee777f43' height="150px" width="150px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/4d824828-7727-48df-8aae-64e15ca1c03b' height="150px" width="150px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/e8102fd6-e869-4a42-a19f-dd6d180dda92' height="150px" width="150px"> </div>

More classes for matting (image matting, video matting, trimap/mask-free, trimap/mask-based)

auto *matting = new lite::cv::matting::RobustVideoMatting(onnx_path);  // WACV 2022.
auto *matting = new lite::cv::matting::MGMatting(onnx_path); // CVPR 2021
auto *matting = new lite::cv::matting::MODNet(onnx_path); // AAAI 2022
auto *matting = new lite::cv::matting::MODNetDyn(onnx_path); // AAAI 2022 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::BackgroundMattingV2(onnx_path); // CVPR 2020 
auto *matting = new lite::cv::matting::BackgroundMattingV2Dyn(onnx_path); // CVPR 2020 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::MobileHumanMatting(onnx_path); // 3Mb only !!!

<div id="lite.ai.toolkit-face-alignment"></div>

Example2: 1000 Facial Landmarks Detection using FaceLandmark1000. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../examples/logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/318691ec-7226-4d55-990b-a320635d8910' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/d64ae30e-a0b9-4ac9-bf4f-9d6f80c2c05a' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/c802858c-6899-4246-8839-5721c43faffe' height="224px" width="224px"> </div>

More classes for face alignment (68 points, 98 points, 106 points, 1000 points)

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path);  // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path);  // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path);  // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path);  // 19 landmarks, CVPR2021!

<div id="lite.ai.toolkit-colorization"></div>

Example3: Colorization using Colorizer. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/022dd4ab-1048-4d51-8e84-f839464d013e' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/8eceb121-6da6-41d9-8dbf-949034f27247' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/54a9b811-d21b-4120-8381-df0f858dba8b' height="224px" width="224px"> <br> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/95f265a8-ca85-4df1-b2f0-04e1dd3d8fff' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/34bd3bcb-377a-47a0-b8bf-a44603f9b275' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/e1eff9e9-9f3c-4558-8826-d05c0c254e29' height="224px" width="224px"> </div>

More classes for colorization (gray to rgb)

auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);

<div id="lite.ai.toolkit-face-recognition"></div>

Example4: Face Recognition using ArcFace. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/8311a1e0-1945-4a70-a361-c15a6e55baab' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/c1619f3f-cb12-4607-9e72-4a9f9224ef09' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/21859087-8458-4be6-b1ab-f20c1546e310' height="224px" width="224px"> </div>

Detected Sim01: 0.721159 Sim02: -0.0626267
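For reference, the cosine similarity used above is just a normalized dot product over the two embeddings. A minimal standalone sketch follows; lite::utils::math::cosine_similarity should behave equivalently, and the epsilon guard is my own addition.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// cos(a, b) = (a · b) / (|a| * |b|), in [-1, 1]; higher means more similar faces.
static float cosine_similarity_sketch(const std::vector<float> &a,
                                      const std::vector<float> &b) {
  float dot = 0.f, norm_a = 0.f, norm_b = 0.f;
  const std::size_t n = std::min(a.size(), b.size());
  for (std::size_t i = 0; i < n; ++i) {
    dot    += a[i] * b[i];
    norm_a += a[i] * a[i];
    norm_b += b[i] * b[i];
  }
  // A tiny epsilon guards against division by zero for degenerate embeddings.
  return dot / (std::sqrt(norm_a) * std::sqrt(norm_b) + 1e-12f);
}

With the values above, 0.721159 for the same identity versus -0.0626267 for different identities shows why a simple threshold on this score works for face verification.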

More classes for face recognition (face id vector extract)

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !

<div id="lite.ai.toolkit-face-detection"></div>

Example5: Face Detection using SCRFD 2021. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_detector.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_scrfd.jpg";
  
  auto *scrfd = new lite::cv::face::detect::SCRFD(onnx_path);
  
  std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  scrfd->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  
  delete scrfd;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/b913c502-93fc-4a29-8114-9a3450c512f0' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/28274741-8745-4665-abff-3a384b75f7fa' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/a0bc4d9f-df88-4757-bcfb-214f2c1d4991' height="224px" width="224px"> </div>

More classes for face detection (super fast face detection)

auto *detector = new lite::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only ! 
auto *detector = new lite::face::detect::FaceBoxesv2(onnx_path);  // 4.0Mb only ! 
auto *detector = new lite::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
auto *detector = new lite::face::detect::SCRFD(onnx_path);  // 2.5Mb only ! CVPR2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLO5Face(onnx_path);  // 2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLOv5BlazeFace(onnx_path);  // 2021, Super fast and accurate!!

<div id="lite.ai.toolkit-segmentation"></div>

Example6: Object Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../examples/logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/cf143f25-a233-40f1-a4b9-7ad52f691799' height="256px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/f4dd5263-8514-4bb0-a0dd-dbe532481aff' height="256px"> </div>

More classes for object segmentation (general objects segmentation)

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
auto *segment = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path);

<div id="lite.ai.toolkit-face-attributes-analysis"></div>

Example7: Age Estimation using SSRNet. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_ssrnet.jpg";

  auto *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);

  delete ssrnet;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/6dc688d9-95be-40f3-b9b8-1a2f69e39e1d' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/77089174-f744-4603-b417-c23caeb344d7' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/8f279483-5849-4356-885c-5806337ced2a' height="224px" width="224px"> </div>

More classes for face attributes analysis (age, gender, emotion)

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions, 13Mb only!
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::SSRNet(onnx_path); // age estimation, 190kb only!!!

<div id="lite.ai.toolkit-image-classification"></div>

Example8: 1000 Classes Classification using DenseNet. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/58e3b905-367d-486a-a3b6-062cef87d726' height="224px" width="350px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/cf81d298-5903-4d3c-ad06-14882911c006' height="224px" width="350px"> </div>

More classes for image classification (1000 classes)

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); // 8.7Mb only!
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); // 13Mb only!
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);

<div id="lite.ai.toolkit-head-pose-estimation"></div>

Example9: Head Pose Estimation using FSANet. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/20a18d56-297c-4c72-8153-76d4380fc9ec' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/6630a13b-af81-4606-8a81-37fb416f0a64' height="224px" width="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/fb873266-2cfd-4b08-8ffb-639aee1ca2b6' height="224px" width="224px"> </div>

More classes for head pose estimation (euler angle, yaw, pitch, roll)

auto *pose = new lite::cv::face::pose::FSANet(onnx_path); // 1.2Mb only!

<div id="lite.ai.toolkit-style-transfer"></div>

Example10: Style Transfer using FastStyleTransfer. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/c42af6ea-0b3a-4816-902a-9958fdef5653' height="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/bbaa6e7e-50c0-4993-b6e9-aee681e61fdb' height="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/95106e7e-c6bc-433d-b20c-95b579e85a06' height="224px"> <br> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/13a17444-27a4-4153-a6ee-5fff0a7fc667' height="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/6e2c1d8b-f4a2-4433-b31b-b60f381344c1' height="224px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/9f3f706a-50b7-43e4-8631-13ffa9b12fb5' height="224px"> </div>

More classes for style transfer (neural style transfer, others)

auto *transfer = new lite::cv::style::FastStyleTransfer(onnx_path); // 6.4Mb only

Example11: Human Head Segmentation using HeadSeg. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/minivision_head_seg.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_head_seg.png";
  std::string save_img_path = "../../../examples/logs/test_lite_head_seg.jpg";

  auto *head_seg = new lite::cv::segmentation::HeadSeg(onnx_path, 4); // 4 threads

  lite::types::HeadSegContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  head_seg->detect(img_bgr, content);
  if (content.flag) cv::imwrite(save_img_path, content.mask * 255.f); // mask values are in [0, 1]; scale to [0, 255] for saving

  delete head_seg;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/5684e1d9-b3b1-45af-ac38-d9201490d46e' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/b6a431d2-225b-416b-8a1e-cf9617d79a63' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/ff9740a5-a70e-400c-8301-fc19c92c6248' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/62747811-3856-4f40-9057-9ec4db687b31' height="180px" width="180px"> </div>

More classes for human segmentation (head, portrait, hair, others)

auto *segment = new lite::cv::segmentation::HeadSeg(onnx_path); // 31Mb
auto *segment = new lite::cv::segmentation::FastPortraitSeg(onnx_path); // <= 400Kb !!! 
auto *segment = new lite::cv::segmentation::PortraitSegSINet(onnx_path); // <= 380Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegExtremeC3Net(onnx_path); // <= 180Kb !!! Extreme Tiny !!!
auto *segment = new lite::cv::segmentation::FaceHairSeg(onnx_path); // 18M
auto *segment = new lite::cv::segmentation::HairSeg(onnx_path); // 18M
auto *segment = new lite::cv::segmentation::MobileHairSeg(onnx_path); // 14M

Example12: Photo to Cartoon style transfer using Photo2Cartoon. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string head_seg_onnx_path = "../../../examples/hub/onnx/cv/minivision_head_seg.onnx";
  std::string cartoon_onnx_path = "../../../examples/hub/onnx/cv/minivision_female_photo2cartoon.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_female_photo2cartoon.jpg";
  std::string save_mask_path = "../../../examples/logs/test_lite_female_photo2cartoon_seg.jpg";
  std::string save_cartoon_path = "../../../examples/logs/test_lite_female_photo2cartoon_cartoon.jpg";

  auto *head_seg = new lite::cv::segmentation::HeadSeg(head_seg_onnx_path, 4); // 4 threads
  auto *female_photo2cartoon = new lite::cv::style::FemalePhoto2Cartoon(cartoon_onnx_path, 4); // 4 threads

  lite::types::HeadSegContent head_seg_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  head_seg->detect(img_bgr, head_seg_content);

  if (head_seg_content.flag && !head_seg_content.mask.empty())
  {
    cv::imwrite(save_mask_path, head_seg_content.mask * 255.f);
    // Female Photo2Cartoon Style Transfer
    lite::types::FemalePhoto2CartoonContent female_cartoon_content;
    female_photo2cartoon->detect(img_bgr, head_seg_content.mask, female_cartoon_content);
    
    if (female_cartoon_content.flag && !female_cartoon_content.cartoon.empty())
      cv::imwrite(save_cartoon_path, female_cartoon_content.cartoon);
  }

  delete head_seg;
  delete female_photo2cartoon;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/5684e1d9-b3b1-45af-ac38-d9201490d46e' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/1970b922-6027-44b3-9211-9f057e2ce003' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/73494f60-9efd-48cb-a993-5a5837badb12' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/e10e9624-3176-4137-988b-c73be6103fed' height="180px" width="180px"> </div>

More classes for photo style transfer.

auto *transfer = new lite::cv::style::FemalePhoto2Cartoon(onnx_path);

Example13: Face Parsing using FaceParsing. Download model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/face_parsing_512x512.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_parsing.png";
  std::string save_img_path = "../../../examples/logs/test_lite_face_parsing_bisenet.jpg";

  auto *face_parsing_bisenet = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path, 8); // 8 threads

  lite::types::FaceParsingContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_parsing_bisenet->detect(img_bgr, content);

  if (content.flag && !content.merge.empty())
    cv::imwrite(save_img_path, content.merge);
  
  delete face_parsing_bisenet;
}

The output is:

<div align='center'> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/32532cbc-ef90-4afb-9fa9-0a1f52b18654' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/157b9e11-fc92-445b-ae0d-0d859c8663ee' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/e7945202-e4dc-4e70-b931-019afdc5a95b' height="180px" width="180px"> <img src='https://github.com/DefTruth/lite.ai.toolkit/assets/31974251/7dbba712-078a-4cd6-b968-d6f565e10a3e' height="180px" width="180px"> </div>

More classes for face parsing (hair, eyes, nose, mouth, others)

auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path); // 50Mb
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path); // Dynamic Shape Inference.
</details>

©️License

GNU General Public License v3.0

πŸŽ‰Contribute

Please consider ⭐ this repo if you like it, as it is the simplest way to support me.

<div align='center'> <a href="https://star-history.com/#DefTruth/lite.ai.toolkit&Date"> <picture align='center'> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=DefTruth/lite.ai.toolkit&type=Date&theme=dark" /> <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=DefTruth/lite.ai.toolkit&type=Date" /> <img width=450 height=300 alt="Star History Chart" src="https://api.star-history.com/svg?repos=DefTruth/lite.ai.toolkit&type=Date" /> </picture> </a> </div>