# YOLOv10 OpenVINO C++ Inference
Implementing YOLOv10 object detection using OpenVINO for efficient and accurate real-time inference in C++.
## Features
- Support for `ONNX` and `OpenVINO IR` model formats
- Support for `FP32`, `FP16`, and `INT8` precisions
- Support for loading models with dynamic input shapes (see the sketch below)

Tested on Ubuntu 18.04, 20.04, and 22.04.
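The dynamic-shape support in the list above relies on OpenVINO's model reshape API rather than anything YOLOv10-specific. The snippet below is a minimal sketch of that idea; the model file name and the `CPU` device are placeholders, and it is not taken from this repository:

```cpp
// Minimal sketch: load a model (ONNX or OpenVINO IR) and relax its input
// to a dynamic spatial shape with the OpenVINO C++ API.
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // read_model() accepts both .onnx and .xml (IR) files.
    auto model = core.read_model("yolov10s.xml");  // placeholder path

    // Keep batch and channels fixed, let height and width be dynamic.
    model->reshape(ov::PartialShape{1, 3, ov::Dimension::dynamic(), ov::Dimension::dynamic()});

    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    return 0;
}
```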
## Dependencies
| Dependency | Version   |
|------------|-----------|
| OpenVINO   | >=2023.3  |
| OpenCV     | >=3.2.0   |
| C++        | >=14      |
| CMake      | >=3.10.2  |
## Installation Options
You have two options for setting up the environment: manually installing dependencies or using Docker.
<details>
<summary><b>Manual Installation</b></summary>

### Install Dependencies

```bash
apt-get update
apt-get install -y \
	libtbb2 \
	cmake \
	make \
	git \
	libyaml-cpp-dev \
	wget \
	libopencv-dev \
	pkg-config \
	g++ \
	gcc \
	libc6-dev \
	build-essential \
	sudo \
	ocl-icd-libopencl1 \
	python3 \
	python3-venv \
	python3-pip \
	libpython3.8
```
### Install OpenVINO
Download the OpenVINO 2023.3 archive and extract it to `/opt/intel/openvino`:
```bash
wget -O openvino.tgz https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
sudo mkdir /opt/intel
sudo mv openvino.tgz /opt/intel/
cd /opt/intel
sudo tar -xvf openvino.tgz
sudo rm openvino.tgz
sudo mv l_openvino* openvino
```
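A quick way to confirm the installation is a tiny program that asks OpenVINO which devices it can see. This is only a sanity-check sketch (not part of this repository) and assumes the OpenVINO environment has been loaded, e.g. with `source /opt/intel/openvino/setupvars.sh`:

```cpp
// Sanity check: print the inference devices OpenVINO detects (e.g. CPU, GPU).
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    for (const std::string& device : core.get_available_devices()) {
        std::cout << device << std::endl;
    }
    return 0;
}
```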
</details>
<details>
<summary><b>Using Docker</b></summary>
### Building the Docker Image
To build the Docker image yourself, use the following command:
```bash
docker build . -t yolov10
```
### Pulling the Docker Image
Alternatively, you can pull the pre-built Docker image from Docker Hub (available for Ubuntu 18.04, 20.04, and 22.04):
```bash
docker pull rlggyp/yolov10:18.04
docker pull rlggyp/yolov10:20.04
docker pull rlggyp/yolov10:22.04
```
For detailed usage information, please visit the Docker Hub repository page.
### Running a Container
Grant the Docker container access to the X server by running the following command:
```bash
xhost +local:docker
```
To run a container from the image, use the following `docker run` command:
```bash
docker run -it --rm --mount type=bind,src=$(pwd),dst=/repo \
	--env DISPLAY=$DISPLAY \
	-v /tmp/.X11-unix:/tmp/.X11-unix \
	-v /dev:/dev \
	-w /repo \
	rlggyp/yolov10:<tag>
```
</details>
## Build
```bash
git clone https://github.com/rlggyp/YOLOv10-OpenVINO-CPP-Inference.git
cd YOLOv10-OpenVINO-CPP-Inference/src
mkdir build
cd build
cmake ..
make
```
## Usage
You can download the YOLOv10 model in any of the following formats: ONNX, OpenVINO IR FP32, OpenVINO IR FP16, or OpenVINO IR INT8.
### Using an ONNX Model Format
```bash
# For video input:
./video <model_path.onnx> <video_path>

# For image input:
./detect <model_path.onnx> <image_path>

# For real-time inference with a camera:
./camera <model_path.onnx> <camera_index>
```
### Using an OpenVINO IR Model Format
```bash
# For video input:
./video <model_path.xml> <video_path>

# For image input:
./detect <model_path.xml> <image_path>

# For real-time inference with a camera:
./camera <model_path.xml> <camera_index>
```
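For orientation, the core of a YOLOv10 inference pass with the OpenVINO C++ API looks roughly like the sketch below. It is a simplified illustration rather than the code in `src/`; the 640x640 input size, the `[1, 300, 6]` output layout (`x1, y1, x2, y2, score, class_id`), and the 0.5 score threshold are assumptions about a typical YOLOv10 export.

```cpp
// Simplified illustration of one YOLOv10 inference pass (not this repo's
// implementation). Assumes a 640x640 input and an NMS-free output tensor of
// shape [1, 300, 6] = (x1, y1, x2, y2, score, class_id).
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <openvino/openvino.hpp>
#include <iostream>

int main(int argc, char** argv) {
    if (argc < 3) {
        std::cerr << "usage: " << argv[0] << " <model.xml|model.onnx> <image>\n";
        return 1;
    }

    ov::Core core;
    ov::CompiledModel compiled = core.compile_model(argv[1], "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // Preprocess: resize to the network input, BGR->RGB, scale to [0,1], NCHW.
    cv::Mat image = cv::imread(argv[2]);
    cv::Mat blob = cv::dnn::blobFromImage(image, 1.0 / 255.0, cv::Size(640, 640),
                                          cv::Scalar(), /*swapRB=*/true);

    ov::Tensor input(ov::element::f32, {1, 3, 640, 640}, blob.ptr<float>());
    request.set_input_tensor(input);
    request.infer();

    // Postprocess: YOLOv10 is NMS-free, so each output row is a final detection.
    ov::Tensor output = request.get_output_tensor();
    const float* data = output.data<float>();
    const size_t num_rows = output.get_shape()[1];
    const float sx = image.cols / 640.0f;
    const float sy = image.rows / 640.0f;

    for (size_t i = 0; i < num_rows; ++i) {
        const float* det = data + i * 6;
        if (det[4] < 0.5f) continue;  // assumed score threshold
        cv::rectangle(image,
                      cv::Point(int(det[0] * sx), int(det[1] * sy)),
                      cv::Point(int(det[2] * sx), int(det[3] * sy)),
                      cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("result.png", image);
    return 0;
}
```

The `detect`, `video`, and `camera` binaries built above follow this same load, preprocess, infer, postprocess flow for their respective input sources.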
<p align="center">
  <img alt="traffic_gif" src="assets/traffic.gif" width="80%">
  <img alt="result_bus" src="assets/result_bus.png" width="80%">
  <img alt="result_zidane" src="assets/result_zidane.png" width="80%">
</p>
## References
- How to export the YOLOv10 model
- Convert and Optimize YOLOv10 with OpenVINO
- Exporting the model into OpenVINO format
- Model Export with Ultralytics YOLO
- Supported models by OpenVINO
- YOLOv10 exporter notebooks
## Contributing
Contributions are welcome! If you have any suggestions, bug reports, or feature requests, please open an issue or submit a pull request.
## License
This project is licensed under the MIT License. See the LICENSE file for details.