Yolo-v4 and Yolo-v3/v2 for Windows and Linux

(neural network for object detection) - Tensor Cores can be used on Linux and Windows

Paper Yolo v4: https://arxiv.org/abs/2004.10934

More details: http://pjreddie.com/darknet/yolo/

  1. Improvements in this repository
  2. How to use
  3. How to compile on Linux
  4. How to compile on Windows
  5. Training and Evaluation of speed and accuracy on MS COCO
  6. How to train with multi-GPU:
  7. How to train (to detect your custom objects)
  8. How to train tiny-yolo (to detect your custom objects)
  9. When should I stop training
  10. How to improve object detection
  11. How to mark bounding boxes of objects and create annotation files
  12. How to use Yolo as DLL and SO libraries
(Comparison chart: AP50:95 / AP50 vs FPS on Tesla V100. Paper: https://arxiv.org/abs/2004.10934)

How to evaluate AP of YOLOv4 on the MS COCO evaluation server

  1. Download and unzip test-dev2017 dataset from MS COCO server: http://images.cocodataset.org/zips/test2017.zip
  2. Download the list of images for the Detection task and replace the paths with yours: https://raw.githubusercontent.com/AlexeyAB/darknet/master/scripts/testdev2017.txt
  3. Download yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT
  4. The content of the file cfg/coco.data should be:
classes= 80
train  = <replace with your path>/trainvalno5k.txt
valid = <replace with your path>/testdev2017.txt
names = data/coco.names
backup = backup
eval=coco
  5. Create the /results/ folder next to the ./darknet executable file
  6. Run validation: ./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights
  7. Rename the file /results/coco_results.json to detections_test-dev2017_yolov4_results.json and compress it to detections_test-dev2017_yolov4_results.zip
  8. Submit the file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2017 (bbox) task
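For reference, a sketch of the file preparation in steps 7-8 from the command line (assuming a zip utility is available; paths as above):

cd results
mv coco_results.json detections_test-dev2017_yolov4_results.json
zip detections_test-dev2017_yolov4_results.zip detections_test-dev2017_yolov4_results.json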

How to evaluate FPS of YOLOv4 on GPU

  1. Compile Darknet with GPU=1 CUDNN=1 CUDNN_HALF=1 OPENCV=1 in the Makefile (or use the same settings with CMake)
  2. Download the yolov4.weights file (245 MB): yolov4.weights (Google-drive mirror: yolov4.weights)
  3. Get any .avi/.mp4 video file (preferably not more than 1920x1080 to avoid CPU performance bottlenecks)
  4. Run one of two commands and look at the AVG FPS:
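A sketch of two such commands, based on this repository's demo mode (the file name test.mp4 is a placeholder, and the exact flags may differ in your build):

./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -dont_show -ext_output
./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -benchmark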

Pre-trained models

There are weights-files for different cfg-files (trained on the MS COCO dataset):

FPS on RTX 2070 (R) and Tesla V100 (V):

Yolo v3 models
Yolo v2 models

Put the weights-file next to the compiled darknet.exe.

You can find the cfg-files in darknet/cfg/.

Requirements

Yolo v3 in other frameworks

Datasets

Examples of results

Yolo v3

Others: https://www.youtube.com/user/pjreddie/videos

Improvements in this repository

A manual was also added: How to train Yolo v4-v2 (to detect your custom objects)

Also, you might be interested in a simplified repository that implements INT8 quantization (+30% speedup, -1% mAP): https://github.com/AlexeyAB/yolo2_light

How to use on the command line

On Linux use ./darknet instead of darknet.exe, like this: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights

On Linux the executable file ./darknet is in the root directory, while on Windows it is in the directory \build\darknet\x64

For using a network video-camera mjpeg-stream with any Android smartphone:
  1. Download mjpeg-stream software for your Android phone: IP Webcam / Smart WebCam

  2. Connect your Android phone to the computer by WiFi (through a WiFi router) or USB

  3. Start Smart WebCam on your phone

  4. Replace the address below with the one shown in the phone application (Smart WebCam) and launch:
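A sketch of the launch command (the IP address and port below are placeholders; use the address shown by the phone app):

./darknet detector demo data/coco.data cfg/yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0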

How to compile on Linux (using CMake)

The CMakeLists.txt will attempt to find installed optional dependencies like CUDA, cuDNN and ZED, and build against those. It will also create a shared object library file for using darknet in code development.

Inside the cloned repository, do:

mkdir build-release
cd build-release
cmake ..
make
make install

How to compile on Linux (using make)

Just run make in the darknet directory. Before running make, you can set options in the Makefile: link

To run Darknet on Linux, use the examples from this article, just with ./darknet instead of darknet.exe, i.e. use this command: ./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights

How to compile on Windows (using CMake-GUI)

This is the recommended approach to build Darknet on Windows if you have already installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, and OpenCV > 2.4.

Use CMake-GUI, following these steps:

  1. Configure
  2. Optional platform for generator (Set: x64)
  3. Finish
  4. Generate
  5. Open Project
  6. Set: x64 & Release
  7. Build
  8. Build solution

How to compile on Windows (using vcpkg)

If you have already installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0, OpenCV > 2.4, then to compile Darknet it is recommended to use CMake-GUI.

Otherwise, follow these steps:

  1. Install or update Visual Studio to at least version 2017, making sure to have it fully patched (run the installer again if you are not sure it is updated to the latest version). If you need to install from scratch, download VS from here: Visual Studio Community

  2. Install CUDA and cuDNN

  3. Install git and cmake. Make sure they are on the Path at least for the current account

  4. Install vcpkg and try to install a test library to make sure everything is working, for example vcpkg install opengl

  5. Define an environment variable, VCPKG_ROOT, pointing to the install path of vcpkg

  6. Define another environment variable, with name VCPKG_DEFAULT_TRIPLET and value x64-windows

  7. Open PowerShell and type these commands:

PS \>                  cd $env:VCPKG_ROOT
PS Code\vcpkg>         .\vcpkg install pthreads opencv[ffmpeg] # replace with opencv[cuda,ffmpeg] if you want CUDA-accelerated OpenCV
  8. Open PowerShell, go to the darknet folder and build with the command .\build.ps1. If you want to use Visual Studio, you will find two custom solutions created for you by CMake after the build, one in build_win_debug and the other in build_win_release, containing all the appropriate config flags for your system.
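A sketch of this step in PowerShell (assuming the sources are cloned into a Code folder):

PS Code\>          git clone https://github.com/AlexeyAB/darknet
PS Code\>          cd darknet
PS Code\darknet>   .\build.ps1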

How to compile on Windows (legacy way)

  1. If you have CUDA 10.0, cuDNN 7.4 and OpenCV 3.x (with paths: C:\opencv_3.0\opencv\build\include & C:\opencv_3.0\opencv\build\x64\vc14\lib), then open build\darknet\darknet.sln, set x64 and Release (https://hsto.org/webt/uh/fk/-e/uhfk-eb0q-hwd9hsxhrikbokd6u.jpeg) and do: Build -> Build darknet. Also add the Windows system variable CUDNN with the path to cuDNN: https://user-images.githubusercontent.com/4096485/53249764-019ef880-36ca-11e9-8ffe-d9cf47e7e462.jpg

    1.1. Find the files opencv_world320.dll and opencv_ffmpeg320_64.dll (or opencv_world340.dll and opencv_ffmpeg340_64.dll) in C:\opencv_3.0\opencv\build\x64\vc14\bin and put them next to darknet.exe

    1.2. Check that the bin and include folders are in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0; if they aren't, copy them to this folder from the path where CUDA is installed

    1.3. To install cuDNN (to speed up the neural network), do the following:

    1.4. If you want to build without cuDNN: open \darknet.sln -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and remove CUDNN;

  2. If you have a different version of CUDA (not 10.0), then open build\darknet\darknet.vcxproj with Notepad, find 2 places with "CUDA 10.0" and change them to your CUDA version. Then open \darknet.sln -> (right click on project) -> properties -> CUDA C/C++ -> Device and remove ;compute_75,sm_75. Then do step 1

  3. If you don't have a GPU, but have OpenCV 3.0 (with paths: C:\opencv_3.0\opencv\build\include & C:\opencv_3.0\opencv\build\x64\vc14\lib), then open build\darknet\darknet_no_gpu.sln, set x64 and Release, and do: Build -> Build darknet_no_gpu

  4. If you have OpenCV 2.4.13 instead of 3.0, then change the paths after \darknet.sln is opened:

    4.1 (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories: C:\opencv_2.4.13\opencv\build\include

    4.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories: C:\opencv_2.4.13\opencv\build\x64\vc14\lib

  5. If you have a GPU with Tensor Cores (nVidia Titan V / Tesla V100 / DGX-2 and later), for a 3x detection and 2x training speedup: \darknet.sln -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add: CUDNN_HALF;

    Note: CUDA must be installed only after Visual Studio has been installed.

How to compile (custom):

Alternatively, you can create your own darknet.sln & darknet.vcxproj; this example is for CUDA 9.1 and OpenCV 3.0.

Then add to your created project:

(C/C++ -> General -> Additional Include Directories):
C:\opencv_3.0\opencv\build\include;..\..\3rdparty\include;%(AdditionalIncludeDirectories);$(CudaToolkitIncludeDir);$(CUDNN)\include

(Linker -> General -> Additional Library Directories):
C:\opencv_3.0\opencv\build\x64\vc14\lib;$(CUDA_PATH)\lib\$(PlatformName);$(CUDNN)\lib\x64;%(AdditionalLibraryDirectories)

(Linker -> Input -> Additional Dependencies):
..\..\3rdparty\lib\x64\pthreadVC2.lib;cublas.lib;curand.lib;cudart.lib;cudnn.lib;%(AdditionalDependencies)

(C/C++ -> Preprocessor -> Preprocessor Definitions):
OPENCV;_TIMESPEC_DEFINED;_CRT_SECURE_NO_WARNINGS;_CRT_RAND_S;WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions)

How to train with multi-GPU:

  1. Train it first on 1 GPU for about 1000 iterations: darknet.exe detector train cfg/coco.data cfg/yolov4.cfg yolov4.conv.137

  2. Then stop and, using the partially-trained model /backup/yolov4_1000.weights, run training with multiple GPUs (up to 4 GPUs): darknet.exe detector train cfg/coco.data cfg/yolov4.cfg /backup/yolov4_1000.weights -gpus 0,1,2,3

If you get NaN, then for some datasets it is better to decrease the learning rate: for 4 GPUs set learning_rate = 0.00065 (i.e. learning_rate = 0.00261 / number of GPUs). In this case also increase burn_in 4x in your cfg-file, i.e. use burn_in = 4000 instead of 1000.
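For example, the corresponding cfg-file lines for 4 GPUs would be (a sketch following the rule above; these lines live in the [net] section):

learning_rate=0.00065
burn_in=4000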

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

How to train (to detect your custom objects):

(to train the old Yolo v2: yolov2-voc.cfg, yolov2-tiny-voc.cfg, yolo-voc.cfg, yolo-voc.2.0.cfg, ... follow the link)

Training Yolo v4 (and v3):

  1. For training cfg/yolov4-custom.cfg, download the pre-trained weights-file (162 MB): yolov4.conv.137 (Google-drive mirror: yolov4.conv.137)

  2. Create file yolo-obj.cfg with the same content as in yolov4-custom.cfg (or copy yolov4-custom.cfg to yolo-obj.cfg) and:

So if classes=1 then filters=18; if classes=2 then filters=21.

(Do not write in the cfg-file: filters=(classes + 5)x3)

(Generally, filters depends on classes, coords and the number of masks, i.e. filters=(classes + coords + 1)*<number of masks>, where mask is the indices of the anchors. If mask is absent, then filters=(classes + coords + 1)*num)

So for example, for 2 objects, your file yolo-obj.cfg should differ from yolov4-custom.cfg in the following lines in each of the 3 [yolo]-layers:

[convolutional]
filters=21

[yolo]
classes=2
  3. Create file obj.names in the directory build\darknet\x64\data\, with object names, each on a new line

  4. Create file obj.data in the directory build\darknet\x64\data\, containing (where classes = number of objects):

classes= 2
train  = data/train.txt
valid  = data/test.txt
names = data/obj.names
backup = backup/
  5. Put the image-files (.jpg) of your objects in the directory build\darknet\x64\data\obj\

  6. You should label each object in the images of your dataset. Use this visual GUI-software for marking bounding boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark

It will create a .txt-file for each .jpg-image-file in the same directory, with the same name but a .txt extension, and put into the file the object number and object coordinates on this image, one line per object:

<object-class> <x_center> <y_center> <width> <height>

Where:

  • <object-class> - integer object number from 0 to (classes-1)
  • <x_center> <y_center> <width> <height> - float values relative to the width and height of the image, in the range (0.0 to 1.0]
  • for example: <x_center> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>
  • attention: <x_center> <y_center> are the center of the rectangle, not the top-left corner

For example, for img1.jpg an img1.txt will be created containing:

1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
  7. Create file train.txt in the directory build\darknet\x64\data\, with the filenames of your images, each filename on a new line, with paths relative to darknet.exe, for example containing:
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
  8. Download pre-trained weights for the convolutional layers and put them in the directory build\darknet\x64

  9. Start training by using the command line: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137

    To train on Linux use command: ./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137 (just use ./darknet instead of darknet.exe)

    • (file yolo-obj_last.weights will be saved to build\darknet\x64\backup\ every 100 iterations)
    • (file yolo-obj_xxxx.weights will be saved to build\darknet\x64\backup\ every 1000 iterations)
    • (to disable the Loss-window use darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show, if you train on a computer without a monitor, e.g. a cloud Amazon EC2 instance)
    • (to see the mAP & Loss-chart during training on a remote server without GUI, use the command darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map, then open the URL http://ip-address:8090 in a Chrome/Firefox browser)

9.1. To train with mAP (mean average precision) calculation every 4 epochs, set valid=valid.txt (or train.txt) in the obj.data file and run: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

  10. After training is complete, get the result yolo-obj_final.weights from the path build\darknet\x64\backup\

Note: If during training you see nan values in the avg (loss) field, then training is going wrong; but if nan appears in some other lines, then training is going well.

Note: If you changed width= or height= in your cfg-file, then the new width and height must be divisible by 32.

Note: After training, use this command for detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

Note: if an Out of memory error occurs, then in the .cfg-file you should increase subdivisions to 16, 32 or 64: link
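For example, a sketch of the relevant cfg-file lines (batch=64 is the usual default in the provided cfg-files; subdivisions is the value to raise):

[net]
batch=64
subdivisions=32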

How to train tiny-yolo (to detect your custom objects):

Do all the same steps as for the full yolo model described above, with the exception of:

For training Yolo based on other models (DenseNet201-Yolo or ResNet50-Yolo), you can download and get pre-trained weights as shown in this file: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/partial.cmd. If you made a custom model that isn't based on other models, then you can train it without pre-trained weights; random initial weights will be used.

When should I stop training:

Usually 2000 iterations per class (object) are sufficient, but not less than 4000 iterations in total. For a more precise definition of when to stop training, use the following manual:

  1. During training, you will see varying indicators of error; you should stop when 0.XXXXXXX avg no longer decreases:

Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8

9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds

When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training. The final average loss can range from 0.05 (for a small model and an easy dataset) to 3.0 (for a big model and a difficult dataset).

  2. Once training is stopped, you should take some of the last .weights-files from darknet\build\darknet\x64\backup and choose the best of them:

For example, you stopped training after 9000 iterations, but the best result may come from one of the earlier weights (7000, 8000, 9000). This can happen due to overfitting. Overfitting is the case when the model can detect objects in images from the training dataset, but cannot detect objects in any other images. You should get the weights from the Early Stopping Point:

(Chart: overfitting and the Early Stopping Point)

To get weights from Early Stopping Point:

2.1. First, in your file obj.data you must specify the path to the validation dataset: valid = valid.txt (the format of valid.txt is the same as train.txt). If you don't have validation images, just copy data\train.txt to data\valid.txt.

2.2. If training is stopped after 9000 iterations, validate some of the previous weights using these commands:
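A sketch of these commands (assuming the weights were saved to the backup folder as described above):

darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights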

(If you use another GitHub repository, then use darknet.exe detector recall... instead of darknet.exe detector map...)

And compare the last output lines for each weights-file (7000, 8000, 9000):

Choose the weights-file with the highest mAP (mean average precision) or IoU (intersection over union).

For example, if yolo-obj_8000.weights gives the highest mAP, then use these weights for detection.

Or just train with -map flag:

darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

Then you will see the mAP-chart (red line) in the Loss-chart window. mAP will be calculated every 4 epochs using the valid=valid.txt file specified in the obj.data file (1 epoch = images_in_train_txt / batch iterations). For example, with 8000 images in train.txt and batch=64, 1 epoch = 125 iterations, so mAP is calculated every 500 iterations.

(to change the max x-axis value, change the max_batches= parameter to 2000*classes, e.g. max_batches=6000 for 3 classes)

(Loss-chart and mAP-chart example)

mAP is the default precision metric in the PascalVOC competition; it is the same as the AP50 metric in the MS COCO competition. In Wikipedia's terms, the Precision and Recall indicators have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.

(Illustration: Precision, Recall and IoU)

Custom object detection:

Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights


How to improve object detection:

  1. Before training:
  2. After training - for detection:

How to mark bounding boxes of objects and create annotation files:

Here you can find a repository with GUI-software for marking bounding boxes of objects and generating annotation files for Yolo v2 - v4: https://github.com/AlexeyAB/Yolo_mark

With examples of: train.txt, obj.names, obj.data, yolo-obj.cfg, air1-6.txt, bird1-4.txt for 2 classes of objects (air, bird), and train_obj.cmd with an example of how to train this image-set with Yolo v2 - v4

Different tools for marking objects in images:

  1. in C++: https://github.com/AlexeyAB/Yolo_mark
  2. in Python: https://github.com/tzutalin/labelImg
  3. in Python: https://github.com/Cartucho/OpenLabeling
  4. in C++: https://www.ccoderun.ca/darkmark/
  5. in JavaScript: https://github.com/opencv/cvat

Using Yolo9000

Simultaneous detection and classification of 9000 objects: darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights data/dog.jpg

How to use Yolo as DLL and SO libraries

There are 2 APIs:

  1. To compile Yolo as a C++ DLL-file yolo_cpp_dll.dll - open the solution build\darknet\yolo_cpp_dll.sln, set x64 and Release, and do: Build -> Build yolo_cpp_dll

    • You should have CUDA 10.0 installed
    • To use cuDNN do: (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add at the beginning of line: CUDNN;
  2. To use Yolo as a DLL-file in your C++ console application - open the solution build\darknet\yolo_console_dll.sln, set x64 and Release, and do: Build -> Build yolo_console_dll

    • you can run your console application from Windows Explorer build\darknet\x64\yolo_console_dll.exe, or from the command line: yolo_console_dll.exe data/coco.names yolov4.cfg yolov4.weights test.mp4

    • after launching your console application and entering the image file name, you will see info for each object: <obj_id> <left_x> <top_y> <width> <height> <probability>

    • to use the simple OpenCV-GUI you should uncomment the line //#define OPENCV in the yolo_console_dll.cpp file: link

    • you can see the source code of a simple example for detection on a video file: link

yolo_cpp_dll.dll-API: link

struct bbox_t {
    unsigned int x, y, w, h;     // (x,y) - top-left corner, (w, h) - width & height of the bounding box
    float prob;                  // confidence - probability that the object was found correctly
    unsigned int obj_id;         // class of object - from range [0, classes-1]
    unsigned int track_id;       // tracking id for video (0 - untracked, 1..inf - tracked object)
    unsigned int frames_counter; // counter of frames on which the object was detected
};

class Detector {
public:
        Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
        ~Detector();

        std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
        std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
        static image_t load_image(std::string image_filename);
        static void free_image(image_t m);

#ifdef OPENCV
        std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false);
        std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const;
#endif
};
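As an illustration, a minimal sketch of using this API from a C++ application (assuming the header declaring Detector is yolo_v2_class.hpp, as in this repository's src folder; the cfg/weights/names paths and test.jpg are placeholders):

#include "yolo_v2_class.hpp"  // declares Detector and bbox_t (assumed header name)

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Load class names, one per line (same format as data/coco.names)
    std::vector<std::string> names;
    std::ifstream f("data/coco.names");
    for (std::string line; std::getline(f, line); ) names.push_back(line);

    // Create the detector on GPU 0 and run detection on one image
    Detector detector("yolov4.cfg", "yolov4.weights", 0);
    std::vector<bbox_t> result = detector.detect("test.jpg", 0.25f);

    // Print one line per detected object: <name> <left_x> <top_y> <width> <height> <probability>
    for (const bbox_t &b : result) {
        std::cout << (b.obj_id < names.size() ? names[b.obj_id] : "unknown")
                  << " " << b.x << " " << b.y << " " << b.w << " " << b.h
                  << " " << b.prob << "\n";
    }
    return 0;
}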