Human-Segmentation-PyTorch

Human segmentation models, training/inference code, and trained weights, implemented in PyTorch.

Supported networks

UNet (MobileNetV2 or ResNet18 backbone), DeepLabV3+, BiSeNet, PSPNet, and ICNet (each with a ResNet18 backbone) are covered; see the benchmark table at the end of this document for the exact variants.

To assess the architecture, memory footprint, forward time (on either CPU or GPU), number of parameters, and number of FLOPs of a network, use this command:

python measure_model.py
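Under the hood, parameter and FLOP counting boils down to a dummy forward pass through the model. The sketch below illustrates the idea with a torchvision ResNet-18 as a stand-in model and the third-party thop package for FLOP counting; both choices are assumptions made for illustration, not necessarily what measure_model.py uses.

import time
import torch
import torchvision.models as models
from thop import profile  # third-party FLOP counter, assumed here for illustration

# Stand-in model; replace with one of the repository's segmentation networks.
model = models.resnet18().eval()
dummy = torch.randn(1, 3, 320, 320)

# Number of parameters
n_params = sum(p.numel() for p in model.parameters())

# FLOPs (multiply-accumulates) for one forward pass
flops, _ = profile(model, inputs=(dummy,))

# Average CPU forward time over a few runs
with torch.no_grad():
    start = time.time()
    for _ in range(10):
        model(dummy)
    cpu_ms = (time.time() - start) / 10 * 1000

print(f"params={n_params / 1e6:.1f}M  flops={flops / 1e9:.1f}G  cpu={cpu_ms:.0f}ms")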

Dataset

Portrait Segmentation (Human/Background)
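The data loaders ship with the repository; conceptually, a human/background segmentation dataset is just paired RGB images and single-channel binary masks. Below is a minimal, hypothetical PyTorch Dataset for such pairs; the directory layout, resizing, and normalization are assumptions, not the repository's actual loader.

import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class PortraitSegDataset(Dataset):
    """Hypothetical loader for (image, binary mask) pairs stored in two parallel folders."""
    def __init__(self, image_dir, mask_dir, size=320):
        self.pairs = [(os.path.join(image_dir, f), os.path.join(mask_dir, f))
                      for f in sorted(os.listdir(image_dir))]
        self.size = size

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        img_path, mask_path = self.pairs[idx]
        img = Image.open(img_path).convert("RGB").resize((self.size, self.size))
        mask = Image.open(mask_path).convert("L").resize((self.size, self.size))
        img = torch.from_numpy(np.asarray(img)).permute(2, 0, 1).float() / 255.0
        mask = torch.from_numpy((np.asarray(mask) > 127).astype(np.float32))
        return img, mask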

Set up

# Clone the repository together with its submodules
git clone --recursive https://github.com/AntiAegis/Human-Segmentation-PyTorch.git
cd Human-Segmentation-PyTorch
git submodule sync
git submodule update --init --recursive

# Activate the Python environment (virtualenvwrapper) and install dependencies
workon humanseg
pip install -r requirements.txt
pip install -e models/pytorch-image-models

Training

python train.py --config config/config_DeepLab.json --device 0

where config/config_DeepLab.json is the configuration file specifying the network, dataloader, optimizer, losses, metrics, and visualization settings.
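As a rough illustration of what such a file might contain, the snippet below writes a hypothetical configuration with the sections listed above. Every key and value here is an assumption made for illustration; the files shipped in config/ define the real schema.

import json

# Hypothetical configuration; consult the files in config/ for the actual schema.
config = {
    "name": "DeepLabV3Plus_ResNet18",
    "model": {"arch": "DeepLabV3Plus", "backbone": "resnet18", "num_classes": 2},
    "train_loader": {"root": "path_to_dataset", "batch_size": 8, "shuffle": True},
    "optimizer": {"type": "SGD", "lr": 0.01, "momentum": 0.9, "weight_decay": 1e-5},
    "loss": "cross_entropy",
    "metrics": ["miou"],
    "visualization": {"tensorboard": True, "log_dir": "runs/deeplab"},
}

with open("config/config_DeepLab_example.json", "w") as f:
    json.dump(config, f, indent=2)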

To resume training from a saved checkpoint, add the --resume flag:

python train.py --config config/config_DeepLab.json --device 0 --resume path_to_checkpoint/model_best.pth
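Resuming typically means restoring the model and optimizer state from the checkpoint before continuing the training loop. A minimal sketch of that mechanism, with a stand-in model and assumed checkpoint keys (the repository's trainer defines the real format):

import torch
import torch.nn as nn

# Stand-in model and optimizer; in practice both are built from the config file.
model = nn.Conv2d(3, 2, kernel_size=1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Assumed checkpoint layout with "state_dict", "optimizer", and "epoch" keys.
checkpoint = torch.load("path_to_checkpoint/model_best.pth", map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])
optimizer.load_state_dict(checkpoint["optimizer"])
start_epoch = checkpoint.get("epoch", 0) + 1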

Inference

There are two modes of inference: video and webcam.

python inference_video.py --watch --use_cuda --checkpoint path_to_checkpoint/model_best.pth
python inference_webcam.py --use_cuda --checkpoint path_to_checkpoint/model_best.pth
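Both scripts follow the same pattern: load the trained weights, grab frames (from a video file or the webcam), run the network on each frame, and overlay the predicted mask. The sketch below shows the webcam case with OpenCV and a stand-in one-layer network; the preprocessing, model construction, and output format are assumptions, so see inference_webcam.py for the real pipeline.

import cv2
import numpy as np
import torch
import torch.nn as nn

# Stand-in network; in practice the model is built from the training config and
# loaded from path_to_checkpoint/model_best.pth.
model = nn.Conv2d(3, 2, kernel_size=1).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

cap = cv2.VideoCapture(0)  # index 0 = default webcam
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize, scale to [0, 1], and convert HWC -> 1CHW
        inp = cv2.resize(frame, (320, 320)).astype(np.float32) / 255.0
        inp = torch.from_numpy(inp).permute(2, 0, 1).unsqueeze(0).to(device)
        logits = model(inp)                                   # (1, 2, 320, 320)
        mask = logits.argmax(dim=1)[0].byte().cpu().numpy() * 255
        mask = cv2.resize(mask, (frame.shape[1], frame.shape[0]))
        overlay = cv2.addWeighted(frame, 0.7,
                                  cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR), 0.3, 0)
        cv2.imshow("segmentation", overlay)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()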

Benchmark

CPU: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
GPU: GeForce GTX 1050 Mobile, CUDA 9.0
Model                                       Parameters  FLOPs   CPU time  GPU time  mIoU
UNet_MobileNetV2 (alpha=1.0, expansion=6)   4.7M        1.3G    167ms     17ms      91.37%
UNet_ResNet18                               16.6M       9.1G    165ms     21ms      90.09%
DeepLabV3+_ResNet18                         16.6M       9.1G    133ms     28ms      91.21%
BiSeNet_ResNet18                            11.9M       4.7G    88ms      10ms      87.02%
PSPNet_ResNet18                             12.6M       20.7G   235ms     666ms     ---
ICNet_ResNet18                              11.6M       2.0G    48ms      55ms      86.27%
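Here mIoU is the mean intersection-over-union over the two classes (human and background). A minimal sketch of the computation, assuming hard integer label maps for prediction and ground truth (the repository's metric implementation may differ in details such as averaging over images):

import numpy as np

def mean_iou(pred, gt, num_classes=2):
    # pred and gt are integer label maps of identical shape.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))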