# VSGD-Net

This is the official PyTorch implementation of *VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images*.
## Prerequisites

- Linux or macOS
- Python 2 or 3
- NVIDIA GPU (11 GB memory or larger) + CUDA + cuDNN
## Getting Started

### Installation

- Install PyTorch and dependencies from http://pytorch.org.
- Install the Python library `dominate`:

```bash
pip install dominate
```

- Clone this repo:

```bash
git clone https://github.com/kechunl/VSGD-Net.git
cd VSGD-Net
```
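To confirm the installation, a quick sanity check can verify that PyTorch sees the GPU and that `dominate` imports cleanly. This is a minimal sketch; any recent PyTorch build behaves the same:

```python
import dominate  # HTML-generation library; the result pages below are HTML
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # expect True on an NVIDIA GPU
```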
### Quick Inference

- A few example H&E skin biopsy images are included in the `datasets/test_A` folder.
- Please download the pre-trained Melanocyte model from here (Google Drive link) and unzip it under `./checkpoints/`.
- Test the model:

```bash
bash ./scripts/test_melanocyte.sh
```

- The test results will be saved to an HTML file: `./results/melanocyte/test_latest/index.html`.
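If the script cannot find its inputs or weights, a quick check like the one below can help. This is a minimal sketch; it only counts files and assumes nothing about the checkpoint file names inside the downloaded archive:

```python
from pathlib import Path

# Example H&E patches shipped with the repo.
n_images = len(list(Path("datasets/test_A").glob("*")))
print(f"{n_images} example image(s) in datasets/test_A")

# The pre-trained model should be unzipped under ./checkpoints/.
n_files = len(list(Path("checkpoints").rglob("*")))
print(f"{n_files} file(s) under ./checkpoints/")
```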
### Training

- An example training script is provided (`./scripts/train_melanocyte.sh`):

```bash
# Multi-GPU, use decoder feature in FPN, use attention module
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 28501 \
    train.py --name Melanocyte_Attn_DecoderFeat --dataroot DATA_PATH \
    --resize_or_crop none --gpu_ids 0,1,2,3 --batchSize 2 --no_instance \
    --loadSize 256 --ngf 32 --has_real_image --save_epoch_freq 5 \
    --use_resnet_as_backbone --use_UNet_skip --fpn_feature decoder --niter_decay 200
```
Note: please specify the data path (`DATA_PATH` above) as explained in [Training with your own dataset](#training-with-your-own-dataset). When training on fewer GPUs, adjust `CUDA_VISIBLE_DEVICES`, `--nproc_per_node`, and `--gpu_ids` together.
### Training with your own dataset

- If you want to train with your own dataset, please generate the corresponding image patches and name the folders `train_A` and `train_B`. For detection, you should also name the mask folder `train_mask`. In our paper, we use 256x256 patches at 10x magnification; please refer to the paper for the preprocessing steps. (A folder-check sketch follows this list.)
- The default setting for preprocessing is `none`, which does nothing other than making sure the image is divisible by 32. If you want a different setting, change it with the `--resize_or_crop` option. For example, `scale_width_and_crop` first resizes the image to width `opt.loadSize` and then does a random crop of size `(opt.fineSize, opt.fineSize)`; `crop` skips the resizing step and only performs the random crop; `scale_width` scales the width of all training images to `opt.loadSize` (256) while keeping the aspect ratio. (A sketch of these modes also follows below.)
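As referenced in the first bullet, a small check of the folder names can catch mistakes before a long training run. This is a minimal sketch: `check_dataset_root` is a hypothetical helper, and the `*.png` pattern is an assumption about the patch format:

```python
from pathlib import Path

def check_dataset_root(root: str) -> None:
    """Hypothetical helper: verify the folder layout this repo expects."""
    for name in ("train_A", "train_B", "train_mask"):
        folder = Path(root) / name
        if not folder.is_dir():
            raise FileNotFoundError(f"missing expected folder: {folder}")
        count = sum(1 for _ in folder.glob("*.png"))  # *.png is an assumption
        print(f"{name}: {count} patch(es)")

check_dataset_root("DATA_PATH")  # same placeholder as in the training command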
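And as referenced in the second bullet, the sketch below approximates what the `--resize_or_crop` modes do. It follows the prose description rather than the repo's actual data-loading code, and `load_size`/`fine_size` stand in for `opt.loadSize`/`opt.fineSize`:

```python
import random
from PIL import Image

def preprocess(img: Image.Image, mode: str = "none",
               load_size: int = 256, fine_size: int = 256) -> Image.Image:
    """Approximate the --resize_or_crop behaviors described above."""
    if mode == "none":
        # Only make sure both dimensions are divisible by 32.
        w, h = img.size
        w32, h32 = max(32, (w // 32) * 32), max(32, (h // 32) * 32)
        return img if (w32, h32) == (w, h) else img.resize((w32, h32), Image.BICUBIC)
    if mode.startswith("scale_width"):
        # Resize so the width equals load_size, keeping the aspect ratio.
        w, h = img.size
        img = img.resize((load_size, max(1, round(load_size * h / w))), Image.BICUBIC)
    if mode.endswith("crop"):
        # Random crop of size (fine_size, fine_size).
        w, h = img.size
        x = random.randint(0, max(0, w - fine_size))
        y = random.randint(0, max(0, h - fine_size))
        img = img.crop((x, y, x + fine_size, y + fine_size))
    return img
```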
### More Training/Test Details

- Flags: see `options/train_options.py` and `options/base_options.py` for all the training flags; see `options/test_options.py` and `options/base_options.py` for all the test flags. (A hedged sketch of the typical option-parsing pattern follows below.)
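Since the project builds on Pix2PixHD, these option files are presumably consumed with the upstream pattern shown below. This is a hedged sketch based on the Pix2PixHD codebase; the exact class names in this repo are an assumption:

```python
# Hedged sketch following the upstream Pix2PixHD pattern; class names are assumed.
from options.train_options import TrainOptions

opt = TrainOptions().parse()        # parses command-line flags and prints them
print(opt.loadSize, opt.batchSize)  # attributes match the flags used above
```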
## Acknowledgement

This project is based on [Pix2PixHD](https://github.com/NVIDIA/pix2pixHD).