Semantic-Colorization-GAN

This is the supplementary material for the paper SCGAN: Saliency Map-guided Colorization with Generative Adversarial Network, published in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT'20).

IEEE Xplore: https://ieeexplore.ieee.org/abstract/document/9257445/

Arxiv: https://arxiv.org/abs/2011.11377

1 Training

We release the training code in the train folder.

The code requires several common deep-learning libraries; a quick import check is sketched below.
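As a hedged sketch, the following checks that the packages a PyTorch-based colorization codebase typically needs are importable. The package set is an assumption, not the official requirements:

```python
# Hedged sketch: verify that typical dependencies of a PyTorch-based
# colorization repo are importable. The package set below is an assumption;
# consult the train folder for the authoritative list.
import importlib

ASSUMED_DEPS = ["torch", "torchvision", "numpy", "cv2"]  # cv2 == opencv-python

for name in ASSUMED_DEPS:
    try:
        module = importlib.import_module(name)
        print(f"{name}: {getattr(module, '__version__', 'unknown version')}")
    except ImportError:
        print(f"{name}: MISSING -- install it before training")
```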

If you want to train on multispectral data, please refer to the train on multispectral images folder.

The pre-trained global feature network can be found at: https://portland-my.sharepoint.com/:f:/g/personal/yzzhao2-c_my_cityu_edu_hk/ErOcFJc0pilMvCkE53Essi0Bjj89h90l0Y9kEYv390kPEw?e=exNoad
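As a hedged illustration of how such a checkpoint is typically consumed in PyTorch (the filename and the loading target are assumptions; the actual network definition lives in the train folder):

```python
# Hedged sketch: load the pre-trained global feature network checkpoint.
# The filename below is a hypothetical placeholder for the file downloaded
# from the OneDrive link above.
import torch

CKPT_PATH = "global_feature_network.pth"  # hypothetical filename

state_dict = torch.load(CKPT_PATH, map_location="cpu")
print(f"Loaded {len(state_dict)} parameter tensors")

# In the training code, these weights would be loaded into the global
# feature branch before colorization training starts, e.g.:
# global_net.load_state_dict(state_dict)
```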

The saliency maps are computed by PFAN: https://github.com/CaitinZhao/cvpr2019_Pyramid-Feature-Attention-Network-for-Saliency-detection
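A common workflow is to precompute the saliency maps offline and store them alongside the training images. The sketch below assumes a hypothetical run_pfan wrapper around the PFAN inference code from the linked repository, plus an assumed directory layout:

```python
# Hedged sketch: precompute saliency maps for a folder of training images.
# `run_pfan` is a hypothetical stand-in for the PFAN inference code; the
# directory names are assumptions.
import os
import cv2  # opencv-python

def run_pfan(image):
    """Placeholder: should return an HxW saliency map in [0, 255]."""
    raise NotImplementedError("wire this up to the PFAN repo linked above")

IMAGE_DIR, SALIENCY_DIR = "images", "saliency_maps"
os.makedirs(SALIENCY_DIR, exist_ok=True)

for fname in os.listdir(IMAGE_DIR):
    image = cv2.imread(os.path.join(IMAGE_DIR, fname))
    saliency = run_pfan(image)
    cv2.imwrite(os.path.join(SALIENCY_DIR, fname), saliency)
```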

2 Evaluation

Please refer to the evaluation folder. A typical metric computation is sketched below.
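PSNR and SSIM are standard full-reference metrics for this task; the snippet below sketches how a prediction / ground-truth pair could be scored with scikit-image. The file paths are assumptions, and the official protocol is the one in the evaluation folder:

```python
# Hedged sketch: score one colorized image against its ground truth with
# PSNR and SSIM. Paths are hypothetical examples.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred_path, gt_path):
    pred = cv2.imread(pred_path)  # HxWx3 uint8 (BGR)
    gt = cv2.imread(gt_path)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
    return psnr, ssim

print(evaluate_pair("colorized/0001.png", "ground_truth/0001.png"))
```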

3 Testing Examples

3.1 Colorization Results

We show representative images produced by our system.

Represent

We provide many results randomly selected from the ImageNet and MIT Places365 validation sets. These images cover a variety of scenes and colors.

Results
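For context, many fully-automatic colorization pipelines keep the input luminance fixed and predict only the chrominance, composing the final image in CIE Lab space. The sketch below illustrates that generic composition step; it is a convention of the task, not necessarily the exact output format of our network:

```python
# Generic sketch: combine the input grayscale (L channel) with predicted
# chrominance (ab channels) in CIE Lab space. OpenCV's float32 Lab
# convention is L in [0, 100] and a, b in roughly [-127, 127].
import numpy as np
import cv2  # opencv-python

def compose_lab(gray_l, pred_ab):
    """gray_l: HxW float32 in [0, 100]; pred_ab: HxWx2 float32 chrominance."""
    lab = np.concatenate([gray_l[..., None], pred_ab], axis=2).astype(np.float32)
    rgb = cv2.cvtColor(lab, cv2.COLOR_Lab2RGB)  # float32 RGB in [0, 1]
    return (rgb * 255.0).clip(0, 255).astype(np.uint8)
```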

3.2 Comparison Results

Comparison results with other fully-automatic algorithms:

Comparison1

Comparison results with other example-based algorithms:

Comparison2

3.3 Examples of Semantic Confusion and Object Intervention Problems

We give some examples to illustrate the semantic confusion and object intervention problems intuitively. SCGAN integrates low-level and high-level semantic information and learns how to generate a reasonable colorization. These settings / architectures help the main colorization network minimize the semantic confusion and object intervention problems.
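To make the role of the saliency map concrete, here is an illustrative sketch of a saliency-weighted reconstruction loss, where salient pixels contribute more to the gradient signal. This is a simplified stand-in for the saliency-guided design described in the paper, not its exact loss:

```python
# Illustrative sketch: weight a pixel-wise L1 loss by a saliency map so
# that salient objects dominate the training signal. Simplified stand-in,
# not the exact loss of the paper.
import torch

def saliency_weighted_l1(pred, target, saliency, eps=1.0):
    """pred/target: NxCxHxW; saliency: Nx1xHxW in [0, 1]."""
    weight = eps + saliency  # eps keeps non-salient regions supervised too
    return (weight * (pred - target).abs()).mean()

# Example with random tensors:
pred, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
saliency = torch.rand(2, 1, 64, 64)
print(saliency_weighted_l1(pred, target, saliency))
```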

Here are some examples of the semantic confusion problem.

Semantic Confusion

Here are some examples of the object intervention problem.

Object Intervention

To further prove this point, we give more examples of the generated attention regions and of how the saliency map works.

Attention Region

3.4 How Our Model Learns at Each Epoch

To demonstrate the strong fitting ability of our system, we plot the evolution of the results over multiple epochs of the pre-training stage and the refinement stage. We can see that the CNN learns high-level information in the second stage.

Evolution

4 Legacy Image Colorization

4.1 Portrait Photographs

We choose several famous legacy portrait photographs for our experiments. The chosen photographs cover different races, genders, ages, and scenes. We also select a photo of Andy Lau, which represents contemporary photographs.

People Image

4.2 Landscape Photographs

We choose many landscape photographs by Ansel Adams because of their excellent quality. These photographs are taken from the US National Archives (public domain).

Landscape Image

4.3 Famous Legacy Photographs

In this section, we select some famous photographs (mostly taken before 1950) and give colorized versions of them.

Famous Image2

4.4 Other Works

There are many fantastic legacy photography works. Our colorization system still predicts visually high-quality, plausible colorized images.

Famous Image1

5 Related Projects

Automatic Colorization: Project Github

Learning Representations for Automatic Colorization: Project Paper Github

Colorful Image Colorization: Project Paper Github

Let there be Color!: Project Paper Github

DeOldify: Project Project2 Github

ColouriseSG: Project

Pix2Pix: Project Paper Github

CycleGAN: Project Paper Github

6 Reference

If you find the paper helpful for your research, please cite:

@article{zhao2020scgan,
  title={SCGAN: Saliency Map-guided Colorization with Generative Adversarial Network},
  author={Zhao, Yuzhi and Po, Lai-Man and Cheung, Kwok-Wai and Yu, Wing-Yin and Abbas Ur Rehman, Yasar},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  volume={31},
  number={8},
  pages={3062--3077},
  year={2020}
}

7 Find Our Latest Works About Image / Video Colorization

A similar work on mobile phone image enhancement is available on this webpage

@inproceedings{zhao2019saliency,
  title={Saliency Map-Aided Generative Adversarial Network for RAW to RGB Mapping},
  author={Zhao, Yuzhi and Po, Lai-Man and Zhang, Tiantian and Liao, Zongbang and Shi, Xiang and others},
  booktitle={Proceedings of the International Conference on Computer Vision Workshops},
  pages={3449--3457},
  year={2019}
}

A SOTA fully-automatic video colorization work is available on this webpage

@article{zhao2022vcgan,
  title={VCGAN: Video Colorization with Hybrid Generative Adversarial Network},
  author={Zhao, Yuzhi and Po, Lai-Man and Yu, Wing-Yin and Rehman, Yasar Abbas Ur and Liu, Mengyang and Zhang, Yujia and Ou, Weifeng},
  journal={IEEE Transactions on Multimedia},
  volume={25},
  pages={3017--3032},
  year={2022}
}

A legacy photo restoration work including scribble-based image colorization is available on this webpage

@inproceedings{zhao2021legacy,
  title={Legacy Photo Editing with Learned Noise Prior},
  author={Zhao, Yuzhi and Po, Lai-Man and Lin, Tingyu and Wang, Xuehui and Liu, Kangcheng and Zhang, Yujia and Yu, Wing-Yin and Xian, Pengfei and Xiong, Jingjing},
  booktitle={Proceedings of the Winter Conference on Applications of Computer Vision},
  pages={2103--2112},
  year={2021}
}

A SOTA scribble-based video colorization work is available on this webpage

@article{zhao2023svcnet,
  title={SVCNet: Scribble-Based Video Colorization Network With Temporal Aggregation},
  author={Zhao, Yuzhi and Po, Lai-Man and Liu, Kangcheng and Wang, Xuehui and Yu, Wing-Yin and Xian, Pengfei and Zhang, Yujia and Liu, Mengyang},
  journal={IEEE Transactions on Image Processing},
  volume={32},
  pages={4443--4458},
  year={2023}
}