<div align="center">
  <h1>[ICCV'23] Size Does Matter: Size-aware Virtual Try-on via Clothing-oriented Transformation Try-on Network</h1>
  <div>
    Chieh-Yun Chen<sup>1,2</sup>, Yi-Chung Chen<sup>1,3</sup>, Hong-Han Shuai<sup>2</sup>, Wen-Huang Cheng<sup>3</sup>
  </div>
  <div>
    <sup>1</sup>Stylins.ai&emsp;<sup>2</sup>National Yang Ming Chiao Tung University&emsp;<sup>3</sup>National Taiwan University
  </div>
  Official PyTorch implementation&emsp;[Paper][Supplement]
</div>

**Abstract:** Virtual try-on tasks aim at synthesizing realistic try-on results by trying target clothes on humans. Most previous works relied on Thin Plate Spline or the prediction of appearance flows to warp clothes to fit human body shapes. However, neither approach can handle complex warping, leading to over-distortion or misalignment. Furthermore, there is a critical unaddressed challenge of adjusting clothing sizes for try-on. To tackle these issues, we propose a Clothing-Oriented Transformation Try-On Network (COTTON). COTTON leverages clothing structure, with landmarks and segmentation, to design a novel landmark-guided transformation for precisely deforming clothes, allowing for size adjustment during try-on. Additionally, to properly remove the clothing region from the human image without losing significant human characteristics, we propose a clothing elimination policy based on both the transformed clothes and human segmentation. This method enables users to try on clothes tucked in or untucked while retaining more human characteristics. Both qualitative and quantitative results show that COTTON outperforms state-of-the-art high-resolution virtual try-on approaches.
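For intuition only, below is a minimal PyTorch sketch of the landmark-guided warping idea described in the abstract: fit a transform from clothing landmarks to size-scaled body landmarks, then warp the garment accordingly. It uses a single least-squares affine for clarity; COTTON's actual transformation is more expressive, and every function, tensor name, and shape here is an illustrative assumption, not this repository's API.

```python
# Minimal sketch of landmark-guided warping with size adjustment.
# NOT the official COTTON transform; a single affine stands in for it.
import torch
import torch.nn.functional as F

def fit_affine(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
    """Least-squares 2x3 affine mapping src landmarks (N, 2) -> dst (N, 2)."""
    ones = torch.ones(src.shape[0], 1)
    A = torch.cat([src, ones], dim=1)         # (N, 3) homogeneous coords
    theta, *_ = torch.linalg.lstsq(A, dst)    # solution has shape (3, 2)
    return theta.T                            # (2, 3)

def warp_clothing(cloth: torch.Tensor, cloth_lm: torch.Tensor,
                  body_lm: torch.Tensor, size_scale: float = 1.0) -> torch.Tensor:
    """Warp cloth (1, C, H, W) so its landmarks align with scaled body landmarks.

    Landmarks use normalized [-1, 1] image coordinates. Scaling the target
    landmarks about their centroid stands in for COTTON's size adjustment.
    """
    center = body_lm.mean(dim=0, keepdim=True)
    target = center + size_scale * (body_lm - center)
    # grid_sample pulls pixels from the source image, so fit the inverse
    # direction: target (output) coordinates -> cloth (source) coordinates.
    theta = fit_affine(target, cloth_lm).unsqueeze(0)        # (1, 2, 3)
    grid = F.affine_grid(theta, cloth.shape, align_corners=False)
    return F.grid_sample(cloth, grid, align_corners=False)

if __name__ == "__main__":
    cloth = torch.rand(1, 3, 256, 192)
    cloth_lm = torch.tensor([[-0.5, -0.5], [0.5, -0.5], [0.0, 0.6]])
    body_lm = torch.tensor([[-0.4, -0.3], [0.4, -0.3], [0.0, 0.7]])
    warped = warp_clothing(cloth, cloth_lm, body_lm, size_scale=1.1)
    print(warped.shape)  # torch.Size([1, 3, 256, 192])
```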
## Implementation
Please see `./code` for more implementation details.
## Multi-garment try-on results
## Multi-size try-on results
## Visual comparison with state-of-the-art virtual try-on methods
### Preserving human characteristics, e.g., tattoos
Thanks to the proposed Clothing Elimination Policy, COTTON preserves human characteristics such as tattoos.
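As a rough illustration of the elimination idea (not this repository's actual code), one can remove the union of the originally worn clothing region and the footprint of the transformed target garment, leaving skin regions, and hence tattoos, untouched. The parse label ids and tensor layouts below are assumptions.

```python
# Hedged sketch of a clothing-elimination step based on both the human
# parsing map and the transformed garment, per the paper's description.
import torch

CLOTH_LABELS = {5, 6, 7}  # hypothetical parse ids for upper-body clothes

def eliminate_clothing(person: torch.Tensor, parse: torch.Tensor,
                       warped_cloth_mask: torch.Tensor,
                       fill_value: float = 0.5) -> torch.Tensor:
    """Gray out the union of the worn-clothes region and the warped target
    garment's footprint, keeping skin (e.g., tattoos) everywhere else.

    person:            (3, H, W) image in [0, 1]
    parse:             (H, W) integer human-parsing map
    warped_cloth_mask: (H, W) bool mask of the transformed target garment
    """
    worn = torch.zeros_like(parse, dtype=torch.bool)
    for lbl in CLOTH_LABELS:
        worn |= parse == lbl
    remove = worn | warped_cloth_mask   # union of both clothing regions
    out = person.clone()
    out[:, remove] = fill_value         # neutral fill where clothes will go
    return out
```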
### Preserving clothing characteristics, e.g., the neckline
Our proposed Clothing Segmentation Network properly segments the region of the garment around the neckline that is hidden when the garment is worn. This helps COTTON produce the correct neckline type in its try-on results, whereas the baselines all introduce undesired noise around the neckline in their final syntheses. A minimal sketch of this mechanism follows.
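Purely as an assumption about the mechanism (not this repository's implementation): given a garment segmentation that separates the visible garment from the inner neckline region hidden when worn, the hidden part can be dropped before warping and composition. Label ids below are hypothetical.

```python
# Hedged sketch: drop the inner neckline region invisible when worn.
import torch

VISIBLE_LABELS = {1}  # hypothetical: 1 = outer garment, 2 = inner neckline

def visible_garment(cloth: torch.Tensor, seg: torch.Tensor) -> torch.Tensor:
    """cloth: (3, H, W) image; seg: (H, W) int labels.
    Returns the garment with hidden (non-visible) regions zeroed out."""
    keep = torch.zeros_like(seg, dtype=torch.bool)
    for lbl in VISIBLE_LABELS:
        keep |= seg == lbl
    return cloth * keep  # bool mask broadcasts over the channel dim
```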
## Citation
```bibtex
@InProceedings{Chen_2023_ICCV,
    author    = {Chen, Chieh-Yun and Chen, Yi-Chung and Shuai, Hong-Han and Cheng, Wen-Huang},
    title     = {Size Does Matter: Size-aware Virtual Try-on via Clothing-oriented Transformation Try-on Network},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {7513-7522}
}
```