# Awesome-Sketch-Synthesis

A collection of papers about sketch synthesis (generation), focusing mainly on stroke-level vector sketch synthesis.
Feel free to create a PR or an issue.
## Outlines

- [0. Survey](#0-survey)
- [1. Datasets](#1-datasets)
- [2. Sketch-Synthesis Approaches](#2-sketch-synthesis-approaches)
- [3. Vector Graphics Generation (2D)](#3-vector-graphics-generation-2d)
- [4. Vector Graphics Generation (3D)](#4-vector-graphics-generation-3d)

## 0. Survey

## 1. Datasets
Here **Vector strokes** means the dataset provides stroke-level vector (SVG) data, and **With photos** means it provides photo-sketch paired data.
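As a quick illustration of what stroke-level vector data looks like: QuickDraw/sketch-rnn-style datasets store a sketch as a sequence of pen offsets in the stroke-3 format `(Δx, Δy, pen_lifted)`. Below is a minimal Python sketch (the `stroke3_to_polylines` helper is ours, for illustration only) that converts such a sequence into absolute-coordinate polylines:

```python
import numpy as np

def stroke3_to_polylines(strokes):
    """Convert a stroke-3 sketch into a list of absolute-coordinate polylines.

    `strokes` is an (N, 3) array of (dx, dy, pen_lifted) rows, as used by
    QuickDraw / sketch-rnn style datasets: pen_lifted == 1 means the pen is
    lifted *after* drawing to this point, ending the current stroke.
    """
    polylines, current = [], []
    x, y = 0.0, 0.0
    for dx, dy, pen_lifted in np.asarray(strokes, dtype=float):
        x, y = x + dx, y + dy
        current.append((x, y))
        if pen_lifted == 1:          # stroke ends here; start a new polyline
            polylines.append(current)
            current = []
    if current:                      # trailing stroke without an explicit pen-up
        polylines.append(current)
    return polylines

# A tiny two-stroke example: an "L" shape and a detached dot.
sketch = [(0, 0, 0), (0, 10, 0), (5, 0, 1), (10, -10, 1)]
print(stroke3_to_polylines(sketch))
```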
<table>
<tr>
<td><strong>Level</strong></td>
<td><strong>Dataset</strong></td>
<td><strong>Source</strong></td>
<td><strong>Vector strokes</strong></td>
<td><strong>With photos</strong></td>
<td><strong>Notes</strong></td>
</tr>
<tr>
<td rowspan="3"><strong>Characters</strong></td>
<td> <a href="https://github.com/brendenlake/omniglot/">Omniglot</a> </td>
<td> </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Handwritten characters from multiple alphabets </td>
</tr>
<tr>
<td> <a href="http://kanjivg.tagaini.net/">KanjiVG</a> </td>
<td> </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Japanese kanji characters </td>
</tr>
<tr>
<td> <a href="https://github.com/rois-codh/kmnist">Kuzushiji</a> </td>
<td> </td>
<td> :x: </td>
<td> :x: </td>
<td> Cursive Japanese characters </td>
</tr>
<tr>
<td rowspan="1"><strong>Icon</strong></td>
<td> <a href="https://github.com/marcdemers/FIGR-8-SVG">FIGR-8-SVG</a> </td>
<td> </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Icons with text descriptions </td>
</tr>
<tr>
<td rowspan="1"><strong>Systematic Symbol</strong></td>
<td> <a href="https://github.com/GuangmingZhu/SketchIME">SketchIME</a> </td>
<td> ACM MM 2023</td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Systematic sketches with semantic annotations </td>
</tr>
<tr>
<td rowspan="9"><strong>Instance-level</strong></td>
<td> <a href="http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/">TU-Berlin</a> </td>
<td> SIGGRAPH 2012 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Multi-category hand sketches </td>
</tr>
<tr>
<td> <a href="http://sketchy.eye.gatech.edu/">Sketchy</a> </td>
<td> SIGGRAPH 2016 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> Multi-category photo-sketch pairs </td>
</tr>
<tr>
<td> <a href="https://quickdraw.withgoogle.com/data">QuickDraw</a> </td>
<td> ICLR 2018 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Multi-category hand sketches </td>
</tr>
<tr>
<td> <a href="https://drive.google.com/file/d/15s2BR-QwLgX_DObQBrYlUlZqUU90EL9G/view">QMUL-Shoe-Chair-V2</a> </td>
<td> CVPR 2016 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> Only two categories </td>
</tr>
<tr>
<td> <a href="https://github.com/KeLi-SketchX/SketchX-PRIS-Dataset">Sketch Perceptual Grouping (SPG)</a> </td>
<td> ECCV 2018 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> With part-level semantic segmentation information </td>
</tr>
<tr>
<td> <a href="https://facex.idvxlab.com/">FaceX</a> </td>
<td> AAAI 2019 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Labeled facial sketches </td>
</tr>
<tr>
<td> <a href="https://github.com/facebookresearch/DoodlerGAN">Creative Sketch</a> </td>
<td> ICLR 2021 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> With annotated part segmentation </td>
</tr>
<tr>
<td> <a href="https://github.com/HaohanWang/ImageNet-Sketch">ImageNet-Sketch</a> </td>
<td> NeurIPS 2019 </td>
<td> :x: </td>
<td> :x: </td>
<td> 50 images for each of the 1000 ImageNet classes </td>
</tr>
<tr>
<td> <a href="https://seva-benchmark.github.io/">SEVA</a> </td>
<td> NeurIPS 2023 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> 90K human-generated sketches that vary in detail </td>
</tr>
<tr>
<td rowspan="6"><strong>Scene-level</strong></td>
<td> <a href="https://sketchyscene.github.io/SketchyScene/">SketchyScene</a> </td>
<td> ECCV 2018 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> With semantic/instance segmentation information </td>
</tr>
<tr>
<td> <a href="http://projects.csail.mit.edu/cmplaces/">CMPlaces</a> </td>
<td> TPAMI 2018 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Cross-modal scene dataset </td>
</tr>
<tr>
<td> <a href="http://sweb.cityu.edu.hk/hongbofu/doc/context_based_sketch_classification_Expressive2018.pdf">Context-Skecth</a> </td>
<td> Expressive 2018 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Context-based scene sketches for co-classification </td>
</tr>
<tr>
<td> <a href="https://sysu-imsl.github.io/EdgeGAN/index.html">SketchyCOCO</a> </td>
<td> CVPR 2020 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Scene sketch, segmentation and normal images </td>
</tr>
<tr>
<td> <a href="http://www.pinakinathc.me/fscoco/">FS-COCO</a> </td>
<td> ECCV 2022 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> Scene sketches with text description </td>
</tr>
<tr>
<td> <a href="https://link.springer.com/article/10.1007/s00371-022-02731-8">SFSD</a> </td>
<td> VC 2022 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> Completely hand-drawn scene sketches with label annotation </td>
</tr>
<tr>
<td rowspan="2"><strong>Drawing from photos</strong></td>
<td> <a href="http://www.cs.cmu.edu/~mengtial/proj/sketch/">Photo-Sketching</a> </td>
<td> WACV 2019 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> Scene photo-sketch pairs </td>
</tr>
<tr>
<td> <a href="https://github.com/zachzeyuwang/tracing-vs-freehand">Tracing-vs-Freehand</a> </td>
<td> SIGGRAPH 2021 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> Tracings and freehand drawings of images </td>
</tr>
<tr>
<td rowspan="1"><strong>Drawing from 3D models</strong></td>
<td> <a href="https://chufengxiao.github.io/DifferSketching/">DifferSketching</a> </td>
<td> SIGGRAPH Asia 2022 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> 3D model-sketch pairs, drawn by both novices and professionals </td>
</tr>
<tr>
<td rowspan="3"><strong>Portrait</strong></td>
<td> <a href="https://mmlab.ie.cuhk.edu.hk/datasets.html">CUFS</a> </td>
<td> TPAMI 2009 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Face-sketch pairs </td>
</tr>
<tr>
<td> <a href="https://github.com/yiranran/APDrawingGAN">APDrawing</a> </td>
<td> CVPR 2019 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Portrait-sketch pairs </td>
</tr>
<tr>
<td> <a href="https://github.com/kwanyun/SKSF-A">SKSF-A</a> </td>
<td> EG 2024 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Face-sketch pairs of seven styles </td>
</tr>
<tr>
<td rowspan="1"><strong>Children's Drawing</strong></td>
<td> <a href="https://github.com/facebookresearch/AnimatedDrawings">Amateur Drawings</a> </td>
<td> TOG 2023 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> With character bounding boxes, segmentation masks, and joint location annotations </td>
</tr>
<tr>
<td rowspan="2"><strong>Rough sketch</strong></td>
<td> <a href="https://esslab.jp/~ess/en/data/davincidataset/">Da Vinci</a> </td>
<td> CGI 2018 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Line drawing restoration dataset </td>
</tr>
<tr>
<td> <a href="https://cragl.cs.gmu.edu/sketchbench/">Rough Sketch Benchmark</a> </td>
<td> SIGGRAPH Asia 2020 </td>
<td> :heavy_check_mark: </td>
<td> :heavy_check_mark: </td>
<td> Rough and clean sketch pairs (only for evaluation) </td>
</tr>
<tr>
<td rowspan="5"><strong>CAD</strong></td>
<td> <a href="https://gfx.cs.princeton.edu/proj/ld3d/">ld3d</a> </td>
<td> SIGGRAPH 2008 </td>
<td> :x: </td>
<td> :x: </td>
<td> Line Drawings of 3D Shapes </td>
</tr>
<tr>
<td> <a href="https://ns.inria.fr/d3/OpenSketch/">OpenSketch</a> </td>
<td> SIGGRAPH Asia 2019 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Product Design Sketches </td>
</tr>
<tr>
<td> <a href="https://github.com/PrincetonLIPS/SketchGraphs">SketchGraphs</a> </td>
<td> ICML 2020 Workshop </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> Sketches extracted from real-world CAD models </td>
</tr>
<tr>
<td> <a href="https://github.com/AutodeskAILab/Fusion360GalleryDataset">Fusion 360 Gallery</a> </td>
<td> SIGGRAPH 2021 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> For 'sketch and extrude' designs </td>
</tr>
<tr>
<td> <a href="https://floorplancad.github.io/">FloorPlanCAD</a> </td>
<td> ICCV 2021 </td>
<td> :heavy_check_mark: </td>
<td> :x: </td>
<td> With instance and semantic annotations </td>
</tr>
<tr>
<td rowspan="10"><strong>Anime</strong></td>
<td> <a href="https://gwern.net/danbooru2021">Danbooru2021</a> </td>
<td> / </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Anime images annotated with tags </td>
</tr>
<tr>
<td> <a href="https://github.com/lllyasviel/DanbooRegion">DanbooRegion</a> </td>
<td> ECCV 2020 </td>
<td> :x: </td>
<td> :x: </td>
<td> Anime images with region annotations </td>
</tr>
<tr>
<td> <a href="https://github.com/zsl2018/StyleAnime">Danbooru-Parsing</a> </td>
<td> TOG 2023 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> For anime portrait parsing and anime translation </td>
</tr>
<tr>
<td> <a href="https://www.cs.toronto.edu/creativeflow/">CreativeFlow+</a> </td>
<td> CVPR 2019 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Large densely annotated artistic video dataset </td>
</tr>
<tr>
<td> <a href="https://github.com/lisiyao21/AnimeInterp">ATD-12K</a> </td>
<td> CVPR 2021 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Animation frames with flow annotations </td>
</tr>
<tr>
<td> <a href="https://lisiyao21.github.io/projects/AnimeRun">AnimeRun</a> </td>
<td> NeurIPS 2022 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Correspondence dataset for 2D-styled cartoons </td>
</tr>
<tr>
<td> <a href="https://github.com/kangyeolk/AnimeCeleb">AnimeCeleb</a> </td>
<td> ECCV 2022 </td>
<td> :x: </td>
<td> :x: </td>
<td> Animation head images with pose annotations </td>
</tr>
<tr>
<td> <a href="https://github.com/ykdai/BasicPBC/tree/main/dataset">PaintBucket-Character</a> </td>
<td> CVPR 2024 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Animation frames with region annotations </td>
</tr>
<tr>
<td> <a href="https://zhenglinpan.github.io/sakuga_dataset_webpage/">Sakuga-42M</a> </td>
<td> arXiv 2405 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Cartoon videos with text descriptions and tags </td>
</tr>
<tr>
<td> <a href="https://github.com/zhenglinpan/AnitaDataset">Anita</a> </td>
<td> Online 2024 </td>
<td> :x: </td>
<td> :heavy_check_mark: </td>
<td> Professional hand-drawn cartoon keyframes, with 1080P sketch and color images </td>
</tr>
</table>
## 2. Sketch-Synthesis Approaches

### 1) Semantic Concept-to-sketch
<table>
<tr>
<td><strong>Level</strong></td>
<td><strong>Paper</strong></td>
<td><strong>Source</strong></td>
<td><strong>Code/Project Link</strong></td>
</tr>
<tr>
<td rowspan="12"><strong>Instance-level</strong></td>
<td> <a href="https://openreview.net/pdf?id=Hy6GHpkCW">A Neural Representation of Sketch Drawings (sketch-rnn)</a> </td>
<td> ICLR 2018 </td>
<td>
<a href="https://github.com/tensorflow/magenta/tree/master/magenta/models/sketch_rnn">[Code]</a>
<a href="https://magenta.tensorflow.org/sketch-rnn-demo">[Project]</a>
<a href="https://magenta.tensorflow.org/assets/sketch_rnn_demo/index.html">[Demo]</a>
</td>
</tr>
<tr>
<td> <a href="https://arxiv.org/pdf/1709.04121.pdf">Sketch-pix2seq: a Model to Generate Sketches of Multiple Categories</a> </td>
<td> arXiv 1709 </td>
<td>
<a href="https://github.com/MarkMoHR/sketch-pix2seq">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://idvxlab.com/papers/2019AAAI_Sketcher_Cao.pdf">AI-Sketcher : A Deep Generative Model for Producing High-Quality Sketches</a> </td>
<td> AAAI 2019 </td>
<td> <a href="https://facex.idvxlab.com/">[Project]</a> </td>
</tr>
<tr>
<td> <a href="https://ieeexplore.ieee.org/abstract/document/8854308">Stroke-based sketched symbol reconstruction and segmentation (stroke-rnn)</a> </td>
<td> CGA 2019 </td>
<td> </td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2007.02190">BĂ©zierSketch: A generative model for scalable vector sketches</a> </td>
<td> ECCV 2020 </td>
<td>
<a href="https://github.com/dasayan05/stroke-ae">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="http://sketchx.ai/pixelor">Pixelor: A Competitive Sketching AI Agent. So you think you can beat me?</a> </td>
<td> SIGGRAPH Asia 2020 </td>
<td>
<a href="http://sketchx.ai/pixelor">[Project]</a>
<a href="https://github.com/dasayan05/neuralsort-siggraph">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2011.10039">Creative Sketch Generation</a> </td>
<td> ICLR 2021 </td>
<td>
<a href="http://doodlergan.cloudcv.org/">[Project]</a>
<a href="https://github.com/facebookresearch/DoodlerGAN">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2105.02769">Computer-Aided Design as Language</a> </td>
<td> arXiv 2105 </td>
<td>
</td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2112.03258">DoodleFormer: Creative Sketch Drawing with Transformers</a> </td>
<td> ECCV 2022 </td>
<td>
<a href="https://ankanbhunia.github.io/doodleformer/">[Project]</a>
<a href="https://github.com/ankanbhunia/doodleformer">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://openreview.net/forum?id=4eJ43EN2g6l">SketchKnitter: Vectorized Sketch Generation with Diffusion Models</a> </td>
<td> ICLR 2023 </td>
<td>
<a href="https://github.com/XDUWQ/SketchKnitter">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://ieeexplore.ieee.org/abstract/document/10144693">Self-Organizing a Latent Hierarchy of Sketch Patterns for Controllable Sketch Synthesis</a> </td>
<td> TNNLS 2023 </td>
<td>
<a href="https://github.com/CMACH508/RPCL-pix2seqH">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://openreview.net/forum?id=5xadJmgwix">Scale-Adaptive Diffusion Model for Complex Sketch Synthesis</a> </td>
<td> ICLR 2024 </td>
<td>
</td>
</tr>
</table>
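Several of the models above (sketch-rnn being the canonical example) generate a sketch autoregressively: at each step the decoder outputs a Gaussian mixture over the pen offset (Δx, Δy) plus a categorical pen state, and one point is sampled from it. The snippet below is a minimal, self-contained illustration of one such sampling step under those assumptions; the parameter names are ours, not any paper's API.

```python
import numpy as np

def sample_step(pi, mu, sigma, rho, pen_probs, rng=np.random.default_rng()):
    """Sample one (dx, dy, pen_state) from a mixture-density decoder output.

    pi:        (K,) mixture weights, summing to 1
    mu:        (K, 2) component means for (dx, dy)
    sigma:     (K, 2) per-axis standard deviations
    rho:       (K,) per-component correlation between dx and dy
    pen_probs: (3,) probabilities for (pen down, pen up, end of sketch)
    """
    k = rng.choice(len(pi), p=pi)                  # pick a mixture component
    sx, sy = sigma[k]
    cov = np.array([[sx * sx, rho[k] * sx * sy],
                    [rho[k] * sx * sy, sy * sy]])  # bivariate covariance
    dx, dy = rng.multivariate_normal(mu[k], cov)
    pen = rng.choice(3, p=pen_probs)               # categorical pen state
    return dx, dy, pen

# Toy parameters for a 2-component mixture.
dx, dy, pen = sample_step(
    pi=np.array([0.7, 0.3]),
    mu=np.array([[1.0, 0.0], [0.0, 1.0]]),
    sigma=np.array([[0.1, 0.1], [0.2, 0.2]]),
    rho=np.array([0.0, 0.5]),
    pen_probs=np.array([0.9, 0.08, 0.02]),
)
```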
### 2) Photo-to-sketch

The first table below lists methods that output vector sketches; the second lists methods that output raster sketch images.
<table>
<tr>
<td><strong>Data type</strong></td>
<td><strong>Paper</strong></td>
<td><strong>Source</strong></td>
<td><strong>Code/Project Link</strong></td>
</tr>
<tr>
<td rowspan="1"><strong>Facial</strong></td>
<td> <a href="https://dl.acm.org/citation.cfm?id=2461964">Style and abstraction in portrait sketching</a> </td>
<td> TOG 2013 </td>
<td>
</td>
</tr>
<tr>
<td rowspan="4"><strong>Instance-level</strong></td>
<td> <a href="https://link.springer.com/content/pdf/10.1007%2Fs11263-016-0963-9.pdf">Free-Hand Sketch Synthesis with Deformable Stroke Models</a> </td>
<td> IJCV 2017 </td>
<td>
<a href="https://panly099.github.io/skSyn.html">[Project]</a>
<a href="https://github.com/panly099/sketchSynthesis">[code]</a>
</td>
</tr>
<tr>
<td> <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Song_Learning_to_Sketch_CVPR_2018_paper.pdf">Learning to Sketch with Shortcut Cycle Consistency</a> </td>
<td> CVPR 2018 </td>
<td> <a href="https://github.com/seindlut/deep_p2s">[Code1]</a> <a href="https://github.com/MarkMoHR/sketch-photo2seq">[Code2]</a> </td>
</tr>
<tr>
<td> <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Muhammad_Learning_Deep_Sketch_CVPR_2018_paper.pdf">Learning Deep Sketch Abstraction</a> </td>
<td> CVPR 2018 </td>
<td> </td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2202.05822">CLIPasso: Semantically-Aware Object Sketching</a> </td>
<td> SIGGRAPH 2022 </td>
<td>
<a href="https://clipasso.github.io/clipasso/">[Project]</a>
<a href="https://github.com/yael-vinker/CLIPasso">[Code]</a>
</td>
</tr>
<tr>
<td rowspan="3"><strong>Scene-level</strong></td>
<td> <a href="https://arxiv.org/abs/2012.09004">Sketch Generation with Drawing Process Guided by Vector Flow and Grayscale</a> </td>
<td> AAAI 2021 </td>
<td>
<a href="https://github.com/TZYSJTU/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2211.17256">CLIPascene: Scene Sketching with Different Types and Levels of Abstraction</a> </td>
<td> ICCV 2023 </td>
<td>
<a href="https://clipascene.github.io/CLIPascene/">[Project]</a>
<a href="https://github.com/yael-vinker/SceneSketch">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://openreview.net/forum?id=fLbdDspNW3">Learning Realistic Sketching: A Dual-agent Reinforcement Learning Approach</a> </td>
<td> ACM MM 2024 </td>
<td>
</td>
</tr>
<tr>
<td rowspan="1"><strong>Technical Drawings</strong></td>
<td> <a href="https://arxiv.org/abs/2003.05471">Deep Vectorization of Technical Drawings</a> </td>
<td> ECCV 2020 </td>
<td>
<a href="http://adase.group/3ddl/projects/vectorization/">[Project]</a>
<a href="https://github.com/Vahe1994/Deep-Vectorization-of-Technical-Drawings">[code]</a>
</td>
</tr>
</table>
<table>
<tr>
<td><strong>Type</strong></td>
<td><strong>Paper</strong></td>
<td><strong>Source</strong></td>
<td><strong>Code/Project Link</strong></td>
</tr>
<tr>
<td rowspan="7"><strong>Facial</strong></td>
<td> <a href="https://github.com/vijishmadhavan/ArtLine">ArtLine</a> </td>
<td> Online demo </td>
<td>
<a href="https://github.com/vijishmadhavan/ArtLine">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="http://openaccess.thecvf.com/content_CVPR_2019/papers/Yi_APDrawingGAN_Generating_Artistic_Portrait_Drawings_From_Face_Photos_With_Hierarchical_CVPR_2019_paper.pdf">APDrawingGAN: Generating Artistic Portrait Drawings from Face Photos with Hierarchical GANs</a> </td>
<td> CVPR 2019 </td>
<td>
<a href="https://github.com/yiranran/APDrawingGAN">[Code]</a>
<a href="https://face.lol/">[Demo]</a>
</td>
</tr>
<tr>
<td> <a href="https://openaccess.thecvf.com/content_CVPR_2020/papers/Yi_Unpaired_Portrait_Drawing_Generation_via_Asymmetric_Cycle_Mapping_CVPR_2020_paper.pdf">Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping</a> </td>
<td> CVPR 2020 </td>
<td>
<a href="https://github.com/yiranran/Unpaired-Portrait-Drawing">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://ieeexplore.ieee.org/document/9069416">Line Drawings for Face Portraits From Photos Using Global and Local Structure Based GANs</a> </td>
<td> TPAMI 2020 </td>
<td>
<a href="https://github.com/yiranran/APDrawingGAN2">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://ieeexplore.ieee.org/abstract/document/9699090">Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data</a> </td>
<td> TPAMI 2022 </td>
<td>
<a href="https://github.com/yiranran/QMUPD">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2309.00216">Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation</a> </td>
<td> ICCV 2023 </td>
<td>
<a href="https://github.com/AiArt-HDU/HIDA">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://arxiv.org/abs/2403.11263">Stylized Face Sketch Extraction via Generative Prior with Limited Data</a> </td>
<td> EG 2024 </td>
<td>
<a href="https://github.com/kwanyun/StyleSketch/">[Code]</a>
<a href="https://kwanyun.github.io/stylesketch_project/">[Project]</a>
</td>
</tr>
<tr>
<td rowspan="2"><strong>Instance-level</strong></td>
<td> <a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Kaiyue_Pang_Deep_Factorised_Inverse-Sketching_ECCV_2018_paper.pdf">Deep Factorised Inverse-Sketching</a> </td>
<td> ECCV 2018 </td>
<td> </td>
</tr>
<tr>
<td> <a href="http://openaccess.thecvf.com/content_WACV_2020/papers/Kampelmuhler_Synthesizing_human-like_sketches_from_natural_images_using_a_conditional_convolutional_WACV_2020_paper.pdf">Synthesizing human-like sketches from natural images using a conditional convolutional decoder</a> </td>
<td> WACV 2020 </td>
<td>
<a href="https://github.com/kampelmuehler/synthesizing_human_like_sketches">[Code]</a>
</td>
</tr>
<tr>
<td rowspan="4"><strong>Anime</strong></td>
<td> <a href="https://github.com/lllyasviel/sketchKeras">sketchKeras</a> </td>
<td> Online demo </td>
<td>
<a href="https://github.com/lllyasviel/sketchKeras">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://github.com/hepesu/LineDistiller">LineDistiller</a> </td>
<td> Online demo </td>
<td>
<a href="https://github.com/hepesu/LineDistiller">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://github.com/Mukosame/Anime2Sketch">Anime2Sketch</a> </td>
<td> Online demo </td>
<td>
<a href="https://github.com/Mukosame/Anime2Sketch">[Code]</a>
</td>
</tr>
<tr>
<td> <a href="https://dl.acm.org/doi/10.1145/3550454.3555504">Reference Based Sketch Extraction via Attention Mechanism</a> </td>
<td> SIGGRAPH Asia 2022 </td>
<td>
<a href="https://github.com/ref2sketch/ref2sketch">[Code]</a>
</td>
</tr>
<tr>
<td rowspan="2"><strong>Scene-level</strong></td>
<td> <a href="https://arxiv.org/pdf/1901.00542.pdf">Photo-Sketching: Inferring Contour Drawings from Images</a> </td>
<td> WACV 2019 </td>
<td>
<a href="https://github.com/mtli/PhotoSketch">[Code]</a>
<a href="http://www.cs.cmu.edu/~mengtial/proj/sketch/">[Project]</a>
</td>
</tr>
<tr>
<td> <a href="https://carolineec.github.io/informative_drawings/">Learning to generate line drawings that convey geometry and semantics</a> </td>
<td> CVPR 2022 </td>
<td>
<a href="https://github.com/carolineec/informative-drawings">[Code]</a>
<a href="https://carolineec.github.io/informative_drawings/">[Project]</a>
</td>
</tr>
<tr>
<td rowspan="1"><strong>Arbitrary</strong></td>
<td> <a href="https://dl.acm.org/doi/abs/10.1145/3592392">Semi-supervised reference-based sketch extraction using a contrastive learning</a> </td>
<td> SIGGRAPH 2023 </td>
<td>
<a href="https://github.com/Chanuku/semi_ref2sketch_code">[Code]</a>
<a href="https://chanuku.github.io/Semi_ref2sketch/">[Project]</a>
</td>
</tr>
</table>
### 3) Text/Attribute-to-sketch

### 4) 3D shape-to-sketch

### 5) Art-to-sketch

Here we list sketch synthesis based on other image types, such as manga, line art, and rough sketches.

#### a) Line art

- Raster-to-Vector (a.k.a. Vectorization), illustrated by the toy tracing sketch below
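As a rough illustration of what vectorization involves (a toy baseline of ours, not a method from the papers listed here), the snippet below traces iso-contours of a raster line drawing with scikit-image and simplifies them into polylines. The `vectorize` helper and its default parameters are assumptions for illustration; learning-based vectorizers recover clean centerline strokes and topology instead of outline contours.

```python
import numpy as np
from skimage import io, measure

def vectorize(path, threshold=0.5, tolerance=1.5):
    """Trace a raster line drawing into simplified polygonal outlines.

    Assumes dark strokes on a white background. This toy baseline only
    extracts iso-contours around the ink and simplifies them.
    """
    gray = io.imread(path, as_gray=True)           # floats in [0, 1]
    ink = 1.0 - gray                               # dark strokes -> high values
    contours = measure.find_contours(ink, threshold)
    return [measure.approximate_polygon(c, tolerance) for c in contours]

polylines = vectorize("line_art.png")              # hypothetical input file
print(f"{len(polylines)} outline polylines extracted")
```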
#### b) Rough sketch simplification / cleanup

#### c) Manga (Comics)
## 3. Vector Graphics Generation (2D)

Here we focus on learning-based vector graphics generation that does not rely on vector training data, as well as traditional vectorization algorithms.
- Learning with external black-box (non-differentiable) rendering simulator
- Learning with built-in differentiable rendering module
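To make the second bullet concrete, here is a toy, self-contained PyTorch sketch of the differentiable-rendering idea. The hand-rolled soft rasterizer below is ours (real work uses rasterizers such as diffvg, and is far more sophisticated): stroke parameters receive gradients directly through the rendered image, so they can be fit to a raster target without any vector supervision.

```python
import torch

def render_segment(p0, p1, size=64, sharpness=50.0):
    """Differentiably rasterize one line segment into a [0, 1] image.

    Ink intensity falls off smoothly with distance to the segment, so
    gradients flow from pixel losses back to the endpoint coordinates.
    """
    ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                            torch.linspace(0, 1, size), indexing="ij")
    pix = torch.stack([xs, ys], dim=-1)              # (H, W, 2) pixel centers
    d = p1 - p0
    t = ((pix - p0) * d).sum(-1) / (d * d).sum().clamp_min(1e-8)
    proj = p0 + t.clamp(0.0, 1.0).unsqueeze(-1) * d  # nearest point on segment
    dist2 = ((pix - proj) ** 2).sum(-1)
    return torch.exp(-sharpness * size * dist2)      # soft stroke mask

# Target: a diagonal stroke; start from a bad initial guess and optimize.
target = render_segment(torch.tensor([0.2, 0.2]), torch.tensor([0.8, 0.8])).detach()
p0 = torch.tensor([0.1, 0.8], requires_grad=True)
p1 = torch.tensor([0.9, 0.3], requires_grad=True)
opt = torch.optim.Adam([p0, p1], lr=0.02)
for step in range(200):
    opt.zero_grad()
    loss = ((render_segment(p0, p1) - target) ** 2).mean()
    loss.backward()                                  # gradients through the renderer
    opt.step()
print(p0.detach(), p1.detach())
```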
## 4. Vector Graphics Generation (3D)