# Awesome-Sketch-Synthesis


A collection of papers about sketch synthesis (generation), mainly focusing on stroke-level vector sketch synthesis.

Feel free to create a PR or an issue.


## Outline

- 0. Survey
- 1. Datasets
- 2. Sketch-Synthesis Approaches
- 3. Vector Graphics Generation (2D)
- 4. Vector Graphics Generation (3D)


## 0. Survey

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| Deep Learning for Free-Hand Sketch: A Survey | TPAMI 2022 | [code] |

## 1. Datasets

Here, *Vector strokes* means the dataset provides vector (SVG or stroke-sequence) data, and *With photos* means it provides photo-sketch paired data.

<table> <tr> <td><strong>Level</strong></td> <td><strong>Dataset</strong></td> <td><strong>Source</strong></td> <td><strong>Vector strokes</strong></td> <td><strong>With photos</strong></td> <td><strong>Notes</strong></td> </tr> <tr> <td rowspan="3"><strong>Characters</strong></td> <td> <a href="https://github.com/brendenlake/omniglot/">Omniglot</a> </td> <td> </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Alphabet characters </td> </tr> <tr> <td> <a href="http://kanjivg.tagaini.net/">KanjiVG</a> </td> <td> </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Chinese characters </td> </tr> <tr> <td> <a href="https://github.com/rois-codh/kmnist">Kuzushiji</a> </td> <td> </td> <td> :x: </td> <td> :x: </td> <td> Japanese characters </td> </tr> <tr> <td rowspan="1"><strong>Icon</strong></td> <td> <a href="https://github.com/marcdemers/FIGR-8-SVG">FIGR-8-SVG</a> </td> <td> </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Icons with text descriptions </td> </tr> <tr> <td rowspan="1"><strong>Systematic Symbol</strong></td> <td> <a href="https://github.com/GuangmingZhu/SketchIME">SketchIME</a> </td> <td> ACM MM 2023 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Systematic sketches with semantic annotations </td> </tr> <tr> <td rowspan="9"><strong>Instance-level</strong></td> <td> <a href="http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/">TU-Berlin</a> </td> <td> SIGGRAPH 2012 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Multi-category hand sketches </td> </tr> <tr> <td> <a href="http://sketchy.eye.gatech.edu/">Sketchy</a> </td> <td> SIGGRAPH 2016 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> Multi-category photo-sketch paired </td> </tr> <tr> <td> <a href="https://quickdraw.withgoogle.com/data">QuickDraw</a> </td> <td> ICLR 2018 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Multi-category hand sketches </td> </tr> <tr> <td> <a
href="https://drive.google.com/file/d/15s2BR-QwLgX_DObQBrYlUlZqUU90EL9G/view">QMUL-Shoe-Chair-V2</a> </td> <td> CVPR 2016 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> Only two categories </td> </tr> <tr> <td> <a href="https://github.com/KeLi-SketchX/SketchX-PRIS-Dataset">Sketch Perceptual Grouping (SPG)</a> </td> <td> ECCV 2018 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> With part-level semantic segmentation information </td> </tr> <tr> <td> <a href="https://facex.idvxlab.com/">FaceX</a> </td> <td> AAAI 2019 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Labeled facial sketches </td> </tr> <tr> <td> <a href="https://github.com/facebookresearch/DoodlerGAN">Creative Sketch</a> </td> <td> ICLR 2021 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> With annotated part segmentation </td> </tr> <tr> <td> <a href="https://github.com/HaohanWang/ImageNet-Sketch">ImageNet-Sketch</a> </td> <td> NeurIPS 2019 </td> <td> :x: </td> <td> :x: </td> <td> 50 images for each of the 1000 ImageNet classes </td> </tr> <tr> <td> <a href="https://seva-benchmark.github.io/">SEVA</a> </td> <td> NeurIPS 2023 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> 90K human-generated sketches that vary in detail </td> </tr> <tr> <td rowspan="6"><strong>Scene-level</strong></td> <td> <a href="https://sketchyscene.github.io/SketchyScene/">SketchyScene</a> </td> <td> ECCV 2018 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> With semantic/instance segmentation information </td> </tr> <tr> <td> <a href="http://projects.csail.mit.edu/cmplaces/">CMPlaces</a> </td> <td> TPAMI 2018 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Cross-modal scene dataset </td> </tr> <tr> <td> <a href="http://sweb.cityu.edu.hk/hongbofu/doc/context_based_sketch_classification_Expressive2018.pdf">Context-Sketch</a> </td> <td> Expressive 2018 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Context-based scene sketches for
co-classification </td> </tr> <tr> <td> <a href="https://sysu-imsl.github.io/EdgeGAN/index.html">SketchyCOCO</a> </td> <td> CVPR 2020 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Scene sketch, segmentation and normal images </td> </tr> <tr> <td> <a href="http://www.pinakinathc.me/fscoco/">FS-COCO</a> </td> <td> ECCV 2022 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> Scene sketches with text description </td> </tr> <tr> <td> <a href="https://link.springer.com/article/10.1007/s00371-022-02731-8">SFSD</a> </td> <td> VC 2022 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> Completely hand-drawn scene sketches with label annotation </td> </tr> <tr> <td rowspan="2"><strong>Drawing from photos</strong></td> <td> <a href="http://www.cs.cmu.edu/~mengtial/proj/sketch/">Photo-Sketching</a> </td> <td> WACV 2019 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> Scene photo-sketch paired </td> </tr> <tr> <td> <a href="https://github.com/zachzeyuwang/tracing-vs-freehand">Tracing-vs-Freehand</a> </td> <td> SIGGRAPH 2021 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> Tracings and freehand drawings of images </td> </tr> <tr> <td rowspan="1"><strong>Drawing from 3D models</strong></td> <td> <a href="https://chufengxiao.github.io/DifferSketching/">DifferSketching</a> </td> <td> SIGGRAPH Asia 2022 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> 3D model-sketch paired, with novice and professional ones </td> </tr> <tr> <td rowspan="3"><strong>Portrait</strong></td> <td> <a href="https://mmlab.ie.cuhk.edu.hk/datasets.html">CUFS</a> </td> <td> TPAMI 2009 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Face-sketch pairs </td> </tr> <tr> <td> <a href="https://github.com/yiranran/APDrawingGAN">APDrawing</a> </td> <td> CVPR 2019 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Portrait-sketch paired </td> </tr> <tr> <td> <a
href="https://github.com/kwanyun/SKSF-A">SKSF-A</a> </td> <td> EG 2024 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Face-sketch pairs of seven styles </td> </tr> <tr> <td rowspan="1"><strong>Children's Drawing</strong></td> <td> <a href="https://github.com/facebookresearch/AnimatedDrawings">Amateur Drawings</a> </td> <td> TOG 2023 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> With character bounding boxes, segmentation masks, and joint location annotations </td> </tr> <tr> <td rowspan="2"><strong>Rough sketch</strong></td> <td> <a href="https://esslab.jp/~ess/en/data/davincidataset/">Da Vinci</a> </td> <td> CGI 2018 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Line drawing restoration dataset </td> </tr> <tr> <td> <a href="https://cragl.cs.gmu.edu/sketchbench/">Rough Sketch Benchmark</a> </td> <td> SIGGRAPH Asia 2020 </td> <td> :heavy_check_mark: </td> <td> :heavy_check_mark: </td> <td> Rough and clean sketch pairs (only for evaluation) </td> </tr> <tr> <td rowspan="5"><strong>CAD</strong></td> <td> <a href="https://gfx.cs.princeton.edu/proj/ld3d/">ld3d</a> </td> <td> SIGGRAPH 2008 </td> <td> :x: </td> <td> :x: </td> <td> Line Drawings of 3D Shapes </td> </tr> <tr> <td> <a href="https://ns.inria.fr/d3/OpenSketch/">OpenSketch</a> </td> <td> SIGGRAPH Asia 2019 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Product Design Sketches </td> </tr> <tr> <td> <a href="https://github.com/PrincetonLIPS/SketchGraphs">SketchGraphs</a> </td> <td> ICML 2020 Workshop </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> Sketches extracted from real-world CAD models </td> </tr> <tr> <td> <a href="https://github.com/AutodeskAILab/Fusion360GalleryDataset">Fusion 360 Gallery</a> </td> <td> SIGGRAPH 2021 </td> <td> :heavy_check_mark: </td> <td> :x: </td> <td> For 'sketch and extrude' designs </td> </tr> <tr> <td> <a href="https://floorplancad.github.io/">FloorPlanCAD</a> </td> <td> ICCV 2021 </td> <td> :heavy_check_mark: </td> <td> :x: </td> 
<td> With instance and semantic annotations </td> </tr> <tr> <td rowspan="10"><strong>Anime</strong></td> <td> <a href="https://gwern.net/danbooru2021">Danbooru2021</a> </td> <td> / </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Anime images annotated with tags </td> </tr> <tr> <td> <a href="https://github.com/lllyasviel/DanbooRegion">DanbooRegion</a> </td> <td> ECCV 2020 </td> <td> :x: </td> <td> :x: </td> <td> Anime images with region annotations </td> </tr> <tr> <td> <a href="https://github.com/zsl2018/StyleAnime">Danbooru-Parsing</a> </td> <td> TOG 2023 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> For anime portrait parsing and anime translation </td> </tr> <tr> <td> <a href="https://www.cs.toronto.edu/creativeflow/">CreativeFlow+</a> </td> <td> CVPR 2019 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Large densely annotated artistic video dataset </td> </tr> <tr> <td> <a href="https://github.com/lisiyao21/AnimeInterp">ATD-12K</a> </td> <td> CVPR 2021 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Animation frames with flow annotations </td> </tr> <tr> <td> <a href="https://lisiyao21.github.io/projects/AnimeRun">AnimeRun</a> </td> <td> NeurIPS 2022 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Correspondence dataset for 2D-styled cartoons </td> </tr> <tr> <td> <a href="https://github.com/kangyeolk/AnimeCeleb">AnimeCeleb</a> </td> <td> ECCV 2022 </td> <td> :x: </td> <td> :x: </td> <td> Animation head images with pose annotations </td> </tr> <tr> <td> <a href="https://github.com/ykdai/BasicPBC/tree/main/dataset">PaintBucket-Character</a> </td> <td> CVPR 2024 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Animation frames with region annotations </td> </tr> <tr> <td> <a href="https://zhenglinpan.github.io/sakuga_dataset_webpage/">Sakuga-42M</a> </td> <td> arxiv 24.05 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Cartoon videos with text descriptions and tags </td> </tr> <tr> <td> <a 
href="https://github.com/zhenglinpan/AnitaDataset">Anita</a> </td> <td> online 2024 </td> <td> :x: </td> <td> :heavy_check_mark: </td> <td> Professional hand-drawn cartoon keyframes, with 1080P sketch and color images </td> </tr> </table>
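Several of the stroke-level datasets above (e.g. QuickDraw) distribute drawings in the "stroke-3" format introduced with sketch-rnn: each row is (Δx, Δy, pen_lifted), a pen offset plus a binary flag that marks the end of a stroke. As a minimal sketch (the function name and sample data below are illustrative, not part of any dataset's API), converting such a sequence into absolute-coordinate polylines looks like:

```python
def stroke3_to_polylines(seq):
    """Convert a stroke-3 sequence of (dx, dy, pen_lifted) triples
    into a list of absolute-coordinate polylines (one per stroke)."""
    polylines, current = [], []
    x = y = 0.0
    for dx, dy, lifted in seq:
        x, y = x + dx, y + dy
        current.append((x, y))
        if lifted:  # the pen is lifted after this point, ending the stroke
            polylines.append(current)
            current = []
    if current:  # trailing stroke without an explicit pen-up
        polylines.append(current)
    return polylines

# A tiny illustrative drawing: two separate strokes.
drawing = [(10, 0, 0), (10, 0, 1),   # first stroke: two points
           (0, 10, 0), (0, 10, 1)]   # second stroke
print(stroke3_to_polylines(drawing))
# → [[(10.0, 0.0), (20.0, 0.0)], [(20.0, 10.0), (20.0, 20.0)]]
```

The same inverse pass (cumulative sum of offsets, split on pen-up) is what most viewers use to render these datasets as SVG polylines.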

## 2. Sketch-Synthesis Approaches

### 1) Semantic Concept-to-sketch

<table> <tr> <td><strong>Level</strong></td> <td><strong>Paper</strong></td> <td><strong>Source</strong></td> <td><strong>Code/Project Link</strong></td> </tr> <tr> <td rowspan="12"><strong>Instance-level</strong></td> <td> <a href="https://openreview.net/pdf?id=Hy6GHpkCW">A Neural Representation of Sketch Drawings (sketch-rnn)</a> </td> <td> ICLR 2018 </td> <td> <a href="https://github.com/tensorflow/magenta/tree/master/magenta/models/sketch_rnn">[Code]</a> <a href="https://magenta.tensorflow.org/sketch-rnn-demo">[Project]</a> <a href="https://magenta.tensorflow.org/assets/sketch_rnn_demo/index.html">[Demo]</a> </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/1709.04121.pdf">Sketch-pix2seq: a Model to Generate Sketches of Multiple Categories</a> </td> <td> </td> <td> <a href="https://github.com/MarkMoHR/sketch-pix2seq">[Code]</a> </td> </tr> <tr> <td> <a href="https://idvxlab.com/papers/2019AAAI_Sketcher_Cao.pdf">AI-Sketcher : A Deep Generative Model for Producing High-Quality Sketches</a> </td> <td> AAAI 2019 </td> <td> <a href="https://facex.idvxlab.com/">[Project]</a> </td> </tr> <tr> <td> <a href="https://ieeexplore.ieee.org/abstract/document/8854308">Stroke-based sketched symbol reconstruction and segmentation (stroke-rnn)</a> </td> <td> CGA 2019 </td> <td> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2007.02190">BézierSketch: A generative model for scalable vector sketches</a> </td> <td> ECCV 2020 </td> <td> <a href="https://github.com/dasayan05/stroke-ae">[Code]</a> </td> </tr> <tr> <td> <a href="http://sketchx.ai/pixelor">Pixelor: A Competitive Sketching AI Agent.
So you think you can beat me?</a> </td> <td> SIGGRAPH Asia 2020 </td> <td> <a href="http://sketchx.ai/pixelor">[Project]</a> <a href="https://github.com/dasayan05/neuralsort-siggraph">[Code]</a> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2011.10039">Creative Sketch Generation</a> </td> <td> ICLR 2021 </td> <td> <a href="http://doodlergan.cloudcv.org/">[Project]</a> <a href="https://github.com/facebookresearch/DoodlerGAN">[Code]</a> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2105.02769">Computer-Aided Design as Language</a> </td> <td> arxiv 2105 </td> <td> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2112.03258">DoodleFormer: Creative Sketch Drawing with Transformers</a> </td> <td> ECCV 2022 </td> <td> <a href="https://ankanbhunia.github.io/doodleformer/">[Project]</a> <a href="https://github.com/ankanbhunia/doodleformer">[Code]</a> </td> </tr> <tr> <td> <a href="https://openreview.net/forum?id=4eJ43EN2g6l">SketchKnitter: Vectorized Sketch Generation with Diffusion Models</a> </td> <td> ICLR 2023 </td> <td> <a href="https://github.com/XDUWQ/SketchKnitter">[Code]</a> </td> </tr> <tr> <td> <a href="https://ieeexplore.ieee.org/abstract/document/10144693">Self-Organizing a Latent Hierarchy of Sketch Patterns for Controllable Sketch Synthesis</a> </td> <td> TNNLS 2023 </td> <td> <a href="https://github.com/CMACH508/RPCL-pix2seqH">[Code]</a> </td> </tr> <tr> <td> <a href="https://openreview.net/forum?id=5xadJmgwix">Scale-Adaptive Diffusion Model for Complex Sketch Synthesis</a> </td> <td> ICLR 2024 </td> <td> </td> </tr> </table>
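sketch-rnn and several of its successors above consume sketches in the "stroke-5" extension of the stroke-3 format: each point is (Δx, Δy, p1, p2, p3), where the three pen states (pen down, pen up, end of sketch) are one-hot and every sketch is padded to a fixed length. A minimal stroke-3 → stroke-5 conversion (the helper name is illustrative) might look like:

```python
import numpy as np

def stroke3_to_stroke5(seq, max_len):
    """Pad a stroke-3 array of shape (N, 3) into the fixed-length
    stroke-5 format (max_len, 5): (dx, dy, p_down, p_up, p_end)."""
    seq = np.asarray(seq, dtype=np.float32)
    n = len(seq)
    assert n <= max_len, "sketch longer than max_len"
    out = np.zeros((max_len, 5), dtype=np.float32)
    out[:n, :2] = seq[:, :2]       # pen offsets
    out[:n, 2] = 1.0 - seq[:, 2]   # p_down: pen stays on paper
    out[:n, 3] = seq[:, 2]         # p_up: the stroke ends here
    out[n:, 4] = 1.0               # padding rows: end-of-sketch state
    return out
```

The fixed-length, one-hot form is what lets these models batch variable-length sketches and learn when to stop drawing.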

### 2) Photo-to-sketch

<table> <tr> <td><strong>Data type</strong></td> <td><strong>Paper</strong></td> <td><strong>Source</strong></td> <td><strong>Code/Project Link</strong></td> </tr> <tr> <td rowspan="1"><strong>Facial</strong></td> <td> <a href="https://dl.acm.org/citation.cfm?id=2461964">Style and abstraction in portrait sketching</a> </td> <td> TOG 2013 </td> <td> </td> </tr> <tr> <td rowspan="4"><strong>Instance-level</strong></td> <td> <a href="https://link.springer.com/content/pdf/10.1007%2Fs11263-016-0963-9.pdf">Free-Hand Sketch Synthesis with Deformable Stroke Models</a> </td> <td> IJCV 2017 </td> <td> <a href="https://panly099.github.io/skSyn.html">[Project]</a> <a href="https://github.com/panly099/sketchSynthesis">[code]</a> </td> </tr> <tr> <td> <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Song_Learning_to_Sketch_CVPR_2018_paper.pdf">Learning to Sketch with Shortcut Cycle Consistency</a> </td> <td> CVPR 2018 </td> <td> <a href="https://github.com/seindlut/deep_p2s">[Code1]</a> <a href="https://github.com/MarkMoHR/sketch-photo2seq">[Code2]</a> </td> </tr> <tr> <td> <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Muhammad_Learning_Deep_Sketch_CVPR_2018_paper.pdf">Learning Deep Sketch Abstraction</a> </td> <td> CVPR 2018 </td> <td> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2202.05822">CLIPasso: Semantically-Aware Object Sketching</a> </td> <td> SIGGRAPH 2022 </td> <td> <a href="https://clipasso.github.io/clipasso/">[Project]</a> <a href="https://github.com/yael-vinker/CLIPasso">[Code]</a> </td> </tr> <tr> <td rowspan="3"><strong>Scene-level</strong></td> <td> <a href="https://arxiv.org/abs/2012.09004">Sketch Generation with Drawing Process Guided by Vector Flow and Grayscale</a> </td> <td> AAAI 2021 </td> <td> <a href="https://github.com/TZYSJTU/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale">[Code]</a> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2211.17256">CLIPascene: Scene Sketching with 
Different Types and Levels of Abstraction</a> </td> <td> ICCV 2023 </td> <td> <a href="https://clipascene.github.io/CLIPascene/">[Project]</a> <a href="https://github.com/yael-vinker/SceneSketch">[Code]</a> </td> </tr> <tr> <td> <a href="https://openreview.net/forum?id=fLbdDspNW3">Learning Realistic Sketching: A Dual-agent Reinforcement Learning Approach</a> </td> <td> ACM MM 2024 </td> <td> </td> </tr> <tr> <td rowspan="1"><strong>Technical Drawings</strong></td> <td> <a href="https://arxiv.org/abs/2003.05471">Deep Vectorization of Technical Drawings</a> </td> <td> ECCV 2020 </td> <td> <a href="http://adase.group/3ddl/projects/vectorization/">[Project]</a> <a href="https://github.com/Vahe1994/Deep-Vectorization-of-Technical-Drawings">[code]</a> </td> </tr> </table> <table> <tr> <td><strong>Type</strong></td> <td><strong>Paper</strong></td> <td><strong>Source</strong></td> <td><strong>Code/Project Link</strong></td> </tr> <tr> <td rowspan="7"><strong>Facial</strong></td> <td> <a href="https://github.com/vijishmadhavan/ArtLine">ArtLine</a> </td> <td> Online demo </td> <td> <a href="https://github.com/vijishmadhavan/ArtLine">[Code]</a> </td> </tr> <tr> <td> <a href="http://openaccess.thecvf.com/content_CVPR_2019/papers/Yi_APDrawingGAN_Generating_Artistic_Portrait_Drawings_From_Face_Photos_With_Hierarchical_CVPR_2019_paper.pdf">APDrawingGAN: Generating Artistic Portrait Drawings from Face Photos with Hierarchical GANs</a> </td> <td> CVPR 2019 </td> <td> <a href="https://github.com/yiranran/APDrawingGAN">[Code]</a> <a href="https://face.lol/">[Demo]</a> </td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content_CVPR_2020/papers/Yi_Unpaired_Portrait_Drawing_Generation_via_Asymmetric_Cycle_Mapping_CVPR_2020_paper.pdf">Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping</a> </td> <td> CVPR 2020 </td> <td> <a href="https://github.com/yiranran/Unpaired-Portrait-Drawing">[Code]</a> </td> </tr> <tr> <td> <a 
href="https://ieeexplore.ieee.org/document/9069416">Line Drawings for Face Portraits From Photos Using Global and Local Structure Based GANs</a> </td> <td> TPAMI 2020 </td> <td> <a href="https://github.com/yiranran/APDrawingGAN2">[Code]</a> </td> </tr> <tr> <td> <a href="https://ieeexplore.ieee.org/abstract/document/9699090">Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data</a> </td> <td> TPAMI 2022 </td> <td> <a href="https://github.com/yiranran/QMUPD">[Code]</a> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2309.00216">Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation</a> </td> <td> ICCV 2023 </td> <td> <a href="https://github.com/AiArt-HDU/HIDA">[Code]</a> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2403.11263">Stylized Face Sketch Extraction via Generative Prior with Limited Data</a> </td> <td> EG 2024 </td> <td> <a href="https://github.com/kwanyun/StyleSketch/">[Code]</a> <a href="https://kwanyun.github.io/stylesketch_project/">[Project]</a> </td> </tr> <tr> <td rowspan="2"><strong>Instance-level</strong></td> <td> <a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Kaiyue_Pang_Deep_Factorised_Inverse-Sketching_ECCV_2018_paper.pdf">Deep Factorised Inverse-Sketching</a> </td> <td> ECCV 2018 </td> <td> </td> </tr> <tr> <td> <a href="http://openaccess.thecvf.com/content_WACV_2020/papers/Kampelmuhler_Synthesizing_human-like_sketches_from_natural_images_using_a_conditional_convolutional_WACV_2020_paper.pdf">Synthesizing human-like sketches from natural images using a conditional convolutional decoder</a> </td> <td> WACV 2020 </td> <td> <a href="https://github.com/kampelmuehler/synthesizing_human_like_sketches">[Code]</a> </td> </tr> <tr> <td rowspan="4"><strong>Anime</strong></td> <td> <a href="https://github.com/lllyasviel/sketchKeras">sketchKeras</a> </td> <td> online demo </td> <td> <a href="https://github.com/lllyasviel/sketchKeras">[Code]</a> </td> </tr> <tr> <td> <a 
href="https://github.com/hepesu/LineDistiller">LineDistiller</a> </td> <td> online demo </td> <td> <a href="https://github.com/hepesu/LineDistiller">[Code]</a> </td> </tr> <tr> <td> <a href="https://github.com/Mukosame/Anime2Sketch">Anime2Sketch</a> </td> <td> online demo </td> <td> <a href="https://github.com/Mukosame/Anime2Sketch">[Code]</a> </td> </tr> <tr> <td> <a href="https://dl.acm.org/doi/10.1145/3550454.3555504">Reference Based Sketch Extraction via Attention Mechanism</a> </td> <td> SIGGRAPH Asia 2022 </td> <td> <a href="https://github.com/ref2sketch/ref2sketch">[Code]</a> </td> </tr> <tr> <td rowspan="2"><strong>Scene-level</strong></td> <td> <a href="https://arxiv.org/pdf/1901.00542.pdf">Photo-Sketching: Inferring Contour Drawings from Images</a> </td> <td> WACV 2019 </td> <td> <a href="https://github.com/mtli/PhotoSketch">[Code]</a> <a href="http://www.cs.cmu.edu/~mengtial/proj/sketch/">[Project]</a> </td> </tr> <tr> <td> <a href="https://carolineec.github.io/informative_drawings/">Learning to generate line drawings that convey geometry and semantics</a> </td> <td> CVPR 2022 </td> <td> <a href="https://github.com/carolineec/informative-drawings">[Code]</a> <a href="https://carolineec.github.io/informative_drawings/">[Project]</a> </td> </tr> <tr> <td rowspan="1"><strong>Arbitrary</strong></td> <td> <a href="https://dl.acm.org/doi/abs/10.1145/3592392">Semi-supervised reference-based sketch extraction using a contrastive learning</a> </td> <td> SIGGRAPH 2023 </td> <td> <a href="https://github.com/Chanuku/semi_ref2sketch_code">[Code]</a> <a href="https://chanuku.github.io/Semi_ref2sketch/">[Project]</a> </td> </tr> </table>

### 3) Text/Attribute-to-sketch

| Type | Paper | Source | Code/Project Link |
| --- | --- | --- | --- |
| Facial | Text2Sketch: Learning Face Sketch from Facial Attribute Text | ICIP 2018 | |
| Scene-level | Sketchforme: Composing Sketched Scenes from Text Descriptions for Interactive Applications | UIST 2019 | |
| Scene-level | Scones: Towards Conversational Authoring of Sketches | IUI 2020 | |

| Type | Paper | Source | Code/Project Link |
| --- | --- | --- | --- |
| Arbitrary | Modern Evolution Strategies for Creativity: Fitting Concrete Images and Abstract Concepts | arxiv 21.09 | [code] [project] |
| Arbitrary | CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders | NeurIPS 2022 | [code] |
| Arbitrary | StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation | IJCAI 2022 | [code] |
| SVG | VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models | CVPR 2023 | [project] |
| Arbitrary | SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation | BMVC 2023 | [code] |
| Arbitrary | DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion Models | NeurIPS 2023 | [project] [code] |
| Icon | IconShop: Text-Based Vector Icon Synthesis with Autoregressive Transformers | SIGGRAPH Asia 2023 | [project] |
| SVG | Text-Guided Vector Graphics Customization | SIGGRAPH Asia 2023 | [project] |
| Arbitrary | Text-based Vector Sketch Editing with Image Editing Diffusion Prior | ICME 2024 | [code] |
| SVG | SVGDreamer: Text Guided SVG Generation with Diffusion Model | CVPR 2024 | [project] [code] |
| SVG | NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation | CVPR 2024 | [project] |
| SVG | Text-to-Vector Generation with Neural Path Representation | SIGGRAPH 2024 | [project] |
| Arbitrary | SVGCraft: Beyond Single Object Text-to-SVG Synthesis with Comprehensive Canvas Layout | arxiv 24.04 | [code] |
| SVG | VectorPainter: A Novel Approach to Stylized Vector Graphics Synthesis with Vectorized Strokes | arxiv 24.05 | |

### 4) 3D shape-to-sketch

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| DeepShapeSketch : Generating hand drawing sketches from 3D objects | IJCNN 2019 | |
| Neural Contours: Learning to Draw Lines from 3D Shapes | CVPR 2020 | [project] [code] |
| Cloud2Curve: Generation and Vectorization of Parametric Sketches | CVPR 2021 | [project] |
| Neural Strokes: Stylized Line Drawing of 3D Shapes | ICCV 2021 | [code] |
| Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models | PG 2022 | |
| CAD2Sketch: Generating Concept Sketches from CAD Sequences | SIGGRAPH Asia 2022 | [project] |

### 5) Art-to-sketch

Here we list sketch synthesis works based on other image types, such as line art, rough sketches, and manga.

#### a) Line art

| Paper | Source | Code/Project Link | Deep learning? |
| --- | --- | --- | --- |
| Closure-aware Sketch Simplification | SIGGRAPH Asia 2015 | [Project] | No |
| StrokeAggregator: Consolidating Raw Sketches into Artist-Intended Curve Drawings | SIGGRAPH 2018 | [Project] | No |
| StrokeStrip: Joint Parameterization and Fitting of Stroke Clusters | SIGGRAPH 2021 | [Project] [code] | No |
| StripMaker: Perception-driven Learned Vector Sketch Consolidation | SIGGRAPH 2023 | | No |
| Region-Aware Simplification and Stylization of 3D Line Drawings | EG 2024 | | No |

| Paper | Source | Code/Project Link | Deep learning? |
| --- | --- | --- | --- |
| Topology-Driven Vectorization of Clean Line Drawings | TOG 2013 | | No |
| Fidelity vs. Simplicity: a Global Approach to Line Drawing Vectorization | SIGGRAPH 2016 | [Project] | No |
| A Delaunay triangulation based approach for cleaning rough sketches | C&G 2018 | [Code] | No |
| Semantic Segmentation for Line Drawing Vectorization Using Neural Networks | EG 2018 | [project] [code] | Yes |
| Deep Line Drawing Vectorization via Line Subdivision and Topology Reconstruction | PG 2019 | | Yes |
| Inertia-based Fast Vectorization of Line Drawings | PG 2019 | | No |
| Vectorization of Line Drawings via Polyvector Fields | TOG 2019 | [Code] | No |
| Integer-Grid Sketch Simplification and Vectorization | SGP 2020 | [Project] [Code] | No |
| Deep Vectorization of Technical Drawings | ECCV 2020 | [project] [code] | Yes |
| General Virtual Sketching Framework for Vector Line Art | SIGGRAPH 2021 | [project] [code] | Yes |
| Keypoint-Driven Line Drawing Vectorization via PolyVector Flow | SIGGRAPH Asia 2021 | [project] | Hybrid |
| End-to-end Line Drawing Vectorization | AAAI 2022 | | Yes |
| Vectorizing Line Drawings of Arbitrary Thickness via Boundary-based Topology Reconstruction | CGF 2022 | | No |
| Singularity-Free Frame Fields for Line Drawing Vectorization | SGP 2023 | [code] | No |
| Deep Sketch Vectorization via Implicit Surface Extraction | SIGGRAPH 2024 | [project] [code] | Hybrid |

#### b) Rough sketch simplification / cleanup

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| A Benchmark for Rough Sketch Cleanup | SIGGRAPH Asia 2020 | [Project] [Code] |

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup | SIGGRAPH 2016 | [Code] [Project] |
| Mastering Sketching: Adversarial Augmentation for Structured Prediction | SIGGRAPH 2018 | [Code] [Project] |
| Real-Time Data-Driven Interactive Rough Sketch Inking | SIGGRAPH 2018 | [Code] [Project] |
| Perceptual-aware Sketch Simplification Based on Integrated VGG Layers | TVCG 2019 | |

#### c) Manga (Comics)

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| Deep extraction of manga structural lines | SIGGRAPH 2017 | [Code] |
| Manga Filling Style Conversion with Screentone Variational Autoencoder | SIGGRAPH Asia 2020 | [Project] [Code] |
| Generating Manga from Illustrations via Mimicking Manga Workflow | CVPR 2021 | [Project] [Code] |
| MangaGAN: Unpaired Photo-to-Manga Translation Based on The Methodology of Manga Drawing | AAAI 2021 | |
| MARVEL: Raster Gray-level Manga Vectorization via Primitive-wise Deep Reinforcement Learning | TCSVT 2023 | |

## 3. Vector Graphics Generation (2D)

Here we focus on learning-based vector graphics generation that does not depend on vector training data, as well as traditional vectorization algorithms.
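Many of the optimization-based works below (CLIPDraw-style methods, stroke-based painters) parameterize an image as a set of Bézier strokes whose control points are optimized against a raster loss. The curve evaluation underneath is just the cubic Bernstein form; a minimal NumPy sketch (the control points below are illustrative):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample n points on a cubic Bezier stroke (Bernstein form)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# One illustrative stroke: starts at (0, 0), ends at (1, 0),
# pulled upward by the two middle control points.
stroke = cubic_bezier(np.array([0.0, 0.0]), np.array([0.0, 1.0]),
                      np.array([1.0, 1.0]), np.array([1.0, 0.0]))
```

Because the sampled points are differentiable in the control points, these methods can backpropagate an image-space loss (e.g. a CLIP similarity) through a differentiable rasterizer down to the stroke parameters.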

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| Synthesizing Programs for Images using Reinforced Adversarial Learning | ICML 2018 | [Code] |
| Unsupervised Doodling and Painting with Improved SPIRAL | arxiv 1910 | [Project] |

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| Layered Image Vectorization via Semantic Simplification | arxiv 24.06 | [webpage] [code] |
| ProcessPainter: Learning to draw from sequence data | SIGGRAPH Asia 2024 | [code] |
| Segmentation-guided Layer-wise Image Vectorization with Gradient Fills | ECCV 2024 | |
| Towards High-fidelity Artistic Image Vectorization via Texture-Encapsulated Shape Parameterization | CVPR 2024 | |
| SuperSVG: Superpixel-based Scalable Vector Graphics Synthesis | CVPR 2024 | [code] |
| Vector Graphics Generation via Mutually Impulsed Dual-domain Diffusion | CVPR 2024 | |
| Optimize and Reduce: A Top-Down Approach for Image Vectorization | AAAI 2024 | [code] |
| Segmentation-Based Parametric Painting | arxiv 23.11 | [code] [project] |
| Editable Image Geometric Abstraction via Neural Primitive Assembly | ICCV 2023 | |
| Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region | ACM MM 2023 | [code] |
| Intelli-Paint: Towards Developing More Human-Intelligible Painting Agents | ECCV 2022 | [project] |
| Towards Layer-wise Image Vectorization | CVPR 2022 | [code] [project] |
| Paint Transformer: Feed Forward Neural Painting with Stroke Prediction | ICCV 2021 | [code] |
| Combining Semantic Guidance and Deep Reinforcement Learning For Generating Human Level Paintings | CVPR 2021 | [code] |
| Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes | CVPR 2021 | [code] |
| Im2Vec: Synthesizing Vector Graphics without Vector Supervision | CVPR 2021 | [Project] [code] |
| Stylized Neural Painting | CVPR 2021 | [Code] [project] |
| Learning to Paint With Model-based Deep Reinforcement Learning | ICCV 2019 | [code] |
| Strokenet: A neural painting environment | ICLR 2019 | [Code] |
| Neural Painters: A learned differentiable constraint for generating brushstroke paintings | arxiv 1904 | [Code] |
| Learning to Sketch with Deep Q Networks and Demonstrated Strokes | arxiv 1810 | |
| Unsupervised Image to Sequence Translation with Canvas-Drawer Networks | arxiv 1809 | |

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| Depixelizing pixel art | SIGGRAPH 2011 | |
| Perception-Driven Semi-Structured Boundary Vectorization | SIGGRAPH 2018 | [Webpage] |
| PolyFit: Perception-aligned Vectorization of Raster Clip-art via Intermediate Polygonal Fitting | SIGGRAPH 2020 | [Webpage] [Code] |
| ClipGen: A Deep Generative Model for Clipart Vectorization and Synthesis | TVCG 2021 | |
| TCB-Spline-Based Image Vectorization | TOG 2022 | |
| Image vectorization and editing via linear gradient layer decomposition | SIGGRAPH 2023 | |

## 4. Vector Graphics Generation (3D)

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| 3Doodle: Compact Abstraction of Objects with 3D Strokes | SIGGRAPH 2024 | [code] |
| Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering | arxiv 24.05 | [webpage] |

| Paper | Source | Code/Project Link |
| --- | --- | --- |
| Wired Perspectives: Multi-View Wire Art Embraces Generative AI | CVPR 2024 | [code] [webpage] |
| Fabricable 3D Wire Art | SIGGRAPH 2024 | |