Live3D v2.2 (AttNR)
Neural Rendering with Attention: An Incremental Improvement for Anime Character Animation
Windows Bundle1 | One-Click Windows Bundle (中文)1 | Discord | Twitter | YouTube (Demo) | Bilibili | Zhihu (in-depth write-up)
Project history
<i>2022/08</i> @transpchan is no longer an author2 of the Live3Dv1 paper [see CoNR draft]. He is therefore no longer involved in the submissions of Live3Dv1 to AAAI and IJCAI, which were made by the other authors alone.
Call for Authors/Contributors: Please contact me if you are willing to be an author of the paper for v2 and future versions, or if you are willing to sponsor further research. Pull requests are also welcome!
<i>2022/10</i> Live3Dv2 is released. Changes are (1) dropping the ResNet50 encoder, (2) adding self-attention to the U-Net, and (3) tuning the network hyper-parameters (a rough sketch of the self-attention change is shown after this list).
<i>2022/11</i> The weight file for Live3Dv2 is released.
<i>2022/11</i> Stay tuned for Live3D v3.
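To make change (2) above more concrete, here is a minimal PyTorch sketch of how self-attention can be inserted into a U-Net stage. The module names (`SelfAttention2d`, `UNetBottleneck`), channel sizes, head count, and placement are illustrative assumptions, not the actual Live3Dv2 code or its tuned hyper-parameters.

```python
# Illustrative sketch only: Live3Dv2's exact layer layout and hyper-parameters
# are not documented in this README, so the names and dimensions below are
# assumptions, not the project's actual implementation.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Multi-head self-attention over the spatial positions of a feature map."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # channels must be divisible by the number of GroupNorm groups and heads
        self.norm = nn.GroupNorm(8, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Flatten the spatial dims into a token sequence: (B, H*W, C)
        tokens = self.norm(x).flatten(2).transpose(1, 2)
        attended, _ = self.attn(tokens, tokens, tokens)
        # Residual connection, then restore the (B, C, H, W) layout
        return x + attended.transpose(1, 2).reshape(b, c, h, w)


class UNetBottleneck(nn.Module):
    """A U-Net bottleneck stage with self-attention between two conv blocks."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.conv_in = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = SelfAttention2d(channels)
        self.conv_out = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.conv_in(x))
        x = self.attn(x)
        return self.act(self.conv_out(x))


if __name__ == "__main__":
    feats = torch.randn(1, 256, 32, 32)      # e.g. the encoder's deepest feature map
    print(UNetBottleneck(256)(feats).shape)  # torch.Size([1, 256, 32, 32])
```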
Try it yourself!
[Demo1] Generate videos
[Demo23] Colorize your own model
[Demo3] Generate a 3D point cloud from drawings
Footnotes
1. The Windows bundle may include third-party binaries that are not distributed under GPLv3 (mostly from the conda repositories, still open source under other licenses). ↩ ↩2
2. Live3Dv1 was initially @transpchan's personal project, and he was the only person actually writing code, training models, and drafting the Live3Dv1 paper. The other authors contributed nothing beyond making one demo video and asking him to revise the paper to please reviewers; they are also not involved in the development of Live3Dv2. ↩
3. Demo2 uses Live3D-v1, not v2. Live3D-v2 produces better quality; try it yourself. The drawings and character designs are taken from the MIT-licensed CoNR repository maintained by Megvii. ↩