Awesome-World-Models

This repository is a collection of research papers on World Models. It aims to provide a useful resource for those interested in this field.

World Models are a class of artificial-intelligence models that learn a simplified internal representation of the external world. They predict future states of the environment from current observations and past experience, allowing an agent to plan and make informed decisions.
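
As a rough illustration of this encode-predict-act loop, the snippet below builds a toy world model in PyTorch: an encoder compresses an observation into a latent state, a learned dynamics model rolls that state forward under chosen actions, and a policy picks actions from the imagined states. The module names, sizes, and GRU-based dynamics are assumptions made for the example, not the architecture of any particular paper listed here.

```python
# Minimal sketch of a world model's encode -> predict -> act loop.
# All module names and sizes (ObsEncoder, LatentDynamics, Policy) are
# illustrative assumptions, not taken from any specific paper.
import torch
import torch.nn as nn

class ObsEncoder(nn.Module):
    """Compress a raw observation into a compact latent state."""
    def __init__(self, obs_dim=64, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, obs):
        return self.net(obs)

class LatentDynamics(nn.Module):
    """Predict the next latent state from the current latent and an action."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.cell = nn.GRUCell(action_dim, latent_dim)

    def forward(self, latent, action):
        return self.cell(action, latent)

class Policy(nn.Module):
    """Pick an action directly from the (real or imagined) latent state."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Linear(latent_dim, action_dim)

    def forward(self, latent):
        return torch.tanh(self.net(latent))

# Imagined rollout: after encoding one real observation, the agent can keep
# planning entirely inside the learned model, without touching the environment.
encoder, dynamics, policy = ObsEncoder(), LatentDynamics(), Policy()
obs = torch.randn(1, 64)           # stand-in for a real observation
latent = encoder(obs)
for _ in range(5):                 # 5-step rollout in latent space
    action = policy(latent)
    latent = dynamics(latent, action)
```

In the papers below, components like these are typically trained jointly from interaction data (reconstruction and prediction losses for the encoder and dynamics, with the policy optimized on imagined rollouts); the snippet only shows the inference-time loop.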

World Model Papers

  1. Learning to Model the World with Language. arXiv 2023. paper

    Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, Anca Dragan.

    Language helps agents predict the future.

  2. Unifying (Machine) Vision via Counterfactual World Modeling. arXiv 2023. paper

    Bear, Daniel M., Kevin Feigelis, Honglin Chen, Wanhee Lee, Rahul Venkatesh, Klemen Kotar, Alex Durango, and Daniel LK Yamins.

  3. World Models. NeurIPS 2018. paper demo

    Ha, David, and Jürgen Schmidhuber.

  4. A Control-Centric Benchmark for Video Prediction. ICLR 2023. paper

    Tian, Stephen, Chelsea Finn, and Jiajun Wu.

  5. Transformers are Sample-Efficient World Models. ICLR 2023. paper

    Micheli, Vincent, Eloi Alonso, and François Fleuret.

  6. Towards Efficient World Models. ICML 2023 Workshops. paper

    Eloi Alonso, Vincent Micheli, and François Fleuret.

  7. Learning Latent Dynamics for Planning from Pixels. ICML 2019. paper

    Hafner, Danijar, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson.

Video Model Papers

  1. MAGVIT: Masked Generative Video Transformer. CVPR 2023. paper demo code

    Yu, Lijun, Yong Cheng, Kihyuk Sohn, José Lezama, Han Zhang, Huiwen Chang, Alexander G. Hauptmann et al.

    3D VQ tokenizer + MaskGIT-style parallel decoding; 37 fps sampling on a V100 (see the sketch after this list).

  2. Diffusion Models for Video Prediction and Infilling. TMLR 2022. paper code

    Tobias Höppe, Arash Mehrjou, Stefan Bauer, Didrik Nielsen, Andrea Dittadi

  3. Unsupervised Learning for Physical Interaction through Video Prediction. NeurIPS 2016. paper

    Finn, Chelsea, Ian Goodfellow, and Sergey Levine.

  4. Unsupervised Learning of Video Representations using LSTMs. ICML 2015. paper

    Srivastava, Nitish, Elman Mansimov, and Ruslan Salakhutdinov.
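
For readers unfamiliar with the MaskGIT-style decoding mentioned in the MAGVIT note above, here is a minimal sketch of iterative parallel decoding over a grid of discrete video tokens: start fully masked, predict all positions at once, commit the most confident tokens, and re-mask the rest on a schedule. The tiny transformer, vocabulary size, and cosine schedule are illustrative assumptions, not MAGVIT's actual configuration.

```python
# Minimal sketch of MaskGIT-style iterative parallel decoding over VQ tokens.
# Model size, vocabulary, and schedule are illustrative assumptions only.
import math
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, DIM, MASK_ID = 512, 64, 128, 512   # MASK_ID is an extra [MASK] token id

class TokenPredictor(nn.Module):
    """Predicts a distribution over the VQ codebook for every token position."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, DIM)   # +1 embedding for [MASK]
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        return self.head(self.backbone(self.embed(tokens)))

@torch.no_grad()
def maskgit_sample(model, steps=8):
    tokens = torch.full((1, SEQ_LEN), MASK_ID)               # start with every position masked
    for t in range(steps):
        probs = model(tokens).softmax(-1)
        sampled = torch.distributions.Categorical(probs).sample()
        conf = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)
        still_masked = tokens == MASK_ID
        conf = conf.masked_fill(~still_masked, float("inf"))  # never re-mask committed tokens
        tokens = torch.where(still_masked, sampled, tokens)   # tentatively fill all masked slots
        # Cosine schedule: fraction of positions that remain masked after this step.
        n_mask = int(SEQ_LEN * math.cos(math.pi / 2 * (t + 1) / steps))
        if n_mask > 0:
            lowest = conf.topk(n_mask, largest=False).indices  # least-confident predictions
            tokens[0, lowest[0]] = MASK_ID                     # re-mask them for the next step
    return tokens

video_tokens = maskgit_sample(TokenPredictor())   # ids to feed a (not shown) 3D VQ decoder
```

Because many tokens are committed at each step, only a handful of forward passes are needed per clip, which is what makes the fast sampling reported for this family of models possible.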

Action Model Papers

  1. Decision Transformer: Reinforcement Learning via Sequence Modeling. NeurIPS 2021. paper

    Chen, Lili, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch.

  2. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. RSS 2023. paper demo

    Chi, Cheng, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song.