CHALET: Cornell House Agent Learning Environment
CHALET is a 3D house simulator with support for navigation and manipulation. Unlike existing systems, CHALET supports both a wide range of object manipulations and complex environment layouts consisting of multiple rooms. The range of object manipulations includes the ability to pick up and place objects, toggle the state of objects like taps or televisions, open or close containers, and insert or remove objects from these containers. In addition, the simulator comes with 58 rooms that can be combined to create houses, including 10 default house layouts. CHALET is therefore suitable for setting up challenging environments for various AI tasks that require complex language understanding and planning, such as navigation, manipulation, instruction following, and interactive question answering.
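To make the interaction model concrete, here is a toy Python sketch of the kinds of object-state transitions described above (toggling, opening/closing containers, and inserting objects). All class and method names here are illustrative assumptions for exposition only; they are not the simulator's actual API.

```python
# Toy illustration of CHALET-style object manipulation.
# Hypothetical stand-in class, NOT the real simulator interface.

class ToyHouseObject:
    """A minimal stand-in for an interactive object in a CHALET-like house."""

    def __init__(self, name, toggleable=False, container=False):
        self.name = name
        self.toggleable = toggleable
        self.container = container
        self.on = False          # state for taps, televisions, etc.
        self.open = False        # state for cabinets, fridges, etc.
        self.contents = []       # objects placed inside a container

    def toggle(self):
        if not self.toggleable:
            raise ValueError(f"{self.name} cannot be toggled")
        self.on = not self.on

    def open_close(self):
        if not self.container:
            raise ValueError(f"{self.name} is not a container")
        self.open = not self.open

    def insert(self, obj):
        if not (self.container and self.open):
            raise ValueError(f"{self.name} must be an open container")
        self.contents.append(obj)


# Example: toggle a tap, then open a fridge and put a cup inside.
tap = ToyHouseObject("tap", toggleable=True)
fridge = ToyHouseObject("fridge", container=True)

tap.toggle()            # tap is now on
fridge.open_close()     # fridge is now open
fridge.insert("cup")    # cup is inside the fridge
```

The point of the sketch is only the action taxonomy: toggle actions change a binary on/off state, open/close actions gate containers, and insertion is conditioned on the container being open.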
Release
CHALET Version 0.1 is available for download at: http://clic.nlp.cornell.edu/resources/CHALET/
An unstable version of the CHALET source code is now available.
Please see the Wiki for details on how to use these builds and the source code.
For support and issues, please contact us through the issues section of the repository.
Data
CHALET has a large corpus of natural language instructions paired with human demonstrations. The corpus is available here. The corpus was published by Misra et al. 2018, and includes over 12K single instructions and 1.5K instruction paragraphs. Baselines and code are available in the CIFF repository.
Source Code
The CHALET source code is released without proprietary assets purchased from the Unity Asset Store. These assets are required to work with the original source, but not with the released binaries. We extensively modified the resources for CHALET. If you wish to work with the source code, please contact us (chalet-3d@googlegroups.com) for instructions on obtaining the required resources. See more details in the wiki.
Attribution
Environment and Simulator:
CHALET: Cornell House Agent Learning Environment
Claudia Yan, Dipendra Misra, Andrew Bennett, Aaron Walsman, Yonatan Bisk and Yoav Artzi
arXiv report 2018
Instructional Data:
Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction
Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi
EMNLP 2018
License
Attribution 4.0 International (CC BY 4.0)
YouTube Demos:
<a href="https://youtu.be/FBirx-10JPE">Demo 1</a> <a href="https://youtu.be/EpGS5606rn8">Demo 2</a> <a href="https://youtu.be/KAPyvdT05B0">Demo 3</a>
<p align="center"><img src="http://s1cyan.github.io/images/ctech/cabinetglass.gif"></p> <p align="center"><img src="http://s1cyan.github.io/images/ctech/candle.gif"></p> <p align="center"><img src="http://s1cyan.github.io/images/ctech/dresserdrawer.gif"></p> <p align="center"><img src="http://s1cyan.github.io/images/ctech/fridge.gif"></p> <p align="center"><img src="http://s1cyan.github.io/images/ctech/sink.gif"></p>