# Touch-and-Go
Dataset | Website | Paper
<br> <img src='imgs/teaser.jpg' align="right" width=960><br><br><br>

This repository contains the official PyTorch implementation of the applications in our paper *Touch and Go: Learning from Human-Collected Vision and Touch*.
**Touch and Go: Learning from Human-Collected Vision and Touch**<br>
Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens<br>
University of Michigan and Carnegie Mellon University <br>
In NeurIPS 2022 Datasets and Benchmarks Track
## Todo
- Visuo-tactile Self-supervised Learning (see the sketch after this list)
- Tactile-driven Image Stylization
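
As a rough illustration of the first item above, the sketch below pairs each camera frame with its corresponding GelSight frame and trains two encoders with an InfoNCE-style contrastive objective. This is a minimal sketch under stated assumptions, not the code this repository will release: the ResNet-18 backbones, projection size, temperature, and random-tensor "data" are placeholders.

```python
# Illustrative only: a minimal visuo-tactile contrastive step.
# Backbone, projection size, and temperature are placeholder assumptions.
import torch
import torch.nn.functional as F
from torchvision import models


def build_encoder(out_dim=128):
    # Randomly initialized ResNet-18 with a linear projection head (placeholder choice).
    backbone = models.resnet18()
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, out_dim)
    return backbone


def info_nce(z_img, z_touch, temperature=0.07):
    # Paired image/tactile embeddings are positives; all other pairs in the batch are negatives.
    z_img = F.normalize(z_img, dim=1)
    z_touch = F.normalize(z_touch, dim=1)
    logits = z_img @ z_touch.t() / temperature
    targets = torch.arange(z_img.size(0), device=z_img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Random tensors stand in for paired camera and GelSight frames.
vision_enc, touch_enc = build_encoder(), build_encoder()
images = torch.randn(8, 3, 224, 224)   # camera frames
touches = torch.randn(8, 3, 224, 224)  # GelSight frames (stored as RGB images)
loss = info_nce(vision_enc(images), touch_enc(touches))
loss.backward()
```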
## Citation
If you use this code for your research, please cite our paper:
@inproceedings{yang2022touch,
  title={Touch and Go: Learning from Human-Collected Vision and Touch},
  author={Fengyu Yang and Chenyang Ma and Jiacheng Zhang and Jing Zhu and Wenzhen Yuan and Andrew Owens},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022}
}
## Acknowledgments
We thank Xiaofeng Guo and Yufan Zhang for their extensive help with the GelSight sensor, and Daniel Geng, Yuexi Du, and Zhaoying Pan for helpful discussions. This work was supported in part by Cisco Systems and the Wang Chu Chien-Wen Research Scholarship.