Mnist
The PyTorch implementation of "Appearance-Based Gaze Estimation in the Wild" (updated on 2021/04/28).
We build benchmarks for gaze estimation in our survey "Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark". This is the implementation of the "Mnist" method in our benchmark. Please refer to our survey for more details.
We recommend using the data processing codes provided in <a href="http://phi-ai.org/GazeHub/" target="_blank">GazeHub</a>. You can directly run this method's code on the processed datasets.
Links to the codes of other gaze estimation methods:
- A Coarse-to-fine Adaptive Network for Appearance-based Gaze Estimation, AAAI 2020 (Coming soon)
- Gaze360: Physically Unconstrained Gaze Estimation in the Wild, ICCV 2019
- Appearance-Based Gaze Estimation Using Dilated-Convolutions, ACCV 2019
- Appearance-Based Gaze Estimation via Evaluation-Guided Asymmetric Regression, ECCV 2018
- RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments, ECCV 2018
- MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation, TPAMI 2017
- It’s written all over your face: Full-face appearance-based gaze estimation, CVPRW 2017
- Eye Tracking for Everyone, CVPR 2016
- Appearance-Based Gaze Estimation in the Wild, CVPR 2015
Performance
The method is evaluated on three tasks. Please refer to our survey for more details.
License
The code is released under the CC BY-NC-SA 4.0 license.
Introduction
The project contains the following files/folders:
- `model.py`, the model code.
- `train.py`, the entry for training.
- `test.py`, the entry for testing.
- `config/`, this folder contains the config of the experiment on each dataset. To run our code, you should write your own `config.yaml`.
- `reader/`, the code for reading data. You can use the provided reader or write your own reader.
Getting Started
Writing your own config.yaml
Normally, for training, you should change:
- `train.save.save_path`, the model is saved in `$save_path$/checkpoint/`.
- `train.data.image`, the path of the images; please use the provided data processing code in <a href="http://phi-ai.org/GazeHub/" target="_blank">GazeHub</a>.
- `train.data.label`, the path of the labels.
- `reader`, the reader to use. It is the filename in the `reader/` folder, e.g., `reader/reader_mpii.py` ==> `reader: reader_mpii`.
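Putting these training keys together, a minimal sketch of the training side of `config.yaml` might look like the block below. The nesting is assumed from the dotted key names above, and the paths and the exact position of the `reader` key are placeholders; check them against the provided configs in `config/`.

```yaml
train:
  save:
    save_path: ./results/mnist-mpii    # placeholder; checkpoints go to $save_path$/checkpoint/
  data:
    image: /path/to/MPIIGaze/Image     # placeholder; images produced by the GazeHub processing code
    label: /path/to/MPIIGaze/Label     # placeholder; labels produced by the GazeHub processing code
reader: reader_mpii                    # uses reader/reader_mpii.py
```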
For testing, you should change:
- `test.load.load_path`, usually the same as `train.save.save_path`. The test result is saved in `$load_path$/evaluation/`.
- `test.data.image`, usually the same as `train.data.image`.
- `test.data.label`, usually the same as `train.data.label`.
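Correspondingly, the test side of the same `config.yaml` might look like the sketch below (again, the nesting is assumed from the dotted key names, and all paths are placeholders):

```yaml
test:
  load:
    load_path: ./results/mnist-mpii    # placeholder; usually identical to train.save.save_path
  data:
    image: /path/to/MPIIGaze/Image     # placeholder; usually identical to train.data.image
    label: /path/to/MPIIGaze/Label     # placeholder; usually identical to train.data.label
```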
Training
You can run
`python train.py config/config_mpii.yaml 0`
This means the code will run with `config_mpii.yaml` and use the 0th person as the test set.
You can also run
`bash run.sh train.py config/config_mpii.yaml`
This means the code will perform leave-one-person-out training automatically. `run.sh` performs the iteration; you can change the number of iterations in `run.sh` for different datasets, e.g., set it to 4 for four-fold validation.
Test
You can run
`python test.py config/config_mpii.yaml 0`
or
`bash run.sh test.py config/config_mpii.yaml`
Result
After training or testing, you can find the results under the `$save_path$` specified in `config_mpii.yaml`.
Citation
If you use our code, please cite:
@InProceedings{Zhang_2015_CVPR,
author = {Zhang, Xucong and Sugano, Yusuke and Fritz, Mario and Bulling, Andreas},
title = {Appearance-Based Gaze Estimation in the Wild},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}
@article{Cheng2021Survey,
title={Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark},
author={Yihua Cheng and Haofei Wang and Yiwei Bao and Feng Lu},
journal={arXiv preprint arXiv:2104.12668},
year={2021}
}
Contact
Please email any questions or comments to yihua_c@buaa.edu.cn.
Reference
- MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation
- EYEDIAP Database: Data Description and Gaze Tracking Evaluation Benchmarks
- Learning-by-Synthesis for Appearance-based 3D Gaze Estimation
- Gaze360: Physically Unconstrained Gaze Estimation in the Wild
- ETH-XGaze: A Large Scale Dataset for Gaze Estimation under Extreme Head Pose and Gaze Variation
- Appearance-Based Gaze Estimation in the Wild
- Appearance-Based Gaze Estimation Using Dilated-Convolutions
- RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments
- It’s written all over your face: Full-face appearance-based gaze estimation
- A Coarse-to-fine Adaptive Network for Appearance-based Gaze Estimation
- Eye Tracking for Everyone
- Adaptive Feature Fusion Network for Gaze Tracking in Mobile Tablets
- On-Device Few-Shot Personalization for Real-Time Gaze Estimation
- A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone