
Domain-guided Self-supervision of EEG Data Improves Downstream Classification Performance and Generalizability

Authors: Neeraj Wagh, Jionghao Wei, Samarth Rawal, Brent Berry, Leland Barnard, Benjamin Brinkmann, Gregory Worrell, David Jones, Yogatheesan Varatharajah

Affiliations: University of Illinois at Urbana-Champaign, Mayo Clinic

Work accepted in the proceedings of the ML4H Symposium 2021 with an oral spotlight!


Installation

Mapping naming conventions between the paper and code

Command Line Arguments

How to Reproduce the Results Reported in the Paper

  1. Download the resources folder from Box and place it in the root directory of the project. It contains pre-computed feature arrays (power spectral density, topographic map data, and preprocessed timeseries) along with metadata.
  2. Enter the evaluation folder.
  3. To evaluate the linear baseline, run the following command (see the example invocation after this list):
python linear_baseline_eval.py --dataset={dataset choice} --task={task choice}
  4. To evaluate the proposed ablation models, run the following command:
python ablation_models_eval.py --gpu_idx={gpu index} --dataset={dataset choice} --task={task choice} --mode={ablation model choice}
  5. To evaluate the SOTA model, run the following command:
python SOTA_eval.py --gpu_idx={gpu index} --dataset={dataset choice} --task={task choice}
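
For reference, a hypothetical invocation of the linear baseline evaluation is sketched below. The argument values (my_dataset, my_task) are illustrative placeholders, not confirmed valid choices; see the Command Line Arguments section for the accepted values.

```python
# Hypothetical example invocation; replace my_dataset and my_task with valid
# --dataset and --task choices (see Command Line Arguments).
python linear_baseline_eval.py --dataset=my_dataset --task=my_task
```

The same pattern applies to the ablation and SOTA evaluation scripts, which additionally take --gpu_idx (and --mode for the ablation models).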

How to Fine-tune Existing Pre-trained Models for Downstream Tasks

  1. Download the resources folder from Box and place it in the root directory of the project.
  2. Enter the Fine-tune folder.
  3. To fine-tune the ablation models, run the following command (an example invocation follows this list):
python ablation_pipeline.py --gpu_idx={gpu index} --dataset={dataset choice} --task={task choice} --mode={ablation model choice}
  4. To fine-tune the SOTA model, run the following command:
python SOTA_pipeline.py --gpu_idx={gpu index} --dataset={dataset choice} --task={task choice}
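
As an illustration, a fine-tuning run for one of the ablation models might be launched as below; the GPU index and the dataset, task, and mode values are hypothetical placeholders rather than confirmed options.

```python
# Hypothetical example; substitute valid values for your setup.
python ablation_pipeline.py --gpu_idx=0 --dataset=my_dataset --task=my_task --mode=my_ablation_mode
```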

How to Perform Supervised Learning for Downstream Tasks

  1. Download the resources folder from Box and place it in the root directory of the project.
  2. Enter the supervised_learning folder.
  3. To train the supervised learning models, run the following command (an example invocation is shown below):
python linear_baseline_train.py --dataset={dataset choice} --task={task choice}
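
A hypothetical supervised training invocation is sketched below; as before, the dataset and task names are placeholders and the accepted values are documented under Command Line Arguments.

```python
# Hypothetical example; replace with valid --dataset and --task choices.
python linear_baseline_train.py --dataset=my_dataset --task=my_task
```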

[Work in Progress] - How to Perform Pre-training using Domain-guided Self-supervised Tasks

<!--
1. Enter _Pretrain_ folder.
2. Run the following command:
```python
python pretrain.py --gpu_idx={gpu index} --mode={wanted ablation model}
```
-->

[Work in Progress] - How to Use Your Own Dataset to Train the Self-supervised Learning Tasks

Contact

Citation

Wagh, N., Wei, J., Rawal, S., Berry, B., Barnard, L., Brinkmann, B., Worrell, G., Jones, D. & Varatharajah, Y. (2021). Domain-guided Self-supervision of EEG Data Improves Downstream Classification Performance and Generalizability. Proceedings of Machine Learning for Health, in Proceedings of Machine Learning Research 158:130-142. Available from https://proceedings.mlr.press/v158/wagh21a.html.