# MAT

The official implementation of the Molecule Attention Transformer (ArXiv).
<p align='center'> <img src="https://github.com/gmum/MAT/blob/master/assets/MAT.png" alt="architecture" width="600"/> </p>

## Code
- `EXAMPLE.ipynb` - a Jupyter notebook with an example of loading pretrained weights into MAT,
- `transformer.py` - the file with the MAT class implementation (a usage sketch follows below),
- `utils.py` - the file with utility functions.
More functionality will be available soon!
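The sketch below is a minimal usage illustration, not code taken from this repository: it builds a MAT instance through a `make_model` factory assumed to live in `transformer.py` (following the Annotated Transformer convention) and runs a forward pass on random molecular inputs. The keyword `d_atom`, the dummy feature sizes, and the forward-pass argument order are all assumptions; consult `transformer.py` and `EXAMPLE.ipynb` for the exact API.

```python
import torch

from transformer import make_model  # assumed factory in transformer.py

# Shapes for a dummy batch; d_atom (per-atom feature size) is an assumption.
batch_size, n_atoms, d_atom = 4, 20, 28

# Build MAT with default hyperparameters (keyword name assumed).
model = make_model(d_atom=d_atom)
model.eval()

# MAT augments self-attention with the molecular graph and inter-atomic
# distances, so a batch consists of node features plus adjacency and
# distance matrices and a padding mask.
node_features = torch.rand(batch_size, n_atoms, d_atom)
adjacency = torch.randint(0, 2, (batch_size, n_atoms, n_atoms)).float()
distances = torch.rand(batch_size, n_atoms, n_atoms)
mask = torch.ones(batch_size, n_atoms, dtype=torch.bool)  # no padded atoms

with torch.no_grad():
    # The argument order below is an assumption; check the repository code.
    prediction = model(node_features, mask, adjacency, distances, None)
print(prediction.shape)
```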
## Pretrained weights

Pretrained weights are available here.
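As a rough illustration of what `EXAMPLE.ipynb` covers, the sketch below loads a checkpoint with plain PyTorch. The checkpoint filename is a placeholder, and the pretrained state dict may not match a freshly constructed model key-for-key, which is why `EXAMPLE.ipynb` remains the authoritative reference.

```python
import torch

from transformer import make_model  # assumed factory, as in the sketch above

# Hyperparameters must match those used for pretraining (value assumed).
model = make_model(d_atom=28)

# 'pretrained_weights.pt' is a placeholder path for the downloaded checkpoint.
state_dict = torch.load('pretrained_weights.pt', map_location='cpu')
model.load_state_dict(state_dict, strict=False)  # tolerate e.g. a missing task head
model.eval()
```

`strict=False` only tolerates missing or unexpected keys (such as a task-specific prediction head); if the remaining keys disagree, the checkpoint and model hyperparameters need to be reconciled first.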
## Results

In this section we present the average rank across the 7 datasets from our benchmark.

- Figure: results for a hyperparameter search budget of 500 combinations.
- Figure: results for a hyperparameter search budget of 150 combinations.
- Figure: results for the pretrained model.
## Requirements

- PyTorch 1.4
## Acknowledgments

The Transformer implementation is inspired by The Annotated Transformer.