iupacGPT

IUPAC-based large-scale molecular pre-trained model for property prediction and molecular generation

The IUPAC (International Union of Pure and Applied Chemistry) nomenclature is a globally recognized naming system that assigns unique names to chemical compounds. As the form of molecular representation closest to natural language, it lends itself to large-scale pre-training on molecular data with machine learning approaches from natural language processing (NLP). Although SMILES is currently the most popular molecular representation among generative models, different representations suit different scenarios; given the readability advantages of IUPAC names, it is worth exploring how these two representations compare on molecular generation and regression/classification tasks. In this work, we adapt the transformer to a large IUPAC corpus by constructing a GPT-2-like language model named iupacGPT. For each downstream task other than molecular generation, we freeze the pre-trained parameters and attach lightweight trainable networks for fine-tuning. The results show that the pre-trained iupacGPT captures general knowledge that transfers successfully to downstream tasks such as molecular generation, binary classification, and property regression. Moreover, under the same setup, iupacGPT outperforms smilesGPT on these downstream tasks. Overall, transformer-like language models pre-trained on IUPAC corpora are promising alternatives to their SMILES-based counterparts: they are more intuitive in terms of interpretability and semantics, and they scale well with pre-training data size.
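As sketched below, each downstream task reuses the frozen backbone and trains only a small task head. This is a minimal illustration with the Hugging Face transformers API; the checkpoint path, head size, and optimizer settings are assumptions for illustration, not the repository's exact code.

```python
import torch
from transformers import GPT2Model

# Load the pre-trained iupacGPT backbone (the checkpoint path is a placeholder).
backbone = GPT2Model.from_pretrained("checkpoints/iupac")

# Freeze every pre-trained parameter.
for param in backbone.parameters():
    param.requires_grad = False

# Lightweight trainable head: a single linear layer for binary classification.
head = torch.nn.Linear(backbone.config.n_embd, 2)

def classify(input_ids):
    # Summarize each sequence with the hidden state of its final token
    # (assumes unpadded, left-to-right inputs).
    hidden = backbone(input_ids=input_ids).last_hidden_state
    return head(hidden[:, -1, :])

# Only the head is optimized; the backbone stays fixed.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
```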

Acknowledgements

We thank the authors of C5T5: Controllable Generation of Organic Molecules with Transformers, IUPAC2Struct: Transformer-based artificial neural networks for the conversion between chemical notations, and Generative Pre-Training from Molecules for releasing their code. The code in this repository is based on their source code releases (https://github.com/dhroth/c5t5, https://github.com/sergsb/IUPAC2Struct, and https://github.com/sanjaradylov/smiles-gpt). If you find this code useful, please consider citing their work.

Requirements

Python==3.8
RDKit==2020.03.3.0
pytorch
torchvision
torchaudio
cpuonly
tokenizers
adapter-transformers
pytorch-lightning
bertviz

https://github.com/rdkit/rdkit

Model & data

PubChem

The large-scale corpus of IUPAC names used for pre-training is drawn from PubChem: https://pubchem.ncbi.nlm.nih.gov/
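Such a corpus can, for example, be assembled through PubChem's programmatic interfaces. The snippet below uses the PubChemPy package, which is an assumption for illustration (it is not listed in the requirements above and is not necessarily how the authors collected the data).

```python
import pubchempy as pcp  # assumed helper package, not part of this repository

# Fetch the IUPAC name of a compound by PubChem CID (2244 is aspirin).
props = pcp.get_properties("IUPACName", 2244, namespace="cid")
print(props)  # e.g. [{'CID': 2244, 'IUPACName': '2-acetyloxybenzoic acid'}]
```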

Fine-tune the pre-trained checkpoint on a binary classification dataset (here BBBP, whose label column is p_np):

python3 scripts/iupac_classification.py checkpoints/iupac data/bbbp.csv p_np -e

python3 scripts/iupac_classification_pro.py checkpoints/iupac data/bbbp.csv p_np -e

Run language-model pre-training:

python iupac_language-modeling_train.py
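For orientation, a causal language-modeling training step of the kind this script presumably performs can be sketched as follows; all sizes below are illustrative assumptions, not the repository's actual configuration.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# A small GPT-2-style causal LM over an IUPAC-name vocabulary.
# vocab_size, depth, and width here are illustrative assumptions.
config = GPT2Config(vocab_size=1500, n_layer=8, n_head=8, n_embd=256)
model = GPT2LMHeadModel(config)

# One training step: passing labels=input_ids makes the model compute
# the shifted next-token cross-entropy loss internally.
input_ids = torch.randint(0, config.vocab_size, (4, 64))  # dummy token batch
loss = model(input_ids=input_ids, labels=input_ids).loss
loss.backward()
```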

Model Metrics

MOSES

Molecular Sets (MOSES) is a benchmarking platform that supports machine learning research for drug discovery. MOSES implements several popular molecular generation models and provides a set of metrics for evaluating the quality and diversity of generated molecules, with the aim of standardizing research on molecular generation and facilitating the sharing and comparison of new models. https://github.com/molecularsets/moses
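A minimal example of scoring generated molecules with MOSES is shown below. Note that MOSES consumes SMILES strings, so IUPAC names generated by iupacGPT would first have to be converted to structures (e.g., with an IUPAC-to-SMILES tool such as the IUPAC2Struct work acknowledged above).

```python
import moses

# MOSES scores SMILES strings; a real evaluation would use thousands of
# generated molecules. The short list below is only a stand-in.
generated = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
metrics = moses.get_all_metrics(generated)
print(metrics)  # validity, uniqueness, novelty, FCD, and related metrics
```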

QEPPI

QEPPI is a quantitative estimate of protein-protein interaction (PPI) targeting drug-likeness, used as an additional metric for generated molecules.

https://github.com/ohuelab/QEPPI

License

Code is released under the MIT license.

Cite: