# albert_pytorch
This repository contains a PyTorch implementation of the ALBERT model from the paper
*ALBERT: A Lite BERT for Self-supervised Learning of Language Representations*
by Zhenzhong Lan, Mingda Chen, et al.
## Dependencies
- pytorch=1.1.0
- cuda=9.0
- cudnn=7.5
- scikit-learn
- sentencepiece
## Download Pre-trained Models of English
Official download links: google albert

To use them with this version, download the converted PyTorch models (Google Drive):
- v1
- v2
## Fine-tuning
1. Place `config.json` and `30k-clean.model` into the `prev_trained_model/albert_base_v2` directory.
Example:

```
├── prev_trained_model
|   └── albert_base_v2
|       ├── pytorch_model.bin
|       ├── config.json
|       └── 30k-clean.model
```
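As a quick sanity check that the layout matches, a minimal sketch using the `sentencepiece` dependency listed above (`pytorch_model.bin` is present once you have downloaded the converted model or run step 2):

```python
import os

import sentencepiece as spm

MODEL_DIR = "prev_trained_model/albert_base_v2"

# Verify the three files from the layout above are present.
for name in ("pytorch_model.bin", "config.json", "30k-clean.model"):
    path = os.path.join(MODEL_DIR, name)
    assert os.path.exists(path), f"missing {path}"

# Load the SentencePiece model and tokenize a sample sentence.
sp = spm.SentencePieceProcessor()
sp.Load(os.path.join(MODEL_DIR, "30k-clean.model"))
print(sp.get_piece_size())                 # vocabulary size (~30k)
print(sp.encode_as_pieces("hello world"))  # subword pieces
```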
2. Convert the ALBERT TF checkpoint to PyTorch:
```shell
python convert_albert_tf_checkpoint_to_pytorch.py \
    --tf_checkpoint_path=./prev_trained_model/albert_base_tf_v2 \
    --bert_config_file=./prev_trained_model/albert_base_v2/config.json \
    --pytorch_dump_path=./prev_trained_model/albert_base_v2/pytorch_model.bin
```
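To spot-check the conversion, a minimal sketch (this assumes the script saves a plain PyTorch state dict, the usual convention for such converters):

```python
import torch

# Load the converted checkpoint on CPU; no GPU is needed for inspection.
state_dict = torch.load(
    "./prev_trained_model/albert_base_v2/pytorch_model.bin", map_location="cpu"
)

# Print a few parameter names and shapes to verify the TF->PyTorch mapping.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```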
The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.
Before running any of these GLUE tasks, download the GLUE data by running this script and unpack it to some directory `$DATA_DIR`.
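For reference, a minimal sketch of reading one of the downloaded task files (assuming the standard GLUE layout, where SST-2 ships as tab-separated files with `sentence` and `label` columns):

```python
import csv
import os

DATA_DIR = os.environ.get("DATA_DIR", "glue_data")  # where the GLUE data was unpacked

# SST-2 dev set: one sentence and one binary label per row.
with open(os.path.join(DATA_DIR, "SST-2", "dev.tsv"), encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for i, row in enumerate(reader):
        print(row["label"], row["sentence"])
        if i == 2:  # show only the first three examples
            break
```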
3. Run `sh scripts/run_classifier_sst2.sh` to fine-tune the ALBERT model.
## Results
Performance of ALBERT on the GLUE benchmark dev sets, single-model setup:
|  | CoLA | SST-2 | MNLI | STS-B |
| --- | --- | --- | --- | --- |
| metric | matthews_corrcoef | accuracy | accuracy | pearson |

| model | CoLA | SST-2 | MNLI | STS-B |
| --- | --- | --- | --- | --- |
| albert_base_v2 | 0.5756 | 0.926 | 0.8418 | 0.9091 |
| albert_large_v2 | 0.5851 | 0.9507 | 0.9151 | |
| albert_xlarge_v2 | 0.6023 | 0.9221 | | |
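The metric names in the first table map directly onto functions from scikit-learn and SciPy (which scikit-learn pulls in); a minimal sketch of how such dev-set scores are computed, with hypothetical placeholder predictions:

```python
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Hypothetical placeholders; replace with real labels/predictions from a run.
labels, preds = [1, 0, 1, 1], [1, 0, 0, 1]
sts_labels, sts_preds = [0.5, 3.2, 4.8], [0.7, 2.9, 4.5]

print(matthews_corrcoef(labels, preds))    # CoLA: matthews_corrcoef
print(accuracy_score(labels, preds))       # SST-2 / MNLI: accuracy
print(pearsonr(sts_labels, sts_preds)[0])  # STS-B: pearson
```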