thbert
Yet another pre-trained BERT model for Thai.
BERT is a pre-trained, unsupervised natural language processing model that, after fine-tuning, performs strongly on downstream NLP tasks.
To enable research opportunities in an area with very few Thai computational linguistics resources, we introduce a fundamental language resource, Thai BERT, built from scratch for researchers and enthusiasts.
Pre-trained models
THBERT-Base, uncased: Thai, 12-layer, 768-hidden, 12-heads
THBERT-Large, uncased: Thai, 24-layer, 1024-hidden, 16-heads
Each .zip file contains three items:
- A TensorFlow checkpoint (thbert_model.ckpt) containing the pre-trained weights (3 files).
- A vocab file (vocab.txt) that maps WordPiece tokens to word IDs.
- A config file (bert_config.json) that specifies the hyperparameters of the model.
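If you plan to fine-tune in PyTorch, the three files can be converted with the Hugging Face transformers library. The sketch below is illustrative and not part of this release; it assumes transformers, torch, and tensorflow are installed and that the files sit in the current directory.

```python
# Minimal conversion sketch (an assumption, not an official script of this repo):
# load bert_config.json and the TensorFlow checkpoint, then save PyTorch weights.
import torch
from transformers import BertConfig, BertForPreTraining, BertTokenizer, load_tf_weights_in_bert

config = BertConfig.from_json_file("bert_config.json")        # model hyperparameters
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, "thbert_model.ckpt")   # needs tensorflow installed
torch.save(model.state_dict(), "pytorch_model.bin")

# The vocab file can be used directly with the standard BERT tokenizer.
tokenizer = BertTokenizer(vocab_file="vocab.txt", do_lower_case=True)  # uncased model
print(tokenizer.tokenize("ตัวอย่างประโยคภาษาไทย"))
```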
Pre-training data
Source
thwiki_dump
- https://dumps.wikimedia.org/thwiki/20200401/
- More than 800K sentences/paragraphs
THAI-NEST: Soon
BEST2010: Soon
ORCHID: Soon
Tokenization
sentencepiece
- unigram
- 128K vocabulary size
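For reference, a unigram SentencePiece model with a 128K vocabulary can be trained with the sentencepiece Python package. The snippet below is a sketch under assumed file names (thwiki_corpus.txt, thbert_sp); it is not the exact command used to build this release.

```python
# Sketch of training a SentencePiece unigram tokenizer matching the settings above.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="thwiki_corpus.txt",   # assumed: plain text, one sentence/paragraph per line
    model_prefix="thbert_sp",    # assumed output prefix -> thbert_sp.model, thbert_sp.vocab
    model_type="unigram",        # unigram model, as listed above
    vocab_size=128000,           # 128K vocabulary size, as listed above
)

# Encode a Thai sentence into subword pieces with the trained model.
sp = spm.SentencePieceProcessor(model_file="thbert_sp.model")
print(sp.encode("ตัวอย่างประโยคภาษาไทย", out_type=str))
```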