# Pre-trained word vectors of 30+ languages
This project has two purposes. First, I'd like to share some of my experience with NLP tasks such as segmentation and word vectors. Second, and more importantly, many people are probably searching for pre-trained word vector models for non-English languages. Alas! English has received far more attention than any other language. Check this to see how easily you can get a variety of pre-trained English word vectors without effort. I think it's time to turn our eyes to a multilingual version of this.
**Near the end of this work, I learned that a similar project named polyglot already exists. I strongly encourage you to check out that great project. How embarrassing! Nevertheless, I decided to release this project; you will find that it has its own flavor, after all.**
## Requirements
- nltk >= 1.11.1
- regex >= 2016.6.24
- lxml >= 3.3.3
- numpy >= 1.11.2
- konlpy >= 0.4.4 (Only for Korean)
- mecab (Only for Japanese)
- pythai >= 0.1.3 (Only for Thai)
- pyvi >= 0.0.7.2 (Only for Vietnamese)
- jieba >= 0.38 (Only for Chinese)
- gensim >= 0.13.1 (for Word2Vec)
- fastText (for fastText)
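
Most of the Python dependencies are available from PyPI. Below is a minimal install sketch, assuming the package names on PyPI match the list above; mecab (for Japanese) and fastText are installed separately, following their own instructions.

```bash
# Install the pip-installable dependencies (package names assumed to match PyPI).
# mecab and fastText must be installed separately.
pip install nltk regex lxml numpy konlpy pythai pyvi jieba gensim
```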
## Background / References
- Check this to learn what word embedding is.
- Check this to get a quick picture of Word2Vec.
- Check this to install fastText.
- Watch this to really understand what's happening under the hood of Word2Vec.
- Go get various English word vectors here if needed.
## Work Flow
- STEP 1. Download the Wikipedia database backup dumps of the language you want.
- STEP 2. Extract running texts to the `data/` folder.
- STEP 3. Run `build_corpus.py`.
- STEP 4-1. Run `make_wordvector.sh` to get Word2Vec word vectors (see the sketch below).
- STEP 4-2. Run `fasttext.sh` to get fastText word vectors.
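
For reference, STEP 4-1 essentially fits a gensim Word2Vec model on the corpus built in STEP 3. The sketch below illustrates the idea; the corpus path `data/ko.txt` and the hyperparameters are illustrative assumptions, not the project's actual settings. Note that the gensim 0.13 API pinned above uses `size`, which became `vector_size` in gensim 4+.

```python
# A minimal sketch of the Word2Vec training step; file names and
# hyperparameters are assumptions for illustration only.
from gensim.models import Word2Vec

class Corpus:
    """Stream the corpus line by line so it need not fit in memory."""
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                yield line.split()  # tokens assumed whitespace-separated

model = Word2Vec(
    Corpus("data/ko.txt"),  # hypothetical corpus file from STEP 3
    size=200,               # vector dimensionality ("vector_size" in gensim 4+)
    window=5,               # context window on each side
    min_count=10,           # ignore rare tokens
    sg=1,                   # 1 = skip-gram, 0 = CBOW
)
model.save("ko.bin")        # hypothetical output path
```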
## Pre-trained models
Two types of pre-trained models are provided: `w` and `f` represent Word2Vec and fastText respectively.
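
Assuming the `w` models were saved with gensim's `save()`, loading one and querying its nearest neighbours might look like the sketch below; the file name `ko.bin` is a hypothetical example.

```python
# A minimal sketch of loading a released Word2Vec model (path is hypothetical).
from gensim.models import Word2Vec

model = Word2Vec.load("ko.bin")
# Nearest neighbours of a query word (model.wv.most_similar in gensim 4+).
print(model.most_similar("한국", topn=5))
```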