This repository contains Python scripts for training doc2vec (paragraph vector) models and for inferring vectors for test documents.
Requirements
- Python 2: the pre-trained models and scripts support Python 2 only.
- Gensim: it is best to use my forked version of gensim; the latest gensim has changed its Doc2Vec methods slightly and will not load the pre-trained models.
Pre-Trained Doc2Vec Models
- English Wikipedia DBOW (1.4GB): 2016-doc2vec/enwiki_dbow.tgz
- Associated Press News DBOW (0.6GB): 2016-doc2vec/apnews_dbow.tgz
Pre-Trained Word2Vec Models
For reproducibility, we also release the pre-trained word2vec skip-gram models trained on Wikipedia and AP News:
- English Wikipedia Skip-Gram (1.4GB): 2016-doc2vec/enwiki_sg.tgz
- Associated Press News Skip-gram (0.6GB): 2016-doc2vec/apnews_sg.tgz
Directory Structure and Files
- train_model.py: example Python script that trains a model on some toy data
- infer_test.py: example Python script that infers test document vectors using a trained model (a minimal usage sketch follows this list)
- toy_data: directory containing some toy train/test documents and pre-trained word embeddings
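As a rough guide to how inference works, the sketch below loads a trained model and infers a vector for an unseen document via gensim's Doc2Vec API. The model path and the alpha/steps values are illustrative assumptions, not the exact settings used in infer_test.py.

```python
# Minimal sketch: load a trained doc2vec model and infer a test document vector.
# The model path and inference parameters below are illustrative assumptions.
from gensim.models import Doc2Vec

model = Doc2Vec.load("toy_data/model.bin")            # hypothetical saved model
test_doc = "here is an unseen test document".split()  # tokenised test document

# infer_vector updates only a fresh document vector (the rest of the model is frozen);
# alpha is the initial learning rate, steps the number of inference epochs
vector = model.infer_vector(test_doc, alpha=0.01, steps=1000)
print(vector)
```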
Model Hyper-Parameter Explanation
- sample: this is the sub-sampling threshold to downsample frequent words; 10e-5 is usually good for DBOW, and 10e-6 for DMPV
- hs: 1 turns on hierarchical softmax; this is rarely turned on, as negative sampling is in general better
- dm: 0 = DBOW; 1 = DMPV
- negative: number of negative samples; 5 is a good value
- dbow_words: 1 turns on updating of word embeddings. In plain DBOW, word embeddings are technically not learnt (only document embeddings are). To learn word vectors, DBOW runs a skip-gram step before each DBOW step to update the word embeddings. With dbow_words turned off, DBOW randomly initialises the word embeddings and leaves them that way, which is rather bad in practice (the model never sees relationships between words in the embedding space), so it should be turned on
- dm_concat: 1 = concatenate input word vectors for DMPV; 0 = sum/average input word vectors. This setting is only used for DMPV since DBOW has only one input word
- dm_mean: 1 = average input word vectors; 0 = sum input word vectors. Again, this setting is only used for DMPV. The original paragraph vector paper concatenates input word vectors for DMPV, and that's the setting we used in our paper
- iter: number of iterations/epochs used to train the model (an example configuration tying these settings together is sketched after this list)
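To make the settings above concrete, here is a sketch of how a DBOW model along these lines could be instantiated with gensim's Doc2Vec class. The corpus, vector size, window, min_count and worker count are placeholder values and are not prescribed by this repository.

```python
# Illustrative DBOW configuration using the hyper-parameters described above.
# Corpus, size, window, min_count and workers are placeholder values.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

raw_docs = ["the quick brown fox jumps", "over the lazy dog"]
corpus = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(raw_docs)]

model = Doc2Vec(corpus,
                dm=0,          # 0 = DBOW, 1 = DMPV
                dbow_words=1,  # interleave skip-gram steps so word embeddings are learnt
                sample=10e-5,  # sub-sampling threshold (10e-6 suggested for DMPV)
                hs=0,          # hierarchical softmax off ...
                negative=5,    # ... use 5 negative samples instead
                iter=20,       # number of training epochs
                size=300, window=15, min_count=1, workers=4)

model.save("toy_data/model.bin")  # hypothetical output path
```

Note that recent gensim releases renamed iter to epochs and size to vector_size; the older names above match the forked gensim recommended under Requirements.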
Publications
- Jey Han Lau and Timothy Baldwin (2016). An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation. In Proceedings of the 1st Workshop on Representation Learning for NLP.