Solution to Kaggle's Quora Duplicate Question Detection Competition
The competition can be found at https://www.kaggle.com/c/quora-question-pairs. I was ranked 23rd (top 1%) among 3307 teams with this solution. It is a relatively lightweight model compared to the other top solutions.
Prerequisites
- Download the pre-trained word vectors, namely glove.840B.300d, from https://nlp.stanford.edu/projects/glove/ and put the file into the project directory (a loading sketch follows this list).
- Download the train and test data from https://www.kaggle.com/c/quora-question-pairs/data. Create a folder named "data" and put them in it.
- Install all the packages in requirements.txt.
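For reference, here is a minimal sketch of loading the GloVe file into a word-to-vector dictionary. This is not the repository's actual loading code; the right-split handles the few multi-token entries in glove.840B.300d:

```python
import numpy as np

def load_glove(path="glove.840B.300d.txt"):
    """Load GloVe vectors into a {word: 300-d np.array} dict."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Split from the right: a few glove.840B.300d tokens contain
            # spaces, but the last 300 fields are always floats.
            parts = line.rstrip().rsplit(" ", 300)
            embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return embeddings
```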
Pipeline
- This code is written in Python 3.5 and was tested on a machine with an Intel i5-6300HQ processor and an Nvidia GeForce GTX 950M. Keras is used with the TensorFlow backend and GPU support.
- First, run the nlp_feature_extraction.py and non_nlp_feature_extraction.py scripts. They may take an hour to finish.
- Then run model.py, which may take around 5 hours to produce 10 different predictions on the test set.
- Finally, ensemble and postprocess the predictions with postprocess.py.
Model Explanation
- Questions are preprocessed so that different ways of writing the same thing are unified, and the LSTM does not learn different representations for what is effectively the same text.
- Words which occur more than 100 times in the train set are collected. The rest are considered rare words and replaced by the word "memento", which is my favorite movie by C. Nolan. Since "memento" is irrelevant to almost anything, it is basically a placeholder. The number of rare words shared by both questions of a pair, and how many of those are numeric, are used as features. This whole process leads to better generalization in the LSTM, so that it cannot overfit particular pairs just by memorizing their rare words (see the sketch after this list).
- The features mentioned above are merged with the NLP and non-NLP features. As a result, 4+15+6=25 features are fed to the network.
- The train data is divided into 10 folds. In each run, one fold is held out as the validation set for early stopping. Thus, every run trains on a set that differs from the others by one fold, which contributes to model variance. Since the models are going to be ensembled, a reasonable increase in model variance is something we may want. During the competition, I also ran additional 10-fold runs with different model parameters for better ensembling.
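A minimal sketch of the rare-word handling described above (the function names and the whitespace tokenization are my own simplifications, not the repository's actual code):

```python
from collections import Counter

PLACEHOLDER = "memento"

def build_frequent_vocab(train_questions, min_count=100):
    """Words occurring more than `min_count` times in the train set."""
    counts = Counter(w for q in train_questions for w in q.split())
    return {w for w, c in counts.items() if c > min_count}

def mask_rare_words(question, frequent):
    """Replace every rare word with the placeholder before feeding the LSTM."""
    return " ".join(w if w in frequent else PLACEHOLDER for w in question.split())

def rare_word_features(q1, q2, frequent):
    """Two of the extra features: shared rare words, and how many are numeric."""
    rare1 = {w for w in q1.split() if w not in frequent}
    rare2 = {w for w in q2.split() if w not in frequent}
    shared = rare1 & rare2
    return len(shared), sum(w.isdigit() for w in shared)
```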
Network Architecture
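A rough Keras sketch of the structure described in this README: a shared LSTM, commutative merges (squared difference and sum), and the 25 handcrafted features, with dropout and Gaussian noise. All layer sizes, noise levels, and dropout rates here are illustrative guesses, not the tuned values from the repository:

```python
from keras.layers import (Input, Embedding, LSTM, Dense, Dropout,
                          GaussianNoise, Lambda, add, subtract, concatenate)
from keras.models import Model
from keras.initializers import Constant

def build_model(embedding_matrix, max_len=30, n_features=25):
    q1_in = Input(shape=(max_len,), dtype="int32", name="question1")
    q2_in = Input(shape=(max_len,), dtype="int32", name="question2")
    feats_in = Input(shape=(n_features,), name="handcrafted_features")

    # Both questions share the same embedding and LSTM weights.
    embed = Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                      embeddings_initializer=Constant(embedding_matrix),
                      trainable=False)
    lstm = LSTM(128)  # unit count is a guess, not the tuned value

    q1_vec = lstm(embed(q1_in))
    q2_vec = lstm(embed(q2_in))

    # Commutative merges: swapping the questions leaves both tensors unchanged.
    sq_diff = Lambda(lambda t: t ** 2)(subtract([q1_vec, q2_vec]))
    summed = add([q1_vec, q2_vec])

    x = concatenate([sq_diff, summed, feats_in])
    x = GaussianNoise(0.1)(x)
    x = Dense(128, activation="relu")(x)
    x = Dropout(0.3)(x)
    out = Dense(1, activation="sigmoid")(x)

    model = Model([q1_in, q2_in, feats_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```

Since (q1 - q2)² and q1 + q2 are both symmetric in their arguments, the dense layers see exactly the same input regardless of question order.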
Postprocessing
- All the generated models are average ensembled.
- Since the class imbalance is known to be different in the test set, predictions are adjusted according to the test-set class ratio (see the sketch after this list).
- The postprocessing method I explained at https://www.kaggle.com/divrikwicky/semi-magic-postprocess-0-005-lb-gain is also applied.
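A minimal sketch of the class-ratio adjustment; 0.369 is the train-set duplicate rate and 0.175 is a community estimate of the test-set rate from the competition forums, so treat both numbers as assumptions rather than values taken from this repository:

```python
def rebalance(p, train_pos=0.369, test_pos=0.175):
    """Rescale probabilities so their implied positive rate matches the test set."""
    a = test_pos / train_pos              # shrink factor for the positive class
    b = (1 - test_pos) / (1 - train_pos)  # growth factor for the negative class
    return a * p / (a * p + b * (1 - p))
```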
What made my model successful? BETTER GENERALIZATION
- All the features are question-order independent: when you swap the first and the second question, the feature matrix does not change. For example, instead of using question1_frequency and question2_frequency, I used min_frequency and max_frequency (see the sketch at the end of this list).
- Feature values are bounded when necessary. For example, the number of neighbors is capped at 5 (everything above 5 is set to 5), because I did not want to overfit on a particular pair with a specific neighbor count like 76.
- The features generated by the LSTM are also question-order independent. Both questions share the same LSTM layer, and after it the outputs for question1 and question2 are merged with commutative operations: squared difference and summation.
- I think good preprocessing of the questions also leads to better generalization.
- Replacing the rare words with a placeholder before the LSTM is another thing I did for better generalization.
- The neural network is not too big and has a reasonable amount of dropout and Gaussian noise.
- Different NN predictions are ensembled at the end.
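To make the first two points concrete, here is a small sketch (the function names are hypothetical):

```python
def order_independent_frequency(q1_freq, q2_freq):
    # Swapping question1 and question2 returns the same pair of features.
    return min(q1_freq, q2_freq), max(q1_freq, q2_freq)

def capped_neighbors(n_neighbors, cap=5):
    # A pair with, say, 76 neighbors gets the same value as one with 6,
    # so the model cannot memorize pairs via exotic neighbor counts.
    return min(n_neighbors, cap)
```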