# hncynic
The best Hacker News comments are written with a complete disregard for the linked article.
hncynic is an attempt at capturing this phenomenon by training a model to predict Hacker News comments just from the submission title. More specifically, I trained a Transformer encoder-decoder model on Hacker News data.
In my second attempt, I also included data from Wikipedia.
The generated comments are fun to read, but they often turn out to be meaningless or contradictory -- see here for some examples generated from recent HN titles.
There is a demo live at https://hncynic.leod.org/.
A pretrained model together with some instructions may be found at https://hncynic.leod.org/hncynic-trained-model-v1.tar.gz.
## Steps
### Hacker News
Train a model on Hacker News data only:
- `data`: Prepare the data and extract title-comment pairs from the HN data dump (see the extraction sketch after this list).
- `train`: Train a Transformer translation model on the title-comment pairs using TensorFlow and OpenNMT-tf (see the training sketch after this list).
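For the data step, here is a minimal extraction sketch. It assumes a line-delimited JSON export of the HN dump with `type`, `id`, `parent`, `title`, and `text` fields (as in the public BigQuery `hacker_news` dataset); the repo's actual pipeline may differ in format and cleanup:

```python
import json
import sys

# Minimal sketch, assuming a line-delimited JSON export of the HN dump.
# Field names ("type", "id", "parent", "title", "text") are assumptions
# about the export, not guaranteed to match the repo's pipeline.
with open(sys.argv[1]) as f:
    rows = [json.loads(line) for line in f]

# First pass: map story ids to titles.
titles = {
    row["id"]: row["title"]
    for row in rows
    if row.get("type") == "story" and row.get("title") and "id" in row
}

# Second pass: keep top-level comments only, i.e. comments whose parent
# is a story (replies are excluded, matching the project setup).
for row in rows:
    if row.get("type") == "comment" and row.get("parent") in titles:
        text = (row.get("text") or "").strip()
        if text:
            # Note: comment text may still contain HTML entities and
            # tags; real preprocessing would have to clean these up.
            print(titles[row["parent"]], text, sep="\t")
```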
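For the train step, a rough sketch of what the training run can look like with the OpenNMT-tf 2.x Python API. All paths are placeholders, and the repo itself may drive OpenNMT-tf differently (e.g. via the `onmt-main` CLI and its own configs):

```python
import opennmt

# Placeholder paths: source files hold one title per line, target files
# one comment per line, with vocabularies built by onmt-build-vocab.
config = {
    "model_dir": "run/hn",
    "data": {
        "source_vocabulary": "data/vocab.titles.txt",
        "target_vocabulary": "data/vocab.comments.txt",
        "train_features_file": "data/train.titles.txt",
        "train_labels_file": "data/train.comments.txt",
    },
}

model = opennmt.models.TransformerBase()  # the standard base Transformer
runner = opennmt.Runner(model, config, auto_config=True)
runner.train()
```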
### Transfer Learning
Train a model on Wikipedia data, then switch to Hacker News data:
- `data-wiki`: Prepare data from Wikipedia articles.
- `train-wiki`: Train a model to predict Wikipedia section texts from titles.
- `train-wiki-hn`: Continue training on HN data (see the fine-tuning sketch after this list).
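For the train-wiki-hn step, one way the switch can work in OpenNMT-tf: training resumes from the latest checkpoint found in `model_dir`, so reusing the Wikipedia run's directory while swapping in HN training files continues from the pretrained weights. A minimal sketch with hypothetical paths; the vocabularies must be the ones used for the Wikipedia run, or the checkpoint will not load cleanly:

```python
import opennmt

# Fine-tuning sketch: keep the Wikipedia run's model_dir (so its latest
# checkpoint is restored) but point the data section at the HN pairs.
config = {
    "model_dir": "run/wiki",
    "data": {
        "source_vocabulary": "data/vocab.shared.txt",
        "target_vocabulary": "data/vocab.shared.txt",
        "train_features_file": "data/train.hn.titles.txt",
        "train_labels_file": "data/train.hn.comments.txt",
    },
}

model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True)
runner.train()
```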
### Hosting
- `serve`: Serve the model with TensorFlow Serving (see the query sketch after this list).
- `ui`: Host a web interface for querying the model.
- `twitter-bot`: Run a Twitter bot.
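Once the exported model is running under TensorFlow Serving, it can be queried over the REST API. This sketch assumes the model is served under the name `hncynic` on port 8501 and exposes the usual OpenNMT-tf serving signature with `tokens` and `length` inputs; the real model also expects the same subword tokenization as training, which the plain whitespace split here glosses over:

```python
import requests

# Sketch of a request against TensorFlow Serving's REST predict API.
# Model name, port, and input names are assumptions; the title string
# is a made-up example.
title = "Show HN: I wrote a static site generator in Rust"
tokens = title.split()

payload = {"inputs": {"tokens": [tokens], "length": [len(tokens)]}}
resp = requests.post(
    "http://localhost:8501/v1/models/hncynic:predict",
    json=payload,
)
resp.raise_for_status()
print(resp.json()["outputs"])
```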
## Future Work
- Acquire GCP credits and train for more steps.
- It's probably not ideal to use encoder-decoder models. In retrospect, I should have trained a language model instead, on data like `title <SEP> comment` (see also: GPT-2); the sketch after this list illustrates the format.
- I've completely excluded HN comments that are replies from the training data. It might be interesting to train on these as well.
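To make the proposed format concrete, a tiny sketch of turning title-comment pairs into single language-model training lines; the `<SEP>` token and the example strings are illustrative, not from the actual data:

```python
# Illustrative only: one training line per pair, with "<SEP>"
# separating the title from the comment.
SEP = "<SEP>"

def to_lm_line(title: str, comment: str) -> str:
    return f"{title} {SEP} {comment}"

print(to_lm_line("Ask HN: Why is everything a subscription now?",
                 "Because it works."))
```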