Visual Dialogue question-asking offline RL environment
To serve the Visual Dialogue question-asking environment used by the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning", follow the steps below:
Setup
- git clone https://github.com/Sea-Snell/visdial-rl.git
- cd visdial-rl
- install conda (e.g. Miniconda)
- conda create --name my_visdial_env python=3.6.12
- conda activate my_visdial_env
- conda install pytorch=0.4.1 -c pytorch
- pip install -r requirements.txt
- sudo apt-get update
- sudo apt-get install redis
- redis-server --daemonize yes (a quick connectivity check is sketched after this list)
- Download the zip files from the Google Drive folder here. Place the downloaded and unzipped folders, "data" and "checkpoints", at the root of the repo.
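Before serving, it can help to confirm that Redis is actually reachable. Here is a minimal sketch, assuming the redis Python package is installed (pip install redis if it is not pulled in by requirements.txt) and that Redis is running on its default host and port:

```python
import redis

# Connect to the local Redis instance started with `redis-server --daemonize yes`.
# localhost:6379 are the redis-py defaults; adjust if your setup differs.
client = redis.Redis(host="localhost", port=6379)

try:
    # ping() returns True when the server responds.
    client.ping()
    print("Redis is up")
except redis.exceptions.ConnectionError:
    print("Redis is not reachable; check that redis-server started")
```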
Serve
python serve_model.py -useGPU -startFrom checkpoints/abot_sl_ep60.vd -qstartFrom checkpoints/qbot_sl_ep60.vd
Optionally remove the -useGPU flag if you don't want to serve the models on a GPU.
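If you are unsure whether PyTorch can see a GPU on your machine, a quick check (torch.cuda.is_available() is part of PyTorch's standard API, including 0.4.1):

```python
import torch

# True means a CUDA device is visible and -useGPU should work;
# False means you should drop the flag and serve on CPU.
print(torch.cuda.is_available())
```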
That's it! The Visual Dialogue environment should now be accessible on localhost at port 5000.
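To verify that the server is responding, you can send it a request from Python. The sketch below is illustrative only: the /step route and the request/response fields are hypothetical placeholders, since the real interface is defined by serve_model.py in the repo.

```python
import requests

# Base URL of the served environment (port 5000, as noted above).
BASE_URL = "http://localhost:5000"

# NOTE: the endpoint path and payload schema are hypothetical placeholders;
# consult serve_model.py for the actual API.
response = requests.post(
    f"{BASE_URL}/step",
    json={"question": "what color is the dog?"},
)
response.raise_for_status()
print(response.json())
```

If the request fails, make sure both checkpoints loaded without errors and that Redis is still running.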