Hierarchical attention for sentiment classification
Our recipe is based on a frequently cited paper, Hierarchical Attention Networks for Document Classification (Z. Yang et al., 2016). We classify IMDB movie reviews as positive or negative (25k reviews for training and the same number for testing). The proposed neural network architecture works in two steps:
- It encodes sentences. The attention mechanism predicts the importance of each word in the final embedding of a sentence.
- It encodes texts. The attention mechanism predicts the importance of each sentence in the final embedding of a text.
This architecture is interesting because the attention weights let us visualize which words and sentences were most important for a prediction. More details can be found in the original article.
The architecture of the Hierarchical Attention Network (HAN):
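For readers who prefer code to diagrams, here is a minimal, illustrative PyTorch sketch of this two-level attention. It is a simplification under our own assumptions, not the implementation used in this repository; the class and parameter names are made up for the example.

```python
# Illustrative sketch only; the repository's actual model may differ.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Scores each timestep (a word or a sentence) and returns their weighted sum."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, states):                      # states: (batch, steps, hidden_dim)
        scores = self.context(torch.tanh(self.proj(states))).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)      # importance of each step
        pooled = (weights.unsqueeze(-1) * states).sum(dim=1)
        return pooled, weights                       # weights can be visualized later

class TinyHAN(nn.Module):
    """Words -> sentence vectors -> document vector -> class logits."""
    def __init__(self, vocab_size, emb_dim=100, hidden=64, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.word_attn = AttentionPooling(2 * hidden)
        self.sent_rnn = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.sent_attn = AttentionPooling(2 * hidden)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, docs):                         # docs: (batch, n_sents, n_words) token ids
        b, n_sents, n_words = docs.shape
        words = self.emb(docs.view(b * n_sents, n_words))
        word_states, _ = self.word_rnn(words)
        sent_vecs, word_w = self.word_attn(word_states)           # sentence embeddings
        sent_states, _ = self.sent_rnn(sent_vecs.view(b, n_sents, -1))
        doc_vec, sent_w = self.sent_attn(sent_states)             # document embedding
        return self.fc(doc_vec), word_w.view(b, n_sents, n_words), sent_w
```

Padding is ignored here for brevity; a real implementation masks padded words and sentences before the softmax so that they receive zero attention.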
This recipe includes two scenarios:
- You can train the model yourself from scratch, with the ability to change the data processing or the architecture; it isn't tricky.
- You can play with a trained model in the Jupyter notebook; write your review or pick a random one from the test set, then visualize the model’s predictions.
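To give a flavor of what such a visualization can look like, here is a tiny, hypothetical helper (not taken from the repository) that highlights the words a sentence-level attention layer weighted most heavily:

```python
# Hypothetical helper, not from the repository: mark the most attended words.
def highlight(words, weights, top_k=2):
    """Wrap the top_k highest-weighted words in asterisks."""
    top = sorted(range(len(words)), key=lambda i: weights[i], reverse=True)[:top_k]
    return " ".join(f"*{w}*" if i in top else w for i, w in enumerate(words))

print(highlight(["the", "movie", "was", "absolutely", "wonderful"],
                [0.05, 0.10, 0.05, 0.35, 0.45]))
# -> the movie was *absolutely* *wonderful*
```

The notebook in this recipe produces richer visualizations, but the idea is the same: attention weights are reused as importance scores.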
Technologies
- Catalyst as the pipeline runner for deep learning tasks. This new and rapidly developing library can significantly reduce the amount of boilerplate code. If you are familiar with the TensorFlow ecosystem, you can think of Catalyst as Keras for PyTorch. The framework is integrated with logging systems such as the well-known TensorBoard and the new Weights & Biases.
- PyTorch and Torchtext as the main deep learning frameworks.
- NLTK for data preprocessing.
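Since HAN consumes a document as a list of sentences, each of which is a list of words, the preprocessing boils down to two levels of tokenization. A minimal sketch with NLTK (the repository's own preprocessing may differ) could look like this:

```python
# Illustrative only: split a review into sentences and words for a HAN-style model.
import nltk

nltk.download("punkt")  # tokenizer models used by sent_tokenize / word_tokenize

def split_review(text):
    """Return the document -> sentences -> words structure that HAN expects."""
    sentences = nltk.sent_tokenize(text)
    return [nltk.word_tokenize(s.lower()) for s in sentences]

print(split_review("The movie was great. I would watch it again!"))
# [['the', 'movie', 'was', 'great', '.'], ['i', 'would', 'watch', 'it', 'again', '!']]
```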
Quick Start
0. Sign up at neu.ro
1. Install CLI and log in
pip install -U neuromation
neuro login
2. Run the recipe
git clone git@github.com:neuromation/ml-recipe-hier-attention.git
cd ml-recipe-hier-attention
make setup
make jupyter
Training commands
0. Setup.
make setup
- Before doing anything else, run this command: it prepares a Docker container with all the necessary dependencies.
1. Training from scratch.
- make training
- Runs a job on Neuro Platform with a training pipeline that includes logging via TensorBoard and W&B.
- If you want to use W&B for logging, set the environment variable before running the training command:
export WANDB_API_KEY=YOUR_TOKEN
A new project named neuro_imdb will appear in the list of your projects in W&B's web UI.
- Note: the first run takes longer than subsequent runs because it downloads the pre-trained GloVe word embeddings and warms up the computing resources.
- make tensorboard
- Runs a job with TensorBoard for monitoring training progress (losses, metrics, computation time, and so on).
- make filebrowser
- Runs a job that lets you conveniently view your files on the storage in a browser.
2. Running the notebook.
make jupyter
- Runs a job with Jupyter. If you skipped the training step, you can download our pre-trained model from within the notebook.
Autogenerated description:
This project was created from the Neuro Platform Project Template.
Development Environment
This project is designed to run on Neuro Platform, so you can jump into problem-solving right away.
Directory structure
| Local directory | Description | Storage URI | Environment mounting point |
|---|---|---|---|
| data/ | Data | storage:ml-recipe-hier-attention/data/ | /ml-recipe-hier-attention/data/ |
| src/ | Python modules | storage:ml-recipe-hier-attention/src/ | /ml-recipe-hier-attention/src/ |
| notebooks/ | Jupyter notebooks | storage:ml-recipe-hier-attention/notebooks/ | /ml-recipe-hier-attention/notebooks/ |
| No directory | Logs and results | storage:ml-recipe-hier-attention/results/ | /ml-recipe-hier-attention/results/ |
Development
Follow the instructions below to set up the environment and start your Jupyter Notebook development session.
Setup development environment
make setup
- Several files from the local project are uploaded to the platform's storage (namely, requirements.txt, apt.txt, and setup.cfg).
- A new job starts in our base environment.
- Pip requirements from requirements.txt and apt packages from apt.txt are installed in this environment.
- The updated environment is saved under a new project-dependent name and will be used later on.
Run Jupyter with GPU
make jupyter
- The contents of the modules and notebook directories are uploaded to the platform's storage.
- A job with Jupyter is started, and its web interface opens in the local web browser window.
Kill Jupyter
make kill-jupyter
This command terminates the job with Jupyter Notebooks. The notebooks remain saved on the platform's storage. If you'd like to download them to the local notebooks/ directory, just run make download-notebooks.
Help
make help
Data
Uploading via Web UI
On your local machine, run make filebrowser
and open the job's URL on your mobile device or desktop. Through a simple file explorer interface, you can upload data files and perform file operations.
Uploading via CLI
On your local machine, run make upload-data
. This command pushes local files from ./data
into storage:ml-recipe-hier-attention/data
and mounts them to your development environment's /project/data
.
Customization
Several variables in Makefile
are intended to be modified according to the project’s specifics. To change them, find the corresponding line in Makefile
and update it.
Data location
DATA_DIR_STORAGE?=$(PROJECT_PATH_STORAGE)/$(DATA_DIR)
This project template implies that your data is stored alongside the project. If this is the case, you don't need to change this variable. However, if your data is shared between several projects on the platform, change this line to point to its location. For example:
DATA_DIR_STORAGE?=storage:datasets/cifar10
Training machine type
TRAINING_MACHINE_TYPE?=gpu-small
There are several machine types supported on the platform. Run neuro config show
to see the list.
HTTP authentication
HTTP_AUTH?=--http-auth
When jobs with an HTTP interface are executed (for example, Jupyter Notebooks or TensorBoard), this interface requires the user to be authenticated on the platform. However, if you want to share the link with someone who is not registered on the platform, you may disable the authentication requirement by updating this line to HTTP_AUTH?=--no-http-auth
.
Training command
TRAINING_COMMAND?='echo "Replace this placeholder with a training script execution"'
If you want to train some models from code instead of from Jupyter Notebook, you need to update this line. For example:
TRAINING_COMMAND="bash -c 'cd $(PROJECT_PATH_ENV) && python -u $(CODE_DIR)/train.py --data $(DATA_DIR)'"