AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

License: MIT · Built with Hugging Face Transformers

arXiv link: https://arxiv.org/abs/2205.00305

To be published in Findings of NAACL 2022

Authors: Chin-Lun Fu*, Zih-Ching Chen*, Yun-Ru Lee, Hung-yi Lee

Overview

Figure: the AdapterBias architecture.

In this work, we propose AdapterBias, a surprisingly simple yet effective adapter architecture. AdapterBias adds a token-dependent shift to the hidden output of transformer layers, adapting a pre-trained model to downstream tasks with only a single vector and a linear layer.
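The core idea fits in a few lines of PyTorch. The sketch below is only illustrative (the class and variable names are ours, not the repository's, and the exact insertion point inside the transformer layer is simplified): a single shift vector v is shared across all tokens, while a linear layer produces one scalar weight per token, so every token receives its own scaled copy of v.

import torch
import torch.nn as nn

class AdapterBiasSketch(nn.Module):
    """Illustrative sketch of a token-dependent representation shift."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(hidden_size))  # shared shift vector
        self.alpha = nn.Linear(hidden_size, 1)           # token-dependent weight

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        weights = self.alpha(hidden_states)              # (batch, seq_len, 1)
        return hidden_states + weights * self.v          # per-token shift

Because only v and the small linear layer are trained per task, the number of task-specific parameters stays tiny compared with full fine-tuning.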

Dataset

We use the GLUE benchmark as our dataset. You can download all of the task datasets from the GLUE website (https://gluebenchmark.com).
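If you prefer, the same tasks can also be loaded with the Hugging Face datasets library for inspection or preprocessing (note that exp.py below reads GLUE from the local path given by --GLUE_path):

from datasets import load_dataset

# Load one GLUE task, e.g. CoLA; other task names include
# sst2, mrpc, qqp, stsb, mnli, qnli, rte, and wnli.
cola = load_dataset("glue", "cola")
print(cola["train"][0])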

Training

cd src
python exp.py \
    --adapter True \
    --GLUE_path <your_GLUE_path> \
    --output_path <output_path> \
    --model <model_name> \
    --task <task_name> \
    --epoch 100 \
    --lr 0.0001 \
    --max_len 512 \
    --batch_size 32
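During training, only the adapter parameters (and the task-specific classification head) are updated; the pre-trained backbone stays frozen. The snippet below is a minimal sketch of that freezing step, assuming AdapterBias modules have already been inserted into the model and that their parameter names contain "adapter" (both assumptions; this is not the repository's actual code):

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

for name, param in model.named_parameters():
    # Keep only adapter and classifier parameters trainable; freeze everything else.
    param.requires_grad = ("adapter" in name) or ("classifier" in name)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} / {total:,}")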

Inference

After training finishes, the prediction files are written automatically to <output_path>/result/, and the trained model is saved to <output_path>/model/.

Once you have run all nine tasks of the GLUE benchmark, you can submit the prediction files to the GLUE leaderboard (https://gluebenchmark.com).
