CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations

Installation

    git clone https://github.com/Ydkwim/CTAL.git
    cd CTAL
    pip install -r requirements.txt

Preprocess


Upstream Pre-training

After you have prepared both the acoustic and semantic features, you can start pre-training the model by executing the following shell command:

    python run_m2pretrain.py --run transformer \
    --config path/to/your/config.yaml --name model_name
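The command above expects a YAML configuration file. A minimal sketch of what such a file might look like is shown below; the key names and values are illustrative assumptions, and the actual schema is defined by the config files shipped in the repository:

```yaml
# Hypothetical config sketch -- the real keys live in the repo's config files.
transformer:
  hidden_size: 768        # assumed model width
  num_layers: 6           # assumed encoder depth
dataloader:
  batch_size: 32
  data_path: path/to/your/features
optimizer:
  learning_rate: 1.0e-4
  total_steps: 200000
```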

The pre-trained model will be saved to the path result/transformer/model_name. For the convenience of all users, we have made our pre-trained upstream model available:
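The save-path convention described above (results written under result/&lt;run&gt;/&lt;model_name&gt;) can be expressed as a small helper. Note that `checkpoint_dir` is a hypothetical name for illustration, not a function in the repo:

```python
import os

# Hypothetical helper mirroring the save-path convention described above:
# checkpoints are written under result/<run>/<model_name>.
def checkpoint_dir(run: str, name: str, root: str = "result") -> str:
    return os.path.join(root, run, name)

# For the pre-training command shown earlier, the model lands here:
print(checkpoint_dir("transformer", "model_name"))
```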


Downstream Finetune

It is very convenient to use our pre-trained upstream model for different types of audio-and-language downstream tasks, including Sentiment Analysis, Emotion Recognition, Speaker Verification, etc. We provide a sample fine-tuning script, m2p_finetune.py, here for everyone. To start the fine-tuning process, you can run the following commands:

    python m2p_finetune.py --config your/config/path \
    --task_name sentiment --epochs 10 --save_path your/save/path
    python m2p_finetune.py --config your/config/path \
    --task_name emotion --epochs 10 --save_path your/save/path
    python m2p_finetune.py --config your/config/path \
    --task_name verification --epochs 10 --save_path your/save/path
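Since the three commands above differ only in `--task_name`, they can be driven from one small Python script. This is a sketch, not part of the repo: the flags come from the commands above, while the helper name and the placeholder paths are assumptions:

```python
import subprocess

# The three downstream tasks exercised by the commands above.
TASKS = ["sentiment", "emotion", "verification"]

def finetune_cmd(task, config="your/config/path", epochs=10,
                 save_path="your/save/path"):
    # Builds the same argument list as the shell commands above.
    return ["python", "m2p_finetune.py",
            "--config", config,
            "--task_name", task,
            "--epochs", str(epochs),
            "--save_path", save_path]

for task in TASKS:
    cmd = finetune_cmd(task)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch each run
```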

Contact

If you have any problems with the project, please feel free to report them as issues.