Using Transfer Learning for Code-Related Tasks

In this study, we extend our previous work <a href='https://ieeexplore.ieee.org/abstract/document/9401982/'>Studying the usage of text-to-text transfer transformer for code-related tasks</a>, paying particular attention to the role played by pre-training and multi-task fine-tuning in the model's performance.

Pipeline

To pre-train and then fine-tune a T5 small model, we first need a new SentencePiece model (i.e., tokenizer) to accommodate the expanded vocabulary introduced by the Java programming language, abstracted Java tokens, and technical natural language.
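As a minimal sketch of this step, the tokenizer can be trained with the `sentencepiece` Python library. The corpus path, model prefix, and vocabulary size below are illustrative assumptions rather than values from the paper; the special-token ids follow T5's conventions (pad=0, eos=1, unk=2, no bos).

```python
# Sketch: train a SentencePiece model on the mixed Java / natural-language corpus.
# File names and hyperparameters are assumptions, not the paper's exact settings.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="pretraining_corpus.txt",  # hypothetical corpus: one code/text sample per line
    model_prefix="dl4se",            # hypothetical prefix -> writes dl4se.model and dl4se.vocab
    vocab_size=32000,                # T5's default vocabulary size; adjust to the corpus
    model_type="unigram",            # SentencePiece default, also used by the original T5
    pad_id=0, eos_id=1, unk_id=2,    # special-token ids matching T5's conventions
    bos_id=-1,                       # T5 does not use a beginning-of-sequence token
)

# Quick sanity check: load the trained model and tokenize a Java snippet.
sp = spm.SentencePieceProcessor(model_file="dl4se.model")
print(sp.encode("public static void main(String[] args)", out_type=str))
```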