
When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain

Abstract

<p align="justify"> Pre-trained language models have shown impressive performance on a variety of tasks and domains. Previous research on financial language models usually employs a generic training scheme to train standard model architectures, without fully leveraging the richness of financial data. We propose a novel domain-specific Financial LANGuage model (FLANG) which uses financial keywords and phrases for better masking, together with span boundary and in-filling objectives. Additionally, the evaluation benchmarks in the field have been limited. To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain. These include new benchmarks across five NLP tasks in the financial domain as well as common benchmarks used in previous research. Experiments on these benchmarks suggest that our model outperforms those in the prior literature on a variety of NLP tasks. </p>

FLANG-ELECTRA Architecture

Architecture of our model. We use finance-specific datasets and general English datasets (Wikipedia and BooksCorpus) to train the model. We follow the training strategy of ELECTRA with a span boundary task: a generator language model first predicts masked tokens, and a discriminator then assesses whether each token is original or replaced. The generator and discriminator are trained end-to-end, and both words and phrases from the financial vocabulary are used for masking. The final discriminator is then fine-tuned on the individual tasks in our contributed benchmark suite, Financial Language Understanding Evaluation (FLUE). Note that our method is not specific to ELECTRA and can be generalized to other models.
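The phrase-aware masking step described above can be sketched as follows. This is a minimal illustration of the idea, not the repository's actual implementation; the toy vocabulary, the 15% masking budget, and the function name `phrase_mask` are all assumptions for the example.

```python
import random

def phrase_mask(tokens, fin_vocab, mask_rate=0.15, seed=0):
    """Mask whole financial phrases (spans) instead of isolated subwords.

    tokens    : list of word tokens
    fin_vocab : set of financial words/phrases (phrases are space-joined)
    Returns a copy of `tokens` with selected spans replaced by "[MASK]".
    """
    rng = random.Random(seed)
    out = list(tokens)
    budget = max(1, int(len(tokens) * mask_rate))
    masked = 0
    i = 0
    while i < len(tokens) and masked < budget:
        # Prefer the longest financial phrase starting at position i.
        for span in (3, 2, 1):
            phrase = " ".join(tokens[i:i + span]).lower()
            if phrase in fin_vocab:
                for j in range(i, i + span):
                    out[j] = "[MASK]"
                masked += span
                i += span
                break
        else:
            # Fall back to random single-token masking outside the vocabulary.
            if rng.random() < mask_rate:
                out[i] = "[MASK]"
                masked += 1
            i += 1
    return out

vocab = {"net income", "earnings per share", "dividend"}  # toy vocabulary
sent = "The company raised its dividend after net income grew".split()
print(phrase_mask(sent, vocab))
```

Masking entire phrases such as "net income" forces the generator to reconstruct domain terms from context, which is the intuition behind using the financial vocabulary for span selection.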

FLUE: Financial Language Understanding Evaluation

FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark built from 5 diverse financial domain-specific datasets.

| Name | Task | Source | Dataset Size |
|------|------|--------|--------------|
| FPB | Financial Sentiment Analysis | Malo et al. 2014b | 4,845 |
| FiQA SA | Financial Sentiment Analysis | FiQA 2018 | 1,173 |
| Headline | News Headline Classification | Sinha and Khandait 2020 | 11,412 |
| NER | Named Entity Recognition | Alvarado et al. 2015 | 1,466 |
| FinSBD3 | Structure Boundary Detection | FinSBD3 (FinWeb-2021) | 756 |
| FiQA QA | Question Answering | FiQA 2018 | 6,640 |

Performance of FLANG architectures on FLUE datasets

| Model / Metric | FPB (Accuracy) | FiQA SA (MSE) | Headline (Mean F-1) | NER (F-1) | FinSBD3 (F-1) | FiQA QA (nDCG) |
|----------------|----------------|---------------|---------------------|-----------|---------------|----------------|
| BERT-base | 0.856 | 0.073 | 0.967 | 0.79 | 0.95 | 0.46 |
| FinBERT | 0.872 | 0.07 | 0.968 | 0.8 | 0.89 | 0.42 |
| FLANG-BERT (ours) | 0.912 | 0.054 | 0.972 | 0.83 | 0.96 | 0.51 |
| ELECTRA | 0.881 | 0.066 | 0.966 | 0.78 | 0.94 | 0.52 |
| FLANG-ELECTRA (ours) | 0.919 | 0.034 | 0.98 | 0.82 | 0.97 | 0.55 |
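For reference, the nDCG metric reported for FiQA QA can be computed from a ranked list of relevance scores as below. This is a minimal sketch of the standard nDCG formula, not the repository's evaluation script.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """nDCG: DCG of the predicted ranking divided by the ideal DCG."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# A system that places the most relevant answers first scores 1.0.
print(ndcg([3, 2, 1]))  # perfect ranking of these relevances
print(ndcg([1, 2, 3]))  # same relevances in the worst order
```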

Financial Sentiment Analysis

  1. Financial PhraseBank (Classification)
    • Data: Financial PhraseBank
    • Cite: Malo, Pekka, et al. "Good debt or bad debt: Detecting semantic orientations in economic texts." Journal of the Association for Information Science and Technology 65.4 (2014): 782-796.
  2. FiQA 2018 Task-1 (Regression)
    • Data and Ref: FiQA 2018
    • Cite: Maia, Macedo & Handschuh, Siegfried & Freitas, Andre & Davis, Brian & McDermott, Ross & Zarrouk, Manel & Balahur, Alexandra. (2018). WWW'18 Open Challenge: Financial Opinion Mining and Question Answering. WWW '18: Companion Proceedings of The Web Conference 2018. 1941-1942. 10.1145/3184558.3192301.

News Headline Classification

Named Entity Recognition

Structure Boundary Detection

Question Answering

Leaderboard

Coming soon!

Citation

Please cite the model with the following citation:

@INPROCEEDINGS{shah-etal-2022-flang,
    author = {Shah, Raj Sanjay  and
      Chawla, Kunal and
      Eidnani, Dheeraj and
      Shah, Agam and
      Du, Wendi and
      Chava, Sudheer and
      Raman, Natraj and
      Smiley, Charese and
      Chen, Jiaao and
      Yang, Diyi },
    title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
    booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    year = {2022},
    publisher = {Association for Computational Linguistics}
}

Contact information

Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any issues and questions.

Steps to use the code

  1. Clone the Repo
  2. cd into the repo in your terminal

Dependencies

Install dependencies with the following command:

pip install -r requirements.txt

Raw data

tokens.npy contains the financial vocabulary tokens as a NumPy array.
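The vocabulary file can be inspected with NumPy. A minimal sketch, assuming the array stores plain strings; the stand-in file below is created only so the snippet runs on its own (substitute the repo's actual tokens.npy).

```python
import numpy as np

# Stand-in for the repo's tokens.npy so this snippet is self-contained.
np.save("tokens_demo.npy", np.array(["dividend", "net income", "earnings per share"]))

# Load the financial vocabulary. Pass allow_pickle=True only if the
# array stores Python objects rather than fixed-width strings.
tokens = np.load("tokens_demo.npy")
print(len(tokens), list(tokens[:2]))
```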

To train FLANG-BERT, run

python train_FLANG_BERT.py

To train FLANG-ELECTRA, run

python train_FLANG_ELECTRA.py