<div align="center"> <img align="center" width="30%" alt="image" src="https://github.com/AI4Finance-Foundation/FinGPT/assets/31713746/e0371951-1ce1-488e-aa25-0992dafcc139"> </div>

FinRL-Meta: A Metaverse of Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning


FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning. We aim to help users in our community easily build their own environments.

Check out our latest competition: ACM ICAIF 2023 FinRL Contest


Check out the FinRL Project

  1. FinRL-Meta provides hundreds of market environments.
  2. FinRL-Meta reproduces existing papers as benchmarks.
  3. FinRL-Meta provides dozens of demos/tutorials, organized in a curriculum.

Outline

  - News and Tutorials
  - Our Goals
  - Design Principles

Overview

*Overview image of FinRL-Meta*

We utilize a layered structure in FinRL-Meta, as shown in the figure above. It consists of three layers: the data layer, the environment layer, and the agent layer. Each layer executes its own functions and is independent of the others, while the layers interact through end-to-end interfaces to implement the complete workflow of algorithmic trading. This layered structure also allows easy extension with user-defined functions.
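As a purely illustrative sketch of this layering (the function names below are hypothetical, not FinRL-Meta's actual API), each layer hands its output to the next through a narrow interface:

```python
# Illustrative stubs only: hypothetical names showing how the three
# independent layers compose through narrow, end-to-end interfaces.
def data_layer(source: str) -> list[float]:
    """Access and clean raw market data (stub returning a toy series)."""
    return [100.0, 101.2, 99.8, 102.5]

def environment_layer(prices: list[float]) -> dict:
    """Wrap processed data in a Gym-style market environment (stub)."""
    return {"prices": prices, "n_steps": len(prices)}

def agent_layer(env: dict) -> None:
    """Train a DRL agent against the environment (stub)."""
    print(f"training over {env['n_steps']} steps")

# End-to-end workflow: data layer -> environment layer -> agent layer.
agent_layer(environment_layer(data_layer("yahoofinance")))
```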

DataOps

<div align="center"> <img align="center" src=figs/FinRL-Meta-Data_layer_v2.png width="80%"> </div>

DataOps applies the ideas of lean development and DevOps to the field of data analytics. DataOps practices have been developed in companies and organizations to improve the quality and efficiency of data analytics. These practices consolidate various data sources and unify and automate the pipeline of data analytics, including data access, cleaning, analysis, and visualization.

However, the DataOps methodology has not been applied to financial reinforcement learning research. Most researchers access data, clean data, and extract technical indicators (features) in a case-by-case manner, which involves heavy manual work and may not guarantee data quality.

To deal with financial big data (mostly unstructured), we follow the DataOps paradigm and implement the automatic pipeline shown in the following figure: task planning, data processing, training-testing-trading, and monitoring of agents' performance. Through this pipeline, we continuously produce DRL benchmarks on dynamic market datasets.

<div align="center"> <img align="center" src=figs/finrl_meta_dataops.png width="80%"> </div>
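Below is a minimal sketch of this pipeline using FinRL-Meta's DataProcessor. The import path, constructor arguments, and method names are assumptions based on recent releases and may differ in your installed version:

```python
# Hedged sketch of the automated DataOps pipeline; module path and
# method signatures are assumptions and may vary across releases.
from meta.data_processor import DataProcessor  # assumed import path

# 1. Task planning: choose a data source, date range, and frequency.
dp = DataProcessor(
    data_source="yahoofinance",
    start_date="2020-01-01",
    end_date="2021-01-01",
    time_interval="1D",
)

# 2. Data processing: download, clean, and extract features.
dp.download_data(["AAPL", "MSFT"])
dp.clean_data()
dp.add_technical_indicator(["macd", "rsi_30", "close_30_sma"])

# 3. The processed data then feeds the environment layer for the
#    training-testing-trading stage described below.
```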

Supported Data Sources:

| Data Source | Type | Range and Frequency | Request Limits | Raw Data | Preprocessed Data |
| --- | --- | --- | --- | --- | --- |
| Akshare | CN Securities, A share | 2017-now, 1 day | NA | OHLCV | Prices & Indicators |
| Alpaca | US Stocks, ETFs | 2015-now, 1min | Account-specific | OHLCV | Prices & Indicators |
| Baostock | CN Securities | 1990-12-19-now, 5min | Account-specific | OHLCV | Prices & Indicators |
| Binance | Cryptocurrency | API-specific, 1s, 1min | API-specific | Tick-level daily aggregated trades, OHLCV | Prices & Indicators |
| CCXT | Cryptocurrency | API-specific, 1min | API-specific | OHLCV | Prices & Indicators |
| IEXCloud | NMS US securities | 1970-now, 1 day | 100 per second per IP | OHLCV | Prices & Indicators |
| JoinQuant | CN Securities | 2005-now, 1min | 3 requests each time | OHLCV | Prices & Indicators |
| QuantConnect | US Securities | 1998-now, 1s | NA | OHLCV | Prices & Indicators |
| RiceQuant | CN Securities | 2005-now, 1ms | Account-specific | OHLCV | Prices & Indicators |
| Tushare | CN Securities, A share | -now, 1 min | Account-specific | OHLCV | Prices & Indicators |
| WRDS | US Securities | 2003-now, 1ms | 5 requests each time | Intraday Trades | Prices & Indicators |
| YahooFinance | US Securities | Frequency-specific, 1min | 2,000/hour | OHLCV | Prices & Indicators |

OHLCV: open, high, low, and close prices; volume

adjusted_close: adjusted close price

Technical indicators that users can add: 'macd', 'boll_ub', 'boll_lb', 'rsi_30', 'dx_30', 'close_30_sma', 'close_60_sma'. Users can also add their own features.
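These indicator names follow the stockstats naming convention, which (as an assumption worth verifying for your version) FinRL-Meta's processors use under the hood. A self-contained sketch on synthetic OHLCV data:

```python
import numpy as np
import pandas as pd
from stockstats import StockDataFrame

# Synthetic OHLCV frame standing in for any data source in the table above.
idx = pd.date_range("2021-01-01", periods=100, freq="D")
close = 100 + np.cumsum(np.random.default_rng(0).normal(0, 1, 100))
df = pd.DataFrame({"open": close, "high": close + 1, "low": close - 1,
                   "close": close, "volume": 1_000_000}, index=idx)

sdf = StockDataFrame.retype(df)
indicators = ["macd", "boll_ub", "boll_lb", "rsi_30", "dx_30",
              "close_30_sma", "close_60_sma"]
for name in indicators:
    sdf[name]  # accessing a column triggers its computation
print(sdf[indicators].tail())
```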

Plug-and-Play (PnP)

In the development pipeline, we separate market environments from the data layer and the agent layer. A DRL agent can be directly plugged into our environments, and different agents/algorithms can be compared by running them on the same benchmark environment for fair evaluation.

The following DRL libraries are supported:

  - ElegantRL
  - RLlib
  - Stable Baselines3

A demonstration notebook for plug-and-play with ElegantRL, Stable Baselines3 and RLlib: Plug and Play with DRL Agents
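Because every environment exposes the standard Gym API, swapping agents takes only a few lines. Here is a runnable sketch with Stable Baselines3; CartPole stands in for a market environment so the snippet runs as-is:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Any Gym-compatible environment can be plugged in unchanged; in practice,
# replace CartPole with a FinRL-Meta market environment instance.
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)
```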

"Training-Testing-Trading" Pipeline

<div align="center"> <img align="center" src=figs/timeline.png width="80%"> </div>

We employ a training-testing-trading pipeline. First, a DRL agent is trained on a training dataset and fine-tuned (by adjusting hyperparameters) on a testing dataset. Then, the agent is backtested on historical data or deployed in a paper/live trading market.

This pipeline addresses the information leakage problem by separating the training/testing periods from the trading period.

Such a unified pipeline also allows fair comparisons among different algorithms.
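A minimal sketch of the split: three disjoint, chronologically ordered windows (the dates below are illustrative), so no future information leaks into an earlier stage:

```python
import pandas as pd

def window(df: pd.DataFrame, start: str, end: str) -> pd.DataFrame:
    """Slice a DatetimeIndexed frame to one pipeline stage."""
    return df.loc[start:end]

# Disjoint, chronologically ordered windows (illustrative dates):
train_span = ("2015-01-01", "2019-12-31")  # fit the DRL agent
test_span  = ("2020-01-01", "2020-12-31")  # tune hyperparameters
trade_span = ("2021-01-01", "2021-12-31")  # backtest or paper/live trading
```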

Our Vision

For future work, we plan to build a multi-agent-based market simulator consisting of over ten thousand agents, namely a FinRL-Metaverse. First, FinRL-Metaverse aims to build a universe of market environments, like the XLand environment (source) and planet-scale climate forecasts (source) by DeepMind. To improve performance on large-scale markets, we will employ GPU-based massively parallel simulation, as in Isaac Gym (source). Moreover, it will be interesting to explore the deep evolutionary RL framework (source) to simulate markets. Our final goal is to provide insights into complex market phenomena and offer guidance for financial regulation through FinRL-Meta.

<div align="center"> <img align="center" src=figs/finrl_metaverse.png width="80%"> </div>

Citing FinRL-Meta

Dynamic Datasets and Market Environments for Financial Reinforcement Learning

@article{dynamic_datasets,
    author = {Liu, Xiao-Yang and Xia, Ziyi and Yang, Hongyang and Gao, Jiechao and Zha, Daochen and Zhu, Ming and Wang, Christina Dan and Wang, Zhaoran and Guo, Jian},
    title = {Dynamic Datasets and Market Environments for Financial Reinforcement Learning},
    journal = {Machine Learning - Springer Nature},
    year = {2024}
}

FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning

@article{finrl_meta_2022,
    author = {Liu, Xiao-Yang and Xia, Ziyi and Rui, Jingyang and Gao, Jiechao and Yang, Hongyang and Zhu, Ming and Wang, Christina Dan and Wang, Zhaoran and Guo, Jian},
    title = {{FinRL-Meta}: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning},
    journal = {NeurIPS},
    year = {2022}
}

FinRL-Meta: Data-Driven Deep Reinforcement Learning in Quantitative Finance

@article{finrl_meta_2021,
    author = {Liu, Xiao-Yang and Rui, Jingyang and Gao, Jiechao and Yang, Liuqing and Yang, Hongyang and Wang, Zhaoran and Wang, Christina Dan and Guo, Jian},
    title = {{FinRL-Meta}: Data-Driven Deep Reinforcement Learning in Quantitative Finance},
    journal = {Data-Centric AI Workshop, NeurIPS},
    year = {2021}
}

Collaborators

<div align="center"> <img align="center" src=figs/Columbia_logo.jpg width="120"> &nbsp;&nbsp; <img align="center" src=figs/IDEA_Logo.png width="200"> &nbsp;&nbsp; <img align="center" src=figs/Northwestern_University.png width="120"> &nbsp;&nbsp; <img align="center" src=figs/NYU_Shanghai_Logo.png width="200"> &nbsp;&nbsp; </div>

Disclaimer: Nothing herein is financial advice or a recommendation to trade real money. Please use common sense and always consult a professional before trading or investing.