Podracer

News: We are short of hands. Please star this repo to let us know that updating this project is urgent for you. Thanks for your feedback.

This project can be regarded as FinRL 2.0: an intermediate-level framework for full-stack developers and professionals. It is built on ElegantRL and FinRL.

We maintain an elegant (lightweight, efficient, and stable) FinRL library that helps researchers and quant traders develop algorithmic trading strategies easily.

Design Principles

DRL Algorithms

Currently, most model-free deep reinforcement learning (DRL) algorithms are supported.

For DRL algorithms, please check out the educational webpage OpenAI Spinning Up.

File Structure

<a href="https://github.com/AI4Finance-LLC/Elegant-FinRL" target="_blank"> <div align="center"> <img src="https://github.com/Yonv1943/ElegantRL/blob/master/figs/File_structure.png" width="100%"/> </div> </a>

An agent in agent.py uses networks in net.py and is trained in run.py by interacting with an environment in env.py.
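The interaction among these files can be sketched as a standard DRL training loop. The class and method names below (DummyEnv, DummyAgent, select_action, update, train) are illustrative assumptions standing in for env.py, agent.py, and run.py, not the actual API:

```python
import random

class DummyEnv:
    """Stand-in for env.py: a trivial 10-step episode environment."""
    def reset(self):
        self.t = 0
        return 0.0  # initial state

    def step(self, action):
        self.t += 1
        reward = 1.0 if action > 0 else 0.0
        done = self.t >= 10
        return 0.0, reward, done, {}

class DummyAgent:
    """Stand-in for agent.py: selects actions and learns from rollouts."""
    def select_action(self, state):
        return random.choice([-1, 1])

    def update(self, trajectory):
        return len(trajectory)  # placeholder for a network update via net.py

def train(env, agent, num_episodes=4):
    """Stand-in for run.py: collect trajectories, then update the agent."""
    returns = []
    for _ in range(num_episodes):
        state, done, trajectory = env.reset(), False, []
        while not done:
            action = agent.select_action(state)
            next_state, reward, done, _ = env.step(action)
            trajectory.append((state, action, reward))
            state = next_state
        agent.update(trajectory)
        returns.append(sum(r for _, _, r in trajectory))
    return returns
```

In the real library, the agent's networks (actor/critic from net.py) replace the random policy, and the environment is the stock trading environment described below.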

Formulation of the Stock Trading Problem

<a href="https://github.com/AI4Finance-LLC/Elegant-FinRL" target="_blank"> <div align="center"> <img src="figs/1.png" width="50%"/> </div> </a>

Formally, we model stock trading as a Markov Decision Process (MDP), and formulate the trading objective as maximization of expected return:

<a href="https://github.com/AI4Finance-LLC/Elegant-FinRL" target="_blank"> <div align="center"> <img src="figs/2.png" width="50%"/> </div> </a>
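In text form, the standard MDP objective is the expected discounted cumulative reward over a trading horizon T, where the exact symbols below (policy π, discount factor γ, reward r) are the conventional ones and may differ slightly from the figure's notation:

```latex
\max_{\pi} \; J(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} \gamma^{t} \, r(s_t, a_t)\right]
```

Here s_t is the market state at step t (e.g., cash, share holdings, and prices), a_t is the trading action, and r(s_t, a_t) is typically the change in total asset value.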

Stock Trading Environment

Environment Design

The environment follows the OpenAI Gym style, since the Gym API is the de facto standard for implementing reinforcement learning environments.
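A Gym-style environment exposes reset() and step(action) returning (state, reward, done, info). The sketch below follows that API without depending on the gym package; the class name StockTradingEnv, the state layout (cash, shares, prices), the lot size of 100 shares, and the asset-value-change reward are all illustrative assumptions, not the library's actual environment:

```python
import numpy as np

class StockTradingEnv:
    """Minimal Gym-style sketch: state = [cash, shares..., prices...]."""
    def __init__(self, price_array, initial_cash=1e6):
        self.price_array = np.asarray(price_array)  # shape (T, num_stocks)
        self.num_stocks = self.price_array.shape[1]
        self.initial_cash = initial_cash

    def reset(self):
        self.t = 0
        self.cash = self.initial_cash
        self.shares = np.zeros(self.num_stocks)
        return self._get_state()

    def step(self, action):
        # action[i] in [-1, 1]: fraction of a fixed 100-share lot to trade
        prices = self.price_array[self.t]
        trade = np.round(action * 100)           # shares to buy (+) or sell (-)
        trade = np.maximum(trade, -self.shares)  # cannot sell more than held
        cost = trade @ prices
        if cost > self.cash:                     # cannot spend more than cash
            trade, cost = np.zeros(self.num_stocks), 0.0
        self.cash -= cost
        self.shares += trade
        old_value = self._total_asset()
        self.t += 1
        done = self.t >= len(self.price_array) - 1
        reward = self._total_asset() - old_value  # change in total asset value
        return self._get_state(), reward, done, {}

    def _total_asset(self):
        return self.cash + self.shares @ self.price_array[self.t]

    def _get_state(self):
        return np.concatenate(([self.cash], self.shares, self.price_array[self.t]))
```

Because the class honors the reset/step contract, any Gym-compatible DRL agent can interact with it unchanged; customizing the state vector or reward only requires editing _get_state() and the reward line in step().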

State Space and Action Space

Easy-to-customize Features