A Deep Reinforcement Learning Framework for Stock Market Trading

Overview

DQN-Trading

This is a deep reinforcement learning framework for stock market trading. The project is the implementation code for the two papers listed in the References section below.

The deep reinforcement learning algorithm used here is Deep Q-Learning.

Acknowledgement

Requirements

Install PyTorch using the following command (this one is for CUDA 11.1 and Python 3.8). A quick check of the installation is sketched after the requirements list:

pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
  • python = 3.8
  • pandas = 1.3.2
  • numpy = 1.21.2
  • matplotlib = 3.4.3
  • cython = 0.29.24
  • scikit-learn = 0.24.2
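
After installing, you can optionally verify that the CUDA build of PyTorch is visible from Python:

import torch

print(torch.__version__)          # expected: 1.9.0+cu111
print(torch.cuda.is_available())  # True if the GPU and CUDA 11.1 driver are set up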

TODO List

  • The project currently has no command-line interface for reading hyper-parameters from the terminal and running the code. Instead, we provide a Jupyter notebook (Main.ipynb) in which you can set the input data, the model, and the hyper-parameters.

  • The project also has no code for hyper-parameter search (it is easy to implement).

  • You can also set the random seed for the training experiments directly in the code; a minimal sketch follows this list.
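
A minimal sketch of fixing the seeds, e.g. in the first cell of Main.ipynb (the seed value is illustrative):

import random
import numpy as np
import torch

SEED = 42  # illustrative value
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)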

Developers' Guidelines

In this section, I briefly explain the different parts of the project and how to modify each. The data for the project was downloaded from Yahoo Finance, where you can search for a specific market and download its data from the Historical Data section. Then create a directory named after the stock under the Data directory and put the .csv file there.
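
As a quick sanity check on the downloaded file, you can inspect it with pandas; the ticker AAPL and the path below are placeholders for whatever stock you downloaded:

import pandas as pd

df = pd.read_csv('Data/AAPL/AAPL.csv')
# Yahoo Finance exports typically contain:
# Date, Open, High, Low, Close, Adj Close, Volume
print(df.columns.tolist())
print(df.head())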

The DataLoader directory contains the files that process the data and interact with the RL agent. DataLoader.py loads the data given a folder name under the Data directory and the name of the .csv file. Use the YahooFinanceDataLoader class for data downloaded from Yahoo Finance.
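
A hypothetical usage sketch of the loader; the constructor arguments shown here are illustrative only, so check DataLoader/DataLoader.py for the real signature:

from DataLoader.DataLoader import YahooFinanceDataLoader

# hypothetical arguments: folder name under Data/ and the CSV file name
data_loader = YahooFinanceDataLoader('AAPL', 'AAPL.csv')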

Data.py implements the environment that interacts with the RL agent. It contains all the functionality of a standard RL environment. For each agent, I developed a class inherited from the Data class that differs only in its state space (states for the LSTM and convolutional models are time series, while for the other models they are simple OHLC vectors). In DataForPatternBasedAgent.py the states are patterns extracted with rule-based methods from technical analysis. In DataAutoPatternExtractionAgent.py the states are the Open, High, Low, and Close prices, plus some additional candlestick information such as trend, upper shadow, and lower shadow. In DataSequential.py, as the name suggests, the state space is a time series, which is used by both the LSTM and convolutional models. DataSequencePrediction.py was an idea for feeding states predicted by an LSTM model to the RL agent; this idea is still raw and needs further development.
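
For illustration, the extra candlestick information mentioned above can be computed from the OHLC columns using standard candlestick definitions (these formulas are illustrative, not necessarily the exact ones used in DataAutoPatternExtractionAgent.py):

import pandas as pd

def candle_features(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out['body'] = (df['Close'] - df['Open']).abs()
    out['upper_shadow'] = df['High'] - df[['Open', 'Close']].max(axis=1)
    out['lower_shadow'] = df[['Open', 'Close']].min(axis=1) - df['Low']
    out['trend'] = (df['Close'] > df['Open']).astype(int)  # 1 = bullish candle
    return out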

Wherever we used an encoder-decoder architecture, the decoder is the DQN agent, whose neural network is the same across all the models.

The DeepRLAgent directory contains: the DQN model without an encoder part (VanillaInput), whose data loaders are DataAutoPatternExtractionAgent.py and DataForPatternBasedAgent.py; an encoder-decoder model in which the encoder is a 1d convolutional layer in front of the DQN decoder, under the SimpleCNNEncoder directory; and an encoder-decoder model in which the encoder is a simple MLP and the decoder is the DQN agent, under the MLPEncoder directory.
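
A minimal PyTorch sketch of this encoder-decoder idea: a 1d convolutional encoder in front of a small MLP head that outputs Q-values for the trading actions. Layer sizes and the number of actions are illustrative and do not mirror the exact networks under SimpleCNNEncoder or MLPEncoder:

import torch
import torch.nn as nn

class Conv1dEncoderDQN(nn.Module):
    def __init__(self, n_features: int, window: int, n_actions: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        # the DQN "decoder" head, shared across models
        self.decoder = nn.Sequential(
            nn.Linear(32 * window, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features, window)
        return self.decoder(self.encoder(x))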

Under EncoderDecoderAgent are all the time-series models, including CNN (a two-layered 1d CNN as the encoder), CNN2D (a single-layered 2d CNN as the encoder), CNN-GRU (a 1d CNN over the input followed by a GRU on the CNN's output; the CNN extracts features from each candlestick, and the GRU then captures the temporal dependencies among those features), CNNAttn (a two-layered 1d CNN with an attention layer that puts higher emphasis on specific parts of the features extracted from the time series), and a GRU encoder that extracts temporal relations among candles. All of these models use DataSequential.py as their environment.
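
An illustrative sketch of the CNN-GRU encoder described above: a 1d convolution extracts features from each candlestick, and a GRU then models the temporal dependencies across the window (sizes are placeholders, not the repository's values):

import torch
import torch.nn as nn

class CNNGRUEncoder(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 16, kernel_size=1)  # per-candle features
        self.gru = nn.GRU(input_size=16, hidden_size=hidden_size, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, n_features); Conv1d expects (batch, channels, window)
        z = torch.relu(self.conv(x.transpose(1, 2)))
        _, h = self.gru(z.transpose(1, 2))  # GRU over the per-candle features
        return h[-1]                        # last hidden state as the encoding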

Please refer to Main.py for instructions on how to run each agent and feed it data; it also contains the code for plotting the results.

The Objects directory contains the saved models from our experiments for each agent.

The PatternDetectionCandleStick directory contains Evaluation.py, which implements all the evaluation metrics used in the paper. It receives the actions chosen by an agent and evaluates the resulting strategy. LabelPatterns.py uses rule-based methods to generate buy and sell signals, and Extract.py detects well-known candlestick patterns.
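
As a simplified, hypothetical example of what such an evaluation looks like, the sketch below turns a sequence of agent actions into a position series and computes the cumulative return; Evaluation.py implements the full metric set from the paper:

import numpy as np

def cumulative_return(close: np.ndarray, actions: list) -> float:
    """actions[i] in {'buy', 'sell', 'none'}, decided at the close of day i."""
    position, total = 0, 1.0
    for i in range(len(close) - 1):
        if actions[i] == 'buy':
            position = 1
        elif actions[i] == 'sell':
            position = 0
        daily_return = close[i + 1] / close[i] - 1.0
        total *= 1.0 + position * daily_return
    return total - 1.0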

The RLAgent directory is the implementation of the traditional RL algorithm SARSA-λ using Cython. To run it from Main.ipynb, you first have to build the Cython file by running the following script in a terminal inside its directory:

python setup.py build_ext --inplace

This works on both Linux and Windows.
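
For reference, a Cython setup.py of this kind typically looks like the following; the glob pattern is a placeholder, so check the actual setup.py in the RLAgent directory:

from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("*.pyx"))  # compile all .pyx files in the directory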

For more information on the algorithms and models, please refer to the original papers listed in the References section.

If you have any questions about the papers or the code, or would like to contribute, please send me an email: [email protected]

References

@article{taghian2020learning,
  title={Learning financial asset-specific trading rules via deep reinforcement learning},
  author={Taghian, Mehran and Asadi, Ahmad and Safabakhsh, Reza},
  journal={arXiv preprint arXiv:2010.14194},
  year={2020}
}

@article{taghian2021reinforcement,
  title={A Reinforcement Learning Based Encoder-Decoder Framework for Learning Stock Trading Rules},
  author={Taghian, Mehran and Asadi, Ahmad and Safabakhsh, Reza},
  journal={arXiv preprint arXiv:2101.03867},
  year={2021}
}