Overview

Trafalgar

A Python library that makes portfolio analysis development faster and easier.

Installation 🔥

For the moment, Trafalgar is still in beta development. To install it:

  1. Download requirements.txt into the folder where you want to use the trafalgar library.
  2. Open a command prompt in that folder and run:
pip install -r requirements.txt
  3. Download trafalgars-0.0.1-py3-none-any.whl into the same folder.
  4. Open a command prompt in that folder and run:
pip install trafalgars-0.0.1-py3-none-any.whl
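
To confirm the installation worked, try importing the package (a minimal check; it assumes the package is importable as trafalgar, as shown in the documentation below):

#quick sanity check after installation, run from a Python shell
from trafalgar import *
print("trafalgar imported successfully")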

Features include 📈

  • Get close price, open price, adjusted close, volume and graphs of these in one line of code!
  • Build an efficient frontier program in 3 lines of code (see the short sketch below)
  • Backtest a portfolio, see its stats and compare it to a benchmark
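
As a taste of how compact this is, here is the efficient frontier feature in a few lines, using functions documented later in this README (a minimal sketch; the tickers, date range and iteration count are only illustrative):

#efficient frontier in a few lines of code (illustrative tickers and parameters)
from trafalgar import *
efficient_frontier(["AAPL", "FB", "TSLA", "BABA"], "2020-01-01", "2021-01-01", 10000)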

Here is the code from this article in a Google Colab notebook, which you can use to follow along: https://colab.research.google.com/drive/1qgFDDQneQP-oddbJVWWApfPKFMnbpj6I?usp=sharing

Documentation

Import the library

First, import everything from the package:

from trafalgar import *

Graph of the closing price of a stock

#graph_close(stock, start_date, end_date)
graph_close(["FB"], "2020-01-01", "2021-01-01")

Graph of the closing price of multiple stocks

graph_close(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the volume

#graph_volume(stock, start_date, end_date)

#for one stock
graph_volume(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_volume(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the opening price

#graph_open(stock, start_date, end_date)

#for one stock
graph_open(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_open(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the adjusted closing price

#graph_adj_close(stock, start_date, end_date)

#for one stock
graph_adj_close(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_adj_close(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the returns (for each day)

#returns_graph(stock, start_date, end_date)

#this one only works for a single stock
returns_graph("FB", "2020-01-01", "2021-01-01")

Get closing price data (in dataframe format)

#close(stock, start_date, end_date)
close(["AAPL"], "2020-01-01", "2021-01-01")

Get volume data (in dataframe format)

#volume(stock, start_date, end_date)
volume(["AAPL"], "2020-01-01", "2021-01-01")

Get opening price data (in dataframe format)

#open(stock, start_date, end_date)
open(["AAPL"], "2020-01-01", "2021-01-01")

Get adjusted closing price data (in dataframe format)

#adj_close(stock, start_date, end_date)
adj_close(["AAPL"], "2020-01-01", "2021-01-01")

Covariance between stocks

#covariance(stocks, start_date, end_date, days) -> usually days = 252, the number of trading days in a year
covariance(["AAPL", "DIS", "AMD"], "2020-01-01", "2021-01-01", 252)

Get data from a stock in OHLCV format directly

#ohlcv(stock, start_date, end_date)
ohlcv("AAPL", "2020-01-01", "2021-01-01")

Graph the cumulative returns of a stock/portfolio

#cum_returns_graph(stocks, weights, start_date, end_date)
cum_returns_graph(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

Get cumulative returns data of a stock/portfolio (in a dataframe format)

#cum_returns(stocks, weights, start_date, end_date)
cum_returns(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

Disclaimer: from here on, the functions work only for portfolios, not for individual stocks. However, there is a way to make them work for an individual stock:

#let's say we want to calculate the annual volatility of Apple alone.
#The stock list must contain at least 2 elements; here these are Facebook and Apple.
#Weights are matched to stocks by position, so to isolate Apple we set Facebook's weight to 0 (no money allocated to it) and Apple's weight to 1 (all our money allocated to it).
annual_volatility(["FB", "AAPL"], [0, 1], "2020-01-01", "2021-01-01")

Annual Volatility of a portfolio/stock

#annual_volatility(stocks, weights, start_date, end_date)

#for your portfolio
annual_volatility(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

#for one stock (FB)
annual_volatility(["FB", "AAPL"], [1, 0],"2020-01-01", "2021-01-01")

Sharpe Ratio of a portfolio/stock

#sharpe_ratio(stocks, weights, start_date, end_date)

#for your portfolio
sharpe_ratio(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

#for one stock (FB)
sharpe_ratio(["FB", "AAPL"], [1, 0],"2020-01-01", "2021-01-01")

Compare the returns of a portfolio/stock to a benchmark

#returns_benchmark(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
returns_benchmark(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock(AAPL)
returns_benchmark(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")

Blue line: returns of your portfolio. Red line: returns of the benchmark.

Compare the cumulative returns of a portfolio/stock to a benchmark

#cum_returns_benchmark(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
cum_returns_benchmark(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock(AAPL)
cum_returns_benchmark(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")

Blue line: cumulative returns of your portfolio. Red line: cumulative returns of the benchmark.

Alpha and Beta of a portfolio/stock

#alpha_beta(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
alpha_beta(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock(AAPL)
alpha_beta(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")

Efficient frontier to optimize allocation of shares in your portfolio

#efficient_frontier(stocks, start_date, end_date, iterations) -> iterations = 10000 is a good starting point
efficient_frontier(["AAPL", "FB", "TSLA", "BABA"], "2020-01-01", "2021-01-01", 10000)

Graph individual cumulative returns for your portfolio

#individual_cum_returns_graph(stocks, start_date, end_date)
individual_cum_returns_graph(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Individual cumulative returns data for your portfolio (in dataframe format)

#individual_cum_returns(stocks, start_date, end_date)
individual_cum_returns(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Mean daily return of each stock in your portfolio

#individual_mean_daily_return(stocks, start_date, end_date)
individual_mean_daily_return(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Portfolio mean daily return

#portfolio_daily_mean_return(stocks,weights, start_date, end_date)
portfolio_daily_mean_return(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Value at Risk of a stock (still in development)

#VaR(stock, start_date, end_date, confidence_level) -> confidence_level is a percentage, e.g. 98 for 98%
VaR("FB","2020-01-01", "2021-01-01", 98)

License

MIT

Comments
  • Issue with Pandas datareader

    Describe the bug: this seems to affect all your branches.

    RemoteDataError                           Traceback (most recent call last)
    ----> 1 oracle(portfolio)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/empyrial.py in oracle(my_portfolio, prediction_days, based_on)
        334
        335
    --> 336 df = web.DataReader(asset, data_source='yahoo', start = my_portfolio.start_date, end= my_portfolio.end_date)
        337 df = pd.DataFrame(df)
        338 df.reset_index(level=0, inplace=True)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
        197 else:
        198     kwargs[new_arg_name] = new_arg_value
    --> 199 return func(*args, **kwargs)
        200
        201 return cast(F, wrapper)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/data.py in DataReader(name, data_source, start, end, retry_count, pause, session, api_key)
        374
        375 if data_source == "yahoo":
    --> 376     return YahooDailyReader(
        377         symbols=name,
        378         start=start,

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/base.py in read(self)
        251 # If a single symbol, (e.g., 'GOOG')
        252 if isinstance(self.symbols, (string_types, int)):
    --> 253     df = self._read_one_data(self.url, params=self._get_params(self.symbols))
        254 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT'])
        255 elif isinstance(self.symbols, DataFrame):

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/yahoo/daily.py in _read_one_data(self, url, params)
        151 url = url.format(symbol)
        152
    --> 153 resp = self._get_response(url, params=params)
        154 ptrn = r"root.App.main = (.*?);\n}(this));"
        155 try:

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/base.py in _get_response(self, url, params, headers)
        179 msg += "\nResponse Text:\n{0}".format(last_response_text)
        180
    --> 181 raise RemoteDataError(msg)
        182
        183 def _get_crumb(self, *args):

    RemoteDataError: Unable to read URL: https://finance.yahoo.com/quote/BABA/history?period1=1591671600&period2=1625972399&interval=1d&frequency=1d&filter=history
    Response Text: Yahoo maintenance page ("Will be right back... Thank you for your patience. Our engineers are working quickly to resolve the issue.")

    opened by geofffoster 8
  • Failed to build scs ERROR: Could not build wheels for scs which use PEP 517 and cannot be installed directly

    I am using Python 3.8.10 in a separate environment, and I got the following error when pip installing empyrial: "Failed to build scs ERROR: Could not build wheels for scs which use PEP 517 and cannot be installed directly".

    From this link https://github.com/pydata/bottleneck/issues/281 I tried pip install --upgrade pip setuptools wheel but I am still getting the same bug when installing empyrial.

    • OS: Ubuntu 20.04
    • mini conda version and a separate environment for trading
    • Python 3.8.10
    • Let me know if there is any way around this bug. Thanks
    opened by gurusura 8
  • RemoteDataError: No data fetched using 'YahooDailyReader'

    Discussed in https://github.com/ssantoshp/Empyrial/discussions/27

    Originally posted by karim1104 on July 3, 2021: Starting July 1, I'm getting the error "RemoteDataError: No data fetched using 'YahooDailyReader'". I've tried it in different Python environments (3.6, 3.8, 3.9). It seems like a pandas-datareader issue (https://github.com/pydata/pandas-datareader/issues/868). How can we resolve this? I have a subscription to FMP; is there a way to use it instead of Yahoo Finance?

    opened by ssantoshp 6
  • Error when rebalancing with only one stock

    Hi, I have tried to reproduce test results and simulated a single stock over time by forcing the weight distribution as shown below:

    tickers = ["stock1", "stock2"]
    weights_new_ = [1.0, 0.0]

    No optimizer is used, so this just uses the quantstats calculations of ratios and returns. In the next example, we do the same but with a yearly rebalancer. The results should be exactly the same, yet there seems to be a slight error in the returns calculations over time, which grows with more frequent rebalancing.

    I will have some more look at it, and update if I find the bug. Btw great work!

    opened by atobiese 5
  • Unlisted Stock Symbol Counted in Pie Chart

    Hi, awesome tool Santosh bhai. If a ticker symbol's data is not listed at the time of the start date, it still counts the ticker in the pie chart portfolio. Ideally it should not... or am I getting this wrong? Very new guy. Regards,

    opened by lawzeus 5
  • get_report error

    Describe the bug: the sample code (as per https://empyrial.gitbook.io/empyrial/save-the-tearsheet/get-a-report) throws an error.

    To Reproduce Steps to reproduce the behavior:

    1. Go to 'https://empyrial.gitbook.io/empyrial/save-the-tearsheet/get-a-report...'
    2. Run the sample code
    3. Scroll down to '....'
    4. See error

    NameError                                 Traceback (most recent call last)
    /var/folders/41/q1hx0rjd5xzck1vl121t6b2m0000gn/T/ipykernel_11664/2518190224.py in
         10 empyrial(portfolio)
         11
    ---> 12 get_report(portfolio)

    NameError: name 'get_report' is not defined


    Additional context: using a JupyterLab notebook.

    opened by lawzeus 5
  • Support for custom data, or data from other exchanges

    Is your feature request related to a problem? Please describe: I want to analyze portfolios on other exchanges.

    Describe the solution you'd like: the ability to provide data from other exchanges.

    opened by suvojit-0x55aa 4
  • Error when running fundlens

    Anaconda3\lib\site-packages\empyrial.py", line 610, in fundlens
        ['Dividend yield', yahoo_financials.get_dividend_yield()],
        ['Payout ratio', yahoo_financials.get_payout_ratio()],
        ['Controversy', controversy],
        ['Social score', social_score],

    UnboundLocalError: local variable 'controversy' referenced before assignment

    opened by jaredre 4
  • rebalance has a bug

    When you set up quarterly rebalance with only one ticker, the strategy and benchmark show different values. This is a bug. They should be completely equal.

    The codebase below reproduces the issue. The EOY returns and the timeseries plot of Cumulative returns vs benchmark show that the strategy and benchmark are divergent.

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date="2021-01-01",
        portfolio=["BTC-USD", "GOOG"],
        weights=[1, 0.],  #equal weighting is set by default
        benchmark=["BTC-USD"],  #SPY is set by default
        rebalance='quarterly'
    )
    empyrial(portfolio)

    opened by rgleavenworth 3
  • Graph styling

    Is there a way to override the default styling parameters used in your tearsheet? I understand that most of the styling is inherited from quantstats. Can you suggest how to change things like facecolor, linewidth, etc.?

    opened by rgleavenworth 3
  • EM Optimizer fails if benchmark changed to Nifty50 (yahoo ticker used "^NSEI")

    Describe the bug: the EM optimizer fails when the default benchmark is altered to Nifty.

    However, if the default is restored, it works.

    To Reproduce: use this code

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date="2015-01-01",  #start date for the backtesting
        portfolio=["TCS.NS", "INFY.NS", "HDFC.NS", "KOTAKBANK.NS", "TITAN.NS", "NESTLEIND.NS"],  #assets in your portfolio
        benchmark=["NSEI"]
        optimizer="EF"
    )
    empyrial(portfolio)

    Error message:

    File "/var/folders/41/q1hx0rjd5xzck1vl121t6b2m0000gn/T/ipykernel_2924/1251204071.py", line 7
        optimizer = "EF"
                ^
    SyntaxError: invalid syntax


    Desktop (please complete the following information):

    • OS: MacOsx
    • Browser Chrome
    • jupyter


    opened by lawzeus 3
  • assets value / non-stock based portfolio?

    Wondering if Empyrial can be used with a non-stock based portfolio. The example in the docs is like this:

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date="2018-06-09",
        portfolio=["BABA", "PDD", "KO", "AMD", "^IXIC"],
        weights=[0.2, 0.2, 0.2, 0.2, 0.2],  #equal weighting is set by default
        benchmark=["SPY"]  #SPY is set by default
    )
    empyrial(portfolio)

    Is there any alternate way to define a portfolio, not as a list of stocks / weights but based on the value of the assets in the account?

    opened by andrew521 2
  • str and Timestamp error

    The code:

    from empyrial import empyrial, Engine
    portfolio = Engine(
                      start_date= "2021-01-01", #start date for the backtesting
                      end_date= "2022-05-01",
                      portfolio= tickers[:], #assets in your portfolio
                      weights = w2[:],
                      benchmark=["XU100.IS"]
    )
    print(empyrial(portfolio))
    print(portfolio)
    

    It gives an error like below.

    TypeError                                 Traceback (most recent call last)
    ~\AppData\Local\Temp/ipykernel_10148/966461475.py in
         11 )
         12
    ---> 13 print(empyrial(portfolio))
         14 print(portfolio)

    ~\AppData\Roaming\Python\Python39\site-packages\empyrial.py in empyrial(my_portfolio, rf, sigma_value, confidence_value)
        304 empyrial.SR = SR
        305
    --> 306 CR = qs.stats.calmar(returns)
        307 CR = CR.tolist()
        308 CR = str(round(CR, 2))

    ~\AppData\Roaming\Python\Python39\site-packages\quantstats\stats.py in calmar(returns, prepare_returns)
        547 if prepare_returns:
        548     returns = _utils._prepare_returns(returns)
    --> 549 cagr_ratio = cagr(returns)
        550 max_dd = max_drawdown(returns)
        551 return cagr_ratio / abs(max_dd)

    ~\AppData\Roaming\Python\Python39\site-packages\quantstats\stats.py in cagr(returns, rf, compounded)
        500 total = _np.sum(total)
        501
    --> 502 years = (returns.index[-1] - returns.index[0]).days / 365.
        503
        504 res = abs(total + 1.0) ** (1.0 / years) - 1

    TypeError: unsupported operand type(s) for -: 'str' and 'Timestamp'

    opened by burakgulmez 1