A tax calculator for stock and dividend activities.

Overview

Revolut Stocks calculator for Bulgarian National Revenue Agency

Information

Processing and calculating the required information about stock ownership and operations is complicated and time-consuming. That motivated the development of a calculator able to automate the process end-to-end.

Revolut Stocks calculator is able to parse Revolut statement documents and produce a ready-to-use tax declaration file (dec50_2020_data.xml) that you can import into the NAP online system. Each part of the declaration is also exported as a CSV file for verification.

How it works

  1. The calculator recursively scans the input directory for statement files (*.pdf).
  2. The statement files are then parsed to extract all activity information.
  3. The calculator then obtains the last published exchange rate (USD to BGN) for the day of each trade.
  4. During the last step, all activities are processed to produce the required data.
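The steps above can be sketched in Python. This is a minimal illustration, not the calculator's actual API: `collect_statements`, `to_bgn`, and the sample rate of 1.77 BGN per USD are assumptions for the example.

```python
from datetime import date
from pathlib import Path

def collect_statements(input_dir: str) -> list:
    """Step 1: recursively scan the input directory for *.pdf statements."""
    return sorted(Path(input_dir).rglob("*.pdf"))

def to_bgn(amount_usd: float, rate: float) -> float:
    """Steps 3-4: convert a USD amount using the day's published USD->BGN rate."""
    return round(amount_usd * rate, 2)

# A parsed SELL activity (step 2 output) converted with a sample rate.
activity = {"trade_date": date(2020, 3, 2), "activity_type": "SELL",
            "symbol": "AAPL", "amount": 150.0}
print(to_bgn(activity["amount"], 1.77))  # 265.5
```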

Considerations

  1. The calculator parses exported statements in PDF format. Parsing a PDF file is a risky task and heavily depends on the structure of the file. In order to prevent miscalculations, please review the generated statements.csv file under the output directory and make sure all activities are correctly extracted from your statement files.
  2. Revolut doesn't provide information about which exact stock lot is being sold during a sale. As currently indicated at the end of each statement file, the default tax lot disposition method is First-In, First-Out, and the calculator is developed according to that rule.
  3. The trade date (instead of the settlement date) is used for every calculation. The decision is based on the fact that the Revolut stock platform makes the cash available immediately after the initiation of a stock sale. Although the cash can't be withdrawn, it can be used for other deals, so the transfer is assumed to be finished from a user perspective.
  4. By default the calculator uses locally cached exchange rates located here. If you want, you can select the BNB online service as the exchange rates provider by enabling the -b flag. When activating the BNB online service provider, make sure you do not spam the BNB service with too many requests. Each execution makes around 3-5 requests.
  5. In application 8, part 1, you have to list all stocks that you own at the end of the previous year (31.12.20XX). That includes stocks purchased prior to the year you're filing the declaration for. There are comments in both the CSV and XML files identifying stock symbols along with their records. You can use those identification comments to aggregate the records with data that is out of the scope of the calculator.
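The First-In, First-Out rule from consideration 2 can be illustrated with a small sketch. This mirrors the disposition logic only and is not the calculator's actual code; it assumes enough previously purchased shares exist to cover the sale.

```python
from collections import deque

def fifo_sell(lots, sell_quantity):
    """Match a sale against the oldest purchase lots first (FIFO).

    lots is a deque of (quantity, price_per_share) tuples in purchase order;
    returns the (quantity, purchase_price) pieces consumed by the sale.
    """
    consumed = []
    remaining = sell_quantity
    while remaining > 0:
        qty, price = lots.popleft()
        take = min(qty, remaining)
        consumed.append((take, price))
        if qty > take:                       # put back the unsold remainder
            lots.appendleft((qty - take, price))
        remaining -= take
    return consumed

lots = deque([(10, 100.0), (5, 120.0)])      # two purchases, oldest first
print(fifo_sell(lots, 12))                   # [(10, 100.0), (2, 120.0)]
print(lots)                                  # deque([(3, 120.0)])
```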

Requirements

  • Python version >= 3.7
  • Docker and Docker Compose (only required for the Docker Compose usage option)

Usage

Local

Note: The calculator is not tested natively on Windows. When using Windows, it's preferable to use WSL and Docker.

Install dependencies

$ pip install -r requirements.txt

Run (single parser)

$ python stocks.py -i <path_to_input_dir> -o <path_to_output_dir>

Run (multiple parsers)

In order to use multiple parsers, you need to sort your statement files into corresponding parser directories under the selected input directory. For example:

/input-directory/revolut - directory contains Revolut statement files
/input-directory/trading212 - directory contains Trading 212 statement files

You can use the help command to list supported parsers with their names.

$ python stocks.py -i <path_to_input_dir> -o <path_to_output_dir> -p <parser_name_1> -p <parser_name_2> ...

Help

$ python stocks.py -h

Output:

[INFO]: Collecting statement files.
[INFO]: Collected statement files for processing: ['input/statement-3cbc62e0-2e0c-44a4-ae0c-8daa4b7c41bc.pdf', 'input/statement-19ed667d-ba66-4527-aa7a-3a88e9e4d613.pdf'].
[INFO]: Parsing statement files.
[INFO]: Generating [statements.csv] file.
[INFO]: Populating exchange rates.
[INFO]: Generating [app8-part1.csv] file.
[INFO]: Calculating sales information.
[INFO]: Generating [app5-table2.csv] file.
[INFO]: Calculating dividends information.
[INFO]: Generating [app8-part4-1.csv] file.
[INFO]: Generating [dec50_2020_data.xml] file.
[INFO]: Profit/Loss: 1615981 lev.

Docker

Docker Hub images are built and published by a GitHub Actions workflow. The following tags are available:

  • main - the image is built from the latest commit in the main branch.
  • version tags (e.g. 0.6.0) - the image is built from the corresponding released version.

Run

$ docker run --rm -v <path_to_input_dir>:/input:ro -v <path_to_output_dir>:/output gretch/nap-stocks-calculator:main -i /input -o /output

Docker Compose

Prepare

Replace the <path_to_input_dir> and <path_to_output_dir> placeholders in the docker-compose.yml with paths to your input and output directories.

Run

$ docker-compose up --build

Results

Output file          NAP mapping                                      Description
dec50_2020_data.xml  Декларация по чл.50 от ЗДДФЛ, Приложение 5 и 8   Tax declaration - ready for import.
statements.csv       N/A                                              Verification file to ensure correct parsing. Should be verified manually.
app5-table2.csv      Приложение 5, Таблица 2
app8-part1.csv       Приложение 8, Част І
app8-part4-1.csv     Приложение 8, Част ІV, 1

Errors

Errors are reported with an ERROR label. For example:

[ERROR]: Unable to get exchange rate from BNB. Please, try again later.
Traceback (most recent call last):
  File "/mnt/c/Users/doino/Personal/revolut-stocks/libs/exchange_rates.py", line 57, in query_exchange_rates
    date = datetime.strptime(row[0], BNB_DATE_FORMAT)
  File "/usr/lib/python3.8/_strptime.py", line 568, in _strptime_datetime
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
  File "/usr/lib/python3.8/_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data ' ' does not match format '%d.%m.%Y'

Please check the latest reported error in the log for more information.

"Unable to get exchange rate from BNB"

The error indicates that there was an issue obtaining the exchange rate from the BNB online service. Please test the BNB online service manually here before reporting an issue.

"No statement files found"

There was an issue finding input statement files. Please check your input directory configuration and file permissions.

"Not activities found. Please, check your statement files"

The calculator parser was unable to parse any activities from your statement files. Please check your statement files and ensure there are reported activities. If there are reported activities but the error still persists, please open an issue.

"Statements contain unsupported activity types"

The calculator found one or more unsupported activity types. Please open an issue and include the reported activity type.

"Unable to find previously purchased shares to surrender as part of SSP"

While performing the SSP surrender shares operation, the calculator was unable to find the previously purchased shares for the same stock symbol. Please ensure there is a statement file in the input directory containing the original purchase.

Import

NOTE: Importing dec50_2020_data.xml will clear all filled-in data in your current tax declaration.

The dec50_2020_data.xml file contains applications 5 and 8. It can be imported into the NAP online system via the NAP web interface by navigating to Декларации, документи или данни, подавани от физически лица/Декларации по ЗДДФЛ/Декларация по чл.50 от ЗДДФЛ and clicking the Импорт на файл button.

During the import, a few errors will be reported. That's normal (see the exception below). The reason for the errors is that the imported file contains data for applications 5 and 8 only, but the system expects a complete filling of the document. After the import, you can continue filling in your tax declaration as usual. Don't forget to enable applications 5 and 8 under part 3 of the main document. After enabling them, navigate to each application, verify the data, and click the Потвърди button.

If, however, errors are reported in the fillings of applications 5 or 8 themselves, that's a sign of a bug in the calculator. Please report the error here.

Parsers

Revolut

File format: .pdf

This is the default parser; it handles statement files downloaded from the Revolut app.

Trading 212

File format: .csv

Parser for statement files, generated by Trading 212 platform. Thanks to @bobcho.

CSV

File format: .csv

A generic parser for statements in CSV format. So far, there are two identified usage scenarios:

  1. The parser can be used with structured data from any trading platform that can be easily organized to fit the parser's requirements.
  2. The parser can be used to calculate tax information from multiple trading platforms. For example, you can generate a statements.csv file for your Revolut activities and another for your Trading 212 activities. Then you can append both files and process the resulting file once more. In the end, you'll receive tax information covering both platforms.
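Appending two generated statements.csv files amounts to keeping a single header row. A minimal sketch (the function name and file paths are illustrative):

```python
import csv

def append_statements(paths, combined_path):
    """Concatenate several statements.csv files, keeping one header row."""
    header_written = False
    with open(combined_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in paths:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)          # each file starts with a header
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                writer.writerows(reader)       # copy the data rows verbatim

# append_statements(["revolut/statements.csv", "trading212/statements.csv"],
#                   "combined/statements.csv")
```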

In order for the file to be parsed correctly, the following requirements should be met:

  1. The following columns should be present:
    1. trade_date: The date of the trade in dd.MM.YYYY format.
    2. activity_type: The current row activity type. The following types are supported: ["SELL", "BUY", "DIV", "DIVNRA", "SSP", "MAS"]
    3. company: The name of the stock company. For example Apple INC.
    4. symbol: The symbol of the stock. For example AAPL.
    5. quantity: The quantity of the activity. In order to correctly distinguish surrender from addition in SSP and MAS activities, the quantity should be signed (positive or negative). For all other activity types there is no such requirement (it can be an absolute value).
    6. price: The activity price per share.
    7. amount: The total amount of the activity. It should equal (quantity x price) + commissions + taxes.
  2. The first row should contain headers indicating the column names, according to the mapping above. The columns are not required to appear in any particular order.
  3. The activities listed in the file(s) should be sorted from the earliest trading date to the latest. The earliest date should be located at the very beginning of the file. When you're processing multiple statement files, you can append them together (no need to merge the activities).
  4. DIVNRA, which represents the tax that was paid upon receiving dividends, should follow a DIV activity. Other activities may be listed between those two events. DIVNRA is not required for every DIV, but it triggers the calculation of dividend tax owed to NAP. The DIV activity amount should equal the dividend value + tax.
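A minimal input file satisfying the requirements above might look like the sample below (the values are illustrative). The snippet also checks the date format and the amount rule from requirement 1.7, assuming zero commissions and taxes:

```python
import csv
import io
from datetime import datetime

SAMPLE = """\
trade_date,activity_type,company,symbol,quantity,price,amount
02.03.2020,BUY,Apple INC,AAPL,2,74.50,149.00
15.04.2020,SELL,Apple INC,AAPL,2,71.00,142.00
"""

for row in csv.DictReader(io.StringIO(SAMPLE)):
    # trade_date must parse as dd.MM.YYYY
    datetime.strptime(row["trade_date"], "%d.%m.%Y")
    # amount = (quantity x price) + commissions + taxes (zero here)
    assert float(row["amount"]) == float(row["quantity"]) * float(row["price"])
```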

In order to verify the parser's correctness, you can compare the generated statements.csv file with your input file. The data should be the same in both files.

Contribution

As this was a late-night project, there is room for improvement. I'm open to new PRs.

Please submit issues here.

Feedback

You can find me on my social media accounts.

Support

🍺 Buy Me A Beer

