vartests is a Python library to perform statistical tests that evaluate Value at Risk (VaR) models

Overview


vartests is a Python library to perform statistical tests that evaluate Value at Risk (VaR) models, such as:

  • T-test: verifies whether the mean of the distribution is zero;
  • Kupiec Test (1995): verifies whether the number of observed violations is consistent with the number of violations predicted by the model (its test statistic is sketched just after this list);
  • Berkowitz Test (2001): verifies whether the conditional distribution of returns (e.g., GARCH(1,1)) used in the VaR model is consistent with the data. In this specific test, only the tail is observed, not the whole distribution;
  • Christoffersen and Pelletier Test (2004): also known as the Duration Test. The duration is the time between VaR violations. It tests whether the VaR model responds quickly to market movements, so that violations do not form volatility clusters. Violations should have no memory, i.e., they should be independent.
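
For reference, the Kupiec proportion-of-failures (POF) test compares the observed violation rate with the rate implied by the VaR confidence level. A minimal sketch of its likelihood-ratio statistic, with $T$ observations, $x$ violations, and expected violation probability $p = 1 - \text{VaR confidence level}$ (notation ours, not necessarily the library's):

$$
LR_{\mathrm{POF}} = -2 \ln \left[ \frac{(1-p)^{T-x}\, p^{x}}{\left(1-\tfrac{x}{T}\right)^{T-x} \left(\tfrac{x}{T}\right)^{x}} \right] \;\overset{H_0}{\sim}\; \chi^2_{1}
$$

Under the null hypothesis of correct coverage the statistic is asymptotically chi-squared with one degree of freedom, so large values lead to rejection of the VaR model.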

Installation

Using pip

You can install it using the pip package manager by running:

pip install vartests

Alternatively, you can install the latest version directly from GitHub:

pip install https://github.com/rafa-rod/vartests/archive/refs/heads/main.zip

Why is vartests important?

After computing VaR, it is necessary to perform statistical tests to evaluate the VaR models. To select the best model, candidate models should be validated with backtests.

Example

First of all, let's read a file containing the PnL (profit-and-loss distribution) of a portfolio, which also contains the VaR and its violations.

import pandas as pd

data = pd.read_excel("Example.xlsx", index_col=0)
violations = data["Violations"]
pnl = data["PnL"] 
data.sample(5)

The dataframe looks like:

| PnL          | VaR          | Violations |
|--------------|--------------|------------|
| -889.003707  | -2554.503872 | 0          |
| -2554.503872 | -2202.221691 | 1          |
| -887.527423  | -2193.692570 | 0          |
| -274.344126  | -2160.290746 | 0          |
| 1376.018638  | -5719.833100 | 0          |

Not every test applies to every VaR model. Some apply only when the VaR model assumes a zero mean or a specific distribution, so you should first test the data:

import vartests

vartests.zero_mean_test(pnl.values, conf_level=0.95)
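
Conceptually this is a one-sample t-test of the mean against zero. As a sanity check you can reproduce the idea with SciPy (a sketch only; `scipy.stats.ttest_1samp` is not part of vartests and the library's exact implementation may differ):

from scipy import stats

# two-sided one-sample t-test of H0: mean(PnL) == 0
t_stat, p_value = stats.ttest_1samp(pnl.values, popmean=0.0)
# at a 95% confidence level, the zero-mean assumption is not rejected if p_value > 0.05
print(t_stat, p_value)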

The zero-mean assumption is commonly used in parametric VaR models such as EWMA and GARCH. Besides that, it is necessary to check the distributional assumption, so you should run the Berkowitz (2001) test:

import vartests

vartests.berkowtiz_tail_test(pnl, volatility_window=252, var_conf_level=0.99, conf_level=0.95)

The following tests can be applied to any kind of VaR model.

import vartests

vartests.kupiec_test(violations, var_conf_level=0.99, conf_level=0.95)

vartests.duration_test(violations, conf_level=0.95)

If you want to see the failure rate of the VaR model, just type:

import vartests

vartests.failure_rate(violations)
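
The failure rate is simply the share of days on which a violation occurred; for a well-calibrated model it should be close to 1 minus the VaR confidence level. A minimal sketch of the same computation (not the library's implementation):

# observed share of VaR violations
observed_rate = violations.sum() / len(violations)
# expected violation rate for a 99% VaR
expected_rate = 1 - 0.99
print(observed_rate, expected_rate)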