Forecasting for knowable future events using Bayesian informative priors (forecasting with judgmental adjustment).

Overview

What is judgyprophet?

judgyprophet is a Bayesian forecasting algorithm based on Prophet that enables forecasting using information known to the business about future events. The aim is to enable users to perform forecasting with judgmental adjustment in a way that is mathematically as sound as possible.

Some events will have a big effect on your timeseries, and some of these you are aware of ahead of time. For example:

  • An existing product entering a new market.
  • A price change to a product.

These events will typically cause a large change in your timeseries of e.g. product sales, which a standard statistical forecast will consistently underestimate.

The business will often have good estimates (or at least better than your statistical forecast) about how these events will affect your timeseries. But this is difficult to encode into your statistical forecasting algorithm. One option is to use a regressor, but this typically works poorly. This is because you have no data on the event before it occurs, and the statistical forecast does not know how to balance the information in your regressor and trend after the event occurs (which can lead to erratic behaviour).
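As a minimal illustration of why a regressor fails pre-event (hypothetical numbers, not judgyprophet code): before the event occurs, the event regressor is identically zero throughout the training window, so least squares has no information with which to estimate its coefficient:

```python
import numpy as np

# Hypothetical setup: 24 months of training data; the event starts at
# month 30, i.e. after the end of the training window.
n_train = 24
t = np.arange(n_train)
event_regressor = (t >= 30).astype(float)  # identically zero pre-event

# Design matrix: intercept, linear trend, event regressor
X = np.column_stack([np.ones(n_train), t, event_regressor])

# The event column is all zeros, so X is rank-deficient: least squares
# cannot identify the event coefficient from pre-event data.
print(np.linalg.matrix_rank(X))  # 2 informative columns, not 3
```

Once the event does enter the data, the coefficient becomes identifiable, but the fit can then swing erratically as the model rebalances the regressor against the trend.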

judgyprophet solves this problem by encoding the business estimate of how the event will affect the forecast (the judgmental adjustment) as a Bayesian informative prior.

Before the event occurs, this business information is used in the forecast to reflect what is expected to happen post-event, e.g. the estimated uplift in product sales once the event has happened. After the event occurs, we update what the business thinks will happen with what we see happening in the actuals. This is done using standard Bayesian updating.
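As a sketch of the idea, here is a toy conjugate Normal-Normal update with made-up numbers (judgyprophet's actual model is fitted with STAN): the business estimate acts as the prior mean for the event effect, and post-event actuals pull the posterior towards what was actually observed:

```python
import numpy as np

# Illustrative only: the business estimate is the prior mean for the
# event effect; post-event observations update it.
prior_mean, prior_var = 6.0, 1.0   # business estimate of the uplift
obs_noise_var = 0.5                # assumed observation noise

# Hypothetical post-event observations of the uplift
observations = np.array([4.8, 5.1, 4.9])

# Standard conjugate Normal-Normal update
n = len(observations)
post_var = 1.0 / (1.0 / prior_var + n / obs_noise_var)
post_mean = post_var * (prior_mean / prior_var + observations.sum() / obs_noise_var)

# The posterior mean is pulled from the prior (6.0) towards the observed
# uplift (about 4.9), weighted by the relative precisions.
print(round(post_mean, 2))
```

With no post-event observations the posterior is just the prior (the business estimate); as actuals accumulate, the data increasingly dominates.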

Installation

1. Install the judgyprophet Python package using pip:

pip install judgyprophet

2. Compile the STAN model

judgyprophet depends on STAN, whose models must be compiled before they can be run.

So to use judgyprophet, first compile the model. Do this from the shell using

python -c "from judgyprophet import JudgyProphet; JudgyProphet().compile()"

or from Python using

from judgyprophet import JudgyProphet

JudgyProphet().compile()

This will take a while, but you only have to run it once, after the initial install.

Documentation

Full documentation is available on our GitHub Pages site here.

Scroll down for a quickstart tutorial.

A runnable Jupyter notebook version of the quickstart tutorial is available here.

Roadmap

Some things on our roadmap:

  • The judgyprophet STAN model is currently only tested on Unix-based systems (Linux and macOS). We aim to fully test it on Windows ASAP.
  • The option to run full MCMC, rather than just L-BFGS.
  • Prediction intervals.
  • Regressors/holidays.

Quickstart Tutorial

Imagine your business currently operates in the US, but is launching its product in Europe. As a result, it anticipates a sharp uptake in sales, which it has an estimate of. As the forecasting team, you are asked to account for this in your forecasts.

Let's look at how we might do this using judgyprophet with some example data, where we know what happened. First let's plot this:

from judgyprophet.tutorials.resources import get_trend_event

example_data = get_trend_event()
p = example_data.plot.line()

[plot of the example product sales data]

We can see that product sales increased sharply from about September 2020. Suppose it was a launch in a new market, and that the business had an initial estimate of the impact in May 2020. The business expected the slope increase to be 6.

Let's use judgyprophet to forecast this series from May 2020. We do this by encoding the initial business estimate as a trend event.

from judgyprophet import JudgyProphet
import pandas as pd
import seaborn as sns

# Create the expected trend events by consulting with the business
trend_events = [
    {'name': "New market entry", 'index': '2020-09-01', 'm0': 6}
]


# Cutoff the data to May 2020
data_may2020 = example_data.loc[:"2020-05-01"]

# Make the forecast with the business estimated trend event
# We have no level events, so just provide the empty list.
jp = JudgyProphet()
# Because the event is beyond the actuals, judgyprophet throws a warning.
#    This is just because the Bayesian model at the event has no actuals to learn from.
#    The event is still used in predictions.
jp.fit(
    data=data_may2020,
    level_events=[],
    trend_events=trend_events,
    # Set random seed for reproducibility
    seed=13
)
predictions = jp.predict(horizon=12)
INFO:judgyprophet.judgyprophet:Rescaling onto 0-mean, 1-sd.
WARNING:judgyprophet.judgyprophet:Post-event data for trend event New market entry less than 0 points. Event deactivated in model. Event index: 2020-09-01, training data end index: 2019-06-01 00:00:00
WARNING:judgyprophet.utils:No active trend or level events (i.e. no event indexes overlap with data). The model will just fit a base trend to the data.


Initial log joint probability = -3.4521
    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes
       3      -2.92768      0.054987   8.11433e-14           1           1        7
Optimization terminated normally:
  Convergence detected: gradient norm is below tolerance

Because we are in May 2020, the forecasting algorithm has nothing to learn from, so it just uses the business estimate. Let's plot the result:

from judgyprophet.tutorials.resources import plot_forecast

plot_forecast(
    actuals=example_data,
    predictions=predictions,
    cutoff="2020-05-01",
    events=trend_events
)
INFO:prophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:prophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:prophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.



Initial log joint probability = -17.0121
Iteration  1. Log joint probability =    10.4753. Improved by 27.4875.
Iteration  2. Log joint probability =    12.7533. Improved by 2.27796.
Iteration  3. Log joint probability =    25.4696. Improved by 12.7163.
Iteration  4. Log joint probability =     26.707. Improved by 1.2374.
Iteration  5. Log joint probability =    26.7075. Improved by 0.000514342.
Iteration  6. Log joint probability =    26.7104. Improved by 0.00296558.
Iteration  7. Log joint probability =    26.7122. Improved by 0.00171322.
Iteration  8. Log joint probability =    26.7157. Improved by 0.00351772.
Iteration  9. Log joint probability =    26.7159. Improved by 0.000208268.
Iteration 10. Log joint probability =    26.7159. Improved by 6.64977e-05.
Iteration 11. Log joint probability =     26.716. Improved by 6.89899e-05.
Iteration 12. Log joint probability =     26.716. Improved by 3.06578e-05.
Iteration 13. Log joint probability =     26.716. Improved by 8.91492e-07.
Iteration 14. Log joint probability =     26.716. Improved by 8.71052e-09.

[plot of the May 2020 forecast against actuals]

We can see judgyprophet is accounting for the increased trend, but the business slightly overestimated the increase in sales due to the product launch.

Let's fast forward to January 2021: the business wants to reforecast based on its estimate and what it has seen so far from the product launch. This is where judgyprophet comes into its own.

Once actuals are observed after the event has taken place, judgyprophet updates its estimate of what the event impact is. Let's look at this in action:

# Cutoff the data to January 2021
data_jan2021 = example_data.loc[:"2021-01-01"]

# Reforecast using the new actuals, now we are at Jan 2021
jp = JudgyProphet()
jp.fit(
    data=data_jan2021,
    level_events=[],
    trend_events=trend_events,
    # Set random seed for reproducibility
    seed=13
)
predictions = jp.predict(horizon=12)
INFO:judgyprophet.judgyprophet:Rescaling onto 0-mean, 1-sd.
INFO:judgyprophet.judgyprophet:Adding trend event New market entry to model. Event index: 2020-09-01, training data start index: 2019-06-01 00:00:00, training data end index: 2021-01-01 00:00:00. Initial gradient: 6. Damping: None.


Initial log joint probability = -309.562
    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes
      10      -1.64341   2.10244e-05   3.61281e-06           1           1       15
Optimization terminated normally:
  Convergence detected: relative gradient magnitude is below tolerance

Now let's plot the results:

plot_forecast(actuals=example_data, predictions=predictions, cutoff="2021-01-01", events=trend_events)
INFO:prophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:prophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:prophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.



Initial log joint probability = -24.5881
Iteration  1. Log joint probability =   -1.06803. Improved by 23.5201.
Iteration  2. Log joint probability =    11.6215. Improved by 12.6895.
Iteration  3. Log joint probability =    36.5271. Improved by 24.9056.
Iteration  4. Log joint probability =    37.3776. Improved by 0.850488.
Iteration  5. Log joint probability =    37.6489. Improved by 0.271259.
Iteration  6. Log joint probability =    37.6547. Improved by 0.00580657.
Iteration  7. Log joint probability =    37.7831. Improved by 0.128419.
Iteration  8. Log joint probability =    37.7884. Improved by 0.00527858.
Iteration  9. Log joint probability =     37.789. Improved by 0.000612124.
Iteration 10. Log joint probability =    37.7891. Improved by 9.93823e-05.
Iteration 11. Log joint probability =    37.7902. Improved by 0.00112416.
Iteration 12. Log joint probability =    37.7902. Improved by 3.17397e-06.
Iteration 13. Log joint probability =    37.7902. Improved by 1.59404e-05.
Iteration 14. Log joint probability =    37.7902. Improved by 5.06854e-07.
Iteration 15. Log joint probability =    37.7902. Improved by 6.87792e-07.
Iteration 16. Log joint probability =    37.7902. Improved by 4.82761e-08.
Iteration 17. Log joint probability =    37.7902. Improved by 2.50385e-07.
Iteration 18. Log joint probability =    37.7902. Improved by 6.60322e-09.

[plot of the January 2021 reforecast against actuals]

In this case, once judgyprophet observes the post-event data, the Bayesian updating realises the business estimate was a bit too large, so it reduces it.

This was a simple example to demonstrate judgyprophet. You can add multiple trend events within a single forecasting horizon, and add damping. You can also add level events (changes in the forecast level) and seasonality; see our other tutorials for details.
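As a rough sketch of what damping does (using the standard damped-trend formulation; the exact parameterisation inside judgyprophet may differ), each successive slope contribution is multiplied by the damping factor gamma, so the uplift levels off rather than growing linearly:

```python
import numpy as np

# Illustrative damped trend. m0 is the business-estimated slope increase,
# gamma the damping factor (0 < gamma <= 1; gamma = 1 means no damping).
m0, gamma, horizon = 6.0, 0.9, 12
steps = np.arange(1, horizon + 1)

linear_uplift = m0 * steps                      # undamped: grows forever
damped_uplift = m0 * np.cumsum(gamma ** steps)  # damped: levels off

print(linear_uplift[-1], round(damped_uplift[-1], 1))
```

With gamma close to 1 the two are similar over short horizons, which is why values below about 0.8 are generally not recommended: heavy damping kills the event effect almost immediately.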

Releases: 0.1.2
Owner: AstraZeneca (Data and AI: Unlocking new science insights)