A simple Flask application to collect annotations for the Turing Change Point Dataset, a benchmark dataset for change point detection algorithms

AnnotateChange

Welcome to the repository of the "AnnotateChange" application. This application was created to collect annotations of time series data in order to construct the Turing Change Point Dataset (TCPD). The TCPD is a dataset of real-world time series used to evaluate change point detection algorithms. For the change point detection benchmark that was created using this dataset, see the Turing Change Point Detection Benchmark repository.

Any work that uses this repository should cite our paper: Van den Burg & Williams - An Evaluation of Change Point Detection Algorithms (2020). You can use the following BibTeX entry:

@article{vandenburg2020evaluation,
        title={An Evaluation of Change Point Detection Algorithms},
        author={{Van den Burg}, G. J. J. and Williams, C. K. I.},
        journal={arXiv preprint arXiv:2003.06222},
        year={2020}
}

Here's a screenshot of what the application looks like during the annotation process:

[screenshot of AnnotateChange]

Some of the features of AnnotateChange include:

  • Admin panel to add/remove datasets, add/remove annotation tasks, add/remove users, and inspect incoming annotations.

  • Basic user management: authentication, email confirmation, forgotten password, automatic log out after inactivity, etc. Users are only allowed to register using an email address from an approved domain.

  • Task assignment of time series to users is done on the fly, ensuring that no user ever annotates the same dataset twice and prioritising datasets that are close to their desired number of annotations.

  • Interactive time series graph that supports pan and zoom, including support for multidimensional time series.

  • Mandatory "demo" to onboard the user to change point annotation.

  • Backup of annotations to the admin via email.

  • Time series datasets are verified upon upload according to a strict schema (an illustrative example is sketched below).
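
To give an idea of the expected format, below is a minimal sketch of what a conforming dataset file might look like. The exact field names here are assumptions for illustration only; the authoritative definition is the JSON schema in utils/dataset_schema.json, and the files in demo_data are complete, working examples.

    {
      "name": "example",
      "longname": "Example dataset",
      "n_obs": 4,
      "n_dim": 1,
      "time": {
        "index": [0, 1, 2, 3]
      },
      "series": [
        {
          "label": "V1",
          "type": "float",
          "raw": [1.0, 2.5, 2.4, 9.0]
        }
      ]
    }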

Getting Started

Below are instructions for setting up the application for local development and for running the application with Docker.

Basic

AnnotateChange can be launched quickly for local development as follows:

  1. Clone the repo

    $ git clone https://github.com/alan-turing-institute/AnnotateChange
    $ cd AnnotateChange
    
  2. Set up a virtual environment and install dependencies (requires Python 3.7+)

    $ sudo apt-get install -y python3-venv # assuming Ubuntu
    $ python3 -m venv ./venv
    $ source ./venv/bin/activate
    $ pip install wheel
    $ pip install -r requirements.txt
    
  3. Create local development environment file

    $ cp .env.example .env.development
    $ sed -i 's/DB_TYPE=mysql/DB_TYPE=sqlite3/g' .env.development
    

    With DB_TYPE=sqlite3, we don't have to deal with MySQL locally.

  4. Initialize the database (this will be a local app.db file).

    $ ./flask.sh db upgrade
    
  5. Create the admin user account

    $ ./flask.sh admin add --auto-confirm-email
    

    The --auto-confirm-email flag automatically marks the email address of the admin user as confirmed. This is mostly useful in development environments when you don't have a mail address set up yet.

  6. Run the application

    $ ./flask.sh run
    

    This should tell you where it's running, most likely localhost:5000. You should be able to log in with the admin account you've just created.

  7. As admin, upload ALL demo datasets (included in demo_data) through: Admin Panel -> Add dataset. You should then be able to follow the introduction to the app (available from the landing page).

  8. After completing the introduction, you will be able to access the user interface ("Home") and annotate your own time series.

Docker

To use AnnotateChange locally using Docker, follow the steps below. For a full-fledged installation on a server, see the deployment instructions.

  1. Install docker and docker-compose.

  2. Clone this repository and switch to it:

    $ git clone https://github.com/alan-turing-institute/AnnotateChange
    $ cd AnnotateChange
    
  3. Build the docker image:

    $ docker build -t gjjvdburg/annotatechange .
    
  4. Create the directories for persistent storage of the application instance data and the MySQL database:

    $ mkdir -p persist/{instance,mysql}
    $ sudo chown :1024 persist/instance
    $ chmod 775 persist/instance
    $ chmod g+s persist/instance
    
  5. Copy the environment variables file:

    $ cp .env.example .env
    

    Some environment variables can be adjusted if needed. For example, when moving to production you'll need to change the FLASK_ENV variable accordingly. Please also make sure to set a proper SECRET_KEY and AC_MYSQL_PASSWORD (= MYSQL_PASSWORD). You'll also need to configure a mail account so the application can send out emails for registration and the like; this is what the variables prefixed with MAIL_ are for. The ADMIN_EMAIL is most likely your own email address; it is used when the app encounters an error and to send backups of the annotation records. You can limit the email domains users are allowed to register with through the USER_EMAIL_DOMAINS variable. See the config.py file for more information on the configuration options. An illustrative sketch of such an environment file is shown after these steps.

  6. Create a local docker network for communication between the AnnotateChange app and the MySQL server:

    $ docker network create web
    
  7. Launch the services with docker-compose

    $ docker-compose up
    

    You may need to wait a couple of minutes here while the database is initialized. If all goes well, you should be able to point your browser to localhost:7831 and see the landing page of the application. Stop the service (by pressing Ctrl+C) before continuing to the next step.

  8. Once you have the app running, you'll want to create an admin account so you can upload datasets, manage tasks and users, and download annotation results. This can be done using the following command:

    $ docker-compose run --entrypoint 'flask admin add --auto-confirm-email' annotatechange
    
  9. As admin, upload ALL demo datasets (included in demo_data) through: Admin Panel -> Add dataset. You should then be able to follow the introduction to the app (available from the landing page).

  10. After completing the introduction, you will be able to access the user interface ("Home") and annotate your own time series.
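
For reference, here is an illustrative sketch of what such an environment file might contain. All values below are placeholders, and the specific MAIL_ variable names are assumptions based on typical Flask-Mail settings; the authoritative list of options is in .env.example and config.py.

    FLASK_ENV=development
    SECRET_KEY=<long random string>
    DB_TYPE=mysql
    AC_MYSQL_PASSWORD=<database password>
    MYSQL_PASSWORD=<database password>
    ADMIN_EMAIL=you@example.com
    USER_EMAIL_DOMAINS=example.com
    MAIL_SERVER=smtp.example.com
    MAIL_USERNAME=annotatechange@example.com
    MAIL_PASSWORD=<mail account password>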

Notes

This codebase is provided "as is". If you find any problems, please raise an issue on GitHub.

The code is licensed under the MIT License.

This code was written by Gertjan van den Burg with helpful comments provided by Chris Williams.

Some implementation details

Below are some thoughts that may help make sense of the codebase.

  • AnnotateChange is a web application built on the Flask framework. See this excellent tutorial for an introduction to Flask. The flask.sh shell script loads the appropriate environment variables and runs the application.

  • The application handles user management and is centered around the idea of a "task" which links a particular user to a particular time series to annotate.

  • An admin role is available, and the admin user can manually assign and delete tasks as well as add/delete users, datasets, etc. The admin user is created using the CLI (see the Getting Started documentation above).

  • All datasets must adhere to a specific dataset schema (see utils/dataset_schema.json). See the files in demo_data for examples, as well as those in TCPD.

  • Annotations are stored in the database using 0-based indexing. Tasks are assigned on the fly when a user requests a time series to annotate (see utils/tasks.py); a rough sketch of this assignment logic is given after this list.

  • Users can only begin annotating when they have successfully passed the introduction.

  • Configuration of the app is done through environment variables, see the .env.example file for an example.

  • Docker is used for deployment (see the deployment documentation in docs), and Traefik is used for SSL, etc.

  • The time series graph is plotted using d3.js.
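
To make the task assignment logic more concrete, here is a rough, hypothetical sketch of the kind of selection that utils/tasks.py performs: among the datasets a user has not yet annotated, pick one that is closest to (but still below) the desired number of annotations. The function and variable names below are made up for illustration and do not mirror the actual code.

    import random

    def assign_dataset(user_annotated_ids, annotation_counts, desired=5):
        """Pick a dataset id for a new annotation task (illustrative sketch only)."""
        # Never assign a dataset the user has already annotated, and skip
        # datasets that have already reached the desired number of annotations.
        candidates = [
            (dataset_id, count)
            for dataset_id, count in annotation_counts.items()
            if dataset_id not in user_annotated_ids and count < desired
        ]
        if not candidates:
            return None  # nothing left for this user to annotate

        # Prioritise datasets that are closest to the desired number of
        # annotations, so datasets get finished before new ones are started.
        best = max(count for _, count in candidates)
        top = [dataset_id for dataset_id, count in candidates if count == best]
        return random.choice(top)

For example, with annotation_counts = {"dataset_a": 4, "dataset_b": 1} and desired = 5, a user who has not yet annotated either dataset would be assigned "dataset_a", since it only needs one more annotation.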
