Active Testing: Sample-Efficient Model Evaluation

Hi, good to see you here! 👋

This is code for "Active Testing: Sample-Efficient Model Evaluation".

Please cite our paper if you find this helpful:

@article{kossen2021active,
  title={{A}ctive {T}esting: {S}ample-{E}fficient {M}odel {E}valuation},
  author={Kossen, Jannik and Farquhar, Sebastian and Gal, Yarin and Rainforth, Tom},
  journal={arXiv:2103.05331},
  year={2021}
}


Setup

The requirements.txt can be used to set up a Python environment for this codebase. You can do this, for example, with conda:

conda create -n isactive python=3.8
conda activate isactive
pip install -r requirements.txt

Reproducing the Experiments

  • To reproduce a figure from the paper, first run the appropriate experiments with
sh reproduce/experiments/figure-X.sh
  • Then create the plots with the Jupyter notebook at
notebooks/plots_paper.ipynb
  • (The notebook lets you conveniently select which plots to recreate.)
  • This should put the plots into notebooks/plots/.

  • In the above, replace X by

    • 123 for Figures 1, 2, 3
    • 4 for Figure 4
    • 5 for Figure 5
    • 6 for Figure 6
    • 7 for Figure 7
  • Other notes

    • Synthetic data experiments do not require GPUs and should run on pretty much all recent hardware.
    • All other plots, realistically speaking, require GPUs.
    • We are also happy to share a 4 GB file with results from all experiments presented in the paper.
    • You may also want to produce plots 7 and 8 for experiment setups other than those in the paper, i.e. for experiments you have already computed.
    • Some experiments, e.g. those for Figures 4 or 6, may run a really long time on a single GPU. It may be good to
      • execute the scripts in the sh-files in parallel on multiple GPUs.
      • start multiple runs in parallel and then combine experiments. (See below).
      • end the runs early / decrease the total number of runs (this can be very reasonable -- look at the config files in conf/paper to modify this setting)
    • If you want to understand the code, below we give a good strategy for approaching it. (Also start with synthetic data experiments. They have less complex code!)

Running A Custom Experiment

  • main.py is the main entry point into this code-base.

    • It executes a total of n_runs active testing experiments for a fixed setup.
    • Each experiment:
      • Trains (or loads) one main model.
      • This model can then be evaluated with a variety of acquisition strategies.
      • Risk estimates are then computed from the acquired points/weights of all acquisition strategies, for all risk estimators. (A toy illustration of this weighting idea follows below, after the code guide.)
  • This repository uses Hydra to manage configs. (A generic Hydra entry-point sketch also follows below.)

    • Look at conf/config.yaml or one of the experiments in conf/... for default configs and hyperparameters.
    • Experiments are autologged and results saved to ./output/.
  • See notebooks/explore_experiment.ipynb for example code on how to evaluate custom experiments. (A minimal loading sketch also follows below.)

    • The evaluations use activetesting.visualize.Visualiser which implements visualisation methods.
    • Give it a path to an experiment in output/path/to/experiment and explore the methods.
    • If you want to combine data from multiple runs, give it a list of paths.
    • I prefer to load this in Jupyter Notebooks, but hey, everybody's different.
  • A guide to the code

    • main.py runs repeated experiments and orchestrates the whole shebang.
      • It iterates through all n_runs and acquisition strategies.
    • experiment.py handles a single experiment.
      • It combines the model, dataset, acquisition strategy, and risk estimators.
    • datasets.py, acquisition.py, loss.py, risk_estimators.py all contain exactly what you would expect!
    • hoover.py is a logging module.
    • models/ contains all models, both scikit-learn and PyTorch.
      • In sk2torch.py we have some code that wraps torch models in a way that lets them be used as scikit-learn models from the outside.
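To make the points/weights step above concrete, here is a deliberately simplified, self-contained toy example of the weighting idea behind the risk estimators. It is not the code from risk_estimators.py: it samples with replacement from an acquisition proposal q and uses plain importance weights, whereas the estimators in the paper are more refined. All variable names are made up for illustration.

# Toy illustration only -- not the estimators implemented in risk_estimators.py.
import numpy as np
rng = np.random.default_rng(0)
N = 10_000                                      # size of the test pool
losses = rng.gamma(shape=2.0, size=N)           # stand-in for the per-point losses L(x_n)
true_risk = losses.mean()                       # the quantity we want to estimate
guess = losses + rng.normal(scale=1.0, size=N)  # noisy guess of the loss (the true loss is unknown in practice)
q = np.clip(guess, 1e-3, None)                  # acquisition proposal: prefer points with (guessed) high loss
q = q / q.sum()
M = 100                                         # labelling budget
idx = rng.choice(N, size=M, replace=True, p=q)  # actively acquired test points
naive = losses[idx].mean()                      # biased: over-represents high-loss points
weighted = (losses[idx] / (N * q[idx])).mean()  # importance weights 1/(N q_i) remove that bias
print(f"true {true_risk:.3f}  naive {naive:.3f}  weighted {weighted:.3f}")

Acquiring informative points and then correcting for how they were chosen is the trade-off that the acquisition strategies and risk estimators in this repository manage.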
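For orientation, this is roughly what a Hydra entry point looks like in general. It is a generic sketch, not a copy of main.py, and the actual option names are those defined in conf/config.yaml and the files under conf/.

# Generic Hydra usage sketch -- not the repository's actual main.py.
import hydra
from omegaconf import DictConfig, OmegaConf
@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig):
    print(OmegaConf.to_yaml(cfg))  # the fully resolved config for this run
    # ... build dataset, model, acquisition strategies, and risk estimators from cfg ...
if __name__ == "__main__":
    main()

Any config value can then be overridden from the command line in the usual Hydra way, e.g. python main.py some_option=value, which is how settings such as the number of runs can be changed without editing the YAML files.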
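Finally, a minimal sketch of loading finished results with the Visualiser mentioned above, assuming it is constructed from an experiment path (or a list of paths) as described; the paths below are placeholders, and the available methods are best discovered interactively.

# Minimal loading sketch; replace the placeholder paths with real ones from ./output/.
from activetesting.visualize import Visualiser
vis = Visualiser('output/path/to/experiment')           # a single experiment
# vis = Visualiser(['output/run_a', 'output/run_b'])    # or combine data from several runs
print([m for m in dir(vis) if not m.startswith('_')])   # list the available visualisation methods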

And Finally

Thanks for stopping by!

If you find anything wrong with the code, please contact us.

We are happy to answer any questions related to the code and project.
