An end-to-end machine learning library to directly optimize AUC loss

Overview

LibAUC

An end-to-end machine learning library for AUC optimization.

Why LibAUC?

Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. There are several benefits to maximizing the AUC score over minimizing standard losses such as cross-entropy.

  • In many domains, the AUC score is the default metric for evaluating and comparing methods. Directly maximizing the AUC score can therefore lead to the largest improvement in the model's measured performance.
  • Many real-world datasets are imbalanced. AUC is better suited to imbalanced data distributions, since maximizing AUC amounts to ranking the prediction score of every positive example above that of every negative example (see the sketch below).
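
This ranking view of AUC can be made concrete with a tiny example. The snippet below uses scikit-learn (not part of LibAUC) to check that the fraction of correctly ordered positive/negative pairs matches roc_auc_score on an imbalanced toy dataset:

import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 0, 0, 0, 0, 0, 0])            # imbalanced: 2 positives, 6 negatives
scores = np.array([0.9, 0.4, 0.8, 0.3, 0.2, 0.5, 0.1, 0.35])

# fraction of (positive, negative) pairs that are ranked correctly
pos, neg = scores[labels == 1], scores[labels == 0]
pairwise = np.mean([p > n for p in pos for n in neg])

print(pairwise, roc_auc_score(labels, scores))          # both are 10/12 ≈ 0.833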


Installation

$ pip install libauc

Usage

Official Tutorials:

  • 01. Creating Imbalanced Benchmark Datasets [Notebook][Script] (a library-agnostic sketch of the idea follows this list)
  • 02. Training ResNet20 with Imbalanced CIFAR10 [Notebook][Script]
  • 03. Training with PyTorch Learning Rate Scheduling [Notebook][Script]
  • 04. Training with Imbalanced Datasets on Distributed Setting [Coming soon]
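
Tutorial 01 covers building imbalanced benchmark datasets. As a rough, library-agnostic illustration of the idea (this is not the tutorial's code, and LibAUC's own dataset helpers may use a different API; make_imbalanced_binary is a hypothetical helper), one can binarize CIFAR10 and subsample the positive class:

import numpy as np
from torchvision import datasets

def make_imbalanced_binary(images, labels, imratio=0.1, seed=0):
    # map classes 0-4 to negative (0) and 5-9 to positive (1), then subsample
    # the positives so that the fraction of positives is roughly `imratio`
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    binary = (labels >= 5).astype(np.int64)
    pos_idx, neg_idx = np.where(binary == 1)[0], np.where(binary == 0)[0]
    n_pos = int(imratio / (1 - imratio) * len(neg_idx))
    keep = np.concatenate([neg_idx, rng.choice(pos_idx, size=n_pos, replace=False)])
    rng.shuffle(keep)
    return images[keep], binary[keep]

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
images, targets = make_imbalanced_binary(train_set.data, train_set.targets, imratio=0.1)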

Quickstart for beginners:

>>> #import library
>>> from libauc.losses import AUCMLoss
>>> from libauc.optimizers import PESG
...
>>> #define loss
>>> Loss = AUCMLoss(imratio=0.1)
>>> optimizer = PESG(imratio=0.1)
...
>>> #training
>>> model.train()
>>> for data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets)
...     optimizer.zero_grad()
...     loss.backward(retain_graph=True)
...     optimizer.step()
...
>>> #restart stage
>>> optimizer.update_regularizer()
...
>>> #evaluation
>>> model.eval()
>>> for data, targets in testloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
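
To report a test AUC from the evaluation loop above, a common pattern is to collect the predictions and score them with scikit-learn's roc_auc_score (not part of LibAUC). A minimal sketch, assuming the model outputs a single score per example as in the quickstart:

import torch
from sklearn.metrics import roc_auc_score

all_preds, all_targets = [], []
model.eval()
with torch.no_grad():
    for data, targets in testloader:
        preds = model(data.cuda())      # drop .cuda() on CPU-only machines
        all_preds.append(preds.cpu())
        all_targets.append(targets)

auc = roc_auc_score(torch.cat(all_targets).numpy().ravel(),
                    torch.cat(all_preds).numpy().ravel())
print(f"test AUC: {auc:.4f}")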

Please visit our website or GitHub repository for more examples.

Citation

If you find LibAUC useful in your work, please cite the following paper:

@article{yuan2020robust,
  title={Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
  author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
  journal={arXiv preprint arXiv:2012.03173},
  year={2020}
}

Contact

If you have any questions, please contact Zhuoning Yuan [[email protected]] and Tianbao Yang [[email protected]], or open a new issue on GitHub.

Comments
  • Only compatible with Nvidia GPU

    I tried running the example tutorial but I got the following error:

        AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

    opened by Beckham45 2
  • Extend to Multi-class Classification Task and Be compatible with PyTorch scheduler

    Hi Zhuoning,

    This is interesting work! I am wondering if the DAM method can be extended to a multi-class classification task with long-tailed imbalanced data. Intuitively, this should be possible, as the well-known sklearn tool provides an AUC score for the multi-class setting using the one-versus-rest or one-versus-one technique.

    Besides, it seems that optimizer.update_regularizer() is called only when the learning rate is reduced, so it would be more elegant to incorporate this function call into a PyTorch LR scheduler, e.g.,

    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
    scheduler.step()    # override the step to fulfill: optimizer.update_regularizer()
    
    

    For the current libauc version, the PESG optimizer is not compatible with the schedulers in torch.optim.lr_scheduler. It would be great if this feature could be supported in the future.

    Thanks for your work!

    opened by Cogito2012 2
  • When to use retain_graph=True?

    Hi,

    When to use retain_graph=True in the loss backward function?

    In two of the examples (2 and 4) it is True, but not in the others.

    I appreciate your time.

    opened by dfrahman 1
  • Using AUCMLoss with imratio>1

    I'm not very familiar with the maths in the paper, so please forgive me if I'm asking something obvious.

    The AUCMLoss uses the "imbalance ratio" between positive and negative samples. The ratio is defined as

    the ratio of # of positive examples to the # of negative examples

    Or imratio=#pos/#neg

    When #pos<#neg, imratio is some value between 0 and 1, but when #pos>#neg, imratio>1.

    Will this break the loss calculations? I have a feeling it would invalidate the many 1-self.p calculations in the LibAUC implementation, but as I'm not familiar with the maths I can't say for sure.

    Also, is there a problem (mathematically speaking) with calculating imratio=#pos/#total_samples, to avoid the problem above? When #pos<<#neg, #neg approximates #total_samples.

    opened by ayhyap 1
  • AUCMLoss does not use margin argument

    I noticed in the AUCMLoss class that the margin argument is not used. Following the formulation in the paper, the forward function should be changed in line 20 from 2*self.alpha*(self.p*(1-self.p) + \ to 2*self.alpha*(self.p*(1-self.p)*self.margin + \

    opened by ayhyap 1
  • How to train multi-label classification tasks? (like chexpert)

    I have started using this library and I've read your paper Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification, and I'm still not sure how to train a multi-label classification (MLC) model.

    Specifically, how did you fine-tune for the Chexpert multi-label classification task? (i.e. classify 5 diseases, where each image may have presence of 0, 1 or more diseases)

    • The first step pre-training with Cross-entropy loss seems clear to me
    • You mention: "In the second step of AUC maximization, we replace the last classifier layer trained in the first step by random weights and use our DAM method to optimize the last classifier layer and all previous layers.". The new classifier layer is a single or multi-label classifier?
    • In the Appendix I, figure 7 shows only one score as output for Deep AUC maximization (i.e. only one disease)
    • In the code, both AUCMLoss() and APLoss_SH() receive single-label outputs, not multi-label outputs, apparently

    How do you train for the 5 diseases? Train sequentially Cardiomegaly, then Edema, and so on? or with 5 losses added up? or something else?

    opened by pdpino 4
  • Example for tensorflow

    Thank you for the great library. Does it currently support TensorFlow? If so, could you provide an example of how it can be used with TensorFlow? Thank you very much.

    opened by Kokkini 1
Releases (1.1.4)
  • 1.1.4(Jul 26, 2021)

    What's New

    • Added a PyTorch dataloader for the CheXpert dataset. A tutorial for training on CheXpert is available here.
    • Added support for training the AUC loss on CPU machines. Note: please remove the lines with .cuda() from the code.
    • Fixed some bugs and improved training stability
  • 1.1.3(Jun 16, 2021)

  • 1.1.2(Jun 14, 2021)

    What's New

    1. Added the SOAP optimizer, contributed by @qiqi-helloworld and @yzhuoning, for optimizing AUPRC. Please check the tutorial here.
    2. Updated ResNet18 and ResNet34 with models pretrained on ImageNet1K
    3. Added a new strategy for the AUCM loss: imratio is calculated over a mini-batch if no initial value is given
    4. Fixed some bugs and improved training stability
  • V1.1.0(May 10, 2021)

    What's New:

    • Fixed some bugs and improved training stability
    • Changed the default setting in the loss function: binary labels are expected to be 0 and 1
    • Added PyTorch dataloaders for CIFAR10, CIFAR100, CAT_vs_Dog, STL10
    • Enabled training DAM with PyTorch learning rate schedulers, e.g., ReduceLROnPlateau and CosineAnnealingLR