DIVA-DAF

Description

A deep learning framework for historical document image analysis.

How to run

Install dependencies

# clone project
git clone https://github.com/DIVA-DIA/unsupervised_learning.git
cd unsupervised_learning

# create conda environment (IMPORTANT: needs Python 3.8+)
conda env create -f conda_env_gpu.yaml

# activate the environment using .autoenv
source .autoenv

# install requirements
pip install -r requirements.txt

Train the model with the default configuration. Care: you need to change the value of data_dir in config/datamodule/cb55_10_cropped_datamodule.yaml so that it points to your dataset.
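
For example, the relevant entry in that file might look like this (a sketch assuming data_dir is a top-level key of the datamodule config; the path is a placeholder):

# config/datamodule/cb55_10_cropped_datamodule.yaml (excerpt)
data_dir: /path/to/your/dataset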

# default run based on config/config.yaml
python run.py

# train on CPU
python run.py trainer.gpus=0

# train on GPU
python run.py trainer.gpus=1

Train using GPU

# [default] train on all available GPUs
python run.py trainer.gpus=-1

# train on one GPU
python run.py trainer.gpus=1

# train on two GPUs
python run.py trainer.gpus=2

# train on CPU
python run.py trainer.accelerator=ddp_cpu

Train using CPU for debugging

# train on CPU
python run.py trainer.accelerator=ddp_cpu trainer.precision=32

Train model with chosen experiment configuration from configs/experiment/

python run.py +experiment=experiment_name

You can override any parameter from the command line like this:

python run.py trainer.max_epochs=20 datamodule.batch_size=64

Setup PyCharm

  1. Fork this repo
  2. Clone the repo to your local filesystem (git clone CLONELINK)
  3. Clone the repo onto your remote machine
  4. Move into the folder on your remote machine and create the conda environment (conda env create -f conda_env_gpu.yaml)
  5. Run source .autoenv in the root folder on your remote machine (activates the environment)
  6. Open the folder in PyCharm (File -> open)
  7. Add the interpreter (Preferences -> Project -> Python interpreter -> top left gear icon -> add... -> SSH Interpreter) follow the instructions (set the correct mapping to enable deployment)
  8. Upload the files (deployment)
  9. Create a wandb account (wandb.ai)
  10. Log via ssh onto your remote machine
  11. Go to the root folder of the framework and activate the environment (source .autoenv OR conda activate unsupervised_learning)
  12. Log into wandb. Execute wandb login and follow the instructions
  13. Now you should be able to run the basic experiment from PyCharm

Loading models

You can load the individual model parts (backbone or header) as well as the whole task. To load the backbone or the header, you need to add the field path_to_weights to your experiment config, e.g.:

model:
    header:
        path_to_weights: /my/path/to/the/pth/file

To load the whole task, you need to provide the path to the task checkpoint to the trainer via the field resume_from_checkpoint, e.g.:

trainer:
    resume_from_checkpoint: /path/to/.ckpt/file
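
Both fields can presumably also be set from the command line as Hydra overrides (paths are placeholders; the + prefix is only needed for keys that do not yet exist in your config):

python run.py +model.header.path_to_weights=/my/path/to/the/pth/file
python run.py trainer.resume_from_checkpoint=/path/to/file.ckpt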

Freezing model parts

You can freeze either part of the model (backbone or header) with the freeze flag in the config. E.g., to freeze the backbone from the command line:

python run.py +model.backbone.freeze=True

In the config (e.g. model/backbone/baby_unet_model.yaml):

...
freeze: True
...
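
Under the hood, freezing a model part presumably amounts to disabling gradients for its parameters; a minimal sketch:

# disable gradients for every parameter of the frozen part
for param in model.backbone.parameters():
    param.requires_grad = False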

CARE: You cannot train a model that has no trainable parameters (e.g. when freezing both backbone and header).

Selection in datasets

If you use the selection key, you can either pass an int, which takes the first n files, or a list of strings to filter the different datasets. If you are using a full-page dataset, be aware that the selection list contains file names without the extension.
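
A sketch of both variants (the exact nesting depends on your datamodule config; file names are placeholders):

# take the first 5 files
selection: 5

# or: filter by file name (no extension for full-page datasets)
selection:
    - page_001
    - page_002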

Cite us

@misc{vögtlin2022divadaf,
      title={DIVA-DAF: A Deep Learning Framework for Historical Document Image Analysis}, 
      author={Lars Vögtlin and Paul Maergner and Rolf Ingold},
      year={2022},
      eprint={2201.08295},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • Not working with ddp_cpu

    Describe the bug If we want to run the framework with ddp_cpu as accelerator, it won't work because of a working-directory problem.

    To Reproduce python run.py trainer.accelerator='ddp_cpu' trainer.precision=32

    Expected behavior We can use ddp_cpu to debug our system

    Additional context To avoid this problem at the moment we can just use the full path to the run.py file ($PWD/run.py).
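
    The workaround spelled out as a command:

    python $PWD/run.py trainer.accelerator=ddp_cpu trainer.precision=32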

    Checklist

    • [ ] Add a warning if ddp_cpu is used and precision is not 32
    bug If time Pipeline 
    opened by lvoegtlin 3
  • Use deepspeed to speed up the training

    Is your feature request related to a problem? Please describe. To accelerate the training we could use the deepspeed plugin

    Describe the solution you'd like Make it possible to activate deepspeed through the config

    Checklist

    • [x] Test deepspeed
    • [ ] Include it into the config system
    wontfix If time Pipeline 
    opened by lvoegtlin 3
  • Load model checkpoint instead of default init

    Differentiate between "train", "test", and "train and test". Already started with two parameters, train and test, to define which part of the process should be run. Need to include loading from a checkpoint for fine-tuning or just testing.

    https://pytorch-lightning.readthedocs.io/en/stable/common/weights_loading.html


    Possibly weights_only would work.

    We need to make our own callback which inherits from ModelCheckpoint and overrides/adds saving of just the model checkpoint (https://github.com/PyTorchLightning/pytorch-lightning/blob/bca5adf6de1ae74c7103839aac54c8648464bee6/pytorch_lightning/callbacks/model_checkpoint.py#L485)
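
    Note that the built-in callback already exposes a save_weights_only flag, which may cover the weights_only idea above; a minimal sketch (PyTorch Lightning 1.x API assumed):

    from pytorch_lightning.callbacks import ModelCheckpoint

    # save only the model weights, skipping optimizer and trainer state
    checkpoint_cb = ModelCheckpoint(dirpath="checkpoints/", save_weights_only=True)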

    Checklist

    • [x] test check if path_to_weights is set
    • [x] load model state from path
    • [x] create a generic model which takes an encoder and a header (configs)
    • [x] #15
    • [x] save model with a callback (create callback)
    • [x] if we are just testing we need a path_to_weights for both
    Important Module Pipeline 
    opened by lvoegtlin 3
  • Updating dependencies

    Description

    Updates PL, torchmetrics, and pytest to the newest versions. Also introduces code coverage with SonarCloud. Each PR will now be checked for test coverage.

    How to Test/Run?

    pytest

    opened by lvoegtlin 2
  • Fixed problem with multiple empty folders in checkpoints

    Description

    The checkpoint callback created the checkpoints in a dedicated epoch folder. The folder should get deleted once it no longer holds the best checkpoint, but this also did not work with the built-in version of the model checkpoint callback. Solved it by doing a clean-up at the end of the experiment.

    How to Test/Run?

    python run.py trainer.max_epochs=20


    opened by lvoegtlin 2
  • Feature/datamodule for gif imgs

    Description

    A datamodule that takes advantage of the indexed image format. It no longer determines the classes by color but takes the classes directly from the raw image and uses the palette as class encoding.

    How to Test/Run?

    pytest or python run.py experiment=development_baby_unet_indexed.yaml

    opened by lvoegtlin 2
  • DDP metric bias

    Is your feature request related to a problem? Please describe. When running an experiment with DDP, we have a little data bias if the dataset size is not divisible by batch_size * num_processors. To make the users aware of this problem, we can add a warning if num_samples % (batch_size * num_processors) != 0. Problem described here

    Describe the solution you'd like Raising an error if the condition from above is not met. Also, add a flag to ignore this error (ignore_ddp_bias).
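
    A minimal sketch of the proposed check (function and argument names are hypothetical):

    import warnings

    def check_ddp_bias(num_samples, batch_size, num_processors, ignore_ddp_bias=False):
        # warn or raise when DDP would duplicate samples and bias the metrics
        if num_samples % (batch_size * num_processors) != 0:
            msg = "dataset size is not divisible by batch_size * num_processors"
            if ignore_ddp_bias:
                warnings.warn(msg)
            else:
                raise ValueError(msg)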

    Describe alternatives you've considered Solve it with the DDP join function from PyTorch, but it is very hard to hack that into PL.

    Checklist

    • [x] Create check and warning
    • [x] Add shuffle and drop_last_batch options to datamodule config
    • [x] Add shuffle/drop_last_batch to default config files
    enhancement Pipeline 
    opened by lvoegtlin 2
  • Add the strict parameter to make it possible to load non-fitting models

    Describe the feature

    Make it possible to transfer weights between similar models

    Describe the solution you'd like

    A parameter strict in the models which defines how to load the weights when the model does not exactly fit the weights file.
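
    PyTorch's load_state_dict already supports non-strict loading and reports the mismatched keys; a minimal sketch (model is assumed to be an instantiated module, the path is a placeholder):

    import torch

    state_dict = torch.load("/my/path/to/the/pth/file", map_location="cpu")
    result = model.load_state_dict(state_dict, strict=False)
    print("Missing keys:", result.missing_keys)
    print("Unexpected keys:", result.unexpected_keys)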

    Checklist

    • [x] Add this parameter in the model config
    • [x] Use it to load the model
    • [x] Add log for missed/unexpected keys
    If time Module Pipeline 
    opened by lvoegtlin 2
  • Loss function as config

    Is your feature request related to a problem? Please describe. Make it possible to define the loss function in the config.

    Describe the solution you'd like Define some default loss functions and create a config for each of them. Then hand over the criterion object to the task at the beginning of the training.
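
    A sketch of what such a loss config could look like in this Hydra-based setup (the file location and config group are assumptions; _target_ is Hydra's instantiation key):

    # configs/loss/crossentropy.yaml (hypothetical path)
    _target_: torch.nn.CrossEntropyLoss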

    Checklist

    • [ ] define 4 basic losses (Xentropy, L1, MSE, BCE)
    • [ ] create configs
    • [ ] hand over the loss function as a parameter to the task
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Specify metric via callback

    Is your feature request related to a problem? Please describe. To make the system more flexible, we have to implement the metrics as callbacks s.t. we can combine multiple metrics and also reuse them in other tasks.

    Describe the solution you'd like Implement mIoU (jar fashion), precision, recall, and accuracy as metric callbacks. Call the metrics at the end of the steps (see). Also make sure that, when testing with DDP, we run it on just one GPU or with join (documentation of join) (look here or here).
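
    A sketch of one metric as a callback (PyTorch Lightning 1.x hooks and a pre-0.11 torchmetrics API assumed; the step output format is hypothetical):

    import pytorch_lightning as pl
    import torchmetrics

    class AccuracyCallback(pl.Callback):
        def __init__(self, num_classes):
            self.metric = torchmetrics.Accuracy(num_classes=num_classes)

        def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
            # assumes the validation step returns a dict with the predictions
            self.metric.update(outputs["preds"].cpu(), batch[1].cpu())

        def on_validation_epoch_end(self, trainer, pl_module):
            pl_module.log("val/accuracy", self.metric.compute())
            self.metric.reset()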

    Checklist

    • [x] Implement DIVA HisDB metric class (our metric)
    • [x] Metric which is exactly like the jar
    • [x] Create config for mIoU
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Feature/add fcn

    Description

    UNet now has a swappable classifier. This makes working with it much easier, as we can easily fine-tune it on a dataset with more or fewer classes.

    How to Test/Run?

    pytest or python run.py

    opened by lvoegtlin 1
  • Training/validation and test time

    Is your feature request related to a problem? Please describe. Get the exact time for the training (incl. validation) and the testing in seconds. This can be reported overall as well as for an epoch. The setup time of the framework should be excluded.

    Describe the solution you'd like Log these times into the used loggers and report them in the experiment summary file.
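
    PyTorch Lightning ships a Timer callback that may already cover this (PL >= 1.3 assumed); a minimal sketch:

    from pytorch_lightning.callbacks import Timer

    timer = Timer()
    # pass the callback to the Trainer; after trainer.fit(...) query e.g.
    # timer.time_elapsed("train"), timer.time_elapsed("validate"), timer.time_elapsed("test")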

    Checklist

    • [ ] Check if PL already provides such a feature
    • [ ] Create timers for the different phases
    • [ ] Report these times
    • [ ] Test
    • [ ] PR
    opened by lvoegtlin 1
  • More complex return

    Is your feature request related to a problem? Please describe. Let the framework return more information, like the best model path, metrics, etc., as a dictionary, s.t. calling files can chain together multiple framework runs.

    Describe the solution you'd like With a dictionary

    Checklist

    • [ ] Check what return information is needed
    • [ ] Add it to the execution class
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Rework the backbone header model

    Is your feature request related to a problem? Please describe. Think about the current BackboneHeader model and try to adapt it to the new needs. Possibly change it to a new model.

    Checklist

    • [ ] Evaluate the existing model with the new needs
    • [ ] Think about solutions
    • [ ] Prototype the solutions
    • [ ] Implementation (models, workflow, callbacks)
    • [ ] Config adaption
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Test if possible conf_mat from base_task into a callback

    Is your feature request related to a problem? Please describe. The problem before with the conf mat callback was that it had a semaphore leak. As described here (https://github.com/ashleve/lightning-hydra-template/issues/189#issuecomment-1003532448), it should work now with the usage of torchmetrics.
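
    A sketch of the torchmetrics-based confusion matrix (pre-0.11 torchmetrics API assumed; num_classes is a placeholder); its state is synced across DDP processes, which should avoid the leak:

    import torchmetrics

    confmat = torchmetrics.ConfusionMatrix(num_classes=4)
    # inside each step: confmat.update(preds, targets)
    # at epoch end:     matrix = confmat.compute(); confmat.reset()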

    Checklist

    • [ ] Factor the conf mat log into callback
    • [ ] Extensive testing
    • [ ] Tests
    • [ ] PR
    enhancement Config 
    opened by lvoegtlin 0
  • Update hydra to 1.2

    Is your feature request related to a problem? Please describe. Update hydra to the newest version

    Checklist

    • [ ] update
    • [ ] adapt code
    • [ ] test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
  • Hyperparameter optimization

    Is your feature request related to a problem? Please describe. Create a possibility to do hyperparameter optimization with the framework

    Checklist

    • [ ] Check out which one works best
    • [ ] integrate it or use it as a script
    • [ ] Test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
Releases
  • version_0.2.2(Jun 24, 2022)

    What's Changed

    • Experiment for rotnet with unet backbone by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/101
    • Created additional tests by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/100
    • Updated the version on PL to 1.5.10 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/112
    • Added tests for RolfFormat datamodule and RGB takes by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/114
    • Release 0.2.2 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/113

    Full Changelog: https://github.com/DIVA-DIA/DIVA-DAF/compare/version_0.2.1...version_0.2.2

  • version_0.2.1(Dec 2, 2021)

    What's Changed

    • Fixed selection parameter, removed todos, improved print_config, added self to configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/87
    • Added tests for tasks and fixed merge scripts by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/89
    • New log folder structure by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/91
    • Replacing numpy with torch in divahisdb functional by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/93
    • Rename config saved during a run, and print commands to rerun a run by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/95
    • Release 0.2.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/98

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.2.0...version_0.2.1

  • version_0.2.0(Nov 25, 2021)

    Some new things

    • new architectures (resnet)
    • new datamodules (rolf format, RGB, full-page, and SSL)
    • different bug fixes
    • experiment configs
    • refactoring and deletion of unused code
    • callback to check the compatibility of backbone and header
    • inference/prediction stage (list of files with regex)
    • freezing header or backbone
    • improved readme
    • improved testing

    What's Changed

    • Dev data refactoring by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/74
    • Dev rgb encoding by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/76
    • RotNet by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/75
    • log more by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/77
    • More architectures by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/78
    • Dev fixing tests by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/79
    • Created resnet FCN header by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/83
    • Dev rolf data format by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/84
    • Introduce inference/prediction and refactoring by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/85
    • release 0.2.0 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/86

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.1...version_0.2.0

  • version_0.1.1(Oct 22, 2021)

    Changelog:

    • fixed conf mat
    • optimized test and validation step
    • improved merging of crops
    • more metrics and optimizers
    • updated requirements

    What's Changed

    • made tests running also in the terminal by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/60
    • fixed evaluation tool problem by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/62
    • adding new optimiser configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/64
    • removed unused dependency by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/65
    • Dev improve datamodule tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/66
    • Dev fixing conf and f1 heatmap by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/68
    • :art: each worker of the dl gets now an own seed by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/69
    • Dev reduce gpu memory by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/71
    • upload run config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/72
    • release version 0.1.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/73

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.0...version_0.1.1

  • version_0.1.0(Oct 6, 2021)

    The first version of the framework

    What's Changed

    • Dev 38 create hydra configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/1
    • Dev 47 better logger name by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/3
    • Dev 43 configurable optimizers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/2
    • Dev 44 load model checkpoint by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/16
    • dev synced metric logging by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/17
    • When DDP num_workers = 0 was forced by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/19
    • Resolve ddp warning by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/20
    • Add strict parameter by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/21
    • Config refinement by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/23
    • Save config file for each run by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/28
    • add env by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/29
    • Dev 25 torchmetric introduction by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/30
    • Removed custom hydra config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/32
    • Dev 24 abstract task class by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/33
    • Dev 26 loading warning improvements by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/34
    • update pl to 1.4.4 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/36
    • Loss functions as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/37
    • ddp cpu not working by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/39
    • Dev shuffle data option by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/44
    • Dev dataset selected pages by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/49
    • Dev 9 metric as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/47
    • Fix conf mat and extend by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/51
    • Save metrics to csv by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/52
    • Check backbone header compatibility by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/53
    • abstract datamodule and resolvers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/56
    • Dev refactoring and tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/57
    • Dev 34 refactoring semantic segmentation by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/58
    • Version 0.1.0 of the fw by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/59

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/commits/version_0.1.0
