STOIC2021 Baseline Algorithm

Baseline inference algorithm for the STOIC2021 challenge.

This codebase contains an example submission for the STOIC2021 COVID-19 AI Challenge. As a baseline algorithm, it implements a simple evaluation pipeline for an I3D model that was trained on the STOIC2021 training data. You can use this repo as a template for your submission to the Qualification phase of the STOIC2021 challenge.

If something does not work for you, please do not hesitate to contact us or add a post in the forum. If the problem is related to the code of this repository, please create a new issue on GitHub.

Before implementing your own algorithm with this template, we recommend first uploading a grand-challenge.org Algorithm based on the unaltered template by following the steps below.

Afterwards, you can easily implement your own algorithm by altering this template and updating the Algorithm you created on grand-challenge.org.

Prerequisites

We recommend using this repository on Linux. If you are using Windows, we recommend installing Windows Subsystem for Linux (WSL). Please watch the official tutorial by Microsoft for installing WSL 2 with GPU support.

  • Have Docker installed (a quick sanity check is shown below this list).
  • Have an account on grand-challenge.org and make sure that you are a verified user there.
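
As a quick sanity check that Docker itself works (a generic check, not specific to this repository), you can run:

docker run --rm hello-world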

Building, testing, and exporting your container

Building

To test if your system is set up correctly, you can run ./build.sh (Linux) or ./build.bat (Windows), which simply implements this command:

docker build -t stoicalgorithm .
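
If the build completes without errors, the stoicalgorithm image should show up in your local image list; for example, you can check with:

docker images stoicalgorithm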

Please note that the next step (testing the container) also runs a build, so this step is not necessary if you are certain that everything is set up correctly.

Testing

To test if the docker container works as expected, test.sh/test.bat will build the container and run it on images provided in the ./test/ folder. It will then check the results (.json files produced by your algorithm) against the .json files in ./test/.

If the tests run successfully, you will see Tests successfully passed....

Note: If you do not have a GPU available on your system, remove the --gpus all flag in test.sh/test.bat to run the test.

Note: Once you have implemented your own algorithm using this template, please update the .json files in ./test/ according to the output of your algorithm before running test.sh/test.bat.

Exporting

Run export.sh/export.bat to save the docker image to ./STOICAlgorithm.tar.gz. This script runs build.sh/build.bat as well as the following command:

docker save stoicalgorithm | gzip -c > STOICAlgorithm.tar.gz
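
If you want to double-check that the exported archive can be imported again, docker load accepts the gzipped file directly (this is an optional verification step, not part of the submission workflow):

docker load -i STOICAlgorithm.tar.gz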

Creating an Algorithm on grand-challenge.org

After building, testing, and exporting your container, you are ready to create an Algorithm on grand-challenge.org. Note that there is no need to alter the algorithm implemented in this baseline repository to start this step. Once you have created an Algorithm on grand-challenge.org, you can later upload new docker containers to that same Algorithm as many times as you wish.

You can create an Algorithm on grand-challenge.org. Some important fields are:

  • Please choose a Title and Description for your algorithm;
  • Enter CT at Modalities and Lung (Thorax) at Structures;
  • Select a logo to represent your algorithm (preferably square image);
  • For the interfaces of the algorithm, please select CT Image as Inputs, and as Outputs select both Probability COVID-19 and Probability Severe COVID-19;
  • Choose Viewer CIRRUS Core (Public) as a Workstation;
  • At the bottom of the page, indicate that you would like your Docker image to use GPU and how much memory it needs.

After filling in the form, click the "Save" button at the bottom of the page to create your Algorithm.

Uploading your container to your Algorithm

Uploading manually

You have now built, tested, and exported your container and created an Algorithm on grand-challenge.org. To upload your container to your Algorithm, go to "Containers" on the page for your Algorithm on grand-challenge.org. Click on the "upload a Container" button and upload your .tar.gz file. You can later update your container by uploading a new .tar.gz file.

Linking a GitHub repo

Instead of uploading the .tar.gz file directly, you can also link your GitHub repo. Once your repo is linked, grand-challenge.org will automatically build the docker image for you, and add the updated container to your Algorithm.

  • First, click "Link Github Repo". You will then see a dropdown box, where your Github repo is listed only if it has the Grand-Challenge app already installed. Usually this is not the case to begin with, so you should click on "link a new Github Repo". This will guide you through the installation of the Grand-challenge app in your repository.
  • After the installation of the app in your repository is complete you should be automatically returned to the Grand Challenge page, where you will find your repository now in the dropdown list (In the case you are not automatically returned to the same page you can find your algorithm and click "Link Github Repo" again). Select your repository from the dropdown list and click "Save".
  • Finally, you need to tag your repository, this will trigger Grand-Challenge to start building the docker container.

Make sure your container is Active

Please note that it can take a while until the container becomes active (the status will change from "Ready: False" to "Active") after uploading it or after linking your Github repo. Check back later or refresh the page after some time.

Submitting to the STOIC2021 Qualification phase

With your Algorithm online, you are ready to submit to the STOIC2021 Qualification Leaderboard. On https://stoic2021.grand-challenge.org/, navigate to the "Submit" tab, then to the "Qualification" tab, and select your Algorithm from the dropdown list. You can optionally leave a comment with your submission.

Note that, depending on the availability of compute nodes on grand-challenge.org, it may take some time before the evaluation of your Algorithm finishes and its results can be found on the Leaderboard.

Implementing your own algorithm

You can implement your own solution by editing the predict function in ./process.py. Any additional imported packages should be added to ./requirements.txt, and any additional files and folders you add should be explicitly copied in the ./Dockerfile. See ./requirements.txt and ./Dockerfile for examples. To update your algorithm, you can simply test and export your new Docker container, after which you can upload it to your Algorithm. Once your new container is Active, you can resubmit your Algorithm.
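
For example, on Linux a typical update cycle with the scripts described above could look like this (after editing ./process.py, ./requirements.txt, and ./Dockerfile as needed, and after updating the .json files in ./test/ to match your algorithm's output):

./test.sh    # rebuilds the container and compares its output to the .json files in ./test/
./export.sh  # writes the updated image to ./STOICAlgorithm.tar.gz

You can then upload the new ./STOICAlgorithm.tar.gz to your Algorithm as described above.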

Please note that your container will not have access to the internet when executing on grand-challenge.org, so all model weights must be present in your container image. You can test this locally using the --network=none option of docker run.
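
For example, a local run without network access could look like this, mirroring the run command in the tip below (create the ./results folder first, as described there):

docker run --rm --memory=11g --network=none -v $(pwd)/test:/input/ -v $(pwd)/results:/output/ stoicalgorithm

If your algorithm still produces its output .json files with this flag set, it does not rely on downloading anything at runtime.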

Good luck with the STOIC2021 COVID-19 AI Challenge!

Tip: Running your algorithm on a test folder:

Once you have validated that the algorithm works as expected in the Testing step, you might want to simply run the algorithm on the test folder and check the output .json files for yourself. If you are on a native Linux system, you will need to create a results folder that the docker container can write to, as follows (WSL users can skip this step):

mkdir ./results
chmod 777 ./results

To write the output of the algorithm to the results folder use the following command:

docker run --rm --memory=11g -v $(pwd)/test:/input/ -v $(pwd)/results:/output/ stoicalgorithm