Collection of Docker images for ML/DL and video processing projects

Overview

Collection of Docker images for ML/DL and video processing projects.

Overview of images

Three image types are available, distinguished by tag postfix:

  • base: Python with ML and CV packages, CUDA (11.4.2), cuDNN (8.2.4), FFmpeg (4.4) with NVENC support
  • pytorch: PyTorch (1.10.0-rc1), torchvision (0.10.1), torchaudio (0.9.1) and torch based libraries
  • tensor-stream: Tensor Stream for real-time video streams decoding on GPU
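
Every image tag combines a release in YY.MM format with one of these postfixes; the 21.09 release, for example, is published as (tags taken from the Versions section below):

ghcr.io/osai-ai/dokai:21.09-base
ghcr.io/osai-ai/dokai:21.09-pytorch
ghcr.io/osai-ai/dokai:21.09-tensor-stream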

Example

Pull an image

docker pull ghcr.io/osai-ai/dokai:21.09-pytorch

Docker Hub mirror

docker pull osaiai/dokai:21.09-pytorch

Check available GPUs inside the container

docker run --rm \
    --gpus=all \
    ghcr.io/osai-ai/dokai:21.09-pytorch \
    nvidia-smi
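
To verify that PyTorch inside the container also sees the GPU, a minimal check (a sketch using the same image) might be:

docker run --rm \
    --gpus=all \
    ghcr.io/osai-ai/dokai:21.09-pytorch \
    python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"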

An example of using a dokai image for a DL pipeline can be found here.
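
For day-to-day work, a typical development run mounts the project directory and uses the host IPC namespace so that PyTorch DataLoader workers get enough shared memory; the /workspace mount point and the --ipc=host flag below are illustrative choices rather than requirements of the image:

docker run --rm -it \
    --gpus=all \
    --ipc=host \
    -v "$(pwd)":/workspace \
    -w /workspace \
    ghcr.io/osai-ai/dokai:21.09-pytorch \
    bash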

Versions

base

dokai:20.09-base

ghcr.io/osai-ai/dokai:20.09-base

FFmpeg (release/4.3), nv-codec-headers (sdk/9.1)
Python (3.6.9)

pip==20.2.3
setuptools==50.3.0
packaging==20.4
numpy==1.19.2
opencv-python==4.4.0.42
scipy==1.5.2
matplotlib==3.3.2
pandas==1.1.2
notebook==6.1.4
scikit-learn==0.23.2
scikit-image==0.17.2
albumentations==0.4.6
Cython==0.29.21
Pillow==7.2.0
trafaret-config==2.0.2
pyzmq==19.0.2
librosa==0.8.0
psutil==5.7.2
dataclasses==0.7

dokai:20.10-base

ghcr.io/osai-ai/dokai:20.10-base

FFmpeg (release/4.3), nv-codec-headers (sdk/9.1)
Python (3.6.9)

pip==20.2.4
setuptools==50.3.2
packaging==20.4
numpy==1.19.2
opencv-python==4.4.0.44
scipy==1.5.3
matplotlib==3.3.2
pandas==1.1.3
notebook==6.1.4
scikit-learn==0.23.2
scikit-image==0.17.2
albumentations==0.5.0
Cython==0.29.21
Pillow==8.0.0
trafaret-config==2.0.2
pyzmq==19.0.2
librosa==0.8.0
psutil==5.7.2
dataclasses==0.7
pydantic==1.6.1
requests==2.24.0

dokai:20.12-base

ghcr.io/osai-ai/dokai:20.12-base

CUDA (11.1), cuDNN (8.0.5)
FFmpeg (release/4.3), nv-codec-headers (sdk/9.1)
Python (3.8.5)

pip==20.3.3
setuptools==51.0.0
packaging==20.8
numpy==1.19.4
opencv-python==4.4.0.46
scipy==1.5.4
matplotlib==3.3.3
pandas==1.1.5
notebook==6.1.5
scikit-learn==0.23.2
scikit-image==0.18.0
albumentations==0.5.2
Cython==0.29.21
Pillow==8.0.1
trafaret-config==2.0.2
pyzmq==20.0.0
librosa==0.8.0
psutil==5.8.0
pydantic==1.7.3
requests==2.25.1

dokai:21.01-base

ghcr.io/osai-ai/dokai:21.01-base

CUDA (11.1.1), cuDNN (8.0.5)
FFmpeg (release/4.3), nv-codec-headers (sdk/10.0)
Python (3.8.5)

pip==20.3.3
setuptools==51.3.3
packaging==20.8
numpy==1.19.5
opencv-python==4.5.1.48
scipy==1.6.0
matplotlib==3.3.3
pandas==1.2.0
notebook==6.2.0
scikit-learn==0.24.1
scikit-image==0.18.1
albumentations==0.5.2
Cython==0.29.21
Pillow==8.1.0
trafaret-config==2.0.2
pyzmq==21.0.1
librosa==0.8.0
psutil==5.8.0
pydantic==1.7.3
requests==2.25.1

dokai:21.02-base

ghcr.io/osai-ai/dokai:21.02-base

CUDA (11.2.1), cuDNN (8.1.0)
FFmpeg (release/4.3), nv-codec-headers (sdk/10.0)
Python (3.8.5)

pip==21.0.1
setuptools==53.0.0
packaging==20.9
numpy==1.20.1
opencv-python==4.5.1.48
scipy==1.6.1
matplotlib==3.3.4
pandas==1.2.2
scikit-learn==0.24.1
scikit-image==0.18.1
Pillow==8.1.0
librosa==0.8.0
albumentations==0.5.2
pyzmq==22.0.3
Cython==0.29.22
numba==0.52.0
requests==2.25.1
psutil==5.8.0
trafaret-config==2.0.2
pydantic==1.7.3
PyYAML==5.4.1
notebook==6.2.0
ipywidgets==7.6.3
tqdm==4.57.0
pytest==6.2.2
mypy==0.812
flake8==3.8.4

dokai:21.03-base

ghcr.io/osai-ai/dokai:21.03-base

CUDA (11.2.2), cuDNN (8.1.1)
FFmpeg (release/4.4), nv-codec-headers (sdk/10.0)
Python (3.8.5)

pip==21.0.1
setuptools==54.2.0
packaging==20.9
numpy==1.20.1
opencv-python==4.5.1.48
scipy==1.6.1
matplotlib==3.3.4
pandas==1.2.3
scikit-learn==0.24.1
scikit-image==0.18.1
Pillow==8.1.2
librosa==0.8.0
albumentations==0.5.2
pyzmq==22.0.3
Cython==0.29.22
numba==0.53.0
requests==2.25.1
psutil==5.8.0
trafaret-config==2.0.2
pydantic==1.8.1
PyYAML==5.4.1
notebook==6.3.0
ipywidgets==7.6.3
tqdm==4.59.0
pytest==6.2.2
mypy==0.812
flake8==3.9.0

dokai:21.05-base

ghcr.io/osai-ai/dokai:21.05-base

CUDA (11.3), cuDNN (8.2.0)
FFmpeg (release/4.4), nv-codec-headers (sdk/10.0)
Python (3.8.5)

pip==21.1.1
setuptools==56.2.0
packaging==20.9
numpy==1.20.3
opencv-python==4.5.2.52
scipy==1.6.3
matplotlib==3.4.2
pandas==1.2.4
scikit-learn==0.24.2
scikit-image==0.18.1
Pillow==8.2.0
librosa==0.8.0
albumentations==0.5.2
pyzmq==22.0.3
Cython==0.29.23
numba==0.53.1
requests==2.25.1
psutil==5.8.0
trafaret-config==2.0.2
pydantic==1.8.1
PyYAML==5.4.1
notebook==6.3.0
ipywidgets==7.6.3
tqdm==4.60.0
pytest==6.2.4
mypy==0.812
flake8==3.9.2

dokai:21.07-base

ghcr.io/osai-ai/dokai:21.07-base

CUDA (11.3.1), cuDNN (8.2.0)
FFmpeg (release/4.4), nv-codec-headers (sdk/10.0)
Python (3.8.10)

pip==21.1.3
setuptools==57.0.0
packaging==20.9
numpy==1.21.0
opencv-python==4.5.2.54
scipy==1.7.0
matplotlib==3.4.2
pandas==1.2.5
scikit-learn==0.24.2
scikit-image==0.18.2
Pillow==8.2.0
librosa==0.8.1
albumentations==1.0.0
pyzmq==22.1.0
Cython==0.29.23
numba==0.53.1
requests==2.25.1
psutil==5.8.0
trafaret-config==2.0.2
pydantic==1.8.2
PyYAML==5.4.1
notebook==6.4.0
ipywidgets==7.6.3
tqdm==4.61.1
pytest==6.2.4
mypy==0.910
flake8==3.9.2

dokai:21.08-base

ghcr.io/osai-ai/dokai:21.08-base

CUDA (11.4.1), cuDNN (8.2.2)
FFmpeg (release/4.4), nv-codec-headers (sdk/11.0)
Python (3.8.10)

pip==21.2.3
setuptools==57.4.0
packaging==21.0
numpy==1.21.1
opencv-python==4.5.3.56
scipy==1.7.1
matplotlib==3.4.2
pandas==1.3.1
scikit-learn==0.24.2
scikit-image==0.18.2
Pillow==8.3.1
librosa==0.8.1
albumentations==1.0.3
pyzmq==22.2.1
Cython==0.29.24
numba==0.53.1
requests==2.26.0
psutil==5.8.0
pydantic==1.8.2
PyYAML==5.4.1
notebook==6.4.3
ipywidgets==7.6.3
tqdm==4.62.0
pytest==6.2.4
mypy==0.910
flake8==3.9.2

dokai:21.09-base

ghcr.io/osai-ai/dokai:21.09-base

CUDA (11.4.2), cuDNN (8.2.4)
FFmpeg (release/4.4), nv-codec-headers (sdk/11.0)
Python (3.8.10)

pip==21.2.4
setuptools==58.1.0
packaging==21.0
numpy==1.21.2
opencv-python==4.5.3.56
scipy==1.7.1
matplotlib==3.4.3
pandas==1.3.3
scikit-learn==1.0
scikit-image==0.18.3
Pillow==8.3.2
librosa==0.8.1
albumentations==1.0.3
pyzmq==22.3.0
Cython==0.29.24
numba==0.53.1
requests==2.26.0
psutil==5.8.0
pydantic==1.8.2
PyYAML==5.4.1
notebook==6.4.4
ipywidgets==7.6.5
tqdm==4.62.3
pytest==6.2.5
mypy==0.910
flake8==3.9.2

pytorch

dokai:20.09-pytorch

ghcr.io/osai-ai/dokai:20.09-pytorch

additionally to dokai:20.09-base:

torch==1.6.0
torchvision==0.7.0
pytorch-argus==0.1.2
timm==0.2.1
apex (master)

dokai:20.10-pytorch

ghcr.io/osai-ai/dokai:20.10-pytorch

additionally to dokai:20.10-base:

torch==1.6.0
torchvision==0.7.0
pytorch-argus==0.1.2
timm==0.2.1
apex (master)

dokai:20.12-pytorch

ghcr.io/osai-ai/dokai:20.12-pytorch

additionally to dokai:20.12-base:

torch==1.7.1 (source, v1.7.1 tag)
torchvision==0.8.2 (source, v0.8.2 tag)
pytorch-argus==0.2.0
timm==0.3.2
kornia==0.4.1
apex (source, master branch)

dokai:21.01-pytorch

ghcr.io/osai-ai/dokai:21.01-pytorch

additionally to dokai:21.01-base:

torch==1.8.0a0+4aea007 (source, master branch)
torchvision==0.8.2 (source, v0.8.2 tag)
pytorch-argus==0.2.0
timm==0.3.4
kornia==0.4.1
apex (source, master branch)

dokai:21.02-pytorch

ghcr.io/osai-ai/dokai:21.02-pytorch

additionally to dokai:21.02-base:

torch==1.9.0a0+c2b9283 (source, master branch)
torchvision==0.8.2 (source, v0.8.2 tag)
pytorch-argus==0.2.0
timm==0.4.4 (source, master branch)
kornia==0.4.1
pretrainedmodels==0.7.4
efficientnet-pytorch==0.7.0
segmentation-models-pytorch==0.1.3
apex (source, master branch)

dokai:21.03-pytorch

ghcr.io/osai-ai/dokai:21.03-pytorch

additionally to dokai:21.03-base:

torch==1.8.0 (source, v1.8.0 tag)
torchvision==0.9.0 (source, v0.9.0 tag)
torchaudio==0.8.0 (source, v0.8.0 tag)
pytorch-argus==0.2.1
timm==0.4.5
kornia==0.5.0
pretrainedmodels==0.7.4
efficientnet-pytorch==0.7.0
segmentation-models-pytorch==0.1.3
apex (source, master branch)

dokai:21.05-pytorch

ghcr.io/osai-ai/dokai:21.05-pytorch

additionally to dokai:21.05-base:

torch==1.8.1 (source, v1.8.1 tag)
torchvision==0.9.1 (source, v0.9.1 tag)
torchaudio==0.8.1 (source, v0.8.1 tag)
pytorch-argus==0.2.1
timm==0.4.8 (source, master branch)
kornia==0.5.1
pretrainedmodels==0.7.4
efficientnet-pytorch==0.7.1
segmentation-models-pytorch==0.1.3
apex (source, master branch)

dokai:21.07-pytorch

ghcr.io/osai-ai/dokai:21.07-pytorch

additionally to dokai:21.07-base:

torch==1.9.0 (source, v1.9.0 tag)
torchvision==0.10.0 (source, v0.10.0 tag)
torchaudio==0.9.0 (source, v0.9.0 tag)
pytorch-argus==0.2.1
pretrainedmodels==0.7.4
efficientnet-pytorch==0.7.1
timm==0.4.12
segmentation-models-pytorch==0.1.3
kornia==0.5.5
apex (source, master branch)

dokai:21.08-pytorch

ghcr.io/osai-ai/dokai:21.08-pytorch

additionally to dokai:21.08-base:

MAGMA (2.6.1)

torch==1.10.0a0+git5b8389e (source, master branch)
torchvision==0.10.0 (source, v0.10.0 tag)
torchaudio==0.9.0 (source, v0.9.0 tag)
pytorch-ignite==0.4.6
pytorch-argus==0.2.1
pretrainedmodels==0.7.4
efficientnet-pytorch==0.7.1
timm==0.4.12
segmentation-models-pytorch==0.2.0
kornia==0.5.8
apex (source, master branch)

dokai:21.09-pytorch

ghcr.io/osai-ai/dokai:21.09-pytorch

additionally to dokai:21.09-base:

MAGMA (2.6.1)

torch==1.10.0-rc1 (source, v1.10.0-rc1 tag)
torchvision==0.10.1 (source, v0.10.1 tag)
torchaudio==0.9.1 (source, v0.9.1 tag)
pytorch-ignite==0.4.6
pytorch-argus==0.2.1
pretrainedmodels==0.7.4
efficientnet-pytorch==0.7.1
timm==0.4.12
segmentation-models-pytorch==0.2.0
kornia==0.5.11
apex (source, master branch)

tensor-stream

dokai:20.09-tensor-stream

ghcr.io/osai-ai/dokai:20.09-tensor-stream

additionally to dokai:20.09-pytorch:

tensor-stream==0.4.6 (dev)

dokai:20.10-tensor-stream

ghcr.io/osai-ai/dokai:20.10-tensor-stream

additionally to dokai:20.10-pytorch:

tensor-stream==0.4.6 (dev)

dokai:20.12-tensor-stream

ghcr.io/osai-ai/dokai:20.12-tensor-stream

additionally to dokai:20.12-pytorch:

tensor-stream==0.4.6 (source, dev branch)

dokai:21.01-tensor-stream

ghcr.io/osai-ai/dokai:21.01-tensor-stream

additionally to dokai:21.01-pytorch:

tensor-stream==0.4.6 (source, dev branch)

dokai:21.02-tensor-stream

ghcr.io/osai-ai/dokai:21.02-tensor-stream

additionally to dokai:21.02-pytorch:

tensor-stream==0.4.6 (source, dev branch)

dokai:21.03-tensor-stream

ghcr.io/osai-ai/dokai:21.03-tensor-stream

additionally to dokai:21.03-pytorch:

tensor-stream==0.4.6 (source, dev branch)

dokai:21.05-tensor-stream

ghcr.io/osai-ai/dokai:21.05-tensor-stream

additionally to dokai:21.05-pytorch:

tensor-stream==0.4.6 (source, dev branch)

dokai:21.07-tensor-stream

ghcr.io/osai-ai/dokai:21.07-tensor-stream

additionally to dokai:21.07-pytorch:

tensor-stream==0.4.6 (source, dev branch)

dokai:21.08-tensor-stream

ghcr.io/osai-ai/dokai:21.08-tensor-stream

additionally to dokai:21.08-pytorch:

tensor-stream==0.4.6 (source, dev branch)

dokai:21.09-tensor-stream

ghcr.io/osai-ai/dokai:21.09-tensor-stream

additionally to dokai:21.09-pytorch:

tensor-stream==0.4.6 (source, dev branch)

Comments
  • Does not work `torchaudio.transforms.MelSpectrogram`, no MKL

    I used docker pulled from ghcr.io/osai-ai/dokai:21.05-pytorch.

    The following code gives an error:

    python -c 'import torchaudio; import torch; a = torch.randn(2, 4663744); torchaudio.transforms.MelSpectrogram(44100)(a)'

    /usr/local/lib/python3.8/dist-packages/torchaudio-0.8.0a0+e4e171a-py3.8-linux-x86_64.egg/torchaudio/functional/functional.py:357: UserWarning: At least one mel filterbank has all zero values. The value for `n_mels` (128) may be set too high. Or, the value for `n_freqs` (201) may be set too low.
      warnings.warn(
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/torchaudio-0.8.0a0+e4e171a-py3.8-linux-x86_64.egg/torchaudio/transforms.py", line 480, in forward
        specgram = self.spectrogram(waveform)
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/torchaudio-0.8.0a0+e4e171a-py3.8-linux-x86_64.egg/torchaudio/transforms.py", line 96, in forward
        return F.spectrogram(
      File "/usr/local/lib/python3.8/dist-packages/torchaudio-0.8.0a0+e4e171a-py3.8-linux-x86_64.egg/torchaudio/functional/functional.py", line 91, in spectrogram
        spec_f = torch.stft(
      File "/usr/local/lib/python3.8/dist-packages/torch/functional.py", line 580, in stft
        return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore
    RuntimeError: fft: ATen not compiled with MKL support
    

    and this check python -c 'import torch; a = torch.randn(10); print(a.to_mkldnn().layout)' works correctly.

    opened by Ayagoz 2
  • Expired link to nv-codec-headers repo

    Hi, git.videolan.org is experiencing some issues again, it looks like the certificate for the domain is expired or something like that (but it was alive just a week ago!). Also, they are migrating to code.videolan.org, however nv-codec-headers is not there yet.

    The current link does not work: https://github.com/osai-ai/dokai/blob/6f99608b70881de43740bc84c34f42249f4f65aa/docker/Dockerfile.base#L43

    Temporary workaround: https://github.com/FFmpeg/nv-codec-headers.git
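
    A patched build step using that mirror might look roughly like this (a sketch only, assuming the sdk/11.0 branch used by recent images; not necessarily the fix applied in the repo):

    git clone -b sdk/11.0 https://github.com/FFmpeg/nv-codec-headers.git
    cd nv-codec-headers && make install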

    opened by NikolasEnt 1
Releases(v22.11)
  • v22.11(Nov 22, 2022)

    Updates

    • TensorRT 8.5.1
    • torch 1.14.0a0+git71fe069 (source, close to v1.13.0 after commit "ada lovelace (arch 8.9) support #87436")
    • torchvision 0.14.0 (from source, v0.14.0 tag)
    • torchaudio 0.13.0 (from source, v0.13.0 tag)
    • Update other PyPI packages
    • Ada Lovelace architecture support
    • PyTorch image models benchmark link

    Images

    base

    Python with ML and CV packages, CUDA (11.8.0), cuDNN (8.6.0), FFmpeg (4.4) with NVENC/NVDEC support.
    ghcr.io/osai-ai/dokai:22.11-base

    dokai:22.11-base

    Supported NVIDIA architectures: Pascal (sm_60, sm_61), Volta (sm_70), Turing (sm_75), Ampere (sm_80, sm_86), Ada Lovelace (sm_89).
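
    A quick way to see which architectures the bundled PyTorch build targets and what the local GPU reports (a sketch using the pytorch image from this release; output depends on the driver and GPU):

    docker run --rm \
        --gpus=all \
        ghcr.io/osai-ai/dokai:22.11-pytorch \
        python3 -c "import torch; print(torch.cuda.get_device_capability(), torch.cuda.get_arch_list())"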

    CUDA (11.8.0), cuDNN (8.6.0)
    FFmpeg (release/4.4), nv-codec-headers (sdk/11.0)
    Python (3.10.6)
    CMake (3.22.1)

    pip==22.3.1
    setuptools==65.5.1
    packaging==21.3
    numpy==1.23.4
    opencv-python==4.6.0.66
    scipy==1.9.3
    matplotlib==3.6.2
    pandas==1.5.1
    scikit-learn==1.1.3
    scikit-image==0.19.3
    Pillow==9.3.0
    librosa==0.9.2
    albumentations==1.3.0
    pyzmq==24.0.1
    Cython==0.29.32
    numba==0.56.4
    requests==2.28.1
    psutil==5.9.4
    pydantic==1.10.2
    PyYAML==6.0
    notebook==6.5.2
    ipywidgets==8.0.2
    tqdm==4.64.1
    pytest==7.2.0
    pytest-cov==4.0.0
    mypy==0.991
    flake8==5.0.4
    pre-commit==2.20.0

    pytorch

    TensorRT (8.5.1), PyTorch (1.13.0), torchvision (0.14.0), torchaudio (0.13.0) and torch based libraries.
    ghcr.io/osai-ai/dokai:22.11-pytorch

    dokai:22.11-pytorch

    additionally to dokai:22.11-base:

    TensorRT (8.5.1)
    MAGMA (2.6.2)

    torch==1.14.0a0+git71fe069 (source, close to v1.13.0 after commit "ada lovelace (arch 8.9) support #87436")
    torchvision==0.14.0 (source, v0.14.0 tag)
    torchaudio==0.13.0 (source, v0.13.0 tag)
    pytorch-ignite==0.4.10
    pytorch-argus==1.0.0
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.1
    pytorch-toolbelt==0.5.2
    kornia==0.6.8
    timm==0.6.11
    segmentation-models-pytorch==0.3.0

    tensor-stream

    Tensor Stream for real-time video streams decoding on GPU.
    ghcr.io/osai-ai/dokai:22.11-tensor-stream

    dokai:22.11-tensor-stream

    additionally to dokai:22.11-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v22.03(Mar 28, 2022)

    Updates

    • CUDA 11.6.0
    • torch 1.11.0 (from source, v1.11.0 tag)
    • torchvision 0.12.0 (from source, v0.12.0 tag)
    • torchaudio 0.11.0 (from source, v0.11.0 tag)
    • CMake (3.22.2)
    • Update other PyPI packages
    • Update README

    Images

    base

    Python with ML and CV packages, CUDA (11.6.0), FFmpeg (4.4) with NVENC support.

    dokai:22.03-base

    ghcr.io/osai-ai/dokai:22.03-base

    CUDA (11.6.0)
    FFmpeg (release/4.4), nv-codec-headers (sdk/11.0)
    Python (3.8.10)
    CMake (3.22.2)

    pip==22.0.3
    setuptools==59.5.0
    packaging==21.3
    numpy==1.21.5
    opencv-python==4.5.5.62
    scipy==1.8.0
    matplotlib==3.5.1
    pandas==1.4.1
    scikit-learn==1.0.1
    scikit-image==0.18.3
    Pillow==8.4.0
    librosa==0.8.1
    albumentations==1.1.0
    pyzmq==22.3.0
    Cython==0.29.24
    numba==0.53.1
    requests==2.26.0
    psutil==5.8.0
    pydantic==1.8.2
    PyYAML==6.0
    notebook==6.4.5
    ipywidgets==7.6.5
    tqdm==4.62.3
    pytest==6.2.5
    mypy==0.910
    flake8==4.0.1

    pytorch

    PyTorch, torchvision and torch based libraries.

    dokai:22.03-pytorch

    ghcr.io/osai-ai/dokai:22.03-pytorch

    additionally to dokai:22.03-base:

    MAGMA (2.6.1)

    torch==1.11.0 (source, v1.11.0 tag)
    torchvision==0.12.0 (source, v0.12.0 tag)
    torchaudio==0.11.0 (source, v0.11.0 tag)
    pytorch-ignite==0.4.8
    pytorch-argus==1.0.0
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.1
    timm==0.5.4
    segmentation-models-pytorch==0.2.1
    kornia==0.6.3

    tensor-stream

    Tensor Stream.

    dokai:22.03-tensor-stream

    ghcr.io/osai-ai/dokai:22.03-tensor-stream

    additionally to dokai:22.03-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.11(Nov 9, 2021)

    Updates

    • torch 1.10.0 (from source, v1.10.0 tag)
    • torchvision 0.11.1 (from source, v0.11.1 tag)
    • torchaudio 0.10.0 (from source, v0.10.0 tag)
    • CMake (3.21.4)
    • Remove Apex installation
    • Update other PyPI packages

    Images

    base

    Python with ML and CV packages, CUDA (11.4.2), cuDNN (8.2.4), FFmpeg (4.4) with NVENC support.
    ghcr.io/osai-ai/dokai:21.11-base

    dokai:21.11-base

    CUDA (11.4.2), cuDNN (8.2.4)
    FFmpeg (release/4.4), nv-codec-headers (sdk/11.0)
    Python (3.8.10)
    CMake (3.21.4)

    pip==21.3.1
    setuptools==58.5.3
    packaging==21.2
    numpy==1.21.4
    opencv-python==4.5.4.58
    scipy==1.7.2
    matplotlib==3.4.3
    pandas==1.3.4
    scikit-learn==1.0.1
    scikit-image==0.18.3
    Pillow==8.4.0
    librosa==0.8.1
    albumentations==1.1.0
    pyzmq==22.3.0
    Cython==0.29.24
    numba==0.53.1
    requests==2.26.0
    psutil==5.8.0
    pydantic==1.8.2
    PyYAML==6.0
    notebook==6.4.5
    ipywidgets==7.6.5
    tqdm==4.62.3
    pytest==6.2.5
    mypy==0.910
    flake8==4.0.1

    pytorch

    PyTorch, torchvision and torch based libraries.
    ghcr.io/osai-ai/dokai:21.11-pytorch

    dokai:21.11-pytorch

    additionally to dokai:21.11-base:

    MAGMA (2.6.1)

    torch==1.10.0 (source, v1.10.0 tag)
    torchvision==0.11.1 (source, v0.11.1 tag)
    torchaudio==0.10.0 (source, v0.10.0 tag)
    pytorch-ignite==0.4.7
    pytorch-argus==1.0.0
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.1
    timm==0.4.12
    segmentation-models-pytorch==0.2.0
    kornia==0.6.1

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.11-tensor-stream

    dokai:21.11-tensor-stream

    additionally to dokai:21.11-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.09(Sep 30, 2021)

    Updates

    • CUDA 11.4.2, cuDNN 8.2.4
    • Build torch 1.10.0-rc1 (from source, v1.10.0-rc1 tag)
    • FFmpeg with HTTPS support
    • kornia 0.5.11
    • Update other PyPI packages

    Images

    base

    Python with ML and CV packages, CUDA (11.4.2), cuDNN (8.2.4), FFmpeg (4.4) with NVENC support.
    ghcr.io/osai-ai/dokai:21.09-base

    dokai:21.09-base

    CUDA (11.4.2), cuDNN (8.2.4)
    FFmpeg (release/4.4), nv-codec-headers (sdk/11.0)
    Python (3.8.10)

    pip==21.2.4
    setuptools==58.1.0
    packaging==21.0
    numpy==1.21.2
    opencv-python==4.5.3.56
    scipy==1.7.1
    matplotlib==3.4.3
    pandas==1.3.3
    scikit-learn==1.0
    scikit-image==0.18.3
    Pillow==8.3.2
    librosa==0.8.1
    albumentations==1.0.3
    pyzmq==22.3.0
    Cython==0.29.24
    numba==0.53.1
    requests==2.26.0
    psutil==5.8.0
    pydantic==1.8.2
    PyYAML==5.4.1
    notebook==6.4.4
    ipywidgets==7.6.5
    tqdm==4.62.3
    pytest==6.2.5
    mypy==0.910
    flake8==3.9.2

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:21.09-pytorch

    dokai:21.09-pytorch

    additionally to dokai:21.09-base:

    MAGMA (2.6.1)

    torch==1.10.0-rc1 (source, v1.10.0-rc1 tag)
    torchvision==0.10.1 (source, v0.10.1 tag)
    torchaudio==0.9.1 (source, v0.9.1 tag)
    pytorch-ignite==0.4.6
    pytorch-argus==0.2.1
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.1
    timm==0.4.12
    segmentation-models-pytorch==0.2.0
    kornia==0.5.11
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.09-tensor-stream

    dokai:21.09-tensor-stream

    additionally to dokai:21.09-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.08(Aug 12, 2021)

    Updates

    • CUDA 11.4.1, cuDNN 8.2.2
    • nv-codec-headers (sdk/11.0)
    • MAGMA 2.6.1
    • Build torch 1.10.0a0+git5b8389e from source (master branch)
    • pytorch-ignite 0.4.6
    • segmentation-models-pytorch 0.2.0
    • kornia 0.5.8
    • Update other PyPI packages

    Images

    base

    Python with ML and CV packages, CUDA (11.4.1), cuDNN (8.2.2), FFmpeg (4.4) with NVENC support.
    ghcr.io/osai-ai/dokai:21.08-base

    dokai:21.08-base

    CUDA (11.4.1), cuDNN (8.2.2)
    FFmpeg (release/4.4), nv-codec-headers (sdk/11.0)
    Python (3.8.10)

    pip==21.2.3
    setuptools==57.4.0
    packaging==21.0
    numpy==1.21.1
    opencv-python==4.5.3.56
    scipy==1.7.1
    matplotlib==3.4.2
    pandas==1.3.1
    scikit-learn==0.24.2
    scikit-image==0.18.2
    Pillow==8.3.1
    librosa==0.8.1
    albumentations==1.0.3
    pyzmq==22.2.1
    Cython==0.29.24
    numba==0.53.1
    requests==2.26.0
    psutil==5.8.0
    pydantic==1.8.2
    PyYAML==5.4.1
    notebook==6.4.3
    ipywidgets==7.6.3
    tqdm==4.62.0
    pytest==6.2.4
    mypy==0.910
    flake8==3.9.2

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:21.08-pytorch

    dokai:21.08-pytorch

    additionally to dokai:21.08-base:

    MAGMA (2.6.1)

    torch==1.10.0a0+git5b8389e (source, master branch)
    torchvision==0.10.0 (source, v0.10.0 tag)
    torchaudio==0.9.0 (source, v0.9.0 tag)
    pytorch-ignite==0.4.6
    pytorch-argus==0.2.1
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.1
    timm==0.4.12
    segmentation-models-pytorch==0.2.0
    kornia==0.5.8
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.08-tensor-stream

    dokai:21.08-tensor-stream

    additionally to dokai:21.08-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.07(Jul 2, 2021)

    Updates

    • CUDA 11.3.1
    • Build torch 1.9.0 from source (v1.9.0 tag)
    • torchvision 0.10.0 from source (v0.10.0 tag)
    • torchaudio 0.9.0 from source (v0.9.0 tag)
    • timm 0.4.12
    • kornia 0.5.5
    • Update other PyPI packages

    Images

    base

    Python with ML and CV packages, CUDA (11.3.1), cuDNN (8.2.0), FFmpeg (4.4) with NVENC support.
    ghcr.io/osai-ai/dokai:21.07-base

    dokai:21.07-base

    CUDA (11.3.1), cuDNN (8.2.0)
    FFmpeg (release/4.4), nv-codec-headers (sdk/10.0)
    Python (3.8.10)

    pip==21.1.3
    setuptools==57.0.0
    packaging==20.9
    numpy==1.21.0
    opencv-python==4.5.2.54
    scipy==1.7.0
    matplotlib==3.4.2
    pandas==1.2.5
    scikit-learn==0.24.2
    scikit-image==0.18.2
    Pillow==8.2.0
    librosa==0.8.1
    albumentations==1.0.0
    pyzmq==22.1.0
    Cython==0.29.23
    numba==0.53.1
    requests==2.25.1
    psutil==5.8.0
    trafaret-config==2.0.2
    pydantic==1.8.2
    PyYAML==5.4.1
    notebook==6.4.0
    ipywidgets==7.6.3
    tqdm==4.61.1
    pytest==6.2.4
    mypy==0.910
    flake8==3.9.2

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:21.07-pytorch

    dokai:21.07-pytorch

    additionally to dokai:21.07-base:

    torch==1.9.0 (source, v1.9.0 tag)
    torchvision==0.10.0 (source, v0.10.0 tag)
    torchaudio==0.9.0 (source, v0.9.0 tag)
    pytorch-argus==0.2.1
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.1
    timm==0.4.12
    segmentation-models-pytorch==0.1.3
    kornia==0.5.5
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.07-tensor-stream

    dokai:21.07-tensor-stream

    additionally to dokai:21.07-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.05(May 11, 2021)

    Updates

    • CUDA 11.3, cuDNN 8.2.0
    • Build torch 1.8.1 from source (v1.8.1 tag)
    • torchvision 0.9.1 from source (v0.9.1 tag)
    • torchaudio 0.8.1 from source (v0.8.1 tag)
    • timm 0.4.8 from source (master branch)
    • Update other PyPI packages

    Images

    base

    Python with ML and CV packages, CUDA (11.3), cuDNN (8.2.0), FFmpeg (4.4) with NVENC support.
    ghcr.io/osai-ai/dokai:21.05-base

    dokai:21.05-base

    CUDA (11.3), cuDNN (8.2.0)
    FFmpeg (release/4.4), nv-codec-headers (sdk/10.0)
    Python (3.8.5)

    pip==21.1.1
    setuptools==56.2.0
    packaging==20.9
    numpy==1.20.3
    opencv-python==4.5.2.52
    scipy==1.6.3
    matplotlib==3.4.2
    pandas==1.2.4
    scikit-learn==0.24.2
    scikit-image==0.18.1
    Pillow==8.2.0
    librosa==0.8.0
    albumentations==0.5.2
    pyzmq==22.0.3
    Cython==0.29.23
    numba==0.53.1
    requests==2.25.1
    psutil==5.8.0
    trafaret-config==2.0.2
    pydantic==1.8.1
    PyYAML==5.4.1
    notebook==6.3.0
    ipywidgets==7.6.3
    tqdm==4.60.0
    pytest==6.2.4
    mypy==0.812
    flake8==3.9.2

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:21.05-pytorch

    dokai:21.05-pytorch

    additionally to dokai:21.05-base:

    torch==1.8.1 (source, v1.8.1 tag)
    torchvision==0.9.1 (source, v0.9.1 tag)
    torchaudio==0.8.1 (source, v0.8.1 tag)
    pytorch-argus==0.2.1
    timm==0.4.8 (source, master branch)
    kornia==0.5.1
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.1
    segmentation-models-pytorch==0.1.3
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.05-tensor-stream

    dokai:21.05-tensor-stream

    additionally to dokai:21.05-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.03(Mar 25, 2021)

    Updates

    • CUDA 11.2.2, cuDNN 8.1.1
    • FFmpeg 4.4
    • Build torch 1.8.0 from source (v1.8.0 tag)
    • torchvision 0.9.0
    • Add PyTorch package: torchaudio 0.8.0
    • timm 0.4.5
    • pytorch-argus 0.2.1
    • Update other PyPI packages
    • Support more GPU architectures for FFmpeg

    Images

    base

    Python with ML and CV packages, CUDA (11.2.2), cuDNN (8.1.1), FFmpeg (4.4) with NVENC support.
    ghcr.io/osai-ai/dokai:21.03-base

    dokai:21.03-base

    ghcr.io/osai-ai/dokai:21.03-base

    CUDA (11.2.2), cuDNN (8.1.1)
    FFmpeg (release/4.4), nv-codec-headers (sdk/10.0)
    Python (3.8.5)

    pip==21.0.1
    setuptools==54.2.0
    packaging==20.9
    numpy==1.20.1
    opencv-python==4.5.1.48
    scipy==1.6.1
    matplotlib==3.3.4
    pandas==1.2.3
    scikit-learn==0.24.1
    scikit-image==0.18.1
    Pillow==8.1.2
    librosa==0.8.0
    albumentations==0.5.2
    pyzmq==22.0.3
    Cython==0.29.22
    numba==0.53.0
    requests==2.25.1
    psutil==5.8.0
    trafaret-config==2.0.2
    pydantic==1.8.1
    PyYAML==5.4.1
    notebook==6.3.0
    ipywidgets==7.6.3
    tqdm==4.59.0
    pytest==6.2.2
    mypy==0.812
    flake8==3.9.0

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:21.03-pytorch

    dokai:21.03-pytorch

    additionally to dokai:21.03-base:

    torch==1.8.0 (source, v1.8.0 tag)
    torchvision==0.9.0 (source, v0.9.0 tag)
    torchaudio==0.8.0 (source, v0.8.0 tag)
    pytorch-argus==0.2.1
    timm==0.4.5
    kornia==0.5.0
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.0
    segmentation-models-pytorch==0.1.3
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.03-tensor-stream

    dokai:21.03-tensor-stream

    additionally to dokai:21.03-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.02(Feb 23, 2021)

    New features

    • CUDA 11.2.1, cuDNN 8.1.0
    • Build torch 1.9.0a0+c2b9283 from source (master branch)
    • Install timm 0.4.4 from source (master branch)
    • Add more Python packages: tqdm, PyYAML, pytest, mypy, flake8
    • Add more PyTorch packages: pretrainedmodels, efficientnet-pytorch, segmentation-models-pytorch
    • Update other PyPI packages

    Images

    base

    Python with ML and CV packages, CUDA (11.2.1), cuDNN (8.1.0), FFmpeg with NVENC support.
    ghcr.io/osai-ai/dokai:21.02-base

    dokai:21.02-base

    CUDA (11.2.1), cuDNN (8.1.0)
    FFmpeg (release/4.3), nv-codec-headers (sdk/10.0)
    Python (3.8.5)

    pip==21.0.1
    setuptools==53.0.0
    packaging==20.9
    numpy==1.20.1
    opencv-python==4.5.1.48
    scipy==1.6.1
    matplotlib==3.3.4
    pandas==1.2.2
    scikit-learn==0.24.1
    scikit-image==0.18.1
    Pillow==8.1.0
    librosa==0.8.0
    albumentations==0.5.2
    pyzmq==22.0.3
    Cython==0.29.22
    numba==0.52.0
    requests==2.25.1
    psutil==5.8.0
    trafaret-config==2.0.2
    pydantic==1.7.3
    PyYAML==5.4.1
    notebook==6.2.0
    ipywidgets==7.6.3
    tqdm==4.57.0
    pytest==6.2.2
    mypy==0.812
    flake8==3.8.4

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:21.02-pytorch

    dokai:21.02-pytorch

    additionally to dokai:21.02-base:

    torch==1.9.0a0+c2b9283 (source, master branch)
    torchvision==0.8.2 (source, v0.8.2 tag)
    pytorch-argus==0.2.0
    timm==0.4.4 (source, master branch)
    kornia==0.4.1
    pretrainedmodels==0.7.4
    efficientnet-pytorch==0.7.0
    segmentation-models-pytorch==0.1.3
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.02-tensor-stream

    dokai:21.02-tensor-stream

    additionally to dokai:21.02-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v21.01(Jan 21, 2021)

    New features

    • CUDA 11.1.1
    • nv-codec-headers (sdk/10.0)
    • Build torch 1.8.0a0+4aea007 from source (master branch)
    • Update other PyPI packages
    • Docker Hub mirror

    Images

    base

    Python with ML and CV packages, CUDA, FFmpeg with NVENC support.
    ghcr.io/osai-ai/dokai:21.01-base

    dokai:21.01-base

    CUDA (11.1.1), cuDNN (8.0.5)
    FFmpeg (release/4.3), nv-codec-headers (sdk/10.0)
    Python (3.8.5)

    pip==20.3.3
    setuptools==51.3.3
    packaging==20.8
    numpy==1.19.5
    opencv-python==4.5.1.48
    scipy==1.6.0
    matplotlib==3.3.3
    pandas==1.2.0
    notebook==6.2.0
    scikit-learn==0.24.1
    scikit-image==0.18.1
    albumentations==0.5.2
    Cython==0.29.21
    Pillow==8.1.0
    trafaret-config==2.0.2
    pyzmq==21.0.1
    librosa==0.8.0
    psutil==5.8.0
    pydantic==1.7.3
    requests==2.25.1

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:21.01-pytorch

    dokai:21.01-pytorch

    additionally to dokai:21.01-base:

    torch==1.8.0a0+4aea007 (source, master branch)
    torchvision==0.8.2 (source, v0.8.2 tag)
    pytorch-argus==0.2.0
    timm==0.3.4
    kornia==0.4.1
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:21.01-tensor-stream

    dokai:21.01-tensor-stream

    additionally to dokai:21.01-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v20.12(Dec 24, 2020)

    New features

    • CUDA 11.1, cuDNN 8.0.5, Ubuntu 20.04, Python 3.8.5
    • Build PyTorch and torchvision from source
    • Build CUDA libraries for Ampere architecture (TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0;7.5;8.0;8.6")
    • kornia

    Images

    base

    Python with ML and CV packages, CUDA, FFmpeg with NVENC support.
    ghcr.io/osai-ai/dokai:20.12-base

    dokai:20.12-base

    CUDA (11.1), cuDNN (8.0.5)
    FFmpeg (release/4.3), nv-codec-headers (sdk/9.1)
    Python (3.8.5)

    pip==20.3.3
    setuptools==51.0.0
    packaging==20.8
    numpy==1.19.4
    opencv-python==4.4.0.46
    scipy==1.5.4
    matplotlib==3.3.3
    pandas==1.1.5
    notebook==6.1.5
    scikit-learn==0.23.2
    scikit-image==0.18.0
    albumentations==0.5.2
    Cython==0.29.21
    Pillow==8.0.1
    trafaret-config==2.0.2
    pyzmq==20.0.0
    librosa==0.8.0
    psutil==5.8.0
    pydantic==1.7.3
    requests==2.25.1

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:20.12-pytorch

    dokai:20.12-pytorch

    additionally to dokai:20.12-base:

    torch==1.7.1 (source, v1.7.1 tag)
    torchvision==0.8.2 (source, v0.8.2 tag)
    pytorch-argus==0.2.0
    timm==0.3.2
    kornia==0.4.1
    apex (source, master branch)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:20.12-tensor-stream

    dokai:20.12-tensor-stream

    additionally to dokai:20.12-pytorch:

    tensor-stream==0.4.6 (source, dev branch)

  • v20.10(Oct 22, 2020)

    New features

    • pydantic
    • requests

    Fix

    • Build Tensor Stream for older GPU architectures (CUDA compute capabilities 3.7+PTX;5.0;6.0;6.1;7.0;7.5)

    Images

    base

    Python with ML and CV packages, CUDA, FFmpeg with NVENC support.
    ghcr.io/osai-ai/dokai:20.10-base

    dokai:20.10-base

    FFmpeg (release/4.3), nv-codec-headers (sdk/9.1)
    Python (3.6.9)

    pip==20.2.4
    setuptools==50.3.2
    packaging==20.4
    numpy==1.19.2
    opencv-python==4.4.0.44
    scipy==1.5.3
    matplotlib==3.3.2
    pandas==1.1.3
    notebook==6.1.4
    scikit-learn==0.23.2
    scikit-image==0.17.2
    albumentations==0.5.0
    Cython==0.29.21
    Pillow==8.0.0
    trafaret-config==2.0.2
    pyzmq==19.0.2
    librosa==0.8.0
    psutil==5.7.2
    dataclasses==0.7
    pydantic==1.6.1
    requests==2.24.0

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:20.10-pytorch

    dokai:20.10-pytorch

    additionally to dokai:20.10-base:

    torch==1.6.0
    torchvision==0.7.0
    pytorch-argus==0.1.2
    timm==0.2.1
    apex (master)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:20.10-tensor-stream

    dokai:20.10-tensor-stream

    additionally to dokai:20.10-pytorch:

    tensor-stream==0.4.6 (dev)

  • v20.09(Sep 29, 2020)

    base

    Python with ML and CV packages, CUDA, FFmpeg with NVENC support.
    ghcr.io/osai-ai/dokai:20.09-base

    dokai:20.09-base

    FFmpeg (release/4.3), nv-codec-headers (sdk/9.1)
    Python (3.6.9)

    pip==20.2.3
    setuptools==50.3.0
    packaging==20.4
    numpy==1.19.2
    opencv-python==4.4.0.42
    scipy==1.5.2
    matplotlib==3.3.2
    pandas==1.1.2
    notebook==6.1.4
    scikit-learn==0.23.2
    scikit-image==0.17.2
    albumentations==0.4.6
    Cython==0.29.21
    Pillow==7.2.0
    trafaret-config==2.0.2
    pyzmq==19.0.2
    librosa==0.8.0
    psutil==5.7.2
    dataclasses==0.7

    pytorch

    PyTorch, torchvision, Apex and torch based libraries.
    ghcr.io/osai-ai/dokai:20.09-pytorch

    dokai:20.09-pytorch

    additionally to dokai:20.09-base:

    torch==1.6.0
    torchvision==0.7.0
    pytorch-argus==0.1.2
    timm==0.2.1
    apex (master)

    tensor-stream

    Tensor Stream.
    ghcr.io/osai-ai/dokai:20.09-tensor-stream

    dokai:20.09-tensor-stream

    additionally to dokai:20.09-pytorch:

    tensor-stream==0.4.6 (dev)

Owner
OSAI
OSAI is developing automatic systems that help to analyze a game and provide real-time game data with Computer Vision and AI in Sports.