Using VapourSynth with super resolution models and speeding them up with TensorRT.

Overview

VSGAN-tensorrt-docker

Using image super resolution models with VapourSynth and speeding them up with TensorRT, built on NVIDIA/Torch-TensorRT combined with rlaphoenix/VSGAN. This repo makes it easy to use tiling and ESRGAN models; a minimal script sketch follows the list below. Models can be found on the wiki page. Further model architectures are planned to be added later on.

Currently working:

  • ESRGAN
  • RealESRGAN (adjust the model load manually in inference.py; settings are currently not adjusted automatically)
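
A minimal inference.py could look like the sketch below. The import path of ESRGAN_inference, the lsmas source filter and the file names are assumptions; adjust them to the actual repo layout and the plugins available in the container.

# minimal sketch of an inference.py; import path, source filter and file names are assumptions
import vapoursynth as vs
from src.esrgan import ESRGAN_inference  # assumed import path inside /workspace/tensorrt

core = vs.core

# load the input video from the mounted folder
clip = core.lsmas.LWLibavSource(source="/workspace/tensorrt/input.mkv")
# ESRGAN expects single-precision RGB input
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")
# tile_x/tile_y split each frame into 480x480 tiles (16 px padding) to keep VRAM usage bounded
clip = ESRGAN_inference(clip=clip, model_path="/workspace/RealESRGAN_x4plus_anime_6B.pth",
                        tile_x=480, tile_y=480, tile_pad=16, fp16=False, tta=False, tta_mode=1)
# convert back to 8-bit YUV so the y4m pipe can be consumed by ffmpeg or mpv
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()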

Usage:

# install docker, command for arch
yay -S docker nvidia-docker nvidia-container-toolkit
# put the Dockerfile in a directory and run the following inside that directory
docker build -t vsgan_tensorrt:latest .
# run with a mounted folder
docker run --privileged --gpus all -it --rm -v /home/Desktop/tensorrt:/workspace/tensorrt vsgan_tensorrt:latest
# you can use it in various ways, ffmpeg example
vspipe --y4m inference.py - | ffmpeg -i pipe: example.mkv

If Docker does not start, try this before using it:

# fixing docker errors
systemctl start docker
sudo chmod 666 /var/run/docker.sock

Windows is mostly similar, but the path needs to be changed slightly:

Example for C:\path
docker run --privileged --gpus all -it --rm -v //c/path:/workspace/tensorrt vsgan_tensorrt:latest

If you don't want to use Docker, VapourSynth install commands are here and a TensorRT example is here.

Set the input video path in inference.py and access videos through the mounted folder, as in the example below.
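
With the mount from the docker run command above, a file at /home/Desktop/tensorrt/input.mkv on the host becomes visible inside the container under /workspace/tensorrt, so the source line in inference.py references the container path (the file name here is a placeholder):

# path mapping: host /home/Desktop/tensorrt  ->  container /workspace/tensorrt
import vapoursynth as vs
core = vs.core

video_path = "/workspace/tensorrt/input.mkv"  # placeholder file name
clip = core.lsmas.LWLibavSource(source=video_path)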

It is also possible to pipe the video directly into mpv, but you most likely won't be able to achieve realtime speed. Change the mounted folder path to your own video folder and use the mpv Dockerfile instead. With a very efficient model it may be possible on a very good GPU. Only tested on Manjaro.

yay -S pulseaudio

# I am not sure if it is needed, but go into the pulseaudio settings, check "make pulseaudio network audio devices discoverable in the local network" and reboot

# start docker
docker run --rm -i -t \
    --network host \
    -e DISPLAY \
    -v /home/Schreibtisch/test/:/home/mpv/media \
    --ipc=host \
    --privileged \
    --gpus all \
    -e PULSE_COOKIE=/run/pulse/cookie \
    -v ~/.config/pulse/cookie:/run/pulse/cookie \
    -e PULSE_SERVER=unix:${XDG_RUNTIME_DIR}/pulse/native \
    -v ${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native \
    vsgan_tensorrt:latest
    
# run mpv
vspipe --y4m inference.py - | mpv -
Comments
  • Invalid data found when processing input

    Hey, when I start the inference.py script, this happens:

    Can someone help me?

    
    > ffmpeg version N-62110-g4d45f5acbd-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2022 the FFmpeg developers
    >   built with gcc 8 (Debian 8.3.0-6)
    >   configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
    >   libavutil      57. 26.100 / 57. 26.100
    >   libavcodec     59. 33.100 / 59. 33.100
    >   libavformat    59. 24.100 / 59. 24.100
    >   libavdevice    59.  6.100 / 59.  6.100
    >   libavfilter     8. 40.100 /  8. 40.100
    >   libswscale      6.  6.100 /  6.  6.100
    >   libswresample   4.  6.100 /  4.  6.100
    >   libpostproc    56.  5.100 / 56.  5.100
    > Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/Grammar.txt
    > Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/PatternGrammar.txt
    > Script evaluation failed:
    > Python exception: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
    > 
    > Traceback (most recent call last):
    >   File "src\cython\vapoursynth.pyx", line 2890, in vapoursynth._vpy_evaluate
    >   File "src\cython\vapoursynth.pyx", line 2891, in vapoursynth._vpy_evaluate
    >   File "inference.py", line 85, in <module>
    >     clip = ESRGAN_inference(clip=clip, model_path="/workspace/RealESRGAN_x4plus_anime_6B.pth", tile_x=480, tile_y=480, tile_pad=16, fp16=False, tta=False, tta_mode=1)
    >   File "/workspace/tensorrt/src/esrgan.py", line 680, in ESRGAN_inference
    >     import torch_tensorrt
    >   File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/__init__.py", line 11, in <module>
    >     from torch_tensorrt._compile import *
    >   File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/_compile.py", line 2, in <module>
    >     from torch_tensorrt import _enums
    >   File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/_enums.py", line 1, in <module>
    >     from torch_tensorrt._C import dtype, DeviceType, EngineCapability, TensorFormat
    > ImportError: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
    > 
    > pipe:: Invalid data found when processing input
    
    
    opened by NeoBurgerYT 10
  • Module not found 'scipy'

    I can't run my inference.py without getting this error message. Can someone direct me to where I can get the repo?

    File "/usr/local/lib/python3.8/dist-packages/mmedit/core/evaluation/metrics.py", line 7, in from scipy.ndimage import convolve ModuleNotFoundError: No module named 'scipy'

    pipe:: Invalid data found when processing input

    opened by terminatedkhla 8
  • Tutorial?

    Hi! This is amazing technology! I’m blown away. I’d love to contact you directly on how to use it in colab, I’m quite confused with the process. I’ve tried running it but not sure I’m running it correctly. Thanks in advance!

    opened by AIManifest 6
  • Trying On A M1 Mac

    So I followed this tutorial https://www.youtube.com/watch?v=B134jvhO8yk&t=0s, but when I run docker run --privileged --gpus all -it --rm -v /home/vsgan_path/:/workspace/tensorrt styler00dollar/vsgan_tensorrt:latest it just gives me an error that it doesn't find the right amd64 or something, and I rage-quit and deleted it without seeing the full error. PLS HELP ME :(

    opened by Ghostkwebb 6
  • Crash when using RIFE ensemble models in vsmlrt

    I get this error

    vapoursynth.Error: operator (): expects 8 input planes
    

    from this

    import vapoursynth as vs
    from vapoursynth import core
    core = vs.core
    import vsmlrt
    
    clip = core.lsmas.LWLibavSource(source=r"R:\output.mkv",cache=1, prefer_hw=1)
    clip = core.resize.Bicubic(clip, matrix_in_s="709", transfer_in_s='709', format=vs.RGBS)
    clip = vsmlrt.RIFE(clip, multi=4, model=46, backend=vsmlrt.Backend.TRT(fp16=True), tilesize=[1920,1088])
    clip = core.std.AssumeFPS(clip=clip, fpsnum=60, fpsden=1)
    clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709")
    clip.set_output()
    
    opened by banjaminicc 4
  • Support for AITemplate?

    Something came out recently and it looks promising in terms of performance/speed. Would it be possible to implement it for the ESRGAN mode? https://github.com/facebookincubator/AITemplate

    opened by kodxana 4
  • CUDA out of Memory

    System Specs: Ryzen 9 5900HX, NVidia 3070 Mobile, Arch Linux (EndeavorOS) on Kernel 5.17.2

    Whenever I try to run a model that relies on CUDA, for example cugan, the program exits with

    Error: Failed to retrieve frame 0 with error: CUDA out of memory. Tried to allocate 148.00 MiB (GPU 0; 7.80 GiB total capacity; 5.53 GiB already allocated; 68.56 MiB free; 5.69 GiB reserved in total by PyTorch)

    and stops after having output 4 frames.

    However, TensorRT works fine for models that support it (like RealESRGAN for example).

    Edit: Running nvidia-smi while the command is executed reveals that vspipe is allocating GPU Memory, but <2 GiB of VRAM, far from the 8GiB my model has.

    opened by mmkzer0 4
  • No module named 'vsbasicvsrpp'

    Traceback (most recent call last):
      File "src\cython\vapoursynth.pyx", line 2832, in vapoursynth._vpy_evaluate
      File "src\cython\vapoursynth.pyx", line 2833, in vapoursynth._vpy_evaluate
      File "inference.py", line 12, in <module>
        from vsbasicvsrpp import BasicVSRPP
    ModuleNotFoundError: No module named 'vsbasicvsrpp'

    opened by xt851231 4
  • Google colab request?

    I recently stumbled upon this VSGAN-tensorrt-docker and found it so incredible! Could anyone make a google colab notebook that features everything from this VSGAN-tensorrt-docker, so that we could experience the speed of TensorRT! Thanks in advance!

    opened by mikebilly 3
  • model conversion from onnx to trt

    @styler00dollar this is not an issue but a question. I read the scripts in inference.py and found that real-esrgan 2x is loaded from a TRT engine file. Since real-2x uses dynamic shapes as input, could you share any ideas on how to convert this model to TRT? Thanks!

    opened by deism 3
  • ESRGAN with full episode

    Hello,

    I'm trying to upscale MKV files of full episodes with ESRGAN. I tried using vspipe -c y4m inference.py - | ffmpeg -i pipe: example.mkv, and it seems to run up to the point where it starts to give an ETA. Once there, the time doesn't move and eventually it says it was killed.

    Can you give me some tips on how to make this work better? I'm not familiar with most of the tools I've been given.

    opened by Ultramonte 2
  • [SUGGESTION] per-scene processing

    Hi there, this project is awesome, so thanks for your - voluntary - work!

    Since GAN-based processing is quite a heavy computing task, it could be very useful to split it into multiple "segments" to allow parallel/scalable/collaborative/resumable instances.

    We suggest checking @master-of-zen's Av1an framework, which implements this.

    Hope that inspires.

    opened by forart 1