tf_alloc - Simplification of GPU allocation for TensorFlow 2


tf_alloc

Simplifying GPU allocation for TensorFlow

  • Developer: korkite (Junseo Ko)

Installation

pip install tf-alloc

⭐️ Why tf_alloc? Problems?

  • Unlike PyTorch, TensorFlow allocates all GPU memory to a single training run by default.
  • This is wasteful, because many training runs do not need the whole GPU memory.
  • To work around this, TensorFlow engineers typically use two methods (a sketch of both follows below):
  1. Restrict training to a single GPU.
  2. Limit memory use to a certain percentage of a GPU.
  • However, both methods require extra code and manual memory management.
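
For reference, a minimal sketch of what those two manual methods typically look like with the standard TensorFlow 2 configuration API (the device index and the 4096 MB limit are placeholder values, not taken from tf_alloc):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')

# Method 1: make only the first GPU visible to TensorFlow.
tf.config.set_visible_devices(gpus[0], 'GPU')

# Method 2: cap how much memory TensorFlow may take on that GPU (here 4096 MB).
# Both calls must run before any op touches the GPU.
tf.config.set_logical_device_configuration(
    gpus[0],
    [tf.config.LogicalDeviceConfiguration(memory_limit=4096)],
)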

⭐️ Why tf_alloc? How to solve?

tf_alloc simplifies and automates GPU allocation using these two methods.

⭐️ How to allocate?

  • Before using tf_alloc, install a TensorFlow build that fits your environment.
  • This library does not install a specific TensorFlow version for you.
# At the top of your code, before importing tensorflow
from tf_alloc import allocate as talloc
talloc(gpu=1, percentage=0.5)

import tensorflow as tf
""" your code"""

This is all the code needed to allocate a certain percentage of a GPU.

Parameters:

  • gpu = the ID of the GPU you want to use (if you have two GPUs, the valid IDs are 0 and 1)
  • percentage = the fraction of memory to use on the selected GPU; 1.0 uses the whole GPU.
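
Putting it together, a minimal end-to-end sketch (the GPU ID 0, the 0.3 ratio, and the toy Keras model are illustrative placeholders, not part of tf_alloc):

from tf_alloc import allocate as talloc
talloc(gpu=0, percentage=0.3)  # claim at most 30% of GPU 0

import tensorflow as tf

# Ordinary TensorFlow code now runs inside the allocated slice.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")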

⭐️ Additional Functions

GET GPU Objects

from tf_alloc import get_gpu_objects  # import path assumed; not shown in the original
gpu_objs = get_gpu_objects()
  • Calling this function returns GPU objects that contain the GPU information.
  • You can set the GPU backend using these objects.

GET CURRENT STATE

Default
current(
    gpu_id = False, 
    total_memory=False, 
    used = False, 
    free = False, 
    percentage_of_use = False,
    percentage_of_free = False,
)
  • You can use this function to see the current GPU state and the maximum allocation percentage currently possible.
  • Called without any parameters, it only shows the maximum allocation percentage currently possible.
  • It is a command-line visualizer; it does not return any values.

Parameters

  • gpu_id = show the GPU ID number
  • total_memory = show the total memory of the GPU
  • used = show the memory currently used on the GPU
  • free = show the memory currently free on the GPU
  • percentage_of_use = show the percentage of GPU memory currently used
  • percentage_of_free = show the percentage of GPU memory currently free
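
As an illustration, a call that prints the full state of every GPU might look like the sketch below (the import path for current is assumed to mirror allocate; it is not shown above):

from tf_alloc import current  # assumed import path

current(
    gpu_id=True,
    total_memory=True,
    used=True,
    free=True,
    percentage_of_use=True,
    percentage_of_free=True,
)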

Quick summary!

Installation

pip install tf-alloc

The problem:

  • Unlike PyTorch, TensorFlow allocates the entire GPU as soon as training starts.
  • But because the GPU is usually not fully used, a lot of memory is wasted.
  • Two methods are used to prevent this:
  1. Restrict training to a single GPU.
  2. Limit memory use on the GPU to a specific amount.
  • However, both methods require complex code and memory management.

The solution:

  • To solve this, I wrote code that automatically decides which GPU to use and how much of it to allocate.
  • You only need to call a single function.
# At the top of your code, before importing tensorflow
from tf_alloc import allocate as talloc
talloc(gpu=1, percentage=0.5)

import tensorflow as tf
""" your code"""
  • At the top of your code, import the allocate function from tf_alloc and call it with the gpu and percentage parameters.
  • It then automatically decides which GPU to use and what fraction of it to allocate.
  • It is very easy.

Parameter description

  • gpu = the ID of the GPU to use (if you have two GPUs, the IDs are 0 and 1).

  • percentage = the fraction of the selected GPU to use (1.0 uses the entire GPU).

  • If you are not sure what percentage will fit, just try a value between 0 and 1: if it is too large, the error message reports the maximum amount currently available, so you can use it without worry. Because it does not interfere with other training runs, this is much safer than allocating while watching nvidia-smi.

  • Only the core features are summarized here; for the other functions, please see the English section above.
