Tree Nested PyTorch Tensor Lib

Overview

DI-treetensor


treetensor is a generalized tree-based tensor structure mainly developed by OpenDILab Contributors.

Almost all torch operations are supported in the form of trees, which conveniently simplifies structured processing when the computation is tree-based.

Installation

You can simply install it with pip from the official PyPI site:

pip install di-treetensor
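
After installation, you can run a quick sanity check (note that the distribution name is di-treetensor, while the import name is treetensor):

python -c "import treetensor.torch; print('ok')"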

For more information about installation, you can refer to the Installation section of the documentation.

Documentation

The detailed documentation is hosted at https://opendilab.github.io/DI-treetensor.

Only the English version is available for now; the Chinese documentation is still under development.

Quick Start

You can easily create a tree value object based on FastTreeValue.

import builtins
import os
from functools import partial

import treetensor.torch as torch

print = partial(builtins.print, sep=os.linesep)  # print each argument on its own line

if __name__ == '__main__':
    # create a tree tensor
    t = torch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})
    print(t)
    print(torch.randn(4, 5))  # create a normal tensor
    print()

    # structure of tree
    print('Structure of tree')
    print('t.a:', t.a)  # t.a is a native tensor
    print('t.b:', t.b)  # t.b is a tree tensor
    print('t.b.x', t.b.x)  # t.b.x is a native tensor
    print()

    # math calculations
    print('Math calculation')
    print('t ** 2:', t ** 2)
    print('torch.sin(t).cos()', torch.sin(t).cos())
    print()

    # backward calculation
    print('Backward calculation')
    t.requires_grad_(True)
    t.std().arctan().backward()
    print('grad of t:', t.grad)
    print()

    # native operation
    # all the ops can be used as the original usage of `torch`
    print('Native operation')
    print('torch.sin(t.a)', torch.sin(t.a))  # sin of native tensor

The result should be:

<Tensor 0x7f0dae602760>
├── a --> tensor([[-1.2672, -1.5817, -0.3141],
│                 [ 1.8107, -0.1023,  0.0940]])
└── b --> <Tensor 0x7f0dae602820>
    └── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                      [ 1.5956,  0.8825, -0.5702, -0.2247],
                      [ 0.9235,  0.4538,  0.8775, -0.2642]])

tensor([[-0.9559,  0.7684,  0.2682, -0.6419,  0.8637],
        [ 0.9526,  0.2927, -0.0591,  1.2804, -0.2455],
        [ 0.4699, -0.9998,  0.6324, -0.6885,  1.1488],
        [ 0.8920,  0.4401, -0.7785,  0.5931,  0.0435]])

Structure of tree
t.a:
tensor([[-1.2672, -1.5817, -0.3141],
        [ 1.8107, -0.1023,  0.0940]])
t.b:
<Tensor 0x7f0dae602820>
└── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                  [ 1.5956,  0.8825, -0.5702, -0.2247],
                  [ 0.9235,  0.4538,  0.8775, -0.2642]])

t.b.x
tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
        [ 1.5956,  0.8825, -0.5702, -0.2247],
        [ 0.9235,  0.4538,  0.8775, -0.2642]])

Math calculation
t ** 2:
<Tensor 0x7f0dae602eb0>
├── a --> tensor([[1.6057, 2.5018, 0.0986],
│                 [3.2786, 0.0105, 0.0088]])
└── b --> <Tensor 0x7f0dae60c040>
    └── x --> tensor([[1.4943, 0.1187, 0.9960, 0.1669],
                      [2.5458, 0.7789, 0.3252, 0.0505],
                      [0.8528, 0.2059, 0.7699, 0.0698]])

torch.sin(t).cos()
<Tensor 0x7f0dae621910>
├── a --> tensor([[0.5782, 0.5404, 0.9527],
│                 [0.5642, 0.9948, 0.9956]])
└── b --> <Tensor 0x7f0dae6216a0>
    └── x --> tensor([[0.5898, 0.9435, 0.6672, 0.9221],
                      [0.5406, 0.7163, 0.8578, 0.9753],
                      [0.6983, 0.9054, 0.7185, 0.9661]])


Backward calculation
grad of t:
<Tensor 0x7f0dae60c400>
├── a --> tensor([[-0.0435, -0.0535, -0.0131],
│                 [ 0.0545, -0.0064, -0.0002]])
└── b --> <Tensor 0x7f0dae60cbe0>
    └── x --> tensor([[ 0.0357, -0.0141, -0.0349, -0.0162],
                      [ 0.0476,  0.0249, -0.0213, -0.0103],
                      [ 0.0262,  0.0113,  0.0248, -0.0116]])


Native operation
torch.sin(t.a)
tensor([[-0.9543, -0.9999, -0.3089],
        [ 0.9714, -0.1021,  0.0939]], grad_fn=<SinBackward>)

For more quick-start explanations and further usage, take a look at the documentation.

Extension

If you need to translate a treevalue object into runnable source code, you can use the potc-treevalue plugin, installed with the command below:

pip install DI-treetensor[potc]

With potc, you can translate objects into runnable Python source code, which can afterwards be loaded back into objects by the Python interpreter, as shown in the following diagram:

(figure: potc translation system)
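
A rough sketch of what this translation can look like, assuming potc's transvars helper (an assumption on our part; check the potc documentation for the exact API):

import treetensor.torch as ttorch
from potc import transvars  # assumed potc helper that renders values as source code

t = ttorch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})
code = transvars({'t': t})  # render `t` as runnable Python source
with open('t_dump.py', 'w') as f:
    f.write(code)  # afterwards, `from t_dump import t` should rebuild the object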

For more information, you can refer to the potc documentation.

Contribution

We appreciate all contributions that improve DI-treetensor, in both its logic and its system design. Please refer to CONTRIBUTING.md for more guidance.

Users can also join our Slack communication channel, or contact the core developer HansBug for more detailed discussion.

License

DI-treetensor is released under the Apache 2.0 license.


Comments
  • PyTorch OP List (P0)

    reference: https://pytorch.org/docs/1.8.0/torch.html (a short usage sketch follows the checklist below)

    Common

    • [x] numel
    • [x] cpu
    • [x] cuda
    • [x] to

    Creation Ops

    • [x] torch.zeros_like
    • [x] torch.randn_like
    • [x] torch.randint_like
    • [x] torch.ones_like
    • [x] torch.full_like
    • [x] torch.empty_like
    • [x] torch.zeros
    • [x] torch.randn
    • [x] torch.randint
    • [x] torch.ones
    • [x] torch.full
    • [x] torch.empty

    Indexing, Slicing, Joining, Mutating Ops

    • [x] cat
    • [x] chunk
    • [ ] gather
    • [x] index_select
    • [x] masked_select
    • [x] reshape
    • [ ] scatter
    • [x] split
    • [x] squeeze
    • [x] stack
    • [ ] tile
    • [ ] unbind
    • [x] unsqueeze
    • [x] where

    Math Ops

    Pointwise Ops
    • [x] add
    • [x] sub
    • [x] mul
    • [x] div
    • [x] pow
    • [x] neg
    • [x] abs
    • [x] sign
    • [x] floor
    • [x] ceil
    • [x] round
    • [x] sigmoid
    • [x] clamp
    • [x] exp
    • [x] exp2
    • [x] sqrt
    • [x] log
    • [x] log10
    • [x] log2
    Reduction Ops
    • [ ] argmax
    • [ ] argmin
    • [x] all
    • [x] any
    • [x] max
    • [x] min
    • [x] dist
    • [ ] logsumexp
    • [x] mean
    • [ ] median
    • [x] norm
    • [ ] prod
    • [x] std
    • [x] sum
    • [ ] unique
    Comparison Ops
    • [ ] argsort
    • [x] eq
    • [x] ge
    • [x] gt
    • [x] isfinite
    • [x] isinf
    • [x] isnan
    • [x] le
    • [x] lt
    • [x] ne
    • [ ] sort
    • [ ] topk
    Other Ops
    • [ ] cdist
    • [x] clone
    • [ ] flip

    BLAS and LAPACK Ops

    • [ ] addbmm
    • [ ] addmm
    • [ ] bmm
    • [x] dot
    • [x] matmul
    • [x] mm
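
    As a quick illustration of how a few of the checked ops behave on tree tensors (a minimal sketch; shapes are illustrative only):

    import treetensor.torch as ttorch

    t = ttorch.randn({'a': (2, 3), 'b': {'x': (2, 3)}})
    clamped = t.clamp(-1, 1)            # pointwise ops apply to every leaf
    mask = t.gt(0)                      # comparison ops return boolean trees
    stacked = ttorch.stack([t, t])      # joining ops work tree-wise
    picked = ttorch.masked_select(t, mask)
    print(stacked.a.shape)              # torch.Size([2, 2, 3])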
    enhancement · opened by PaParaZz1 · 3 comments
  • PyTorch OP Doc List

    P0

    • [x] cpu
    • [x] cuda
    • [x] to
    • [x] torch.zeros_like
    • [x] torch.randn_like
    • [x] torch.ones_like
    • [x] torch.zeros
    • [x] torch.randn
    • [x] torch.randint
    • [x] torch.ones
    • [x] cat
    • [x] reshape
    • [x] split
    • [x] squeeze
    • [x] stack
    • [x] unsqueeze
    • [x] where
    • [x] abs
    • [x] add
    • [x] clamp
    • [x] div
    • [x] exp
    • [x] log
    • [x] sqrt
    • [x] sub
    • [x] sigmoid
    • [x] pow
    • [x] mul
    • [ ] argmax
    • [ ] argmin
    • [x] all
    • [x] any
    • [x] max
    • [x] min
    • [x] dist
    • [x] mean
    • [x] std
    • [x] sum
    • [x] eq
    • [x] ge
    • [x] gt
    • [x] le
    • [x] lt
    • [x] ne
    • [x] clone
    • [x] dot
    • [x] matmul
    • [x] mm

    P1

    • [x] numel
    • [x] torch.randint_like
    • [x] torch.full_like
    • [x] torch.empty_like
    • [x] torch.full
    • [x] torch.empty
    • [x] chunk
    • [ ] gather
    • [x] index_select
    • [x] masked_select
    • [ ] scatter
    • [ ] tile
    • [ ] unbind
    • [x] ceil
    • [x] exp2
    • [x] floor
    • [x] log10
    • [x] log2
    • [x] neg
    • [x] round
    • [x] sign
    • [ ] bmm

    P2

    • [ ] logsumexp
    • [ ] median
    • [x] norm
    • [ ] prod
    • [ ] unique
    • [ ] argsort
    • [x] isfinite
    • [x] isinf
    • [x] isnan
    • [ ] sort
    • [ ] topk
    • [ ] cdist
    • [ ] flip
    • [ ] addbmm
    • [ ] addmm
    opened by PaParaZz1 · 2 comments
  • dev(hansbug): add stream support for paralleling the calculations in tree

    Here is an example:

    import time
    
    import numpy as np
    import torch
    
    import treetensor.torch as ttorch
    
    N, M, T = 200, 2, 50
    S1, S2, S3 = 512, 1024, 2048
    
    
    def test_min():
        # Baseline: tree matmul over a reduced tree with only N // M keys.
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N // M)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N // M)}, device='cuda')
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_native():
        # Native baseline: loop over a plain dict of torch tensors, no tree.
        a = {f'a{i}': torch.randn(S1, S2, device='cuda') for i in range(N)}
        b = {f'a{i}': torch.randn(S2, S3, device='cuda') for i in range(N)}
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            for key in a.keys():
                _ = torch.matmul(a[key], b[key])
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_linear():
        # Full-size tree matmul without streams (leaves computed sequentially).
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N)}, device='cuda')
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_stream():
        # Full-size tree matmul with M CUDA streams enabled.
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N)}, device='cuda')
    
        ttorch.stream(M)  # enable M CUDA streams for tree calculations (the API added in this PR)
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def warmup():
        # warm up
        a = torch.randn(1024, 1024).cuda()
        b = torch.randn(1024, 1024).cuda()
        for _ in range(20):
            c = torch.matmul(a, b)
    
    
    if __name__ == '__main__':
        warmup()
        test_min()
        test_native()
        test_linear()
        test_stream()
    
    

    To be honest, though, the real-world effect of this stream support is quite fragile: it is very sensitive to tensor size (neither too large nor too small helps), it requires sufficient GPU capability, and when mishandled it can easily turn into a pessimization. All in all it is hard to get right, and this part needs further study before it can be made practical.
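
    For intuition, the mechanism being benchmarked resembles round-robin dispatch of the per-leaf matmuls across several torch.cuda.Stream objects, roughly like the hand-rolled sketch below (an illustration only, not the PR's actual implementation):

    import torch

    def matmul_streamed(a: dict, b: dict, n_streams: int = 2) -> dict:
        # Round-robin independent matmuls across CUDA streams so kernels can overlap.
        # Requires a CUDA device; `a` and `b` map keys to CUDA tensors.
        streams = [torch.cuda.Stream() for _ in range(n_streams)]
        out = {}
        for idx, key in enumerate(a):
            with torch.cuda.stream(streams[idx % n_streams]):
                out[key] = torch.matmul(a[key], b[key])
        torch.cuda.synchronize()  # wait for all queued kernels to finish
        return out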

    enhancement · opened by HansBug · 1 comment
  • Failure when trying to convert between numpy and torch on Windows with Python 3.10

    See here: https://github.com/opendilab/DI-treetensor/runs/7820313811?check_suite_focus=true

    The bug looks like this:

        @method_treelize(return_type=_get_tensor_class)
        def tensor(self: numpy.ndarray, *args, **kwargs):
    >       tensor_: torch.Tensor = torch.from_numpy(self)
    E       RuntimeError: Numpy is not available
    

    The only way I found to 'solve' this is to downgrade Python to version 3.9 or lower, so these tests will be skipped temporarily.

    bug · opened by HansBug · 0 comments
Releases (v0.4.0)
  • v0.4.0 (Aug 14, 2022)

    What's Changed

    • dev(hansbug): remove support for py3.6 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/12
    • pytorch upgrade to 1.12 by @zjowowen in https://github.com/opendilab/DI-treetensor/pull/11
    • dev(hansbug): add test for torch1.12.0 and python3.10 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/13
    • dev(hansbug): add stream support for paralleling the calculations in tree by @HansBug in https://github.com/opendilab/DI-treetensor/pull/10

    New Contributors

    • @zjowowen made their first contribution in https://github.com/opendilab/DI-treetensor/pull/11

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.3.0...v0.4.0

  • v0.3.0 (Jul 15, 2022)

    What's Changed

    • dev(hansbug): use newer version of treevalue 1.4.1 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/9

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.2.1...v0.3.0

  • v0.2.1 (Mar 22, 2022)

    What's Changed

    • fix(hansbug): fix uncompitable problem with walk by @HansBug in https://github.com/opendilab/DI-treetensor/pull/5
    • dev(hansbug): add tensor method for treetensor.numpy.ndarray by @HansBug in https://github.com/opendilab/DI-treetensor/pull/6
    • fix(hansbug): add subside support to all the functions. by @HansBug in https://github.com/opendilab/DI-treetensor/pull/7
    • doc(hansbug): add documentation for np.stack, np.split and other 3 functions. by @HansBug in https://github.com/opendilab/DI-treetensor/pull/8
    • release(hansbug): use version 0.2.1 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/4

    New Contributors

    • @HansBug made their first contribution in https://github.com/opendilab/DI-treetensor/pull/5

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.2.0...v0.2.1

  • v0.2.0 (Jan 4, 2022)

    • Use newer version of treevalue>=1.2.0
    • Add support of torch 1.10.0
    • Add support of potc

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.1.0...v0.2.0

  • v0.1.0 (Dec 26, 2021)

  • v0.0.1 (Sep 30, 2021)
