CuPyTorch - A small framework that mimics PyTorch using CuPy or NumPy

Overview

CuPyTorch

CuPyTorch is a miniature PyTorch; the name comes from two things:

  1. Unlike the existing open-source projects that reimplement PyTorch with NumPy, this project supports CUDA computation through CuPy
  2. It is pronounced much like "Cool PyTorch", because reimplementing PyTorch in under 1000 lines of pure Python really is cool

CuPyTorch supports both numpy and cupy as compute backends, implements a large share of commonly used PyTorch functionality, aims for 99% compatibility with PyTorch syntax and semantics, and is easy to extend. The completed features are listed below (a short usage sketch follows the list):

  • tensor:

    • tensor: create a tensor
    • arange: evenly spaced tensor over an interval
    • stack: stack tensors
    • ones/zeros, ones/zeros_like: all-ones / all-zeros tensors
    • rand/randn, rand/randn_like: tensors drawn from a uniform [0, 1) / Gaussian distribution
    • +, -, *, /, @, **: binary numeric operations, plus their reflected (right-hand) and in-place variants
    • >, <, ==, >=, <=, !=: comparison operations
    • &, |, ^: binary logical operations
    • ~, -: bitwise-not / negation operations
    • []: basic and fancy indexing and slicing
    • abs, exp, log, sqrt: element-wise numeric functions
    • sum, mean: reduction operations
    • max/min, amax/amin, argmax/argmin: maximum/minimum values and their indices
  • autograd: automatic differentiation for all operations above that are not integer-only

  • nn:

    • Module: base class for models; manages parameters and supports formatted printing
    • activation: ReLU, GeLU, Sigmoid, Tanh, Softmax, LogSoftmax
    • loss: L1Loss, MSELoss, NLLLoss, CrossEntropyLoss
    • layer: Linear, Dropout, LSTM
  • optim:

    • Optimizer: base class for optimizers; manages parameters and supports formatted printing
    • SGD, Adam: the two most common optimizers
    • lr_scheduler: LambdaLR and StepLR learning-rate schedulers
  • utils.data:

    • DataLoader: iterates over Tensor data in batches, with optional shuffling
    • Dataset: base dataset class, meant to be subclassed
    • TensorDataset: a dataset composed entirely of Tensors
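
A minimal tensor-API sketch is shown below. It only uses operations named in the feature list, but the exact signatures (ones_like as a module-level function, max(dim) returning both values and indices, scalar broadcasting in comparisons) are assumed to follow PyTorch conventions and should be checked against the source:

import cupytorch as ct

x = ct.tensor([[1., 2., 3.], [4., 5., 6.]])  # create a 2x3 tensor
y = ct.ones_like(x)                          # all-ones tensor of the same shape

print(x * 2 + y)            # element-wise arithmetic with broadcasting
print(x @ y.T)              # matrix multiplication, shape (2, 2)
print(x[0, 1:])             # basic slicing
print((x > 2.) & (x < 6.))  # comparison and logical operations
print(x.sum(), x.mean())    # reductions
print(x.max(1))             # per-row maximum values and their indices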

Code statistics from cloc:

Language    files    blank    comment    code
Python         22      353         27     992

Autograd example:

import cupytorch as ct

a = ct.tensor([[-1., 2], [-3., 4.]], requires_grad=True)
b = ct.tensor([[4., 3.], [2., 1.]], requires_grad=True)
c = ct.tensor([[1., 2.], [0., 2.]], requires_grad=True)
d = ct.tensor([1., -2.], requires_grad=True)
e = a @ b.T
f = (c.max(1)[0].exp() + e[:, 0] + b.pow(2) + 2 * d.reshape(2, 1).abs()).mean()
print(f)
f.backward()
print(a.grad)
print(b.grad)
print(c.grad)
print(d.grad)

# tensor(18.889057, grad_fn=<MeanBackward>)
# tensor([[2.  1.5]
#         [2.  1.5]])
# tensor([[0.  4.5]
#         [1.  0.5]])
# tensor([[0.       3.694528]
#         [0.       3.694528]])
# tensor([ 1. -1.])
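
As a quick check on the last gradient printed: d is reshaped to (2, 1) and broadcast across both columns of the 2x2 sum, so each d_i contributes 2 * |d_i| to two of the four entries being averaged; hence df/dd_i = 2 * 2 * sign(d_i) / 4 = sign(d_i), which gives [1., -1.] as shown.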

Handwritten digit recognition example:

from pathlib import Path
import cupytorch as ct
from cupytorch import nn
from cupytorch.optim import SGD
from cupytorch.optim.lr_scheduler import StepLR
from cupytorch.utils.data import TensorDataset, DataLoader


class Net(nn.Module):
    
    def __init__(self, num_pixel: int, num_class: int):
        super().__init__()
        self.num_pixel = num_pixel
        self.fc1 = nn.Linear(num_pixel, 256)
        self.fc2 = nn.Linear(256, 64)
        self.fc3 = nn.Linear(64, num_class)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(0.1)
    
    def forward(self, input: ct.Tensor) -> ct.Tensor:
        output = input.view(-1, self.num_pixel)
        output = self.drop(self.act(self.fc1(output)))
        output = self.drop(self.act(self.fc2(output)))
        return self.fc3(output)


def load(path: Path):
    # define how to load data as tensor
    pass


path = Path('../datasets/MNIST')
train_dl = DataLoader(TensorDataset(load(path / 'train-images-idx3-ubyte.gz'),
                                    load(path / 'train-labels-idx1-ubyte.gz')),
                      batch_size=20, shuffle=True)
test_dl = DataLoader(TensorDataset(load(path / 't10k-images-idx3-ubyte.gz'),
                                   load(path / 't10k-labels-idx1-ubyte.gz')),
                     batch_size=20, shuffle=False)
model = Net(28 * 28, 10)
criterion = nn.CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = StepLR(optimizer, 5, 0.5)

print(model)
print(optimizer)
print(criterion)

for epoch in range(10):
    losses = 0
    for step, (x, y) in enumerate(train_dl, 1):
        optimizer.zero_grad()
        z = model(x)
        loss = criterion(z, y)
        loss.backward()
        optimizer.step()
        losses += loss.item()
        if step % 500 == 0:
            losses /= 500
            print(f'Epoch: {epoch}, Train Step: {step}, Train Loss: {losses:.6f}')
            losses = 0
    scheduler.step()
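
The training script above has no evaluation pass; a matching loop over test_dl could look like the sketch below. This is not part of the repository's example code: Module.train()/eval() and Tensor.shape are assumed to behave as in PyTorch.

# Hedged evaluation sketch; train()/eval() toggling Dropout is an assumption.
model.eval()
correct = total = 0
for x, y in test_dl:
    z = model(x)                         # logits, shape (batch_size, 10)
    pred = z.argmax(1)                   # predicted class index per sample
    correct += (pred == y).sum().item()  # count correct predictions
    total += y.shape[0]
print(f'Test Accuracy: {correct / total:.4f}')
model.train()                            # switch Dropout back on for training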

The examples folder contains two complete examples:

  • Handwritten digit classification on the MNIST dataset with an MLP
  • ATM cash-withdrawal forecasting on the NN5 dataset with an LSTM (a rough usage sketch follows below)
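
The full code for the second example lives in the examples folder; the snippet below is only a rough sketch of how the LSTM layer might be wired for one-step-ahead forecasting. The constructor arguments, the (output, (h, c)) return convention, and the sequence-first input layout are assumptions borrowed from PyTorch's nn.LSTM, not facts about this implementation.

import cupytorch as ct
from cupytorch import nn


class Forecaster(nn.Module):
    # Illustrative model, not the one used in the NN5 example.
    def __init__(self, num_feature: int, num_hidden: int):
        super().__init__()
        # Assumption: nn.LSTM(input_size, hidden_size) mirrors PyTorch.
        self.lstm = nn.LSTM(num_feature, num_hidden)
        self.fc = nn.Linear(num_hidden, 1)

    def forward(self, input: ct.Tensor) -> ct.Tensor:
        # Assumption: the LSTM returns (output, (h, c)) with output shaped
        # (seq_len, batch_size, num_hidden), as in PyTorch.
        output, _ = self.lstm(input)
        return self.fc(output[-1])  # forecast from the last time step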
