Torchsort

Fast, differentiable sorting and ranking in PyTorch.

Pure PyTorch implementation of Fast Differentiable Sorting and Ranking (Blondel et al.). Much of the code is copied from the original Numpy implementation at google-research/fast-soft-sort, with the isotonic regression solver rewritten as a PyTorch C++ and CUDA extension.

Install

pip install torchsort

To build the CUDA extension you will need the CUDA toolchain installed. If you want to build in an environment without a CUDA runtime (e.g. docker), you will need to export the environment variable TORCH_CUDA_ARCH_LIST="Pascal;Volta;Turing;Ampere" before installing.

Usage

torchsort exposes two functions: soft_rank and soft_sort, each with parameters regularization ("l2" or "kl") and regularization_strength (a scalar value). Each will rank/sort the last dimension of a 2-d tensor, with an accuracy dependent upon the regularization strength:

import torch
import torchsort

x = torch.tensor([[8., 0., 5., 3., 2., 1., 6., 7., 9.]])

torchsort.soft_sort(x, regularization_strength=1.0)
# tensor([[0.5556, 1.5556, 2.5556, 3.5556, 4.5556, 5.5556, 6.5556, 7.5556, 8.5556]])
torchsort.soft_sort(x, regularization_strength=0.1)
# tensor([[-0., 1., 2., 3., 5., 6., 7., 8., 9.]])

torchsort.soft_rank(x)
# tensor([[8., 1., 5., 4., 3., 2., 6., 7., 9.]])
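
The regularization argument can also be passed explicitly. A minimal sketch using the "kl" regularizer (output values omitted here, since they depend on the regularization strength):

torchsort.soft_rank(x, regularization="kl", regularization_strength=1.0)
torchsort.soft_sort(x, regularization="kl", regularization_strength=0.1)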

Both operations are fully differentiable, on CPU or GPU:

x = torch.tensor([[8., 0., 5., 3., 2., 1., 6., 7., 9.]], requires_grad=True).cuda()
y = torchsort.soft_sort(x)

torch.autograd.grad(y[0, 0], x)
# (tensor([[0.1111, 0.1111, 0.1111, 0.1111, 0.1111, 0.1111, 0.1111, 0.1111, 0.1111]],
#         device='cuda:0'),)

Example

Spearman's Rank Coefficient

Spearman's rank correlation coefficient is a useful metric for measuring how monotonically related two variables are. We can use torchsort to create a differentiable Spearman's rank coefficient function, so that we can optimize a model directly for this metric:

import torch
import torchsort

def spearmanr(pred, target, **kw):
    pred = torchsort.soft_rank(pred, **kw)
    target = torchsort.soft_rank(target, **kw)
    pred = pred - pred.mean()
    pred = pred / pred.norm()
    target = target - target.mean()
    target = target / target.norm()
    return (pred * target).sum()

pred = torch.tensor([[1., 2., 3., 4., 5.]], requires_grad=True)
target = torch.tensor([[5., 6., 7., 8., 7.]])
spearman = spearmanr(pred, target)
# tensor(0.8321)

torch.autograd.grad(spearman, pred)
# (tensor([[-5.5470e-02,  2.9802e-09,  5.5470e-02,  1.1094e-01, -1.1094e-01]]),)
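
Since the coefficient is differentiable, its negation can be used directly as a training loss. Below is a minimal sketch, assuming a placeholder model and randomly generated x/target tensors (none of these names come from torchsort):

# minimal sketch: `model`, `x`, and `target` are placeholders, not part of torchsort
model = torch.nn.Linear(10, 1)
x = torch.randn(8, 10)
target = torch.randn(1, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    pred = model(x).view(1, -1)  # soft_rank expects a 2-d (batch, n) tensor
    loss = -spearmanr(pred, target, regularization_strength=1.0)  # maximize the correlation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()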

Benchmark

[Figure: CPU benchmark]

torchsort and fast_soft_sort each operate with a time complexity of O(n log n), each with some additional overhead compared to the built-in torch.sort. With a batch size of 1 (left panel of the figure above), the Numba JIT'd forward pass of fast_soft_sort performs roughly on par with the torchsort CPU kernel; however, its backward pass still relies on some Python code, which greatly penalizes its performance.

Furthermore, the torchsort kernel supports batches, and yields much better performance than fast_soft_sort as the batch size increases.
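
For example, a whole batch of sequences can be ranked in a single call; a minimal sketch (shapes are illustrative):

x = torch.randn(128, 100)       # batch of 128 sequences, each of length 100
ranks = torchsort.soft_rank(x)  # each row is ranked independently; output shape (128, 100)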

[Figure: CUDA benchmark]

The torchsort CUDA kernel performs quite well for sequence lengths under ~2000, and scales to extremely large batch sizes. In the future the CUDA kernel can likely be further optimized to achieve performance closer to that of the built-in torch.sort.

Reference

@inproceedings{blondel2020fast,
  title={Fast differentiable sorting and ranking},
  author={Blondel, Mathieu and Teboul, Olivier and Berthet, Quentin and Djolonga, Josip},
  booktitle={International Conference on Machine Learning},
  pages={950--959},
  year={2020},
  organization={PMLR}
}
Comments
  • Failing to recognise torchsort cuda even when torch with cuda is successfully installed

    Thanks for the awesome package!

    system: linux (arch) python: 3.8

    So I've got a weird chicken-and-egg issue.

    I successfully installed pytorch using the suggested conda command

    conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

    and verify it recognises the GPU: a = torch.tensor([1,2,3]).cuda(device=0)

    Then I run the demo code

    x = torch.tensor([[8., 0., 5., 3., 2., 1., 6., 7., 9.]], requires_grad=True).cuda()
    y = torchsort.soft_sort(x)
    

    and it gives me the error

    ImportError: You are trying to use the torchsort CUDA extension, but it looks like it is not available. Make sure you have the CUDA toolchain installed, and reinstall torchsort with `pip install --force-reinstall --no-cache-dir torchsort` to rebuild the extension.
    

    ...so I go to install it again as suggested. This is where the first weirdness happens: torchsort tries to download pytorch again!

    Collecting torch
    Downloading torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl (776.3 MB)
    ...
    Attempting uninstall: torch
    Found existing installation: torch 1.12.1
    Uninstalling torch-1.12.1:
    Successfully uninstalled torch-1.12.1
    ...
    Successfully installed torch-1.12.1 torchsort-0.1.9 typing-extensions-4.4.0
    

    So it succeeds, but when I go back into the code the GPU is no longer recognised!

    And then I re-install again with the conda snippet and pytorch itself works again (but torchsort with CUDA doesn't).

    Is there some procedure I'm missing? Or some python version weirdness that's breaking things?

    cheers!

    opened by MushroomHunting 13
  • pip install failed in windows

    Hi, I faced an installation error on Windows. It installed fine on my Ubuntu system. Could you tell me how I can fix it?

    Internal error: assertion failed at: "C:/dvs/p4/build/sw/rel/gpu_drv/r400/r400_00/drivers/compiler/edg/EDG_4.14/src/decl_spec.c", line 9596

    1 catastrophic error detected in the compilation of "C:/Users/Reasat/AppData/Local/Temp/tmpxft_000028b0_00000000-5_isotonic_cuda.cpp4.ii".
    Compilation aborted.
    isotonic_cuda.cu
    nvcc error   : 'cudafe++' died with status 0xC0000409
    error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.0\\bin\\nvcc.exe' failed with exit status 9
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "C:\Users\Reasat\AppData\Roaming\Python\Python37\site-packages\colorama\ansitowin32.py", line 59, in closed
        return stream.closed
    ValueError: underlying buffer has been detached
    ----------------------------------------
    ERROR: Command errored out with exit status 1: 'D:\Miniconda3\envs\pytorch\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Reasat\\AppData\\Local\\Temp\\pip-install-uw3b5i5w\\torchsort_f8d66d1aaac644a78d37585cc7273f94\\setup.py'"'"'; __file__='"'"'C:\\Users\\Reasat\\AppData\\Local\\Temp\\pip-install-uw3b5i5w\\torchsort_f8d66d1aaac644a78d37585cc7273f94\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Reasat\AppData\Local\Temp\pip-record-nbabfu8b\install-record.txt' --single-version-externally-managed --compile --install-headers 'D:\Miniconda3\envs\pytorch\Include\torchsort' Check the logs for full command output.
    
    opened by Reasat 12
  • cuda TypeError: 'NoneType' object is not callable

    >>> import torch
    >>> import torchsort
    >>> x = torch.tensor([[8., 0., 5., 3., 2., 1., 6., 7., 9.]], requires_grad=True).cuda()
    >>> y = torchsort.soft_sort(x)
    
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/shuiy/anaconda3/envs/pytorch_py3/lib/python3.7/site-packages/torchsort/ops.py", line 48, in soft_sort
        return SoftSort.apply(values, regularization, regularization_strength)
      File "/home/shuiy/anaconda3/envs/pytorch_py3/lib/python3.7/site-packages/torchsort/ops.py", line 132, in forward
        sol = isotonic_l2[s.device.type](w - s)
    TypeError: 'NoneType' object is not callable
    

    On a Jupyter notebook it is:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /tmp/ipykernel_8938/1075883647.py in <module>
          1 x = torch.tensor([[8., 0., 5., 3., 2., 1., 6., 7., 9.]], requires_grad=True).cuda()
    ----> 2 y = torchsort.soft_sort(x)
    
    ~/anaconda3/envs/pytorch_py3/lib/python3.7/site-packages/torchsort/ops.py in soft_sort(values, regularization, regularization_strength)
         46     if regularization not in ["l2", "kl"]:
         47         raise ValueError(f"'regularization' should be a 'l2' or 'kl'")
    ---> 48     return SoftSort.apply(values, regularization, regularization_strength)
         49 
         50 
    
    ~/anaconda3/envs/pytorch_py3/lib/python3.7/site-packages/torchsort/ops.py in forward(ctx, tensor, regularization, regularization_strength)
        130         # note reverse order of args
        131         if ctx.regularization == "l2":
    --> 132             sol = isotonic_l2[s.device.type](w - s)
        133         else:
        134             sol = isotonic_kl[s.device.type](w, s)
    
    TypeError: 'NoneType' object is not callable
    

    If x is on the CPU, the code runs fine. Python 3.7.10, PyTorch 1.9.0, cudatoolkit=11.1, Ubuntu 18.04.

    opened by shuiyuejihua 11
  • Problem installing when no GPU present (in docker build step for example)

    It doesn't install during the docker build phase (which does not have GPUs configured).

    I get the error: /root/miniconda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /opt/conda/conda-bld/pytorch_1607370172916/work/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0

    If I install on the same image after running it with GPUs enabled it installs fine.

    opened by pcnudde 8
  • Unable to install with CUDA

    Hi, I'm excited to use this package but unfortunately am having issues getting it working with CUDA. I am using a conda env and have followed the steps in the README related to that. My torch version is 1.11.0 and my cudatoolkit version is 11.3.1. My Python version is 3.8.13 on a Linux machine if that is relevant.

    From a fresh environment:

    conda install -c pytorch pytorch torchvision cudatoolkit=11.3
    pip install torchsort 
    

    then if I try to use torchsort on a CUDA tensor, I get ImportError: You are trying to use the torchsort CUDA extension, but it looks like it is not available. Make sure you have the CUDA toolchain installed, and reinstall torchsort with `pip install --force-reinstall --no-cache-dir torchsort` to rebuild the extension. (which I have tried a few times now).

    Any help in getting this working would be amazing! Thanks so much!

    opened by sachit-menon 7
  • Hi, NVIDIA CUDA version >= 11.4 does not seem to install successfully.

    I tried to install the package on a Tesla A100 and a GeForce RTX 3090, both with CUDA version 11.4, and both failed. Can you provide some help please? Thank you very much!

    opened by XiaoqiWang 7
  • Any help with a pip3 install --user issue?

    Here is my error code:

    Using legacy 'setup.py install' for torchsort, since package 'wheel' is not installed.
    Installing collected packages: torchsort
        Running setup.py install for torchsort ... \       error
    ERROR: Command errored out with exit status 1:  
         command: /share/software/user/open/python/3.9.0/bin/python3.9 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rc2snp_l/torchsort_02ab43cb664b4f778f8057965fe69d12/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rc2snp_l/torchsort_02ab43cb664b4f778f8057965fe69d12/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-0etipwtv/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/users/huangda/.local/include/python3.9/torchsort
             cwd: /tmp/pip-install-rc2snp_l/torchsort_02ab43cb664b4f778f8057965fe69d12/
        Complete output (31 lines):  
        No CUDA runtime is found, using CUDA_HOME='/share/software/user/open/cuda/11.2.0'  
        running install  
        running build  
        running build_py  
        creating build  
        creating build/lib.linux-x86_64-3.9  
        creating build/lib.linux-x86_64-3.9/torchsort  
        copying torchsort/__init__.py -> build/lib.linux-x86_64-3.9/torchsort  
        copying torchsort/ops.py -> build/lib.linux-x86_64-3.9/torchsort  
        running egg_info  
        writing torchsort.egg-info/PKG-INFO  
        writing dependency_links to torchsort.egg-info/dependency_links.txt    
        writing requirements to torchsort.egg-info/requires.txt  
        writing top-level names to torchsort.egg-info/top_level.txt  
        /share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/utils/cpp_extension.py:369: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.  
          warnings.warn(msg.format('we could not find ninja.'))  
        reading manifest file 'torchsort.egg-info/SOURCES.txt'  
        reading manifest template 'MANIFEST.in'  
        writing manifest file 'torchsort.egg-info/SOURCES.txt'  
        copying torchsort/isotonic_cpu.cpp -> build/lib.linux-x86_64-3.9/torchsort  
        copying torchsort/isotonic_cuda.cu -> build/lib.linux-x86_64-3.9/torchsort  
        running build_ext   
        building 'torchsort.isotonic_cpu' extension  
        creating build/temp.linux-x86_64-3.9  
        creating build/temp.linux-x86_64-3.9/torchsort  
    gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include/TH -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include/THC -I/share/software/user/open/python/3.9.0/include/python3.9 -c torchsort/isotonic_cpu.cpp -o build/temp.linux-x86_64-3.9/torchsort/isotonic_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=isotonic_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
        c++ -pthread -shared -L/share/software/user/open/libffi/3.2.1/lib64 -L/share/software/user/open/libressl/3.2.1/lib -L/share/software/user/open/sqlite/3.18.0/lib -L/share/software/user/open/tcltk/8.6.6/lib -L/share/software/user/open/xz/5.2.3/lib -L/share/software/user/open/zlib/1.2.11/lib build/temp.linux-x86_64-3.9/torchsort/isotonic_cpu.o -L/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/lib -L/share/software/user/open/python/3.9.0/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.9/torchsort/isotonic_cpu.cpython-39-x86_64-linux-gnu.so
        building 'torchsort.isotonic_cuda' extension
        /share/software/user/open/cuda/11.2.0/bin/nvcc -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include/TH -I/share/software/user/open/py-pytorch/1.8.1_py39/lib/python3.9/site-packages/torch/include/THC -I/share/software/user/open/cuda/11.2.0/include -I/share/software/user/open/python/3.9.0/include/python3.9 -c torchsort/isotonic_cuda.cu -o build/temp.linux-x86_64-3.9/torchsort/isotonic_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=isotonic_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_70,code=compute_70 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -ccbin gcc -std=c++14
        nvcc error   : 'cicc' died due to signal 9 (Kill signal)  
        error: command '/share/software/user/open/cuda/11.2.0/bin/nvcc' failed with exit code 9  
        ----------------------------------------
    ERROR: Command errored out with exit status 1: /share/software/user/open/python/3.9.0/bin/python3.9 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rc2snp_l/torchsort_02ab43cb664b4f778f8057965fe69d12/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rc2snp_l/torchsort_02ab43cb664b4f778f8057965fe69d12/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-0etipwtv/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/users/huangda/.local/include/python3.9/torchsort Check the logs for full command output.
    

    Has anybody else experienced this before? The full command I am running is:

    TORCH_CUDA_ARCH_LIST="Pascal;Volta;Turing;Ampere" pip3 install --user torchsort

    opened by derekahuang 6
  • soft_rank has memory leak?

    Hi, I have installed the main branch, and I'm seeing that the torchsort.soft_rank function is causing memory leaks. Looking at nvidia-smi, it does not free up any memory, and I see the following printed out over and over:

    [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
    
    opened by ashkan-leo 6
  • RuntimeError: CUDA error: an illegal memory access was encountered

    I got the below error when training my network for a while (10-20 epochs):

    Traceback (most recent call last):
      File "train.py", line 232, in <module>
        main()
      File "train.py", line 179, in main
        train(cfg, train_loader, model, criterion, optimizer, lr_scheduler, epoch, final_output_dir, tb_log_dir, writer_dict)
      File "/home/maxchu/Fin/numerai_dev/function.py", line 62, in train
        loss, loss_indv = criterion(pred, target, auto_pred, auto_target)
      File "/home/maxchu/fin_venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/maxchu/Fin/numerai_dev/loss.py", line 39, in forward
        - 0.1 * spearman(pred, target, regularization_strength=1e-2),
      File "/home/maxchu/Fin/numerai_dev/loss.py", line 24, in spearman
        pred = torchsort.soft_rank(
      File "/home/maxchu/fin_venv/lib/python3.8/site-packages/torchsort-0.1.4-py3.8-linux-x86_64.egg/torchsort/ops.py", line 40, in soft_rank
        return SoftRank.apply(values, regularization, regularization_strength)
      File "/home/maxchu/fin_venv/lib/python3.8/site-packages/torchsort-0.1.4-py3.8-linux-x86_64.egg/torchsort/ops.py", line 96, in forward
        ret = (s - dual_sol).gather(1, inv_permutation)
    RuntimeError: CUDA error: an illegal memory access was encountered

    Some details:

    1. System: Ubuntu 18.06
    2. Python 3.8 using venv
    3. Install method: Manual compile (git clone -> python setup.py install)
    4. PyTorch 1.8.0 with cuda 10.2 (and corresponding pytorch geometric package)

    Please let me know if you need more information.

    opened by MaxChu719 6
  • Cannot install using poetry

    Hello, I am installing the package using poetry and I get the following error. Any tips on how I can do this?

    
      Command ['/home/ashkan/w/numersub/.venv/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/home/ashkan/w/numersub/.venv', '--no-deps', '/home/ashkan/.cache/pypoetry/artifacts/0e/71/4c/36d2482ca69c4e6c8ff5dbed48ecc16ad2bf8ea2c110395f1b50bfffc5/torchsort-0.1.9.tar.gz'] errored with the following return code 1, and output:
      Processing /home/ashkan/.cache/pypoetry/artifacts/0e/71/4c/36d2482ca69c4e6c8ff5dbed48ecc16ad2bf8ea2c110395f1b50bfffc5/torchsort-0.1.9.tar.gz
        Installing build dependencies: started
        Installing build dependencies: finished with status 'done'
        Getting requirements to build wheel: started
        Getting requirements to build wheel: finished with status 'error'
        error: subprocess-exited-with-error
    
        × Getting requirements to build wheel did not run successfully.
        │ exit code: 1
        ╰─> [17 lines of output]
            Traceback (most recent call last):
              File "/home/ashkan/w/numersub/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
                main()
              File "/home/ashkan/w/numersub/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
                json_out['return_val'] = hook(**hook_input['kwargs'])
              File "/home/ashkan/w/numersub/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel
                return hook(config_settings)
              File "/tmp/pip-build-env-zc4jpjhm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
                return self._get_build_requires(config_settings, requirements=['wheel'])
              File "/tmp/pip-build-env-zc4jpjhm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
                self.run_setup()
              File "/tmp/pip-build-env-zc4jpjhm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 483, in run_setup
                super(_BuildMetaLegacyBackend,
              File "/tmp/pip-build-env-zc4jpjhm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 335, in run_setup
                exec(code, locals())
              File "<string>", line 8, in <module>
            ModuleNotFoundError: No module named 'torch'
            [end of output]
    
        note: This error originates from a subprocess, and is likely not a problem with pip.
      error: subprocess-exited-with-error
    
      × Getting requirements to build wheel did not run successfully.
      │ exit code: 1
      ╰─> See above for output.
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    
    
    
    
    opened by ashkan-leo 5
  • isotonic_cpu.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor8data_ptrIfEEPT_v

    Hello, thank you for the library. I am trying to use the spearmanr example from the main README.md on V100 and A100 GPUs but am getting the error below. cuda 11.3, pytorch 1.10.0, python 3.7.11, torchsort 0.1.7

    File "/truba/home/fkahraman/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torchsort/ops.py", line 18, in from .isotonic_cpu import isotonic_kl as isotonic_kl_cpu ImportError: /truba/home/fkahraman/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torchsort/isotonic_cpu.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor8data_ptrIfEEPT_v from .isotonic_cpu import isotonic_kl as isotonic_kl_cpu ImportError: /truba/home/fkahraman/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torchsort/isotonic_cpu.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor8data_ptrIfEEPT_v ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 99273) of binary: /truba/home/fkahraman/miniconda3/envs/openmmlab/bin/python

    opened by fehmikahraman 5
  • Using TorchSort for TSP Solver

    Hello,

    I am using torchsort.soft_rank to get ranked indices from the model output logits, and then calculate a loss function for TSP (the Travelling Salesman Problem) as follows. (I had previously tried this with argsort but switched to soft_rank, as argsort is not differentiable.)

    points = torch.rand(args.funcd, 2).cuda().requires_grad_()
    
    def path_reward(rank):
        #a = points[rank]
        a = points.index_select(0, rank)
        b = torch.cat((a[1:], a[0].unsqueeze(0)))
        return pdist(a,b).sum()
    
    rank = torchsort.soft_rank(logits, regularization_strength=0.001).floor().long() - 1
    

    However, the network weights are still not getting updated by the optimizer. I suspect this may be because of index_select, as I have read that it is non-differentiable with respect to the index. Could you recommend an alternative solution for solving TSP via soft_rank?

    Happy new year.

    Sincerely, Kamer

    opened by kayuksel 1
  • CUDA benchmarks might be misleading

    I wanted to try to improve/modify the torchsort code a little so I tried making a copy of the SoftSort class and the soft_sort function.

    Running some benchmarks I got the following results: [figures: benchmark_custom and benchmark_custom_cuda]

    Which was worrying. The carbon copy diverges at a similar point to the figure in the readme.

    I then re-ran the benchmark with the exact same function twice (not even a copy) and got the same results.

    That code can be found here:

    import sys
    from collections import defaultdict
    from timeit import timeit
    
    import matplotlib.pyplot as plt
    import torch
    
    import torchsort
    
    try:
        import fast_soft_sort.pytorch_ops as fss
    except ImportError:
        print("install fast_soft_sort:")
        print("pip install git+https://github.com/google-research/fast-soft-sort")
        sys.exit()
    
    
    N = list(range(1, 5_000, 100))
    B = [2 ** i for i in range(9)]
    B_CUDA = [2 ** i for i in range(13)]
    SAMPLES = 100
    CONVERT = 1e-6  # convert seconds to micro-seconds
    
    
    def time(f):
        return timeit(f, number=SAMPLES) / SAMPLES / CONVERT
    
    
    def backward(f, x):
        y = f(x)
        torch.autograd.grad(y.sum(), x)
    
    
    def style(name):
        if name == "torch.sort":
            return {"color": "blue"}
        linestyle = "--" if "backward" in name else "-"
        if "fast_soft_sort" in name:
            return {"color": "green", "linestyle": linestyle}
        elif "again" in name:
            return {"color": "red", "linestyle": linestyle}
        else:
            return {"color": "orange", "linestyle": linestyle}
    
    
    def batch_size(ax):
        data = defaultdict(list)
        for b in B:
            x = torch.randn(b, 100)
            # data["torch.sort"].append(time(lambda: torch.sort(x)))
            data["torchsort"].append(time(lambda: torchsort.soft_sort(x)))
            data["torchsort_again"].append(time(lambda: torchsort.soft_sort(x)))
            # data["fast_soft_sort"].append(time(lambda: fss.soft_sort(x)))
            x = torch.randn(b, 100, requires_grad=True)
            data["torchsort (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
            data["torchsort_again (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
            # data["fast_soft_sort (with backward)"].append(
            #     time(lambda: backward(fss.soft_sort, x))
            # )
    
        for label in data.keys():
            ax.plot(B, data[label], label=label, **style(label))
        ax.set_xlabel("Batch Size")
        ax.set_ylim(0, 5000)
        ax.set_ylabel("Execution Time (μs)")
        ax.legend()
    
    
    def sequence_length(ax):
        data = defaultdict(list)
        for n in N:
            x = torch.randn(1, n)
            # data["torch.sort"].append(time(lambda: torch.sort(x)))
            data["torchsort"].append(time(lambda: torchsort.soft_sort(x)))
            data["torchsort_again"].append(time(lambda: torchsort.soft_sort(x)))
            # data["fast_soft_sort"].append(time(lambda: fss.soft_sort(x)))
            x = torch.randn(1, n, requires_grad=True)
            data["torchsort (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
            data["torchsort_again (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
            # data["fast_soft_sort (with backward)"].append(
            #     time(lambda: backward(fss.soft_sort, x))
            # )
    
        for label in data.keys():
            ax.plot(N, data[label], label=label, **style(label))
        ax.set_xlabel("Sequence Length")
        ax.set_ylim(0, 1000)
        ax.set_ylabel("Execution Time (μs)")
        ax.legend()
    
    
    def batch_size_cuda(ax):
        data = defaultdict(list)
        for b in B_CUDA:
            x = torch.randn(b, 100).cuda()
            # data["torch.sort"].append(time(lambda: torch.sort(x)))
            data["torchsort"].append(time(lambda: torchsort.soft_sort(x)))
            data["torchsort_again"].append(time(lambda: torchsort.soft_sort(x)))
            x = torch.randn(b, 100, requires_grad=True).cuda()
            data["torchsort (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
            data["torchsort_again (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
        for label in data.keys():
            ax.plot(B_CUDA, data[label], label=label, **style(label))
        ax.set_xlabel("Batch Size")
        ax.set_ylabel("Execution Time (μs)")
        ax.legend()
    
    
    def sequence_length_cuda(ax):
        data = defaultdict(list)
        for n in N:
            x = torch.randn(1, n).cuda()
            # data["torch.sort"].append(time(lambda: torch.sort(x)))
            data["torchsort"].append(time(lambda: torchsort.soft_sort(x)))
            data["torchsort_again"].append(time(lambda: torchsort.soft_sort(x)))
            x = torch.randn(1, n, requires_grad=True).cuda()
            data["torchsort (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
            data["torchsort_again (with backward)"].append(
                time(lambda: backward(torchsort.soft_sort, x))
            )
        for label in data.keys():
            ax.plot(N, data[label], label=label, **style(label))
        ax.set_xlabel("Sequence Length")
        ax.set_ylabel("Execution Time (μs)")
        ax.legend()
    
    
    if __name__ == "__main__":
        # jit/warmup
        x = torch.randn(1, 10, requires_grad=True)
        backward(torchsort.soft_sort, x)
        backward(fss.soft_sort, x)
    
        fig, (ax1, ax2) = plt.subplots(figsize=(10, 4), ncols=2)
        sequence_length(ax1)
        batch_size(ax2)
        fig.suptitle("Torchsort Benchmark: CPU")
        fig.tight_layout()
        plt.savefig("extra/benchmark3.png")
    
        if torch.cuda.is_available():
            # warmup
            x = torch.randn(1, 10, requires_grad=True).cuda()
            backward(torchsort.soft_sort, x)
    
            fig, (ax1, ax2) = plt.subplots(figsize=(10, 4), ncols=2)
            sequence_length_cuda(ax1)
            batch_size_cuda(ax2)
            fig.suptitle("Torchsort Benchmark: CUDA")
            fig.tight_layout()
            plt.savefig("extra/benchmark_cuda3.png")
    

    Any idea what this might depend on?

    opened by zimonitrome 1
  • Can I sort by specific column?

    Is there any way to sort a tensor by a given column?

    For example, sorting by the first column:

    input_tensor = torch.tensor([
            [1, 5], 
            [30, 30], 
            [6, 9], 
            [80, -2]
    ])
    
    target_tensor = torch.tensor([
            [80, -2],
            [30, 30], 
            [6, 9], 
            [1, 5], 
    ])
    
    enhancement 
    opened by zimonitrome 7
  • Reproducing CIFAR results

    Thanks a lot for this implementation. I was wondering how I can use the repo to reproduce the results on CIFAR as reported in the paper. As I understand it, the target one-hot encoding will serve as top-k classification (k=1). But after obtaining the logits and passing them through the softmax (putting the output in [0, 1]), the objective is to make the output follow the target ordering. How can this be achieved?

    question 
    opened by paganpasta 8