###################################################################
#                                                                 #
#  Structured Edge Detection Toolbox V3.0                         #
#  Piotr Dollar (pdollar-at-gmail.com)                            #
#                                                                 #
###################################################################
1. Introduction.
Very fast edge detector (up to 60 fps depending on parameter settings) that achieves excellent accuracy. Can serve as input to any vision algorithm requiring high-quality edge maps. The toolbox also includes the Edge Boxes object proposal generation method and fast superpixel code.
If you use the Structured Edge Detection Toolbox, we appreciate it if you cite an appropriate subset of the following papers:
@inproceedings{DollarICCV13edges,
  author    = {Piotr Doll\'ar and C. Lawrence Zitnick},
  title     = {Structured Forests for Fast Edge Detection},
  booktitle = {ICCV},
  year      = {2013},
}

@article{DollarARXIV14edges,
  author  = {Piotr Doll\'ar and C. Lawrence Zitnick},
  title   = {Fast Edge Detection Using Structured Forests},
  journal = {ArXiv},
  year    = {2014},
}

@inproceedings{ZitnickECCV14edgeBoxes,
  author    = {C. Lawrence Zitnick and Piotr Doll\'ar},
  title     = {Edge Boxes: Locating Object Proposals from Edges},
  booktitle = {ECCV},
  year      = {2014},
}
###################################################################
2. License.
This code is published under the MSR-LA Full Rights License.
Please read license.txt for more info.
###################################################################
3. Installation.
a) This code is written for the Matlab interpreter (tested with versions R2013a-R2013b) and requires the Matlab Image Processing Toolbox.
b) Additionally, Piotr's Matlab Toolbox (version 3.26 or later) is required. It can be downloaded at:
https://pdollar.github.io/toolbox/.
c) Next, please compile mex code from within Matlab (note: win64/linux64 binaries included):
mex private/edgesDetectMex.cpp -outdir private [OMPPARAMS]
mex private/edgesNmsMex.cpp -outdir private [OMPPARAMS]
mex private/spDetectMex.cpp -outdir private [OMPPARAMS]
mex private/edgeBoxesMex.cpp -outdir private
Here [OMPPARAMS] denotes the OpenMP compiler flags, which are OS and compiler dependent.
Windows: [OMPPARAMS] = '-DUSEOMP' 'OPTIMFLAGS="$OPTIMFLAGS' '/openmp"'
Linux V1: [OMPPARAMS] = '-DUSEOMP' CFLAGS="\$CFLAGS -fopenmp" LDFLAGS="\$LDFLAGS -fopenmp"
Linux V2: [OMPPARAMS] = '-DUSEOMP' CXXFLAGS="\$CXXFLAGS -fopenmp" LDFLAGS="\$LDFLAGS -fopenmp"
To compile without OpenMP, simply omit [OMPPARAMS]; note that the code will be single-threaded in this case.
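For example, on 64-bit Linux with gcc the fully expanded commands might look as follows (a sketch assuming the "Linux V1" flag style above; substitute the flags appropriate for your OS and compiler):
  mex private/edgesDetectMex.cpp -outdir private '-DUSEOMP' CFLAGS="\$CFLAGS -fopenmp" LDFLAGS="\$LDFLAGS -fopenmp"
  mex private/edgesNmsMex.cpp    -outdir private '-DUSEOMP' CFLAGS="\$CFLAGS -fopenmp" LDFLAGS="\$LDFLAGS -fopenmp"
  mex private/spDetectMex.cpp    -outdir private '-DUSEOMP' CFLAGS="\$CFLAGS -fopenmp" LDFLAGS="\$LDFLAGS -fopenmp"
  mex private/edgeBoxesMex.cpp   -outdir private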
d) Add the edge detection code to the Matlab path (change to the toolbox directory first):
>> addpath(pwd); savepath;
e) Finally, optionally download the BSDS500 dataset (necessary for training/evaluation):
http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/
After downloading, BSR/ should contain BSDS500, bench, and documentation.
f) A fully trained edge model for RGB images is available as part of this release. Additional models are available online, including RGBD/D/RGB models trained on the NYU depth dataset and a larger, more accurate BSDS model.
###################################################################
4. Getting Started.
- Make sure to carefully follow the installation instructions above.
- Please see "edgesDemo.m", "edgeBoxesDemo.m" and "spDemo.m" to run demos and get basic usage information.
- For a detailed list of functionality see "Contents.m".
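- For quick reference, the snippet below is a minimal usage sketch adapted from "edgesDemo.m" and "edgeBoxesDemo.m". It assumes the pre-trained BSDS model shipped with this release (models/forest/modelBsds.mat) and uses Matlab's built-in peppers.png as a test image; see the demo scripts for the authoritative parameter settings.
  model = load('models/forest/modelBsds'); model = model.model;
  model.opts.multiscale = 0;   % set to 1 for top accuracy (slower)
  model.opts.sharpen = 2;      % set to 0 for top speed
  model.opts.nms = 0;          % set to 1 to apply non-maximum suppression

  I = imread('peppers.png');   % any RGB image
  E = edgesDetect(I, model);   % E is a soft edge map with values in [0,1]
  figure(1); imshow(I); figure(2); imshow(1 - E);

  % Edge Boxes object proposals (bbs columns: [x y w h score]):
  opts = edgeBoxes;            % default Edge Boxes options
  opts.maxBoxes = 1e4;         % maximum number of boxes to return
  bbs = edgeBoxes(I, model, opts);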
###################################################################
5. History.
Version NEW
- now hosting on github (https://github.com/pdollar/edges)
- suppressed Mac warnings, added Mac binaries
- edgeBoxes: added adaptive nms variant described in arXiv15 paper
Version 3.01 (09/08/2014)
- spAffinities: minor fix (memory initialization)
- edgesDetect: minor fix (multiscale / multiple output case)
Version 3.0 (07/23/2014)
- added Edge Boxes code corresponding to ECCV paper
- added Sticky Superpixels code
- edge detection code unchanged
Version 2.0 (06/20/2014)
- second version corresponding to arXiv paper
- added sharpening option
- added evaluation and visualization code
- added NYUD demo and sweep support
- various tweaks/improvements/optimizations
Version 1.0 (11/12/2013)
- initial version corresponding to ICCV paper
###################################################################