DFM: A Performance Baseline for Deep Feature Matching


Python (PyTorch) and MATLAB (MatConvNet) implementations of our paper DFM: A Performance Baseline for Deep Feature Matching, presented at the CVPR 2021 Image Matching Workshop.

Paper (CVF) | Paper (arXiv)
Presentation (live) | Presentation (recording)

Overview

Setup Environment

We strongly recommend using Anaconda. Open a terminal in the ./python folder and run the following lines to create the environment:

conda env create -f environment.yml
conda activate dfm

Dependencies
If you do not use conda, DFM needs the following dependencies:
(Versions are not strict; however, we have tried DFM with these specific versions.)

  • python=3.7.1
  • pytorch=1.7.1
  • torchvision=0.8.2
  • cudatoolkit=11.0
  • matplotlib=3.3.4
  • pillow=8.2.0
  • opencv=3.4.2
  • ipykernel=5.3.4
  • pyyaml=5.4.1

Enjoy with DFM!

Now you are ready to test DFM with the following command:

python dfm.py --input_pairs image_pairs.txt

You should format the image_pairs.txt file as follows:

1A> 1B>
2A> 2B>
.
.
.
nA> nB>
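
For example, a file listing two hypothetical image pairs (assuming the two paths on each line are separated by whitespace) could look like this:

./data/scene1/imageA.jpg ./data/scene1/imageB.jpg
./data/scene2/imageA.png ./data/scene2/imageB.png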

If you want to run DFM with a specific configuration, you can make changes to the following arguments in config.yml:

  • Use enable_two_stage to enable or disable the two-stage approach (default: True)
    (Note: Enable it for planar scenes with significant viewpoint changes; otherwise disable it.)
  • Use model to change the pre-trained model (default: VGG19)
    (Note: DFM only supports VGG19 and VGG19_BN right now; we plan to add other backbones.)
  • Use ratio_th to change the ratio test thresholds (default: [0.9, 0.9, 0.9, 0.9, 0.95, 1.0])
    (Note: The first five thresholds are for the 1st to 5th layers; the last (6th) threshold is for Stage-0 and is only used when enable_two_stage is True.)
  • Use bidirectional to enable or disable the bidirectional ratio test. (default: True)
    (Note: Enable it to find more robust matches. It should normally stay enabled; set it to False only to obtain results comparable to our MATLAB implementation, since MATLAB's matchFeatures function does not perform the ratio test bidirectionally.)
  • Use display_results to enable or disable displaying results (default: True)
    (Note: If True, DFM saves matched image pairs to output_directory.)
  • Use output_directory to define output directory. (default: 'results')
    (Note: imageA_imageB_matches.npz will be created in output_directory for each image pair.)
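
As a quick check of the output, here is a minimal sketch for inspecting one of the saved match files. The file name below is hypothetical (following the pattern above), and since the array names stored inside the .npz are not documented here, the snippet simply enumerates them:

# Minimal sketch: inspect a saved match file (hypothetical name following the
# imageA_imageB_matches.npz pattern). The array names inside the archive are
# not documented here, so we just list them together with their shapes.
import numpy as np

data = np.load("results/imageA_imageB_matches.npz")
for key in data.files:
    print(key, data[key].shape)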

Evaluation

Currently, we do not support evaluation for our Python implementation. You can use our Image Matching Evaluation repository (coming soon), which supports evaluating SuperPoint, SuperGlue, Patch2Pix, and DFM on HPatches. Also, you can use our MATLAB implementation (see the For Matlab Users section) to reproduce the results presented in the paper.

Notice

To reproduce the results given in the paper, use our MATLAB implementation.
You can get more accurate results (but with fewer features) using the Python implementation. This is mainly because MATLAB's matchFeatures function does not perform the ratio test bidirectionally, whereas our Python implementation does. Nevertheless, we have made bidirectionality adjustable in the Python implementation as well.
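
For readers unfamiliar with the term, the sketch below illustrates a generic bidirectional (mutual) ratio test on two descriptor sets. It is only an illustration of the idea, not the exact DFM implementation; the function names and the L2 distance metric are assumptions.

# Generic sketch of a bidirectional (mutual) ratio test, not the DFM code itself.
import torch

def ratio_test(dist, ratio_th):
    # nearest and second-nearest neighbor distances along dim=1
    vals, idx = torch.topk(dist, k=2, dim=1, largest=False)
    passed = vals[:, 0] < ratio_th * vals[:, 1]
    return idx[:, 0], passed

def bidirectional_ratio_match(desc_a, desc_b, ratio_th=0.9):
    # desc_a: (N, D) and desc_b: (M, D) descriptor tensors
    dist = torch.cdist(desc_a, desc_b)              # (N, M) pairwise L2 distances
    nn_ab, ok_ab = ratio_test(dist, ratio_th)       # A -> B nearest neighbors
    nn_ba, ok_ba = ratio_test(dist.t(), ratio_th)   # B -> A nearest neighbors
    idx_a = torch.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a                  # mutual nearest-neighbor check
    keep = ok_ab & ok_ba[nn_ab] & mutual            # ratio test must pass both ways
    return idx_a[keep], nn_ab[keep]                 # matched indices into A and B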

For Matlab Users

We have implemented and tested DFM on MATLAB R2017b.

Prerequisites

You need to install MatConvNet (we have support for matconvnet-1.0-beta24). Follow the instructions on the official website.

Once you have finished installing MatConvNet, you should download the pretrained VGG-19 network to the ./matlab/models folder.

Running DFM

Now, you are ready to try DFM!

Just open and run main_DFM.m with your own images.

Evaluation on HPatches

Download the HPatches sequences and extract them to the ./matlab/data folder.

Run main_hpatches.m, which is in the ./matlab/HPatches Evaluation folder.

A results.txt file will be generated in the ./matlab/results/HPatches folder.

  • The first column contains the pair names.
  • Columns 2-11 contain the Mean Matching Accuracy (MMA) results for 1-10 pixel thresholds.
  • Column 12 contains the number of matched features.
  • Columns 13-17 are for the best homography estimation results (denoted as boe in the paper).
  • Columns 18-22 are for the worst homography estimation results (denoted as woe in the paper).
  • Columns 22-71 are for the 10 different homography estimation tests.
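
If you want to summarize these results in Python, a hedged sketch is shown below. It assumes whitespace-separated columns laid out as described above (column 1: pair name, columns 2-11: MMA at 1-10 pixel thresholds); adjust the delimiter and indices if your results.txt differs.

# Sketch: average the MMA columns of results.txt over all pairs, assuming
# whitespace-separated columns with the layout described above.
import numpy as np

rows = []
with open("matlab/results/HPatches/results.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 11:
            continue  # skip empty or malformed lines
        rows.append([float(x) for x in parts[1:11]])  # MMA at 1..10 px

mma = np.array(rows)
for px, score in enumerate(mma.mean(axis=0), start=1):
    print(f"MMA @ {px} px: {score:.3f}")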

BibTeX Citation

Please cite our paper if you use the code:

@InProceedings{Efe_2021_CVPR,
    author    = {Efe, Ufuk and Ince, Kutalmis Gokalp and Alatan, Aydin},
    title     = {DFM: A Performance Baseline for Deep Feature Matching},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {4284-4293}
}