Fully Adaptive Bayesian Algorithm for Data Analysis (FABADA) is a new approach to noise reduction. This repository contains the package developed for this new method, which is based on \citepaper.

Overview


Fully Adaptive Bayesian Algorithm for Data Analysis

FABADA

FABADA is a novel non-parametric noise reduction technique rooted in Bayesian inference. It iteratively evaluates possible smoothed models of the data, obtaining an estimate of the underlying signal that is statistically compatible with the noisy measurements. The iterations stop based on the evidence $E$ and the $\chi^2$ statistic of the last smooth model, and the expected value of the signal is computed as a weighted average of the smooth models. You can find the entire paper describing the new method in (link will be available soon).
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Method
  2. Getting Started
  3. Usage
  4. Results
  5. Contributing
  6. License
  7. Contact
  8. Cite

About The Method

This automatic method is aimed at astronomical data, such as images (2D) or spectra (1D). However, it can be treated as a general noise reduction algorithm and applied to any kind of one- or two-dimensional data, yielding reliable results. The only requirement on the input data is an estimate of its variance. A simplified conceptual sketch of the idea is shown below.
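As a rough illustration of the evidence-weighted iterative smoothing described above, here is a minimal, hypothetical one-dimensional sketch. It is not the fabada implementation (which includes proper priors, stopping criteria based on the evidence and the $\chi^2$ statistic, and 2D support); the three-point smoother, the fixed iteration count and the name toy_iterative_smoother are illustrative assumptions only.

    import numpy as np

    def toy_iterative_smoother(z, variance, n_iter=50):
        """Toy sketch: iteratively smooth the data and combine the models,
        weighting each one by its Gaussian evidence against the noisy data."""
        z = np.asarray(z, dtype=float)
        model = z.copy()
        weighted_sum = np.zeros_like(z)
        weight_total = np.zeros_like(z)
        for _ in range(n_iter):
            # A simple three-point running mean stands in for the smoothing step
            model = (np.roll(model, 1) + model + np.roll(model, -1)) / 3.0
            # Gaussian evidence of each model value given the noisy measurement
            evidence = (np.exp(-(z - model) ** 2 / (2 * variance))
                        / np.sqrt(2 * np.pi * variance))
            weighted_sum += evidence * model
            weight_total += evidence
        # Expected signal: evidence-weighted average of all smoothed models
        return weighted_sum / weight_total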

(back to top)

Getting Started

We try to make the usage of FABADA as simple as possible. For that purpose, we have created a PyPI package, with a Conda package on the way, to install the latest version of FABADA.

Prerequisites

The first requirement is a Python version greater than 3.5. Although pip installs the prerequisites itself, FABADA has two dependencies.
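If you prefer to install the prerequisites manually beforehand, the package relies on the scientific Python stack; to the best of our knowledge the two dependencies are numpy and scipy (treat this as an assumption and check the package metadata if in doubt):

  pip install numpy scipy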

Installation

To install fabada, we can use the Python Package Index (PyPI) or Conda.

Using pip

  pip install fabada

We are currently working on uploading the package to the Conda system.
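As an optional sanity check (our suggestion, not part of the official instructions), you can verify that the package imports correctly:

  python -c "from fabada import fabada; print('fabada imported correctly')"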

(back to top)

Usage

Two examples are provided along with the package:

  • fabada_demo_image.py

Here we show how to use fabada on an astronomical grey image (two-dimensional). First of all, we have to import our previously installed library and some dependencies:

    from fabada import fabada
    import numpy as np
    from PIL import Image

Then we read the bubble image, borrowed from the Hubble Space Telescope gallery; in our case we use the Pillow library for that. We also add some random Gaussian white noise using numpy.random:

    # IMPORTING IMAGE
    y = np.array(Image.open("bubble.png").convert('L'))

    # ADDING RANDOM GAUSSIAN NOISE
    np.random.seed(12431)
    sig      = 15             # Standard deviation of noise
    noise    = np.random.normal(0, sig ,y.shape)
    z        = y + noise
    variance = sig**2

Once the noisy image is generated, we can apply fabada to produce an estimate of the underlying image; we only have to call fabada with the noisy image and its variance:

    y_recover = fabada(z,variance)

And it's done 😉

As easy as one line of code.

The results obtained running this example would be:

Image Results

The left, middle and right panels correspond to the true signal, the noisy measurements and the fabada estimate, respectively. The Peak Signal to Noise Ratio (PSNR), in dB, and the Structural Similarity Index Measure (SSIM) are shown at the bottom of the middle and right panels (PSNR/SSIM).
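If you want to reproduce these figures of merit yourself, one common option (an assumption on our part, not necessarily how the demo scripts compute them) is scikit-image:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Compare the true image y with the estimate y_recover from the demo above,
    # assuming 8-bit-like data in the range [0, 255]
    y_true = y.astype(float)
    y_est  = np.asarray(y_recover, dtype=float)

    psnr = peak_signal_noise_ratio(y_true, y_est, data_range=255)
    ssim = structural_similarity(y_true, y_est, data_range=255)
    print(f"PSNR = {psnr:.2f} dB   SSIM = {ssim:.3f}")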

  • fabada_demo_spectra.py

Here we show how to use fabada on an astronomical spectrum (one-dimensional). It is basically the same as the example above, since fabada works identically on one- and two-dimensional data. First of all, we have to import our previously installed library and some dependencies:

    from fabada import fabada
    import pandas as pd
    import numpy as np

Then we read the spectrum of the interacting galaxy pair Arp 256, taken from the ASTROLIB PYSYNPHOT package and stored in arp256.csv. Again, we add some random Gaussian white noise:

    # IMPORTING SPECTRUM
    y = np.array(pd.read_csv('arp256.csv').flux)
    y = (y/y.max())*255  # Normalize to 255

    # ADDING RANDOM GAUSSIAN NOISE
    np.random.seed(12431)
    sig      = 10             # Standard deviation of noise
    noise    = np.random.normal(0, sig ,y.shape)
    z        = y + noise
    variance = sig**2

Once the noisy spectrum is generated, we can again apply fabada to produce an estimate of the underlying spectrum; we only have to call fabada with the noisy spectrum and its variance:

    y_recover = fabada(z,variance)

And done again 😉

Which is exactly the same call as for two-dimensional data.

The results obtained running this example would be:

Spectra Results

The red, grey and black lines represent the true signal, the noisy measurements and the fabada estimate, respectively. The Peak Signal to Noise Ratio (PSNR), in dB, and the Structural Similarity Index Measure (SSIM) are shown in the legend of the figure (PSNR/SSIM).
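A comparison figure along these lines can be drawn, for instance, with matplotlib. This is a minimal sketch that assumes the arrays y, z and y_recover from the demo above; the styling is our own choice, not the script's:

    import matplotlib.pyplot as plt

    plt.figure(figsize=(10, 4))
    plt.plot(z, color="grey", alpha=0.6, label="Noisy measurements")
    plt.plot(y, color="red", label="True signal")
    plt.plot(y_recover, color="black", label="FABADA estimate")
    plt.xlabel("Pixel")
    plt.ylabel("Normalized flux")
    plt.legend()
    plt.tight_layout()
    plt.show()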

(back to top)

Results

All the results of the paper can be found in the results folder, along with a Jupyter notebook that lets you explore them through an interactive interface. You can also run the notebook on Google Colab via this link --> Explore the results.

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the GNU General Public License. See LICENSE.txt for more information.

(back to top)

Contact

Pablo M Sánchez Alarcón - [email protected]

Yago Ascasibar Sequeiros - [email protected]

Project Link: https://github.com/PabloMSanAla/fabada

(back to top)

Cite

Thank you for using FABADA.

Citations and acknowledgements are vital for the continued work on this kind of algorithm.

Please cite the following record if you use FABADA in any of your publications.

    @ARTICLE{2022arXiv220105145S,
        author = {{Sanchez-Alarcon}, Pablo M and {Ascasibar Sequeiros}, Yago},
        title = "{Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA}",
        journal = {arXiv e-prints},
        keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics, Computer Science - Computer Vision and Pattern Recognition, Physics - Data Analysis, Statistics and Probability},
        year = 2022,
        month = jan,
        eid = {arXiv:2201.05145},
        pages = {arXiv:2201.05145},
        archivePrefix = {arXiv},
        eprint = {2201.05145},
        primaryClass = {astro-ph.IM},
        adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220105145S}
    }

Sanchez-Alarcon, P. M. and Ascasibar Sequeiros, Y., “Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA”, arXiv e-prints, 2022.

https://arxiv.org/abs/2201.05145

(back to top)

Readme file taken from Best README Template.
