Transform-Invariant Non-Negative Matrix Factorization

Overview


A comprehensive Python package for Non-Negative Matrix Factorization (NMF) with a focus on learning transform-invariant representations.

The package supports multiple optimization backends and can easily be extended to handle application-specific types of transforms.
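In a nutshell, NMF approximates non-negative data V by a non-negative dictionary W and activations H, and the transform-invariant variant learns dictionary atoms that are shared across transforms such as shifts. The following sketch illustrates the intended workflow; class, parameter, and attribute names follow the TransformInvariantNMF interface, but exact signatures and the set of available backends may vary between versions, so treat the details as assumptions and consult the documentation:

import numpy as np
from tnmf.TransformInvariantNMF import TransformInvariantNMF

# Illustrative input: a batch of 16 non-negative, single-channel 32x32 samples,
# arranged as (n_samples, n_channels, height, width).
V = np.random.uniform(size=(16, 1, 32, 32))

# Learn 8 shift-invariant 5x5 dictionary atoms using an FFT-based NumPy backend.
nmf = TransformInvariantNMF(n_atoms=8, atom_shape=(5, 5), backend='numpy_fft')
nmf.fit(V, n_iterations=100)

# Learned atoms, their activations, and the resulting reconstruction of V.
W, H, R = nmf.W, nmf.H, nmf.R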

General Introduction

A general introduction to Non-Negative Matrix Factorization and the purpose of this package can be found on the corresponding GitHub Pages.

Installation

To use this package, you need Python 3.7 or higher. The package is available via PyPI.

Installation is easiest using pip:

pip install tnmf

Demos and Examples

The package comes with a Streamlit demo and a number of examples that demonstrate the capabilities of the TNMF model. They provide a good starting point for your own experiments.

Online Demo

Without requiring any installation, the demo is accessible online via Streamlit sharing.

Local Execution

Once the package is installed, the demo and the examples can be conveniently executed locally using the tnmf command:

  • To execute the demo, run tnmf demo.
  • A specific example can be executed by calling tnmf example <example_name>, where <example_name> is the name of one of the available examples.

To show the list of available examples, type tnmf example --help.

License

Copyright (c) 2021 Merck KGaA, Darmstadt, Germany

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

The full text of the license can be found in the file LICENSE in the repository root directory.

Contributing

Contributions to the package are always welcome and can be submitted via a pull request. Please note that you have to agree to the Contributor License Agreement in order to contribute.

Working with the Code

To checkout the code and set up a working environment with all required Python packages, execute the following commands:

git clone https://github.com/emdgroup/tnmf.git ./tnmf
cd tnmf
python3 -m virtualenv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

Now, you should be able to execute the unit tests by calling pytest to verify that the code works as expected.

Pull Requests

Before creating a pull request, you should always ensure that the automated code quality checks and unit tests pass. This section explains how to run them locally to understand and fix potential issues.

Code Style and Quality

Code style and quality are checked using flake8 and pylint. To execute them, change into the repository root directory, run the following commands and inspect their output:

flake8
pylint tnmf

In order for a pull request to be acceptable, no errors may be reported here.

Unit Tests

Automated unit tests reside inside the folder tnmf/tests. They can be executed via pytest by changing into the repository root directory and running

pytest

Debugging potential failures from the command line might be cumbersome. Most Python IDEs, however, also support pytest natively in their debugger. Again, for a pull request to be acceptable, no failures may be reported here.
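If you do want to investigate a failure from the command line, pytest's standard selection and debugging flags can help; for instance (the -k expression is a placeholder for an actual test name):

pytest tnmf/tests -k "some_test_name" -x --pdb

Here, -k selects tests by (sub-)name, -x stops at the first failure, and --pdb drops into the Python debugger at the point of failure.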

Code Coverage

Code coverage in the unit tests is measured using coverage. A coverage report can be created locally from the repository root directory via

coverage run
coverage combine
coverage report

This will output a concise table listing the Python files that are not fully covered by unit tests, along with the line numbers of the code that has not been executed. A more detailed, interactive report can be created using

coverage html

Then, you can open the file htmlcov/index.html in a web browser of your choice to navigate through the code annotated with coverage data. The required overall coverage is configured in setup.cfg, under the key fail_under in section [coverage:report].
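For reference, a minimal version of this setting could look as follows in setup.cfg; the threshold value here is an illustrative assumption, not the project's actual configuration:

[coverage:report]
fail_under = 90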

Building the Documentation

To build the documentation locally, change into the doc subdirectory and run make html. Then, the documentation resides at doc/_build/html/index.html.
