Knowledge-informed machine learning on the PRONOSTIA (FEMTO) and IMS bearing data sets, used to predict remaining useful life (RUL).

Overview

Knowledge Informed Machine Learning using a Weibull-based Loss Function

This project explores the concept of knowledge-informed machine learning through a Weibull-based loss function, used to predict remaining useful life (RUL) on the IMS and PRONOSTIA (also called FEMTO) bearing data sets.

(Badges: Open in Colab · Source code · arXiv preprint)

Knowledge-informed machine learning is used on the IMS and PRONOSTIA bearing data sets for remaining useful life (RUL) prediction. The knowledge is integrated into a neural network through a novel Weibull-based loss function. A thorough statistical analysis of the Weibull-based loss function is conducted, demonstrating the effectiveness of the method on the PRONOSTIA data set. However, the Weibull-based loss function is less effective on the IMS data set.
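
As a rough illustration of the idea (this is not the exact formulation from the paper; the function name and the parameters `eta`, `beta`, and `lam` are assumptions made for the sketch), a Weibull-based loss can combine a standard regression term with a penalty that compares the network's RUL prediction against the reliability curve of a Weibull distribution fitted to historical bearing failure times:

```python
import torch

def weibull_informed_loss(y_pred, y_true, t, eta=1.0, beta=2.0, lam=0.1):
    """Illustrative knowledge-informed loss (not the paper's exact formulation).

    y_pred, y_true : predicted and true RUL, scaled as fraction of life remaining
    t              : current running time of each bearing (same units as eta)
    eta, beta      : Weibull scale and shape parameters fitted to failure times
    lam            : weighting of the Weibull (knowledge) penalty term
    """
    # Standard data-driven term
    mse = torch.mean((y_pred - y_true) ** 2)

    # Weibull reliability R(t) = exp(-(t / eta)^beta) serves as a prior
    # estimate of the fraction of life remaining at time t
    r_t = torch.exp(-((t / eta) ** beta))

    # Knowledge term: penalize predictions that stray far from the prior
    weibull_penalty = torch.mean((y_pred - r_t) ** 2)

    return mse + lam * weibull_penalty
```

The weighting `lam` controls how strongly the Weibull prior pulls on the predictions; setting it to zero recovers a plain MSE loss.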

The experiment will be detailed in the Journal of Prognostics and Health Management (accepted and pending publication; preprint here), with an extensive discussion of the results, shortcomings, and benefits of the method. The paper also gives an overview of knowledge-informed machine learning as it applies to prognostics and health management (PHM).

You can replicate the work, and all figures, by following the instructions in the Setup section. Even easier: run the Colab notebook!

If you have any questions, leave a comment in the discussion, or email me ([email protected]).

Summary

In this work, we use the definition of knowledge-informed machine learning from von Rueden et al. (their excellent paper is here). Here's the general taxonomy of our knowledge-informed machine learning experiment:

(Figure: general taxonomy of the knowledge-informed machine learning experiment)

Bearing vibration data (from the frequency domain) was used as input to feed-forward neural networks. The figure below shows the data as a spectrogram (a) and the spectrogram after "binning" (b). The binned data was used as the model input.

(Figure: spectrogram and binned spectrogram of the bearing vibration data)
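
For context, "binning" here means aggregating the spectrogram's frequency axis into a small number of bins so the input dimension of the feed-forward network stays manageable. A minimal NumPy sketch of one way to do this (the bin count and aggregation method are placeholders, not necessarily those used in the paper):

```python
import numpy as np

def bin_spectrogram(spec, n_bins=20):
    """Aggregate the frequency axis of a spectrogram into coarse bins.

    spec   : array of shape (n_freqs, n_time_steps), e.g. STFT magnitudes
    n_bins : number of frequency bins kept as model input features
    """
    n_freqs, _ = spec.shape
    edges = np.linspace(0, n_freqs, n_bins + 1, dtype=int)
    # Mean magnitude within each frequency band, per time step
    return np.stack(
        [spec[edges[i]:edges[i + 1]].mean(axis=0) for i in range(n_bins)]
    )

# Example with a random stand-in "spectrogram": 1025 frequency rows, 100 time steps
spec = np.abs(np.random.randn(1025, 100))
print(bin_spectrogram(spec).shape)  # (20, 100)
```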

A large hyper-parameter search was conducted over the neural networks, and nine different Weibull-based loss functions were tested on each unique network.
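
A hypothetical sketch of how such a search loop can be structured, pairing randomly sampled network hyper-parameters with each candidate loss function (the parameter ranges, loss names, and trial count below are placeholders, not the search space from the paper):

```python
import random

loss_functions = [f"weibull_loss_{i}" for i in range(1, 10)]  # nine candidate variants

def sample_hyperparameters():
    """Randomly sample one feed-forward network configuration (illustrative ranges)."""
    return {
        "n_layers": random.choice([2, 3, 4]),
        "n_units": random.choice([32, 64, 128, 256]),
        "learning_rate": 10 ** random.uniform(-4, -2),
        "dropout": random.uniform(0.0, 0.5),
    }

trials = []
for _ in range(100):                      # number of random network configurations
    params = sample_hyperparameters()
    for loss_name in loss_functions:      # every loss tested on each unique network
        trials.append({**params, "loss": loss_name})

print(len(trials))  # 900 (network, loss) combinations to train and evaluate
```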

The chart below gives a qualitative view of the effectiveness of the Weibull-based loss functions on the two data sets.

(Figure: loss function percentage comparison on the two data sets)

We also conducted a statistical analysis of the results, as shown below.

(Figure: correlation of the Weibull-based loss functions with model results)
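
Purely as an illustration of how such an analysis can be set up (the actual statistics are described in the paper; the numbers below are made-up placeholders), the relationship between a loss-function property and model performance could be checked with a rank correlation:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical results table: weight given to the Weibull term in each trained
# model, and that model's test-set RMSE (lower is better)
weibull_weight = np.array([0.0, 0.0, 0.1, 0.1, 0.5, 0.5, 1.0, 1.0])
test_rmse = np.array([0.31, 0.29, 0.27, 0.26, 0.22, 0.24, 0.21, 0.23])

rho, p_value = spearmanr(weibull_weight, test_rmse)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```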

The top-performing models' RUL trends are shown below for both the IMS and PRONOSTIA data sets.

(Figures: RUL trends of the top-performing models on the IMS and PRONOSTIA data sets)

Setup

Tested on Linux (MacOS should also work). If you run Windows, you'll have to do much of the environment setup and data download/preprocessing manually.

To reproduce results:

  1. Clone this repo: git clone https://github.com/tvhahn/weibull-knowledge-informed-ml.git

  2. Create virtual environment. Assumes that Conda is installed.

    • Linux/MacOS: use command from the Makefile in the root directory - make create_environment
    • Windows: from root directory - conda env create -f envweibull.yml
    • HPC: make create_environment will detect HPC environment and automatically create environment from make_hpc_venv.sh. Tested on Compute Canada. Modify make_hpc_venv.sh for your own HPC cluster.
  3. Download raw data.

    • Linux/MacOS: use make download. Will automatically download to appropriate data/raw directory.
    • Windows: Manually download the IMS and PRONOSTIA (FEMTO) data sets from the NASA Prognostics Data Repository. Put them in the data/raw folder.
    • HPC: use make download. Will automatically detect HPC environment.
  4. Extract raw data.

    • Linux/MacOS: use make extract. Will automatically extract to appropriate data/raw directory.
    • Windows: Manually extract data. See the Project Organization section for folder structure.
    • HPC: use make extract. Will automatically detect HPC environment. Again, modify for your HPC cluster.
  5. Ensure the virtual environment is activated: conda activate weibull or source ~/weibull/bin/activate

  6. From the root directory of weibull-knowledge-informed-ml, run pip install -e . so that the Python scripts can import from the src folders.

  7. Train!

    • Linux/MacOS: use make train_ims or make train_femto. Note: random search parameters can be changed by setting constants in the Makefile; they are currently set to defaults.

    • Windows: run the training script manually with the appropriate arguments. For example: python src/models/train_models.py --data_set femto --path_data your_data_path --proj_dir your_project_directory_path

    • HPC: use make train_ims or make train_femto. The HPC environment should be automatically detected. A SLURM script will be run for a batch job.

      • Modify train_model_ims_hpc.sh or train_model_femto_hpc.sh in the src/models directory to meet the needs of your HPC cluster. This should work on Compute Canada out of the box.
  8. Filter out the poorly performing models and collate the results. This will create several results files in the models/final folder.

    • Linux/MacOS: use make summarize_ims_models or make summarize_femto_models. (Note: set the filter boundaries in summarize_model_results.py. This will eventually be modified to use argparse.)
    • Windows: run manually by calling the script.
    • HPC: use make summarize_ims_models or make summarize_femto_models. Again, change filter requirements in the summarize_model_results.py script.
  9. Make the figures of the data and results.

    • Linux/MacOS: use make figures_data and make figures_results. Figures will be generated and placed in the reports/figures folder.
    • Windows: run manually by calling the script.
    • HPC: use make figures_data and make figures_results.

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands to reproduce work, like `make data` or `make train_ims`
├── README.md          <- The top-level README.
├── data
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump. Downloaded from the NASA Prognostics Data Repository.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details (nothing in here yet)
│
├── models             <- Trained models, model predictions, and model summaries
│   ├── interim        <- Intermediate models that have not been analyzed. Output from the random search.
│   ├── final          <- Final models that have been filtered and summarized. Several output csv files as well.
│
├── notebooks          <- Jupyter notebooks used for data exploration and analysis. Of varying quality.
│   ├── scratch        <- Scratch notebooks for quick experimentation.     
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials (empty).
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── envweibull.yml    <- The Conda environment file for reproducing the analysis environment
│                        (recommend using Conda).
│
├── make_hpc_venv.sh  <- Bash script to create the HPC venv. Setup for my Compute Canada cluster.
│                        Modify to suit your own HPC cluster.
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models
│   │   ├── train_models.py
│   │   └── predict_model.py
│   │
│   └── visualization  <- Scripts to create figures of the data, results, and training progress
│       ├── visualize_data.py       
│       ├── visualize_results.py     
│       └── visualize_training.py    

Future List

As noted in the paper, the next step would be to test Weibull-based loss functions on large, real-world industrial data sets. Suitable applications may include large fleets of pumps or gas turbines.
