Sleep staging from ECG, assisted by EEG

Overview

Sleep_Staging_Knowledge Distillation

This codebase implements a knowledge distillation approach for ECG-based sleep staging, assisted by an EEG-based sleep staging model. Knowledge is distilled in two ways: softmax distillation of the teacher's output probabilities and attention transfer on intermediate features. The proposed model combines both.

The code is implemented with the PyTorch Lightning framework. Dependencies are listed in requirements.txt.
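
The combined objective can be summarized roughly as follows. This is a minimal PyTorch sketch for illustration only; the loss weights, temperature, and the exact attention-map definition are assumptions, not the repository's implementation.

  import torch.nn.functional as F

  def attention_map(feat):
      # Collapse the channel dimension of a (batch, channels, time) feature
      # map into a normalized per-sample attention vector.
      att = feat.pow(2).mean(dim=1)
      return F.normalize(att.flatten(start_dim=1), dim=1)

  def kd_loss(student_logits, teacher_logits, student_feat, teacher_feat,
              targets, T=4.0, alpha=0.5, beta=1000.0):
      # CL: standard cross-entropy against the sleep-stage labels.
      cl = F.cross_entropy(student_logits, targets)
      # SD: softmax distillation between teacher (EEG) and student (ECG) logits.
      sd = F.kl_div(
          F.log_softmax(student_logits / T, dim=1),
          F.softmax(teacher_logits / T, dim=1),
          reduction="batchmean",
      ) * T * T
      # AT: attention transfer between the bottleneck feature maps.
      at = (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()
      return (1 - alpha) * cl + alpha * sd + beta * at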

RESEARCH

DATASET

Montreal Archive of Sleep Studies (MASS) - the complete 200-subject dataset is used.

  • SS1 and SS3 subsets follow the AASM guidelines
  • SS2, SS4, and SS5 subsets follow the R_K (Rechtschaffen & Kales) guidelines

KNOWLEDGE DISTILLATION FRAMEWORK

The knowledge distillation framework uses a minimally modified U-Time as the base model.

Knowledge distillation improves the bottleneck features of the KD_model over those of the ECG_Base model, bringing them closer to the EEG_Base model's features.

Case 1: KD_model predicts correctly, ECG_Base predicts incorrectly

Case 2: KD_model predicts incorrectly, ECG_Base predicts correctly

Run Training

Run train.py from the 3_class or 4_class directory.

To train baseline models

  python train.py --model_type <"base model type"> --model_ckpt_name <"ckpt name">
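
For example (the checkpoint names are placeholders, and the baseline model-type strings are assumed to mirror the module names ecg_base.py / eeg_base.py):

  python train.py --model_type "eeg_base" --model_ckpt_name "eeg_baseline"
  python train.py --model_type "ecg_base" --model_ckpt_name "ecg_baseline"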

To run Knowledge Distillation

  • Feature Training
  python train.py --model_type "feat_train" --model_ckpt_name <"ckpt name"> --eeg_baseline_path <"eeg base ckpt path">
  • Feat_Temp (AT+SD+CL)
  python train.py --model_type "Feat_Temp" --model_ckpt_name <"ckpt name"> --feat_path <"path to feature trained ckpt">
  • Feat_WCE (AT+CL)
  python train.py --model_type "feat_wce" --model_ckpt_name <"ckpt name"> --feat_path <"path to feature trained ckpt">
  • KD-Temp (SD+CL)
  python train.py --model_type "kd_temp" --model_ckpt_name <"ckpt name"> --eeg_baseline_path <"eeg base ckpt path">
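
For example, the proposed AT+SD+CL model would be trained in two stages: first feature training against the EEG baseline, then Feat_Temp training from the resulting checkpoint (all checkpoint names and paths below are placeholders):

  python train.py --model_type "feat_train" --model_ckpt_name "kd_feat" --eeg_baseline_path "./checkpoints/eeg_baseline.ckpt"
  python train.py --model_type "Feat_Temp" --model_ckpt_name "kd_feat_temp" --feat_path "./checkpoints/kd_feat.ckpt"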

Run Testing

Run test.py from the 3_class or 4_class directory.

To test from checkpoints

  python test.py --model_type <"model type"> --test_ckpt <"Path to checkpoint">
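
For example (the model type and checkpoint path are placeholders):

  python test.py --model_type "Feat_Temp" --test_ckpt "./checkpoints/kd_feat_temp.ckpt"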

Other arguments can be supplied to train.py and test.py as needed.

Reproducing experiments

Checkpoints to reproduce the test results can be found at this link.

Directory Map

Dataset Splitting:

Splits the data into train, validation, and test sets for the 3-class and 4-class cases (both AASM and R_K); a minimal splitting sketch follows the listing below.

├─ Dataset_split
   ├── Data_split_3class_AllData30s_R_K.py
   ├── Data_split_3class_AllData_AASM.py
   ├── Data_split_AllData_30s_R_K.py
   └── Data_split_All_Data_AASM.py
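
A minimal sketch of a subject-wise split is shown below; the split ratios, seed, and function name are illustrative assumptions, and the actual scripts may differ.

  import random

  def split_subjects(subject_ids, val_frac=0.1, test_frac=0.2, seed=42):
      # Shuffle subject IDs deterministically so the split is reproducible.
      ids = list(subject_ids)
      random.Random(seed).shuffle(ids)
      n_test = int(len(ids) * test_frac)
      n_val = int(len(ids) * val_frac)
      # Split by subject, not by 30 s epoch, to avoid leakage across sets.
      test_ids = ids[:n_test]
      val_ids = ids[n_test:n_test + n_val]
      train_ids = ids[n_test + n_val:]
      return train_ids, val_ids, test_ids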

3 Class Classification:

Run train.py with the necessary arguments to train 3-class sleep staging.

├── 3_class
│   ├── datasets
│   │   ├── __init__.py
│   │   └── mass.py
│   │   
│   ├── models
│   │   ├── __init__.py
│   │   ├── ecg_base.py
│   │   ├── eeg_base.py
│   │   ├── FEAT_TEMP.py
│   │   ├── FEAT_TRAINING.py
│   │   ├── FEAT_WCE.py
│   │   └── KD_TEMP.py
│   │   
│   ├── test.py
│   ├── train.py
│   └── utils
│       ├── __init__.py
│       ├── arg_utils.py
│       ├── callback_utils.py
│       ├── dataset_utils.py
│       └── model_utils.py

4 Class Classification:

Run train.py with the necessary arguments to train 4-class sleep staging.

├── 4_class
│   ├── datasets
│   │   ├── __init__.py
│   │   └── mass.py
│   │
│   ├── models
│   │   ├── __init__.py
│   │   ├── ecg_base.py
│   │   ├── eeg_base.py
│   │   ├── FEAT_TEMP.py
│   │   ├── FEAT_TRAINING.py
│   │   ├── FEAT_WCE.py
│   │   └── KD_TEMP.py
│   │   
│   ├── test.py
│   ├── train.py
│   └── utils
│       ├── __init__.py
│       ├── arg_utils.py
│       ├── callback_utils.py
│       ├── dataset_utils.py
│       └── model_utils.py

Acknowledgements

Authors
