Sensor Motion

Installation | Requirements | Usage | Contribution | Getting Help

Python package for analyzing sensor-collected human motion data (e.g. physical activity levels, gait dynamics).

Dedicated accelerometer devices, such as those made by Actigraph, usually bundle software for the analysis of the sensor data. In my work I often collect sensor data from smartphones and have not been able to find any comparable analysis software.

This Python package allows the user to extract human motion data, such as gait/walking dynamics, directly from accelerometer signals. Additionally, the package allows for the calculation of physical activity (PA) or moderate-to-vigorous physical activity (MVPA) counts, similar to activity count data offered by companies like Actigraph.

Installation

You can install this package using pip:

pip install sensormotion

Requirements

This package has the following dependencies; apart from Python itself, they are all ordinary Python packages (a combined pip command for installing them without Anaconda is shown after the list):

  • Python 3.x
    • The easiest way to install Python is using the Anaconda distribution, as it also includes the other dependencies listed below
    • Python 2.x has not been tested, so backwards compatibility is not guaranteed
  • numpy
    • Included with Anaconda. Otherwise, install using pip (pip install numpy)
  • scipy
    • Included with Anaconda. Otherwise, install using pip (pip install scipy)
  • matplotlib
    • Included with Anaconda. Otherwise, install using pip (pip install matplotlib)
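
If you are not using Anaconda, the three packages above can be installed together with a single pip command:

pip install numpy scipy matplotlib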

Usage

Here is a brief example of extracting step-based metrics from raw vertical acceleration data:

Import the package:

import sensormotion as sm
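
If you just want to try the workflow without real data, a minimal synthetic vertical acceleration signal x and time vector t (in milliseconds) will do; the values below are purely illustrative and not part of the package:

import numpy as np

sample_rate = 100                             # Hz
t = np.arange(0, 10, 1 / sample_rate) * 1000  # 10 s time vector, in milliseconds
x = np.sin(2 * np.pi * 2 * t / 1000)          # roughly 2 "steps" per second
x = x + 0.1 * np.random.randn(len(t))         # add some measurement noise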

Given a vertical acceleration signal x and its corresponding time signal t, we can begin by filtering the signal with a low-pass filter:

b, a = sm.signal.build_filter(frequency=10,
                              sample_rate=100,
                              filter_type='low',
                              filter_order=4)

x_filtered = sm.signal.filter_signal(b, a, signal=x)

[Figure: images/filter.png]

Next, we can detect the peaks (or valleys) in the filtered signal, which gives us the time and value of each detection. Optionally, we can include a plot of the signal and detected peaks/valleys:

peak_times, peak_values = sm.peak.find_peaks(time=t, signal=x_filtered,
                                             peak_type='valley',
                                             min_val=0.6, min_dist=30,
                                             plot=True)

[Figure: images/peak_detection.png]
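
A note on the detection parameters, under the assumption that min_dist counts samples and min_val is a normalized (0-1) threshold: a minimum time between steps converts to min_dist via the sample rate. A small sketch with hypothetical values:

sample_rate = 100                                # Hz, same rate used to build the filter
min_step_interval = 0.3                          # assumed minimum time between steps, in seconds
min_dist = int(min_step_interval * sample_rate)  # 30 samples, matching the value used above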

From the detected peaks, we can then calculate step metrics like cadence and step time:

cadence = sm.gait.cadence(time=t, peak_times=peak_times, time_units='ms')
step_mean, step_sd, step_cov = sm.gait.step_time(peak_times=peak_times)
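
These return plain numeric values that can be inspected directly. For example (assuming cadence is reported in steps per minute and the step-time values are in the same milliseconds as t; check the documentation for your version):

print('Cadence: {:.1f} steps/min'.format(cadence))
print('Step time: {:.1f} ms (SD {:.1f}, CoV {:.2f})'.format(step_mean, step_sd, step_cov))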

Physical activity (PA) counts and intensity categories can also be calculated from the acceleration data; here x, y, and z are the three acceleration axes and time is their shared time vector:

x_counts = sm.pa.convert_counts(x, time, integrate='simpson')
y_counts = sm.pa.convert_counts(y, time, integrate='simpson')
z_counts = sm.pa.convert_counts(z, time, integrate='simpson')
vm = sm.signal.vector_magnitude(x_counts, y_counts, z_counts)
categories, time_spent = sm.pa.cut_points(vm, set_name='butte_preschoolers', n_axis=3)

[Figure: images/pa_counts.png]
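
The snippet above assumes tri-axial signals. One way these might be loaded, assuming a CSV file with columns named time, x, y, and z (the file name and layout here are hypothetical):

import numpy as np

data = np.genfromtxt('walking_trial.csv', delimiter=',', names=True)
time = data['time']                        # shared time vector
x, y, z = data['x'], data['y'], data['z']  # the three acceleration axes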

For a more in-depth walkthrough and additional workflow examples, please take a look at the tutorial.

I would also recommend looking over the documentation to see the package's other functionality.

Contribution

I work on this package in my spare time, on an "as needed" basis for my research projects. However, pull requests for bug fixes and new features are always welcome!

Please see the develop branch for the development version of the package, and check out the issues page for bug reports and feature requests.

Getting Help

You can find the full documentation for the package here.

Python's built-in help function will show the documentation for any module or function: help(sm.gait.step_time)

You're encouraged to post questions, bug reports, or feature requests as an issue.

Alternatively, you can ask questions on Gitter.

Comments
  • Question

    I am using the sensormotion package to find peaks in one of my applications. I want to know how the normalized min_val (0-1) in peak.find_peaks is related to the minimum detectable peak value.

    opened by vivekmahadev 2
  • I need help using this library!

    Hi,

    I'm very interested in using this library in my project. I have a 2-minute walking test recorded at 100 Hz, with data collected from the accelerometer, gyroscope, and magnetometer of an iPhone 6.

    I'm trying to use the library with my data, but I couldn't understand some things. For example, for the call sm.peak.find_peaks(ac_lags, ac, peak_type='peak', min_val=0.6, min_dist=32, plot=True), what are suitable values for the min_val and min_dist parameters? Are they problem dependent? I have tried many values and the step estimation is not correct.

    Please, could you help me?

    Best regards

    opened by ogreyesp 1
  • sm.gait.step_regularity IndexError

    step_reg, stride_reg = sm.gait.step_regularity(ac_peak_values)

      File ".../python3.6/site-packages/sensormotion-1.1.0-py3.6.egg/sensormotion/gait.py", line 128, in step_regularity
        ac_d2 = peaks_half[2]  # second dominant period i.e. a stride (left-left)
    IndexError: index 2 is out of bounds for axis 0 with size 2

    opened by jiakang 1
  • Example: Importing from a live CSV file?

    opened by RandoSY 1
  • Question about step regularity

    Hey, I'm using your package right now to generate features for a dataset. I have looked at the paper by Moe-Nilssen et al. and tried to follow the steps for calculating step and stride regularity. However, I wonder why you still do the following calculation at the end:

    step_reg = ac_d1 / ac_lag0
    stride_reg = ac_d2 / ac_lag0

    Can you help me with this?

    opened by vanessabin 1