EMGDecomp

Overview

Package for decomposing EMG signals into motor unit firings, created for Formento et al., 2021. Based heavily on Negro et al., 2016. Supports GPU acceleration via CUDA and distributed computation via Dask.

Installation

pip install emgdecomp

To use Dask and/or CUDA, install the corresponding extras:

pip install emgdecomp[dask]
pip install emgdecomp[cuda]

Usage

Basic

# Import paths below are an assumption based on the repo layout
# (EmgDecomposition lives in emgdecomp/decomposition.py); adjust as needed.
from emgdecomp.decomposition import EmgDecomposition
from emgdecomp.parameters import EmgDecompositionParams

# data should be a numpy array of n_channels x n_samples
sampling_rate, data = fetch_data(...)

decomp = EmgDecomposition(
  params=EmgDecompositionParams(
    sampling_rate=sampling_rate
  ))

firings = decomp.decompose(data)
print(firings)

The resulting firings object is a NumPy structured array with the fields source_idx, discharge_samples, and discharge_seconds. source_idx is a 0-indexed ID for each "source" learned from the data; each source is a putative motor unit. discharge_samples is the sample index at which the source was detected as "firing"; note that the algorithm can only detect sources up to a delay. discharge_seconds converts discharge_samples into seconds via the passed-in sampling rate.
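
For example, here is a minimal sketch of summarizing the firings per source, using only the fields described above:

import numpy as np

# For each learned source, count discharges and estimate a mean firing rate.
for source in np.unique(firings['source_idx']):
    t = firings['discharge_seconds'][firings['source_idx'] == source]
    if len(t) > 1:
        rate = (len(t) - 1) / (t.max() - t.min())
        print(f'source {source}: {len(t)} discharges, ~{rate:.1f} Hz')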

As a structured NumPy array, the resulting firings object is suitable for conversion into a Pandas DataFrame:

import pandas as pd
print(pd.DataFrame(firings))

The sources themselves (i.e. the components corresponding to motor units) can be inspected as needed via the decomp.model property:

model = decomp.model
print(model.components)

Advanced

Given an already-fitted EmgDecomposition object, you can decompose a new batch of EMG data using its existing sources via transform:

# Assumes decomp is already fit
new_data = fetch_more_data(...)
new_firings = decomp.transform(new_data)
print(new_firings)

Alternatively, you can add new sources (i.e. new putative motor units) while retaining the existing sources with decompose_batch:

# Assumes decomp is already fit

more_data = fetch_even_more_data(...)
# Firings corresponding to sources that were both existing and newly added
firings2 = decomp.decompose_batch(more_data)
# Should have at least as many components as before decompose_batch()
print(decomp.model.components)

Finally, basic plotting capabilities are included as well:

from emgdecomp.plots import plot_firings, plot_muaps
plot_muaps(decomp, data, firings)
plot_firings(decomp, data, firings)

File I/O

The EmgDecomposition class provides save and load methods to persist a fitted decomposition to disk and restore it later; for example:

with open('/path/to/decomp.pkl', 'wb') as f:
  decomp.save(f)

with open('/path/to/decomp.pkl', 'rb') as f:
  decomp_reloaded = EmgDecomposition.load(f)
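
The reloaded object exposes the same API as the original; as a sketch, new data can be decomposed with the reloaded sources via transform, described above:

# Sketch: reuse the reloaded model on a fresh batch of EMG data.
new_data = fetch_more_data(...)
print(decomp_reloaded.transform(new_data))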

Dask and/or CUDA

Both Dask and CUDA are supported within EmgDecomposition, enabling distributed computation across workers and/or GPU acceleration. Each is controlled via the use_dask and use_cuda boolean flags in the EmgDecomposition constructor.
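
For instance, a minimal sketch of enabling both (the flag names come from this README; the rest mirrors the Basic example above, and a running Dask cluster and a CUDA-capable GPU are assumed):

decomp = EmgDecomposition(
  params=EmgDecompositionParams(
    sampling_rate=sampling_rate),
  use_dask=True,
  use_cuda=True)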

Parameter Tuning

See the list of parameters in EmgDecompositionParams. The defaults on master are set as they were used for Formento et al., 2021 and should be reasonable for other datasets.

Documentation

See the documentation on the EmgDecomposition and EmgDecompositionParams classes for more details.

Acknowledgements

If you enjoy this package and use it for your research, you can:

  • cite the Journal of Neural Engineering paper, Formento et al. 2021, for which this package was developed: TODO
  • cite this github repo using its DOI: 10.5281/zenodo.5641426
  • star this repo using the top-right star button.

Contributing / Questions

Feel free to open an issue in this project for questions or feature requests. Pull requests are very much encouraged, but consider opening an issue before implementing a change to confirm that the desired change sounds appropriate.
