SleepKit

🚧 SleepKit is under active development


Documentation: https://ambiqai.github.io/sleepkit

Source Code: https://github.com/AmbiqAI/sleepkit


SleepKit is an AI Development Kit (ADK) that enables developers to easily build and deploy real-time sleep-monitoring models on Ambiq's family of ultra-low power SoCs. SleepKit explores a number of sleep-related tasks, including sleep staging and sleep apnea detection. The kit includes a variety of datasets, feature sets, efficient model architectures, and a number of pre-trained models. The objective of the models is to outperform conventional, hand-crafted algorithms with efficient AI models that still fit within the stringent resource constraints of embedded devices. Furthermore, the included models are trained on a wide variety of datasets, using a subset of biological signals that can be captured from a single body location such as the head, chest, or wrist/hand. The goal is to enable models that can be deployed in real-world commercial and consumer applications and that are viable for long-term use.

Key Features:

  • Real-time: Inference is performed in real-time on battery-powered, edge devices.
  • Efficient: Leverage modern AI techniques coupled with Ambiq's ultra-low power SoCs
  • Generalizable: Multi-modal, multi-task, multi-dataset
  • Open Source: SleepKit is open source and available on GitHub.

Please explore the SleepKit Docs, a comprehensive resource designed to help you understand and utilize all the built-in features and capabilities.

Getting Started

Installation

To get started, first install the sleepkit Python package along with its dependencies, either via pip or, from a clone of the repository, via Poetry:

$ pip install sleepkit

$ poetry install

Usage

SleepKit can be used either as a CLI-based tool or as a Python package for advanced development. In both forms, SleepKit exposes a number of modes and tasks, outlined below. In addition, by leveraging highly customizable configurations, SleepKit can be used to create custom workflows for a given application with minimal coding. Refer to the Quickstart to get up and running in minutes.


Modes

SleepKit provides a number of modes that can be invoked for a given task. These modes can be accessed via the CLI or directly within the Python package.

  • Download: Download specified datasets
  • Feature: Generate features from datasets
  • Train: Train a model for a specified task and feature set
  • Evaluate: Evaluate a model for a specified task and feature set
  • Export: Export a trained model to TensorFlow Lite and TFLM
  • Demo: Run task-level demo on PC or remotely on Ambiq EVB
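The mode-based CLI pattern can be sketched with Python's standard argparse module. The subcommand names below mirror the modes listed above, but the `--task` and `--config` flags are illustrative assumptions, not SleepKit's actual CLI signature:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Build a mode-based CLI in the style described above.

    NOTE: subcommand names come from the modes list; the --task and
    --config flags are hypothetical, not SleepKit's real interface.
    """
    parser = argparse.ArgumentParser(prog="sleepkit")
    subparsers = parser.add_subparsers(dest="mode", required=True)
    for mode in ("download", "feature", "train", "evaluate", "export", "demo"):
        sub = subparsers.add_parser(mode)
        sub.add_argument("--task", default="stage")   # e.g. detect, stage, apnea
        sub.add_argument("--config", default=None)    # path to a configuration file
    return parser

# Dispatch on the selected mode
args = build_parser().parse_args(["train", "--task", "apnea"])
print(args.mode, args.task)  # → train apnea
```

Each subparser shares the same configuration flags, which is what lets one configuration file drive an entire download → feature → train → evaluate → export workflow.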

Task Factory

SleepKit includes a number of built-in tasks. Each task provides reference routines for training, evaluating, and exporting the model. The routines can be customized by providing a configuration file or by setting the parameters directly in code. Additional tasks can be easily added to the SleepKit framework by creating a new task class and registering it with the task factory.

  • Detect: Detect sustained sleep/inactivity bouts
  • Stage: Perform advanced sleep stage assessment
  • Apnea: Detect hypopnea/apnea events
  • BYOT: Bring-Your-Own-Task (BYOT) to create custom tasks

Model Factory

SleepKit provides a model factory that allows you to easily create and train customized models. The model factory includes a number of modern networks well suited for efficient, real-time edge applications. Each model architecture exposes a number of high-level parameters that can be used to customize the network for a given application. These parameters can be set as part of the configuration accessible via the CLI and Python package. Check out the Model Factory Guide to learn more about the available network architectures.


Dataset Factory

SleepKit exposes several open-source datasets via the dataset factory. Each dataset has a corresponding Python class to aid in downloading and extracting the data. The datasets are used to generate feature sets that are then used to train and evaluate the models. Check out the Dataset Factory Guide to learn more about the available datasets along with their corresponding licenses and limitations.

  • MESA: A longitudinal investigation of factors associated with the development of subclinical cardiovascular disease and the progression of subclinical to clinical cardiovascular disease in 6,814 black, white, Hispanic, and Chinese-American men and women.

  • CMIDSS: The Child Mind Institute - Detect Sleep States (CMIDSS) dataset comprises 300 subjects with over 500 multi-day recordings of wrist-worn accelerometer data annotated with two event types: onset, the beginning of sleep, and wakeup, the end of sleep.

  • YSYW: A total of 1,983 PSG recordings were provided by the Massachusetts General Hospital's (MGH) Sleep Lab in the Sleep Division, together with the Computational Clinical Neurophysiology Laboratory and the Clinical Data Animation Center.

  • STAGES: The Stanford Technology Analytics and Genomics in Sleep (STAGES) study is a prospective cross-sectional, multi-site study involving 20 data collection sites from six centers including Stanford University, Bogan Sleep Consulting, Geisinger Health, Mayo Clinic, MedSleep, and St. Luke's Hospital.


Feature Store

SleepKit provides a feature store that allows you to easily create and extract features from the datasets. The feature store includes a number of feature sets used to train the included model zoo. Each feature set exposes a number of high-level parameters that can be used to customize the feature extraction process for a given application. These parameters can be set as part of the configuration accessible via the CLI and Python package. Check out the Feature Store Guide to learn more about the available feature set generators.
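As a rough illustration of windowed feature generation, the sketch below splits a 1-D signal (e.g. a wrist accelerometer magnitude trace) into fixed-length windows and computes simple per-window statistics. The 30-second window length and the specific feature list are assumptions for illustration; SleepKit's built-in feature sets are driven by configuration rather than hard-coded:

```python
import statistics

def window_features(samples, fs=50, win_s=30):
    """Split a 1-D signal into non-overlapping windows and compute
    simple statistics per window.

    NOTE: window length, sampling rate, and feature list are
    illustrative assumptions, not SleepKit's actual feature sets.
    """
    win = fs * win_s  # samples per window
    feats = []
    for start in range(0, len(samples) - win + 1, win):
        w = samples[start:start + win]
        feats.append({
            "mean": statistics.fmean(w),
            "std": statistics.pstdev(w),
            "min": min(w),
            "max": max(w),
        })
    return feats

# 3000 samples at 50 Hz → two 30-second windows
feats = window_features([0.0, 1.0] * 1500, fs=50, win_s=30)
```

Precomputing features like these is what allows the same compact model architectures to be reused across body locations and sensing modalities.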


Model Zoo

A number of pre-trained models are available for each task. These models are trained on a variety of datasets and are optimized for deployment on Ambiq's ultra-low power SoCs. In addition to providing links to download the models, SleepKit provides the corresponding configuration files and performance metrics. The configuration files allow you to easily recreate the models or use them as a starting point for custom solutions. Furthermore, the performance metrics provide insights into the model's accuracy, precision, recall, and F1 score. For a number of the models, we provide experimental and ablation studies to showcase the impact of various design choices. Check out the Model Zoo to learn more about the available models and their corresponding performance metrics. Also explore the Experiments to learn more about the ablation studies and experimental results.
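The reported precision, recall, and F1 metrics derive from confusion counts in the usual way, which a few lines of Python make concrete:

```python
def prf1(tp: int, fp: int, fn: int):
    """Compute precision, recall, and F1 from confusion counts
    (true positives, false positives, false negatives)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 80 correct detections, 20 false alarms, 20 misses
p, r, f = prf1(tp=80, fp=20, fn=20)  # → 0.8, 0.8, 0.8
```

Comparing these metrics across the configuration files in the zoo is the quickest way to weigh a model's accuracy against its footprint for a given deployment.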


References