Python Usage

The SleepKit Python package allows for more fine-grained control and customization. You can use the package to train, evaluate, and deploy models for both built-in and custom tasks. In addition, custom datasets and model architectures can be created and registered with their corresponding factories.

Overview

The main components of SleepKit include the following:

Tasks

A Task inherits from the sk.Task class and provides implementations for each of the main modes: download, feature, train, evaluate, export, and demo. Each mode is provided with a set of parameters defined by sk.TaskParams. Additional task-specific parameters can be supplied by extending the TaskParams class. Tasks are then registered and accessed via the sk.TaskFactory using a unique task name as the key and the custom Task class as the value.

import sleepkit as sk

task = sk.TaskFactory.get('stage')
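The factory is essentially a name-to-class registry. As a rough sketch of the pattern (the names `TaskRegistry` and `MyStageTask` are illustrative, not SleepKit's actual internals):

```python
# Illustrative sketch of the task-factory pattern described above.
# TaskRegistry and MyStageTask are hypothetical stand-ins, not SleepKit internals.

class TaskRegistry:
    """Maps a unique task name to a Task class."""

    def __init__(self):
        self._tasks = {}

    def register(self, name, task_cls):
        self._tasks[name] = task_cls

    def get(self, name):
        return self._tasks[name]


class MyStageTask:
    """A task implements the main modes: download, feature, train, evaluate, export, demo."""

    @staticmethod
    def train(params):
        return f"training with {params}"


registry = TaskRegistry()
registry.register("stage", MyStageTask)

task = registry.get("stage")
print(task.train({"epochs": 10}))
```

Registering a class (rather than an instance) lets callers construct the task with whatever parameters they need at lookup time.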

Datasets

A dataset inherits from the sk.Dataset class and provides implementations for downloading, preparing, and loading the dataset. Each dataset is provided with a set of custom parameters for initialization. The datasets are registered and accessed via the sk.DatasetFactory using a unique dataset name as the key and the Dataset class as the value.

import sleepkit as sk

ds = sk.DatasetFactory.get('ecg-synthetic')(num_pts=100)
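Note that the factory returns the Dataset class itself, which is then instantiated with dataset-specific parameters such as `num_pts`. A minimal sketch of what such a class might look like (`SyntheticDataset` is hypothetical; SleepKit's actual sk.Dataset API may differ):

```python
# Illustrative sketch of a dataset parameterized at initialization.
# SyntheticDataset is a hypothetical stand-in for a registered Dataset class.

class SyntheticDataset:
    """Generates num_pts synthetic subjects on demand."""

    def __init__(self, num_pts=100):
        self.num_pts = num_pts

    def subject_ids(self):
        # Hypothetical subject identifiers, one per synthetic patient
        return [f"pt-{i:03d}" for i in range(self.num_pts)]


ds = SyntheticDataset(num_pts=3)
print(ds.subject_ids())
```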

Features

Since each task requires specific transformations of the data, a feature store is used to generate features from the dataset. The feature store provides a set of feature sets that can be used by the task. Each feature set is provided with a set of custom parameters for initialization. The feature sets are registered and accessed via the sk.FeatureFactory using a unique feature set name as the key and the Feature class as the value.
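Conceptually, a feature set transforms raw samples into per-frame features. A minimal self-contained sketch (the names `WindowStats` and `frame_size` are illustrative, not SleepKit's actual feature-set API):

```python
import statistics

# Illustrative sketch of a feature set: it maps raw samples to per-frame
# features. WindowStats is a hypothetical stand-in, not a SleepKit class.

class WindowStats:
    """Computes mean and standard deviation over fixed-size frames."""

    def __init__(self, frame_size=4):
        self.frame_size = frame_size

    def __call__(self, samples):
        feats = []
        for i in range(0, len(samples) - self.frame_size + 1, self.frame_size):
            frame = samples[i : i + self.frame_size]
            feats.append((statistics.mean(frame), statistics.pstdev(frame)))
        return feats


fs = WindowStats(frame_size=4)
print(fs([1, 2, 3, 4, 5, 6, 7, 8]))  # one (mean, stdev) pair per frame
```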

Models

Lastly, SleepKit leverages neuralspot-edge's customizable model architectures. To enable creating custom network topologies from configuration files, SleepKit provides a sk.ModelFactory that allows you to create models by specifying the model key and the model parameters. Each item in the factory is a callable that takes a keras.Input, model parameters, and number of classes as arguments and returns a keras.Model.

import keras
import sleepkit as sk

inputs = keras.Input((256, 1), dtype="float32")
num_classes = 4
model_params = dict(...)

model = sk.ModelFactory.get('tcn')(
    inputs=inputs,
    params=model_params,
    num_classes=num_classes
)

Usage

Running a built-in task w/ existing datasets

  1. Create a task configuration file defining the model, datasets, class labels, mode parameters, and so on. Have a look at the sk.TaskParams for more details on the available parameters.

  2. Leverage sk.TaskFactory to get the desired built-in task.

  3. Run the task's main modes: download, feature, train, evaluate, export, and/or demo.

import sleepkit as sk

params = sk.TaskParams(...)  # (1)

task = sk.TaskFactory.get("stage")

task.download(params)  # Download dataset(s)

task.feature(params)  # Generate features

task.train(params)  # Train the model

task.evaluate(params)  # Evaluate the model

task.export(params)  # Export to TFLite

task.demo(params)  # Run the demo
  1. Example configuration:
    sk.TaskParams(
        name="sd-2-tcn-sm",
        job_dir="./results/sd-2-tcn-sm",
        verbose=2,
    
        datasets=[
            sk.NamedParams(
                name="cmidss",
                params={
                    "path": "./datasets/cmidss"
                }
            )
        ],
    
        feature=sk.FeatureParams(
            name="FS-W-A-5",
            sampling_rate=0.2,
            frame_size=12,
            loader="hdf5",
            feat_key="features",
            label_key="detect_labels",
            mask_key="mask",
            feat_cols=None,
            save_path="./datasets/store/fs-w-a-5-60",
            params={}
        ),
    
        sampling_rate=0.0083333,
        frame_size=240,
    
        num_classes=2,
        class_map={
            0: 0,
            1: 1,
            2: 1,
            3: 1,
            4: 1,
            5: 1
        },
        class_names=["WAKE", "SLEEP"],
    
        samples_per_subject=100,
        val_samples_per_subject=100,
        test_samples_per_subject=50,
    
        val_size=4000,
        test_size=2500,
    
        val_subjects=0.20,
        batch_size=128,
        buffer_size=10000,
        epochs=200,
        steps_per_epoch=25,
        val_steps_per_epoch=25,
        val_metric="loss",
        lr_rate=1e-3,
        lr_cycles=1,
        label_smoothing=0,
    
        test_metric="f1",
        test_metric_threshold=0.02,
        tflm_var_name="sk_detect_flatbuffer",
        tflm_file="sk_detect_flatbuffer.h",
    
        backend="pc",
        display_report=True,
    
        quantization=sk.QuantizationParams(
            qat=False,
            mode="INT8",
            io_type="int8",
            concrete=True,
            debug=False
        ),
    
        model_file="model.keras",
        use_logits=False,
        architecture=sk.NamedParams(
            name="tcn",
            params={
                "input_kernel": [1, 5],
                "input_norm": "batch",
                "blocks": [
                    {"depth": 1, "branch": 1, "filters": 16, "kernel": [1, 5], "dilation": [1, 1], "dropout": 0.10, "ex_ratio": 1, "se_ratio": 4, "norm": "batch"},
                    {"depth": 1, "branch": 1, "filters": 32, "kernel": [1, 5], "dilation": [1, 2], "dropout": 0.10, "ex_ratio": 1, "se_ratio": 4, "norm": "batch"},
                    {"depth": 1, "branch": 1, "filters": 48, "kernel": [1, 5], "dilation": [1, 4], "dropout": 0.10, "ex_ratio": 1, "se_ratio": 4, "norm": "batch"},
                    {"depth": 1, "branch": 1, "filters": 64, "kernel": [1, 5], "dilation": [1, 8], "dropout": 0.10, "ex_ratio": 1, "se_ratio": 4, "norm": "batch"}
                ],
                "output_kernel": [1, 5],
                "include_top": True,
                "use_logits": True,
                "model_name": "tcn"
            }
        )
    )

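In the configuration above, `class_map` collapses the six raw sleep-stage labels into the two classes named in `class_names`: label 0 stays WAKE, and labels 1 through 5 all map to SLEEP. Applied to a label sequence:

```python
# class_map and class_names taken from the example configuration above;
# raw_labels is an arbitrary sequence for illustration.
class_map = {0: 0, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}
class_names = ["WAKE", "SLEEP"]

raw_labels = [0, 2, 4, 1, 0, 5]
mapped = [class_map[y] for y in raw_labels]
print([class_names[y] for y in mapped])
```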
Running a custom task w/ custom datasets

To create a custom task, check out the Bring-Your-Own-Task Guide.

To create a custom dataset, check out the Bring-Your-Own-Dataset Guide.