
Python Usage

The HeartKit Python package allows for more fine-grained control and customization. You can use the package to train, evaluate, and deploy models for both built-in tasks and custom tasks. In addition, custom datasets and model architectures can be created and registered with the corresponding factories.

Overview

The main components of HeartKit include the following:

Tasks

A Task inherits from the HKTask class and provides implementations for each of the main modes: download, train, evaluate, export, and demo. Each mode is provided with a set of parameters defined by HKTaskParams. Additional task-specific parameters can be extended to the HKTaskParams class. These tasks are then registered and accessed via the TaskFactory using a unique task name as the key and the custom Task class as the value.

import heartkit as hk

task = hk.TaskFactory.get('rhythm')
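The registration side of this pattern can be sketched with a minimal stand-in registry. `SimpleFactory` and `MyTask` below are hypothetical illustrations of how a name-keyed factory like `TaskFactory` behaves, not HeartKit APIs:

```python
# Minimal sketch of the registry pattern behind TaskFactory.
# SimpleFactory and MyTask are illustrative stand-ins, not HeartKit APIs.

class SimpleFactory:
    """Maps a unique name to a registered class, mirroring how a
    custom Task class would be registered under a unique task name."""

    _registry: dict = {}

    @classmethod
    def register(cls, name, item):
        cls._registry[name] = item

    @classmethod
    def get(cls, name):
        return cls._registry[name]


class MyTask:
    """Placeholder for a custom task that would subclass hk.HKTask."""

    @staticmethod
    def train(params):
        return f"training with {params}"


# Register under a unique key, then look the class up by name elsewhere.
SimpleFactory.register("my-task", MyTask)
task = SimpleFactory.get("my-task")
```

Keying on a plain string means configuration files can select tasks by name without importing the task class directly.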

Datasets

A dataset inherits from the HKDataset class and provides implementations for downloading, preparing, and loading the dataset. Each dataset is provided with a set of custom parameters for initialization. Since each task will require specific transformations of the data, the dataset class provides only a general interface for loading the data. Each task must then provide a set of corresponding HKDataloader classes to transform the dataset into a format that can be used by the task. The datasets are registered and accessed via the DatasetFactory using a unique dataset name as the key and the Dataset class as the value. Each Task can then create its own DataloaderFactory that will provide a corresponding dataloader for each supported dataset. The Task's DataloaderFactory should use the same dataset names as the DatasetFactory to ensure that the correct dataloader is used for each dataset.

import heartkit as hk

ds = hk.DatasetFactory.get('ecg-synthetic')(num_pts=100)
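The dataset/dataloader pairing described above can be illustrated with two dictionaries sharing the same keys. All names here are hypothetical stand-ins for `HKDataset`, `HKDataloader`, and the two factories:

```python
# Illustrative sketch of pairing DatasetFactory keys with a task's
# DataloaderFactory keys. All names are stand-ins, not HeartKit APIs.

dataset_factory = {}      # name -> dataset class (like hk.DatasetFactory)
dataloader_factory = {}   # name -> task-specific dataloader class


class SyntheticDataset:
    """Stand-in for an HKDataset: general-purpose loading only."""

    def __init__(self, num_pts):
        self.num_pts = num_pts


class SyntheticRhythmDataloader:
    """Stand-in for an HKDataloader: task-specific transformation."""

    def __init__(self, ds):
        self.ds = ds

    def num_frames(self):
        return self.ds.num_pts * 10  # e.g. 10 frames per patient


# The same key appears in both factories so the task can resolve
# the correct dataloader for any dataset it supports.
dataset_factory["ecg-synthetic"] = SyntheticDataset
dataloader_factory["ecg-synthetic"] = SyntheticRhythmDataloader

ds = dataset_factory["ecg-synthetic"](num_pts=100)
dl = dataloader_factory["ecg-synthetic"](ds)
```

Using matching keys keeps dataset loading generic while letting each task own its data transformations.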

Models

Lastly, HeartKit leverages neuralspot-edge's customizable model architectures. To enable creating custom network topologies from configuration files, HeartKit provides a ModelFactory that allows you to create models by specifying the model key and the model parameters. Each item in the factory is a callable that takes a keras.Input, model parameters, and number of classes as arguments and returns a keras.Model.

import keras
import heartkit as hk

inputs = keras.Input((256, 1), dtype="float32")
num_classes = 4
model_params = dict(...)

model = hk.ModelFactory.get('tcn')(
    inputs=inputs,
    params=model_params,
    num_classes=num_classes
)
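The callable contract the factory expects — `(inputs, params, num_classes) -> model` — can be sketched without TensorFlow installed. The stand-in builder below returns a plain dict instead of a `keras.Model`; the shape of the call is what is being illustrated, and `build_tiny_net` is a hypothetical name:

```python
# Sketch of the callable contract used by ModelFactory entries:
# (inputs, params, num_classes) -> model. This stand-in returns a plain
# dict rather than a keras.Model so it runs without TensorFlow.

model_factory = {}  # name -> builder callable


def build_tiny_net(inputs, params, num_classes):
    """Hypothetical builder mirroring the factory's expected signature."""
    return {
        "input_shape": inputs,
        "blocks": params.get("blocks", []),
        "num_classes": num_classes,
    }


model_factory["tiny-net"] = build_tiny_net

model = model_factory["tiny-net"](
    inputs=(256, 1),
    params={"blocks": [16, 32]},
    num_classes=4,
)
```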

Usage

Running a built-in task w/ existing datasets

  1. Create a task configuration file defining the model, datasets, class labels, mode parameters, and so on. Have a look at the HKTaskParams for more details on the available parameters.

  2. Leverage TaskFactory to get the desired built-in task.

  3. Run the task's main modes: download, train, evaluate, export, and/or demo.

import heartkit as hk

params = hk.HKTaskParams(...)  # (1)

task = hk.TaskFactory.get("rhythm")

task.download(params)  # Download dataset(s)

task.train(params)  # Train the model

task.evaluate(params)  # Evaluate the model

task.export(params)  # Export to TFLite

task.demo(params)  # Run demo
  1. Example configuration:
    hk.HKTaskParams(
        name="arr-2-eff-sm",
        project="hk-rhythm-2",
        job_dir="./results/arr-2-eff-sm",
        verbose=2,
        datasets=[hk.NamedParams(
            name="ptbxl",
            params=dict(
                path="./datasets/ptbxl"
            )
        )],
        num_classes=2,
        class_map={
            "0": 0,
            "7": 1,
            "8": 1
        },
        class_names=[
            "NORMAL", "AFIB/AFL"
        ],
        class_weights="balanced",
        sampling_rate=100,
        frame_size=512,
        samples_per_patient=[10, 10],
        val_samples_per_patient=[5, 5],
        test_samples_per_patient=[5, 5],
        val_patients=0.20,
        val_size=20000,
        test_size=20000,
        batch_size=256,
        buffer_size=20000,
        epochs=100,
        steps_per_epoch=50,
        val_metric="loss",
        lr_rate=1e-3,
        lr_cycles=1,
        threshold=0.75,
        val_metric_threshold=0.98,
        tflm_var_name="g_rhythm_model",
        tflm_file="rhythm_model_buffer.h",
        backend="pc",
        demo_size=896,
        display_report=True,
        quantization=hk.QuantizationParams(
            qat=False,
            format="INT8",
            io_type="int8",
            conversion="CONCRETE",
            debug=False
        ),
        preprocesses=[
            hk.NamedParams(
                name="layer_norm",
                params=dict(
                    epsilon=0.01,
                    name="znorm"
                )
            )
        ],
        augmentations=[
        ],
        model_file="model.keras",
        use_logits=False,
        architecture=hk.NamedParams(
            name="efficientnetv2",
            params=dict(
                input_filters=16,
                input_kernel_size=[1, 9],
                input_strides=[1, 2],
                blocks=[
                    {"filters": 24, "depth": 2, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1,  "se_ratio": 2},
                    {"filters": 32, "depth": 2, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1,  "se_ratio": 2},
                    {"filters": 40, "depth": 2, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1,  "se_ratio": 2},
                    {"filters": 48, "depth": 1, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1,  "se_ratio": 2}
                ],
                output_filters=0,
                include_top=True,
                use_logits=True
            )
        )
    )
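In the configuration above, class_map collapses several source labels into the two training classes: labels "7" and "8" both map to class 1 (AFIB/AFL), while "0" maps to class 0 (NORMAL). Conceptually:

```python
# How class_map in the configuration above folds source labels into
# training classes, resolved against class_names for display.

class_map = {"0": 0, "7": 1, "8": 1}
class_names = ["NORMAL", "AFIB/AFL"]

labels = ["0", "7", "8", "0"]
mapped = [class_map[lbl] for lbl in labels]
named = [class_names[i] for i in mapped]
```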
    

Running a custom task w/ custom datasets

To create a custom task, check out the Bring-Your-Own-Task Guide.

To create a custom dataset, check out the Bring-Your-Own-Dataset Guide.