CLI Reference

HeliosAOT is a Python-based CLI whose commands take configurable arguments. Each command accepts its options directly via command-line flags, via a YAML configuration file, or a combination of both; when an option is set in both places, the flag takes precedence over the YAML value.

Commands

  • convert: Convert a LiteRT model to a standalone C inference module.

Command: convert

The convert command emits a standalone C inference module—a set of portable .c/.h files implementing your model’s operators as optimized Ambiq-tuned kernels.

Usage

helios-aot convert [OPTIONS]

Passing arguments

  • Via flags: specify each option on the command line:

    helios-aot convert \
      --model-path ad01-int8.tflite \
      --output-path ./out \
      --module-name ad01-int8 \
      --module-type zephyr \
      --prefix ad01 \
      --memory-planner greedy \
      --include-test \
      --subgraph 0 \
      --verbose 1
    
  • Via YAML: provide a config file (an example configuration sketch follows this list):

    helios-aot convert --path ad01-int8.yaml
    
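The ad01-int8.yaml file referenced above could mirror the flag-based invocation from the first bullet. This is only a sketch: it assumes the YAML keys are the flag names in snake_case (e.g. model_path for --model-path), which this reference does not confirm.

    # ad01-int8.yaml -- illustrative sketch; key names assume a snake_case
    # mapping of the CLI flags and are not confirmed by this reference.
    model_path: ad01-int8.tflite
    output_path: ./out
    module_name: ad01-int8
    module_type: zephyr
    prefix: ad01
    memory_planner: greedy
    include_test: true
    subgraph: 0
    verbose: 1
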
Mixing YAML and CLI flags

The CLI allows providing a YAML file and CLI flags at the same time. CLI flags take precedence over YAML options and will override any equivalent options in the YAML file. For example, you can use a YAML file for most options and override the --verbose level directly on the command line.

helios-aot convert --path ad01-int8.yaml --verbose 2

Available Arguments

The following arguments are available for the convert command; default values are shown in the table below. The --path option loads a YAML configuration file, which can contain any of the other options listed.

Flag                           Type         Default         Description
--path <FILE>                  Path         None            Load conversion arguments from a YAML configuration file.
--model-path <FILE>            Path         model.tflite    Path to LiteRT flatbuffer file.
--output-path <DIR>            Path         output.zip      Base path for output module. Can also be a zip file.
--module-name <NAME>           string       helios_aot_nn   Name used for module.
--module-type <TYPE>           enum         neuralspot      Type of output module: neuralspot or zephyr.
--prefix <PREFIX>              string       aot             Prefix added to sources to provide unique namespace.
--memory-planner <TYPE>        enum         greedy          Memory planning strategy for tensor allocation.
--operator-attributes <JSON>   JSON array   []              Operator attributes. For complex scenarios, define attributes via YAML file.
--include-test                 flag         false           Include a test harness in the generated module.
--subgraph <INDEX>             integer      0               Subgraph index to compile (for multi-subgraph LiteRT models).
--model-version <VERSION>      string       v1.0.0          Version of the model being compiled.
--verbose <LEVEL>              integer      1               Verbosity level (0 = quiet, 1 = normal, 2 = debug).
--log-file <FILE>              Path         None            Optional log file path.

Operator attributes

Operator attributes provide a powerful way to customize the behavior of specific operators in your model. You can specify attributes for individual operators, including their memory placement and other properties. Refer to the Operator attributes section for more details.
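
As a purely illustrative sketch, per-operator attributes in a YAML configuration could take a shape like the one below. Every key name shown here (operator_attributes, op_index, location) is hypothetical and is not defined by this reference; the actual schema is documented in the Operator attributes section.

    # Hypothetical sketch only -- key names and values are assumptions,
    # not the real schema; see the Operator attributes section.
    operator_attributes:
      - op_index: 3          # which operator in the subgraph to customize (assumed key)
        location: sram       # example memory-placement property (assumed key/value)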


For use cases and examples of the convert command, see the Examples section.