HeliaAOT for First-Time Users
This short guide is aimed at data scientists and firmware engineers who are new to HeliaAOT and need a fast mental model of what the tool does (and does not do) when targeting Ambiq MCUs.
What HeliaAOT Gives You
- Ahead-of-time compilation of a trained model (TFLite or AIR input) into self-contained C code tuned for Ambiq Cortex-M parts.
- Static memory planning that lays out buffers across MRAM/SRAM/DTCM/ITCM/PSRAM based on the selected platform.
- Optional Zephyr/neuralSPOT/CMake scaffolding so the generated module can be dropped into an app without extra glue.
- A deterministic, interpreter-free inference path—helpful for code review, certification, and power/latency profiling (a minimal usage sketch follows this list).
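To make "self-contained C code" concrete, here is a minimal sketch of what calling a generated module might look like from firmware. All names and sizes below (model_init, model_invoke, MODEL_INPUT_LEN, MODEL_OUTPUT_LEN) are hypothetical placeholders, not HeliaAOT's actual generated API; check the emitted headers for the real names and signatures.

```c
#include <stdint.h>

/* Placeholder declarations standing in for the generated header.
 * These names, signatures, and sizes are hypothetical, not HeliaAOT's
 * actual output -- consult the emitted sources for the real API. */
#define MODEL_INPUT_LEN  490    /* illustrative size only */
#define MODEL_OUTPUT_LEN 12     /* illustrative size only */
int model_init(void);
int model_invoke(const int8_t *input, int8_t *output);

/* Buffers are sized at generation time: no interpreter object, no arena
 * resizing, and no heap allocation at runtime. */
static int8_t input[MODEL_INPUT_LEN];
static int8_t output[MODEL_OUTPUT_LEN];

int run_one_inference(void)
{
    if (model_init() != 0) {   /* one-time setup of constants and scratch */
        return -1;
    }

    /* Fill `input` with preprocessed, quantized samples here. */

    /* A single call executes the whole graph; timing this call gives a
     * deterministic latency figure for power/latency profiling. */
    if (model_invoke(input, output) != 0) {
        return -1;
    }

    /* `output` now holds the result (e.g., quantized class scores). */
    return 0;
}
```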
What You Need to Provide
- A model file (TFLite today; ONNX is planned).
- A target platform choice: either a built-in board name or a custom platform with CPU, clock speeds, memory sizes, preferred memory order, and minimum alignment.
- Conversion flags or a small YAML file that sets module output paths and optional test generation.
What HeliaAOT Does Not Do
- Training, quantization, or dataset handling (do these upstream and feed in a ready model).
- Automatic platform inference—custom boards must declare CPU, clocks, and memory explicitly.
- Runtime scheduling of dynamic shapes; the emitted code assumes static shapes and buffer sizes (see the sketch after this list).
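Concretely, "static shapes and buffer sizes" means every tensor dimension is baked in when the module is generated. The constants below are invented for illustration only (the generated names and values will differ); the point is that nothing resizes at runtime, so variable-length data must be windowed or padded upstream, and a shape change means regenerating the module.

```c
/* Hypothetical generated constants -- illustrative names and values only.
 * Each dimension is fixed at generation time. */
#define MODEL_INPUT_ROWS  49
#define MODEL_INPUT_COLS  10
#define MODEL_ARENA_BYTES 24576   /* scratch memory planned statically */

/* Streaming or variable-length inputs must be windowed/padded to these
 * fixed sizes before invoking the model; there is no runtime reshape path. */
static int8_t frame[MODEL_INPUT_ROWS * MODEL_INPUT_COLS];
```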
Quick Start Pattern
- Start with a known board (e.g., apollo510_evb) to validate your model flow end-to-end.
- Generate a module with --test.enabled so you get a small correctness harness for on-device sanity checks.
- Profile the same model with neuralSPOT's Autodeploy workflow to compare TFLM vs HeliaAOT performance and footprint.
- When ready, fold the generated module into your firmware app (Zephyr or neuralSPOT) and run the on-device tests, as sketched below.
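One way to combine the last two steps on the device is to run the generated correctness harness once at boot and only fall through to the normal application loop if it passes. The entry points below (model_init, model_test_run) are assumed placeholders, not the names HeliaAOT actually emits; substitute whatever the generated sources and the --test.enabled harness define.

```c
#include <stdio.h>

/* Hypothetical entry points -- placeholders for the generated module and
 * the correctness harness produced with --test.enabled. The real names
 * live in the generated sources. */
int model_init(void);
int model_test_run(void);   /* assumed to return 0 when outputs match the reference */

int main(void)
{
    if (model_init() != 0) {
        printf("model init failed\n");
        return 1;
    }

    /* Run the generated sanity check once before trusting the model on
     * this board/toolchain combination. */
    if (model_test_run() != 0) {
        printf("generated test harness FAILED\n");
        return 1;
    }
    printf("generated test harness passed\n");

    /* Normal application loop: acquire data, preprocess, invoke, act. */
    return 0;
}
```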
Limitations to Keep in Mind
- Operator support is currently limited to the kernels registered in this repo; check the registries reference before relying on a layer.
- Mixed-precision flows are constrained by the available kernels.
- Generated sources are static; rebuild when the model, platform, or planner settings change.
When to Reach for YAML vs CLI
- Use CLI flags for quick experiments and smoke tests.
- Use a YAML config when you want a repeatable recipe checked into source control (models, module paths, planner, and platform in one place).