# Getting Started
Welcome to the heliaRT getting-started guide. heliaRT keeps the familiar TensorFlow Lite for Microcontrollers programming model and adds Ambiq-focused runtime and kernel optimizations for Apollo platforms.
## Start Here
Choose the setup path that matches how you want to evaluate or integrate heliaRT:
- Zephyr setup: integrate heliaRT into a Zephyr west workspace using either the raw module or the prebuilt release bundle.
- neuralSPOT setup: profile and deploy a `.tflite` model with `ns_autodeploy` using a fast Ambiq-oriented workflow.
- Source builds: build heliaRT directly when you need a custom environment or tighter control over the build.
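For the Zephyr raw-module path, heliaRT is pulled into the workspace through the west manifest. The fragment below is a sketch only: the project name, URL, revision, and path are placeholders, not real values, so substitute the ones from your Ambiq-provided release.

```yaml
# Sketch of a west.yml entry for the heliaRT raw module.
# Name, URL, revision, and path below are placeholders (assumptions),
# not the actual coordinates of the heliaRT repository.
manifest:
  projects:
    - name: heliart                          # placeholder
      url: https://example.com/heliart.git   # placeholder
      revision: main                         # placeholder
      path: modules/lib/heliart              # placeholder
```

After editing the manifest, a `west update` fetches the module into the workspace.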
## Core Concepts
If you have already used TFLM, the high-level model is the same:
- `.tflite` flatbuffer models
- operator resolvers
- tensor arenas
- `MicroInterpreter`-based inference
- embedded-friendly logging and profiling
The main differences are in packaging, supported integration paths, and Ambiq-optimized kernels.
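Because heliaRT keeps the TFLM programming model, a minimal inference pass looks like standard TFLM code. The sketch below uses stock TFLM APIs and assumes a flatbuffer model compiled into the image as `g_model_data` (a hypothetical name) with an arena size chosen for that model; heliaRT substitutes Ambiq-optimized kernels underneath without changing this surface.

```cpp
// Minimal TFLM-style inference sketch. Requires the TFLM headers and a
// model array linked into the binary; g_model_data and the operator set
// below are illustrative assumptions, not part of heliaRT's API.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];  // .tflite flatbuffer (assumption)

constexpr int kArenaSize = 32 * 1024;       // sized for your model
alignas(16) static uint8_t tensor_arena[kArenaSize];

int RunOnce() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model actually uses.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddRelu();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data with quantized input here ...
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* output = interpreter.output(0);
  (void)output;  // read results from output->data
  return 0;
}
```

The tensor arena is a fixed static buffer: if `AllocateTensors()` fails, the usual fix is to increase `kArenaSize` for the model in question.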
## Setup Paths at a Glance
| Path | Best for | Notes |
|---|---|---|
| Zephyr raw module | Source-visible integration and custom builds | Public-safe source path uses Reference or open CMSIS-NN; HELIA requires a separate Ambiq-provided module |
| Zephyr prebuilt bundle | Fast-start Zephyr integration | Ambiq-optimized kernels embedded in the archive |
| neuralSPOT with `ns_autodeploy` | Quick profiling and deployment | Good first step when evaluating a model on hardware |
| Source build | Custom build systems and low-level integration | Most flexible, but most manual |
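For the neuralSPOT path, profiling is driven from the host with `ns_autodeploy`. The invocation below is a sketch: the flag names follow neuralSPOT's tool but may differ between versions, so confirm them with `--help` before relying on them.

```shell
# Profile and deploy a .tflite model on attached Ambiq hardware.
# Flag names are assumptions based on neuralSPOT's ns_autodeploy;
# verify against your installed version with --help.
python -m ns_autodeploy --tflite-filename my_model.tflite --model-name my_model
```

This runs the model on the target and reports per-layer timing, which is usually enough for a first go/no-go evaluation before committing to a Zephyr integration.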
## Recommended Order
1. Start with neuralSPOT setup if your first goal is profiling and basic model validation.
2. Move to Zephyr setup when you are integrating heliaRT into a product or application workspace.
3. Use source builds when you need direct control over toolchains, archives, or custom packaging.