Built for Ambiq Edge AI

heliaRT is Ambiq's optimized TensorFlow Lite for Microcontrollers runtime for Apollo platforms. It is designed to help developers bring efficient inference to ultra-low-power Ambiq silicon, with tuned kernels that take advantage of Apollo CPU, DSP, and MVE capabilities where available.

Start Here

  • Getting Started: choose a setup path for Zephyr, neuralSPOT, or source builds
  • Features: understand how heliaRT maps onto familiar TFLM concepts
  • Examples: see recommended starting points for Ambiq application bring-up
  • Benchmarks: review performance-focused documentation for supported Ambiq targets

Why heliaRT

  • Optimized specifically for Ambiq Apollo devices
  • Focused on low-power embedded inference
  • Available as both source and prebuilt integration paths
  • Aligned with Ambiq developer workflows such as neuralSPOT AutoDeploy and Zephyr

Supported Ambiq Targets

heliaRT is maintained for Ambiq Apollo devices, including:

  • Apollo3
  • Apollo4
  • Apollo4 Plus
  • Apollo510

Start with neuralSPOT

Use the neuralSPOT setup path with ns_autodeploy when you want the fastest route to profiling a .tflite model on Ambiq hardware. See the neuralSPOT AutoDeploy guide for an end-to-end walkthrough.
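As a rough sketch of that flow (the flag names below are illustrative assumptions, not the authoritative option list; consult the AutoDeploy guide for the exact invocation):

```
# Profile a quantized .tflite model on an attached Apollo EVB.
# NOTE: flag names here are illustrative -- check the neuralSPOT
# AutoDeploy documentation for the supported options.
ns_autodeploy --tflite-filename model.tflite --model-name my_model
```

AutoDeploy handles model conversion, flashing, and on-device profiling in one step, which is why it is the recommended first stop before hand-rolling an integration.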

Integrate into Zephyr

Start with the Zephyr setup guide to choose among the supported heliaRT integration paths:

  • source module + open cmsis-nn
  • source module + ns-cmsis-nn (HELIA)
  • prebuilt release module

Then follow the Zephyr example for the exact application recipe: module placement, CMakeLists.txt, prj.conf, minimal bring-up code, build, flash, and UART logs.
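For orientation, a Zephyr application enabling the runtime would carry a prj.conf along these lines (the heliaRT-specific Kconfig symbol below is hypothetical; take the real symbol names from the Zephyr example recipe):

```
# prj.conf sketch -- CONFIG_HELIART is a hypothetical placeholder;
# use the Kconfig symbols documented in the heliaRT Zephyr example.
CONFIG_CPP=y
CONFIG_HELIART=y
```

Whichever of the three integration paths you pick, the application-side configuration stays small: the module choice mainly changes where the kernels come from (open cmsis-nn, ns-cmsis-nn, or a prebuilt library), not how your app consumes the runtime.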

Build from source

Use Getting Started when you need direct control over architecture, toolchain, and build configuration.
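A source build typically means driving the build system yourself, along these lines (the variable names below are illustrative assumptions, not the project's actual cache variables; the Getting Started guide lists the real configuration knobs):

```
# Cross-compile sketch -- toolchain file name and options are
# illustrative; see Getting Started for the supported configuration.
cmake -B build -DCMAKE_TOOLCHAIN_FILE=arm-none-eabi-gcc.cmake
cmake --build build
```

This path trades convenience for control: you pick the target architecture, toolchain, and kernel backends explicitly instead of inheriting them from neuralSPOT or the Zephyr module.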

Ready to get started? Head over to the Getting Started guide and bring up heliaRT on Ambiq hardware.