HELIA AI runtime for Ambiq silicon

Proven LiteRT workflows. Turbocharged on Ambiq silicon.

heliaRT is Ambiq's optimized LiteRT runtime: the same .tflite models and MicroInterpreter API, backed by HELIA kernels tuned for Cortex-M and Apollo SPOT silicon.

HELIA backend v1.13
230+ kernel variants
36 operators
18 CI build combos
3 toolchains
// LiteRT API, Ambiq-tuned implementation underneath
tflite::MicroMutableOpResolver<5> resolver;
resolver.AddConv2D();
resolver.AddFullyConnected();
resolver.AddSoftmax();

tflite::MicroInterpreter interpreter(
    model, resolver, arena, kArenaSize);
interpreter.AllocateTensors();
interpreter.Invoke();
Zephyr · Arm CMSIS-Pack · neuralSPOT-X · GCC · ATfE · Apollo510

What it is

A silicon-adjacent runtime layer in the HELIA AI stack

heliaRT sits between your LiteRT application and Ambiq silicon. It keeps the upstream programming model intact while routing supported operations through HELIA kernel paths that are tuned for Apollo-class MCUs and Ambiq's SPOT® (Subthreshold Power Optimized Technology) platform.

How it works

Keep your LiteRT surface. Swap the backend underneath.

1. Build or bring a quantized model

Use the same LiteRT model workflow you already have. Int8 and int16 variants map to separate optimized kernel paths where supported.
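For context on what the int8 path implies, LiteRT quantized models use standard affine quantization, where each real value maps to real = scale × (q − zero_point). A minimal sketch of that round-trip (generic quantization arithmetic, not heliaRT code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Affine (asymmetric) int8 quantization as used by LiteRT models:
//   real_value = scale * (quantized_value - zero_point)
int8_t quantize(float real, float scale, int32_t zero_point) {
  int32_t q = static_cast<int32_t>(std::round(real / scale)) + zero_point;
  return static_cast<int8_t>(std::clamp(q, int32_t{-128}, int32_t{127}));
}

float dequantize(int8_t q, float scale, int32_t zero_point) {
  return scale * (static_cast<int32_t>(q) - zero_point);
}
```

With scale 1/128 and zero point 0, 0.5f quantizes to 64 and dequantizes back exactly, while out-of-range values clamp to the int8 limits.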

2. Select HELIA at build time

Choose Zephyr Kconfig, neuralSPOT-X deployment, source / CMake, or prebuilt static libraries. The application-facing runtime stays familiar.
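For the Zephyr Kconfig route, selection comes down to two options in the application's prj.conf (the same options the quick start on this page uses):

```
# Enable heliaRT and route supported ops through the HELIA backend
CONFIG_HELIA_RT=y
CONFIG_HELIA_RT_BACKEND_HELIA=y
```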

3. Ship on Apollo silicon

Reference and CMSIS-NN remain available as fallbacks, while HELIA-covered ops take the Ambiq-tuned path for better latency and coverage.
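Conceptually, this routing is a per-operator lookup: if an optimized kernel is registered for an op, it runs; otherwise the reference implementation does. A simplified, self-contained sketch of that idea (an illustration of the fallback pattern, not the actual heliaRT dispatch code):

```cpp
#include <map>
#include <string>

// Toy model of per-operator backend dispatch: consult the optimized
// kernel table first, and fall back to the reference implementation
// for any op without an optimized entry.
using Kernel = int (*)(int);  // stand-in kernel signature

int reference_negate(int x) { return -x; }  // always-available fallback
int optimized_negate(int x) { return -x; }  // stand-in for a tuned kernel

struct Dispatcher {
  std::map<std::string, Kernel> optimized;

  Kernel resolve(const std::string& op, Kernel reference) const {
    auto it = optimized.find(op);
    return it != optimized.end() ? it->second : reference;
  }
};
```

The point of the pattern is that coverage gaps degrade gracefully: an op the backend does not cover still executes, just on the slower path.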

Integration paths

Start from the environment you already use

Toolchains

Three compiler paths, one tested release matrix

Every release is built across architecture, toolchain, and build-type combinations. ATfE is the recommended path for Cortex-M55 + MVE workloads, with GCC and Arm Compiler 6 available for teams that already standardize there.

Read the toolchain guide

Coverage

HELIA expands the optimized surface beyond CMSIS-NN

CMSIS-NN covers the common convolutional core. HELIA adds optimized paths for activation, reduce, data movement, comparison, arithmetic, and other categories that often fall back to Reference C upstream.

Open full operator matrix

Category              CMSIS-NN   HELIA
Conv / FC / Pooling   yes        yes
Activations           -          yes
Reduce                -          yes
Data movement         -          yes
230+ kernel variants across int8 / int16 / float paths
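To make the categories concrete, a Reduce op such as Mean over int8 data is the kind of kernel that upstream often leaves in Reference C. A naive, self-contained version of that computation (illustrative arithmetic only, not a HELIA kernel):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Naive int8 Mean reduction: accumulate in a wide integer, average in
// real-number space, then requantize to the output scale/zero-point.
// Optimized kernels vectorize the accumulation; the math is the same.
int8_t mean_int8(const std::vector<int8_t>& v, float in_scale, int32_t in_zp,
                 float out_scale, int32_t out_zp) {
  int64_t sum = 0;
  for (int8_t x : v) sum += x;
  float real_mean = in_scale * (static_cast<float>(sum) / v.size() - in_zp);
  int32_t q = static_cast<int32_t>(std::lround(real_mean / out_scale)) + out_zp;
  if (q < -128) q = -128;
  if (q > 127) q = 127;
  return static_cast<int8_t>(q);
}
```

Even this scalar version shows why per-op tuning pays off: the inner loop is a pure accumulation that maps directly onto SIMD lanes such as MVE on Cortex-M55.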

HELIA AI platform

Start with the runtime path. Grow into ahead-of-time deployment.

heliaRT is part of Ambiq's broader HELIA AI platform: silicon-adjacent tools for bringing trained models onto Apollo-class devices. Use heliaRT when your application already follows the LiteRT workflow and you want a familiar runtime path onto Ambiq silicon.

As models and products mature, heliaAOT opens a deeper deployment path: Ambiq's ahead-of-time compiler for teams that want to move beyond runtime integration and generate device-oriented inference code.

Explore heliaAOT

heliaRT (RT): LiteRT runtime path

heliaAOT (AOT): ahead-of-time compiler

Quick start

Add heliaRT to Zephyr in three small steps

1. Add the module to your west manifest:

manifest:
  projects:
    - name: helia-rt
      url: https://github.com/AmbiqAI/helia-rt
      revision: main
      path: modules/lib/helia-rt

2. Enable the backend in prj.conf:

CONFIG_HELIA_RT=y
CONFIG_HELIA_RT_BACKEND_HELIA=y

3. Fetch, build, and flash:

west update
west build -b apollo510_evb app
west flash

Next steps

Go deeper when you are ready to build

This landing page gives the product shape; the pages below get you into implementation details, working examples, and measurement data.