Troubleshooting
Common build, link, and runtime issues — and how to fix them.
Build Issues
Missing HELIA backend module
```
FATAL_ERROR: CONFIG_HELIA_RT_BACKEND_HELIA requires Ambiq's HELIA
acceleration module, currently distributed as ns-cmsis-nn.
```
Cause: The HELIA backend needs the ns-cmsis-nn module, which is not bundled in the public repo.
Fix: Do one of the following:
- Switch to the open CMSIS-NN backend: `CONFIG_HELIA_RT_BACKEND_CMSIS_NN=y`
- Switch to the Reference backend: `CONFIG_HELIA_RT_BACKEND_REFERENCE=y`
- Contact support.aitg@ambiq.com for access to ns-cmsis-nn
Missing CMSIS-NN module (Zephyr)
Fix: Add the CMSIS-NN module to your `west.yml`:

```yaml
- name: cmsis-nn
  url: https://github.com/zephyrproject-rtos/cmsis-nn
  revision: main
  path: modules/lib/cmsis-nn
```
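After editing the manifest, run `west update` to fetch the module into your workspace.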
Third-party download failures
Cause: The Makefile build auto-downloads CMSIS and ns-cmsis-nn on first run. This can fail behind corporate proxies.
Fix:
- Set the `https_proxy`/`http_proxy` environment variables
- Or manually download the dependencies and set `CMSIS_PATH`/`NS_CMSIS_NN_PATH`, as in the sketch below
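A minimal sketch, assuming the build accepts both paths as make variables; the proxy address and local paths are illustrative:

```sh
# Route the first-run downloads through a corporate proxy.
export https_proxy=http://proxy.example.com:8080
export http_proxy=http://proxy.example.com:8080
make

# Or skip the auto-download: fetch CMSIS and ns-cmsis-nn yourself,
# then point the build at the local copies.
make CMSIS_PATH=/opt/CMSIS NS_CMSIS_NN_PATH=/opt/ns-cmsis-nn
```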
Link Issues
Flash / ITCM overflow
Fix options:
- Switch to the SIZE build variant (`-Os`/`-Oz`)
- Reduce your operator resolver: register only the operators your model actually uses (see the sketch after this list)
- Use the prebuilt archive for the SIZE variant
- Place code in a larger memory region via the linker script
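A minimal sketch of a trimmed resolver, assuming a model that uses only Conv2D, FullyConnected, and Softmax; substitute your model's actual operator list:

```cpp
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// <3> sizes the resolver to exactly three operators. Only the Add*() calls
// made here pull their kernels into the link, so an exact list directly
// shrinks flash compared with registering every available operator.
static tflite::MicroMutableOpResolver<3> resolver;

void RegisterOps() {
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
}
```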
Duplicate symbol errors
Cause: Usually from linking both a prebuilt heliaRT archive and compiling kernels from source.
Fix: Use one or the other — never both. Remove either the `.a` or the source files from your build.
Runtime Issues
Arena too small
Fix:
- Increase the arena size in your application code
- Use `interpreter.arena_used_bytes()` after `AllocateTensors()` to find the actual minimum (see the sketch below)
- Align the arena to 16 bytes: `alignas(16) uint8_t tensor_arena[ARENA_SIZE];`
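A minimal measurement sketch, assuming `model` and `resolver` are already set up elsewhere; `ARENA_SIZE` is a deliberately generous starting value you shrink once the real requirement is known:

```cpp
#include <cstddef>

#include "tensorflow/lite/micro/micro_interpreter.h"

constexpr size_t ARENA_SIZE = 64 * 1024;  // generous first guess
alignas(16) static uint8_t tensor_arena[ARENA_SIZE];

size_t MeasureArena(const tflite::Model* model,
                    tflite::MicroOpResolver& resolver) {
  tflite::MicroInterpreter interpreter(model, resolver,
                                       tensor_arena, ARENA_SIZE);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return 0;  // Still too small: grow ARENA_SIZE and retry.
  }
  // The actual minimum for this model; keep some headroom before shrinking.
  return interpreter.arena_used_bytes();
}
```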
Incorrect output values
Possible causes:
- Quantization mismatch: the model expects int8 input but you're feeding float (or vice versa). Check `input->type`.
- Wrong input scaling: the input must match the model's quantization parameters (`input->params.scale` and `zero_point`); see the sketch below.
- Model not compatible: ensure the `.tflite` was quantized for int8/int16 LiteRT for Micro, not float-only LiteRT.
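A minimal sketch of manual int8 input quantization; `samples`/`count` stand in for your own float input buffer, and `interpreter` is an already-constructed `tflite::MicroInterpreter`:

```cpp
#include <cmath>
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"

void QuantizeInput(tflite::MicroInterpreter& interpreter,
                   const float* samples, size_t count) {
  TfLiteTensor* input = interpreter.input(0);
  if (input->type != kTfLiteInt8) return;  // model is not int8-quantized

  const float scale = input->params.scale;
  const int32_t zero_point = input->params.zero_point;
  for (size_t i = 0; i < count; ++i) {
    // Map each float sample into the model's int8 domain, clamped to range.
    float q = std::round(samples[i] / scale) + static_cast<float>(zero_point);
    q = std::fmin(std::fmax(q, -128.0f), 127.0f);
    input->data.int8[i] = static_cast<int8_t>(q);
  }
}
```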
Model loads but no inference output
Check: Did you call `interpreter.AllocateTensors()` before `interpreter.Invoke()`? Forgetting this is the most common beginner mistake.
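The required order, as a minimal sketch (error handling trimmed; `FillInput` is a hypothetical stand-in for your input-loading code):

```cpp
bool RunOnce(tflite::MicroInterpreter& interpreter) {
  // 1. Allocate tensors first; input/output pointers are invalid before this.
  if (interpreter.AllocateTensors() != kTfLiteOk) return false;

  // 2. Only now fill the input tensor.
  FillInput(interpreter.input(0));  // hypothetical helper

  // 3. Run inference, then read the output.
  if (interpreter.Invoke() != kTfLiteOk) return false;
  TfLiteTensor* output = interpreter.output(0);
  (void)output;  // consume results here
  return true;
}
```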