Operator Coverage

heliaRT provides three kernel backends. Every operator has a Reference implementation. The CMSIS-NN and HELIA columns show where optimized implementations replace the generic code.

How to read this table

  • REF = Reference (generic C, all architectures)
  • CMSIS = open-source Arm CMSIS-NN (Cortex-M only)
  • HELIA = Ambiq-optimized heliaCORE (Cortex-M only)
  • ✅ = optimized kernel exists
  • — = no optimized kernel; falls back to Reference

Compute Operators

| Operator | REF | CMSIS | HELIA | Notes |
|---|:-:|:-:|:-:|---|
| CONV_2D | ✅ | ✅ | ✅ | |
| DEPTHWISE_CONV_2D | ✅ | ✅ | ✅ | |
| FULLY_CONNECTED | ✅ | ✅ | ✅ | HELIA adds A16W16 path |
| TRANSPOSE_CONV | ✅ | ✅ | ✅ | |
| BATCH_MATMUL | ✅ | ✅ | ✅ | |
| SVDF | ✅ | ✅ | ✅ | |
| UNIDIRECTIONAL_SEQUENCE_LSTM | ✅ | ✅ | ✅ | |

Pooling & Padding

| Operator | REF | CMSIS | HELIA | Notes |
|---|:-:|:-:|:-:|---|
| AVERAGE_POOL_2D / MAX_POOL_2D | ✅ | ✅ | ✅ | |
| PAD / PADV2 | ✅ | ✅ | ✅ | |
| SOFTMAX | ✅ | ✅ | ✅ | |
| TRANSPOSE | ✅ | ✅ | ✅ | |
| MAXIMUM / MINIMUM | ✅ | ✅ | ✅ | |

Activations

| Operator | REF | CMSIS | HELIA | Notes |
|---|:-:|:-:|:-:|---|
| RELU / RELU6 / RELU_N1_TO_1 | ✅ | — | ✅ | HELIA-exclusive |
| LOGISTIC (sigmoid) | ✅ | — | ✅ | HELIA-exclusive |
| TANH | ✅ | — | ✅ | HELIA-exclusive |
| LEAKY_RELU | ✅ | — | ✅ | HELIA-exclusive |
| HARD_SWISH | ✅ | — | ✅ | HELIA adds int16 path |

Arithmetic

| Operator | REF | CMSIS | HELIA | Notes |
|---|:-:|:-:|:-:|---|
| ADD | ✅ | ✅ | ✅ | |
| MUL | ✅ | ✅ | ✅ | |
| SUB | ✅ | — | ✅ | HELIA-exclusive |
| EQUAL / NOT_EQUAL / GREATER / LESS / etc. | ✅ | — | ✅ | HELIA-exclusive |

Data Movement

| Operator | REF | CMSIS | HELIA | Notes |
|---|:-:|:-:|:-:|---|
| CONCATENATION | ✅ | — | ✅ | HELIA-exclusive |
| RESHAPE | ✅ | — | ✅ | HELIA-exclusive |
| SPLIT | ✅ | — | ✅ | HELIA-exclusive |
| SPLIT_V | ✅ | — | ✅ | HELIA-exclusive |
| PACK | ✅ | — | ✅ | HELIA-exclusive |
| SQUEEZE | ✅ | — | ✅ | HELIA-exclusive |
| STRIDED_SLICE | ✅ | — | ✅ | HELIA-exclusive |
| FILL | ✅ | — | ✅ | HELIA-exclusive |
| ZEROS_LIKE | ✅ | — | ✅ | HELIA-exclusive |
| DEQUANTIZE | ✅ | — | ✅ | HELIA-exclusive |

Quantization

| Operator | REF | CMSIS | HELIA | Notes |
|---|:-:|:-:|:-:|---|
| QUANTIZE | ✅ | — | ✅ | HELIA-exclusive (common path) |

Reduce

| Operator | REF | CMSIS | HELIA | Notes |
|---|:-:|:-:|:-:|---|
| MEAN / REDUCE_MAX | ✅ | — | ✅ | HELIA-exclusive |

Summary

| Backend | Optimized kernels | Coverage |
|---|---|---|
| Reference | 109 | All operators (generic C) |
| CMSIS-NN | 14 | Core compute-heavy ops |
| HELIA | 36 | Superset of CMSIS-NN + 22 additional |

HELIA advantage

HELIA covers every operator that CMSIS-NN does, plus 22 additional operators that would otherwise fall back to slower generic Reference kernels. This means fewer "silent fallbacks" and more consistent performance across your entire model.

Next Steps