Operators API
The 'operators' module provides classes that represent the operators in the network graph. All of these
classes inherit from the AotOperator class, which defines a common interface, and each operator class is
responsible for generating the C code for its specific operation. The OperatorMap dictionary maps LiteRT
operator IDs to the corresponding operator classes, so the appropriate class can be looked up and
instantiated directly from the operator ID found in a LiteRT model.
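As a rough illustration of this lookup-and-instantiate pattern, the sketch below shows how a base class, a
subclass, and an OperatorMap entry could fit together. The AotOperator interface shown here (an __init__
taking the parsed operator and a generate_code() method), the Add stub, the numeric key, and the
build_operator helper are assumptions for illustration only, not the module's actual signatures.

```python
# Minimal sketch of the dispatch pattern described above. Names and
# signatures are illustrative assumptions, not the module's real API.

class AotOperator:
    """Hypothetical common interface shared by all operator classes."""

    def __init__(self, op):
        self.op = op  # the operator entry read from the LiteRT model

    def generate_code(self):
        """Return the C code implementing this operation."""
        raise NotImplementedError


class Add(AotOperator):
    """Element-wise addition (stub)."""

    def generate_code(self):
        # A real subclass emits C tailored to its tensors and quantization.
        return "/* element-wise add */"


# OperatorMap maps LiteRT builtin operator IDs to operator classes
# (BuiltinOperator.ADD is 0 in the LiteRT/TFLite schema).
OperatorMap = {0: Add}


def build_operator(op_id, op):
    """Look up and instantiate the operator class for a LiteRT operator ID."""
    op_class = OperatorMap.get(op_id)
    if op_class is None:
        raise NotImplementedError(f"Unsupported LiteRT operator ID: {op_id}")
    return op_class(op)
```

Keeping the ID-to-class mapping in a single dictionary means that supporting a new LiteRT operator only
requires defining a new AotOperator subclass and registering it in OperatorMap.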
Available Operators
- Add: Element-wise addition
- ArgMax: Argmax reduction
- ArgMin: Argmin reduction
- AssignVariable: Assigns a value to a variable
- AveragePool: Average pooling operation
- BatchMatMul: Batch matrix multiplication
- BatchToSpaceND: Batch to space transformation
- Comparison: Comparison operations (e.g., equal, not equal, less than)
- Concatenation: Concatenates tensors along a specified axis
- Conv: 2D convolution operation
- DepthwiseConv: Depthwise 2D convolution operation
- DepthToSpace: Rearranges data from depth into blocks of spatial data
- Dequantize: Dequantizes a tensor
- EthosU: Ethos-U NPU operator
- ExpandDims: Expands the shape of a tensor
- Fill: Fills a tensor with a specified value
- Gather: Gathers elements along an axis
- GatherNd: Gathers slices using multi-dimensional indices
- FullyConnected: Fully connected layer
- HardSwish: Hard swish activation function
- LeakyRelu: Leaky ReLU activation function
- Logistic: Logistic (sigmoid) activation function
- MaxPool: Max pooling operation
- Maximum: Element-wise maximum
- Mean: Computes the mean of a tensor along specified axes
- Minimum: Element-wise minimum
- Mul: Element-wise multiplication
- Pack: Packs a list of tensors into a single tensor
- Pad: Pads a tensor
- Quantize: Quantizes a tensor
- ReadVariable: Reads a variable
- ReduceMax: Reduces a tensor by taking the maximum along specified axes
- ReduceMin: Reduces a tensor by taking the minimum along specified axes
- Relu: Rectified Linear Unit activation
- Reshape: Reshapes a tensor
- Shape: Returns the shape of a tensor
- Softmax: Softmax activation
- SpaceToBatchND: Space to batch transformation
- SpaceToDepth: Rearranges data from spatial blocks into depth
- Split: Splits a tensor into multiple tensors along a specified axis
- Squeeze: Removes dimensions of size 1 from the shape of a tensor
- StridedSlice: Slices a tensor with strides
- Sub: Element-wise subtraction
- Svdf: Singular value decomposition filter (SVDF) operation
- Tanh: Hyperbolic tangent activation function
- TransposeConv: Transpose convolution operation
- Transpose: Transposes a tensor
- Unpack: Unpacks a tensor into multiple tensors along a specified axis
- ZerosLike: Generates a tensor of zeros
Copyright 2025 Ambiq. All Rights Reserved.