Operators

Supported Operators

The following operators are supported in the conversion process. The operator_attributes feature allows you to customize the behavior of these operators.

| Operator | Data Types | Notes |
| --- | --- | --- |
| ADD | INT8 | Fully supported |
| ASSIGN_VARIABLE | INT8 | Fully supported |
| AVERAGE_POOL_2D | INT8 | Fully supported |
| BATCH_MATMUL | INT8 | Fully supported |
| CONCATENATION | INT8 | Fully supported |
| CONV_2D | INT8 | Fully supported |
| DEPTHWISE_CONV_2D | INT8 | Fully supported |
| DEQUANTIZE | INT8 | S8->FP32 |
| FILL | INT8 | Fully supported |
| FULLY_CONNECTED | INT8 | Fully supported |
| HARD_SWISH | INT8 | Fully supported |
| LEAKY_RELU | INT8 | Fully supported |
| LOGISTIC | INT8 | Fully supported |
| MAX_POOL_2D | INT8 | Fully supported |
| MAXIMUM | INT8 | Fully supported |
| MINIMUM | INT8 | Fully supported |
| MUL | INT8 | Fully supported |
| PACK | INT8 | Fully supported |
| PAD | INT8 | Fully supported |
| READ_VARIABLE | INT8 | Fully supported |
| QUANTIZE | INT8 | FP32->S8 |
| RELU | INT8 | Fully supported |
| RELU6 | INT8 | Fully supported |
| RESHAPE | INT8 | Fully supported |
| SHAPE | INT8 | Fully supported |
| SOFTMAX | INT8 | Fully supported |
| SQUEEZE | INT8 | Fully supported |
| STRIDED_SLICE | INT8 | Fully supported |
| TANH | INT8 | Fully supported |
| TRANSPOSE_CONV_2D | INT8 | Fully supported |
| TRANSPOSE | INT8 | Fully supported |
| ZEROS_LIKE | INT8 | Fully supported |
Missing an operator?

If an operator needed for your use case is missing, please reach out to us. We are continuously working to expand the list of supported operators and would love to hear your feedback: Ambiq AI team.

Operator Attributes

The operator attributes feature allows you to customize the behavior of specific operators during the conversion process. This is useful for optimizing performance or adapting to specific hardware constraints; for example, you can specify the memory placement for certain operators. Operator attributes are defined in the YAML configuration file: the operator_attributes section takes a list of entries, each of which matches operators by type or by operator ID(s) and provides the key-value attribute pairs to apply, as shown below.

operator_attributes:
  # Matches all operators
  - type: "*"
    attributes:
      weights_memory: tcm
      scratch_memory: tcm
  # Matches all operators of type CONV_2D
  - type: "CONV_2D"
    attributes:
      weights_memory: sram
      scratch_memory: sram
  # Matches operators with IDs 1 and 2
  - ident:
      - 1
      - 2
    attributes:
      weights_memory: mram
      scratch_memory: mram
Attribute Precedence

For each operator in the model, the convert routine selects all attribute entries that match the operator's type or ID. Entries with higher specificity (i.e., those matching specific operator IDs) take precedence over more general ones, and the matching attributes are consolidated into a single set for each operator. To verify the attributes applied to each operator, set the verbose level to 2 during conversion; this prints detailed information about the attributes assigned to each operator in the generated module.
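
To make the consolidation concrete, here is a minimal Python sketch of the matching and precedence logic, assuming the entry layout from the example above. The function resolve_attributes is illustrative only and is not the converter's actual implementation.

def resolve_attributes(op_type, op_id, entries):
    # Illustrative sketch only -- not the converter's actual code.
    # Collect matching attributes at each specificity level, then merge
    # so that more specific matches override more general ones.
    wildcard, by_type, by_id = {}, {}, {}
    for entry in entries:
        attrs = entry.get("attributes", {})
        if entry.get("type") == "*":
            wildcard.update(attrs)
        elif entry.get("type") == op_type:
            by_type.update(attrs)
        elif op_id in entry.get("ident", []):
            by_id.update(attrs)
    merged = {}
    merged.update(wildcard)   # least specific: wildcard defaults
    merged.update(by_type)    # type match overrides wildcard
    merged.update(by_id)      # most specific: ID match wins
    return merged

# Entries from the example configuration above, expressed as Python dicts.
entries = [
    {"type": "*", "attributes": {"weights_memory": "tcm", "scratch_memory": "tcm"}},
    {"type": "CONV_2D", "attributes": {"weights_memory": "sram", "scratch_memory": "sram"}},
    {"ident": [1, 2], "attributes": {"weights_memory": "mram", "scratch_memory": "mram"}},
]

print(resolve_attributes("CONV_2D", 1, entries))
# {'weights_memory': 'mram', 'scratch_memory': 'mram'}
print(resolve_attributes("MAX_POOL_2D", 7, entries))
# {'weights_memory': 'tcm', 'scratch_memory': 'tcm'}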

Example: Customizing Memory Placement

Let's say we have a model (simple.tflite) with the following architecture:

%%{ init: {
    "flowchart": {
      "nodeSpacing": 20,
      "rankSpacing": 30
    }
  }
}%%
flowchart TB;
    A[Input] ==> B["CONV_2D (ID:1)"]
    B ==> C["CONV_2D (ID:2)"]
    C ==> D["MAX_POOL_2D (ID:3)"]
    D ==> E["FULLY_CONNECTED (ID:4)"]
    E ==> F["SOFTMAX (ID:5)"]

Let's define an initial YAML configuration file (simple.yaml) for the conversion:

module_name: simple
module_type: zephyr
model_path: simple.tflite
output_path: simple.zip
prefix: simple
memory_planner: greedy
verbose: 1

By default, all operators are typically placed in TCM. However, TCM is extremely limited in size. In this model, CONV_2D, MAX_POOL_2D, and SOFTMAX are the least sensitive to memory placement, while FULLY_CONNECTED is the most sensitive. We can therefore change the defaults: map all operators to MRAM, and then set the memory placement for FULLY_CONNECTED to TCM. The new YAML configuration file (simple.yaml) will look like this:

module_name: simple
module_type: zephyr
model_path: simple.tflite
output_path: simple.zip
prefix: simple
memory_planner: greedy
verbose: 1
operator_attributes:
  # Default memory placement for all operators
  - type: "*"
    attributes:
      weights_memory: mram
      scratch_memory: sram
  # All Fully Connected operators will be placed in TCM
  - type: FULLY_CONNECTED
    attributes:
      weights_memory: tcm
      scratch_memory: tcm

After experimenting, let's say we find that the CONV_2D operator with ID:1 is too slow when placed in MRAM. We can override the memory placement for this operator to SRAM. The final YAML configuration file (simple.yaml) will be the following:

module_name: simple
module_type: zephyr
model_path: simple.tflite
output_path: simple.zip
prefix: simple
memory_planner: greedy
verbose: 1
operator_attributes:
  # Default memory placement for all operators
  - type: "*"
    attributes:
      weights_memory: mram
      scratch_memory: sram
  # Place CONV_2D operator w/ ID:1 in SRAM
  - ident:
      - 1
    attributes:
      weights_memory: sram
      scratch_memory: sram
  # All Fully Connected operators will be placed in TCM
  - type: FULLY_CONNECTED
    attributes:
      weights_memory: tcm
      scratch_memory: tcm
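
Based on the precedence rules described above, this configuration should resolve to the following placements for the operators in simple.tflite (ID matches override type matches, which override the wildcard defaults):

CONV_2D (ID:1): weights_memory: sram, scratch_memory: sram (matched by ident)
CONV_2D (ID:2): weights_memory: mram, scratch_memory: sram (wildcard default)
MAX_POOL_2D (ID:3): weights_memory: mram, scratch_memory: sram (wildcard default)
FULLY_CONNECTED (ID:4): weights_memory: tcm, scratch_memory: tcm (type match)
SOFTMAX (ID:5): weights_memory: mram, scratch_memory: sram (wildcard default)

You can confirm the attributes actually applied to each operator by setting verbose: 2, as described in the Attribute Precedence section.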