# Model Exporting

## Introduction
Export mode converts a trained TensorFlow model into a format that can be deployed onto Ambiq's family of SoCs. Currently, the command converts the TensorFlow model into both TensorFlow Lite (TFL) and TensorFlow Lite for Microcontrollers (TFLM) variants and verifies that the two models' outputs match. The activations and weights can be quantized by configuring the `quantization` section in the configuration file or by setting the `quantization` parameter in the code.
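To illustrate what quantization does to the weights, the following stdlib-only sketch applies symmetric affine quantization, the scheme underlying TFLite's int8 and 16x8 modes (the function name and the toy weight values are illustrative, not part of the export command):

```python
def quantize_symmetric(values, bits=8):
    """Symmetric quantization: q = round(x / scale), with
    scale = max|x| / (2**(bits-1) - 1), clamped to the signed range."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    quantized = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return quantized, scale

# Toy weights; in a 16x8 scheme, weights use bits=8 and activations bits=16.
weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale = quantize_symmetric(weights, bits=8)
# Dequantized values q[i] * scale approximate the original weights.
```

The `16x8` option referenced below follows this idea: 8-bit weights paired with 16-bit activations, trading a small memory increase for better activation precision.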
- Load the configuration data (e.g. `configuration.json`)
- Load the test data (e.g. `test.pkl`)
- Load the trained model (e.g. `model.keras`)
- Quantize the model (e.g. `16x8`)
- Convert the model (e.g. `TFL`, `TFLM`)
- Verify the models' outputs match
- Save artifacts (e.g. `model.tflite`)
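The steps above can be sketched as a simple pipeline driver. This is a hypothetical illustration of the control flow only; the callable names are placeholders, not HeartKit's API:

```python
def export_pipeline(load_config, load_data, load_model, quantize, convert, verify, save):
    """Run the export steps in order. Each argument is a callable standing in
    for one step of the list above (illustrative stubs, not the real API)."""
    config = load_config()          # e.g. configuration.json
    data = load_data()              # e.g. test.pkl
    model = load_model()            # e.g. model.keras
    qmodel = quantize(model, config)
    tfl, tflm = convert(qmodel)     # TFL and TFLM variants
    if not verify(tfl, tflm, data):
        raise RuntimeError("converted model outputs diverge")
    return save(tfl)                # e.g. model.tflite

# Toy run: the "model" is a list of floats, "quantize" just scales it,
# and "convert" produces two identical copies so verification passes.
artifact = export_pipeline(
    load_config=lambda: {"scale": 2.0},
    load_data=lambda: [1.0, 2.0],
    load_model=lambda: [0.5, 1.5],
    quantize=lambda m, c: [v * c["scale"] for v in m],
    convert=lambda m: (m, list(m)),
    verify=lambda a, b, data: a == b,
    save=lambda m: ("model.tflite", m),
)
```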
## Usage

### CLI
The following command will export a rhythm model using the reference configuration.
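A sketch of the invocation is below. The flag names are assumptions based on HeartKit's task/mode/config CLI pattern; check `heartkit --help` for the exact options.

```shell
# Hypothetical invocation -- verify flag names against `heartkit --help`.
heartkit --task rhythm --mode export --config ./configuration.json
```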
### Python

The model can be exported using the following snippet:
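The snippet below is a sketch assuming HeartKit's task-factory style API (`TaskFactory.get` and `HKTaskParams` are assumptions based on the parameter class referenced under Arguments; consult the `HKTaskParams` reference for the required fields):

```python
# Sketch only: API names are assumptions; see the HKTaskParams reference.
import heartkit as hk

# Fill in dataset, model, and quantization fields as required.
params = hk.HKTaskParams(...)

task = hk.TaskFactory.get("rhythm")
task.export(params)
```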
## Arguments

Please refer to `HKTaskParams` for the list of arguments that can be used with the `export` command.