softmax
Classes
SoftmaxOperator
SoftmaxOperator(op: AirOperator, model: AirModel, platform: SocPlatform, prefix: str = 'aot', attributes: dict[str, str] = {})
SOFTMAX operator.
This operator computes the softmax activation function on the input tensor.
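Concretely, for an input vector x and scaling parameter beta (see preprocess_softmax_scaling below), the operator computes:

    softmax(x)_i = exp(beta * x_i) / sum_j exp(beta * x_j)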
Parameters:
- op (AirOperator) – The AIR operator to wrap.
- model (AirModel) – The AIR model.
- platform (SocPlatform) – The target platform for code generation.
- prefix (str, default: 'aot') – Prefix for generated code files.
- attributes (dict[str, str], default: {}) – Attributes for template values.
Functions
compute_values
emit
Generate the source code for the operator.
Parameters:
- save_path (Path) – Path to save the generated code.
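A minimal usage sketch, assuming an AirModel with a SOFTMAX operator and a SocPlatform are already in hand; the import path and the setup of op, model, and platform are hypothetical and depend on the surrounding AIR toolchain:

    from pathlib import Path

    from softmax import SoftmaxOperator  # hypothetical import path

    # `op`, `model`, and `platform` are assumed to come from the AIR
    # toolchain: the SOFTMAX AirOperator to wrap, the AirModel that
    # contains it, and the target SocPlatform.
    softmax = SoftmaxOperator(op, model, platform, prefix="aot")

    # Write the generated operator source under build/aot/.
    softmax.emit(save_path=Path("build/aot"))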
Functions
preprocess_softmax_scaling
preprocess_softmax_scaling(beta: float, input_scale: float, scaled_diff_integer_bits: int = 5) -> AirFixedPointScale
Mimics litert::PreprocessSoftmaxScaling.
Parameters:
- beta (float) – The softmax beta parameter (often 1.0).
- input_scale (float) – The quantization scale for the input tensor.
- scaled_diff_integer_bits (int, default: 5) – Number of integer bits to reserve in the scaled difference (typically 5).
Returns:
- AirFixedPointScale – The fixed-point scale for the beta-scaled input difference.
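For intuition, a minimal Python sketch of the TFLM/litert computation this mirrors, with a (multiplier, shift) tuple standing in for AirFixedPointScale (an assumption about what that type carries):

    import math

    def preprocess_softmax_scaling_sketch(beta, input_scale,
                                          scaled_diff_integer_bits=5):
        # Real multiplier applied to the (input - max) difference, with
        # `scaled_diff_integer_bits` integer bits reserved; capped so it
        # still fits a signed 32-bit fixed-point multiplier.
        max_real_multiplier = (1 << 31) - 1.0
        real_multiplier = min(
            beta * input_scale * (1 << (31 - scaled_diff_integer_bits)),
            max_real_multiplier,
        )
        # Split into mantissa * 2**exponent (mantissa in [0.5, 1)) and
        # quantize the mantissa to Q31, as in TFLM's
        # QuantizeMultiplierGreaterThanOne.
        mantissa, exponent = math.frexp(real_multiplier)
        quantized_multiplier = round(mantissa * (1 << 31))
        if quantized_multiplier == (1 << 31):  # rounding overflowed
            quantized_multiplier //= 2
            exponent += 1
        return quantized_multiplier, exponent  # stand-in for AirFixedPointScale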
make_softmax_s16_exp_luts
Build the exp LUT that NS-CMSIS-NN expects, matching TFLM.
exp_lut:
    input_scale  = 10.0 / 65535
    input_zp     = INT16_MAX
    output_scale = 2.0 / 65535
    output_zp    = 0
    transform    = exp(x)
Returns:
- np.ndarray – The generated exp LUT (shape: (513,)).
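One way such a 513-entry table can be built, as a sketch: sample the int16 input range in 512 uniform steps plus the final endpoint, dequantize each sample, apply the transform, and requantize. The helper name is hypothetical, and the exact endpoint and rounding conventions of the NS-CMSIS-NN/TFLM tables may differ:

    import numpy as np

    INT16_MIN, INT16_MAX = -32768, 32767

    def build_s16_lut_sketch(transform, input_scale, input_zp,
                             output_scale, output_zp=0):
        # 513 entries: the int16 input range sampled every 128 steps
        # (512 segments) plus one endpoint.
        lut = np.empty(513, dtype=np.int16)
        for i in range(513):
            q_in = INT16_MIN + i * 128            # quantized sample point
            x = (q_in - input_zp) * input_scale   # dequantize
            q_out = round(transform(x) / output_scale) + output_zp
            lut[i] = np.clip(q_out, INT16_MIN, INT16_MAX)
        return lut

    # exp LUT: inputs cover [-10.0, ~0.0]; outputs are exp(x) in (0, 1].
    exp_lut = build_s16_lut_sketch(
        np.exp,
        input_scale=10.0 / 65535, input_zp=INT16_MAX,
        output_scale=2.0 / 65535,
    )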
make_softmax_s16_luts
Build the one_by_one_lut that NS-CMSIS-NN expects, matching TFLM.
one_by_one_lut:
    input_scale  = 1.0 / 65535
    input_zp     = INT16_MIN
    output_scale = 2.0 / 65535
    output_zp    = 0
    transform    = 1 / (1 + x)
Returns:
- np.ndarray – The generated one_by_one_lut (shape: (513,)).
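Reusing the build_s16_lut_sketch helper from the exp LUT sketch above, the one_by_one variant would be built the same way (same caveats apply):

    # one_by_one LUT: inputs cover [0.0, ~1.0]; outputs are
    # 1 / (1 + x) in [0.5, 1.0].
    one_by_one_lut = build_s16_lut_sketch(
        lambda x: 1.0 / (1.0 + x),
        input_scale=1.0 / 65535, input_zp=INT16_MIN,
        output_scale=2.0 / 65535,
    )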