
random_augmentation_pipeline

Random Augmentation Pipeline Layer API

This module provides classes to build random augmentation pipeline layers.

Classes

RandomAugmentation1DPipeline

RandomAugmentation1DPipeline(layers: list[BaseAugmentation1D], augmentations_per_sample: int = 1, rate: float = 1.0, batchwise: bool = False, force_training: bool = False, **kwargs)

Apply N random augmentations from a list of augmentation layers to each sample.

Parameters:

  • layers (list[BaseAugmentation1D]) –

    List of augmentation layers to choose from.

  • augmentations_per_sample (int, default: 1 ) –

    Number of augmentations to apply to each sample.

  • rate (float, default: 1.0 ) –

    Probability of applying the augmentation pipeline.

  • batchwise (bool, default: False ) –

    If True, apply the same randomly chosen layer to every sample in the batch.

  • force_training (bool, default: False ) –

    Force training mode. Defaults to False.

Source code in neuralspot_edge/layers/preprocessing/random_augmentation_pipeline.py
def __init__(
    self,
    layers: list[BaseAugmentation1D],
    augmentations_per_sample: int = 1,
    rate: float = 1.0,
    batchwise: bool = False,
    force_training: bool = False,
    **kwargs,
):
    """Apply N random augmentations from a list of augmentation layers to each sample.

    Args:
        layers (list[BaseAugmentation1D]): List of augmentation layers to choose from.
        augmentations_per_sample (int): Number of augmentations to apply to each sample.
        rate (float): Probability of applying the augmentation pipeline.
        batchwise (bool): If True, apply the same randomly chosen layer to every sample in the batch.
        force_training (bool, optional): Force training mode. Defaults to False.
    """
    super().__init__(**kwargs)
    if not layers:
        raise ValueError("At least one layer must be provided.")
    self.layers = layers
    self.augmentations_per_sample = augmentations_per_sample
    self.rate = rate
    self.batchwise = batchwise
    kwargs.update({"name": "random_choice"})
    self._random_choice = RandomChoice(layers=layers, batchwise=batchwise, **kwargs)
    self.force_training = force_training
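The constructor above wires the list of layers into a `RandomChoice` selector. Its runtime behavior can be sketched with plain NumPy, assuming augmentation layers act as callables on a sample; the `scale` and `shift` callables below are illustrative, not part of the library:

```python
import numpy as np

# Minimal sketch of the pipeline semantics. Plain callables stand in for
# BaseAugmentation1D layers; `scale` and `shift` are illustrative only.
def random_augmentation_pipeline(x, layers, augmentations_per_sample=1, rate=1.0, rng=None):
    """Apply `augmentations_per_sample` randomly chosen layers in sequence,
    skipping the whole pipeline with probability 1 - rate."""
    if not layers:
        raise ValueError("At least one layer must be provided.")
    rng = rng or np.random.default_rng()
    if rng.random() >= rate:  # rate=1.0 always augments, rate=0.0 never does
        return x
    for _ in range(augmentations_per_sample):
        layer = layers[rng.integers(len(layers))]  # uniform random choice
        x = layer(x)
    return x

scale = lambda x: x * 2.0
shift = lambda x: x + 1.0

y = random_augmentation_pipeline(np.zeros(4), [scale, shift], augmentations_per_sample=2)
# y is all 0s (scale twice), all 1s (scale then shift),
# or all 2s (shift then scale, or shift twice)
```

Note that the same layer may be drawn more than once per sample, since each of the N draws is independent.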

Functions

batch_augment
batch_augment(inputs)

Apply N random augmentations to each sample in the batch.

Source code in neuralspot_edge/layers/preprocessing/random_augmentation_pipeline.py
def batch_augment(self, inputs):
    """Apply N random augmentations to each sample in the batch."""
    return keras.ops.fori_loop(
        lower=0,
        upper=self.augmentations_per_sample,
        body_fun=lambda _, x: self.apply_random_choice(x),
        init_val=inputs,
    )
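`keras.ops.fori_loop` runs a counted loop whose carried value is the batch being augmented, applying one randomly chosen layer per iteration. Its semantics match this plain-Python equivalent (a sketch, not the Keras implementation):

```python
# Plain-Python equivalent of keras.ops.fori_loop(lower, upper, body_fun, init_val):
# body_fun receives the loop index and the carried value, and returns the new value.
def fori_loop(lower, upper, body_fun, init_val):
    val = init_val
    for i in range(lower, upper):
        val = body_fun(i, val)
    return val

# Three iterations of "add 1", mirroring augmentations_per_sample = 3:
result = fori_loop(0, 3, lambda i, x: x + 1, 0)  # -> 3
```

Using `fori_loop` rather than a Python `for` keeps the loop traceable when the backend compiles the layer into a graph.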
get_config
get_config()

Serializes the configuration of the layer.

Source code in neuralspot_edge/layers/preprocessing/random_augmentation_pipeline.py
def get_config(self):
    """Serializes the configuration of the layer."""
    config = super().get_config()
    config.update(
        {
            "layers": [lyr.get_config() for lyr in self.layers],
            "augmentations_per_sample": self.augmentations_per_sample,
            "rate": self.rate,
            "batchwise": self.batchwise,
            "force_training": self.force_training,
        }
    )
    return config
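For reference, the dictionary returned by `get_config` has the shape below. The nested layer entry is illustrative only, since each element of `"layers"` is whatever that layer's own `get_config` returns:

```python
# Illustrative shape of the serialized config (keys taken from the source above).
config = {
    "layers": [{"name": "random_noise"}],  # one dict per layer, via lyr.get_config()
    "augmentations_per_sample": 2,
    "rate": 0.9,
    "batchwise": False,
    "force_training": False,
}
```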

RandomAugmentation2DPipeline

RandomAugmentation2DPipeline(*args, **kwargs)

Apply N random augmentations from a list of augmentation layers to each 2D sample. Accepts the same parameters as RandomAugmentation1DPipeline.
Source code in neuralspot_edge/layers/preprocessing/random_augmentation_pipeline.py
def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self._random_choice = RandomChoice(layers=self.layers, batchwise=self.batchwise, **kwargs)

Functions

batch_augment
batch_augment(inputs)

Apply N random augmentations to each sample in the batch.

Source code in neuralspot_edge/layers/preprocessing/random_augmentation_pipeline.py
def batch_augment(self, inputs):
    """Apply N random augmentations to each sample in the batch."""
    return keras.ops.fori_loop(
        lower=0,
        upper=self.augmentations_per_sample,
        body_fun=lambda _, x: self.apply_random_choice(x),
        init_val=inputs,
    )
get_config
get_config()

Serializes the configuration of the layer.

Source code in neuralspot_edge/layers/preprocessing/random_augmentation_pipeline.py
def get_config(self):
    """Serializes the configuration of the layer."""
    config = super().get_config()
    config.update(
        {
            "layers": [lyr.get_config() for lyr in self.layers],
            "augmentations_per_sample": self.augmentations_per_sample,
            "rate": self.rate,
            "batchwise": self.batchwise,
            "force_training": self.force_training,
        }
    )
    return config