vector_quantizer

Classes

VectorQuantizer

VectorQuantizer(num_embeddings, embedding_dim, beta=0.25, **kw)

Vector-quantization bottleneck (VQ-VAE style) with a straight-through estimator.

Input: [..., D] (last dim == embedding_dim)
Output: [..., D] (quantized/dequantized vectors; gradients pass through x)
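The layer is shape-preserving. A minimal usage sketch, assuming the import path matches the source location shown below and an embedding_dim of 64:

import keras
from helia_edge.layers.vector_quantizer import VectorQuantizer  # import path assumed

vq = VectorQuantizer(num_embeddings=512, embedding_dim=64)

x = keras.random.normal((8, 16, 64))  # arbitrary leading dims; last dim must equal embedding_dim
q = vq(x)                             # q.shape == x.shape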

Tracks (logged automatically via the metrics property; see the sketch below):

- vq_perplexity : effective number of active codes (1..K)
- vq_usage : fraction of codes used at least once (0..1)
- vq_bits_per_index : entropy lower bound in bits per index (~ log2 perplexity)
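How these statistics relate to the per-batch code assignments, as a hedged NumPy sketch (not the actual implementation; the function name and the index-array input are illustrative):

import numpy as np

def vq_stats(indices, num_embeddings):
    """Illustrative computation of the tracked statistics from code indices."""
    counts = np.bincount(indices.ravel(), minlength=num_embeddings)
    probs = counts / counts.sum()
    entropy = -np.sum(probs * np.log(probs + 1e-10))  # nats
    perplexity = np.exp(entropy)                      # effective number of active codes
    usage = np.mean(counts > 0)                       # fraction of the codebook used at least once
    bits_per_index = entropy / np.log(2.0)            # == log2(perplexity)
    return perplexity, usage, bits_per_index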

Adds losses via add_loss (see the forward-pass sketch below):

- beta * ||stop(quant) - x||^2 (commitment)
- ||quant - stop(x)||^2 (codebook)
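A minimal sketch of the mechanics the docstring describes: nearest-codebook lookup, the two loss terms, and the straight-through estimator. The class name, codebook weight, and method bodies here are illustrative, not the actual source:

import keras
from keras import layers, ops

class _VQSketch(layers.Layer):
    """Illustrative only: quantization step, both losses, straight-through trick."""
    def __init__(self, num_embeddings, embedding_dim, beta=0.25, **kw):
        super().__init__(**kw)
        self.K, self.D, self.beta = num_embeddings, embedding_dim, beta

    def build(self, input_shape):
        # codebook of K embedding vectors (name/initializer assumed)
        self.embeddings = self.add_weight(
            shape=(self.K, self.D), initializer="random_uniform", name="codebook"
        )

    def call(self, x):
        flat = ops.reshape(x, (-1, self.D))
        # squared distances from every input vector to every code vector
        d = (ops.sum(flat**2, axis=1, keepdims=True)
             - 2.0 * ops.matmul(flat, ops.transpose(self.embeddings))
             + ops.sum(self.embeddings**2, axis=1))
        idx = ops.argmin(d, axis=1)
        quant = ops.reshape(ops.take(self.embeddings, idx, axis=0), ops.shape(x))
        # the two losses documented above
        self.add_loss(self.beta * ops.mean((ops.stop_gradient(quant) - x) ** 2))  # commitment
        self.add_loss(ops.mean((quant - ops.stop_gradient(x)) ** 2))              # codebook
        # straight-through estimator: quantized values forward, identity gradient w.r.t. x
        return x + ops.stop_gradient(quant - x)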

Source code in helia_edge/layers/vector_quantizer.py
def __init__(self, num_embeddings, embedding_dim, beta=0.25, **kw):
    super().__init__(**kw)
    if num_embeddings <= 0 or embedding_dim <= 0 or beta <= 0:
        raise ValueError("num_embeddings>0, embedding_dim>0, beta>0 required.")
    self.K = int(num_embeddings)  # codebook size
    self.D = int(embedding_dim)   # embedding dimensionality
    self.beta = float(beta)       # commitment-loss weight
    # running means surfaced through the metrics property
    self._perplexity = keras.metrics.Mean(name="vq_perplexity")
    self._usage = keras.metrics.Mean(name="vq_usage")
    self._bpi = keras.metrics.Mean(name="vq_bits_per_index")
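For context, a sketch of how the bottleneck typically sits between an encoder and a decoder; the model shapes, layer choices, and import path are assumptions, not part of this API:

import keras
from keras import layers
from helia_edge.layers.vector_quantizer import VectorQuantizer  # import path assumed

enc_in = keras.Input(shape=(32, 16))
z = layers.Dense(64)(enc_in)                                   # encoder output width matches embedding_dim
zq = VectorQuantizer(num_embeddings=256, embedding_dim=64)(z)  # quantization bottleneck
recon = layers.Dense(16)(zq)                                   # decoder back to the input width

model = keras.Model(enc_in, recon)
# commitment/codebook terms join the total loss via add_loss;
# vq_perplexity, vq_usage, vq_bits_per_index are logged via the metrics property
model.compile(optimizer="adam", loss="mse")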