Batch normalization layer (Ioffe and Szegedy, 2015).

    Normalizes the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.

    layer_batch_normalization(
      object,
      axis = -1L,
      momentum = 0.99,
      epsilon = 0.001,
      center = TRUE,
      scale = TRUE,
      beta_initializer = "zeros",
      gamma_initializer = "ones",
      moving_mean_initializer = "zeros",
      moving_variance_initializer = "ones",
      beta_regularizer = NULL,
      gamma_regularizer = NULL,
      beta_constraint = NULL,
      gamma_constraint = NULL,
      renorm = FALSE,
      renorm_clipping = NULL,
      renorm_momentum = 0.99,
      fused = NULL,
      virtual_batch_size = NULL,
      adjustment = NULL,
      input_shape = NULL,
      batch_input_shape = NULL,
      batch_size = NULL,
      dtype = NULL,
      name = NULL,
      trainable = NULL,
      weights = NULL
    )



    Arguments

    object: Model or layer object.


    axis: Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
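
    As a minimal sketch, assuming the standard keras R interface (the input shape and filter count are illustrative only):

      library(keras)

      model <- keras_model_sequential() %>%
        # Channels-first input: 3 channels, 32 x 32 images.
        layer_conv_2d(filters = 16, kernel_size = c(3, 3),
                      data_format = "channels_first",
                      input_shape = c(3, 32, 32)) %>%
        # The features (channels) axis is axis 1, so normalize over it.
        layer_batch_normalization(axis = 1L)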


    momentum: Momentum for the moving mean and the moving variance.


    epsilon: Small float added to variance to avoid dividing by zero.


    center: If TRUE, add offset of beta to normalized tensor. If FALSE, beta is ignored.


    scale: If TRUE, multiply by gamma. If FALSE, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled, since the scaling will be done by the next layer.
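
    A hedged sketch of disabling gamma ahead of a linear layer (layer sizes are illustrative):

      model <- keras_model_sequential() %>%
        layer_dense(units = 64, input_shape = c(20)) %>%
        # gamma would be redundant here: the following dense layer can
        # absorb any per-feature scaling into its own weights.
        layer_batch_normalization(scale = FALSE) %>%
        layer_dense(units = 10)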


    beta_initializer: Initializer for the beta weight.


    gamma_initializer: Initializer for the gamma weight.


    moving_mean_initializer: Initializer for the moving mean.


    moving_variance_initializer: Initializer for the moving variance.


    beta_regularizer: Optional regularizer for the beta weight.


    gamma_regularizer: Optional regularizer for the gamma weight.


    beta_constraint: Optional constraint for the beta weight.


    gamma_constraint: Optional constraint for the gamma weight.


    renorm: Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.


    renorm_clipping: A named list or dictionary that may map keys rmax, rmin, dmax to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax] and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to Inf, 0, Inf, respectively.
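
    A hedged sketch of supplying the clipping values as a named list; the numbers below are illustrative, not recommendations, and plain scalars are shown where the description above mentions scalar Tensors:

      layer_batch_normalization(
        renorm = TRUE,
        # Clip the correction: r into [rmin, rmax], d into [-dmax, dmax].
        renorm_clipping = list(rmax = 3, rmin = 1/3, dmax = 5),
        renorm_momentum = 0.99
      )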


    renorm_momentum: Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.


    fused: If TRUE, use a faster, fused implementation, or raise a ValueError if the fused implementation cannot be used. If NULL, use the faster implementation if possible. If FALSE, do not use the fused implementation.


    virtual_batch_size: An integer. By default, virtual_batch_size is NULL, which means batch normalization is performed across the whole batch. When virtual_batch_size is not NULL, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
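
    A minimal sketch of Ghost Batch Normalization; the virtual sub-batch size of 16 is illustrative, and the real batch size passed to fit() must be a multiple of it:

      model <- keras_model_sequential() %>%
        layer_dense(units = 64, input_shape = c(128)) %>%
        # Normalize virtual sub-batches of 16 examples separately, while
        # sharing gamma, beta, and the moving statistics across them.
        layer_batch_normalization(virtual_batch_size = 16L) %>%
        layer_activation("relu")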


    adjustment: A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis == -1,

      adjustment <- function(shape) {
        tuple(tf$random$uniform(shape[-1:NULL, style = "python"], 0.93, 1.07),
              tf$random$uniform(shape[-1:NULL, style = "python"], -0.1, 0.1))
      }

    will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If NULL, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.


    input_shape: Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model.


    batch_input_shape: Shapes, including the batch size. For instance, batch_input_shape=c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors. batch_input_shape=list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors.


    batch_size: Fixed batch size for layer.


    dtype: The data type expected by the input, as a string (float32, float64, int32...).


    name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.


    trainable: Whether the layer weights will be updated during training.


    weights: Initial weights for layer.

    Input shape

    Arbitrary. Use the keyword argument input_shape (list of integers, does not include the samples axis) when using this layer as the first layer in a model.

    Output shape

    Same shape as input.
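
    Examples

    A minimal usage sketch, assuming the standard keras R interface (the architecture and sizes are illustrative only):

      library(keras)

      model <- keras_model_sequential() %>%
        layer_dense(units = 64, input_shape = c(20)) %>%
        # Keep activations near zero mean / unit variance across the batch.
        layer_batch_normalization() %>%
        layer_activation("relu") %>%
        layer_dense(units = 1)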