Leaky version of a Rectified Linear Unit.
Allows a small gradient when the unit is not active:

    f(x) = alpha * x  for x < 0
    f(x) = x          for x >= 0
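To make the definition concrete, here is a minimal sketch of the formula in plain R; the leaky_relu helper is illustrative only and not part of the keras API:

    # Leaky ReLU applied elementwise, with the default negative slope
    # used by this layer (alpha = 0.3).
    leaky_relu <- function(x, alpha = 0.3) {
      ifelse(x < 0, alpha * x, x)
    }

    leaky_relu(c(-2, -0.5, 0, 1, 3))
    #> [1] -0.60 -0.15  0.00  1.00  3.00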
Usage:

    layer_activation_leaky_relu(
      object,
      alpha = 0.3,
      input_shape = NULL,
      batch_input_shape = NULL,
      batch_size = NULL,
      dtype = NULL,
      name = NULL,
      trainable = NULL,
      weights = NULL
    )
Arguments:

object              Model or layer object.

alpha               Float >= 0. Negative slope coefficient.

input_shape         Input shape (list of integers; does not include the
                    samples axis), required when using this layer as the
                    first layer in a model.

batch_input_shape   Shapes, including the batch size. For instance,
                    batch_input_shape = c(10, 32) indicates that the
                    expected input will be batches of 10 32-dimensional
                    vectors.

batch_size          Fixed batch size for the layer.

dtype               The data type expected by the input, as a string
                    (e.g. "float32").

name                An optional name string for the layer. Should be
                    unique in a model (do not reuse the same name twice).
                    It will be autogenerated if not provided.

trainable           Whether the layer weights will be updated during
                    training.

weights             Initial weights for the layer.
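A short usage sketch, assuming the keras R package is attached; the layer sizes and input shape below are illustrative, not prescribed by this function:

    library(keras)

    # A dense layer followed by a LeakyReLU activation using the
    # default negative slope coefficient (alpha = 0.3).
    model <- keras_model_sequential() %>%
      layer_dense(units = 32, input_shape = c(784)) %>%
      layer_activation_leaky_relu(alpha = 0.3)

Using a separate activation layer (rather than an activation argument on the preceding layer) lets alpha be configured explicitly and keeps the activation visible in model summaries.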