Stochastic gradient descent optimizer

    Stochastic gradient descent optimizer with support for momentum, learning rate decay, and Nesterov momentum.

    optimizer_sgd(
      lr = 0.01,
      momentum = 0,
      decay = 0,
      nesterov = FALSE,
      clipnorm = NULL,
      clipvalue = NULL
    )
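
    A minimal construction sketch (the argument values below are
    illustrative choices, not recommendations, and assume the keras R
    package is installed and loaded):

    library(keras)

    # Plain SGD with Nesterov momentum and gradient norm clipping
    opt <- optimizer_sgd(
      lr = 0.01,
      momentum = 0.9,
      nesterov = TRUE,
      clipnorm = 1.0  # clip gradients whose L2 norm exceeds 1
    )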

    Arguments

    lr          float >= 0. Learning rate.

    momentum    float >= 0. Parameter that accelerates SGD in the relevant
                direction and dampens oscillations.

    decay       float >= 0. Learning rate decay over each update.

    nesterov    boolean. Whether to apply Nesterov momentum.

    clipnorm    Gradients will be clipped when their L2 norm exceeds this
                value.

    clipvalue   Gradients will be clipped when their absolute value exceeds
                this value.
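
    The arguments above map onto the conventional SGD-with-momentum update
    rule. The sketch below is a plain-R illustration of that standard
    formulation; the function name sgd_step and the time-based decay
    schedule are assumptions for illustration, and the backend's exact
    internals may differ:

    # One parameter update: SGD with momentum, optional Nesterov
    # look-ahead, and time-based learning rate decay (all assumed
    # formulations, shown for illustration only).
    sgd_step <- function(w, grad, v, iter,
                         lr = 0.01, momentum = 0, decay = 0,
                         nesterov = FALSE) {
      lr_t <- lr / (1 + decay * iter)      # decayed learning rate
      v    <- momentum * v - lr_t * grad   # velocity update
      w    <- if (nesterov) {
        w + momentum * v - lr_t * grad     # look-ahead (Nesterov) step
      } else {
        w + v                              # classic momentum step
      }
      list(w = w, v = v)                   # updated weights and velocity
    }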

    Value

    An optimizer object for use with compile.keras.engine.training.Model
    (the compile() method for Keras models).
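
    An end-to-end sketch of passing the optimizer to compile(); the model
    architecture and loss below are illustrative placeholders:

    library(keras)

    # Toy single-layer model; replace with a real architecture
    model <- keras_model_sequential() %>%
      layer_dense(units = 1, input_shape = 10)

    model %>% compile(
      optimizer = optimizer_sgd(lr = 0.01, momentum = 0.9),
      loss = "mse"
    )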

    See also