STATE VECTOR
x₁: 0.000
x₂: 0.000
f(x): 0.000
||∇f||: 0.000
∂f/∂x₁: 0.000
∂f/∂x₂: 0.000
iteration: 0
α (effective): 0.000
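A minimal sketch of how the state-vector readout could be produced for the currently selected sphere objective; the function and field names below are illustrative, not taken from the visualizer itself:

```typescript
// Sphere objective and its analytic gradient: f(x1, x2) = x1² + x2².
function sphere(x1: number, x2: number): { f: number; dfdx1: number; dfdx2: number } {
  return { f: x1 * x1 + x2 * x2, dfdx1: 2 * x1, dfdx2: 2 * x2 };
}

// Values shown in the STATE VECTOR panel at a given point.
function stateVector(x1: number, x2: number) {
  const { f, dfdx1, dfdx2 } = sphere(x1, x2);
  const gradNorm = Math.hypot(dfdx1, dfdx2); // ||∇f||
  return { x1, x2, f, dfdx1, dfdx2, gradNorm };
}

// At the origin every entry is 0, matching the initial readout above.
console.log(stateVector(0, 0));
```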
HESSIAN ANALYSIS
λ₁ (max): -
λ₂ (min): -
κ (condition): -
det(H): -
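For a 2×2 symmetric Hessian the panel's quantities have closed forms via the characteristic polynomial λ² − tr(H)·λ + det(H) = 0. A sketch, assuming the matrix is stored as [[h11, h12], [h12, h22]]:

```typescript
// Eigenvalues, condition number, and determinant of a symmetric 2×2 Hessian.
function hessianAnalysis(h11: number, h12: number, h22: number) {
  const trace = h11 + h22;
  const det = h11 * h22 - h12 * h12;
  const disc = Math.sqrt(Math.max(trace * trace - 4 * det, 0)); // ≥ 0 for symmetric H
  const lambdaMax = (trace + disc) / 2;                         // λ₁
  const lambdaMin = (trace - disc) / 2;                         // λ₂
  const kappa = lambdaMin !== 0 ? Math.abs(lambdaMax / lambdaMin) : Infinity; // κ
  return { lambdaMax, lambdaMin, kappa, det };
}

// Sphere objective: H = [[2, 0], [0, 2]], so λ₁ = λ₂ = 2, κ = 1, det(H) = 4.
console.log(hessianAnalysis(2, 0, 2));
```

κ = 1 (the sphere case) is the best-conditioned situation; a large κ, as in Rosenbrock's curved valley, is what makes plain gradient descent zigzag and converge slowly.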
f(x₁,x₂) = x₁² + x₂²
OPTIMIZER INFO
algorithm: SGD
learning rate: 0.010
momentum (β₁): -
RMS decay (β₂): -
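The momentum and RMS-decay fields read "-" because plain gradient descent uses neither; its update needs only the gradient and the learning rate. A sketch with illustrative names:

```typescript
// One step of plain gradient descent: x ← x − α · ∇f(x).
function sgdStep(x: [number, number], grad: [number, number], alpha = 0.01): [number, number] {
  return [x[0] - alpha * grad[0], x[1] - alpha * grad[1]];
}
```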
CONFIGURATION
OBJECTIVE FUNCTION
Sphere
Rosenbrock
Booth
Matyas
Himmelblau
Six-Hump Camel
Rastrigin
Ackley
Holder Table
Griewank
McCormick
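The selectable objectives are standard 2-D test functions. Two representative ones in their usual parameterizations (a = 1, b = 100 for Rosenbrock, A = 10 for Rastrigin; whether the visualizer uses exactly these constants is an assumption):

```typescript
// Rosenbrock: narrow curved valley, global minimum 0 at (1, 1).
// f(x, y) = (a − x)² + b(y − x²)², standard constants a = 1, b = 100.
const rosenbrock = (x: number, y: number, a = 1, b = 100): number =>
  (a - x) ** 2 + b * (y - x * x) ** 2;

// Rastrigin: highly multimodal, global minimum 0 at (0, 0).
// f(x, y) = 2A + x² − A·cos(2πx) + y² − A·cos(2πy), standard A = 10.
const rastrigin = (x: number, y: number, A = 10): number =>
  2 * A + x * x - A * Math.cos(2 * Math.PI * x) + y * y - A * Math.cos(2 * Math.PI * y);
```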
OPTIMIZER
Stochastic Gradient Descent
Momentum (β=0.9)
Nesterov Momentum (β=0.9)
AdaGrad
RMSprop
Adam (β₁=0.9, β₂=0.999)
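How the β₁ and β₂ hyperparameters from the OPTIMIZER INFO panel enter the adaptive methods, using Adam as the example; this is a textbook sketch, not the visualizer's own code:

```typescript
type Vec2 = [number, number];

// One Adam step for a 2-D problem: first-moment (momentum, β₁) and
// second-moment (RMS decay, β₂) estimates, bias-corrected, then a
// per-coordinate scaled update. t is the 1-based iteration count.
function adamStep(
  x: Vec2, grad: Vec2, m: Vec2, v: Vec2, t: number,
  alpha = 0.01, beta1 = 0.9, beta2 = 0.999, eps = 1e-8,
): { x: Vec2; m: Vec2; v: Vec2 } {
  const mNew = [0, 0] as Vec2, vNew = [0, 0] as Vec2, xNew = [0, 0] as Vec2;
  for (let i = 0; i < 2; i++) {
    mNew[i] = beta1 * m[i] + (1 - beta1) * grad[i];           // momentum term
    vNew[i] = beta2 * v[i] + (1 - beta2) * grad[i] * grad[i]; // RMS term
    const mHat = mNew[i] / (1 - Math.pow(beta1, t));          // bias correction
    const vHat = vNew[i] / (1 - Math.pow(beta2, t));
    xNew[i] = x[i] - alpha * mHat / (Math.sqrt(vHat) + eps);  // scaled update
  }
  return { x: xNew, m: mNew, v: vNew };
}
```

For adaptive methods the per-coordinate factor α / (√v̂ + ε) is what an "effective" learning-rate readout would typically report, though how this visualizer defines its α (effective) field is an assumption.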
LEARNING RATE: 0.010
CONVERGENCE
Click surface to start | Drag to rotate | Scroll to zoom