STATE VECTOR

x₁ 0.000
x₂ 0.000
f(x) 0.000
||∇f|| 0.000
∂f/∂x₁ 0.000
∂f/∂x₂ 0.000
iteration 0
α (effective step size) 0.000
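The readouts above follow directly from the objective f(x₁,x₂) = x₁² + x₂². A minimal NumPy sketch (function names are illustrative, not taken from the visualization's own code) that reproduces the iteration-0 values:

    import numpy as np

    def f(x):
        # Objective plotted by the surface: f(x1, x2) = x1^2 + x2^2
        return x[0]**2 + x[1]**2

    def grad_f(x):
        # Analytic gradient: (df/dx1, df/dx2) = (2*x1, 2*x2)
        return np.array([2.0 * x[0], 2.0 * x[1]])

    x = np.zeros(2)                    # iteration 0: x1 = x2 = 0
    g = grad_f(x)
    print(f(x), g, np.linalg.norm(g))  # 0.0 [0. 0.] 0.0 -- matches the panel

Note that (0, 0) is the minimizer of this bowl, so the gradient norm is already 0 before any step is taken.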

HESSIAN ANALYSIS

objective f(x₁,x₂) = x₁² + x₂²
λ₁ (max) -
λ₂ (min) -
κ (condition) -
det(H) -
(The Hessian of this quadratic is constant, H = 2I, so the dashes resolve to λ₁ = λ₂ = 2, κ = 1, and det(H) = 4 once computed.)
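Because f is a fixed quadratic, the Hessian never changes, and the panel's eigen-diagnostics can be checked by hand or with a few lines of NumPy (again a sketch, not the visualization's code):

    import numpy as np

    # Hessian of f(x1, x2) = x1^2 + x2^2 is the constant matrix 2*I.
    H = np.array([[2.0, 0.0],
                  [0.0, 2.0]])

    eigvals = np.linalg.eigvalsh(H)        # symmetric H -> real, sorted eigenvalues
    lam_min, lam_max = eigvals[0], eigvals[-1]
    kappa = lam_max / lam_min              # condition number: 1.0 (perfectly conditioned)
    det_H = np.linalg.det(H)               # determinant: 4.0
    print(lam_max, lam_min, kappa, det_H)  # 2.0 2.0 1.0 4.0

κ = 1 means gradient descent shrinks the error at the same rate in every direction on this surface; ill-conditioned bowls (κ ≫ 1) are where momentum and adaptive methods pay off.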

OPTIMIZER INFO

algorithm SGD
learning rate 0.010
momentum (β₁) -
RMS decay (β₂) -
(β₁ and β₂ are Adam's first- and second-moment decay rates; plain SGD uses neither, hence the dashes.)
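A sketch of the update rules the panel parameterizes, assuming NumPy; the Adam defaults below (β₁ = 0.9, β₂ = 0.999) are the usual literature values, not values read from the panel, which shows dashes because SGD is selected:

    import numpy as np

    grad_f = lambda x: 2.0 * x  # gradient of f(x1, x2) = x1^2 + x2^2

    def sgd_step(x, lr=0.010):
        # Plain SGD, as selected in the panel: x <- x - lr * grad(x).
        # It keeps no beta1/beta2 state, hence the dashes in the readout.
        return x - lr * grad_f(x)

    def adam_step(x, m, v, t, lr=0.010, beta1=0.9, beta2=0.999, eps=1e-8):
        # Adam: beta1 decays the momentum (first moment), beta2 the RMS (second moment).
        g = grad_f(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)         # bias correction for zero-initialized moments
        v_hat = v / (1 - beta2**t)
        return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    x = np.array([1.0, 1.0])
    print(sgd_step(x))                     # [0.98 0.98]: each step scales x by (1 - 2*lr)
    x2, m, v = adam_step(x, np.zeros(2), np.zeros(2), t=1)
    print(x2)                              # ~[0.99 0.99]: the first Adam step moves ~lr per coordinate

For plain SGD the effective step size α in the state-vector panel presumably just equals the learning rate; adaptive methods rescale it every iteration, which is why it is tracked separately.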

Click surface to start | Drag to rotate | Scroll to zoom