Gradient Descent Visualizer
Visualize how different optimizers navigate loss landscapes. Click on the contour plot to set a starting point, then watch the optimization path unfold.
Loss Function
Quadratic Bowl
Elongated Valley (High Condition Number)
Rosenbrock (Curved Valley)
Saddle Point
Rastrigin (Many Local Minima)
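Each landscape is just a two-dimensional function evaluated on a grid. The sketch below is illustrative only — the `Loss` interface and the constant names are assumptions, not the visualizer's actual source — and shows how a few of the listed surfaces and their analytic gradients could be defined:

```ts
// Illustrative 2-D loss surfaces with analytic gradients; the visualizer's
// real implementation may differ.
interface Loss {
  f: (x: number, y: number) => number;               // loss value
  grad: (x: number, y: number) => [number, number];  // [df/dx, df/dy]
}

// Quadratic bowl: f = x^2 + y^2 (well-conditioned, single minimum at the origin).
const quadraticBowl: Loss = {
  f: (x, y) => x * x + y * y,
  grad: (x, y) => [2 * x, 2 * y],
};

// Rosenbrock: f = (1 - x)^2 + 100(y - x^2)^2 (curved valley, minimum at (1, 1)).
const rosenbrock: Loss = {
  f: (x, y) => (1 - x) ** 2 + 100 * (y - x * x) ** 2,
  grad: (x, y) => [-2 * (1 - x) - 400 * x * (y - x * x), 200 * (y - x * x)],
};

// Rastrigin: f = 20 + x^2 - 10cos(2πx) + y^2 - 10cos(2πy) (many local minima).
const rastrigin: Loss = {
  f: (x, y) =>
    20 + x * x - 10 * Math.cos(2 * Math.PI * x) +
    y * y - 10 * Math.cos(2 * Math.PI * y),
  grad: (x, y) => [
    2 * x + 20 * Math.PI * Math.sin(2 * Math.PI * x),
    2 * y + 20 * Math.PI * Math.sin(2 * Math.PI * y),
  ],
};
```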
Optimizer
SGD
SGD + Momentum
Nesterov Momentum
RMSprop
Adam
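The optimizers differ only in how they turn the current gradient into an update of the position. Below is a minimal sketch of the update rules, assuming a 2-D position and using `lr`, `beta1`, and `beta2` to match the sliders that follow; the function and state names are illustrative, not the app's code. Nesterov momentum is omitted for brevity — it is the momentum rule with the gradient evaluated at the look-ahead point instead of the current one.

```ts
// Sketch of the per-step update rules; state shapes and names are illustrative.
type Vec2 = [number, number];

// SGD: θ ← θ − lr·g
function sgdStep(p: Vec2, g: Vec2, lr: number): Vec2 {
  return [p[0] - lr * g[0], p[1] - lr * g[1]];
}

// Momentum: v ← β₁·v + g;  θ ← θ − lr·v
function momentumStep(p: Vec2, g: Vec2, v: Vec2, lr: number, beta1: number): { p: Vec2; v: Vec2 } {
  const vNew: Vec2 = [beta1 * v[0] + g[0], beta1 * v[1] + g[1]];
  return { p: [p[0] - lr * vNew[0], p[1] - lr * vNew[1]], v: vNew };
}

// RMSprop: s ← β₂·s + (1 − β₂)·g²;  θ ← θ − lr·g / (√s + ε)
function rmspropStep(p: Vec2, g: Vec2, s: Vec2, lr: number, beta2: number, eps = 1e-8): { p: Vec2; s: Vec2 } {
  const sNew: Vec2 = [
    beta2 * s[0] + (1 - beta2) * g[0] * g[0],
    beta2 * s[1] + (1 - beta2) * g[1] * g[1],
  ];
  return {
    p: [
      p[0] - (lr * g[0]) / (Math.sqrt(sNew[0]) + eps),
      p[1] - (lr * g[1]) / (Math.sqrt(sNew[1]) + eps),
    ],
    s: sNew,
  };
}

// Adam: momentum and RMSprop combined, with bias correction at step t (1-based).
function adamStep(
  p: Vec2, g: Vec2, m: Vec2, v: Vec2, t: number,
  lr: number, beta1: number, beta2: number, eps = 1e-8,
): { p: Vec2; m: Vec2; v: Vec2 } {
  const mNew: Vec2 = [beta1 * m[0] + (1 - beta1) * g[0], beta1 * m[1] + (1 - beta1) * g[1]];
  const vNew: Vec2 = [beta2 * v[0] + (1 - beta2) * g[0] * g[0], beta2 * v[1] + (1 - beta2) * g[1] * g[1]];
  const mHat = [mNew[0] / (1 - beta1 ** t), mNew[1] / (1 - beta1 ** t)];
  const vHat = [vNew[0] / (1 - beta2 ** t), vNew[1] / (1 - beta2 ** t)];
  return {
    p: [
      p[0] - (lr * mHat[0]) / (Math.sqrt(vHat[0]) + eps),
      p[1] - (lr * mHat[1]) / (Math.sqrt(vHat[1]) + eps),
    ],
    m: mNew,
    v: vNew,
  };
}
```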
Learning Rate: 0.01
Momentum (β₁): 0.9
RMSprop Decay (β₂): 0.999
Max Steps: 200
Run Optimization
Step Once
Reset
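Run Optimization repeats the single-step update until Max Steps is reached or the gradient norm falls below a tolerance, recording each position for the path plot. A rough sketch under assumed names (`runOptimization`, `grad`, `step`) and an assumed tolerance — the actual app animates one step at a time rather than looping synchronously:

```ts
// Rough sketch of the loop behind the Run Optimization button.
type Point = [number, number];

function runOptimization(
  start: Point,
  grad: (x: number, y: number) => Point,   // gradient of the selected loss
  step: (p: Point, g: Point) => Point,     // one update of the selected optimizer
  maxSteps = 200,                          // the Max Steps setting
  tol = 1e-6,                              // assumed stopping tolerance
): Point[] {
  const path: Point[] = [start];
  let p = start;
  for (let t = 1; t <= maxSteps; t++) {
    const g = grad(p[0], p[1]);
    if (Math.hypot(g[0], g[1]) < tol) break;  // Grad Norm readout; stop near a stationary point
    p = step(p, g);
    path.push(p);                             // points drawn as the Optimization Path
  }
  return path;
}
```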
Optimization Path
Start Point
Current Position
Step: 0
Loss: -
Grad Norm: -
Position: -