Gradient Descent: The Ultimate Optimizer
NeurIPS · Sep 29, 2019 · Outstanding Paper
Working with any gradient-based machine learning algorithm involves the
tedious task of tuning the optimizer's hyperparameters, such as its step size.
Recent work has shown how the step size can itself be optimized alongside the
model parameters by manually deriving expressions for "hypergradients" ahead of
time.
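
For concreteness, such a manually derived expression can be worked out for plain SGD: with the update w_t = w_{t-1} - alpha * g_{t-1}, the derivative of the next loss with respect to alpha is -g_t . g_{t-1}, so alpha can itself be nudged by a small hyper-step. The sketch below is only an illustration of that hand derivation (the names loss_grad, kappa, and steps are ours, not the paper's):

    import numpy as np

    def sgd_with_manual_hypergradient(loss_grad, w, alpha=0.01, kappa=1e-4, steps=100):
        """SGD whose step size alpha is adapted by a hand-derived hypergradient.

        For w_t = w_{t-1} - alpha * g_{t-1}, the chain rule gives
        dL(w_t)/dalpha = grad L(w_t) . (-g_{t-1}) = -(g_t . g_{t-1}),
        so gradient descent on alpha adds kappa * (g_t . g_{t-1})."""
        g_prev = loss_grad(w)
        for _ in range(steps):
            w = w - alpha * g_prev                  # ordinary SGD step
            g = loss_grad(w)
            alpha = alpha + kappa * np.dot(g, g_prev)  # manual hypergradient step on alpha
            g_prev = g
        return w, alpha

    # Toy usage: L(w) = 0.5 * ||w||^2, whose gradient is simply w.
    w_final, alpha_final = sgd_with_manual_hypergradient(lambda w: w, np.ones(5))

Every new optimizer or hyperparameter requires redoing such a derivation by hand, and that bookkeeping is what this work automates.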
We show how to automatically compute hypergradients with a simple and elegant
modification to backpropagation. This allows us to easily apply the method to
other optimizers and hyperparameters (e.g. momentum coefficients). We can even
recursively apply the method to its own hyper-hyperparameters, and so on ad
infinitum. As these towers of optimizers grow taller, they become less
sensitive to the initial choice of hyperparameters. We present experiments
validating this for MLPs, CNNs, and RNNs. Finally, we provide a simple PyTorch
implementation of this algorithm (see
people.csail.mit.edu/kach/gradient-descent-the-ultimate-optimizer).
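
The released implementation is at the URL above; purely as an illustration of the underlying idea, here is a minimal PyTorch sketch (not the authors' code; alpha and kappa are illustrative names) in which the weight update keeps the step size in the computation graph, so a single backward pass yields both the parameter gradient and the step-size hypergradient:

    import torch

    torch.manual_seed(0)

    def loss_fn(w):
        return 0.5 * (w ** 2).sum()                 # toy quadratic loss

    kappa = 1e-4                                     # hyper-step size (illustrative)
    alpha = torch.tensor(0.01, requires_grad=True)   # step size as a differentiable tensor
    w = torch.randn(5, requires_grad=True)           # model parameters

    for step in range(100):
        loss = loss_fn(w)
        # One backward pass gives the parameter gradient and, via the previous
        # update (which kept alpha in the graph), the hypergradient dL/dalpha.
        g_w, g_alpha = torch.autograd.grad(loss, [w, alpha], allow_unused=True)
        if g_alpha is not None:                      # no hypergradient on the very first step
            alpha = (alpha - kappa * g_alpha).detach().requires_grad_()
        # Detach w's history, but leave the (new) alpha attached so the next
        # backward pass can reach it through this update.
        w = w.detach() - alpha * g_w

    print(float(loss_fn(w)), float(alpha))

Because the step size is just another tensor in the graph, the same trick extends to other hyperparameters such as momentum coefficients, and it can be stacked: kappa itself could be made a differentiable tensor with its own hyper-hyper-step, giving the towers of optimizers described above.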