How to Escape Saddle Points Efficiently

Citation:

C. Jin, R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan, "How to Escape Saddle Points Efficiently," in Proceedings of the 34th International Conference on Machine Learning (ICML), 2017. arXiv:1703.00887.

Abstract:

This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations that depends only poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, so perturbed gradient descent can escape saddle points almost for free. The results apply directly to many machine learning problems, including deep learning; as a concrete example, they yield sharp global convergence rates for matrix factorization. The analysis relies on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.
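
Below is a minimal sketch of the perturbed-gradient-descent idea the abstract describes, assuming only a gradient oracle `grad`. The function name, default parameter values, and the simplified trigger/stopping rule are illustrative assumptions, not the paper's exact Algorithm; in particular, the paper's version adds a function-decrease check after each perturbation, omitted here.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.01, g_thres=1e-3,
                               r=1e-3, t_thres=50, max_iter=10_000,
                               seed=0):
    """Sketch of perturbed gradient descent (PGD).

    When the gradient is small (a candidate saddle point or local
    minimum) and enough steps have passed since the last perturbation,
    add noise drawn uniformly from a ball of radius `r` so the iterate
    can slide off a strict saddle; otherwise take a plain gradient step.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_perturb = -t_thres  # allow a perturbation immediately
    for t in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= g_thres and t - last_perturb >= t_thres:
            # Sample uniformly from the ball of radius r:
            # Gaussian direction, radius scaled by U^(1/d).
            u = rng.standard_normal(x.shape)
            u *= r * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(u)
            x = x + u
            last_perturb = t
        x = x - eta * grad(x)
    return x
```

For instance, on f(x, y) = x^4/4 - x^2/2 + y^2/2, plain gradient descent started exactly at the strict saddle (0, 0) never moves, while the occasional ball perturbation lets the iterate fall into one of the minima at (±1, 0).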
