Registered Data

[00986] Approximation results for Gradient Descent trained Shallow Neural Networks

  • Session Time & Room : 1C (Aug.21, 13:20-15:00) @F310
  • Type : Contributed Talk
  • Abstract : Neural networks show strong performance in function approximation, but provable guarantees typically rely on hand-picked weights and are therefore not fully practical. Moreover, approximation theory aims for a small number of weights, whereas contemporary optimization results rely on over-parametrization by very wide or even infinitely wide networks. The talk reconciles approximation and optimization results and provides approximation bounds that are guaranteed for gradient descent trained neural networks.
  • Classification : 41A46, 65K10, 68T07
  • Author(s) :
    • Gerrit Welper (University of Central Florida)
    • Russell Gentile (n/a)
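
As an illustrative sketch of the setting the abstract describes (not the construction or bounds from the talk itself), a shallow ReLU network can be trained with plain full-batch gradient descent to approximate a one-dimensional target. The target function, network width, learning rate, and initialization below are all assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target function (an assumption, not taken from the talk)
def f(x):
    return np.sin(np.pi * x)

# Training data on [-1, 1]
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = f(X).ravel()

# Shallow network: one hidden layer of m ReLU units, linear output
m = 50
W = rng.normal(0.0, 1.0, (1, m))          # input-to-hidden weights
b = rng.normal(0.0, 1.0, m)               # hidden biases
a = rng.normal(0.0, 1.0 / np.sqrt(m), m)  # hidden-to-output weights

def forward(X, W, b, a):
    H = np.maximum(X @ W + b, 0.0)  # ReLU activations, shape (n, m)
    return H @ a, H                 # network output, shape (n,)

def mse(pred, y):
    return np.mean((pred - y) ** 2)

pred0, _ = forward(X, W, b, a)
loss0 = mse(pred0, y)

# Full-batch gradient descent on the squared loss
lr = 0.05
for step in range(2000):
    pred, H = forward(X, W, b, a)
    err = (pred - y) / len(X)            # scaled residual, shape (n,)
    grad_a = 2 * H.T @ err               # gradient w.r.t. output weights
    dH = 2 * np.outer(err, a) * (H > 0)  # backprop through the ReLU
    grad_W = X.T @ dH
    grad_b = dH.sum(axis=0)
    a -= lr * grad_a
    W -= lr * grad_W
    b -= lr * grad_b

pred, _ = forward(X, W, b, a)
loss_final = mse(pred, y)
```

The talk's question is when such gradient descent training provably yields approximation error comparable to what hand-picked weights could achieve; the sketch above only demonstrates the training procedure itself.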