
Event

Anastasis Kratsios, ETH Zurich

Wednesday, October 16, 2019, 15:30 to 16:30
Room LB 921-4, Statistics Seminar, Concordia, CA

Title: Universal Approximation Theorems

Abstract: The universal approximation theorem established the density of specific families of neural networks in the space of continuous functions and in certain Bochner-Lebesgue spaces defined between any two Euclidean spaces. We extend and refine this result by proving that dense neural network architectures exist on a larger class of function spaces and that these architectures can be written down using only a small number of functions. Refinements of the classical results of Hornik (1989) are also obtained. We prove that, upon appropriately randomly selecting the neural network architecture's activation function, we may still obtain a dense set of neural networks with positive probability. This last result is used to overcome the difficulty of appropriately selecting an activation function in more exotic architectures.
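
For readers unfamiliar with the classical density statement the abstract builds on, the following is a minimal numerical sketch (not part of the talk's results): a one-hidden-layer network with randomly drawn hidden weights, fit by least squares, approximates a continuous function on a compact interval. The target function, width, activation, and weight distributions below are illustrative choices, not quantities taken from the abstract.

# Minimal sketch of the classical universal approximation setting:
# a shallow network approximating a continuous target on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)          # continuous target on [0, 1]
x = np.linspace(0.0, 1.0, 500)[:, None]      # compact domain, 500 sample points

width = 200                                   # number of hidden units (illustrative)
a = rng.normal(size=(1, width)) * 10.0        # random input weights
b = rng.uniform(-10.0, 10.0, size=width)      # random biases
H = np.maximum(a * x + b, 0.0)                # hidden-layer features (ReLU activation)

# Fit only the output weights; density results guarantee the error can be
# driven arbitrarily low as the width grows.
c, *_ = np.linalg.lstsq(H, f(x).ravel(), rcond=None)
approx = H @ c
print("sup-norm error on the grid:", np.abs(approx - f(x).ravel()).max())

Increasing the width (and re-fitting the output weights) drives the sup-norm error toward zero, which is the behaviour the density theorems formalize.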
