NUMERICAL ANALYSIS AND SCIENTIFIC COMPUTING
Neural network approximations for high-dimensional PDEs
Diyora Salimova, sdiyora@mins.ee.ethz.ch
ETH Zurich, Switzerland
Most of the numerical approximation methods for PDEs in the scientific literature suffer from
the so-called curse of dimensionality (CoD) in the sense that the number of computational
operations employed in the corresponding approximation scheme to obtain an approximation
precision ε > 0 grows exponentially in the PDE dimension and/or the reciprocal of ε. Recently,
certain deep learning based approximation methods for PDEs have been proposed and various
numerical simulations for such methods suggest that deep neural network (DNN) approxima-
tions might have the capacity to indeed overcome the CoD in the sense that the number of real
parameters used to describe the approximating DNNs grows at most polynomially in both the
PDE dimension d ∈ N and the reciprocal of the prescribed approximation accuracy ε > 0. In
this talk we show that for every a ∈ R, b ∈ (a, ∞), solutions of suitable Kolmogorov PDEs can
be approximated by DNNs on the entire space-time region [0, T] × [a, b]^d without the CoD.
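As an illustration of the kind of statement meant here (the precise class of PDEs, assumptions, and error metric are those specified in the talk; the form and constants below are the schematic ones standard in this literature), a Kolmogorov PDE is an equation of the form
\[
\frac{\partial u}{\partial t}(t,x)
\;=\;
\tfrac{1}{2}\,\operatorname{Trace}\!\bigl(\sigma(x)\,\sigma(x)^{*}\,(\operatorname{Hess}_x u)(t,x)\bigr)
\;+\;
\bigl\langle \mu(x),\,(\nabla_x u)(t,x)\bigr\rangle,
\qquad
u(0,x)=\varphi(x),
\]
and approximation "without the CoD" can be read as: there exist constants $C, c > 0$ and DNNs $U_{d,\varepsilon}$ such that for all $d \in \mathbb{N}$ and $\varepsilon > 0$
\[
\sup_{(t,x)\in[0,T]\times[a,b]^{d}} \bigl| u(t,x) - U_{d,\varepsilon}(t,x) \bigr| \;\le\; \varepsilon
\qquad\text{and}\qquad
\#\mathrm{parameters}(U_{d,\varepsilon}) \;\le\; C\, d^{\,c}\, \varepsilon^{-c}.
\]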