We consider a parametric PDE F(u(x; µ); µ) = 0, x ∈ Ω, µ ∈ P, where Ω denotes the physical domain and P the parameter domain. During an offline phase, parameters µ_i, i = 1, . . . , N are selected. For each of them a finite element approximation u_h(·; µ_i) of u(·; µ_i) is computed. Data from these numerical simulations are then used to train a neural network that approximates the map (x; µ) → u_h(x; µ). We denote this approximation by u_N. The L^2 error between u and u_N on Ω × P can be decomposed as ||u − u_N|| ≤ ||u − u_h|| + ||u_h − u_N||. Both error terms on the right-hand side can be estimated using Monte Carlo-type estimates over the parameter space, combined with an a posteriori error estimator in the physical space for the first term. An adaptive algorithm is used to control the first error term, which corresponds to the error of the finite element method. The second error term, which corresponds to the error of the neural network, depends both on the training samples µ_i, i = 1, . . . , N, and on the architecture of the network. We discuss how to balance the two errors and to what extent we can guarantee that the overall error between u and u_N is bounded by a given tolerance.
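To make the offline workflow concrete, the following is a minimal sketch, not the authors' implementation: a small PyTorch network is fitted to snapshot data {(x_j, µ_i, u_h(x_j; µ_i))} to approximate the map (x; µ) → u_h(x; µ), and the term ||u_h − u_N|| is then estimated by Monte Carlo sampling over Ω × P. The finite element solver is replaced here by an analytic stand-in (fe_solution), and all names, network sizes, and domains (Ω = P = (0, 1)) are illustrative assumptions.

```python
# Hypothetical sketch: neural surrogate u_N(x, mu) for FE snapshots u_h(x; mu),
# with a Monte Carlo estimate of ||u_h - u_N||_{L^2(Omega x P)}.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Offline data: parameters mu_i and FE values u_h(x_j; mu_i) ---
# Placeholder for the finite element solution (an analytic stand-in, not a real solver).
def fe_solution(x, mu):
    return torch.sin(math.pi * mu * x)

N_mu, N_x = 50, 100                                   # training parameters / spatial nodes
mu_train = torch.rand(N_mu)                           # mu_i sampled from P = (0, 1)
x_nodes = torch.linspace(0.0, 1.0, N_x)               # nodes in Omega = (0, 1)

# Training set {((x_j, mu_i), u_h(x_j; mu_i))}
X = torch.cartesian_prod(x_nodes, mu_train)           # shape (N_x * N_mu, 2)
y = fe_solution(X[:, :1], X[:, 1:2])                  # shape (N_x * N_mu, 1)

# --- Neural network approximating (x, mu) -> u_h(x; mu) ---
u_N = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(u_N.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):                             # plain full-batch training loop
    opt.zero_grad()
    loss = loss_fn(u_N(X), y)
    loss.backward()
    opt.step()

# --- Monte Carlo estimate of ||u_h - u_N|| over Omega x P ---
with torch.no_grad():
    x_mc = torch.rand(1000, 1)                        # fresh spatial samples
    mu_mc = torch.rand(1000, 1)                       # fresh parameter samples
    err = fe_solution(x_mc, mu_mc) - u_N(torch.cat([x_mc, mu_mc], dim=1))
    # Since |Omega x P| = 1, the root mean square approximates the L^2 norm.
    print("MC estimate of ||u_h - u_N||:", err.pow(2).mean().sqrt().item())
```

In the setting of the abstract, the first term ||u − u_h|| would be handled separately by the a posteriori estimator and adaptive finite element refinement; the sketch above only illustrates the sampling-based estimate of the network error term.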