All-at-once versus reduced formulations of inverse problems and their regularization
by Barbara Kaltenbacher (University of Klagenfurt, Austria)
Parameter identification problems typically consist of a model equation, e.g. (systems of) ordinary or partial differential equations, and an observation equation. In the conventional reduced setting, the model equation is eliminated via the parameter-to-state map. Alternatively, one may consider both sets of equations (model and observations) as one large system, to which some regularization method is applied. The choice of formulation -- reduced or all-at-once -- can make a considerable difference computationally, depending on which regularization method is used: whereas almost the same optimality system arises for the reduced and the all-at-once Tikhonov method, the situation is different for iterative methods, especially in the context of nonlinear models. In this talk we will provide exemplary convergence results for all-at-once versions of variational, Newton-type and gradient-based regularization methods. Moreover, we will compare the implementation requirements of the respective all-at-once and reduced versions and provide some numerical illustration. Finally, we will give an outlook on two further aspects in this context, namely
a) all-at-once methods for parameter identification in time-dependent PDEs
b) minimization based formulations of inverse problems and their regularization.
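To make the reduced/all-at-once distinction concrete, the following sketch sets up a toy linear problem (all matrices and names are illustrative assumptions, not from the talk): a model equation A u = B q with parameter q and state u, and observations y = C u + noise. The reduced route eliminates u and applies Tikhonov regularization to q alone; the all-at-once route keeps (q, u) as unknowns and enforces the model as an equality constraint via a Lagrange multiplier, leading to one larger KKT system. As the abstract notes for Tikhonov regularization, both routes satisfy essentially the same optimality system, so the recovered parameters coincide here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 8, 5  # state / parameter / observation dimensions (toy sizes)

# Toy linear model A u = B q and observation y = C u (hypothetical operators).
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # invertible model operator
B = rng.standard_normal((n, m))
C = rng.standard_normal((k, n))
q_true = rng.standard_normal(m)
y = C @ np.linalg.solve(A, B @ q_true) + 0.01 * rng.standard_normal(k)
alpha = 1e-2  # Tikhonov regularization weight

# Reduced formulation: eliminate the state via u = A^{-1} B q, then
# minimize ||C A^{-1} B q - y||^2 + alpha ||q||^2 over q alone.
G = C @ np.linalg.solve(A, B)  # parameter-to-observation map
q_red = np.linalg.solve(G.T @ G + alpha * np.eye(m), G.T @ y)

# All-at-once formulation: keep (q, u) and enforce A u - B q = 0 via a
# multiplier p. Stationarity of the Lagrangian
#   L = ||C u - y||^2 + alpha ||q||^2 + 2 p^T (A u - B q)
# gives the KKT system below in (q, u, p).
Z = np.zeros
K = np.block([
    [alpha * np.eye(m), Z((m, n)), -B.T],       # dL/dq = 0
    [Z((n, m)),         C.T @ C,    A.T],       # dL/du = 0
    [-B,                A,          Z((n, n))], # dL/dp = 0 (model equation)
])
rhs = np.concatenate([np.zeros(m), C.T @ y, np.zeros(n)])
q_aao = np.linalg.solve(K, rhs)[:m]

# Eliminating (u, p) from the KKT system recovers the reduced normal
# equations, so the two parameter estimates agree.
print(np.allclose(q_red, q_aao))  # → True
```

Computationally, the difference is that the reduced route requires solves with A (the parameter-to-state map) to form G, while the all-at-once route assembles one larger but structured saddle-point system; for nonlinear models and iterative regularization this trade-off becomes substantial, which is the point of the comparison in the talk.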