This study examines learning-based inversion through the lens of inverse problem theory, focusing on uncertainty propagation, conditioning, and identifiability rather than pointwise prediction accuracy alone. Inverse estimation is formulated as a stochastic mapping in which observational noise is explicitly propagated through learned inverse models. A controlled one-dimensional nonlinear inverse problem is constructed using synthetic forward operators to systematically isolate noise-induced instability and non-uniqueness effects. For an injective nonlinear forward mapping, Support Vector Regression (SVR) with a radial basis function kernel and linear regression are trained to approximate the inverse operator from noisy observations. Monte Carlo noise propagation is employed to estimate the bias and variance of inverse predictions and to compare empirical uncertainty amplification with theoretical predictions derived from local inverse conditioning. While SVR substantially outperforms linear regression in inverse accuracy, the results demonstrate that inverse uncertainty is primarily governed by the conditioning of the forward operator and is modulated by model regularization. The analysis is extended to a non-injective forward operator to investigate identifiability loss in learning-based inversion. In this setting, both models collapse inherently multi-valued inverse mappings into unimodal, overconfident estimates, revealing implicit solution selection driven by data distribution and regularization. These findings show that low prediction error can be misleading in non-identifiable inverse problems. Overall, this work highlights the limitations of deterministic learning-based inversion and underscores the need for uncertainty-aware and distribution-preserving approaches when addressing ill-conditioned or non-injective inverse problems.
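The experimental setup described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's actual code: the forward operator f(x) = x³ + x, the noise level σ = 0.1, the training ranges, and all SVR hyperparameters (C, γ, ε) are assumptions chosen for demonstration. The sketch covers both regimes: Monte Carlo noise propagation through a learned inverse of an injective operator (compared against the local linearized estimate σ/|f′(x₀)|), and the collapse of the two-branch preimage of a non-injective operator (f(x) = x²) into a single degenerate estimate.

```python
# Illustrative sketch (assumed setup, not the paper's code):
# Monte Carlo noise propagation through learned inverse models.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def forward(x):
    # Injective nonlinear forward operator (hypothetical choice).
    return x**3 + x

# Training data: latent parameter x and noisy observations y = f(x) + noise.
x_train = rng.uniform(-2.0, 2.0, size=2000)
sigma = 0.1
y_train = forward(x_train) + rng.normal(0.0, sigma, size=x_train.shape)

# Learn the inverse map y -> x with RBF-kernel SVR and linear regression.
# Hyperparameters are illustrative, not tuned values from the study.
svr = SVR(kernel="rbf", C=10.0, gamma=1.0, epsilon=0.01).fit(y_train[:, None], x_train)
lin = LinearRegression().fit(y_train[:, None], x_train)

# Monte Carlo noise propagation at a test point x0: perturb the clean
# observation f(x0) many times and push each replicate through the model.
x0 = 1.0
y_mc = forward(x0) + rng.normal(0.0, sigma, size=5000)
x_hat = svr.predict(y_mc[:, None])
bias = x_hat.mean() - x0
std_emp = x_hat.std()
bias_lin = lin.predict(y_mc[:, None]).mean() - x0

# Local linearized prediction from inverse conditioning:
# sigma_x ~ sigma / |f'(x0)|, with f'(x) = 3x^2 + 1, so 0.1 / 4 = 0.025.
std_theory = sigma / abs(3 * x0**2 + 1)
print(f"SVR bias={bias:.4f} empirical std={std_emp:.4f} theory std={std_theory:.4f}")
print(f"linear-regression bias={bias_lin:.4f}")

# Non-injective case, f(x) = x**2: the true preimage of y = 1 is {-1, +1},
# but the learned pointwise inverse collapses both branches toward the
# conditional center of the symmetric training distribution (near 0).
y2 = x_train**2 + rng.normal(0.0, sigma, size=x_train.shape)
svr2 = SVR(kernel="rbf", C=10.0, gamma=1.0, epsilon=0.01).fit(y2[:, None], x_train)
x_collapsed = svr2.predict(np.array([[1.0]]))[0]
print(f"non-injective estimate at y=1: {x_collapsed:.3f} (true preimages: -1, +1)")
```

Under this setup the SVR's Monte Carlo spread tracks the linearized conditioning estimate, the linear model shows a much larger bias because the true inverse is nonlinear, and the non-injective fit returns a single estimate near zero that belongs to neither branch of the preimage, illustrating how a pointwise regressor silently resolves non-identifiability.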
Copyright © 2026