Analyzing Inverse Problems with Invertible Neural Networks. Ardizzone, L., Kruse, J., Wirkert, S., Rahner, D., Pellegrini, E. W., Klessen, R. S., Maier-Hein, L., Rother, C., & Köthe, U. August, 2018.
In many tasks, in particular in the natural sciences, the goal is to determine hidden system parameters from a set of measurements. Often, the forward process from parameter- to measurement-space is a well-defined function, whereas the inverse problem is ambiguous: one measurement may map to multiple different sets of parameters. In this setting, the posterior parameter distribution, conditioned on an input measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs). Although INNs are not new, they have, so far, received little attention in the literature. While classical neural networks attempt to solve the ambiguous inverse problem directly, INNs are able to learn it jointly with the well-defined forward process, using additional latent output variables to capture the information otherwise lost. Given a specific measurement and sampled latent variables, the inverse pass of the INN provides a full distribution over parameter space. We verify experimentally, on artificial data and real-world problems from astrophysics and medicine, that INNs are a powerful analysis tool to find multi-modalities in parameter space, to uncover parameter correlations, and to identify unrecoverable parameters.
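To make the mechanism described in the abstract concrete, below is a minimal sketch of the central idea: an invertible network maps parameters x to a predicted measurement y together with a latent vector z, and running the same network backwards on a fixed measurement with freshly sampled z ~ N(0, I) produces samples from the posterior over parameters. This is illustrative code, not the authors' implementation; the single RealNVP-style affine coupling block, the toy dimensions, and the subnetwork sizes are all assumptions made for brevity (the paper stacks several such blocks).

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible affine coupling block (RealNVP-style).

    Splits the input in two halves; the first half parameterizes an
    affine transform of the second, which makes the block trivially
    invertible even though the subnetwork itself is not.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, u):
        u1, u2 = u[:, :self.d], u[:, self.d:]
        s, t = self.net(u1).chunk(2, dim=1)
        v2 = u2 * torch.exp(torch.tanh(s)) + t   # tanh bounds the scale for stability
        return torch.cat([u1, v2], dim=1)

    def inverse(self, v):
        v1, v2 = v[:, :self.d], v[:, self.d:]
        s, t = self.net(v1).chunk(2, dim=1)
        u2 = (v2 - t) * torch.exp(-torch.tanh(s))  # exact algebraic inverse
        return torch.cat([v1, u2], dim=1)

# Toy sizes: 4 hidden parameters x, a 2-dim measurement y, and a 2-dim
# latent z, so that dim([y, z]) == dim(x), as invertibility requires.
dim_x, dim_y, dim_z = 4, 2, 2
inn = AffineCoupling(dim_x)  # a real INN would stack several such blocks

# Forward pass: parameters -> (predicted measurement, latent residual).
x = torch.randn(8, dim_x)
out = inn(x)
y_pred, z = out[:, :dim_y], out[:, dim_y:]

# Inverse pass: fix one measurement y*, sample many z ~ N(0, I), and map
# backwards to obtain samples from the (approximate) posterior p(x | y*).
y_star = y_pred[:1].repeat(100, 1)
z_samples = torch.randn(100, dim_z)
x_posterior = inn.inverse(torch.cat([y_star, z_samples], dim=1))
print(x_posterior.shape)  # torch.Size([100, 4])

Multi-modality then shows up directly: if two distinct parameter settings explain the same measurement, repeated draws of z land in both modes of p(x | y*).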
@article{ardizzone_analyzing_2018,
	title = {Analyzing {Inverse} {Problems} with {Invertible} {Neural} {Networks}},
	url = {https://arxiv.org/abs/1808.04730v3},
	journal = {arXiv preprint arXiv:1808.04730},
	abstract = {In many tasks, in particular in natural science, the goal is to determine
hidden system parameters from a set of measurements. Often, the forward process
from parameter- to measurement-space is a well-defined function, whereas the
inverse problem is ambiguous: one measurement may map to multiple different
sets of parameters. In this setting, the posterior parameter distribution,
conditioned on an input measurement, has to be determined. We argue that a
particular class of neural networks is well suited for this task -- so-called
Invertible Neural Networks (INNs). Although INNs are not new, they have, so
far, received little attention in literature. While classical neural networks
attempt to solve the ambiguous inverse problem directly, INNs are able to learn
it jointly with the well-defined forward process, using additional latent
output variables to capture the information otherwise lost. Given a specific
measurement and sampled latent variables, the inverse pass of the INN provides
a full distribution over parameter space. We verify experimentally, on
artificial data and real-world problems from astrophysics and medicine, that
INNs are a powerful analysis tool to find multi-modalities in parameter space,
to uncover parameter correlations, and to identify unrecoverable parameters.},
	language = {en},
	urldate = {2019-11-07},
	author = {Ardizzone, Lynton and Kruse, Jakob and Wirkert, Sebastian and Rahner, Daniel and Pellegrini, Eric W. and Klessen, Ralf S. and Maier-Hein, Lena and Rother, Carsten and Köthe, Ullrich},
	month = aug,
	year = {2018}
}
