Efficient Inference Amortization in Graphical Models using Structured Continuous Conditional Normalizing Flows. Weilbach, C., Beronov, B., Harvey, W., & Wood, F. In 2nd Symposium on Advances in Approximate Bayesian Inference (AABI), 2019.
We introduce a more efficient neural architecture for amortized inference, which combines continuous and conditional normalizing flows using a principled choice of structure. Our gradient flow derives its sparsity pattern from the minimally faithful inverse of its underlying graphical model. We find that this factorization reduces both the number of parameters in the neural network and the number of adaptive integration steps in the ODE solver. Consequently, throughput at training time and inference time is increased, without decreasing performance in comparison to unconstrained flows. By expressing the structural inversion and the flow construction as compilation passes of a probabilistic programming language, we demonstrate their applicability to the stochastic inversion of realistic models such as convolutional neural networks (CNNs).
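
The sketch below is not the authors' implementation but a minimal, hypothetical NumPy illustration of the structural idea in the abstract: the weight matrix of a small vector field is masked with a hand-written dependency pattern standing in for the sparsity of a minimally faithful inverse, and the resulting continuous flow is integrated with a fixed-step Euler solver (the paper conditions the flow on observations and uses an adaptive ODE solver).

import numpy as np

# Hypothetical dependency mask: entry (i, j) = 1 means output dimension i may
# depend on input dimension j. A dense (unconstrained) flow would use an
# all-ones matrix; a structured flow keeps only the edges of the inverse graph.
MASK = np.array([
    [1, 0, 0],   # z1 depends only on z1
    [1, 1, 0],   # z2 depends on z1 and z2
    [1, 1, 1],   # z3 depends on z1, z2 and z3
], dtype=float)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=MASK.shape)  # unconstrained weights
b = np.zeros(MASK.shape[0])

def dynamics(z, t):
    # Vector field of the continuous flow; multiplying W elementwise by MASK
    # makes its Jacobian inherit the sparsity pattern of the dependency graph.
    return np.tanh((W * MASK) @ z + b) * (1.0 + t)

def integrate(z0, n_steps=50, t0=0.0, t1=1.0):
    # Fixed-step Euler integration of dz/dt = dynamics(z, t) from t0 to t1.
    z, dt = np.asarray(z0, dtype=float), (t1 - t0) / n_steps
    for k in range(n_steps):
        z = z + dt * dynamics(z, t0 + k * dt)
    return z

print(integrate(rng.normal(size=3)))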
@inproceedings{WEI-19,
  title={Efficient Inference Amortization in Graphical Models using Structured Continuous Conditional Normalizing Flows},
  author={Weilbach, Christian and Beronov, Boyan and Harvey, William and Wood, Frank},
  booktitle={2nd Symposium on Advances in Approximate Bayesian Inference (AABI)},
  support={D3M},
  url_Link={https://openreview.net/forum?id=BJlhYknNFS},
  url_Paper={https://openreview.net/pdf?id=BJlhYknNFS},
  abstract={We introduce a more efficient neural architecture for amortized inference, which combines continuous and conditional normalizing flows using a principled choice of structure. Our gradient flow derives its sparsity pattern from the minimally faithful inverse of its underlying graphical model. We find that this factorization reduces both the number of parameters in the neural network and the number of adaptive integration steps in the ODE solver. Consequently, throughput at training time and inference time is increased, without decreasing performance in comparison to unconstrained flows. By expressing the structural inversion and the flow construction as compilation passes of a probabilistic programming language, we demonstrate their applicability to the stochastic inversion of realistic models such as convolutional neural networks (CNNs).},
  year={2019}
}
