2022 (1)

article (1)

Planning as Inference in Epidemiological Dynamics Models.
Wood, F.; Warrington, A.; Naderiparizi, S.; Weilbach, C.; Masrani, V.; Harvey, W.; Ścibior, A.; Beronov, B.; Grefenstette, J.; Campbell, D.; and Nasseri, S. A.
Frontiers in Artificial Intelligence, 4. 2022.

@article{WOO-22,
  author = {Wood, Frank and Warrington, Andrew and Naderiparizi, Saeid and Weilbach, Christian and Masrani, Vaden and Harvey, William and Ścibior, Adam and Beronov, Boyan and Grefenstette, John and Campbell, Duncan and Nasseri, S. Ali},
  title = {Planning as Inference in Epidemiological Dynamics Models},
  journal = {Frontiers in Artificial Intelligence},
  volume = {4},
  year = {2022},
  url_Paper = {https://www.frontiersin.org/article/10.3389/frai.2021.550603},
  url_ArXiv = {https://arxiv.org/abs/2003.13221},
  doi = {10.3389/frai.2021.550603},
  issn = {2624-8212},
  support = {D3M,COVID,ETALUMIS},
  abstract = {In this work we demonstrate how to automate parts of the infectious disease-control policy-making process via performing inference in existing epidemiological models. The kind of inference tasks undertaken include computing the posterior distribution over controllable, via direct policy-making choices, simulation model parameters that give rise to acceptable disease progression outcomes. Among other things, we illustrate the use of a probabilistic programming language that automates inference in existing simulators. Neither the full capabilities of this tool for automating inference nor its utility for planning is widely disseminated at the current time. Timely gains in understanding about how such simulation-based models and inference automation tools applied in support of policy-making could lead to less economically damaging policy prescriptions, particularly during the current COVID-19 pandemic.}
}

Abstract: In this work we demonstrate how to automate parts of the infectious disease-control policy-making process via performing inference in existing epidemiological models. The kind of inference tasks undertaken include computing the posterior distribution over controllable, via direct policy-making choices, simulation model parameters that give rise to acceptable disease progression outcomes. Among other things, we illustrate the use of a probabilistic programming language that automates inference in existing simulators. Neither the full capabilities of this tool for automating inference nor its utility for planning is widely disseminated at the current time. Timely gains in understanding about how such simulation-based models and inference automation tools applied in support of policy-making could lead to less economically damaging policy prescriptions, particularly during the current COVID-19 pandemic.

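The core inference task described in the abstract lends itself to a small illustration. The sketch below is not the paper's probabilistic-programming pipeline; it is a toy rejection-sampling example with an invented SIR simulator, prior, and acceptability threshold, meant only to show what "posterior over controllable simulation parameters given acceptable disease progression outcomes" looks like in code.

# Minimal, self-contained illustration (not the paper's system) of "planning as
# inference": place a prior over a controllable simulator parameter and infer the
# posterior over it conditioned on an acceptable outcome. All names, dynamics and
# thresholds below are made up for the example.
import numpy as np

def sir_simulator(contact_reduction, days=160, beta=0.3, gamma=0.1):
    """Deterministic SIR model; contact_reduction in [0, 1] scales transmission."""
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(days):
        new_inf = (1.0 - contact_reduction) * beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

rng = np.random.default_rng(0)
prior_samples = rng.uniform(0.0, 1.0, size=10_000)    # prior over the policy parameter
acceptable = np.array([sir_simulator(c) < 0.10 for c in prior_samples])  # outcome constraint

posterior = prior_samples[acceptable]                  # rejection sampling against the constraint
print(f"P(acceptable) ~ {acceptable.mean():.2f}; "
      f"smallest acceptable contact reduction ~ {posterior.min():.2f}")

With these made-up settings the surviving samples concentrate on contact reductions large enough to keep the infection peak below the threshold, which is the "policy posterior" the paper computes with far more capable machinery.
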
2021 (1)

inproceedings (1)

Assisting the Adversary to Improve GAN Training.
Munk, A.; Harvey, W.; and Wood, F.
In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1-8, July 2021.

@inproceedings{9533449,
  author = {Munk, Andreas and Harvey, William and Wood, Frank},
  booktitle = {2021 International Joint Conference on Neural Networks (IJCNN)},
  title = {Assisting the Adversary to Improve GAN Training},
  year = {2021},
  pages = {1-8},
  abstract = {Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.},
  doi = {10.1109/IJCNN52387.2021.9533449},
  issn = {2161-4407},
  month = {July},
  url_ArXiv = {https://arxiv.org/abs/2010.01274},
  url_Paper = {https://ieeexplore.ieee.org/document/9533449},
  support = {D3M,ETALUMIS}
}

Abstract: Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.

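The regularizer described in the abstract can be written down compactly. The following is a hedged sketch assuming a standard PyTorch training loop; the network definitions, loss functions, penalty weight, and the use of a squared L2 norm are placeholders rather than the paper's exact formulation.

# Sketch of the AdvAs idea as stated in the abstract: penalize the generator by the
# (squared) norm of the gradients used to train the discriminator, so the generator
# prefers points where the discriminator is near-optimal.
import torch

def advas_penalty(discriminator, fake_batch, real_batch, d_loss_fn):
    """Squared norm of the discriminator's training gradient at the current batch."""
    d_loss = d_loss_fn(discriminator(real_batch), discriminator(fake_batch))
    grads = torch.autograd.grad(d_loss, list(discriminator.parameters()),
                                create_graph=True)  # keep the graph so the generator can backprop
    return sum(g.pow(2).sum() for g in grads)

def generator_step(generator, discriminator, g_opt, noise, real_batch,
                   g_loss_fn, d_loss_fn, advas_weight=1.0):
    fake_batch = generator(noise)
    loss = g_loss_fn(discriminator(fake_batch))
    loss = loss + advas_weight * advas_penalty(discriminator, fake_batch,
                                               real_batch, d_loss_fn)
    g_opt.zero_grad()
    loss.backward()
    g_opt.step()
    return loss.item()

Because the penalty is differentiated through the discriminator's gradient, the generator receives a signal that pushes it toward regions where the discriminator's own training gradient is small, i.e. where the discriminator is close to optimal.
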
2020 (3)

inproceedings (2)

Coping With Simulators That Don't Always Return.
Warrington, A.; Naderiparizi, S.; and Wood, F.
In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020. PMLR 108:1748-1758.

@inproceedings{WAR-20,
  title = {Coping With Simulators That Don't Always Return},
  author = {Warrington, A and Naderiparizi, S and Wood, F},
  booktitle = {The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS)},
  archiveprefix = {arXiv},
  eprint = {1906.05462},
  year = {2020},
  url_Link = {http://proceedings.mlr.press/v108/warrington20a.html},
  url_Paper = {http://proceedings.mlr.press/v108/warrington20a/warrington20a.pdf},
  url_Poster = {https://github.com/plai-group/bibliography/blob/master/presentations_posters/WAR-20.pdf},
  url_ArXiv = {https://arxiv.org/abs/2003.12908},
  keywords = {simulators, smc, autoregressive flow},
  support = {D3M,ETALUMIS},
  bibbase_note = {PMLR 108:1748-1758},
  abstract = {Deterministic models are approximations of reality that are easy to interpret and often easier to build than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. We investigate and address computational inefficiencies that arise from adding process noise to deterministic simulators that fail to return for certain inputs; a property we describe as "brittle." We show how to train a conditional normalizing flow to propose perturbations such that the simulator succeeds with high probability, increasing computational efficiency.}
}

Abstract: Deterministic models are approximations of reality that are easy to interpret and often easier to build than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. We investigate and address computational inefficiencies that arise from adding process noise to deterministic simulators that fail to return for certain inputs; a property we describe as "brittle." We show how to train a conditional normalizing flow to propose perturbations such that the simulator succeeds with high probability, increasing computational efficiency.

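The computational problem and its fix can be illustrated without the learned flow. In the sketch below a hand-picked truncated proposal stands in for the conditional normalizing flow the paper trains; the simulator, its failure region, and all numbers are invented, and the importance weights p(z)/q(z) show how the change of proposal leaves the targeted distribution unchanged.

# Toy version of the "brittle simulator" setup: a deterministic simulator made
# stochastic by process noise can fail (return None) for some noise draws, wasting
# computation. A proposal restricted to the success region avoids the failures;
# importance weights correct for the change of proposal.
import numpy as np
from scipy import stats

def brittle_simulator(state, noise):
    """Pretend simulator step: crashes (returns None) when the perturbed state leaves [0, 10]."""
    new_state = state + noise
    return new_state if 0.0 <= new_state <= 10.0 else None

rng = np.random.default_rng(1)
state = 9.5
p_noise = stats.norm(0.0, 1.0)            # the model's process-noise distribution

# Naive: sample from p and retry on failure -- many wasted simulator calls near the boundary.
naive_fail = np.mean([brittle_simulator(state, z) is None
                      for z in p_noise.rvs(10_000, random_state=rng)])

# Proposal restricted to the success region (a stand-in for the learned flow),
# with importance weights w = p(z) / q(z) so the targeted distribution is unchanged.
q_noise = stats.truncnorm(a=(0.0 - state), b=(10.0 - state), loc=0.0, scale=1.0)
z = q_noise.rvs(10_000, random_state=rng)
weights = p_noise.pdf(z) / q_noise.pdf(z)
print(f"naive failure rate: {naive_fail:.2f}; proposal failure rate: 0.00; "
      f"effective sample size: {weights.sum()**2 / (weights**2).sum():.0f}")

In the paper the proposal is learned (a conditional normalizing flow over noise values) rather than specified by hand, but the accounting is the same: every proposed perturbation keeps the simulator alive, and the weights preserve the original model.
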
Deep probabilistic surrogate networks for universal simulator approximation.
Munk, A.; Ścibior, A.; Baydin, A. G.; Stewart, A.; Fernlund, A.; Poursartip, A.; and Wood, F.
In The second International Conference on Probabilistic Programming (PROBPROG), 2020.

@inproceedings{MUN-20,
  title = {Deep probabilistic surrogate networks for universal simulator approximation},
  author = {Munk, Andreas and Ścibior, Adam and Baydin, AG and Stewart, A and Fernlund, A and Poursartip, A and Wood, Frank},
  booktitle = {The second International Conference on Probabilistic Programming (PROBPROG)},
  year = {2020},
  archiveprefix = {arXiv},
  eprint = {1910.11950},
  support = {D3M,ETALUMIS},
  url_Paper = {https://arxiv.org/pdf/1910.11950.pdf},
  url_ArXiv = {https://arxiv.org/abs/1910.11950},
  url_Poster = {https://github.com/plai-group/bibliography/blob/master/presentations_posters/PROBPROG2020_MUN.pdf},
  abstract = {We present a framework for automatically structuring and training fast, approximate, deep neural surrogates of existing stochastic simulators. Unlike traditional approaches to surrogate modeling, our surrogates retain the interpretable structure of the reference simulators. The particular way we achieve this allows us to replace the reference simulator with the surrogate when undertaking amortized inference in the probabilistic programming sense. The fidelity and speed of our surrogates allow for not only faster "forward" stochastic simulation but also for accurate and substantially faster inference. We support these claims via experiments that involve a commercial composite-materials curing simulator. Employing our surrogate modeling technique makes inference an order of magnitude faster, opening up the possibility of doing simulator-based, non-invasive, just-in-time parts quality testing; in this case inferring safety-critical latent internal temperature profiles of composite materials undergoing curing from surface temperature profile measurements.}
}

Abstract: We present a framework for automatically structuring and training fast, approximate, deep neural surrogates of existing stochastic simulators. Unlike traditional approaches to surrogate modeling, our surrogates retain the interpretable structure of the reference simulators. The particular way we achieve this allows us to replace the reference simulator with the surrogate when undertaking amortized inference in the probabilistic programming sense. The fidelity and speed of our surrogates allow for not only faster "forward" stochastic simulation but also for accurate and substantially faster inference. We support these claims via experiments that involve a commercial composite-materials curing simulator. Employing our surrogate modeling technique makes inference an order of magnitude faster, opening up the possibility of doing simulator-based, non-invasive, just-in-time parts quality testing; in this case inferring safety-critical latent internal temperature profiles of composite materials undergoing curing from surface temperature profile measurements.

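As a point of reference for the abstract, here is a generic surrogate-training sketch. It deliberately omits the paper's key contribution (surrogates that mirror the simulator's internal probabilistic-program structure) and simply fits a Gaussian surrogate p(y | x) to a toy stochastic simulator; the simulator, architecture, and training budget are all invented.

# Fit a cheap neural surrogate to an "expensive" stochastic simulator so that forward
# simulation (and downstream inference) can use the network instead of the simulator.
import torch
import torch.nn as nn

def slow_simulator(x):
    """Stand-in for an expensive stochastic simulator."""
    return torch.sin(3.0 * x) + 0.1 * (1.0 + x.abs()) * torch.randn_like(x)

surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))  # outputs (mean, log_std)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1) * 4.0 - 2.0
    y = slow_simulator(x)
    mean, log_std = surrogate(x).chunk(2, dim=-1)
    loss = -torch.distributions.Normal(mean, log_std.exp()).log_prob(y).mean()  # fit p(y | x)
    opt.zero_grad(); loss.backward(); opt.step()

# Fast "forward simulation" now samples from the surrogate instead of the simulator.
with torch.no_grad():
    mean, log_std = surrogate(torch.tensor([[0.5]])).chunk(2, dim=-1)
    y_fast = torch.distributions.Normal(mean, log_std.exp()).sample()
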
unpublished (1)

Uncertainty in Neural Processes.
Naderiparizi, S.; Chiu, K.; Bloem-Reddy, B.; and Wood, F.
2020.

@unpublished{NAD-20a,
  title = {Uncertainty in Neural Processes},
  author = {Saeid Naderiparizi and Kenny Chiu and Benjamin Bloem-Reddy and Frank Wood},
  journal = {arXiv preprint arXiv:2010.03753},
  year = {2020},
  eid = {arXiv:2010.03753},
  archivePrefix = {arXiv},
  eprint = {2010.03753},
  url_ArXiv = {https://arxiv.org/abs/2010.03753},
  url_Paper = {https://arxiv.org/pdf/2010.03753.pdf},
  support = {D3M,ETALUMIS},
  abstract = {We explore the effects of architecture and training objective choice on amortized posterior predictive inference in probabilistic conditional generative models. We aim this work to be a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large. We instead focus our attention on the case where the amount of conditioning data is small. We highlight specific architecture and objective choices that we find lead to qualitative and quantitative improvement to posterior inference in this low data regime. Specifically we explore the effects of choices of pooling operator and variational family on posterior quality in neural processes. Superior posterior predictive samples drawn from our novel neural process architectures are demonstrated via image completion/in-painting experiments.}
}

Abstract: We explore the effects of architecture and training objective choice on amortized posterior predictive inference in probabilistic conditional generative models. We aim this work to be a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large. We instead focus our attention on the case where the amount of conditioning data is small. We highlight specific architecture and objective choices that we find lead to qualitative and quantitative improvement to posterior inference in this low data regime. Specifically we explore the effects of choices of pooling operator and variational family on posterior quality in neural processes. Superior posterior predictive samples drawn from our novel neural process architectures are demonstrated via image completion/in-painting experiments.

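One of the design choices the abstract studies, the pooling operator that aggregates context points, is easy to show in isolation. The encoder below is an illustrative fragment of a conditional-neural-process-style model, not the architecture from the paper; the sizes and the mean-versus-max choice are arbitrary.

# Context-set encoder fragment: embed each (x, y) context pair, then pool across the
# set with a permutation-invariant operator (the choice studied in the abstract).
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, pooling="mean"):
        super().__init__()
        self.point_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64))
        self.pooling = pooling

    def forward(self, x_context, y_context):
        e = self.point_net(torch.cat([x_context, y_context], dim=-1))  # (batch, n_context, 64)
        if self.pooling == "mean":
            return e.mean(dim=1)           # averages across the context set
        return e.max(dim=1).values         # max pooling keeps the most extreme features

encoder = ContextEncoder(pooling="max")
r = encoder(torch.randn(8, 5, 1), torch.randn(8, 5, 1))   # 8 tasks, 5 context points each
print(r.shape)  # torch.Size([8, 64])
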
2019 (2)

inproceedings (2)

Coping With Simulators That Don't Always Return.
Warrington, A.; Naderiparizi, S.; and Wood, F.
In 2nd Symposium on Advances in Approximate Bayesian Inference (AABI), 2019.

@inproceedings{WAR-19a,
  title = {Coping With Simulators That Don't Always Return},
  author = {Warrington, A and Naderiparizi, S and Wood, F},
  booktitle = {2nd Symposium on Advances in Approximate Bayesian Inference (AABI)},
  year = {2019},
  url_Link = {https://openreview.net/forum?id=SJecKyhEKr&noteId=SJecKyhEKr},
  url_Paper = {https://openreview.net/pdf?id=SJecKyhEKr},
  keywords = {simulators, smc, autoregressive flow},
  support = {D3M,ETALUMIS},
  abstract = {Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. Adding process noise to deterministic simulators can induce a failure in the simulator resulting in no return value for certain inputs -- a property we describe as ``brittle.'' We investigate and address the wasted computation that arises from these failures, and the effect of such failures on downstream inference tasks. We show that performing inference in this space can be viewed as rejection sampling, and train a conditional normalizing flow as a proposal over noise values such that there is a low probability that the simulator crashes, increasing computational efficiency and inference fidelity for a fixed sample budget when used as the proposal in an approximate inference algorithm.}
}

Abstract: Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. Adding process noise to deterministic simulators can induce a failure in the simulator resulting in no return value for certain inputs – a property we describe as "brittle." We investigate and address the wasted computation that arises from these failures, and the effect of such failures on downstream inference tasks. We show that performing inference in this space can be viewed as rejection sampling, and train a conditional normalizing flow as a proposal over noise values such that there is a low probability that the simulator crashes, increasing computational efficiency and inference fidelity for a fixed sample budget when used as the proposal in an approximate inference algorithm.

Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale.
Baydin, A. G.; Shao, L.; Bhimji, W.; Heinrich, L.; Meadows, L.; Liu, J.; Munk, A.; Naderiparizi, S.; Gram-Hansen, B.; Louppe, G.; and others.
In the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '19), 2019.

@inproceedings{BAY-19,
  title = {Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale},
  author = {Baydin, At{\i}l{\i}m G{\"u}ne{\c{s}} and Shao, Lei and Bhimji, Wahid and Heinrich, Lukas and Meadows, Lawrence and Liu, Jialin and Munk, Andreas and Naderiparizi, Saeid and Gram-Hansen, Bradley and Louppe, Gilles and others},
  booktitle = {the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '19)},
  archiveprefix = {arXiv},
  eprint = {1907.03382},
  support = {D3M,ETALUMIS},
  url_Paper = {https://arxiv.org/pdf/1907.03382.pdf},
  url_ArXiv = {https://arxiv.org/abs/1907.03382},
  abstract = {Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models. However, applications to science remain limited because of the impracticability of rewriting complex scientific simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. To address these, we present a novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference. To guide IC inference, we perform distributed training of a dynamic 3DCNN--LSTM architecture with a PyTorch-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k: achieving a performance of 450 Tflop/s through enhancements to PyTorch. We demonstrate a Large Hadron Collider (LHC) use-case with the C++ Sherpa simulator and achieve the largest-scale posterior inference in a Turing-complete PPL.},
  year = {2019},
  doi = {10.1145/3295500.3356180}
}

Abstract: Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models. However, applications to science remain limited because of the impracticability of rewriting complex scientific simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. To address these, we present a novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference. To guide IC inference, we perform distributed training of a dynamic 3DCNN–LSTM architecture with a PyTorch-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k: achieving a performance of 450 Tflop/s through enhancements to PyTorch. We demonstrate a Large Hadron Collider (LHC) use-case with the C++ Sherpa simulator and achieve the largest-scale posterior inference in a Turing-complete PPL.

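The "inference compilation" component mentioned in the abstract can be caricatured in a few lines. The sketch below is not the Etalumis framework, its execution protocol, or its PPL front end; it only shows the underlying idea of training a neural proposal q(latent | observation) on simulator output and then using it for amortized inference, with a toy simulator and an invented architecture.

# Draw (latent, observation) pairs from a toy simulator, train a network to predict
# the latent from the observation, then use it as an observation-specific proposal.
import torch
import torch.nn as nn

def simulator(n):
    z = torch.randn(n, 1)                       # latent drawn from the prior N(0, 1)
    x = z + 0.5 * torch.randn(n, 1)             # noisy observation of the latent
    return z, x

proposal = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))  # (mean, log_std) of q(z | x)
opt = torch.optim.Adam(proposal.parameters(), lr=1e-3)

for step in range(2000):                        # "compile" inference from simulator traces
    z, x = simulator(256)
    mean, log_std = proposal(x).chunk(2, dim=-1)
    loss = -torch.distributions.Normal(mean, log_std.exp()).log_prob(z).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Amortized inference: for a new observation, q gives a cheap proposal whose samples
# can be reweighted against the model's joint density (e.g. in importance sampling).
with torch.no_grad():
    mean, log_std = proposal(torch.tensor([[1.2]])).chunk(2, dim=-1)
print(f"q(z | x=1.2) ~ Normal({mean.item():.2f}, {log_std.exp().item():.2f})")
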