In *Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2019.

@inproceedings{lang2019block,
  author    = {Hunter Lang and David Sontag and Aravindan Vijayaraghavan},
  title     = {Block Stability for {MAP} Inference},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (AISTATS)},
  year      = {2019},
  keywords  = {Machine learning, Approximate inference in graphical models, Structured prediction},
  url_Paper = {https://arxiv.org/pdf/1810.05305},
  abstract  = {To understand the empirical success of approximate MAP inference, recent work (Lang et al., 2018) has shown that some popular approximation algorithms perform very well when the input instance is stable. The simplest stability condition assumes that the MAP solution does not change at all when some of the pairwise potentials are (adversarially) perturbed. Unfortunately, this strong condition does not seem to be satisfied in practice. In this paper, we introduce a significantly more relaxed condition that only requires blocks (portions) of an input instance to be stable. Under this block stability condition, we prove that the pairwise LP relaxation is persistent on the stable blocks. We complement our theoretical results with an empirical evaluation of real-world MAP inference instances from computer vision. We design an algorithm to find stable blocks, and find that these real instances have large stable regions. Our work gives a theoretical explanation for the widespread empirical phenomenon of persistency for this LP relaxation.}
}