Introduction to Dual Decomposition for Inference. Sontag, D., Globerson, A., & Jaakkola, T. In Sra, S., Nowozin, S., & Wright, S. J., editors, Optimization for Machine Learning, pages 219–254. MIT Press, 2012.
Abstract:
Many inference problems with discrete variables result in a difficult combinatorial optimization problem. In recent years, the technique of dual decomposition, also called Lagrangian relaxation, has proven to be a powerful means of solving these inference problems by decomposing them into simpler components that are repeatedly solved independently and combined into a global solution. In this chapter, we introduce the general technique of dual decomposition through its application to the problem of finding the most likely (MAP) assignment in Markov random fields. We discuss both subgradient and block coordinate descent approaches to solving the dual problem. The resulting message-passing algorithms are similar to max-product, but can be shown to solve a linear programming relaxation of the MAP problem. We show how many of the MAP algorithms are related to each other, and also quantify when the MAP solution can and cannot be decoded directly from the dual solution.
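To make the subgradient approach described above concrete, here is a minimal, hypothetical sketch (not code from the chapter; the toy potentials and names such as `solve_slave` and `delta` are illustrative) of dual decomposition for pairwise MRF MAP. Each edge subproblem is solved exactly by enumeration, and projected subgradient steps adjust the Lagrange multipliers until the subproblems agree on shared variables.

```python
import numpy as np

# Toy 3-node binary chain MRF with pairwise factors (0,1) and (1,2).
# Potentials are random; this is a sketch, not the authors' implementation.
K = 2
rng = np.random.default_rng(0)
unary = {i: rng.normal(size=K) for i in range(3)}
pair = {(0, 1): rng.normal(size=(K, K)), (1, 2): rng.normal(size=(K, K))}

edges = list(pair)
deg = {i: sum(i in e for e in edges) for i in range(3)}
# Lagrange multipliers delta[(edge, node)], kept on the constraint set
# sum_{edges at node} delta = 0 by the averaging step below.
delta = {(e, i): np.zeros(K) for e in edges for i in e}

def solve_slave(e):
    """Exactly maximize one edge subproblem: its pairwise potential,
    an equal share of each endpoint's unary potential, and the duals."""
    i, j = e
    score = (pair[e]
             + (unary[i] / deg[i] + delta[(e, i)])[:, None]
             + (unary[j] / deg[j] + delta[(e, j)])[None, :])
    xi, xj = np.unravel_index(np.argmax(score), score.shape)
    return {i: int(xi), j: int(xj)}, float(score[xi, xj])

for t in range(200):
    assign = {e: solve_slave(e)[0] for e in edges}
    step = 1.0 / (1 + t)  # diminishing step size
    for i in range(3):
        touching = [e for e in edges if i in e]
        if len(touching) < 2:
            continue  # unshared variables need no agreement constraint
        avg = np.mean([np.eye(K)[assign[e][i]] for e in touching], axis=0)
        for e in touching:
            # Projected subgradient step: move each slave's dual toward
            # the average of the slaves' local argmax indicators.
            delta[(e, i)] -= step * (np.eye(K)[assign[e][i]] - avg)

# The sum of slave optima is a dual upper bound on the MAP value.
print("dual bound:", sum(solve_slave(e)[1] for e in edges))
```

If the subproblems come to agree on every shared variable, the agreeing assignment is an exact MAP solution; otherwise the dual value still certifies an upper bound on the MAP objective, which relates to the abstract's point about when the MAP solution can and cannot be decoded from the dual.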
