Introduction to Dual Decomposition for Inference. Sontag, D., Globerson, A., & Jaakkola, T. In Sra, S., Nowozin, S., & Wright, S. J., editors, Optimization for Machine Learning, pages 219–254. MIT Press, 2012.

Abstract: Many inference problems with discrete variables result in a difficult combinatorial optimization problem. In recent years, the technique of dual decomposition, also called Lagrangian relaxation, has proven to be a powerful means of solving these inference problems by decomposing them into simpler components that are repeatedly solved independently and combined into a global solution. In this chapter, we introduce the general technique of dual decomposition through its application to the problem of finding the most likely (MAP) assignment in Markov random fields. We discuss both subgradient and block coordinate descent approaches to solving the dual problem. The resulting message-passing algorithms are similar to max-product, but can be shown to solve a linear programming relaxation of the MAP problem. We show how many of the MAP algorithms are related to each other, and also quantify when the MAP solution can and cannot be decoded directly from the dual solution.
@incollection{SonGloJaa_optbook,
author = {David Sontag and Amir Globerson and Tommi Jaakkola},
title = {Introduction to Dual Decomposition for Inference},
booktitle = {Optimization for Machine Learning},
editor = {Suvrit Sra and Sebastian Nowozin and Stephen J. Wright},
pages = {219--254},
publisher = {MIT Press},
year = {2012},
keywords = {Machine learning, Approximate inference in graphical models},
url_Paper = {http://people.csail.mit.edu/dsontag/papers/SonGloJaa_optbook.pdf},
abstract = {Many inference problems with discrete variables result in a difficult combinatorial optimization problem. In recent years, the technique of dual decomposition, also called Lagrangian relaxation, has proven to be a powerful means of solving these inference problems by decomposing them into simpler components that are repeatedly solved independently and combined into a global solution. In this chapter, we introduce the general technique of dual decomposition through its application to the problem of finding the most likely (MAP) assignment in Markov random fields. We discuss both subgradient and block coordinate descent approaches to solving the dual problem. The resulting message-passing algorithms are similar to max-product, but can be shown to solve a linear programming relaxation of the MAP problem. We show how many of the MAP algorithms are related to each other, and also quantify when the MAP solution can and cannot be decoded directly from the dual solution.}
}
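As a rough illustration of the subgradient approach the abstract describes, the following is a minimal Python sketch of dual decomposition for MAP inference on a toy three-variable cycle MRF. The potentials, step-size schedule, and averaging-based decoding rule are illustrative assumptions, not the chapter's exact construction.

```python
import itertools
import numpy as np

# Toy pairwise MRF on a 3-cycle of binary variables x0, x1, x2.
# theta[(i, j)][xi, xj] is the pairwise potential; MAP maximizes their sum.
theta = {
    (0, 1): np.array([[2.0, -1.0], [-1.0, 2.0]]),  # prefers x0 == x1
    (1, 2): np.array([[2.0, -1.0], [-1.0, 2.0]]),  # prefers x1 == x2
    (0, 2): np.array([[0.0, 0.0], [0.0, 3.0]]),    # strongly prefers (1, 1)
}
edges = list(theta)

def exact_map():
    """Brute-force MAP assignment (feasible only for tiny models)."""
    return max(itertools.product([0, 1], repeat=3),
               key=lambda x: sum(theta[e][x[e[0]], x[e[1]]] for e in edges))

# Dual decomposition: each edge is an independent subproblem that keeps its
# own copies of its two variables. Lagrange multipliers delta[e][i]
# reparameterize the copies; projected subgradient steps drive all copies
# of each variable x_i toward agreement.
delta = {e: {i: np.zeros(2) for i in e} for e in edges}
deg = {i: sum(i in e for e in edges) for i in range(3)}

assign = {}
for t in range(200):
    step = 1.0 / (1.0 + t)
    # Solve each subproblem: maximize its reparameterized potential.
    for (i, j) in edges:
        scores = (theta[(i, j)]
                  + delta[(i, j)][i][:, None]
                  + delta[(i, j)][j][None, :])
        xi, xj = np.unravel_index(np.argmax(scores), scores.shape)
        assign[(i, j)] = {i: int(xi), j: int(xj)}
    # Subgradient step, projected so each variable's multipliers sum to
    # zero: move each copy's multiplier toward the average over all copies.
    for i in range(3):
        copies = [(e, assign[e][i]) for e in edges if i in e]
        avg = np.zeros(2)
        for _, xi in copies:
            avg[xi] += 1.0 / deg[i]
        for e, xi in copies:
            g = np.zeros(2)
            g[xi] = 1.0
            delta[e][i] -= step * (g - avg)

# Dual objective: an upper bound on the MAP value (weak duality).
dual = sum((theta[e] + delta[e][e[0]][:, None] + delta[e][e[1]][None, :]).max()
           for e in edges)

# Decode by averaging each variable's copies (here all copies agree).
decoded = tuple(int(round(sum(assign[e][i] for e in edges if i in e) / deg[i]))
                for i in range(3))
```

On this toy model the subproblem copies reach agreement after a few subgradient steps, so the decoded assignment matches the exact MAP and the dual bound is tight; the chapter quantifies when such direct decoding from the dual does and does not succeed.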