@book{Pernkopf2014,
  title     = {Academic Press Library in Signal Processing: Volume 1 - Signal Processing Theory and Machine Learning},
  author    = {Pernkopf, Franz and Peharz, Robert and Tschiatschek, Sebastian},
  year      = {2014},
  volume    = {1},
  pages     = {989-1064},
  publisher = {Elsevier},
  series    = {Academic Press Library in Signal Processing},
  url       = {http://www.sciencedirect.com/science/article/pii/B9780123965028000188},
  doi       = {10.1016/B978-0-12-396502-8.00018-8},
  keywords  = {Bayesian network, Factor graph, Markov network, Parameter learning, Probabilistic graphical model, Probabilistic inference},
  abstract  = {Over the last decades, probabilistic graphical models have become the method of choice for representing uncertainty. They are used in many research areas such as computer vision, speech processing, time-series and sequential data modeling, cognitive science, bioinformatics, probabilistic robotics, signal processing, communications and error-correcting coding theory, and in the area of artificial intelligence. This tutorial provides an introduction to probabilistic graphical models. We review three representations of probabilistic graphical models, namely, Markov networks or undirected graphical models, Bayesian networks or directed graphical models, and factor graphs. Then, we provide an overview of structure and parameter learning techniques. In particular, we discuss maximum likelihood and Bayesian learning, as well as generative and discriminative learning. Subsequently, we review exact inference methods and briefly cover approximate inference techniques. Finally, we present typical applications for each of the three representations, namely, Bayesian networks for expert systems, dynamic Bayesian networks for speech processing, Markov random fields for image processing, and factor graphs for decoding error-correcting codes.}
}