StructBoost: Boosting Methods for Predicting Structured Output Variables. Shen, C., Lin, G., & van den Hengel, A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(10):2089--2103, October 2014.
@article{Shen2014SBoosting,
author = {Chunhua Shen and Guosheng Lin and Anton {van den Hengel}},
title = {{StructBoost}: {B}oosting Methods for Predicting Structured Output Variables},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {36},
number = {10},
year = {2014},
url = {https://doi.org/10.1109/TPAMI.2014.2315792},
month = {October},
pages = {2089--2103},
eprint = {1302.3283},
archiveprefix = {arXiv},
venue = {TPAMI},
pdf = {http://goo.gl/goCVLK},
abstract={
Boosting is a method for learning a single accurate predictor by linearly
combining a set of less accurate weak learners. Recently, structured
learning has found many applications in computer vision. Thus far it has
not been clear how one can train a boosting model that is directly
optimized for predicting multivariate or structured outputs. To bridge this
gap, and inspired by structured support vector machines (SSVM), we propose
a boosting algorithm for structured output prediction, which we refer to as
StructBoost. StructBoost supports nonlinear structured learning by
combining a set of weak structured learners. Just as SSVM generalizes SVM,
StructBoost generalizes standard boosting approaches such as AdaBoost and
LPBoost to structured learning. The resulting optimization problem of
StructBoost is more challenging than that of SSVM in the sense that it may
involve exponentially many variables and constraints, whereas SSVM usually
has an exponential number of constraints only, which a cutting-plane method
handles. To solve StructBoost efficiently, we derive an equivalent 1-slack
formulation and solve it using a combination of cutting planes and column
generation. We show the versatility and usefulness of StructBoost on a
range of problems, such as optimizing the tree loss for hierarchical
multi-class classification, optimizing the Pascal overlap criterion for
robust visual tracking, and learning conditional random field parameters
for image segmentation.
},
}