StructBoost: Boosting Methods for Predicting Structured Output Variables. Shen, C., Lin, G., & van den Hengel, A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(10):2089--2103, October, 2014.
Boosting is a method for learning a single accurate predictor by linearly combining a set of less accurate weak learners. Recently, structured learning has found many applications in computer vision. Thus far it has not been clear how one can train a boosting model that is directly optimized for predicting multivariate or structured outputs. To bridge this gap, inspired by structured support vector machines (SSVM), here we propose a boosting algorithm for structured output prediction, which we refer to as StructBoost. StructBoost supports nonlinear structured learning by combining a set of weak structured learners. As SSVM generalizes SVM, our StructBoost generalizes standard boosting approaches such as AdaBoost and LPBoost to structured learning. The resulting optimization problem of StructBoost is more challenging than SSVM in the sense that it may involve exponentially many variables and constraints. In contrast, for SSVM one usually has an exponential number of constraints and a cutting-plane method is used. In order to efficiently solve StructBoost, we formulate an equivalent 1-slack formulation and solve it using a combination of cutting planes and column generation. We show the versatility and usefulness of StructBoost on a range of problems such as optimizing the tree loss for hierarchical multi-class classification, optimizing the Pascal overlap criterion for robust visual tracking, and learning conditional random field parameters for image segmentation.
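
The abstract outlines the core computational recipe: a 1-slack reformulation solved by alternating cutting planes (loss-augmented inference to find the most violated joint constraint) with column generation (adding new weak structured learners). The sketch below is a minimal illustration of that loop, not the authors' implementation: it assumes a toy multiclass task with 0/1 label loss standing in for a structured loss, class-specific decision stumps as weak structured learners, a simplified column-generation pricing rule based on the newest cutting plane (the paper prices columns via the dual of the restricted problem), and SciPy's linprog for the restricted 1-slack LP. Helper names such as make_stump_pool and stump_scores are hypothetical.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy data: three Gaussian blobs in 2-D; the label in {0,1,2} plays the role
# of the structured output, with 0/1 label loss standing in for Delta(y, y').
n, d, K = 150, 2, 3
X = np.vstack([rng.normal(loc=c, scale=0.6, size=(n // K, d)) for c in range(K)])
y = np.repeat(np.arange(K), n // K)

def delta(y_true, y_pred):
    """0/1 label loss (a stand-in for a task-specific structured loss)."""
    return (y_true != y_pred).astype(float)

# Hypothetical weak structured learners: class-specific decision stumps,
# h(x, label) = 1 if x[feat] > thresh and label == cls, else 0.
def make_stump_pool(X, K, n_thresh=5):
    pool = []
    for feat in range(X.shape[1]):
        for thresh in np.quantile(X[:, feat], np.linspace(0.1, 0.9, n_thresh)):
            for cls in range(K):
                pool.append((feat, thresh, cls))
    return pool

def stump_scores(stump, X, K):
    """(n, K) matrix of h(x_i, y) values for a single stump."""
    feat, thresh, cls = stump
    S = np.zeros((X.shape[0], K))
    S[X[:, feat] > thresh, cls] = 1.0
    return S

pool = make_stump_pool(X, K)
pool_scores = np.stack([stump_scores(s, X, K) for s in pool])   # (P, n, K)

C, tol = 10.0, 1e-4
active, planes = [], []          # indices of active weak learners, cutting planes
w, xi = np.zeros(0), 0.0
idx = np.arange(n)

for it in range(30):
    # Ensemble scores F(x_i, y) under the current weights.
    F = np.tensordot(w, pool_scores[active], axes=1) if active else np.zeros((n, K))

    # Cutting plane: loss-augmented inference finds the jointly most violated outputs.
    y_hat = (F + delta(y[:, None], np.arange(K)[None, :])).argmax(axis=1)
    violation = delta(y, y_hat).mean() - (F[idx, y] - F[idx, y_hat]).mean()
    if violation <= xi + tol:    # 1-slack stopping test
        break
    planes.append(y_hat)

    # Column generation (simplified pricing): add the weak learner with the
    # largest average margin contribution on the newest plane.
    edge = (pool_scores[:, idx, y] - pool_scores[:, idx, y_hat]).mean(axis=1)
    for j in np.argsort(-edge):
        if int(j) not in active:
            active.append(int(j))
            break

    # Restricted master problem (1-slack, L1-regularised LP):
    #   min  sum_j w_j + C * xi
    #   s.t. for each plane c:  sum_j w_j * a_{c,j} + xi >= mean loss on plane c
    #        w >= 0, xi >= 0
    act_true = pool_scores[active][:, idx, y]                   # (m, n)
    A_ub, b_ub = [], []
    for yc in planes:
        a = (act_true - pool_scores[active][:, idx, yc]).mean(axis=1)
        A_ub.append(np.append(-a, -1.0))
        b_ub.append(-delta(y, yc).mean())
    res = linprog(c=np.append(np.ones(len(active)), C),
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (len(active) + 1), method="highs")
    w, xi = res.x[:-1], res.x[-1]

F = np.tensordot(w, pool_scores[active], axes=1)
print(f"iterations: {it}, weak learners: {len(active)}, "
      f"train accuracy: {(F.argmax(axis=1) == y).mean():.3f}")

The sketch keeps the three ingredients named in the abstract visible: loss-augmented inference supplies cutting planes, column generation grows the set of weak learners, and a small LP over the active columns and accumulated planes updates the weights and the single slack variable.
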
@article{Shen2014SBoosting,
    author = {Chunhua Shen  and  Guosheng Lin  and  Anton {van den Hengel}},
    title  = {{StructBoost}: {B}oosting Methods for Predicting Structured Output Variables},
    journal= {IEEE Transactions on Pattern Analysis and Machine Intelligence},
    volume = {36},
    number = {10},
    year   = {2014},
    url    = {http://dx.doi.org/10.1109/TPAMI.2014.2315792},
    month  = {October},
    pages  = {2089--2103},
    eprint = {1302.3283},
    venue  = {TPAMI},
    note   = {},
    pdf    = {http://goo.gl/goCVLK},
    abstract={
Boosting is a method for learning a single accurate predictor by linearly
    combining a set of less accurate weak learners. Recently, structured
    learning has found many applications in computer vision. Thus far it has
    not been clear how one can train a boosting model that is directly
    optimized for predicting multivariate or structured outputs. To bridge this
    gap, inspired by structured support vector machines (SSVM), here we propose
    a boosting algorithm for structured output prediction, which we refer to as
    StructBoost. StructBoost supports nonlinear structured learning by
    combining a set of weak structured learners. As SSVM generalizes SVM, our
    StructBoost generalizes standard boosting approaches such as AdaBoost and
    LPBoost to structured learning. The resulting optimization problem of
    StructBoost is more challenging than SSVM in the sense that it may involve
    exponentially many variables and constraints. In contrast, for SSVM one
    usually has an exponential number of constraints and a cutting-plane method
    is used. In order to efficiently solve StructBoost, we formulate an
    equivalent 1-slack formulation and solve it using a combination of cutting
    planes and column generation. We show the versatility and usefulness of
    StructBoost on a range of problems such as optimizing the tree loss for
    hierarchical multi-class classification, optimizing the Pascal overlap
    criterion for robust visual tracking, and learning conditional random field
    parameters for image segmentation.
},
}
