Learning Hierarchical Features for Scene Labeling. Farabet, C., Couprie, C., Najman, L., & LeCun, Y. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
@article{Farabet:2013eu,
author = {Farabet, Cl{\'e}ment and Couprie, Camille and Najman, Laurent and LeCun, Yann},
title = {{Learning Hierarchical Features for Scene Labeling}},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
year = {2013},
volume = {35},
number = {8},
pages = {1915--1929},
annote = {Good points: hierarchy, multi-resolution.
Bad point: complicated training procedure.
The tree-cover method is actually faster than the CRF, as it simply finds the best $C_k$ for each pixel, without modeling relationships between pixels. The set of $C_k$ found is non-disjoint. See Fig. 5's caption to understand it better.
The segmentation methods used are not consistent: sometimes they use gPb, a tree method, and sometimes they produce multiple levels using Felzenszwalb and Huttenlocher. But anyway.
p. 1
> A striking characteristic of the system proposed here is that the use of a large contextual window to label pixels reduces the requirement for sophisticated postprocessing methods that ensure the consistency of the labeling.
However, I think the postprocessing here is fairly sophisticated as well.
Section 4.1 Superpixel method
They trained a separate classifier for the superpixel method. This may be better than reusing the per-pixel classifier's predictions.
Section 4.2 CRF method
Eq. (15): since $i$ and $j$ are neighbors, using the gradient at $i$ or at $j$ in the energy function shouldn't matter.
Section 4.3.2 Cover method.
To compute $S$, you first restrict the feature map vectors to the locations covered by $C_k$, then do spatial pyramid pooling with 3x3 bins (see Fig. 6 caption). Then we can predict its class to get the purity.},
keywords = {deep learning},
doi = {10.1109/TPAMI.2012.231},
read = {Yes},
rating = {3},
date-added = {2017-02-17T21:27:20GMT},
date-modified = {2017-02-21T15:07:35GMT},
url = {http://ieeexplore.ieee.org/document/6338939/},
local-url = {file://localhost/Users/yimengzh/Documents/Papers3_revised/Library.papers3/Articles/2013/Farabet/TPAMI%202013%20Farabet.pdf},
file = {{TPAMI 2013 Farabet.pdf:/Users/yimengzh/Documents/Papers3_revised/Library.papers3/Articles/2013/Farabet/TPAMI 2013 Farabet.pdf:application/pdf}},
uri = {\url{papers3://publication/doi/10.1109/TPAMI.2012.231}}
}
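The two mechanisms noted in the annotation — pooling a component's features into a fixed-size descriptor, and labeling each pixel independently by its best covering component — can be sketched as follows. This is a minimal illustration under my own reading, not the authors' code; the function names, max-pooling choice, and purity representation are assumptions (the paper scores components with a classifier's confidence, which stands in for `purities` here):

```python
import numpy as np

def component_descriptor(feature_map, mask, grid=3):
    """Fixed-size descriptor for a cover component C_k: restrict the
    H x W x D feature map to pixels in `mask`, then max-pool over a
    grid x grid spatial pyramid level (3x3 bins, per the Fig. 6 caption)."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    depth = feature_map.shape[-1]
    desc = []
    for gy in range(grid):
        for gx in range(grid):
            # bin boundaries inside the component's bounding box
            by0 = y0 + (y1 - y0) * gy // grid
            by1 = y0 + (y1 - y0) * (gy + 1) // grid
            bx0 = x0 + (x1 - x0) * gx // grid
            bx1 = x0 + (x1 - x0) * (gx + 1) // grid
            sub_mask = mask[by0:by1, bx0:bx1]
            sub_feat = feature_map[by0:by1, bx0:bx1]
            if sub_mask.any():
                desc.append(sub_feat[sub_mask].max(axis=0))
            else:
                desc.append(np.zeros(depth))
    return np.concatenate(desc)  # length grid * grid * D

def label_pixels(components, purities, class_preds, shape):
    """Per-pixel labeling from a (possibly overlapping) cover: each pixel
    takes the class of the purest component containing it, independently
    of other pixels -- no CRF-style pairwise terms, hence the speed."""
    best = np.full(shape, -np.inf)
    labels = np.full(shape, -1, dtype=int)
    for mask, purity, cls in zip(components, purities, class_preds):
        upd = mask & (purity > best)
        best[upd] = purity
        labels[upd] = cls
    return labels
```

The independence in `label_pixels` is the point of the annotation: inference is a per-pixel argmax over covering components, so overlapping (non-disjoint) components are unproblematic and no pairwise optimization is needed.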