Exploiting Local and Global Patch Rarities for Saliency Detection. Borji, A. & Itti, L. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, pages 1-8, Jun 2012.
@inproceedings{Borji_Itti12cvpr,
author = {A. Borji and L. Itti},
title = {Exploiting Local and Global Patch Rarities for Saliency Detection},
booktitle = {Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
address = {Providence, Rhode Island},
abstract = {We introduce a saliency model based on two key ideas. The first one is considering local and global image
patch rarities as two complementary processes. The second one is based on our observation that for
different images, one of the RGB and Lab color spaces outperforms the other in saliency detection. We
propose a framework that measures patch rarities in each color space and combines them in a final
map. For each color channel, first, the input image is partitioned into non-overlapping patches and
then each patch is represented by a vector of coefficients that linearly reconstruct it from a learned
dictionary of patches from natural scenes. Next, two measures of saliency (Local and Global) are
calculated and fused to indicate saliency of each patch. Local saliency is distinctiveness of a patch
from its surrounding patches. Global saliency is the inverse of a patch’s probability of happening
over the entire image. The final saliency map is built by normalizing and fusing local and global
saliency maps of all channels from both color systems. Extensive evaluation over four benchmark
eye-tracking datasets shows the significant advantage of our approach over 10 state-of-the-art
saliency models.},
pages = {1--8},
month = jun,
year = {2012},
review = {full/conf},
type = {bu;mod},
if = {2012 acceptance rate: 26.2\%},
file = {http://ilab.usc.edu/publications/doc/Borji_Itti12cvpr.pdf}
}
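The pipeline the abstract describes (partition the image into non-overlapping patches, score each patch by a local rarity term and a global rarity term, then fuse) can be sketched in miniature. This is a hypothetical toy, not the authors' released code: a patch's mean intensity stands in for the learned sparse-dictionary coefficients, local rarity is the mean absolute difference from neighboring patches, global rarity is one minus the feature's empirical probability over the image, and the fusion is a simple sum. The function name `patch_saliency` and all parameter choices are illustrative assumptions.

```python
# Toy sketch of local + global patch-rarity saliency (illustrative only).
# Feature per patch: mean intensity, standing in for the paper's learned
# sparse-coding coefficients over a natural-scene dictionary.
from collections import Counter

def patch_saliency(image, patch=2):
    h, w = len(image), len(image[0])
    gh, gw = h // patch, w // patch
    # Summarize each non-overlapping patch by its (rounded) mean intensity.
    feats = [[round(sum(image[r * patch + i][c * patch + j]
                        for i in range(patch) for j in range(patch)) / patch**2, 1)
              for c in range(gw)] for r in range(gh)]
    counts = Counter(f for row in feats for f in row)
    n = gh * gw
    sal = [[0.0] * gw for _ in range(gh)]
    for r in range(gh):
        for c in range(gw):
            # Local rarity: mean absolute difference from the 8-neighbor patches.
            nb = [feats[rr][cc]
                  for rr in range(max(0, r - 1), min(gh, r + 2))
                  for cc in range(max(0, c - 1), min(gw, c + 2))
                  if (rr, cc) != (r, c)]
            local = sum(abs(feats[r][c] - f) for f in nb) / len(nb)
            # Global rarity: inverse of the feature's probability over the image.
            glob = 1.0 - counts[feats[r][c]] / n
            sal[r][c] = local + glob  # simple additive fusion
    return sal
```

On a 4x4 image where one 2x2 patch differs from the rest, that patch receives the highest score, since it is rare both relative to its neighbors and across the whole image. The real model computes these terms per color channel in both RGB and Lab, then normalizes and fuses all channel maps.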