Storing and Recalling Information for Vision Localization. Siagian, C. & Itti, L. In IEEE International Conference on Robotics and Automation (ICRA), Pasadena, California, May 2008. Abstract: In implementing a vision localization system, a crucial issue is how to efficiently store and recall the necessary information, so that the robot not only localizes itself accurately but also does so in a timely manner. In the presented system, we discuss a strategy that minimizes the amount of stored data by analyzing the strengths and weaknesses of several cooperating recognition modules and by combining them through a prioritization scheme that orders the data entries from most to least likely to match. We validate the system through a series of experiments in three large-scale outdoor environments: a building complex (126 x 180 ft area, 3,583 testing images), a vegetation-filled park (270 x 360 ft area, 6,006 testing images), and an open-field area (450 x 585 ft area, 8,823 testing images), each with its own set of challenges. Not only does the system localize in these environments (average errors of 3.46 ft, 6.55 ft, and 12.96 ft, respectively), it does so while searching through only 7.35%, 3.50%, and 6.12% of all the stored information, respectively.
@inproceedings{Siagian_Itti08icra,
author = {C. Siagian and L. Itti},
title = {Storing and Recalling Information for Vision Localization},
abstract = {In implementing a vision localization system, a crucial
issue to consider is how to efficiently store and
recall the necessary information, so that the robot
is not only able to accurately localize itself, but
does so in a timely manner. In the presented system,
we discuss a strategy to minimize the amount of
stored data by analyzing the strengths and
weaknesses of several cooperating recognition
modules and by using them through a prioritization
scheme which orders the data entries from the most
likely to match to the least likely. We validate the
system through a series of experiments in three
large scale outdoor environments: a building complex
(126x180ft. area, 3583 testing images), a
vegetation-filled park (270x360ft. area, 6006
testing images), and an open-field area
(450x585ft. area, 8823 testing images) - each with
its own set of challenges. Not only is the system
able to localize in these environments (on average
3.46ft., 6.55ft., 12.96ft. of error, respectively),
it does so while searching through only 7.35%,
3.50%, and 6.12% of all the stored information,
respectively.},
year = {2008},
month = {May},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA),
Pasadena, California},
type = {bu; sc},
file = {http://ilab.usc.edu/publications/doc/Siagian_Itti08icra.pdf},
review = {full/conf},
if = {2008 acceptance rate: 43%}
}