Linking User-Generated Video Annotations To The Web Of Data. Hildebrand, M. & van Ossenbruggen, J. R. In Proceedings of the 18th International Conference on Multimedia Modeling, Klagenfurt, Austria, January 2012.
Paper: http://homepages.cwi.nl/%7Ejrvosse/publications/2012/mmm2012.pdf
Abstract: In the audiovisual domain, tagging games are explored as a method to collect user-generated metadata. For example, the Netherlands Institute for Sound and Vision deployed the video labelling game "Waisda?" to collect user tags for videos from their collection. These tags are potentially useful for improving access to the content within the videos. However, the uncontrolled and often incomplete tags allow for multiple interpretations, which hinders long-term access. In this paper we investigate a semi-automatic process to define the interpretation of the tags by linking them to concepts from the Linked Open Data cloud. More specifically, we investigate whether existing web services are suited to finding a number of candidate concepts, and whether human users can select the most appropriate concept from these suggestions in the context of the video. We present a prototype application that supports this process and discuss the results of a user experiment in which this application is used with different data sources.
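The semi-automatic process described in the abstract can be sketched as a two-step pipeline: a lookup service proposes candidate Linked Open Data concepts for a free-text tag, and a human selects the most appropriate one given the video's context. The sketch below mocks the lookup service with a local table; in the paper, existing web services over the Linked Open Data cloud play that role, and all names and URIs here are illustrative assumptions, not taken from the paper.

```python
# Sketch of the tag-linking flow: candidate lookup followed by human selection.
# The "service" is a mock table; a real deployment would query a concept-lookup
# web service. All concept labels/URIs below are hypothetical examples.

def suggest_concepts(tag, service):
    """Return candidate (label, URI) concepts for a user tag (case-insensitive)."""
    return service.get(tag.lower(), [])

def link_tag(tag, service, choose):
    """Link a tag: fetch candidates, then let a human `choose` one (or None)."""
    candidates = suggest_concepts(tag, service)
    if not candidates:
        return None  # no interpretation found; the tag stays unlinked
    return choose(candidates)

# Mock concept-lookup service: ambiguous tags map to several candidates.
MOCK_SERVICE = {
    "jaguar": [
        ("Jaguar (animal)", "http://example.org/concept/Jaguar_animal"),
        ("Jaguar Cars", "http://example.org/concept/Jaguar_Cars"),
    ],
}

if __name__ == "__main__":
    # In a nature documentary, the user would pick the animal interpretation.
    pick_first = lambda cands: cands[0]
    label, uri = link_tag("Jaguar", MOCK_SERVICE, pick_first)
    print(label, uri)
```

The key design point mirrored here is that the automatic step only narrows the space of interpretations; disambiguation of uncontrolled tags is left to a human who sees the video context.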
@inproceedings{18859,
author = {Hildebrand, M. and van Ossenbruggen, J. R.},
title = {Linking {User-Generated} {Video} {Annotations} {To} {The} {Web} {Of} {Data}},
booktitle = {Proceedings of the 18th International Conference on Multimedia Modeling},
conferencetitle = {International Conference on Multimedia Modeling},
conferencedate = {2012, Jan 4 - Jan 6},
conferencelocation = {Klagenfurt, Austria},
year = {2012},
month = {January},
refereed = {y},
group = {INS2},
language = {en},
abstract = {In the audiovisual domain, tagging games are explored as a method to collect user-generated metadata. For
example, the Netherlands Institute for Sound and Vision deployed the video labelling game "Waisda?" to collect user tags
for videos from their collection. These tags are potentially useful for improving access to the content within the videos.
However, the uncontrolled and often incomplete tags allow for multiple interpretations, which hinders long-term access. In
this paper we investigate a semi-automatic process to define the interpretation of the tags by linking them to concepts
from the Linked Open Data cloud. More specifically, we investigate whether existing web services are suited to finding a
number of candidate concepts, and whether human users can select the most appropriate concept from these suggestions in
the context of the video. We present a prototype application that supports this process and discuss the results of a user
experiment in which this application is used with different data sources.
},
url = {http://homepages.cwi.nl/%7Ejrvosse/publications/2012/mmm2012.pdf},
}