Fixing Mislabeling by Human Annotators Leveraging Conflict Resolution and Prior Knowledge. Zeni, M., Zhang, W., Bignotti, E., Passerini, A., & Giunchiglia, F. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 3(1):32:1–32:23, March, 2019. According to the "human in the loop" paradigm, machine learning algorithms can improve when leveraging on human intelligence, usually in the form of labels or annotation from domain experts. However, in the case of research areas such as ubiquitous computing or lifelong learning, where the annotator is not an expert and is continuously asked for feedback, humans can provide significant fractions of incorrect labels. We propose to address this issue in a series of experiments where students are asked to provide information about their behavior via a dedicated mobile application. Their trustworthiness is tested by employing an architecture where the machine uses all its available knowledge to check the correctness of its own and the user labeling to build a uniform confidence measure for both of them to be used when a contradiction arises. The overarching system runs through a series of modes with progressively higher confidence and features a conflict resolution component to settle the inconsistencies. The results are very promising and show the pervasiveness of annotation mistakes, the extreme diversity of the users' behaviors which provides evidence of the impracticality of a uniform fits-it-all solution, and the substantially improved performance of a skeptical supervised learning strategy.
@article{zeni_fixing_2019,
title = {Fixing {Mislabeling} by {Human} {Annotators} {Leveraging} {Conflict} {Resolution} and {Prior} {Knowledge}},
volume = {3},
url = {https://doi.org/10.1145/3314419},
doi = {10.1145/3314419},
abstract = {According to the "human in the loop" paradigm, machine learning algorithms can improve when leveraging on human intelligence, usually in the form of labels or annotation from domain experts. However, in the case of research areas such as ubiquitous computing or lifelong learning, where the annotator is not an expert and is continuously asked for feedback, humans can provide significant fractions of incorrect labels. We propose to address this issue in a series of experiments where students are asked to provide information about their behavior via a dedicated mobile application. Their trustworthiness is tested by employing an architecture where the machine uses all its available knowledge to check the correctness of its own and the user labeling to build a uniform confidence measure for both of them to be used when a contradiction arises. The overarching system runs through a series of modes with progressively higher confidence and features a conflict resolution component to settle the inconsistencies. The results are very promising and show the pervasiveness of annotation mistakes, the extreme diversity of the users' behaviors which provides evidence of the impracticality of a uniform fits-it-all solution, and the substantially improved performance of a skeptical supervised learning strategy.},
number = {1},
urldate = {2021-10-18},
journal = {Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
author = {Zeni, Mattia and Zhang, Wanyi and Bignotti, Enrico and Passerini, Andrea and Giunchiglia, Fausto},
month = mar,
year = {2019},
keywords = {Annotation Errors, Collaborative and Social Computing, Ubiquitous and Mobile Devices},
pages = {32:1--32:23},
}
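The abstract's central mechanism — building a confidence measure for both the machine's prediction and the user's annotation, then siding with the more trusted source when they contradict — can be sketched as follows. This is an illustrative simplification, not the paper's actual implementation; all function and variable names here are hypothetical:

```python
def resolve_label(machine_label, machine_conf, user_label, user_conf):
    """Skeptical conflict resolution between a machine prediction and a
    human annotation, each paired with a confidence score in [0, 1].

    If the two labels agree, the label is accepted with the higher of the
    two confidences. If they contradict, the label from the more confident
    source wins — the system is "skeptical" of the annotator rather than
    trusting human feedback unconditionally.
    """
    if machine_label == user_label:
        return machine_label, max(machine_conf, user_conf)
    # Contradiction: side with whichever source the confidence measure trusts more.
    if user_conf >= machine_conf:
        return user_label, user_conf
    return machine_label, machine_conf


# Example: the machine is confident the student is walking, but the student
# (perhaps answering carelessly) labeled the moment "sitting".
label, conf = resolve_label("walking", 0.9, "sitting", 0.4)
# The skeptical strategy keeps "walking" and can use the resolved label
# as a (partially) cleaned training example.
```

In the paper's full system this comparison is only one component: the architecture cycles through modes of progressively higher confidence and draws on prior knowledge to score both sides, whereas the sketch above reduces that to a single pairwise comparison.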