@article{conger_integration_1980,
	title = {Integration and generalization of kappas for multiple raters},
	author = {Conger, Anthony J.},
	journal = {Psychological Bulletin},
	volume = {88},
	number = {2},
	pages = {322--328},
	year = {1980},
	doi = {10.1037/0033-2909.88.2.322},
	issn = {1939-1455 (Electronic); 0033-2909 (Print)},
	copyright = {(c) 2012 APA, all rights reserved},
	keywords = {*Rating, Statistical Correlation},
	abstract = {J. A. Cohen's kappa (1960) for measuring agreement between 2 raters, using a nominal scale, has been extended for use with multiple raters by R. J. Light (1971) and J. L. Fleiss (1971). In the present article, these indices are analyzed and reformulated in terms of agreement statistics based on all pairs of raters. It has been argued that simultaneous agreement among all raters could provide an alternative basis for measuring multiple-rater agreement; however, agreement among raters can actually be considered to be an arbitrary choice along a continuum ranging from agreement for a pair of raters to agreement among all raters. Using this generalized concept of g-wise agreement, multiple-rater kappas are extended, interrelated, and illustrated. (4 ref)},
}
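The pairwise formulation the abstract refers to can be illustrated with a small sketch. This is not code from the paper: it implements Cohen's (1960) two-rater kappa directly from its definition and averages it over all rater pairs, which is the pairwise-agreement endpoint of the g-wise continuum (Light's 1971 approach). Function names and the sample ratings are mine, chosen for illustration.

```python
from itertools import combinations

def cohens_kappa(r1, r2):
    """Cohen's (1960) kappa for two raters over nominal categories.

    r1, r2: equal-length lists of category labels, one per subject.
    """
    n = len(r1)
    cats = set(r1) | set(r2)
    # Observed proportion of agreement.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance-expected agreement from each rater's marginal distribution.
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

def pairwise_mean_kappa(ratings):
    """Multiple-rater kappa as the mean of all pairwise Cohen kappas
    (the pair-of-raters end of the g-wise agreement continuum)."""
    pairs = list(combinations(ratings, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)

# Illustrative data: three raters classifying five subjects.
r1 = ['a', 'a', 'b', 'b', 'a']
r2 = ['a', 'a', 'b', 'b', 'b']
r3 = ['a', 'b', 'b', 'b', 'a']
print(pairwise_mean_kappa([r1, r2, r3]))
```

A rater compared with itself yields kappa = 1, and the averaged statistic stays within [-1, 1]; the paper's contribution is relating this pairwise index to indices based on simultaneous (g-wise) agreement among larger rater subsets.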