A Theoretical Analysis of NDCG Type Ranking Measures. Wang, Y., Wang, L., Li, Y., He, D., & Liu, T. In Proceedings of the 26th Annual Conference on Learning Theory, pages 25–54, June 2013. PMLR. ISSN: 1938-7228.

Abstract: Ranking has been extensively studied in information retrieval, machine learning, and statistics. A central problem in ranking is to design a ranking measure for the evaluation of ranking functions. State-of-the-art learning-to-rank methods often train a ranking function using a ranking measure as the objective to maximize. In this paper we study, from a theoretical perspective, the widely used NDCG-type ranking measures. We analyze the behavior of these measures as the number of objects to rank grows large. We first show that, whatever the ranking function is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as the number of items to rank goes to infinity. At first sight, this result seems to imply that NDCG cannot distinguish good from bad ranking functions, contradicting the empirical success of NDCG in many applications. Our next main result is a theorem showing that although NDCG converges to the same limit for all ranking functions, it can still distinguish ranking functions in a strong sense. We then investigate NDCG with other possible discounts. Specifically, we characterize the class of feasible discount functions for NDCG. We also compare the limiting behavior and the distinguishing power of these feasible NDCG-type measures with those of the standard NDCG. We next turn to the cut-off version of NDCG, i.e., NDCG@k. The most popular NDCG@k uses a combination of a slow logarithmic decay and a hard cut-off as its discount, so a natural question is why not simply use a smooth discount with fast decay. We show that if the decay is too fast, the resulting measure loses its strong distinguishing power and may even fail to converge. Finally, feasible NDCG@k measures are also discussed.
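For context, the "standard NDCG" analyzed in the paper is conventionally defined as follows; this is the common textbook formulation, not notation taken from the paper itself:

\[
\mathrm{DCG}_n = \sum_{i=1}^{n} \frac{2^{r_i} - 1}{\log_2(i+1)},
\qquad
\mathrm{NDCG}_n = \frac{\mathrm{DCG}_n}{\mathrm{IDCG}_n},
\]

where $r_i$ is the relevance of the item placed at rank $i$ and $\mathrm{IDCG}_n$ is the DCG of the ideal (relevance-sorted) ranking, so that $\mathrm{NDCG}_n \in [0, 1]$. NDCG@k truncates the sum at $i = k$, and the NDCG-type variants studied in the paper replace the discount $1/\log_2(i+1)$ with a general discount function $D(i)$.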
@inproceedings{wang_theoretical_2013,
title = {A {Theoretical} {Analysis} of {NDCG} {Type} {Ranking} {Measures}},
url = {https://proceedings.mlr.press/v30/Wang13.html},
	abstract = {Ranking has been extensively studied in information retrieval, machine learning, and statistics. A central problem in ranking is to design a ranking measure for the evaluation of ranking functions. State-of-the-art learning-to-rank methods often train a ranking function using a ranking measure as the objective to maximize. In this paper we study, from a theoretical perspective, the widely used NDCG-type ranking measures. We analyze the behavior of these measures as the number of objects to rank grows large. We first show that, whatever the ranking function is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as the number of items to rank goes to infinity. At first sight, this result seems to imply that NDCG cannot distinguish good from bad ranking functions, contradicting the empirical success of NDCG in many applications. Our next main result is a theorem showing that although NDCG converges to the same limit for all ranking functions, it can still distinguish ranking functions in a strong sense. We then investigate NDCG with other possible discounts. Specifically, we characterize the class of feasible discount functions for NDCG. We also compare the limiting behavior and the distinguishing power of these feasible NDCG-type measures with those of the standard NDCG. We next turn to the cut-off version of NDCG, i.e., NDCG@k. The most popular NDCG@k uses a combination of a slow logarithmic decay and a hard cut-off as its discount, so a natural question is why not simply use a smooth discount with fast decay. We show that if the decay is too fast, the resulting measure loses its strong distinguishing power and may even fail to converge. Finally, feasible NDCG@k measures are also discussed.},
language = {en},
urldate = {2023-05-23},
booktitle = {Proceedings of the 26th {Annual} {Conference} on {Learning} {Theory}},
publisher = {PMLR},
author = {Wang, Yining and Wang, Liwei and Li, Yuanzhi and He, Di and Liu, Tie-Yan},
month = jun,
year = {2013},
note = {ISSN: 1938-7228},
pages = {25--54},
}
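To make the convergence result concrete, below is a minimal Python sketch (not the authors' code; the function names and the binary-relevance simulation are illustrative assumptions) of NDCG and NDCG@k with the standard logarithmic discount. Even for a random, deliberately bad ranking, standard NDCG should be seen creeping toward 1 as the list length n grows, while NDCG@10 stays bounded away from 1:

import math
import random

def log_discount(i):
    # Standard logarithmic position discount at 1-based rank i.
    return 1.0 / math.log2(i + 1)

def dcg(relevances, k=None, discount=log_discount):
    # DCG with the standard exponential gain 2^rel - 1 and a pluggable discount.
    if k is not None:
        relevances = relevances[:k]
    return sum((2 ** rel - 1) * discount(i)
               for i, rel in enumerate(relevances, start=1))

def ndcg(relevances, k=None, discount=log_discount):
    # NDCG: DCG of the given ranking normalized by the DCG of the ideal
    # (relevance-sorted) ranking, so a perfect ranking scores exactly 1.
    ideal = dcg(sorted(relevances, reverse=True), k, discount)
    return dcg(relevances, k, discount) / ideal if ideal > 0 else 0.0

if __name__ == "__main__":
    random.seed(0)
    for n in (10**2, 10**4, 10**6):
        rels = [random.randint(0, 1) for _ in range(n)]  # random ranking order
        print(f"n={n:>8}  NDCG={ndcg(rels):.4f}  NDCG@10={ndcg(rels, k=10):.4f}")

Swapping log_discount for a faster-decaying discount (e.g., 1/i**2) models the smooth, fast-decay alternative the abstract asks about, for which the paper shows strong distinguishability, and even convergence, can fail.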
{"_id":"3HuJbWfkwdDasEW2c","bibbaseid":"wang-wang-li-he-liu-atheoreticalanalysisofndcgtyperankingmeasures-2013","author_short":["Wang, Y.","Wang, L.","Li, Y.","He, D.","Liu, T."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"A Theoretical Analysis of NDCG Type Ranking Measures","url":"https://proceedings.mlr.press/v30/Wang13.html","abstract":"Ranking has been extensively studied in information retrieval, machine learning and statistics. A central problem in ranking is to design a ranking measure for evaluation of ranking functions. State of the art leaning to rank methods often train a ranking function by using a ranking measure as the objective to maximize. In this paper we study, from a theoretical perspective, the widely used NDCG type ranking measures. We analyze the behavior of these ranking measures as the number of objects to rank getting large. We first show that, whatever the ranking function is, the standard NDCG which adopts a logarithmic discount, converges to 1 as the number of items to rank goes to infinity. On the first sight, this result seems to imply that NDCG cannot distinguish good and bad ranking functions, contradicting to the empirical success of NDCG in many applications. Our next main result is a theorem which shows that although NDCG converge to the same limit for all ranking functions, it has distinguishability for ranking functions in a strong sense. We then investigate NDCG with other possible discount. Specifically we characterize the class of feasible discount functions for NDCG. We also compare the limiting behavior and the power of distinguishability of these feasible NDCG type measures to the standard NDCG. We next turn to the cut-off version of NDCG, i.e., NDCG@k. The most popular NDCG@k uses a combination of a slow logarithmic decay and a hard cut-off as its discount. So a natural question is why not simply use a smooth discount with fast decay? We show that if the decay is too fast, then the NDCG measure does not have strong power of distinguishability and even not converge. Finally, feasible NDCG@k are also discussed.","language":"en","urldate":"2023-05-23","booktitle":"Proceedings of the 26th Annual Conference on Learning Theory","publisher":"PMLR","author":[{"propositions":[],"lastnames":["Wang"],"firstnames":["Yining"],"suffixes":[]},{"propositions":[],"lastnames":["Wang"],"firstnames":["Liwei"],"suffixes":[]},{"propositions":[],"lastnames":["Li"],"firstnames":["Yuanzhi"],"suffixes":[]},{"propositions":[],"lastnames":["He"],"firstnames":["Di"],"suffixes":[]},{"propositions":[],"lastnames":["Liu"],"firstnames":["Tie-Yan"],"suffixes":[]}],"month":"June","year":"2013","note":"ISSN: 1938-7228","pages":"25–54","bibtex":"@inproceedings{wang_theoretical_2013,\n\ttitle = {A {Theoretical} {Analysis} of {NDCG} {Type} {Ranking} {Measures}},\n\turl = {https://proceedings.mlr.press/v30/Wang13.html},\n\tabstract = {Ranking has been extensively studied in information retrieval, machine learning and statistics. A central problem in ranking is to design a ranking measure for evaluation of ranking functions. State of the art leaning to rank methods often train a ranking function by using a ranking measure as the objective to maximize. In this paper we study, from a theoretical perspective, the widely used NDCG type ranking measures. We analyze the behavior of these ranking measures as the number of objects to rank getting large. 
We first show that, whatever the ranking function is, the standard NDCG which adopts a logarithmic discount, converges to 1 as the number of items to rank goes to infinity. On the first sight, this result seems to imply that NDCG cannot distinguish good and bad ranking functions, contradicting to the empirical success of NDCG in many applications. Our next main result is a theorem which shows that although NDCG converge to the same limit for all ranking functions, it has distinguishability for ranking functions in a strong sense. We then investigate NDCG with other possible discount. Specifically we characterize the class of feasible discount functions for NDCG. We also compare the limiting behavior and the power of distinguishability of these feasible NDCG type measures to the standard NDCG. We next turn to the cut-off version of NDCG, i.e., NDCG@k. The most popular NDCG@k uses a combination of a slow logarithmic decay and a hard cut-off as its discount. So a natural question is why not simply use a smooth discount with fast decay? We show that if the decay is too fast, then the NDCG measure does not have strong power of distinguishability and even not converge. Finally, feasible NDCG@k are also discussed.},\n\tlanguage = {en},\n\turldate = {2023-05-23},\n\tbooktitle = {Proceedings of the 26th {Annual} {Conference} on {Learning} {Theory}},\n\tpublisher = {PMLR},\n\tauthor = {Wang, Yining and Wang, Liwei and Li, Yuanzhi and He, Di and Liu, Tie-Yan},\n\tmonth = jun,\n\tyear = {2013},\n\tnote = {ISSN: 1938-7228},\n\tpages = {25--54},\n}\n\n\n\n","author_short":["Wang, Y.","Wang, L.","Li, Y.","He, D.","Liu, T."],"key":"wang_theoretical_2013","id":"wang_theoretical_2013","bibbaseid":"wang-wang-li-he-liu-atheoreticalanalysisofndcgtyperankingmeasures-2013","role":"author","urls":{"Paper":"https://proceedings.mlr.press/v30/Wang13.html"},"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"https://bibbase.org/zotero/mh_lenguyen","dataSources":["iwKepCrWBps7ojhDx"],"keywords":[],"search_terms":["theoretical","analysis","ndcg","type","ranking","measures","wang","wang","li","he","liu"],"title":"A Theoretical Analysis of NDCG Type Ranking Measures","year":2013}