Explaining recommendations. Tintarev, N. 2009.

Recommender systems such as Amazon offer users recommendations, or suggestions of items to try or buy. These recommendations can then be explained to the user, e.g. “You might (not) like this item because...”. We propose a novel classification of reasons for including explanations in recommender systems. Our focus is on the aim of effectiveness, or decision support, and we contrast it with other metrics such as satisfaction and persuasion. Effective explanations should be helpful in the sense that they help users find items that they like (even after trying them) and discard items they would not like. In user studies, we found that people varied in the features they found important, and composed a short list of features in two domains (movies and cameras). We then built a natural language explanation generation testbed system, considering these features as well as the limitations of using commercial data. This testbed was used in a series of experiments to test whether personalization of explanations affects effectiveness, persuasion and satisfaction. We chose a simple form of personalization which considers likely constraints of a recommender system (e.g. limited meta-data related to the user) as well as brevity (assuming users want to browse items relatively quickly). In these experiments we found that: 1. Explanations helped participants make decisions compared to recommendations without explanations: we saw a significant decrease in opt-outs in item ratings, i.e. participants were more likely to give an initial rating for an item if they were given an explanation, and the likelihood of receiving a rating increased for feature-based explanations compared to a baseline. 2. Contrary to our initial hypothesis, our method of personalization could damage effectiveness for both movies and cameras, two domains that differ along two dimensions we found affected perceived effectiveness: cost (low vs. high) and valuation type (subjective vs. objective). 3. Participants were more satisfied with feature-based than baseline explanations; if the personalization was perceived as relevant to them, personalized feature-based explanations were preferred over non-personalized ones. 4. Satisfaction with explanations was also reflected in the proportion of opt-outs: the opt-out rate for explanations was highest for the baseline in all experiments, despite the different types of explanation baselines used in the two domains.
@book{Tintarev2009,
title = {Explaining recommendations},
type = {book},
year = {2009},
websites = {http://link.springer.com/10.1007/978-3-540-73078-1},
id = {51c09ebc-c378-35d5-99b6-e30fd47985ed},
created = {2018-03-19T13:18:06.233Z},
file_attached = {false},
profile_id = {2ed0fe69-06a2-3e8b-9bc9-5bdb197f1120},
group_id = {e795dbfa-5576-3499-9c01-6574f19bf7aa},
last_modified = {2018-12-14T12:16:31.740Z},
tags = {Amazon,Explanations,human-computer interaction,natural language generation,recommender systems,user-centered design},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {Tintarev2009},
private_publication = {false},
abstract = {Recommender systems such as Amazon offer users recommendations, or suggestions of items to try or buy. These recommendations can then be explained to the user, e.g. “You might (not) like this item because...”. We propose a novel classification of reasons for including explanations in recommender systems. Our focus is on the aim of effectiveness, or decision support, and we contrast it with other metrics such as satisfaction and persuasion. Effective explanations should be helpful in the sense that they help users find items that they like (even after trying them) and discard items they would not like. In user studies, we found that people varied in the features they found important, and composed a short list of features in two domains (movies and cameras). We then built a natural language explanation generation testbed system, considering these features as well as the limitations of using commercial data. This testbed was used in a series of experiments to test whether personalization of explanations affects effectiveness, persuasion and satisfaction. We chose a simple form of personalization which considers likely constraints of a recommender system (e.g. limited meta-data related to the user) as well as brevity (assuming users want to browse items relatively quickly). In these experiments we found that: 1. Explanations helped participants make decisions compared to recommendations without explanations: we saw a significant decrease in opt-outs in item ratings, i.e. participants were more likely to give an initial rating for an item if they were given an explanation, and the likelihood of receiving a rating increased for feature-based explanations compared to a baseline. 2. Contrary to our initial hypothesis, our method of personalization could damage effectiveness for both movies and cameras, two domains that differ along two dimensions we found affected perceived effectiveness: cost (low vs. high) and valuation type (subjective vs. objective). 3. Participants were more satisfied with feature-based than baseline explanations; if the personalization was perceived as relevant to them, personalized feature-based explanations were preferred over non-personalized ones. 4. Satisfaction with explanations was also reflected in the proportion of opt-outs: the opt-out rate for explanations was highest for the baseline in all experiments, despite the different types of explanation baselines used in the two domains.},
bibtype = {book},
author = {Tintarev, Nava}
}