Evaluating recommender explanations: Problems experienced and lessons learned for the evaluation of adaptive systems

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

We describe the methodological considerations that arose over a series of experiments evaluating the effectiveness of explanations for recommendations. In particular, we look at issues relating to: criteria, metrics, product domain used, choice of materials, possible confounding factors, and approximation of experience versus real experience. We generalize the problems we found and the solutions that we applied to adaptive systems. We illustrate the lessons learned with examples from our previous work on adaptive systems (ranging from adaptive learning to persuasive technologies).
Original language: English
Title of host publication: Proceedings of the Sixth Workshop on User-Centred Design and Evaluation of Adaptive Systems
Editors: Stephan Weibelzahl, Judith Masthoff, Alexandros Paramythis, Lex van Velsen
Publisher: CEUR-WS
Pages: 54-63
Number of pages: 10
Publication status: Published - Jun 2009
Event: UCDEAS Workshop associated with UMAP - Trento, Italy
Duration: 22 Jun 2009 - 26 Jun 2009

Conference

Conference: UCDEAS Workshop associated with UMAP
Country: Italy
City: Trento
Period: 22/06/09 - 26/06/09


Cite this

    Tintarev, N., & Masthoff, J. (2009). Evaluating recommender explanations: Problems experienced and lessons learned for the evaluation of adaptive systems. In S. Weibelzahl, J. Masthoff, A. Paramythis, & L. van Velsen (Eds.), Proceedings of the Sixth Workshop on User-Centred Design and Evaluation of Adaptive Systems (pp. 54-63). CEUR-WS. http://www.csd.abdn.ac.uk/~ntintare/TintarevMasthoff_UMAP09.pdf