A method for evaluating the navigability of recommendation algorithms

Daniel Lamprecht*, Markus Strohmaier, Denis Helic

*Corresponding author for this work

    Research output: Contribution to journal › Article

    Abstract

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of large item collections, especially when no clear classification of items exists. Yet the suitability of a recommendation algorithm for these use cases cannot be comprehensively assessed with any evaluation measure proposed so far. In this paper, we expand the repertoire of existing recommendation evaluation techniques with a method for evaluating the navigability of recommendation algorithms. The proposed method combines approaches from network science and information retrieval: it evaluates navigability by simulating three different models of information-seeking scenarios and measuring their success rates. We demonstrate the feasibility of our method by applying it to four non-personalized recommendation algorithms on three datasets, and we also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends evaluation from one-click analysis to multi-click analysis, and presents a general, comprehensive method for evaluating the navigability of arbitrary recommendation algorithms.
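    To give a flavor of the kind of simulation the abstract describes, the following is a minimal illustrative sketch, not the paper's actual models: it builds a top-k recommendation graph from item feature vectors and measures a navigability-style success rate for a greedy information seeker who always clicks the recommendation most similar to a target item, under a fixed hop budget. All function names, the similarity measure, and the hop limit are hypothetical choices for illustration.

    ```python
    import random

    def top_k_recommendations(items, features, k=3):
        """Build a static recommendation graph: each item links to its
        k most similar items (dot-product similarity, illustrative only)."""
        def sim(a, b):
            return sum(x * y for x, y in zip(features[a], features[b]))
        graph = {}
        for item in items:
            others = [o for o in items if o != item]
            graph[item] = sorted(others, key=lambda o: sim(item, o),
                                 reverse=True)[:k]
        return graph

    def greedy_walk(graph, features, start, target, max_hops=10):
        """Simulate one information-seeking episode: at each step the
        seeker clicks the shown recommendation most similar to the
        target item. Returns True if the target is reached in time."""
        def sim(a, b):
            return sum(x * y for x, y in zip(features[a], features[b]))
        current = start
        for _ in range(max_hops):
            if current == target:
                return True
            if target in graph[current]:
                current = target
                continue
            current = max(graph[current], key=lambda o: sim(o, target))
        return current == target

    def success_rate(graph, features, pairs, max_hops=10):
        """Fraction of (start, target) episodes that reach the target:
        one simple navigability score for the recommendation graph."""
        hits = sum(greedy_walk(graph, features, s, t, max_hops)
                   for s, t in pairs)
        return hits / len(pairs)

    if __name__ == "__main__":
        random.seed(0)
        items = list(range(20))
        features = {i: [random.random() for _ in range(5)] for i in items}
        graph = top_k_recommendations(items, features, k=3)
        pairs = [(s, t) for s in items for t in items if s != t]
        print(f"greedy success rate: {success_rate(graph, features, pairs):.2f}")
    ```

    A real evaluation along the lines of the paper would replace the dot-product neighbors with the output of an actual recommendation algorithm and would simulate several distinct seeker models rather than a single greedy one.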

    Original language: English
    Pages (from-to): 247-259
    Number of pages: 13
    Journal: Studies in Computational Intelligence
    Volume: 693
    DOIs
    Publication status: Published - 2017

    ASJC Scopus subject areas

    • Artificial Intelligence
