TasteWeights
A Visual Interactive Hybrid Recommender System

Svetlin Bostandjiev, John O’Donovan, Tobias Höllerer
Department of Computer Science
University of California, Santa Barbara
{alex, jod, holl}@cs.ucsb.edu

ACM Recommender Systems. Burlington Hotel, Dublin. September 10th, 2012
Motivation
Problems
• Many traditional recommenders are “black boxes” and lack explanation and control [Herlocker]
• “Why am I being recommended this movie? I don’t like horror films.”
• Even in modern recommenders, data can be static, outdated, or simply irrelevant from the beginning.
• Data about users and items is spread far and wide.
Motivation
Challenges:
  Need for more dynamic, more adaptable algorithms that can cope with diverse data from APIs.
  And we need an interface that can keep up…

Solutions?
  User interfaces help to explain the provenance of a recommendation. This can improve users’ understanding of the underlying system and contribute to a better user experience and greater satisfaction.
  Interaction allows users to tweak otherwise hidden system settings, provide updated preference data, give recommendation feedback, etc.
TasteWeights: Background
Initial work on graph-based representations of collaborative filtering algorithms:
  PeerChooser: based on static MovieLens data
  SmallWorlds: web-based, dynamic data from the Facebook API

Issues discovered during evaluations:
  PeerChooser: interaction with nodes that represent movie genres was too coarse.
  SmallWorlds: a “complete” representation, but far too complicated a view.

Learning from evaluations:
  Abstraction, detail-on-demand, interactive visual cues, cleaner game-like graphics, and more flexible API connectivity. Focus on “social” recommendation.
Interactive, Trust-based Recommender for Facebook Data

Supports user interaction to update information at recommendation time
  > Solves the stale data problem.

Makes the ACF algorithm transparent and understandable.
  > Increases satisfaction, acceptance, etc.

Enables fast visual exploration of the data
  > What-if scenarios
  > Increases learning
Combining Social and Semantic Recommendations
  Facebook / Twitter (Social Recs)
  DBpedia / Freebase (Semantic Recs)
TasteWeights Design
Demo
http://www.youtube.com/watch?v=9_JgynePm9w&hd=1
Approach
Parallel hybrid recommender system

[Pipeline diagram, three stages: Entity Resolution, Recommendation, Hybridization]
  Input Data (FB likes) feeds three parallel per-source pipelines:
    Wikipedia: Input Data Resolution (W articles) -> Similarity Model -> Wikipedia Sources (W articles) -> Rec Algorithm (recs)
    Facebook:  Input Data Resolution (FB pages)   -> Similarity Model -> Facebook Sources (band pages)  -> Rec Algorithm (recs)
    Twitter:   Input Data Resolution (TW #tags)   -> Similarity Model -> Twitter Sources (WF experts)   -> Rec Algorithm (recs)
  The three per-source rec lists are combined by a Hybrid Rec Algorithm into the Output Data (recs), as sketched below.
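A minimal structural sketch of this pipeline in Python (all class and function names are illustrative assumptions, not the authors' code): each source runs input-data resolution, a similarity model, and a rec algorithm, producing one ranked list per source that the hybrid step then combines.

from typing import Callable, Dict, List, Tuple

Rec = Tuple[str, float]  # (recommended item, score)

class SourcePipeline:
    """One per-source pipeline: input-data resolution -> similarity model -> rec algorithm."""

    def __init__(self,
                 resolve: Callable[[List[str]], List[str]],   # e.g. FB likes -> Wikipedia articles
                 similar: Callable[[str], List[Rec]]):        # related entities for one resolved item
        self.resolve = resolve
        self.similar = similar

    def recommend(self, profile_items: List[str]) -> List[Rec]:
        scores: Dict[str, float] = {}
        for entity in self.resolve(profile_items):
            for item, score in self.similar(entity):
                scores[item] = scores.get(item, 0.0) + score
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def run_parallel(profile_items: List[str],
                 pipelines: Dict[str, SourcePipeline]) -> Dict[str, List[Rec]]:
    """One ranked rec list per source (Wikipedia, Facebook, Twitter), ready for hybridization."""
    return {name: p.recommend(profile_items) for name, p in pipelines.items()}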
Recommendation Sources
Input Data Resolution
  Mapping between Wikipedia articles, Facebook pages, and Twitter #tags (see the sketch below)

Similarity Models
  Wikipedia (data source: DBpedia)
  Facebook (data source: Facebook Graph API)
  Twitter (data source: wefollow.com)
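As an illustration of the resolution step, here is a toy lookup that maps one profile item (a band from the user's Facebook likes) to its Wikipedia article, Facebook page, and Twitter hashtag. The example entry and function name are hypothetical; per the speaker notes, the real mapping is built mainly through search callouts against each source, not a hard-coded table.

from typing import Dict, Optional

RESOLUTION_TABLE: Dict[str, Dict[str, str]] = {
    "Radiohead": {                    # hypothetical profile item (an FB "like")
        "wikipedia": "Radiohead",     # Wikipedia article title (resolved via DBpedia)
        "facebook": "Radiohead",      # Facebook band page (resolved via Graph API search)
        "twitter": "#radiohead",      # Twitter hashtag (resolved via wefollow.com)
    },
}

def resolve(item: str, domain: str) -> Optional[str]:
    """Return the domain-specific identifier for a profile item, if it resolved."""
    return RESOLUTION_TABLE.get(item, {}).get(domain)

print(resolve("Radiohead", "twitter"))  # '#radiohead'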
Generating Recs.
Individual Source

Hybrid Strategies (sketched below)
  Weighted
  Mixed
  Cross-source
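A hedged sketch of the three hybrid strategies, following the descriptions in the speaker notes: weighted is a linear combination of an item's score and its source's weight, mixed interleaves the ranked lists, and cross-source is weighted with extra weight for items recommended by several sources. Function names and the boost factor are assumptions.

from collections import defaultdict
from typing import Dict, List, Tuple

Rec = Tuple[str, float]  # (item, score)

def weighted(recs_per_source: Dict[str, List[Rec]],
             source_weights: Dict[str, float]) -> List[Rec]:
    """Weighted: linear combination of an item's score and the weight of its source."""
    scores: Dict[str, float] = defaultdict(float)
    for source, recs in recs_per_source.items():
        for item, score in recs:
            scores[item] += source_weights[source] * score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def mixed(recs_per_source: Dict[str, List[Rec]], k: int = 15) -> List[Rec]:
    """Mixed: take the best item from each source's list, then the second best, and so on."""
    out, seen, rank = [], set(), 0
    while len(out) < k and any(rank < len(recs) for recs in recs_per_source.values()):
        for recs in recs_per_source.values():
            if rank < len(recs) and recs[rank][0] not in seen:
                out.append(recs[rank])
                seen.add(recs[rank][0])
        rank += 1
    return out[:k]

def cross_source(recs_per_source: Dict[str, List[Rec]],
                 source_weights: Dict[str, float],
                 boost: float = 2.0) -> List[Rec]:
    """Cross-source: like weighted, but items that appear in several sources get extra weight."""
    base = dict(weighted(recs_per_source, source_weights))
    support: Dict[str, int] = defaultdict(int)
    for recs in recs_per_source.values():
        for item, _ in recs:
            support[item] += 1
    boosted = {item: score * (boost if support[item] > 1 else 1.0)
               for item, score in base.items()}
    return sorted(boosted.items(), key=lambda kv: kv[1], reverse=True)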
Evaluation
Goals
  Evaluate combining social and semantic recommendations
  Evaluate explanation and transparency in a hybrid recommender
  Evaluate interaction in a hybrid recommender

Setup
  Supervised user study; 32 participants from the human subject pool at UCSB

Procedure
  Pre-questionnaire
  Tasks
    Interact with Profile
    Interact with Sources
    Interact with Full interface
    Rate recommendations
  Post-questionnaire (Explanation & Interaction)
Evaluation: Accuracy
Experiment
  One-way repeated-measures ANOVA
  Compared 9 recommendation methods (below) in terms of recommendation accuracy

Method (independent variable)
  Single-source: Wikipedia, Facebook, Twitter
  Hybrid:        Weighted, Mixed, Cross-source
  Interaction:   Profile, Sources, Full

Accuracy (dependent variable)
  Measured in terms of “Utility” (see the sketch below)
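A hedged sketch of the “utility” measure as described in the speaker notes: user ratings of a method's ranked recommendation list, weighted so that utility decays exponentially with rank. The half-life constant and function name are assumptions; the paper defines the exact formula and scaling.

from typing import List

def list_utility(ratings: List[float], half_life: float = 5.0) -> float:
    """ratings: the user's ratings of a method's recommendations, best-ranked first."""
    return sum(r / (2 ** (i / half_life)) for i, r in enumerate(ratings))

# Illustrative comparison in the spirit of the speaker-note example: a list rated
# 5, 4, 4, 5, 3 at the top scores higher than one rated 1, 2, 3, 1, 2.
print(list_utility([5, 4, 4, 5, 3]) > list_utility([1, 2, 3, 1, 2]))  # True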
Results: Accuracy

[Figure: plot of means of the recommendation methods over utility, with 95% confidence intervals]
[Figure: results from a Tukey post-hoc analysis of the recommendation methods: multiple comparisons of means with a 95% family-wise confidence level]
Results: Diversity

[Chart: number of recommended items (0 to 140) per bin of number of Facebook likes (1,000 to 100,000,000); series: Wikipedia, Facebook (CF), Hybrid, Hybrid + Interaction]
TasteWeights on LinkedIn Data
Case Study: Portability of the TW interface
  - Developed a social-semantic recommendation algorithm for data from the LinkedIn API
  - Personalized for one “active” logged-in user
  - Visualized the algorithm in the TasteWeights interface

Algorithm (see the sketch below):
  - Map profile items to noun phrases
  - Resolve them to Wikipedia articles
    e.g.: Ph.D. => PhD, UCSB => UC Santa Barbara
  - Compute similarity based on overlap in resolved entities
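A sketch of the overlap-based similarity step, assuming a Jaccard-style measure over each profile's set of resolved Wikipedia entities; the slide does not pin down the exact overlap measure, so treat this as one plausible instantiation, and the example profiles are hypothetical.

from typing import Set

def resolved_entity_overlap(entities_a: Set[str], entities_b: Set[str]) -> float:
    """Similarity between two LinkedIn profiles as overlap of their resolved Wikipedia articles."""
    if not entities_a or not entities_b:
        return 0.0
    return len(entities_a & entities_b) / len(entities_a | entities_b)

# Hypothetical profiles whose items resolved to these Wikipedia articles:
active_user = {"Doctor of Philosophy", "University of California, Santa Barbara", "Recommender system"}
connection = {"University of California, Santa Barbara", "Recommender system", "Data mining"}
print(round(resolved_entity_overlap(active_user, connection), 2))  # 0.5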


Features
  - Segmented / organized user profile
  - Interactive profile weighting
  - Interactive weighting of social connections
  - Dynamic re-ranking of recommendations (visual feedback)
  - Provenance views to show the effects of each interaction
Conclusions
UI and interaction design are important considerations for RSs
  - Increased explanation and provenance
  - Expose otherwise hidden controls (e.g., control of the hybrid recommender)
  - Helps ease the stale data problem
  - Support user input at various granularities (recommended item, recommendation partner, profile items, etc.)
  - Increase ambient learning
  - Promote interest in the recommender system (game-like feel)

Contributions:
  - Demonstrated a novel interactive RS
  - Hybrid of recommendations from Wikipedia, Facebook, and Twitter
  - Evaluated via a 32-person supervised user study at UCSB
  - Demonstrated portability of the system on LinkedIn’s API

Results
  - Interaction increases user satisfaction in all conditions (more interaction = higher accuracy)
  - The cross-source hybrid strategy outperformed individual-source strategies
After the break… Inspectability and Control in Social Recommenders
  In this work we touched on the ideas of inspectability and control in the context of our hybrid recommender system.

  In the next talk, Bart Knijnenburg (UC Irvine) will present results from a larger study that focuses on a general analysis of inspectability and control in social recommenders. This study used some components from our TasteWeights system.
Thanks for listening!

Related content

Trending (8)
  Personalizing the web building effective recommender systems (Aravindharamanan S)
  Collaborative filtering
  Recommender Systems (Lior Rokach)
  Social media recommendation based on people and tags (final) (es712)
  [Final]collaborative filtering and recommender systems (Falitokiniaina Rabearison)
  Recommender Systems: Advances in Collaborative Filtering (Changsung Moon)
  Tags as tools for social classification (Isabella Peters)
  Cataloguing of learning objects using social tagging (Luciana Zaina)

Similar to TasteWeights: Visual Interactive Hybrid Recommendations (20)
  Social Recommender Systems Tutorial - WWW 2011 (idoguy)
  CIDR 2009: Jeff Heer Keynote (infoblog)
  Designing for Collaboration: Challenges & Considerations of Multi-Use Informa... (Stephanie Steinhardt)
  Requirements for Processing Datasets for Recommender Systems (Stoitsis Giannis)
  Spark + AI Summit - The Importance of Model Fairness and Interpretability in ... (Francesca Lazzeri, PhD)
  Recsys2016 Tutorial by Xavier and Deepak (Deepak Agarwal)
  Quality, Quantity, Web and Semantics (Zemanta)
  Quality, quantity, web and semantics (Andraz Tori)
  2013 nas-ehs-data-integration-dc (c.titus.brown)
  Collaborative Filtering and Recommender Systems By Navisro Analytics (Navisro Analytics)
  RCOMM 2011 - Sentiment Classification (bohanairl)
  RCOMM 2011 - Sentiment Classification with RapidMiner (bohanairl)
  The BioMoby Semantic Annotation Experiment (Mark Wilkinson)
  Recsys 2018 overview and highlights (Sandra Garcia)
  Twente ir-course 20-10-2010 (Arjen de Vries)
  DataEngConf: Talkographics: Using What Viewers Say Online to Measure TV and B... (Hakka Labs)
  BAQMaR - Conference DM (BAQMaR)
  Altmetrics to track the impact of datasets (Pat Loria)
  Collaborative filtering
  Public PhD Defense - Ben De Meester (Ben De Meester)


Editor's notes

  1. Grand challenge? Seamlessly ingest data from newly arriving sources and produce real-time recommendations.
  2. Show demo if possible. 1.50: Jenny node interaction.
  3. General idea. Explain the 3 columns. Left: profile. Middle: Web APIs (wiki – semantic; FB – social; Twitter – socio-semantic, expert-based); such metadata is usually not available in modern rec sys; Pandora has just started sharing why they rec certain bands to you; we argue that there is value in displaying metadata and letting the user fine-tune it. Right: recommendations.
  4. Demo the interactivity and UI features: connections, real-time feedback, item weights, domain weights; more popular bands go to the top.
  5. Parallel hybrid rec sys: the algos run in parallel and are later combined to produce the final recs. Input data resolution: map to the space of the different data domains (W, F, T); map input music bands -> W articles, F pages, T hash tags; done mainly through search callouts. Similarity models: take in W articles, F pages, T hash tags -> produce related W articles / F friends / T experts. Rec algos: one individual rec algo per data domain; combine the individual algos via a hybrid rec algo -> related music bands.
  6. Input data resolution mainly through search engine callouts. Similarity models.
  7. First, we compute recs from each individual source; then we combine the recs. Weighted: linear combination (weight of rec from the individual source * weight of the source). Mixed: compile the ranked lists of recs, pick the top rec from each list, then the 2nd-best recs, and so on. Cross-source: same as weighted, but give more weight to recs that appear in multiple sources.
  8. Goals: compare hybrid methods to individual-source methods; compare interactive methods to hybrid methods.
  9. Utility: a metric for how useful a list of items is; more weight to items at the top of the list; exponential decay of utility as we go down the list. For each method, generate the top 15 recs and ask users to rate them. Method 1 (top 5 recs' ratings): 5, 4, 4, 5, 3 -> utility 10. Method 2: 1, 2, 3, 1, 2 -> utility 3. Prepare some response for a diversity question.
  10. First three: individual sources. Yellow: hybrids. Purple: interaction methods. Improved rec accuracy: cross-source hybrid > single source (contribution 1); full interaction > best hybrid (cross-source) (contribution 3).
  11. Explicit qualitative evaluation. Explanation: highest rating (4.4) for “the system helped me understand how I got the recommendations”. Interaction (perceived): highest for “interaction helped me get better recommendations”.
  12. Same qualitative evaluation notes as 11.
  13. This should be a bar graph. Same qualitative evaluation notes as 11.
  14. Same utility notes as 9.
  15. Same utility notes as 9.
  16. Same utility notes as 9.