Slides of my presentation at IUI 2014 on SetFusion, a visual hybrid recommender: "See What you Want to See: Visual User-Driven Approach for Recommendation"
http://dl.acm.org/citation.cfm?id=2557542
DEMO available:
http://www.youtube.com/watch?v=9LwSx1V6Yxk
1. See What you Want to See:
Visual User-Driven Approach
for Recommendation
Denis Parra, PUC Chile
Peter Brusilovsky, University of Pittsburgh
Christoph Trattner, Graz University of Technology
IUI 2014, Haifa, Israel
2. Outline
• Short intro to some Challenges in Recommender
Systems
• Our Approach to User Controllability (demo)
• User Study & Results
• Summary & Future Work
02/27/2014 D.Parra et al.~ IUI 2014 2
4. Recommender Systems (RecSys)
Systems that help people find relevant items in a
crowded item or information space (McNee et al. 2006)
5. Challenges of RecSys Addressed Here
Traditionally, RecSys research has focused on producing
accurate recommendation algorithms. In this work, we
address these challenges:
1. Human Factors in RecSys: Study controllability by
introducing a novel visualization that presents
fusion of different recommenders
2. Evaluation: Use of Objective,
Subjective & Behavioral metrics
6. Research Goals & User Studies
Research Goal
• To understand the effect of controllability on
user engagement and on the overall user
experience of a RecSys
(in this paper) Through
• Two studies conducted using Conference Navigator:
[Screenshot of Conference Navigator tabs: Program, Proceedings, Author List, Recommendations]
http://halley.exp.sis.pitt.edu/cn3/
9. Preliminary Work: TalkExplorer
• Adaptation of Aduna Visualization in CN
• Main research question: does the fusion (intersection) of
contexts of relevance improve the user experience?
7/22/2013 D.Parra ~ PhD. Dissertation Defense 9
[Figure annotations: CN user (center); Recommender; cluster with
intersection of entities; cluster (of talks) associated with only
one entity]
12. Our Proposed Interface - II
Traditional Ranked List
Papers sorted by relevance, combining 3 recommendation
approaches.
13. Our Proposed Interface - III
Sliders
Allow the user to control the importance of
each data source or recommendation method
Interactive Venn Diagram
Allows the user to inspect and filter the recommended
papers. Actions available:
- Filter item list by clicking on an area
- Highlight a paper by mouse-over on a circle
- Scroll to paper by clicking on a circle
- Indicate bookmarked papers
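The Venn-diagram filtering above can be sketched as set operations over the recommenders' result lists: each diagram area corresponds to the papers recommended by exactly one combination of methods. This is an illustrative sketch, not the actual SetFusion implementation; the method names and sample data below are assumptions.

```python
from itertools import combinations

def venn_areas(rec_lists):
    """Map each non-empty combination of methods to the set of papers
    recommended by exactly that combination (one Venn-diagram area).
    rec_lists: dict mapping method name -> set of recommended paper ids."""
    methods = list(rec_lists)
    areas = {}
    for r in range(1, len(methods) + 1):
        for combo in combinations(methods, r):
            # Papers recommended by every method in the combination...
            inside = set.intersection(*(rec_lists[m] for m in combo))
            # ...and by none of the remaining methods.
            others = [rec_lists[m] for m in methods if m not in combo]
            papers = inside - set().union(*others)
            if papers:
                areas[combo] = papers
    return areas

# Hypothetical output of three recommenders (illustrative paper ids):
recs = {"content": {1, 2, 3}, "popularity": {2, 3, 4}, "tags": {3, 4, 5}}
areas = venn_areas(recs)
# Clicking the central area of the diagram would filter the ranked
# list down to areas[("content", "popularity", "tags")].
```

Clicking a diagram area then amounts to filtering the ranked list to the papers stored for that combination of methods.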
14. Mixed Hybridization: Item Score
M : the set of all methods available to fuse
rec_i : recommended item i
m_j : recommendation method j
rank_{rec_i, m_j} : rank position of item rec_i in the list produced by method m_j
W_{m_j} : weight given by the user to method m_j through the controllable interface (slider weight)
|M_{rec_i}| : the number of methods by which item rec_i was recommended
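The symbols above suggest a weighted score in which each method that recommended an item contributes its slider weight scaled by the item's rank position. The exact combination used in SetFusion is given in the paper; the rank-to-relevance transform below is my own assumption for illustration.

```python
def fusion_score(item, rankings, weights, list_size):
    """Sketch of a mixed-hybridization score: each method m_j that
    recommends the item contributes its slider weight W_mj scaled by
    the item's rank position in that method's list (rank 1 contributes
    the most). rankings: method -> {item: 1-based rank};
    weights: method -> W_mj (the slider value)."""
    score = 0.0
    for method, ranks in rankings.items():
        if item in ranks:
            # Map rank to (0, 1]: top of the list -> 1.0 (an assumed transform).
            score += weights[method] * (list_size - ranks[item] + 1) / list_size
    return score

# Illustrative data: "p2" is ranked by two of three methods.
rankings = {"content": {"p1": 1, "p2": 2},
            "popularity": {"p2": 1},
            "tags": {"p3": 4}}
weights = {"content": 0.5, "popularity": 1.0, "tags": 0.8}
score_p2 = fusion_score("p2", rankings, weights, list_size=10)
```

Items recommended by several methods (large |M_{rec_i}|) accumulate one contribution per method, so moving a slider re-weights and re-ranks the fused list.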
16. Studies: CSCW 2013 & UMAP 2013
                 CSCW 2013                             UMAP 2013
Conditions       Static List / Interactive SetFusion   Interactive SetFusion
# Attendants     ~400                                  ~100
# RecSys users   15 / 22                               50
Study type       Between subjects                      1 group

Preliminary user study (CSCW 2013): here we learned that the
interactive interface had a positive effect on user behavior and on
the perception of the RecSys.
Second study (UMAP 2013): only the interactive interface.
CHANGES between the studies:
1. Preference elicitation: in CSCW we avoided cold start; in UMAP
we had no constraints.
2. Use of the ratings to update the recommended items.
3. Tuning of the content-based recommender.
17. Comparing CSCW and UMAP
(Only interactive interfaces)            CSCW 2013            UMAP 2013
# Users exposed to recommendations       84                   95
# Users who used the recommender         22 (~26%)            50 (~52.6%)
# Users who bookmarked papers            6 (~27.2%)           14 (~28%)
# Talks bookmarked / avg. per user       28 / 4.67            103 / 7.36
Average user rating                      3.73 / 10 (~45.4%)   3.62 / 8 (~16%)
Usage at the recommender page:
# Talks explored (avg. per user)         16.84                14.9
# People returning                       7 (~31.8%)           14 (~28%)
Average time spent on page (seconds)     261.72               353.8
20. From the Final Survey
Statement                                 CSCW 2013 (11 users)    UMAP 2013 (8 users)
"I don't think that Conference
Navigator needs a Recommender System"     M = 2.36, S.E. = 0.20   M = 1.50, S.E. = 0.21 (p < 0.05)
"I would recommend this system
to my colleagues"                         M = 3.36, S.E. = 0.28   M = 4.25, S.E. = 0.33 (p < 0.05)
- Users perceived SetFusion as a significantly more useful tool
in UMAP than in CSCW
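As a quick sanity check (my own arithmetic, not from the slides), the reported per-group means and standard errors can be combined into an approximate Welch-style two-sample statistic, since each S.E. already folds in its group's sample size.

```python
import math

def approx_t(m1, se1, m2, se2):
    """Approximate two-sample t statistic computed directly from group
    means and standard errors (SE = s / sqrt(n), so no separate n needed)."""
    return (m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)

# "CN doesn't need a RecSys": CSCW (M=2.36, SE=0.20) vs UMAP (M=1.50, SE=0.21)
t_need = approx_t(2.36, 0.20, 1.50, 0.21)   # roughly 2.97
```

With roughly 17 degrees of freedom this lands well above the ~2.1 critical value, consistent with the p < 0.05 reported on the slide.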
22. Summary of Results
• From Study 1 we showed that user controllability
had an effect on the user experience with the RecSys.
• Comparing SetFusion in Study 1 and Study 2:
– A natural elicitation setting (UMAP) allowed users to be
more engaged in using the system for the task the interface
supports: bookmarking recommended papers.
– Users also perceived the system as more useful in
UMAP 2013.
– Ratings are a form of giving the user control, a big lesson
from Study 1: if you ask users for feedback, use it!
23. Limitations & Future Work
• Apply our approach to other domains (fusion of
data sources or recommendation algorithms)
• Find alternatives to scale the approach to more than
3 sets; potential alternatives:
– Clustering
– Radial Sets
• Consider other factors that might interact with the
user experience:
– Controllability by itself vs. minimum level of accuracy