CONTEXT-AWARE SIMILARITIES
WITHIN THE FACTORIZATION
FRAMEWORK

Balázs Hidasi
Domonkos Tikk

CARR WORKSHOP, 5TH FEBRUARY 2013, ROME
OUTLINE
• Background & scope
• Levels of CA similarity
• Experiments
• Future work
BACKGROUND
[Diagram: this work sits at the intersection of implicit feedback, context, factorization, and item-to-item (I2I) recommendation.]
IMPLICIT FEEDBACK
• User preference is not coded explicitly in the data
• E.g. purchase history
  • Presence → preference assumed
  • Absence → ???
• Binary "preference" matrix
  • Zeroes should be considered
• Optimize for
  • Weighted RMSE (a typical objective is shown below)
  • Partial ordering (ranking)
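For reference, the weighted RMSE objective over the binary preference matrix typically has the form below; the specific weighting scheme (a large weight w_1 on observed events, a small weight w_0 on zeroes) is an illustrative assumption, not taken from the slides.

$$
\min_{U,I}\; \sum_{u,i} w_{u,i}\left(\hat{r}_{u,i} - r_{u,i}\right)^2,
\qquad
w_{u,i} = \begin{cases} w_1 & \text{if } r_{u,i} = 1 \\ w_0 & \text{if } r_{u,i} = 0 \end{cases},
\quad w_1 \gg w_0
$$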
CONTEXT
• Can mean anything
• Here: event context
  • User U has an event on Item I while the context is C
• E.g.: time, weather, mood of U, freshness of I, etc.
• In the experiments:
  • Seasonality (time of the day or time of the week)
    • Time period: week / day
FACTORIZATION I
• Preference data can be organized into a matrix
  • Size of dimensions is high
  • Data is sparse
• Approximate this matrix by the product of two low-rank matrices
  • Each item and user has a feature vector
  • Predicted preference is the scalar product of the appropriate vectors:
    $\hat{r}_{u,i} = U_u^T I_i$
• Here we optimize for wRMSE (implicit case)
  • Learning features with ALS (a minimal sketch follows below)
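The slides contain no code; below is a minimal, illustrative sketch of one ALS half-step for the implicit, weighted-RMSE setting. The confidence weighting w = 1 + alpha·r and all names are assumptions (loosely following the widely used iALS scheme), not the authors' exact implementation.

```python
import numpy as np

def als_user_step(R, X, Y, alpha=40.0, reg=0.1):
    """Recompute user factors X given fixed item factors Y.

    R : (n_users, n_items) binary preference matrix (implicit events)
    X : (n_users, k) user factors, Y : (n_items, k) item factors
    Weights are w = 1 + alpha * r, so zeroes still count with weight 1.
    """
    k = Y.shape[1]
    YtY = Y.T @ Y                          # shared term for the weight-1 background
    for u in range(R.shape[0]):
        r_u = R[u]
        w_u = 1.0 + alpha * r_u            # confidence weights
        # Solve (Y^T W_u Y + reg*I) x_u = Y^T W_u r_u
        A = YtY + Y.T @ (Y * (w_u - 1.0)[:, None]) + reg * np.eye(k)
        b = Y.T @ (w_u * r_u)
        X[u] = np.linalg.solve(A, b)
    return X
```

The item-factor step is symmetric; alternating the two until convergence minimizes the weighted squared error.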
FACTORIZATION II
• Context introduced → additional context dimension
• Matrix → tensor (table of records)
• Models for preference prediction (both sketched in code below):
  • Elementwise product model
    • Weighted scalar product ($\circ$ is the elementwise product):
      $\hat{r}_{u,i,c} = 1^T (U_u \circ I_i \circ C_c)$
  • Pairwise model
    • Context-dependent user/item bias:
      $\hat{r}_{u,i,c} = U_u^T I_i + U_u^T C_c + I_i^T C_c$
[Diagram: the preference tensor approximated from user, item, and context feature matrices.]
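Purely as an illustration, the two prediction rules above translate to the following NumPy one-liners for single feature vectors u, i, c of equal length; the names are placeholders.

```python
import numpy as np

def elementwise_model(u, i, c):
    """Weighted scalar product: r = 1^T (u ∘ i ∘ c)."""
    return np.sum(u * i * c)

def pairwise_model(u, i, c):
    """Context-dependent biases: r = u·i + u·c + i·c."""
    return u @ i + u @ c + i @ c
```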
ITEM-TO-ITEM RECOMMENDATIONS
• Items similar to the current item
• E.g.: user cold start, related items, etc.
• Approaches: association rules, similarity between item consumption vectors, etc.
• In the factorization framework (both similarities sketched in code below):
  • Similarity between the feature vectors
  • Scalar product: $s_{i,j} = I_i^T I_j$
  • Cosine similarity: $s_{i,j} = \frac{I_i^T I_j}{\|I_i\|_2 \, \|I_j\|_2}$
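A small, illustrative NumPy sketch of the two item-to-item scores above, computed from an item factor matrix I (n_items × k); the top_k_similar helper is an assumed convenience, not part of the talk.

```python
import numpy as np

def scalar_product_sims(I):
    """s[i, j] = I_i^T I_j for all item pairs."""
    return I @ I.T

def cosine_sims(I, eps=1e-12):
    """Cosine similarity between all item feature vectors."""
    norms = np.maximum(np.linalg.norm(I, axis=1, keepdims=True), eps)
    In = I / norms
    return In @ In.T

def top_k_similar(I, item_id, k=20):
    """Indices of the k items most similar to item_id (cosine)."""
    sims = cosine_sims(I)[item_id]
    sims[item_id] = -np.inf            # exclude the query item itself
    return np.argsort(-sims)[:k]
```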
SCOPE OF THIS PROJECT
• Examine whether item similarities can be improved
  • using context-aware learning or prediction
  • compared to the basic feature-based solution
• Motivation:
  • If factorization models are used anyway, it would be good to use them for I2I recommendations as well
• Out of scope:
  • Comparison with other approaches (e.g. association rules)
CONTEXT-AWARE SIMILARITIES: LEVEL 1
• The form of computing similarity remains the same:
  $s_{i,j} = I_i^T I_j$   or   $s_{i,j} = \frac{I_i^T I_j}{\|I_i\|_2 \, \|I_j\|_2}$
• Similarity is NOT context-aware
• Context-aware learning is used
  • Assumption: item models will be more accurate
  • Reason: during learning, context is modelled separately
• Elementwise product model
  • Context-related effects are coded in the context feature vector
• Pairwise model
  • Context-related biases are removed
CONTEXT-AWARE SIMILARITIES: LEVEL 2
• Incorporating the context features (both variants sketched in code below)
• Elementwise product model
  • Similarities reweighted by the context feature
  • Assumption: will be sensitive to the quality of the context
  • $s_{i,j,c} = 1^T (I_i \circ C_c \circ I_j)$   or   $s_{i,j,c} = \frac{1^T (I_i \circ C_c \circ I_j)}{\|I_i\|_2 \, \|I_j\|_2}$
• Pairwise model
  • Context-dependent promotions/demotions for the participating items
  • Assumption: minor improvements over the basic similarity
  • $s_{i,j,c} = I_i^T I_j + I_i^T C_c + I_j^T C_c$
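An illustrative NumPy version of the two Level 2 similarities above, for an item factor matrix I and a context feature vector c; names are placeholders.

```python
import numpy as np

def elementwise_ctx_sim(I, c, i, j):
    """Level 2, elementwise model: s = 1^T (I_i ∘ C_c ∘ I_j)."""
    return np.sum(I[i] * c * I[j])

def pairwise_ctx_sim(I, c, i, j):
    """Level 2, pairwise model: s = I_i·I_j + I_i·C_c + I_j·C_c."""
    return I[i] @ I[j] + I[i] @ c + I[j] @ c
```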
CONTEXT-AWARE SIMILARITIES: LEVEL 2 NORMALIZATION
• Normalization of the context vector
• Only for cosine similarity
• Elementwise product model:
  • Makes no difference in the ordering
  • Recommendation in a given context to a given user
• Pairwise model (see the code sketch below)
  • Might affect results
  • Controls the importance of item promotions/demotions
  • Without normalizing the context vector:
    $s_{i,j,c} = \frac{I_i^T I_j}{\|I_i\|_2 \, \|I_j\|_2} + \frac{I_i^T C_c}{\|I_i\|_2} + \frac{I_j^T C_c}{\|I_j\|_2}$
  • With the context vector normalized:
    $s_{i,j,c} = \frac{I_i^T I_j}{\|I_i\|_2 \, \|I_j\|_2} + \frac{I_i^T C_c}{\|I_i\|_2 \, \|C_c\|_2} + \frac{I_j^T C_c}{\|I_j\|_2 \, \|C_c\|_2}$
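A minimal sketch of the normalized pairwise Level 2 similarity above, with a flag toggling whether the context vector is also normalized; this mirrors the two formulas and is not the authors' code.

```python
import numpy as np

def pairwise_ctx_sim_cosine(I, c, i, j, normalize_context=True, eps=1e-12):
    """Cosine-style pairwise Level 2 similarity, optionally normalizing c."""
    ni = max(np.linalg.norm(I[i]), eps)
    nj = max(np.linalg.norm(I[j]), eps)
    nc = max(np.linalg.norm(c), eps) if normalize_context else 1.0
    return (I[i] @ I[j]) / (ni * nj) + (I[i] @ c) / (ni * nc) + (I[j] @ c) / (nj * nc)
```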
EXPERIMENTS - SETUP
• Four implicit datasets
  • LastFM 1K – music
  • TV1, TV2 – IPTV
  • Grocery – online grocery shopping
• Context: seasonality
  • Both with manually and automatically determined time bands
• Evaluation: recommend similar items to the users' previous item (Recall@20 is sketched in code below)
  • Recall@20
  • MAP@20
  • Coverage@20
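As an illustration of the evaluation, a simple Recall@k computation for this similar-item setting: for each test event we recommend k items similar to the user's previous item and count a hit if the consumed item is among them. The function names and the event format are assumptions.

```python
def recall_at_k(test_events, recommend_fn, k=20):
    """test_events: iterable of (prev_item, next_item) pairs.
    recommend_fn(prev_item, k) -> list of recommended item ids."""
    hits, total = 0, 0
    for prev_item, next_item in test_events:
        total += 1
        if next_item in recommend_fn(prev_item, k):
            hits += 1
    return hits / total if total else 0.0
```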
EXPERIMENTS - RESULTS
[Bar charts: improvements in Recall and Coverage for Grocery, LastFM, TV1 and TV2; bars from left to right: L1 elementwise, L1 pairwise, L2 elementwise, L2 pairwise, L2 pairwise (norm). The largest gains reach 29.40% on Grocery, 183.91% on LastFM and 797.51% on TV1, while TV2 is dominated by losses down to -82.32%.]
EXPERIMENTS - CONCLUSION
• Context awareness generally helps
• Improvement greatly depends on the method and the context quality
• All but the level 2 elementwise method:
  • Minor improvements
  • Tolerant of context quality
• Level 2 elementwise product:
  • Possibly huge improvements
  • Or a huge decrease in recommendation quality
  • Depends on the context/problem
(POSSIBLE) FUTURE WORK
• Experimentation with different contexts
• Different similarity measures between feature vectors
• Predetermination of whether context is useful for
  • User bias
  • Item bias
  • Reweighting
• Predetermination of context quality
• Different evaluation methods
  • E.g. recommend to a session
THANKS FOR THE ATTENTION!




For more of my recommender systems related research visit my website:
http://www.hidasi.eu

More from Domonkos Tikk:
• Challenges Encountered by Scaling Up Recommendation Services at Gravity R&D
• General factorization framework for context-aware recommendations
• Tartalomgazdagítás (content enrichment)
• Idomaar crowd rec_reference_fw
• Big Data in Online Classifieds
• Slides from CARR 2012 WS - Enhancing Matrix Factorization Through Initializat...
• Fast ALS-Based Tensor Factorization for Context-Aware Recommendation from Imp...
• Recommender Systems Evaluation: A 3D Benchmark - presented at RUE 2012 worksh...
• From a toolkit of recommendation algorithms into a real business: the Gravity...