
Rank all the (geo) things!

572 views

Published in: Technology, Business

  1. Rank all the (geo) things! @jsuchal @SynopsiTV
  2. How do you learn things? Blogs, newsletters. Courses, training. Conferences. Work.
  3. Research papers?
  4. WHY NOT?
  5. WHY NOT? “It’s not useful for the real world.” “I wouldn’t understand any of that.”
  6. About me: PhD dropout, FIIT STU Bratislava. foaf.sk, otvorenezmluvy.sk, govdata.sk. sme.sk news recommender. Developer @ SynopsiTV.
  7. My workflow
  8. My workflow: MAGIC! MAGIC! MAGIC!
  9. Search vs. recommender engine. Search engine: input is a query, output is a list of results. Recommendation engine: input is a movie, output is a list of similar movies.
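The contrast on this slide is easy to make concrete: a recommendation engine is just a ranking function whose "query" is an item. A minimal sketch, assuming a toy item-to-item similarity over the sets of users who liked each movie (the movie titles, user names, and function names below are invented for illustration):

```python
import math

# Hypothetical toy data: movie -> set of users who liked it.
LIKES = {
    "Alien":  {"ana", "bob", "cyr"},
    "Aliens": {"ana", "bob", "dan"},
    "Amelie": {"eva", "fay"},
}

def cosine(a, b):
    """Cosine similarity between two sets of users."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def similar_movies(movie, n=5):
    """Recommendation engine: a movie in, a ranked list of similar movies out."""
    scores = {
        other: cosine(LIKES[movie], users)
        for other, users in LIKES.items()
        if other != movie
    }
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

A search engine has the same shape, except the input is a query string and the scoring function matches query terms against documents; the ranking machinery downstream is shared.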
  10. Academic Mode
  11. Accurately interpreting clickthrough data as implicit feedback. Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, and Geri Gay. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’05, pages 154–161, New York, NY, USA, 2005. ACM. Significant on two-tailed tests at a 95% confidence level!!!
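One concrete takeaway from the Joachims et al. paper is that clicks are best read as relative preferences rather than absolute relevance judgments: a clicked result is preferred over the unclicked results ranked above it. A minimal sketch of that pair-extraction rule (the function and variable names are mine, not the paper's):

```python
def preference_pairs(ranking, clicked):
    """'Click > skip above': a clicked result is preferred over every
    higher-ranked result that was shown but not clicked."""
    clicked = set(clicked)
    pairs = []
    for i, doc in enumerate(ranking):
        if doc in clicked:
            for skipped in ranking[:i]:
                if skipped not in clicked:
                    # (preferred, less_preferred)
                    pairs.append((doc, skipped))
    return pairs
```

For a results page ["a", "b", "c"] where only "c" was clicked, this yields the pairs ("c", "a") and ("c", "b") and, importantly, says nothing about "a" versus "b".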
  12. Learning to Rank for Spatiotemporal Search. Blake Shaw, Jon Shea, Siddhartha Sinha, and Andrew Hogue. In Proceedings of the sixth ACM international conference on Web search and data mining, WSDM ’13, pages 717–726, New York, NY, USA, 2013. ACM.
  13. Learning to Rank for Spatiotemporal Search
  14. Learning to Rank for Spatiotemporal Search
  15. Learning to Rank for Spatiotemporal Search
  16. Learning to Rank for Spatiotemporal Search
  17. Learning to Rank for Spatiotemporal Search
  18. Accurately interpreting clickthrough data as implicit feedback
  19. Accurately interpreting clickthrough data as implicit feedback
  20. Evaluation metrics: Mean Average Precision @ N (probability of the target result being in the top N items); Mean Reciprocal Rank (1 / rank of the target result); Normalized Discounted Cumulative Gain; Expected Reciprocal Rank.
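These metrics are short enough to implement directly. A sketch of three of them, following the definitions on the slide (Expected Reciprocal Rank is omitted; the relevance sets and gain values in the docstrings and tests are invented example data):

```python
import math

def reciprocal_rank(ranking, relevant):
    """1 / rank of the first relevant result; 0.0 if none is found."""
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def precision_at_n(ranking, relevant, n):
    """Fraction of the top-n results that are relevant."""
    return sum(1 for doc in ranking[:n] if doc in relevant) / n

def ndcg(ranking, gains, n):
    """Normalized Discounted Cumulative Gain at n.
    `gains` maps each document to its graded relevance."""
    def dcg(docs):
        return sum(
            gains.get(doc, 0) / math.log2(rank + 1)
            for rank, doc in enumerate(docs, start=1)
        )
    ideal = sorted(gains, key=gains.get, reverse=True)
    best = dcg(ideal[:n])
    return dcg(ranking[:n]) / best if best else 0.0
```

Mean Average Precision and Mean Reciprocal Rank are then just these per-query values averaged over a query set.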
  21. Optimizing search engines using clickthrough data. Thorsten Joachims. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’02, pages 133–142, New York, NY, USA, 2002. ACM.
  22. Optimizing search engines using clickthrough data
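The central trick in this KDD ’02 paper is to turn click-derived preference pairs into a binary classification problem on feature-vector differences and train a linear model (a ranking SVM in the paper). A sketch of the pairwise transform, with a tiny perceptron standing in for the SVM solver so the example stays self-contained (the feature vectors and document names are invented):

```python
def pairwise_examples(preferences, features):
    """Each preference (winner, loser) becomes the difference vector
    f(winner) - f(loser) labeled +1, plus its negation labeled -1."""
    examples = []
    for winner, loser in preferences:
        diff = [a - b for a, b in zip(features[winner], features[loser])]
        examples.append((diff, +1))
        examples.append(([-x for x in diff], -1))
    return examples

def train_linear_ranker(examples, epochs=100, lr=0.1):
    """Perceptron on difference vectors; the learned weight vector
    scores documents so that preferred documents score higher."""
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, y in examples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w
```

At query time the model never needs pairs again: each candidate is scored by the dot product of `w` with its features, and results are sorted by that score.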
  23. Query chains: learning to rank from implicit feedback. Filip Radlinski and Thorsten Joachims. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, KDD ’05, pages 239–248, New York, NY, USA, 2005. ACM.
  24. On Caption Bias in Interleaving Experiments. Katja Hofmann, Fritz Behr, and Filip Radlinski. In Proceedings of the ACM Conference on Information and Knowledge Management, CIKM ’12, 2012.
  25. On Caption Bias in Interleaving Experiments
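Interleaving experiments like the ones this paper studies merge two rankings into a single result list and credit each click to the ranker that contributed the clicked result. A sketch of the common team-draft variant, as background for the paper rather than the paper's own algorithm (names and structure are my own):

```python
import random

def team_draft_interleave(ranking_a, ranking_b, rng=None):
    """Team-draft interleaving: in each round a coin flip decides which
    ranker picks first, then both contribute their best not-yet-shown
    result. Clicks on the merged list are credited to the owning team."""
    rng = rng or random.Random(0)
    merged, team = [], {}

    def pick(ranking, name):
        for doc in ranking:
            if doc not in team:
                merged.append(doc)
                team[doc] = name
                return

    total = len(set(ranking_a) | set(ranking_b))
    while len(team) < total:
        order = [("a", ranking_a), ("b", ranking_b)]
        if rng.random() < 0.5:
            order.reverse()
        for name, ranking in order:
            pick(ranking, name)
    return merged, team
```

The paper's point is that even this credit assignment can be biased: result captions (title, snippet) influence clicks independently of document relevance, so the click counts need correcting.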
  26. Fighting Search Engine Amnesia: Reranking Repeated Results. Milad Shokouhi, Ryen W. White, Paul Bennett, and Filip Radlinski. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’13, pages 273–282, New York, NY, USA, 2013. ACM. From the abstract: “In this paper, we observed that the same results are often shown to users multiple times during search sessions. We showed that there are a number of effects at play, which can be leveraged to improve information retrieval performance. In particular, previously skipped results are much less likely to be clicked, and previously clicked results may or may not be re-clicked depending on other factors of the session.”
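The observation above suggests a simple reranking rule: on repeat impressions within a session, demote results the user already skipped. A toy sketch of that idea (the 1/rank base score and the 0.3 penalty are invented for illustration; the paper learns such effects from data rather than hard-coding them):

```python
def rerank_with_memory(ranking, skipped_before, penalty=0.3):
    """Demote previously skipped results on a repeat impression.
    Each result starts with a 1/rank score; results the user skipped
    earlier in the session are multiplied by an invented penalty."""
    scores = {doc: 1.0 / rank for rank, doc in enumerate(ranking, start=1)}
    for doc in skipped_before:
        if doc in scores:
            scores[doc] *= penalty
    return sorted(scores, key=scores.get, reverse=True)
```

So if the user previously skipped the top result "a" in ["a", "b", "c"], the repeat impression pushes "a" below the results they have not yet judged.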
  27. Challenges
  28. Diversification
  29. Group recommendations
  30. Context-aware recommendations: time of day, device, mood, season, location
  31. Serious about recommenders and search? Get in touch! @synopsitv @jsuchal
