Human Agency on
Algorithmic Systems
ANSGAR KOENE & ELVIRA PEREZ VALLEJOS, UNIVERSITY OF NOTTINGHAM
HELENA WEBB & MENISHA PATEL, UNIVERSITY OF OXFORD
AOIR 2017
User experience satisfaction on social
network sites
Human attention is a limited resource
Filter
Good information service = good filtering
Sacrificing control for Convenience
Personalized recommendations
 Content based – similarity to past results the user liked
 Collaborative – results that similar users liked
(people with statistically similar tastes/interests)
 Community based – results that people in the same social
network liked
(people who are linked on a social network e.g. ‘friends’)
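The collaborative approach above can be illustrated with a minimal sketch (the ratings, user names and item names are invented for illustration, not from the slides): score unseen items by how much statistically similar users liked them.

```python
# Minimal user-based collaborative filter over a toy ratings matrix.
# All data here is illustrative.
from math import sqrt

ratings = {
    "alice": {"a": 5, "b": 4, "c": 1},
    "bob":   {"a": 4, "b": 5, "d": 2},
    "carol": {"c": 5, "d": 4},
}

def cosine(u, v):
    """Cosine similarity, using the items both users rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (sqrt(sum(x * x for x in u.values())) *
                  sqrt(sum(x * x for x in v.values())))

def recommend(user, k=1):
    """Rank items the user has not seen, weighted by rater similarity."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their_ratings)
        for item, r in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # → ['d'] (bob is most similar to alice)
```

A content-based filter would instead compare item features to the user's past likes; a community-based one would restrict the similar-user pool to the user's social-network links.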
How do the algorithms work?
User understanding of social media
algorithms: Facebook News Feed
Of 40 interviewed participants, more than 60% of the Facebook users
were entirely unaware of any algorithmic curation on Facebook at
all: "They believed every single story from their friends and
followed pages appeared in their news feed".
Published at: CHI 2015
Pre-workshop survey of 96 teenagers
(13-17 years old)
 No clear preference between a more personalised and a more 'organic'
internet experience: 53% More personalised, 47% More 'organic'
 Lack of awareness of how search engines rank information, but
participants believe it is important for people to know
• How much do you know? 36% Not much, 58% A little, 6% Quite a lot
• Do you think it's important to know? 62% Yes, 16% Not really, 22% Don't know
 Regulation role: Who makes sure that the Internet and digital world is safe and
neutral? 4% Police, 23% Nobody, 29% Government, 44% The big tech companies
Multi-Stakeholder Workshop
Stakeholders from academia, education, NGOs and industry
30 participants
Fairness in relation to algorithmic design and practice
Four key case studies: fake news, personalisation, gaming the system, and
transparency
What constitutes a fair algorithm?
What kinds of (legal and ethical) responsibilities do Internet companies have
to ensure their algorithms produce results that are fair and without bias?
Fairness in relation to algorithmic design and
practice - participant recommendations
 Criteria relating to social norms and values:
 Criteria relating to system reliability:
 Criteria relating to (non-)interference with user control:
Criteria relating to social norms and values:
(i) Sometimes disparate outcomes are acceptable if based on
individual lifestyle choices over which people have control.
(ii) Ethical precautions are more important than higher accuracy.
(iii) There needs to be a balancing of individual values and
socio-cultural values. Problem: how to weigh the relevant
socio-cultural values?
Criteria relating to system reliability:
(i) Results must be balanced with due regard for trustworthiness.
(ii) Need for independent system evaluation and monitoring over
time.
Criteria relating to (non-)interference
with user control:
(i) The subjective experience of fairness depends on user objectives
at the time of use, and therefore requires an ability to tune the
data and the algorithm.
(ii) Users should be able to limit the collection of data about them
and its use. Inferred personal data is still personal data. The
meaning assigned to the data must be justified to the user.
(iii) The functioning of the algorithm should be demonstrated/explained
in a way that can be understood by the data subject.
Criteria relating to (non-)interference
with user control:
(iv) If not vital to the task, there should be an option to opt out of
the algorithm.
(v) Users must have the freedom to explore algorithm effects, even
if this would increase the ability to "game the system".
(vi) Need for a clear means of appeal/redress for the impact of the
algorithmic system.
Take (some) control of News Feed priorities
Letting users choose the Algorithm
Evaluating fairness from outputs only
(chart: outputs ranked from most preferred to least preferred)
Evaluating fairness with knowledge
about the algorithm decision principles
 A1: minimise disparity while
guaranteeing at least 70% of
maximum possible total
 A2: maximise the minimum
individual outcome while
guaranteeing at least 70% of
maximum possible total
 A3: maximise total
 A4: maximise the minimum
individual outcome
 A5: minimise disparity
(chart: algorithms ranked from most preferred to least preferred)
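The five decision principles above can be sketched in code (a hedged illustration: the candidate allocations and the concrete 70% threshold arithmetic here are toy values, not the study's data):

```python
# Toy ranking of candidate allocations under decision principles A1-A5.
# Each candidate is a tuple of individual outcomes; values are illustrative.
def disparity(a):
    return max(a) - min(a)

def total(a):
    return sum(a)

candidates = [(4, 4, 4), (7, 6, 3), (9, 8, 1), (5, 5, 5)]

# "at least 70% of the maximum possible total"
threshold = 0.7 * max(total(a) for a in candidates)
eligible = [a for a in candidates if total(a) >= threshold]

a1 = min(eligible, key=disparity)    # A1: minimise disparity, subject to threshold
a2 = max(eligible, key=min)          # A2: maximise the minimum outcome, subject to threshold
a3 = max(candidates, key=total)      # A3: maximise total
a4 = max(candidates, key=min)        # A4: maximise the minimum outcome
a5 = min(candidates, key=disparity)  # A5: minimise disparity
```

Even on toy data, the principles pick different winners: A3 favours the high-total but unequal allocation, while A5 tolerates a lower total to equalise outcomes, which mirrors why participants disagreed on a single 'best' algorithm.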
Conclusion
Algorithmic mediation can play an important role in improving
the usefulness of online services.
Users want more options to understand, adjust, or even opt out of
algorithmic mediation.
Users do not agree on a single option when choosing a 'best'
algorithm for a given task.
Thank you!
http://unbias.wp.horizon.ac.uk/
Open invitation to join the P7003 working group
http://sites.ieee.org/sagroups-7003/
Revealing News Feed behaviour
Participants indicate desired changes
Machine learning principles
Classifiers
Clustering
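A toy contrast between the two principles (illustrative data only, not from the deck): a classifier learns from labelled examples, while clustering groups unlabelled points.

```python
# Classification vs. clustering on tiny 2-D toy data (illustrative).
def dist(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

# Classification: labels are given; predict the label of a new point
# from its nearest labelled neighbour (1-NN).
labelled = {(1, 1): "news", (1, 2): "news", (8, 8): "sport", (9, 8): "sport"}

def classify(point):
    nearest = min(labelled, key=lambda p: dist(p, point))
    return labelled[nearest]

# Clustering: no labels; assign each point to its nearest centroid
# (one assignment step of k-means).
points = [(1, 1), (2, 1), (8, 8), (9, 9)]
centroids = [(1.5, 1.0), (8.5, 8.5)]
clusters = [min(range(len(centroids)), key=lambda c: dist(p, centroids[c]))
            for p in points]

print(classify((2, 2)))  # → news
print(clusters)          # → [0, 0, 1, 1]
```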
E. Bakshy, S. Messing & L.A. Adamic, "Exposure to ideologically diverse news and opinion on Facebook", Science, 348, 1130-1132, 2015
Echo-chamber enhancement by the News Feed
algorithm
10.1 million active US Facebook users
Proportion of content that is cross-cutting
E. Bakshy, S. Messing & L.A. Adamic, "Exposure to ideologically diverse news and opinion on Facebook", Science, 348, 1130-1132, 2015
Positioning effect in the News Feed
Speaker notes

  1. Our first stakeholder workshop was held on February 3rd 2017, at the Digital Catapult in London. It brought together participants from academia, education, NGOs and enterprises. We were fortunate to have 30 participants on the day, which was a great turnout. The workshop focused on four case studies, each chosen because it concerned a key current debate surrounding the use of algorithms and fairness. The case studies centred on: fake news, personalisation, gaming the system, and transparency.
  2. This WP aims to develop a methodology and the necessary IT and techniques for revealing the impact of algorithmic biases in personalisation-based platforms to non-experts (e.g. youths), and for co-developing "fairer" algorithms in close collaboration with specialists and non-expert users. In Year 1, Sofia and Michael have been running a task that asks participants to make task allocation decisions. In a situation in which resources are limited, different algorithms might be used to determine who receives what. Participants are asked to determine which algorithm is best suited to make the allocation, and this inevitably brings up issues of fairness. Discussion reveals different models of fairness. These findings will be put towards further work on the processes of algorithm design and the possibility of developing a fair algorithm.