Keynote presentation on policy approaches to socio-technical causes of algorithmic bias at the Bias in Information, Algorithms and Systems workshop at the iConference on 25 March 2018.
iConference 2018 BIAS workshop keynote
1. Policy approaches to socio-technical causes of bias in algorithmic systems – what role can ethical standards play?
ANSGAR KOENE, HORIZON DIGITAL ECONOMY RESEARCH INSTITUTE, UNIVERSITY OF NOTTINGHAM
25TH MARCH 2018
8. Case study: Recidivism risk prediction
COMPAS recidivism prediction tool
◦ Built by a commercial company, Northpointe, Inc.
Estimates the likelihood of criminals re-offending in the future
◦ Inputs: Based on a long questionnaire
◦ Outputs: Used across the US by judges and parole officers
Are COMPAS’ estimates fair to salient social groups?
Machine Bias: There’s software used across the country to predict future criminals. (ProPublica)
9. Is COMPAS fair to all groups?
Northpointe: In each estimated risk level, false discovery rates for blacks & whites are similar
So YES!
10. Is COMPAS fair to all groups?
ProPublica: False positive & false negative rates are considerably worse for blacks than whites
So NO!
11. Who is right about COMPAS?
Both! Depends on how you measure fairness!
How many fairness measures can one define?
◦ How many different error rate measures can one define?
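To make the question concrete, here is a minimal Python sketch (added for illustration; the numbers are invented, not COMPAS data) of four of the standard error-rate measures, each of which gives rise to a different group-fairness criterion:

```python
def error_rates(tp, fp, fn, tn):
    """Four common error-rate measures computed from a 2x2 confusion matrix.

    Each can be equalised across groups, giving a different notion of 'fairness';
    their complements (TPR, TNR, PPV, NPV) give yet more candidate measures.
    """
    return {
        "FPR": fp / (fp + tn),  # false positive rate: non-reoffenders labelled high risk
        "FNR": fn / (fn + tp),  # false negative rate: reoffenders labelled low risk
        "FDR": fp / (fp + tp),  # false discovery rate: share of high-risk labels that were wrong
        "FOR": fn / (fn + tn),  # false omission rate: share of low-risk labels that were wrong
    }

# Hypothetical counts for one group (illustrative only, not COMPAS data)
print(error_rates(tp=300, fp=150, fn=200, tn=350))
```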
12. But, aren’t the measures similar?
NO! They present inherent trade-offs!
When base recidivism rates for blacks & whites differ, there is no non-trivial solution that achieves similar FPR, FNR, FDR and FOR!
No non-trivial solution can be simultaneously fair according to both the ProPublica & Northpointe analyses!
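One way to see why (an added note; this follows directly from the confusion-matrix definitions and is not on the original slide): for a group with base recidivism rate p,

FPR = (p / (1 - p)) × (FDR / (1 - FDR)) × (1 - FNR)

so if two groups share the same FDR and FNR but have different base rates p, their FPRs cannot also be equal unless the predictor is trivial (e.g. FDR = 0).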
13. Limited resources assignment problem:
Choose your favourite character to play
Each character can be played by only one player
15. The algorithm decision principles
A1: minimise total disparity while guaranteeing at least 70% of maximum overall satisfaction
A2: maximise the minimum individual satisfaction while guaranteeing at least 70% of maximum overall satisfaction
A3: maximise overall satisfaction
A4: maximise the minimum individual satisfaction
A5: minimise total disparity
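A minimal sketch of how principles A1–A5 could be operationalised for the character-assignment game above. Everything here is an illustrative assumption (made-up player preferences, satisfaction scored by rank position, disparity taken as the sum of pairwise satisfaction gaps); it is a brute-force enumeration, not the study's actual implementation:

```python
from itertools import permutations

# Hypothetical preference rankings: each player lists characters from most to least preferred.
preferences = {
    "P1": ["wizard", "knight", "rogue"],
    "P2": ["wizard", "rogue", "knight"],
    "P3": ["knight", "wizard", "rogue"],
}
characters = ["wizard", "knight", "rogue"]
players = list(preferences)

def satisfaction(player, character):
    """Higher is better: top choice scores len(list), last choice scores 1."""
    ranking = preferences[player]
    return len(ranking) - ranking.index(character)

def evaluate(assignment):
    scores = [satisfaction(p, c) for p, c in assignment.items()]
    total = sum(scores)
    minimum = min(scores)
    # One possible disparity measure: sum of pairwise satisfaction gaps.
    disparity = sum(abs(a - b) for i, a in enumerate(scores) for b in scores[i + 1:])
    return total, minimum, disparity

# Enumerate every one-to-one assignment (each character played by only one player).
assignments = [dict(zip(players, perm)) for perm in permutations(characters)]
evals = {tuple(a.items()): evaluate(a) for a in assignments}
max_total = max(t for t, _, _ in evals.values())
feasible = {a: e for a, e in evals.items() if e[0] >= 0.7 * max_total}  # the 70% floor in A1/A2

principles = {
    "A1": min(feasible, key=lambda a: feasible[a][2]),  # least disparity, subject to the floor
    "A2": max(feasible, key=lambda a: feasible[a][1]),  # best worst-off player, subject to the floor
    "A3": max(evals, key=lambda a: evals[a][0]),        # maximise overall satisfaction
    "A4": max(evals, key=lambda a: evals[a][1]),        # maximise minimum satisfaction
    "A5": min(evals, key=lambda a: evals[a][2]),        # minimise total disparity
}
for name, choice in principles.items():
    print(name, dict(choice), evals[choice])
```

Even in this tiny example, the disparity-minimising principles (A1, A5) and the satisfaction-oriented ones (A2–A4) select different assignments, which is exactly what makes the choice of principle a value judgement.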
17. The ‘messy’ problem of fair algorithms
Fairness is fundamentally a societally defined construct (e.g. equality of outcomes vs equality of treatment)
◦ Cultural differences between nations/jurisdictions
◦ Cultural changes over time
“Code is Law”: Algorithms, like laws, both operationalize and entrench spatio-temporal values
Algorithms, like the law, must be:
◦ transparent
◦ adaptable to change (by a balanced process)
18. UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
Standards and policy
Stakeholder workshops
Youth Juries
21. EU response (in addition to GDPR)
EU Parliament Science and Technology Options Assessment (STOA) panel request for study on “Algorithmic Opportunities and Accountability”
23. ACM Principles on Algorithmic Transparency and Accountability
Awareness
Access and Redress
Accountability
Explanation
Data Provenance
Auditability
Validation and Testing
26. IEEE-SA Standards Projects
• IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
• IEEE P7001: Transparency of Autonomous Systems
• IEEE P7002: Data Privacy Process
• IEEE P7003: Algorithmic Bias Considerations
• IEEE P7004: Standard on Child and Student Data Governance
• IEEE P7005: Standard on Employer Data Governance
• IEEE P7006: Standard on Personal Data AI Agent Working Group
• IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
• IEEE P7008: Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
• IEEE P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
• IEEE P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
27. Open invitation to join the P7003 working group
http://sites.ieee.org/sagroups-7003/
28. P7003 foundational sections
Taxonomy of Algorithmic Bias
Person categorization and identifying affected population groups
Legal frameworks related to Bias
Psychology of Bias
29. P7003 algorithm development sections
Algorithmic system design stages
Assurance of representativeness of testing/training data (see the sketch after this list)
Evaluation of system outcomes
Evaluation of algorithmic processing
Assessment of resilience against external manipulation to Bias
Documentation of criteria, scope and justifications of choices
Use Cases
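As a concrete illustration of the “Assurance of representativeness of testing/training data” item, here is a minimal sketch of one way such a check could look. It is an assumption added for this write-up, not text from P7003: it compares group shares in a training set against reference population shares and flags large deviations.

```python
from collections import Counter

def representativeness_report(training_groups, reference_shares, tolerance=0.2):
    """Compare group shares in the training data against reference population shares.

    training_groups: list of group labels, one per training record.
    reference_shares: dict mapping group label -> expected share (sums to 1).
    tolerance: allowed relative deviation before a group is flagged.
    """
    counts = Counter(training_groups)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        relative_gap = (observed - expected) / expected
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flagged": abs(relative_gap) > tolerance,
        }
    return report

# Hypothetical example: training data skewed towards one group.
training = ["A"] * 700 + ["B"] * 200 + ["C"] * 100
reference = {"A": 0.5, "B": 0.3, "C": 0.2}
for group, row in representativeness_report(training, reference).items():
    print(group, row)
```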
30. Related standards activities
British Standards Institution (BSI) – BS 8611: Ethical design and application of robots
ISO/IEC JTC1 SC42
◦ Artificial Intelligence Concepts and Terminology
◦ Framework for Artificial Intelligence Systems Using Machine Learning
Jan 2018: China published the “Artificial Intelligence Standardization White Paper”.