3. Attention to Fairness in the Machine Learning Community
International conferences
• ACM FAT* 2018-19
• AAAI/ACM AIES 2018-19
International workshops
• FATML 2014-18
• AI for Social Good 2018-19
• Challenges and Opportunities for AI in Financial Services 2018
• AI Ethics WS 2018
Invited talks
• ICML 2017 (L. Sweeney)
• NIPS 2017 (K. Crawford)
• KDD 2017 (C. Dwork)
• KDD 2018 (J. M. Wing)
14. Setup
• For simplicity, we consider only supervised classification
• Input X: education, work history, qualifications, etc.
• Sensitive attribute S: gender, race, religion, political orientation, age, etc.
• Label Y: what we want to predict (e.g., hire/reject)
• Predicted label Ŷ: the label predicted by the algorithm
[Figure: the learning pipeline maps an input X with label Y and sensitive attribute S (e.g., S = male vs. S = female) to a predicted label Ŷ]
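As a toy illustration of this setup (not from the slides; all data and the predictor are invented), the following sketch builds synthetic hiring data with features X, a sensitive attribute S, and labels Y, then measures one common group-fairness quantity, the demographic parity gap of the predicted labels Ŷ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hiring data: X = features (education, experience, ...),
# S = sensitive attribute (0/1), Y = true label (hire/reject).
n = 1000
S = rng.integers(0, 2, size=n)
X = rng.normal(loc=S[:, None] * 0.5, size=(n, 3))  # features mildly correlated with S
Y = (X.sum(axis=1) + rng.normal(size=n) > 0).astype(int)

# Stand-in predictor: threshold on a score (a trained classifier in practice).
Y_hat = (X.sum(axis=1) > 0).astype(int)

# Demographic parity gap: |P(Y_hat = 1 | S = 0) - P(Y_hat = 1 | S = 1)|
gap = abs(Y_hat[S == 0].mean() - Y_hat[S == 1].mean())
print(f"demographic parity gap: {gap:.3f}")
```

Because X is correlated with S here, the naive predictor exhibits a nonzero gap even though it never looks at S directly.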
54. Fair bandit [Joseph+16]
• Reward magnitude = the ability of the selected individual
• The algorithm must not preferentially select anyone over a more able individual
Meritocratic fairness: π_i(t) > π_j(t) only if f_i(x_i^(t)) > f_j(x_j^(t))
[Figure: arms A, B, C, D, E compared by their quality estimates f_i(x_i^(t))]
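The constraint above can be made concrete with a simplified single-round sketch of the confidence-interval chaining idea behind [Joseph+16] (this is an illustration, not their full algorithm): arms whose confidence intervals overlap with the empirically best arm's interval cannot be confidently ranked, so they are played uniformly at random.

```python
import numpy as np

def fair_select(means, widths, rng):
    """One round of meritocratically fair selection (sketch): arms whose
    confidence intervals [mean - width, mean + width] chain-overlap with
    the most optimistic arm are indistinguishable and sampled uniformly."""
    lo, hi = means - widths, means + widths
    best = int(np.argmax(hi))          # most optimistic arm
    linked = {best}
    changed = True
    while changed:                     # chain overlapping intervals
        changed = False
        for i in range(len(means)):
            if i not in linked and any(
                hi[i] >= lo[j] and hi[j] >= lo[i] for j in linked
            ):
                linked.add(i)
                changed = True
    return int(rng.choice(sorted(linked)))  # uniform over the linked set

rng = np.random.default_rng(1)
means = np.array([0.9, 0.85, 0.3])
widths = np.array([0.1, 0.1, 0.05])
print(fair_select(means, widths, rng))  # arm 2 is confidently worse, never chosen
```

Arms 0 and 1 have overlapping intervals, so neither may be preferred; arm 2 is confidently below both and can be excluded without violating the constraint.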
71. References 1
• [Hardt+16] Moritz Hardt, Eric Price, and Nathan Srebro. Equality of Opportunity in Supervised Learning. In: NeurIPS, pp. 3315-3323, 2016. https://arxiv.org/abs/1610.02413
• [Pleiss+17] Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. On Fairness and Calibration. In: NeurIPS, pp. 5680-5689, 2017. https://arxiv.org/abs/1709.02012
• [Dwork+12] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel. Fairness Through Awareness. In: ITCS, pp. 214-226, 2012. https://arxiv.org/abs/1104.3913
72. References 2
• [Agarwal+18] Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A Reductions Approach to Fair Classification. In: ICML, PMLR 80, pp. 60-69, 2018. https://arxiv.org/abs/1803.02453
• [Agarwal+19] Alekh Agarwal, Miroslav Dudík, and Zhiwei Steven Wu. Fair Regression: Quantitative Definitions and Reduction-based Algorithms. In: ICML, PMLR 97, pp. 120-129, 2019. https://arxiv.org/abs/1905.12843
• [Zemel+13] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning Fair Representations. In: ICML, PMLR 28, pp. 325-333, 2013.
73. References 3
• [Zhao+19] Han Zhao and Geoffrey J. Gordon. Inherent Tradeoffs in Learning Fair Representations. In: NeurIPS, 2019, to appear. https://arxiv.org/abs/1906.08386
• [Xie+17] Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. Controllable Invariance through Adversarial Feature Learning. In: NeurIPS, pp. 585-596, 2017. https://arxiv.org/abs/1705.11122
• [Moyer+18] Daniel Moyer, Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, and Aram Galstyan. Invariant Representations without Adversarial Training. In: NeurIPS, pp. 9084-9093, 2018. https://arxiv.org/abs/1805.09458
74. References 4
• [Woodworth+17] Blake Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian, and Nathan Srebro. Learning Non-Discriminatory Predictors. In: COLT, pp. 1920-1953, 2017. https://arxiv.org/abs/1702.06081
• [Cotter+19] Andrew Cotter, Maya Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, and Seungil You. Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints. In: ICML, PMLR 97, pp. 1397-1405, 2019. https://arxiv.org/abs/1807.00028
• [Rothblum+18] Guy N. Rothblum and Gal Yona. Probably Approximately Metric-Fair Learning. In: ICML, PMLR 80, pp. 5680-5688, 2018. https://arxiv.org/abs/1803.03242
75. References 5
• [Joseph+16] Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. Fairness in Learning: Classic and Contextual Bandits. In: NeurIPS, pp. 325-333, 2016.
• [Liu+17] Yang Liu, Goran Radanovic, Christos Dimitrakakis, Debmalya Mandal, and David C. Parkes. Calibrated Fairness in Bandits. In: 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FATML), 2017. https://arxiv.org/abs/1707.01875
• [Gillen+18] Stephen Gillen, Christopher Jung, Michael Kearns, and Aaron Roth. Online Learning with an Unknown Fairness Metric. In: NeurIPS, pp. 2600-2609, 2018. https://arxiv.org/abs/1802.06936
76. References 6
• [Jabbari+17] Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. Fairness in Reinforcement Learning. In: ICML, PMLR 70, pp. 1617-1626, 2017. https://arxiv.org/abs/1611.03071
• [Liu+18] Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. Delayed Impact of Fair Machine Learning. In: ICML, PMLR 80, pp. 3150-3158, 2018. https://arxiv.org/abs/1803.04383
• [Aivodji+19] Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, and Alain Tapp. Fairwashing: the risk of rationalization. In: ICML, 2019. https://arxiv.org/abs/1901.09749
• [Fukuchi+20] Kazuto Fukuchi, Satoshi Hara, and Takanori Maehara. Faking Fairness via Stealthily Biased Sampling. In: AAAI, Special Track on AI for Social Impact (AISI), 2020, to appear. https://arxiv.org/abs/1901.08291
81. On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning [Heidari+ICML 19]
• https://arxiv.org/abs/1903.01209
• Introduces the notion of effort, and defines fairness in terms of differences in the reward received for the amount of effort exerted
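The intuition can be sketched numerically (a hypothetical toy, not the paper's formalization): compare the reward each group receives per unit of effort, and call the difference a disparity.

```python
import numpy as np

# Hypothetical effort/reward profiles for two groups; all numbers invented.
effort = {"group_a": np.array([1.0, 2.0, 3.0]),
          "group_b": np.array([1.0, 2.0, 3.0])}
reward = {"group_a": np.array([0.5, 1.0, 1.5]),   # 0.5 reward per unit effort
          "group_b": np.array([0.3, 0.6, 0.9])}   # 0.3 reward per unit effort

# Mean reward-per-effort rate for each group, then the gap between groups:
rates = {g: (reward[g] / effort[g]).mean() for g in effort}
disparity = abs(rates["group_a"] - rates["group_b"])
print(f"reward-per-effort disparity: {disparity:.2f}")  # prints 0.20
```

Here both groups exert identical effort, yet group_b is rewarded less for it; [Heidari+ICML 19] studies how such gaps interact with decision policies over time.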
82. Obtaining fairness using optimal transport theory [Barrio+ICML 19]
• https://arxiv.org/abs/1806.03195
• Extends the Disparate Impact score using the Wasserstein distance
• Develops an algorithm for finding an optimal (approximately) fair data distribution
• Follow-up to [Feldman+KDD 15]
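One ingredient of this line of work can be shown concretely: the 1-Wasserstein distance between the per-group distributions of a model's scores, which measures how far apart the groups are. This sketch (synthetic data, not the paper's algorithm) computes it via the quantile-function formula, exact for equal-size samples:

```python
import numpy as np

def wasserstein_1d(u, v):
    """1-Wasserstein distance between two empirical 1-D distributions of
    equal size: mean absolute difference of the sorted samples."""
    return np.abs(np.sort(u) - np.sort(v)).mean()

rng = np.random.default_rng(0)
scores_s0 = rng.normal(0.0, 1.0, 5000)   # model scores for group S = 0
scores_s1 = rng.normal(0.5, 1.0, 5000)   # group S = 1 scores shifted by 0.5

d = wasserstein_1d(scores_s0, scores_s1)
print(f"W1 between group score distributions: {d:.2f}")  # ≈ 0.5, the mean shift
```

A distance of zero would mean the score distributions coincide across groups; repair schemes in this family move each group's distribution toward a common barycenter to shrink this distance.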
96. Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification [Chzhen+NeurIPS 19]
• https://arxiv.org/abs/1906.05082
• Semi-supervised learning under an equal opportunity constraint
• Proves consistency
• Asymptotically converges to the optimal classifier while achieving equal opportunity
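The constraint in question can be made concrete with a small sketch (data invented for illustration): equal opportunity requires equal true-positive rates across groups, so its violation is the TPR gap.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, s):
    """Equal opportunity gap: |P(Y_hat = 1 | Y = 1, S = 0)
    - P(Y_hat = 1 | Y = 1, S = 1)|, i.e., the TPR difference."""
    tpr = []
    for g in (0, 1):
        mask = (s == g) & (y_true == 1)     # positives in group g
        tpr.append(y_pred[mask].mean())     # fraction correctly predicted positive
    return abs(tpr[0] - tpr[1])

y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 1])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(equal_opportunity_gap(y_true, y_pred, s))  # prints 0.25
```

Group 0's positives are recovered at rate 3/4 and group 1's at rate 1/2, so the gap is 0.25; [Chzhen+NeurIPS 19] drives this gap to zero asymptotically while converging to the optimal classifier.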
97. Near Neighbor: Who is the Fairest of Them All? [Har-Peled+NeurIPS 19]
• https://arxiv.org/abs/1906.02640
• The fair nearest neighbor problem
• The problem of sampling uniformly among the points within radius r of a query
• Analysis of computational complexity, etc.
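The problem statement is easy to demonstrate by brute force (the point of [Har-Peled+NeurIPS 19] is to achieve it with efficient data structures instead; this naive sketch just defines the target behavior):

```python
import numpy as np

def fair_near_neighbor(points, q, r, rng):
    """Return the index of a point chosen uniformly at random among all
    points within distance r of query q, or None if there is none.
    Brute force: scan all distances, then sample uniformly."""
    dists = np.linalg.norm(points - q, axis=1)
    candidates = np.flatnonzero(dists <= r)
    if candidates.size == 0:
        return None
    return int(rng.choice(candidates))   # every in-range point equally likely

rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(100, 2))
idx = fair_near_neighbor(points, q=np.zeros(2), r=0.5, rng=rng)
print(idx)
```

"Fair" here means every in-range point has equal probability of being returned, unlike standard near-neighbor structures, which can systematically favor some points; the paper analyzes how to get this guarantee with sublinear query time.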