2. Uncertainty Estimation
Machine learning systems are expected to be deployed in society across a wide range of fields
Autonomous Driving [1]
Medical Diagnosis [2]
Errors made by machine learning in critical decision-making can lead to severe damage
→ research on estimating the uncertainty of a model's predictions
[1] Levinson et al., “Towards Fully Autonomous Driving: Systems and Algorithms”, IEEE, 2011
[2] Miotto et al., “Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records”, 2016
4. Challenges Facing DNNs
Out of Distribution / Adversarial Attacks
Models misclassify OOD and noise images with high confidence scores
(Nguyen et al., 2015 [3])
Adding a small perturbation to an input causes misclassification
(Goodfellow et al., 2015 [4])
[Figure: misclassified samples with score > 99.6% [3] and an adversarial example [4]]
[3] Nguyen et al., “Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images”, IEEE, 2015
[4] Goodfellow et al., “Explaining and Harnessing Adversarial Examples”, ICLR, 2015
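The perturbation attack of [4] is the Fast Gradient Sign Method (FGSM): step the input in the sign of the loss gradient. A minimal sketch on a toy logistic classifier, where the input gradient can be written in closed form (the function names and the tiny linear model are illustrative, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_linear(x, w, b, eps):
    """FGSM for a toy logistic classifier p = sigmoid(w.x + b) with true
    label 1: the input gradient of the cross-entropy loss is (p - 1) * w,
    and the attack adds eps times its elementwise sign to the input."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - 1.0) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else -1.0 if g < 0 else 0.0
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Clean input is classified correctly (p > 0.5); after a small
# perturbation the same classifier flips to the wrong class.
x, w, b = [1.0, 1.0], [2.0, -1.0], 0.0
x_adv = fgsm_linear(x, w, b, eps=0.6)
```

For deep networks the gradient comes from backpropagation instead of this closed form, but the attack is the same single signed step.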
5. Challenges Facing DNNs
Overconfidence [Guo+, 2017] [5]
The model outputs a high score even when its prediction is wrong:
a phenomenon in which the model's output posterior probability exceeds its accuracy
[5] Guo et al., “On Calibration of Modern Neural Networks”, ICML, 2017
To deploy DNNs in society, e.g., in medical image diagnosis and autonomous driving,
it is essential to resolve these issues
and to quantify the uncertainty of each prediction
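The gap between confidence and accuracy described in [5] is commonly measured with Expected Calibration Error (ECE): bin predictions by confidence and average the per-bin |accuracy − confidence|, weighted by bin size. A minimal pure-Python sketch (the equal-width binning and function name follow common practice, not a specific implementation):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: partition predictions into n_bins equal-width confidence bins,
    then sum (bin_size / N) * |bin_accuracy - bin_avg_confidence|."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece

# An overconfident toy model: ~0.95 confidence but only 60% accuracy
confs = [0.95] * 10
correct = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
ece = expected_calibration_error(confs, correct)  # ≈ 0.35
```

A perfectly calibrated model would have ECE near 0; here the 0.35 gap is exactly the overconfidence the slide describes.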
14. Papers Introduced
① On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks
② Confidence-Aware Learning for Deep Neural Networks
① shows that Mixup data augmentation helps mitigate overconfidence
② proposes a Correctness Ranking Loss that accounts for tendencies observed during DNN training
Both papers address overconfidence arising during the training process
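Paper ① builds on Mixup (Zhang et al., 2018), which trains on convex combinations of example pairs. A minimal pure-Python sketch on feature vectors with one-hot labels (the function name and toy inputs are illustrative):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup: draw lam ~ Beta(alpha, alpha) and return the convex
    combination of two inputs and of their one-hot labels. The mixed
    label is soft, which paper ① argues counteracts overconfidence."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y1, y2)]
    return x, y

random.seed(0)
x, y = mixup([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
# y is a soft label [lam, 1 - lam]; its entries always sum to 1
```

Because the target is rarely a hard 0/1 vector, the network is never pushed toward saturated (overconfident) softmax outputs.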
16. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks
NeurIPS 2019
Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak
50. References
[1] Levinson et al., “Towards Fully Autonomous Driving: Systems and Algorithms”, IEEE, 2011
[2] Miotto et al., “Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records”, 2016
[3] Nguyen et al., “Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images”, IEEE, 2015
[4] Goodfellow et al., “Explaining and Harnessing Adversarial Examples”, ICLR, 2015
[5] Guo et al., “On Calibration of Modern Neural Networks”, ICML, 2017
[6] Gal & Ghahramani, “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning”, ICML, 2016
[7] Lakshminarayanan et al., “Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles”, NIPS, 2017
[8] Chapelle et al., “Vicinal Risk Minimization”, NeurIPS, 2001
[9] Szegedy et al., “Rethinking the Inception Architecture for Computer Vision”, IEEE, 2016
[10] Pereyra et al., “Regularizing Neural Networks by Penalizing Confident Output Distributions”, ICLR, 2017
[11] Verma et al., “Manifold Mixup: Better Representations by Interpolating Hidden States”, ICML, 2019
[12] Toneva et al., “An Empirical Study of Example Forgetting during Deep Neural Network Learning”, ICLR, 2019
[13] Geifman et al., “Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers”, ICLR, 2019
[14] Kendall et al., “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?”, NIPS, 2017
[15] Geifman et al., “Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers”, ICLR, 2019
[16] Liang et al., “Verified Uncertainty Calibration”, NeurIPS, 2019
[17] Lee et al., “A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks”, NeurIPS, 2018
[18] Sener & Savarese, “Active Learning for Convolutional Neural Networks: A Core-Set Approach”, ICLR, 2018