
This session was recorded in NYC on October 22, 2019, and can be viewed here: https://youtu.be/YCMFz955ajo
Reducing AI Bias Using Truncated Statistics
An emergent threat to the practical use of machine learning is the presence of bias in the data used to train models. Biased training data can result in models that make incorrect or disproportionately correct decisions, or that reinforce the injustices reflected in their training data. For example, recent works have shown that semantics derived automatically from text corpora contain human biases, and have found that the accuracy of face and gender recognition systems is systematically lower for people of color and women.

While the root causes of AI bias are difficult to pin down, a common cause is the violation of the pervasive assumption that the data used to train models are unbiased samples of an underlying "test distribution," which represents the conditions that the trained model will encounter in the future. Overcoming the bias introduced by the discrepancy between train and test distributions has been the focus of a long line of research in truncated statistics.

We provide computationally and statistically efficient algorithms for truncated density estimation and truncated linear, logistic, and probit regression in high dimensions, through a general, practical framework based on Stochastic Gradient Descent. We illustrate the efficacy of our framework through several experiments.
Bio: David Eisenbud served as Director of MSRI from 1997 to 2007, and began a new term in 2013. He received his PhD in mathematics in 1970 from the University of Chicago under Saunders Mac Lane and Chris Robson, and was on the faculty at Brandeis University before coming to Berkeley, where he became Professor of Mathematics in 1997. He served from 2009 to 2011 as Director for Mathematics and the Physical Sciences at the Simons Foundation, and is currently on the Board of Directors of the Foundation. He has been a visiting professor at Harvard, Bonn, and Paris. Eisenbud's mathematical interests range widely over commutative and noncommutative algebra, algebraic geometry, topology, and computer methods.
Eisenbud is Chair of the Editorial Board of the Algebra and Number Theory journal, which he helped found in 2006, and serves on the Board of the Journal of Software for Algebra and Geometry, as well as Springer-Verlag's book series Algorithms and Computation in Mathematics.
Eisenbud was President of the American Mathematical Society from 2003 to 2005. He is a Director of Math for America, a foundation devoted to improving mathematics teaching. He has been a member of the Board on Mathematical Sciences and Their Applications of the National Research Council, and is a member of the U.S. National Committee of the International Mathematical Union. In 2006, Eisenbud was elected a Fellow of the American Academy of Arts and Sciences.