2. What Scares Me About AI
• Algorithms are often implemented without ways to address
mistakes.
• AI makes it easier to not feel responsible.
• AI encodes & magnifies bias.
• Optimizing metrics above all else leads to negative outcomes.
• There is no accountability for big tech companies.
@math_rachel
5. Algorithms are used differently from human decision-makers:
• Algorithms are more likely to be implemented with no appeals
process in place.
• Algorithms are often used at scale.
• Algorithmic systems are cheap.
• People are more likely to assume algorithms are objective or error-free (even when they’re given the option of a human override).
@math_rachel
The privileged are processed by people; the poor are processed by
algorithms. (Cathy O’Neil, Weapons of Math Destruction)
10. Bureaucracy has often been used to shift or evade responsibility (whom do you hold responsible in a complex system?).
Today’s algorithmic systems are extending that bureaucracy.
24. Early cars:
• sharp metal knobs on the dashboard that could lodge in people’s skulls in a crash
• non-collapsible steering columns that would frequently impale drivers
• a belief that cars were dangerous only because of the people driving them
25. What Scares Me About AI
• Algorithms are implemented without ways to address mistakes.
• AI makes it easier to not feel responsible.
• AI encodes & magnifies bias.
• Optimizing metrics above all else leads to negative outcomes.
• There is no accountability for big tech companies.
@math_rachel
26. How We Can Do Better
• Make sure there is a meaningful, human appeals process.
Plan for how to catch and address mistakes in advance.
• Take responsibility, even when our work is just one part of
the system.
• Be on the lookout for bias. Create datasheets for data sets.
• Choose not to just optimize metrics.
• Push for standards and regulations for the tech industry.
@math_rachel
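The “create datasheets for data sets” recommendation can be made concrete with a minimal sketch. The section headings below follow Gebru et al.’s “Datasheets for Datasets” (cited in the resources slide); the field contents are hypothetical examples, not from any real dataset.

```python
from dataclasses import dataclass


@dataclass
class Datasheet:
    """A minimal datasheet for a dataset, using the section headings
    proposed in Gebru et al., "Datasheets for Datasets"."""
    motivation: str          # Why was the dataset created? Who funded it?
    composition: str         # What do instances represent? What is missing?
    collection_process: str  # How was the data collected and sampled?
    preprocessing: str       # What cleaning/labeling was done?
    uses: str                # Intended uses, and uses it is NOT suited for
    distribution: str        # How is it shared, and under what license?
    maintenance: str         # Who maintains it? How are errata handled?


# Hypothetical example entry:
sheet = Datasheet(
    motivation="Collected to study task X; funded by lab Y.",
    composition="10k labeled images; known coverage gaps for group Z.",
    collection_process="Scraped from public web sources in 2018.",
    preprocessing="Deduplicated; labels from crowdworkers.",
    uses="Benchmarking only; not suitable for deployment decisions.",
    distribution="Released under CC BY 4.0.",
    maintenance="Maintained by the lab; issues tracked publicly.",
)
print(sheet.uses)
```

Writing these answers down before release forces the bias and misuse questions from the slide above to be asked while they can still change the dataset.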
27. Resources
• Zeynep Tufekci: “How social media took us from Tahrir Square to
Donald Trump”
• Renee DiResta: “Up next: a better recommendation system”
• Timnit Gebru: “Datasheets for Datasets”
• Latanya Sweeney: “Saving Humanity”
• Arvind Narayanan: “21 Definitions of Fairness”
• Kate Crawford: “Politics of AI”
• danah boyd: “How an algorithmic world can be undermined”
• Joy Buolamwini: gendershades.org
Because you understand data and machine learning, you are well-positioned to see what could go wrong.
You have a moral responsibility to ask questions and to speak up.
You can’t regulate an individual technology; you need to look at the whole ecosystem.
Marzuki Darusman, chairman of the UN Independent International Fact-Finding Mission on Myanmar, told reporters that social media had played a "determining role" in Myanmar.
"It has ... substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public. Hate speech is certainly, of course, a part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media," he said.
In-person course at the USF Data Institute; also a free MOOC.
Teaches coders deep learning with 70 hours of study and no math prerequisites.
Target: students outside elite institutions, with unusual interests, few resources, and small data, working on real problems that they care about.
It worked!