This presentation discusses bias in artificial intelligence. AI systems inherit human biases from the data used to train them: word embeddings and machine translation tools often reflect common stereotypes, such as associating nurses with women and doctors with men. Bias can be introduced at every stage of building an AI system, from data collection and annotation to model training. Countering it requires raising awareness of bias, promoting inclusion and diversity, and ensuring explainability and accountability in AI.
Bias in Artificial Intelligence
1. PAGE 1 | GRACE HOPPER CELEBRATION FOR WOMEN IN COMPUTING 2017
PRESENTED BY THE ANITA BORG INSTITUTE AND THE ASSOCIATION FOR COMPUTING MACHINERY #GHC17
AI581: Presentations: AI for Social Good
Bias In Artificial Intelligence
Neelima Kumar | @Neelima_jadhav
HUMAN BIAS
Picture a Nurse
Is AI Biased?
Machine Learning: Learn from Data
AI impacts lives
Transportation
Speech to Text
Banking
Recruitment
Advertising
Predictive Policing
Health and Medicine
Word Embeddings
"You shall know a word by the company it keeps."
— J.R. Firth, 1957
Associations Generated by Word2Vec
Man : Boy :: Woman : x (x = Girl)
Stereotypes in word embeddings
Father : Doctor :: Mother : Nurse
Man : Programmer :: Woman : Homemaker
He : Realist :: She : Feminist
She : Pregnancy :: He : Kidney Stone
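The analogies above come from vector arithmetic over word embeddings: solving "a is to b as c is to ?" by computing b − a + c and finding the nearest word vector. A minimal sketch of the mechanism, using tiny hand-made vectors (the four dimensions and all the numbers here are invented for illustration; real word2vec vectors have hundreds of dimensions learned from a corpus, and the gendered skew in the "doctor"/"nurse" toy vectors deliberately mimics the bias such corpora encode):

```python
import numpy as np

# Toy 4-dimensional "embeddings" invented for illustration. The gender
# skew baked into "doctor" and "nurse" mimics what corpus-trained
# embeddings like word2vec actually learn.
vectors = {
    "man":    np.array([1.0, 0.1, 0.0, 0.2]),
    "woman":  np.array([0.0, 1.0, 0.0, 0.2]),
    "boy":    np.array([1.0, 0.1, 1.0, 0.1]),
    "girl":   np.array([0.0, 1.0, 1.0, 0.1]),
    "doctor": np.array([0.9, 0.2, 0.0, 0.9]),
    "nurse":  np.array([0.1, 0.9, 0.0, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Solve a : b :: c : ? via the vector offset b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("man", "boy", "woman"))     # -> girl
print(analogy("man", "doctor", "woman"))  # -> nurse (the stereotype surfaces)
```

The second query shows the point of the slide: the same arithmetic that correctly recovers "girl" also returns the stereotyped "nurse", because the bias lives in the vectors themselves.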
Stereotypes in Google Translate
Cultural Bias
Racial bias
Class Discrimination (Who Uses AI Matters)
How is bias introduced in AI?
Training data is collected and annotated → Model is trained → Output
— Margaret Mitchell, 2017
How is bias introduced in AI?
Training data is collected and annotated (Bias) → Model is trained (Bias) → Output (Bias)
Biased data created from the process becomes new training data.
— Margaret Mitchell, 2017
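The feedback loop in this diagram, where biased outputs are fed back as new training data, can be simulated in a few lines. This is a hypothetical toy model (all the numbers are assumptions for illustration): the "model" predicts the majority label of its training data and slightly over-predicts the majority class, a common effect of optimizing accuracy on imbalanced data, and its predictions become the next generation's training set:

```python
import random

random.seed(0)

# Toy "model": predicts labels at a rate slightly skewed toward the
# majority class of its training data (the 1.1 / 0.9 factors are
# assumptions chosen for illustration).
def train_and_predict(labels, n_outputs):
    p_female = labels.count("female") / len(labels)
    p_pred = min(1.0, p_female * 1.1) if p_female > 0.5 else p_female * 0.9
    return ["female" if random.random() < p_pred else "male"
            for _ in range(n_outputs)]

data = ["female"] * 70 + ["male"] * 30   # 70% skew in the seed data
for generation in range(5):
    data = train_and_predict(data, 1000)  # outputs become new training data
    share = data.count("female") / len(data)
    print(f"generation {generation}: {share:.0%} predicted 'female'")
```

A modest 70% skew in the seed data compounds toward near-total skew within a few generations, which is the amplification the slide warns about.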
Hard things are hard
• Hard to get Clean Data
• Decisions not clearly understood
• Lack of Diversity
• Impact on Accuracy
Awareness and Inclusion
• Awareness of possible biases
• Design for inclusion and diversity
• Work with communities affected most
• More women and minority developers
Explainability and Accountability
• Explanation of individual decisions
• Characterize strengths & weaknesses
• Predict future behavior
• Transparency of Data used for training
• Record decisions so that they can be audited
• Validation and Testing
FEEDBACK? RATE AND REVIEW THE SESSION ON OUR MOBILE APP
Download the GHC 17 app at http://bit.ly/ghc17app or search GHC 2017 in the app store
Thank you
Speaker notes
Good afternoon everybody,
The topic of my talk today is "Bias in AI".
Let's begin with a quick exercise. Close your eyes and picture a nurse.
Did anyone picture someone like this? And how about this?
No one?
We may not even know why, but each one of us picked one image over the other.
We are all affected by our unconscious bias.
And while our prejudices vary, we are all just the same in having them.
What about AI? Is artificial intelligence biased?
People tend to think of AI systems as mathematical models that are rational and immune to any biases.
The AI that I am referring to here is the field of machine learning, natural language processing, neural nets and beyond.
These techniques learn about the world based on the huge amounts of data that they are trained on.
The output of a system depends on its input, and if the input is biased, so will the output be. People tend to forget this, the thinking being that the vast amount of data could overwhelm any human biases; on the contrary, AI systems will generate output with all its skews and biases intact.
This has the potential to cause harm to real people in the real world. I am not worried about the term, but about the fact that AI is used so much in our daily lives, affecting lives and livelihoods.
AI is transforming the transportation industry. It's in our homes.
Banking and financial institutions are using it to decide who gets credit and how much loan is offered.
It's used by HR to decide whom to hire or fire.
It's used by advertising companies to decide what ads and recommendations to show you.
It's used by the justice department to determine who goes to jail and for how long.
It's used in the health industry to determine what medications you should take and when someone should be hospitalized.
AI is affecting our core…
To set the stage, let me walk you through a few examples.
Cultural biases are observed even in a simple Google search. I was completely shocked to see the results of a Google search done for an everyday query. A woman searching for professional hairstyles got the results on the left-hand side, but a search for unprofessional hairstyles for women came back with the results on the right-hand side.
Do you notice anything strange in this picture? This picture is an output of the Google Photos app.
This is Joy, a student at MIT studying computer vision. The face recognition software she worked on could recognize her face better when she wore a white mask. She gave a great TED talk explaining how she is fighting algorithmic bias. She explains why who codes matters, and how and what they code.
Bias can be introduced based on how the data is collected and who uses the system.
The city of Boston used AI technology to predict where potholes are more likely to occur.
They analyzed data collected from the Street Bump project, an app that allowed users to report potholes.
Surprisingly, the predictions showed significantly more potholes in upper-middle-income neighborhoods. Yet a closer look at the data revealed a different picture: the streets in those neighborhoods didn't really have more potholes; the residents just reported them more often, due to their more frequent use of smartphones.
When AI systems only have a portion of the information needed to make correct assumptions, bias is implicitly added to the results.
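The Street Bump effect is a sampling bias, and it can be made concrete with a tiny simulation. All the numbers below are hypothetical and chosen only to illustrate the mechanism: two neighborhoods have the same true number of potholes, but different reporting rates because of different smartphone adoption:

```python
import random

random.seed(1)

# Hypothetical numbers for illustration: two neighborhoods with the SAME
# number of real potholes, but different smartphone (reporting) rates.
neighborhoods = {
    "higher-income": {"true_potholes": 100, "report_rate": 0.8},
    "lower-income":  {"true_potholes": 100, "report_rate": 0.3},
}

# Each pothole only enters the dataset if someone with the app reports it.
reported = {
    name: sum(random.random() < n["report_rate"]
              for _ in range(n["true_potholes"]))
    for name, n in neighborhoods.items()
}

for name, count in reported.items():
    print(f"{name}: 100 real potholes, {count} reported")
```

A model trained only on the reports would conclude that the higher-income neighborhood has far more potholes, even though the streets are identical; the skew is entirely in who generates the data.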
This is a simple picture of a simple machine learning model. Data is collected, annotated, and fed as training data to the model. Once the model is trained, it can make predictions on any new input data it receives.
Bias is introduced at all stages of this pipeline.
The data used for training can be explicitly biased based on what it represents and whom it omits.
The process of collection and annotation can result in sampling errors, reporting bias, selection bias, confirmation bias and so on.
Implicit bias can be introduced because the model was not developed by a diverse community of developers, and it can be further propagated into how the output is predicted and used.
Biased data created from this process can become new training data and further amplify its effects.
So you can see that an AI system is capable of amplifying human biases.
What can we do to address this issue?
Fixing the biases in AI is a hard problem to solve because:
It's hard to get clean data that is free of any human bias. An AI system that learns that there are indeed more female nurses than male nurses will always predict a nurse to be female.
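The "always predicts female" behavior follows directly from accuracy maximization on skewed data. A minimal sketch, with hypothetical counts invented for illustration: a model that simply outputs the majority class of its historical records is the most accurate constant predictor, and it erases the minority entirely:

```python
# Hypothetical counts for illustration: historical records in which most
# nurses are women. A model that maximizes accuracy on this data learns
# to always predict the majority class.
history = {"nurse": {"female": 90, "male": 10}}

def predict_gender(occupation):
    counts = history[occupation]
    return max(counts, key=counts.get)  # always the majority class

print(predict_gender("nurse"))  # -> female, for every single nurse
```

This predictor is 90% accurate on the historical data, yet it is wrong for every male nurse, which is why "the data reflects reality" is not a defense.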
The decisions made by AI machine learning and deep learning systems are not clearly understood. The University of Washington developed a system to distinguish huskies from wolves. They got about 90% accuracy, but on further analyzing the system they found that the model had learned to distinguish the animals based on the snow surrounding the wolves, not their individual characteristics.
There is a lack of diversity in the AI community, causing biases to go undetected. There are very few women and people of color. If someone like Joy were working in the photo applications group, she would have identified and caught the biases much earlier.
Correcting for biases might introduce new biases and impact the accuracy of the system. A predictive policing application could use family history along with criminal background. If we were to adjust the system for fairness by removing family history, it might have an impact on its prediction accuracy.
While it's hard to fix biases, we must take on these challenges.
1. First and foremost, we need to start creating awareness in the community: we as designers, developers, and users of AI systems should be aware of the possible biases and their potential to harm individuals and society.
2. We need to design for inclusion and diversity by providing access to the resources necessary for AI development, such as datasets, computing resources, education, and training.
3. We need to work with representatives of the minority communities who could be affected most, so that they can participate in the design of such systems.
4. We need to include opportunities for women to participate in the development of AI.
Our models need to be explainable and accountable.
Models must be capable of explaining the rationale behind an individual decision, as in the model developed to distinguish huskies from wolves, where snow, rather than individual animal features, was the reason the model separated the two.
We must understand the model's strengths and weaknesses, and be able to determine how it will behave in the future, to analyze who will be impacted most by any biases in the system.
We need to be transparent about how the training data was collected and annotated, to uncover any sampling errors or confirmation biases, just as observed by the city of Boston. They solved the problem by putting sensors underneath garbage trucks that could collect data.
Models, algorithms, and decisions must be recorded so that they can be audited in case any unfairness is suspected.
We should make available an API and any training data that allows third parties to query the algorithmic system and assess its responses.
Validation and testing: we should use rigorous testing methods to validate our models and document the results, looking for the existence of any bias. This involves running the model with trial data that changes input variables across various permutations and combinations, and observing the output for any biases.
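One concrete form of that trial-data testing is a counterfactual check: flip a sensitive attribute in the input and verify the output stays the same. A minimal sketch, where `score_resume` is a deliberately biased stand-in model (an assumption for illustration, not any real system) so that the test has something to catch:

```python
# Counterfactual bias test: swap gendered words and compare model scores.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him"}

def swap_gender(text):
    # Flip each gendered pronoun; leave all other words untouched.
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def score_resume(text):
    # Deliberately biased stand-in model that leaks gender into its score.
    return 0.9 if "he" in text.lower().split() else 0.6

original = "he managed a nursing team"
flipped = swap_gender(original)
print(score_resume(original), score_resume(flipped))  # unequal -> bias found
```

In a real validation suite the same idea extends to names, dialects, or any protected attribute: generate the permuted inputs automatically and fail the build whenever a swap changes the prediction.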
Women in AI and AI4ALL are a few organizations trying to tackle these challenges.
DARPA and Optimizing Mind are trying to make AI more explainable.
As the AI ecosystem is still taking shape, it's an opportunity for all of us women to play an active role in the development and positive applications of AI, to make the world a better place.