strategy+business
TECH & INNOVATION

What is fair when it comes to AI bias?
It’s not the algorithm behaving badly, but how we define fairness that determines an artificial intelligence system’s impact.

By Anand Rao and Ilana Golbin
April 12, 2019
Bias is often identified as one of the major risks associated with artificial intelligence (AI) systems. Recently reported cases of known bias in AI — racism in the criminal justice system, gender discrimination in hiring — are undeniably worrisome. The public discussion about bias in such scenarios often assigns blame to the algorithm itself. The algorithm, it is said, has made the wrong decision, to the detriment of a particular group. But this claim fails to take into account the human component: People perceive bias through the subjective lens of fairness.

Here it can be helpful to consider how we are defining both bias and fairness. Bias occurs when we discriminate against (or promote) a defined group, consciously or unconsciously, and it can creep into an AI system as a result of skewed data or an algorithm that does not account for skewed data. For example, an AI system that reviews job applicants by learning from a company’s historical data could end up discriminating against a particular gender or race if that group were underrepresented in the company’s hiring in the past.

Fairness, meanwhile, is a social construct. And in fact, when people judge an algorithm to be “biased,” they are often conflating bias and fairness: They are using a specific definition of fairness to pass judgment on the algorithm. There are at least 20 mathematical definitions of fairness, and when we choose one, we violate some aspect of the others. In other words, it is impossible for every decision to be fair to all parties.
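To make those competing definitions concrete, here is a minimal Python sketch, not drawn from the article, that computes two of the commonly cited definitions for a small set of hypothetical model decisions: demographic parity (do the groups get approved at the same rate?) and equal opportunity (do genuinely good risks get approved at the same rate in each group?). The records, group labels, and helper functions are all invented for illustration.

```python
# Illustrative sketch only: two of the many mathematical definitions of fairness.
# "group" stands in for any protected attribute; the records are invented.

records = [
    # (group, actually_good_risk, predicted_good_risk)
    ("A", True,  True), ("A", False, True),  ("A", True, False), ("A", True, True),
    ("B", True,  True), ("B", False, False), ("B", False, True), ("B", True, False),
]

def approval_rate(rows):
    """Share of applicants the model approves, regardless of their true risk."""
    return sum(pred for _, _, pred in rows) / len(rows)

def true_positive_rate(rows):
    """Share of genuinely good-risk applicants the model approves."""
    good = [row for row in rows if row[1]]
    return sum(pred for _, _, pred in good) / len(good)

by_group = {g: [row for row in records if row[0] == g] for g in ("A", "B")}

# Demographic parity: approval rates should match across groups.
dp_gap = abs(approval_rate(by_group["A"]) - approval_rate(by_group["B"]))

# Equal opportunity: true positive rates should match across groups.
eo_gap = abs(true_positive_rate(by_group["A"]) - true_positive_rate(by_group["B"]))

print(f"Demographic parity gap: {dp_gap:.2f}")
print(f"Equal opportunity gap:  {eo_gap:.2f}")
```

On real data the two gaps generally differ, which is one way to see why choosing a single definition necessarily trades off against others.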
No AI system can be universally fair or unbiased. But we can design these systems to meet specific fairness goals, thus mitigating some of the perceived unfairness and creating a more responsible system overall. Of course, responsible AI is a complex topic. It encompasses a number of factors beyond establishing processes and training teams to look for bias in the data and machine learning models. These factors include integrating risk mitigation and ethical concerns into AI algorithms and data sets from the start, along with considering other workforce issues and the greater good.

But there may be another factor holding companies back from establishing responsible AI: At most organizations, there is a gap between what the data scientists are building and what the company leaders want to achieve with an AI implementation. It is thus important for business leaders and data scientists to work together to select the right notion of fairness for the particular decision that is to be made and to design the algorithm that best meets this notion.

This approach is critical, because if the data scientists know what leaders’ fairness goals are, they can more effectively assess what models to build and what data to use. In every situation, this requires bringing together various business specialists with AI system architects and data scientists to help them develop or select the appropriate machine learning model. Thinking through this issue before you roll out new AI systems — both for optimal performance and to minimize bias — could produce business benefits and help you establish and maintain ethical standards for AI.
You’ll need to determine which definition of fairness you, as the business leaders, will focus on — and which attributes should be protected (for example, gender and race). For instance, take the issue of group fairness versus fairness to the individual. Let’s consider a credit card company that is implementing an algorithm using historical data to predict whether individuals applying for a certain credit offer are “good” or “bad” risks. In any scenario, leaders need to determine which protected variables to consider in achieving a fair outcome — and whose decision it is to determine who that group should be. In this example, the goal is to treat male and female credit card applicants fairly.

The AI developer must be aware of two potential outcomes: true positives and false positives. True positives are instances in which the model correctly predicted an applicant to be a good risk. False positives occur when bad-risk customers are assigned a good-risk score. People in the company’s risk management group will be concerned with minimizing false positives to limit their risk exposure, because wrong risk assignments directly translate to potential losses to the company. When they try to minimize these losses, however, they do not want to discriminate based on gender.
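As a rough illustration of what the risk team would look at, the sketch below (again illustrative, with invented data and a hypothetical gender attribute) tallies true positives, false positives, and the false positive rate separately for female and male applicants. A model can look acceptable in aggregate while one group’s false positive rate is markedly higher, which is exactly the kind of gap this analysis is meant to surface for discussion.

```python
from collections import Counter

# Invented credit decisions: (gender, actually_good_risk, approved_by_model).
decisions = [
    ("F", False, True), ("F", True, True),  ("F", False, False), ("F", True, True),
    ("M", False, True), ("M", False, True), ("M", True, True),   ("M", False, False),
]

def confusion_counts(rows):
    """Tally true/false positives and negatives for one group of applicants."""
    counts = Counter()
    for _, actually_good, approved in rows:
        if approved:
            counts["TP" if actually_good else "FP"] += 1
        else:
            counts["FN" if actually_good else "TN"] += 1
    return counts

for gender in ("F", "M"):
    group = confusion_counts(row for row in decisions if row[0] == gender)
    false_positive_rate = group["FP"] / (group["FP"] + group["TN"])  # bad risks wrongly approved
    print(f"{gender}: TP={group['TP']} FP={group['FP']} "
          f"false positive rate={false_positive_rate:.2f}")
```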
Another definition of fairness involves a balance between treatment and outcomes. Here we can look to the college admissions process. Fairness in treatment would require looking at every candidate with the same metrics in mind, regardless of situation or context (e.g., making race-blind admissions, or using only the SAT score and GPA). Fairness in outcome means adjusting decision making to accommodate situational and contextual characteristics, such as having slightly different requirements for students with disadvantaged backgrounds, based on the understanding that they may not have had the same access to education and opportunity as students from wealthy suburbs.
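The distinction can be sketched in code. Everything in the example below is invented for illustration (the score formula, the cutoff, the size of the adjustment, and the "disadvantaged" flag); the point is structural: fairness in treatment applies one identical rule to every candidate, while fairness in outcome shifts the requirement based on context.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    sat: int             # SAT score out of 1600
    gpa: float           # grade point average out of 4.0
    disadvantaged: bool  # contextual flag, e.g., limited access to educational opportunity

def score(a: Applicant) -> float:
    """A made-up composite of the two metrics the article mentions."""
    return 0.5 * (a.sat / 1600) + 0.5 * (a.gpa / 4.0)

def admit_equal_treatment(a: Applicant, cutoff: float = 0.75) -> bool:
    """Fairness in treatment: the same metrics and the same cutoff for everyone."""
    return score(a) >= cutoff

def admit_outcome_adjusted(a: Applicant, cutoff: float = 0.75, adjustment: float = 0.05) -> bool:
    """Fairness in outcome: the cutoff is relaxed for disadvantaged applicants."""
    if a.disadvantaged:
        cutoff -= adjustment
    return score(a) >= cutoff

candidate = Applicant(sat=1100, gpa=3.1, disadvantaged=True)
print(admit_equal_treatment(candidate))   # False: misses the uniform cutoff
print(admit_outcome_adjusted(candidate))  # True: clears the context-adjusted cutoff
```

Neither policy is inherently "more fair"; which one a leadership team chooses is exactly the kind of judgment the article argues must be made explicitly.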
In some cases, the goals of different business groups may appear to be at odds. In the credit card offer example, the marketing team may want to issue as many cards as possible to increase the company’s market share and boost the brand, while the risk team’s goal is to minimize potential losses incurred when customers who should not have been granted credit in the first place don’t pay their bills. Because there is no way to completely satisfy both teams, they have to strike a balance that everyone can live with and that ideally supports the organization’s ethical standards and business goals.

Because fairness is a social construct, it will take engagement and active discussion among teams to decide what constitutes fairness in any scenario. In every case, asking the following questions can help guide the conversation and keep it focused on common goals for the AI system:
• What decision is being made with the help of the algorithmic solution?
• What decision maker or functional business group will be using the algorithmic solution?
• What are the protected attributes to which we want to be fair?
• Who are the individuals and groups that will experience the ramifications of the treatment and outcome?
• Can we clearly communicate the steps we’ve taken in designing the system to treat those affected by it fairly?
• How will we explain the decision-making process to key stakeholders?
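Teams that run this conversation repeatedly often find it useful to record the answers alongside the model so the reasoning can be revisited later. The structure below is purely illustrative; the field names and example answers are assumptions, not a template from the article.

```python
from dataclasses import dataclass

@dataclass
class FairnessReview:
    """Illustrative record of the answers a team agrees on before deployment."""
    decision_supported: str
    decision_owner: str
    protected_attributes: list
    affected_groups: list
    fairness_definition: str   # e.g., "equal false positive rates across genders"
    communication_plan: str

review = FairnessReview(
    decision_supported="approve or decline credit card applications",
    decision_owner="consumer credit risk team",
    protected_attributes=["gender"],
    affected_groups=["applicants", "cardholders", "risk and marketing teams"],
    fairness_definition="equal false positive rates across genders",
    communication_plan="document cutoff choices and review them with compliance",
)
print(review)
```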
Any decision — especially those humans make — can be construed as unfair in some way to one or more people affected by it. But exploring these issues can bring you closer to achieving responsible AI that strikes a balance between business goals and ethical concerns, while cultivating the trust of customers and other stakeholders. Given that respondents to PwC’s 2019 AI Predictions Survey reported that their top challenge for 2019 was ensuring AI systems are trustworthy, it’s time to stop blaming a biased algorithm and start talking about what is fair. +

Anand Rao (anand.s.rao@pwc.com) is a principal with PwC US based in Boston. He is PwC’s global leader for artificial intelligence and innovation lead for the U.S. analytics practice. He holds a Ph.D. in artificial intelligence from the University of Sydney and was formerly chief research scientist at the Australian Artificial Intelligence Institute.

Ilana Golbin (ilana.a.golbin@pwc.com) is a manager with PwC US based in Los Angeles. As part of PwC’s Artificial Intelligence Accelerator, she specializes in developing solutions, frameworks, and offerings to support responsible AI.

Also contributing to this article was PwC US associate Amitoj Singh.
Articles published in strategy+business do not necessarily represent the views of the member firms of the
PwC network. Reviews and mentions of publications, products, or services do not constitute endorsement
or recommendation for purchase.
© 2019 PwC. All rights reserved. PwC refers to the PwC network and/or one or more of its member firms,
each of which is a separate legal entity. Please see www.pwc.com/structure for further details. Mentions
of Strategy& refer to the global team of practical strategists that is integrated within the PwC network of
firms. For more about Strategy&, see www.strategyand.pwc.com. No reproduction is permitted in whole
or part without written permission of PwC. “strategy+business” is a trademark of PwC.
strategy+business magazine is published by certain member firms of the PwC network. To subscribe, visit strategy-business.com or call 1-855-869-4862.

• strategy-business.com
• facebook.com/strategybusiness
• linkedin.com/company/strategy-business
• twitter.com/stratandbiz