Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
1. Algorithmic Bias: Challenges and Opportunities for AI in Healthcare
North Carolina Chapter of HIMSS
Greg S. Nelson, MMCi, CPHIMS
Vice President, Analytics & Strategy
Vidant Health
2. A Data-Driven Transformation
• How can Big Data Help?
• Data allows us to ask new questions
• Analytics allow us to identify opportunities
• More data is the basis for competitive advantage
• More data and agile methodologies enable us to find cost efficiencies
Source: https://blog.westerndigital.com/business-agility-big-data-mindset/
3. • Consider a revolutionary test for skin cancer that does not work on African Americans…
• What about a model that directs poorer patients to a skilled nursing facility rather than sending them home, as it does for wealthier patients?
• Imagine an algorithm that selects nursing candidates for a multi-specialty practice—but it only selects white females.
4. Algorithmic bias is what we experience when a machine-learning model produces systematic errors that result in unfair outcomes.
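The definition above can be made concrete with a small sketch. The example below is entirely fabricated for illustration (the groups, outcomes, and predictions are invented, not from any real model): it shows how a model with reasonable overall accuracy can still make *systematic* errors that fall almost entirely on one group — the signature of algorithmic bias that an aggregate accuracy number hides.

```python
# Hypothetical illustration: errors that are not random but concentrated
# in one group. All records below are made up for demonstration.

def error_rate(records):
    """Fraction of records where the model's prediction was wrong."""
    wrong = sum(1 for r in records if r["predicted"] != r["actual"])
    return wrong / len(records)

records = [
    {"group": "A", "actual": 1, "predicted": 1},
    {"group": "A", "actual": 0, "predicted": 0},
    {"group": "A", "actual": 1, "predicted": 1},
    {"group": "A", "actual": 0, "predicted": 0},
    {"group": "B", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 0, "predicted": 1},
    {"group": "B", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 0, "predicted": 0},
]

print("overall error:", error_rate(records))          # 0.375
for g in ("A", "B"):
    subset = [r for r in records if r["group"] == g]
    print(f"group {g} error:", error_rate(subset))     # A: 0.0, B: 0.75
```

Slicing error rates by group is usually the first and cheapest bias check: the overall rate of 0.375 looks tolerable, while the group-level view shows every error lands on group B.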
5. Advertising
High-income jobs are presented to men much more often than to women
Source: A. Datta, M. C. Tschantz, and A. Datta. Automated experiments on ad privacy settings. Proc. Privacy Enhancing Technologies, 2015(1):92–112, 2015.
6. Advertising
Ads for arrest records are significantly more likely to show up on searches for distinctively black names
Source: L. Sweeney. Discrimination in online ad delivery. Queue, 11(3):10, 2013.
7. AI Use Cases
Patterns or classes of AI problems:
• Algorithmic Medicine: clinical algorithms to drive medical practice
• AI Healthcare Advisors: diagnose and treat diseases
• Rev-Cycle/Efficiency: NLP + ML to identify revenue opportunities
• Diagnostic Interpretation: efficient and accurate readings of imaging studies
• Robotic Process Automation: automation of repetitive tasks
• Virtual Care: real-time remote monitoring and alerting
• Virtual Personal Health Assistants: augmented reality, cognitive computing, sentiment analysis, speech recognition, NLU/NLG
8. Discussion
• Fraud: healthcare fraud detection
• Care Pathways: adaptive treatment planning
• Patient Flow: patient flow management
• Augmented Intelligence: computer-assisted diagnosis
What are the (a) risks and (b) impact of getting these wrong?
9. Risk-Based Approach to Validation
• What can go wrong?
• What is the impact of getting it wrong?
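Those two questions map naturally onto a classic likelihood-times-impact triage. The sketch below is a minimal, hypothetical illustration of that idea (the use cases and scores are invented for demonstration, not an assessment of any real model): score each model on "what can go wrong" and "impact of getting it wrong", then spend validation effort on the highest-risk models first.

```python
# Illustrative risk-based triage for model validation.
# Scores are on a made-up 1-5 scale; use cases are hypothetical.

use_cases = [
    ("Sepsis early warning",       {"likelihood": 3, "impact": 5}),
    ("Revenue-cycle coding hints", {"likelihood": 2, "impact": 2}),
    ("Patient-flow forecasting",   {"likelihood": 3, "impact": 3}),
]

def risk_score(scores):
    """Simple risk = likelihood of error x impact of that error."""
    return scores["likelihood"] * scores["impact"]

# Validate the riskiest models first
ranked = sorted(use_cases, key=lambda uc: risk_score(uc[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: risk = {risk_score(scores)}")
# Sepsis early warning: risk = 15
# Patient-flow forecasting: risk = 9
# Revenue-cycle coding hints: risk = 4
```

The design point is proportionality: a clinical early-warning model with patient-safety impact earns a deeper validation protocol than a back-office efficiency model.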
11. Potential Sources of Bias
People, Process, Technology
• Model development processes
• Underlying data and/or blending techniques
• Biases, perspective, or experience of the author
• Application or operationalization of results
All of these processes are driven by human judgments...
14. Bias in EHR Data Collection?
• Incentives: data does not fully reflect the underlying diagnosis and treatment
• Data Collection Processes: data is shallower for segments of the population based on income (access)
• Affordability: personal, curated data is often biased toward those who can afford devices, apps, and tech
15. Fear and Uncertainty
• What happens if the model is biased?
• How can I trust a black box?
• How did you validate the model?
• What happens when the model is wrong?
17. 3 Questions to frame your thinking…
• How do we ensure that our models are not biased?
• How can we make sure that our models are explainable?
• How can we engender greater trust?
18. Four primary tenets to guide our work…
• Fairness: being responsible for social mores
• Privacy: ensure the protection of individual privacy
• Transparency: understanding what decisions are made and why…
• Trust: trust begins with transparency, verification, and accountability
19. Transparency
The goal is to understand the process by which an algorithmic system makes decisions, and we must ensure the model can be explained. Questions to ask:
• How much can we trust the data sources we use?
• Do we trust the libraries, services, and APIs that deliver algorithms and models?
• How can we demonstrate trustworthiness of the outcomes?
• Do we understand data transformations within pipelines?
• How does our solution conform to regulatory requirements and business constraints?
• What kind of explanation does this output require, if any?
• Have we considered alternative data sources for a more complete picture?
• Are there any implications due to incomplete data?
• Do we know what algorithms to use for what problem?
• Are there any cultural differences in consuming the outputs?
• Have we found adversarial examples to invalidate the model?
• Did we engage relevant experts to validate outputs?
• Are we clear about the meaning of the data?
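One widely used technique for answering "can the model be explained?" is permutation importance: scramble one input at a time and measure how much performance drops, which reveals what the model actually relies on. The sketch below is a toy, self-contained illustration (the model, features, and records are fabricated, and a simple deterministic reversal stands in for random permutation) rather than a production method.

```python
# Minimal permutation-importance sketch on a fabricated toy model.

def model(age, lab_value):
    """Toy classifier: flags risk purely on an elevated lab value."""
    return 1 if lab_value > 50 else 0

data = [
    {"age": 30, "lab_value": 70, "label": 1},
    {"age": 60, "lab_value": 20, "label": 0},
    {"age": 45, "lab_value": 80, "label": 1},
    {"age": 50, "lab_value": 10, "label": 0},
]

def accuracy(rows):
    hits = sum(1 for r in rows if model(r["age"], r["lab_value"]) == r["label"])
    return hits / len(rows)

baseline = accuracy(data)  # 1.0 on this toy data

importances = {}
for feature in ("age", "lab_value"):
    # Deterministic permutation (reversal) so the demo is reproducible;
    # real implementations shuffle randomly and average over repeats.
    values = [r[feature] for r in data][::-1]
    permuted = [dict(r, **{feature: v}) for r, v in zip(data, values)]
    importances[feature] = baseline - accuracy(permuted)

print(importances)
# {'age': 0.0, 'lab_value': 1.0} -- scrambling age changes nothing
# (the model ignores it); scrambling lab_value destroys accuracy.
```

Even this crude probe makes a black box less opaque: it tells a reviewer which inputs drive decisions, and therefore where data-quality or bias problems in those inputs would propagate.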
20. “The black-box problem also poses issues for physicians, who lack insight into what the AI is actually doing. It’s not that they’re afraid of being replaced; it’s more that they’re afraid of basing decisions on information they can’t see.”
Source: Modern Healthcare
https://www.modernhealthcare.com/indepth/artificial-intelligence-in-healthcare-makes-slow-impact/
22. Trust
Begins with transparency, verification, and accountability.
“… clinician involvement is important no matter how smart the machines get. There is a strong need for the engagement of medical experts to validate and oversee AI algorithms in healthcare.”
Dr. Wyatt Decker, CMIO, Mayo Clinic
23. Fairness
A socially responsible model is one that does not discriminate against classes of people that we would generally consider protected:
Age • Gender • Sexual Orientation • Race • Ethnicity
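A common first screen for discrimination against a protected class is the "four-fifths rule" from US employment-selection guidelines: a model's selection rate for any group should be at least 80% of the rate for the most-favored group. The sketch below applies that rule to fabricated outcomes (the groups and decisions are invented for illustration; this is one of many fairness metrics, not a complete audit).

```python
# Four-fifths (disparate impact) screen on hypothetical model decisions.

def selection_rate(decisions):
    """Fraction of people the model selected (1 = selected)."""
    return sum(decisions) / len(decisions)

# Fabricated binary decisions per group
outcomes = {
    "group_x": [1, 1, 1, 0, 1, 1, 0, 1],   # selection rate 0.75
    "group_y": [1, 0, 0, 0, 1, 0, 0, 0],   # selection rate 0.25
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "POTENTIAL DISPARATE IMPACT"
    print(f"{group}: rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
# group_x passes (ratio 1.00); group_y is flagged (ratio 0.33 < 0.80).
```

A flag here is a trigger for investigation, not a verdict: the next step is asking whether the gap reflects the underlying data, the labels, or the model itself (the sources of bias outlined earlier).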
27. Risk of inaction…
$1.65M: “… to demonstrate how artificial intelligence tools can be used to predict unplanned hospital and skilled nursing facility admissions and adverse events… in testing innovative payment and service delivery models”
Source: CMS.gov (March 28, 2019)
28. By 2022, the first U.S. medical malpractice case involving a medical decision made by an advanced AI algorithm will have been heard.
It will not be because an algorithm produced an incorrect diagnosis.
It will be due to the failure to use an algorithm that was proven to be more accurate and reliable than the human alone.
Source: Gartner D&A Summit, March 2019
30. Choice: analytic techniques, lifecycle processes, data sources, skills
31. Control: security & privacy, data & model governance, visibility, deployment
32. Legal, Regulatory, and Ethical Oversight
• The Algorithmic Accountability Act (2019): would require businesses to conduct an impact assessment that covers the risk associated with algorithms’ accuracy, fairness, bias, discrimination, privacy, and security
• Software as a Medical Device (SaMD): SaMD Pre-Specifications (SPS), Algorithm Change Protocol (ACP), and Good Machine Learning Practices (GMLP)
33. AI governance is the process of assigning and assuring organizational accountability, decision rights, risks, policies, and investment decisions for applying artificial intelligence.
Source: Gregory S. Nelson, June 2019, North Carolina Medical Journal
35. “We are not looking for robots to do work for us; we are looking to make better decisions by benefiting from machine learning and AI.”
Manu Tandon, CIO, Beth Israel Deaconess Medical Center
37. Analytics Product Validation
Questions to ask ourselves throughout the process:
• Product: are we building the right product?
• Process: are we building the product right?