“I don’t trust AI”:
the role of Explainability in
Responsible AI
Overview and Examples
31st March 2021
Erika Agostinelli
IBM Data Scientist – Data Science & AI Elite
Agenda
Overview (~15 min)
• Context: Responsible AI
• Considerations
• Personas: explanations for whom?
• Direct Interpretability vs Post-hoc explanations
• Global vs Local explanations
• The type of your data

Examples (~10 min)
Some Open-Source tools
• AIX360
• What If Tool
Examples
• Loan Application
Women in Data Science Bristol 2021 | Erika Agostinelli | The role of Explainability in Responsible AI
Responsible AI
“As AI advances, and humans and AI systems increasingly work together, it is essential that we trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts, science has a central role to play: developing and applying tools to wire AI systems for trust.”
https://www.research.ibm.com/artificial-intelligence/trusted-ai/
Fairness / Robustness / Explainability / Value Alignment / Transparency / Accountability
Personas
Explanation for whom?
👩🦰
🧓
🧑🦰
🧔
Group 1: AI system builders
Technical individuals (data scientists and developers) who build or deploy an AI system want to know if their system is working as expected, how to diagnose and improve it, and possibly how to gain insight from its decisions.
Group 2: End-user decision makers
People who use the recommendations of an AI system to make a decision (for example, physicians, loan officers, managers, judges, or social workers) want explanations that can build their trust and confidence in the system's recommendations and possibly provide them with additional insight to improve their future decisions and their understanding of the phenomenon.
Group 3: Regulatory bodies
Government agencies, charged with protecting the rights of their citizens, want to ensure that decisions are made in a safe and fair manner, and that society is not negatively impacted by those decisions (for example, by a financial crisis).
Group 4: End consumers
People impacted by the recommendations of an AI system (for example, patients, loan applicants, employees, arrested individuals, or at-risk children) want explanations that can help them understand whether they were treated fairly and what factor(s) could be changed to get a different result.
e.g. Data Scientist
“How can I improve the performance? Is
the model using the right data to predict the
result?”
e.g. Loan Officer
“How can I justify the predicted result? Would similar applicants have received a similar result?”
e.g. Bank Executives, Audit Agencies
“Does this model comply with the law? Is this model fair?”
e.g. Loan Applicants
“Why was my application rejected? What can I do to get a loan next time?”
Loan Application Example
Interpretability vs Explainability
Different approaches
Directly Interpretable Approach
Research that explains the inner workings of an existing or enhanced machine learning model directly, known as a directly interpretable approach, providing a precise description of how the model determined its decision.
Post-hoc Explanation Approach
Research, called post hoc interpretation, that probes
an existing model with input values similar to the
actual inputs to understand what factors were crucial
in the model’s decision.
We can see how the model “thinks”.
For example: a small decision tree
The approach is model-agnostic: we leverage the model's inputs and outputs to infer what is happening inside it.
By Dr. Cynthia Rudin
https://www.nature.com/articles/s42256-019-0048-x
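As a sketch of the directly interpretable case, a depth-limited decision tree trained on toy, hypothetical loan-style data (features and labels invented for illustration) can be printed in full, so its entire decision logic is visible:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy, hypothetical loan data: [salary (k), outstanding debt (k)]; 1 = approved
X = [[60, 5], [30, 20], [80, 2], [25, 15], [55, 8], [40, 30]]
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole model fits in a few printed lines: we can see how it "thinks"
print(export_text(tree, feature_names=["salary_k", "debt_k"]))
```

Because the tree is tiny, the printout itself is the explanation; no separate probing step is needed.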
Global vs Local
Model or Instance level approach
Global or Model-level Approach
An approach that describes the entire predictive model to the user is called a global or model-level approach: the user can understand how a prediction will be made for any input.
An example would be a simple decision tree:
If “salary > $50K” and “outstanding debt < $10K”
then mortgage approved
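A global rule like the one above can be written down directly; every input is decided by the same visible logic (thresholds taken from the slide's example):

```python
def mortgage_approved(salary: float, outstanding_debt: float) -> bool:
    # Model-level rule: the same two conditions decide every application
    return salary > 50_000 and outstanding_debt < 10_000

# Any applicant can trace exactly which condition passed or failed
print(mortgage_approved(60_000, 5_000))   # meets both conditions
print(mortgage_approved(45_000, 5_000))   # salary condition fails
```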
Local or Instance-level Approach
An approach that provides an explanation for a
particular example is called a local or instance-level
explanation.
An example would be an explanation of a credit rating for a particular applicant: it might provide the factors that led to that decision, but it will not describe the factors for any other applicant.
[Figure: scatter plots of datapoints contrasting a model-level (global) view with an instance-level (local) explanation]
Type of Data
How to visualize your explanations
Tabular Text Images
Different types of data require different types of visualization.
The choice of how to visualize your results is crucial for your persona: can your end user easily understand the results of your explanations?
Open-Source Tools – Example in Action
A non-exhaustive list
AI Explainability 360 (AIX360)
This toolkit is an open-source library developed by IBM
Research in support of interpretability and
explainability of datasets and machine learning models.
AI Explainability 360 is released as a Python package that includes a comprehensive set of algorithms covering different dimensions of explanation, along with proxy explainability metrics.
pip install aix360
https://aix360.mybluemix.net/
What If Tool
This toolkit is an interactive visual interface
developed by Google Research and designed to help
visualize datasets and better understand the output
of models.
pip install witwidget
https://pair-code.github.io/what-if-tool/
AIX360
Taxonomy and guidance
AIX360's algorithms span two dimensions: Local vs Global explanations, and Directly Interpretable models vs Post-hoc Explanations.
- One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019)
AIX360 Example
Loan Application – HELOC Dataset
Data Scientist — must ensure the model works appropriately before deployment → BRCG / GLRM
Loan Officer — needs to assess the model's prediction to make the final judgement → ProtoDash
Loan Applicant — wants to understand the reason for the application result → CEM
Notebook Available
AIX360 Example – Loan Application
Directly Interpretable Models for Global Understanding
Data Scientist
The data scientist would ideally like to understand the behaviour of the model as a whole, not just on specific instances (e.g. specific loan applicants). A global view of the model may uncover problems with overfitting and poor generalization to other geographies before deployment.
Boolean Rule Column Generation (BRCG)
An example of a Directly interpretable model, BRCG
yields a very simple set of rules with reasonable
accuracy.
Logistic Rule Regression (LogRR)
Part of the Generalised Linear Rule Models family, it can improve accuracy at the cost of a more complex, but still interpretable, model.
Paper: Boolean Decision Rules via Column Generation
Paper: Generalized Linear Rule Models
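The kind of output BRCG produces is a small disjunction of AND-rules. A hypothetical rule set of that shape (invented for illustration, not one actually learned from the HELOC data) might look like:

```python
def heloc_approved(app: dict) -> bool:
    """Hypothetical BRCG-style rule set: approve if ANY rule fires."""
    rule_1 = app["ExternalRiskEstimate"] > 75
    rule_2 = app["NumSatisfactoryTrades"] > 20 and app["AverageMInFile"] > 60
    return rule_1 or rule_2

# A rejected application fails every rule, and we can point to each failure
print(heloc_approved({"ExternalRiskEstimate": 60,
                      "NumSatisfactoryTrades": 5,
                      "AverageMInFile": 10}))
```

Rules of this form can be checked line by line by a data scientist before deployment, which is exactly the global understanding this slide is after.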
👩🦰
AIX360 Example – Loan Application
Using Similar Examples to Inform a Loan Decision
Loan Officer
Using similar examples may help the loan officer understand why an applicant's HELOC application was accepted or rejected, in the context of other similar applications.
ProtoDash
This method selects applications from the training set that are similar in different ways to the application we want to explain, which distinguishes it from traditional distance-based methods (Euclidean, cosine, etc.).
ProtoDash can therefore provide a much more well-rounded and comprehensive view of why the decision for the applicant may be justifiable.
Paper: Efficient Data Representation by Selecting Prototypes with Importance Weights
🧑🦰
…
AIX360 Example – Loan Application
Using Similar Examples to Inform a Loan Decision
Loan Applicant
The applicant would like to understand why they do not qualify for a line of credit and, if not, what changes to their application would qualify them.
Contrastive Explanation Method (CEM)
Contrastive explanations provide information to
applicants about what minimal changes to their
profile would have changed the decision of the AI
model from reject to accept or vice-versa
(pertinent negatives).
It can also provide information on the minimal set of changes that would still maintain the original decision (pertinent positives).
Paper: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
🧔
Pertinent Negative Example:
We observe that this loan application would have been accepted if:
- the consolidated risk marker score (i.e. ExternalRiskEstimate) increased from 65 to 81,
- the average months on file (i.e. AverageMInFile) increased to about 66, and
- the number of satisfactory trades (i.e. NumSatisfactoryTrades) increased to a little over 21.
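The intuition behind a pertinent negative can be sketched with a toy search (this is not the CEM algorithm, which solves a constrained optimisation; the scoring model, weights, and threshold below are invented for illustration): increase one feature until the decision flips from reject to accept, and report the size of that minimal change.

```python
def score(app: dict) -> float:
    # Invented linear scoring model, loosely inspired by the HELOC features
    return (0.5 * app["risk_estimate"]
            + 0.2 * app["months_on_file"]
            + 1.0 * app["satisfactory_trades"])

def minimal_increase_to_accept(app, feature, step=1, threshold=70.0, max_steps=1000):
    """Greedily grow one feature until score >= threshold; return the increase."""
    changed = dict(app)  # leave the original application untouched
    for _ in range(max_steps):
        if score(changed) >= threshold:
            return changed[feature] - app[feature]
        changed[feature] += step
    return None  # no flip found within the search budget

applicant = {"risk_estimate": 65, "months_on_file": 40, "satisfactory_trades": 15}
print(minimal_increase_to_accept(applicant, "risk_estimate"))
```

The returned number is the applicant-facing answer: "your application would have been accepted if this feature were higher by this amount".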
What if Tool Example
US Census Model Comparison
https://colab.research.google.com/github/pair-code/what-if-tool/blob/master/WIT_Model_Comparison.ipynb#scrollTo=NUQVro76e38Q
Find a Counterfactual
In the What-If Tool, a counterfactual is the most similar datapoint with a different classification (for classification models), or with a difference in prediction greater than a specified threshold (for regression models).
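The counterfactual the tool surfaces can be approximated in a few lines of NumPy (a simplified sketch of the classification case, not the tool's implementation; the data and predictions below are made up):

```python
import numpy as np

def nearest_counterfactual(x, X, preds, pred_x):
    """Most similar datapoint (L2 distance) whose prediction differs from pred_x."""
    candidates = X[preds != pred_x]
    distances = np.linalg.norm(candidates - x, axis=1)
    return candidates[np.argmin(distances)]

X = np.array([[1.0, 1.0], [2.0, 2.0], [8.0, 8.0], [3.0, 2.5]])
preds = np.array([0, 0, 1, 1])          # model predictions for each row
cf = nearest_counterfactual(np.array([1.5, 1.5]), X, preds, pred_x=0)
print(cf)  # the closest datapoint classified differently
```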
Notebooks Available
Other Resources
Useful Links
In addition to the links in the slides:
Websites & Articles
- https://www.research.ibm.com/artificial-intelligence/trusted-ai/
- Understanding how LIME explains predictions
- Explain Any Models with the SHAP Values — Use the KernelExplainer
- Interpretability part 3: opening the black box with LIME and SHAP
- AI Explainability 360 Documentation
- What if tool Documentation
- The Mathematics of Decision Trees, Random Forest and Feature Importance in Scikit-learn and Spark
- An Introduction to ProtoDash — An Algorithm to Better Understand Datasets and Machine Learning Models
Papers
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences (2020)
- One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019)
- Explaining explainable AI (2019)
https://www.linkedin.com/in/erikaagostinelli/
www.erikaagostinelli.com
Thank you!
  • 2. Agenda 2 • Context: Responsible AI • Considerations • Personas: Explanations for whom? • Direct Interpretability vs Post-hoc explanations • Global vs Local explanations • Type of your data Some Open-Source tools • AIX360 • What if Tool • Examples • Loan Application Overview (~15min) Examples (~10min) Women in Data Science Bristol 2021 | Erika Agostinelli | The role of Explainability in Responsible AI
  • 3. Responsible AI 3 “As AI advances, and humans and AI systems increasingly work together, it is essential that we trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts, science has a central role to play: developing and applying tools to wire AI systems for trust.” https://www.research.ibm.com/artificial-intelligence/trusted-ai/ Fairness / Robustness / Explainability / Value Alignment / Transparency / Accountability
  • 4. Personas Explanation for whom? 4 Group 1: AI system builders Technical individuals (data scientists and developers) who build or deploy an AI system want to know if their system is working as expected, how to diagnose and improve it, and possibly to gain insight from its decisions. e.g. Data Scientist: “How can I improve the performance? Is the model using the right data to predict the result?” Group 2: End-user decision makers People who use the recommendations of an AI system to make a decision (for example, physicians, loan officers, managers, judges, or social workers) want explanations that can build their trust and confidence in the system’s recommendations and possibly provide them with additional insight to improve their future decisions and their understanding of the phenomenon. e.g. Loan Officer: “How can I justify the predicted result? Would similar applicants have received a similar result?” Group 3: Regulatory bodies Government agencies, charged with protecting the rights of their citizens, want to ensure decisions are made in a safe and fair manner and that society is not negatively impacted by those decisions, for example through a financial crisis. e.g. Bank Executives, Audit Agencies: “Does this model comply with the law? Is this model fair?” Group 4: End consumers People impacted by the recommendations of an AI system (for example, patients, loan applicants, employees, arrested individuals, or at-risk children) want explanations that help them understand whether they were treated fairly and what factor(s) could be changed to get a different result. e.g. Loan Applicant: “Why was my application rejected? What can I do to get a loan next time?” Loan Application Example
  • 5. Interpretability vs Explainability Different approaches 5 Directly Interpretable Approach Research that explains the inner workings of an existing or enhanced machine learning model directly, known as a directly interpretable approach, providing a precise description of how the model determined its decision. We can see how the model “thinks”: for example, a small decision tree. Post-hoc Explanation Approach Research, called post-hoc interpretation, that probes an existing model with input values similar to the actual inputs to understand what factors were crucial in the model’s decision. The approach is model-agnostic: we leverage the model’s inputs and outputs to infer what is happening within it. By Dr. Cynthia Rudin https://www.nature.com/articles/s42256-019-0048-x
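The post-hoc idea above can be sketched in a few lines of Python: treat the model as a black box, nudge each input slightly, and watch how the output moves. The credit-scoring function below is an invented stand-in, not a model from the slides; the probe itself is a deliberately crude version of what tools like LIME do more rigorously.

```python
# A minimal sketch of a model-agnostic (post-hoc) probe: perturb each
# feature of a black-box model and measure how much the prediction moves.

def black_box_model(features):
    # Pretend this is an opaque credit model we cannot inspect.
    salary, debt = features
    score = 0.7 * (salary / 100_000) - 0.3 * (debt / 10_000)
    return max(0.0, min(1.0, score))

def sensitivity_probe(model, instance, delta=0.05):
    """Return, per feature, the prediction change after a small perturbation."""
    base = model(instance)
    impacts = []
    for i, value in enumerate(instance):
        perturbed = list(instance)
        perturbed[i] = value * (1 + delta)  # bump this feature by delta
        impacts.append(model(perturbed) - base)
    return impacts

# A positive impact means the feature pushes the score up, negative means down.
print(sensitivity_probe(black_box_model, [60_000, 8_000]))
```

Even this naive probe recovers the direction of each feature's influence without ever looking inside the model, which is the essence of the post-hoc approach.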
  • 6. Global vs Local Model or Instance level approach 6 Global or Model-level Approach An approach that describes the entire predictive model to the user, so that the user can understand how any input will be decided. An example would be a simple decision tree: If “salary > $50K” and “outstanding debt < $10K” then mortgage approved. Local or Instance-level Approach An approach that provides an explanation for a particular example. For instance, an explanation of a credit rating for a particular applicant might provide the factors that led to that decision, but it will not describe the factors for any other applicant.
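The slide's decision-tree rule is simple enough to write directly as code, which is exactly what makes it a global, model-level explanation: any input can be traced through the same two conditions at a glance.

```python
# The slide's global rule as code: the whole "model" is understandable
# at once, for every possible input, not just for one applicant.

def mortgage_approved(salary, outstanding_debt):
    """Directly interpretable, model-level decision rule from the slide."""
    return salary > 50_000 and outstanding_debt < 10_000

print(mortgage_approved(60_000, 8_000))   # True: both conditions hold
print(mortgage_approved(45_000, 2_000))   # False: salary too low
```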
  • 7. Type of Data How to visualize your explanations 7 Tabular Text Images Different types of data require different types of visualization. The choice of how to visualize your results will be crucial for your persona: can your end-user easily understand the results of your explanations?
  • 8. Open-Source Tools – Examples in Action (non-exhaustive list) 8 AI Explainability 360 (AIX360) This toolkit is an open-source library developed by IBM Research in support of interpretability and explainability of datasets and machine learning models. AI Explainability 360 is released as a Python package that includes a comprehensive set of algorithms covering different dimensions of explanations, along with proxy explainability metrics. pip install aix360 https://aix360.mybluemix.net/ What If Tool This toolkit is an interactive visual interface developed by Google Research, designed to help visualize datasets and better understand the output of models. pip install witwidget https://pair-code.github.io/what-if-tool/
  • 9. AIX360 Taxonomy and guidance 9 Local vs Global, Directly Interpretable vs Post-hoc Explanation. Source: One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019)
  • 10. AIX360 Example Loan Application – HELOC Dataset 10 Data Scientist Must ensure the model works appropriately before deployment (BRCG / GLRM). Loan Officer Needs to assess the model’s prediction to make the final judgement (ProtoDash). Loan Applicant Wants to understand the reason for the application result (CEM). Notebook Available
  • 11. AIX360 Example – Loan Application Directly Interpretable Models for Global Understanding 11 Data Scientist The data scientist would ideally like to understand the behaviour of the model as a whole, not just in specific instances (e.g. for specific loan applicants). A global view of the model may uncover problems such as overfitting and poor generalization to other geographies before deployment. Boolean Rule Column Generation (BRCG) An example of a directly interpretable model, BRCG yields a very simple set of rules with reasonable accuracy. Logistic Rule Regression (LogRR) Part of the Generalized Linear Rule Models, it can improve accuracy at the cost of a more complex but still interpretable model. Papers: Boolean Decision Rules via Column Generation; Generalized Linear Rule Models
  • 12. AIX360 Example – Loan Application Using Similar Examples to Inform a Loan Decision 12 Loan Officer Seeing similar examples may help the loan officer understand the decision to accept or reject an applicant’s HELOC application in the context of other similar applications. ProtoDash The method selects applications from the training set that are similar in different ways to the application we want to explain, which distinguishes it from traditional ‘distance’ methods (Euclidean, Cosine, etc.). ProtoDash is able to provide a much more well-rounded and comprehensive view of why the decision for the applicant may be justifiable. Paper: Efficient Data Representation by Selecting Prototypes with Importance Weights
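To make the "similar examples" idea concrete, here is a deliberately naive stand-in: a plain nearest-neighbour lookup over past applications. This is exactly the kind of single-distance method the slide contrasts ProtoDash against (ProtoDash instead learns importance weights, per the paper above); the feature tuples and records below are hypothetical, loosely echoing HELOC-style fields.

```python
# Naive "similar applications" lookup: rank past applications by Euclidean
# distance to the query applicant. A simple baseline, NOT ProtoDash itself.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_applications(query, training_set, k=2):
    """Return the k training applications closest to the query applicant."""
    ranked = sorted(training_set, key=lambda row: euclidean(query, row["features"]))
    return ranked[:k]

# Hypothetical rows: (risk estimate, months on file, satisfactory trades)
history = [
    {"id": "A", "features": (80, 70, 25), "decision": "accept"},
    {"id": "B", "features": (60, 30, 10), "decision": "reject"},
    {"id": "C", "features": (78, 66, 22), "decision": "accept"},
]

for proto in similar_applications((79, 68, 24), history):
    print(proto["id"], proto["decision"])  # the two closest accepted cases
```

A loan officer reading this output sees that the applicants most like the current one were accepted, which is the kind of contextual evidence ProtoDash provides in a richer, weighted form.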
  • 13. AIX360 Example – Loan Application Using Similar Examples to Inform a Loan Decision 13 Loan Applicant He would like to understand why he does not qualify for a line of credit and, if rejected, what changes to his application would qualify him. Contrastive Explanation Method (CEM) Contrastive explanations provide information to applicants about what minimal changes to their profile would have changed the decision of the AI model from reject to accept or vice versa (pertinent negatives). It can also provide information on the minimal set of changes that would still maintain the original decision (pertinent positives). Paper: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives Pertinent Negative Example: We observe that this loan application would have been accepted if the consolidated risk marker score (i.e. ExternalRiskEstimate) increased from 65 to 81, the loan application had been on file (i.e. AverageMInFile) for about 66 months, and the number of satisfactory trades (i.e. NumSatisfactoryTrades) increased to a little over 21.
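CEM finds pertinent negatives by solving an optimization over the model; as a toy illustration of the same idea, the sketch below computes, for a transparent threshold rule, the minimal increase each failing feature needs before the decision flips to accept. The thresholds and feature names are invented, chosen to echo the slide's pertinent-negative example (risk estimate 65 vs 81, 66 months on file, 21 satisfactory trades); they are not the real HELOC model.

```python
# Toy "pertinent negative": for a simple threshold rule, report the minimal
# per-feature increases that would turn a rejected applicant into an accept.
# Thresholds are hypothetical, echoing the slide's example values.

THRESHOLDS = {"risk_estimate": 81, "months_on_file": 66, "satisfactory_trades": 21}

def accepted(applicant):
    """Accept only when every feature clears its threshold."""
    return all(applicant[f] >= t for f, t in THRESHOLDS.items())

def pertinent_negative(applicant):
    """Minimal per-feature increases that would change reject to accept."""
    return {f: t - applicant[f]
            for f, t in THRESHOLDS.items() if applicant[f] < t}

applicant = {"risk_estimate": 65, "months_on_file": 40, "satisfactory_trades": 18}
print(accepted(applicant))           # False: rejected as-is
print(pertinent_negative(applicant)) # what would need to change, and by how much
```

The real CEM does this against an arbitrary learned model, with a penalty that keeps the suggested changes minimal, but the output has the same shape: "your application would have been accepted if these features had been higher by this much".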
  • 14. What if Tool Example US Census Model Comparison 14 https://colab.research.google.com/github/pair-code/what-if-tool/blob/master/WIT_Model_Comparison.ipynb#scrollTo=NUQVro76e38Q Find a Counterfactual In the What-If Tool, a counterfactual is the most similar datapoint with a different classification (for classification models), or with a difference in prediction greater than a specified threshold (for regression models). Notebooks Available
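The What-If Tool's counterfactual definition above can be sketched by hand for the classification case: among labelled datapoints, find the one most similar to the query that received a different classification. The dataset, features, and labels below are hypothetical; the real tool does this interactively over your actual data.

```python
# Hand-rolled version of the What-If Tool's classification counterfactual:
# the most similar datapoint whose label differs from the query's.

import math

def nearest_counterfactual(query, query_label, dataset):
    """Return the closest datapoint with a label different from query_label."""
    candidates = [row for row in dataset if row["label"] != query_label]
    return min(candidates, key=lambda row: math.dist(query, row["features"]))

dataset = [
    {"features": (0.2, 0.9), "label": "approve"},
    {"features": (0.3, 0.4), "label": "deny"},
    {"features": (0.8, 0.1), "label": "deny"},
]

cf = nearest_counterfactual((0.25, 0.5), "approve", dataset)
print(cf["features"])  # the closest "deny" point to our approved query
```

Comparing the query to its counterfactual shows an end user how little (or how much) would have to differ for the opposite outcome, which is why the tool surfaces it as a one-click feature.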
  • 15. Other Resources Useful Links 15 In addition to the links in the slides: Websites & Articles - https://www.research.ibm.com/artificial-intelligence/trusted-ai/ - Understanding how LIME explains predictions - Explain Any Models with the SHAP Values — Use the KernelExplainer - Interpretability part 3: opening the black box with LIME and SHAP - AI Explainability 360 Documentation - What If Tool Documentation - The Mathematics of Decision Trees, Random Forest and Feature Importance in Scikit-learn and Spark - An Introduction to ProtoDash — An Algorithm to Better Understand Datasets and Machine Learning Models Papers - Questioning the AI: Informing Design Practices for Explainable AI User Experiences (2020) - One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019) - Explaining Explainable AI (2019)
  • 16. https://www.linkedin.com/in/erikaagostinelli/ www.erikaagostinelli.com Thank you!