You can’t just ‘add AI’ to a project and expect it to work. It isn’t magic dust that can be sprinkled on a product. The key to building systems that are integrated into people’s lives is trust. If you don’t have the right amount of trust, you open the system up to disuse and misuse.
In this talk we walk through the building blocks of AI from a UX design perspective: what trust is; how trust is gained and, perhaps more importantly, lost, in UX/UI; how to team humans and machines effectively; and techniques you can use day-to-day to build trusted AI products.
8. PRODUCT SCHOOL TRUSTWORTHY AI PRODUCTS
How to Build Trustworthy
AI Products
chrisbutler@philosophie.is
@chrizbot
Chris Butler
Director of AI @ Philosophie NYC
18 years of product and BD
Microsoft, Waze, Horizon Ventures,
KAYAK, and started my own company
(failed)
chrisbutler@philosophie.is
@chrizbot
Questions we should be asking
● What is AI?
● What is trust?
● What does trust have to do with machines?
● How do machines build trust with humans?
● What is the right level of trust for machines?
● What techniques can we use to learn about trust problems/solutions?
Data science produces insights
Machine learning produces predictions
Artificial intelligence produces actions
- David Robinson, Chief Data Scientist at DataCamp
Data science produces insights about people
Machine learning produces predictions for people to use
Artificial intelligence produces actions to help people
- David Robinson, Chief Data Scientist at DataCamp
What is trust?
● Contract—written or unwritten
● Focuses on expectations for the future
● Based on past performance, accountability, and transparency
● Builds slowly
● Can be lost quickly
Common performance measures
• Accuracy - ratio of correct predictions to all predictions
• Precision - ratio of true positives to all predicted positives
• Recall - ratio of true positives to all actual positives
• F1 - harmonic mean of precision and recall
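These four measures all fall out of a binary confusion matrix. A minimal sketch in Python (the counts are made-up illustrative numbers, not from the talk):

```python
# Confusion-matrix counts for a binary classifier (illustrative numbers).
tp, fp = 80, 10   # true positives, false positives
fn, tn = 20, 90   # false negatives, true negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)          # correct / all predictions
precision = tp / (tp + fp)                          # true positives / predicted positives
recall = tp / (tp + fn)                             # true positives / actual positives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

Note the tension the talk relies on: a system can score high on accuracy while precision or recall is poor (e.g. with rare positive events), which is exactly the kind of gap that erodes user trust.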
“...responsibility is shifted
to other drivers on the
road, and these human
drivers become the moral
crumple zone...”
Elish, M. C., Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction
(We Robot 2016) (March 20, 2016). We Robot 2016 Working Paper.
“Accountability can
therefore be a powerful
tool for motivating better
practices, [and more]
trustworthy systems.”
Nissenbaum, H., Accountability in a computerized society. Sci Eng Ethics (1996) 2:25. https://doi.org/10.1007/BF02639315
Problem interviews
● What stories do you usually tell people to explain how your job works?
● Tell me about a time in the past when things didn’t go right. What did
you do?
● How do you explain your understanding to someone else? How do
you draw it?
● How do you cobble together solutions to get what you need done?
● How have people built trust when working with you?
Scenarios
● Correct operation
● Incorrect operation - false positives/negatives
● Both sides of the borderline of misuse/disuse
● Feedback from human to machine
● State communication from machine to human
Sample questions
● How do you feel about [action]?
● How do you think the system decided [action]?
● Do you trust these suggestions for what to do next?
● Was there enough information for you to [take action]?
● When would this fail?
● How much do you trust the system to make the right
decision in the future? Is it more or less than before?
● What would make you skeptical to use the system?
Parting thoughts...
● Using AI, like any other technology, involves tradeoffs… with some
nuance you should be aware of.
● Build for the right amount of trust: not too much, not too little.
● Trust is based on performance, accountability, and transparency.
● UX/UI is the basis for all of these things.