Therapist AI & ChatGPT: How to Use Legally & Ethically (McMenamin & Maheu, Telehealth.org, 14 Sept 2023)
2. Joseph P. McMenamin, MD, JD, FCLM
Joe McMenamin is a partner at Christian & Barton in
Richmond, Virginia. His practice concentrates on digital health
and on the application of AI in healthcare.
He is an Associate Professor of Legal Medicine at Virginia
Commonwealth University and Board-certified in Legal
Medicine.
3. Marlene M. Maheu, PhD
Marlene M. Maheu, PhD, has been a pioneer in telemental
health for three decades.
With five textbooks, dozens of book chapters, and journal
articles to her name, she is the Founder and CEO of the
Telebehavioral Health Institute (TBHI).
She is the CEO of the Coalition for Technology in Behavioral
Science (CTiBS), and the Founder of the Journal of
Technology in Behavioral Science.
6. Preventing
Interruptions
Maximize your learning by:
• Making a to-do list as we go.
• Turning on your camera & joining the
conversation throughout this
activity.
• Muting your phone.
• Asking family and friends to stay
away.
We will not be discussing all slides.
7. • Mr. McMenamin speaks neither for any legal client nor for Telehealth.org
• Is neither a technical expert nor an Intellectual Property lawyer.
• Offers information about the law, not legal advice.
• Labors under a dearth of legal authorities specific to AI.
Speaker Disclaimers
8. • Must treat some subjects in cursory fashion only.
• Presents theories of liability as illustrations, conceding nothing as to their
validity.
• Criticizes no person or entity, nor AI.
• In this presentation, neither creates nor seeks to create an attorney-client
relationship with any member of the audience.
Speaker Disclaimers
26. How AI has helped:
1. Personal Sensing (“Digital Phenotyping”)
Collecting and analyzing data from sensors
(smartphones, wearables, etc.) to identify behaviors,
thoughts, feelings, and traits.
2. Natural language processing
3. Chatbots
D’Alfonso, Curr Opin Psychol. 2020;36:112–117.
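The personal-sensing idea above can be sketched in a few lines. This is a toy illustration only, not any real digital-phenotyping pipeline: the sensor logs, feature names, and threshold are all invented for the example.

```python
# Minimal sketch of "digital phenotyping": deriving a simple behavioral
# feature from hypothetical smartphone sensor logs. Real pipelines use
# continuous streams from GPS, accelerometers, screen state, etc.
from statistics import mean

# Hypothetical daily logs: (day, screen_unlocks, minutes_outside_home)
logs = [
    ("Mon", 45, 120),
    ("Tue", 52, 95),
    ("Wed", 110, 10),  # sharp drop in mobility, spike in phone use
    ("Thu", 98, 5),
    ("Fri", 41, 130),
]

def mobility_drop(logs, threshold=0.5):
    """Flag days whose time outside home falls below `threshold`
    times the week's average: a crude proxy for social withdrawal."""
    avg = mean(minutes for _, _, minutes in logs)
    return [day for day, _, minutes in logs if minutes < threshold * avg]

flagged = mobility_drop(logs)
print(flagged)  # ['Wed', 'Thu']
```

Whether a dip like this reflects a clinically meaningful behavior change is exactly the kind of inference that requires validation against ground truth, which is the hard part of personal sensing.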
27. 1. Machine Learning
• Predict and classify suicidal thoughts, depression, and
schizophrenia with "high accuracy".
U. Cal and IBM,
https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/
2. Causation v. Correlation
• Better prognosis for pneumonia in asthma patients: models trained on
outcomes data learned that asthma correlated with better pneumonia
outcomes (because those patients received more aggressive care), so a
naive risk model would wrongly triage them as low-risk.
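A minimal sketch of the kind of supervised classifier behind "predict and classify" claims like those above, using a nearest-centroid rule. Every feature, label, and value here is invented for illustration; real systems use far richer data and models.

```python
# Toy nearest-centroid classifier over invented two-feature data.
# This illustrates the mechanics of classification, not a clinical tool.
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def nearest_centroid(x, centroids):
    """Return the label whose centroid is closest (squared Euclidean)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical training data: [sleep_hours, daily_messages_sent]
train = {
    "low_risk":  [[7.5, 40], [8.0, 55], [7.0, 35]],
    "high_risk": [[4.0, 5], [5.0, 8], [3.5, 2]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

print(nearest_centroid([4.5, 6], centroids))   # high_risk
print(nearest_centroid([7.8, 50], centroids))  # low_risk
```

Note that a classifier like this only learns correlations in its training data; it cannot tell whether a feature (like the asthma example above) predicts good outcomes because of biology or because of how those patients happened to be treated.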
28. 6. Hallucinations
• NEDA’s Tessa: Harmful diet advice to patients with
eating disorders.
7. Generalizability
• When training data do not resemble the data encountered in real-world use.
• Watson and chemo.
8. No compassion or empathy
9. No conceptual thinking
10. No common sense
35. Do We Need to License AI to Use it in
Healthcare?
36. Do We Need to License AI to
Use it in Healthcare?
• Practice of clinical psychology includes but is not limited to: ‘Diagnosis and
treatment of mental and emotional disorders’ which consists of the appropriate
diagnosis of mental disorders according to standards of the profession and the
ordering or providing of treatments according to need.
• Va. Code § 54.1-3600
• Other professions have similar statutes across the 50 states & territories.
37. Do We Need to License AI to
Use it in Healthcare?
• Definitions of medicine, psychology, nursing, etc.:
• Likely broad enough to encompass AI functions.
• An AI system is not human, but if it functions as a HC professional, some propose
licensure or some other regulatory mechanism.
38. Do We Need to License AI to
Use it in Healthcare?
If licensure is needed:
• In what jurisdiction(s)?
• Consider scope of practice.
40. What Does FDA Say About AI in Healthcare?
• Regulatory framework is not yet fully developed.
• Historically: a drug or device maker wishing to modify a product submits a
proposal and supporting data; FDA says yes or no.
• FDA recognizes AI's potential for drug development and the impediments that fusty
regulation could erect.
41. What Does the Food and Drug Administration (FDA) Say
About AI in Healthcare?
• Concerned with transparency (can it be explained? intellectual property) and
security and integrity of data generated; potential for amplifying errors or
biases.
• FDA urges creation of a risk management plan, and care in choice of training
data, testing, validation.
• Pre-determined change control plans.
63. What if AI Recommends Non-standard
Treatment?
• The progress problem: Arterial blood gas monitoring in premature newborns
circa 1990.
• Non-standard advice: Proceed with caution.
The tension between progress and tort law.
72. Does the Law Require the Patient’s
Informed Consent to Use of AI in Health
Care?
73. Does the Law Require the Patient’s
Informed Consent to Use of AI in Health
Care?
• Traditional:
“Every human being of adult years and sound mind has a right to
determine what shall be done with his own body”
Schloendorff v. NY Hospital, 105 N.E. 92 (N.Y. 1914) (Cardozo, J.)
• AI: What disclosures are required?
74. (cont’d)
• Explain how AI works?
What does 'informed' mean where no one knows how
black-box AI works?
• Whether the AI system was trained on a data set
representative of a particular patient population?
• Comparative predictive accuracy and error rates of AI system
across patient subgroups?
• Roles human caregivers and the AI system will play during
each part of a procedure?
75. (cont’d)
• Whether a medtech or pharma company influenced an
algorithm?
• Compare results with AI and human approaches?
What if there are no data?
• What if the patient doesn’t want to know?
• Provider’s financial interest in the AI used?
• Disclose AI recommendations the HCP disapproves of, or conflicts of interest (COIs)?
76. (cont’d)
• Pedicle screw litigation: Used off-label
At present, nearly all AI is used off-label.
• Investigative nature of the device's use?
Rights of subjects in clinical trials?
• Experimental procedures: “most frequent risks and hazards” will
remain unknown until the procedure becomes established.
88. Who is an Expert?
• Trial Court: Cardiologist not qualified to testify on weight loss drug combo that
proprietary software package recommended because doctor is not a
software expert.
Skounakis v. Sotillo A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018)
(on appeal, reversed)
89. Who is an Expert?
• MD who had performed many robotic surgeries not qualified on causation for
want of programming expertise.
Mracek v. Bryn Mawr Hospital, 363 F. App'x. 925, 926 (3d Cir. 2010) (ED
complicating robotic prostatectomy)
105. Will AI Put Me Out of Work?
• ChatGPT can outperform 1st and 2nd year medical students in
answering challenging clinical care exam questions.
• Law students: Similar.
• But: Probably not.
106. (cont’d)
• John Halamka: “Generative AI is not thought, it's not
sentience.”
• Most, if not all, countries are experiencing severe clinician
shortages.
Shortages are only predicted to get worse in the U.S. until at
least 2030.
107. (cont’d)
• AI-infused precision health tools might well be essential to
improving the efficiency of care.
• AI might help with burnout: easing the day-to-day weariness,
lethargy, and delay of reviewing patient charts.
• The day may come when the standard of care (SOC) requires use of AI.
114. Can AI Detect or Prevent Fraud?
• One large health insurer reported a savings of $1 billion
annually through AI-prevented fraud, waste, and abuse (FWA).
• Fed. Ct. App.: A company's use of AI for prior authorization and utilization
management services to Medicare Advantage (MA) and Medicaid managed care
plans is subject to qualitative review that may result in liability
for the AI-using entity.
U.S. ex rel. v. eviCore Healthcare MSI, LLC (2d Cir. 2022)
117. J. DOE 1 et al. v. GitHub, Inc. et al., Case
No. 4:22-cv-06823-JST (N.D. Cal. 2022):
• Plaintiffs: They and the class own copyrighted materials made available
publicly on GitHub.
• Plaintiffs: Representing the class, they assert 12 causes of action, including
violations of the Digital Millennium Copyright Act, the California
Consumer Privacy Act, and breach of contract.
118. Claim:
• Defendants' tools, OpenAI's Codex and GitHub's Copilot, generate
suggestions nearly identical to code scraped from public
GitHub repositories, without giving the attribution required
under the applicable license.
119. Defenses:
1. Standing. Did these Plaintiffs suffer injury?
2. Intent: Copilot, as a neutral technology, cannot satisfy
DMCA’s § 1202's intent and knowledge requirements.