This presentation reviews the state of the art with respect to the use of artificial intelligence in education, reflecting on the ethical aspects and implications with particular reference to distance education.
1. Artificial Intelligence in Education:
Ethical Futures
Show & TEL Ethics & Technology-Enhanced Learning
02 November 2020
DR. ROBERT FARROW
INSTITUTE OF EDUCATIONAL TECHNOLOGY
@philosopher1978
rob.farrow@open.ac.uk
2. 2
STRUCTURE
01 Artificial Intelligence (AI)
A review of the contemporary field
02 Opportunities and Risks
Understanding ethically significant aspects
03 The AI4People Ethical Framework
An overview
04 Ethical AI in Distance Education
What can we anticipate?
05 Reflections
Concluding remarks and discussion
4. 4
WHAT IS IT?
ARTIFICIAL INTELLIGENCE
• The mechanical simulation of human agency, intelligence, and perception
• The use of machines to perform tasks that have traditionally been performed by natural intelligence
• Involves a constellation of technologies, including machine learning; natural language processing; speech recognition
5. 5
THE VISION
ARTIFICIAL INTELLIGENCE
• Predicted to disrupt human society and productivity as a ‘4th Industrial Revolution’ (Schwab, 2016)
• Solutions to problems: repetitive tasks; managing risks; increasing affordability; innovation; accessibility; efficiencies; enhanced cognition
• Market expected to be worth $126 billion by 2025 (Statista, 2020)
CC-BY Emily Spratt
https://en.m.wikipedia.org/wiki/File:Alain_Passard_AI_Art.png
6. 6
“STRONG” & “WEAK” AI
ARTIFICIAL INTELLIGENCE
Searle (1980):
• AI hypothesis, strong form: an AI system can think and have a mind (in the
philosophical definition of the term);
• AI hypothesis, weak form: an AI system can only act like it thinks and has a
mind.
Strong / “General” AI
• A generalized intelligence which equals (or surpasses) that of human beings
• Capable of operating with minimal human oversight
• Communicates with humans in natural language
• Examples: ?
Weak / “Narrow” AI
• Software written for specific applications or tasks
• Examples: traffic management; spam filters; detection of banking fraud;
disease mapping; playing chess; facial recognition; virtual assistants
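To make the "narrow AI" category concrete, a spam filter of the kind listed above can be sketched in a few lines as a Naive Bayes classifier. This is a minimal illustration, not any production filter; the training messages are invented for the example.

```python
from collections import Counter
import math

# Toy "narrow AI": a Naive Bayes spam filter trained on invented examples.
spam = ["win cash prize now", "claim your free prize"]
ham = ["meeting agenda attached", "lunch tomorrow at noon"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(text, counts, total):
    # Laplace smoothing so unseen words don't zero out the score
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.split())

def is_spam(text):
    return (log_likelihood(text, spam_counts, spam_total) >
            log_likelihood(text, ham_counts, ham_total))

print(is_spam("free cash prize"))   # True
print(is_spam("agenda for lunch"))  # False
```

The point of the sketch is that the system performs one specific task by counting word statistics; nothing in it generalises beyond that task, which is exactly what distinguishes narrow from general AI.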
7. 7
TWO PHILOSOPHICAL ARGUMENTS AGAINST STRONG AI
ARTIFICIAL INTELLIGENCE
Turing, A. (1950). Computing
Machinery and Intelligence. Mind,
LIX(236), 433–460.
Searle, John (1980). Minds, Brains and
Programs. Behavioral and Brain Sciences,
3 (3): 417–457
8. 8
SHOULD AI BE RATIONAL, OR BE LIKE HUMANS?
ARTIFICIAL INTELLIGENCE: CONTRASTING VISIONS
Russell and Norvig (1995, 2002, 2009)
10. 10
OPPORTUNITIES AND RISKS
ARTIFICIAL INTELLIGENCE
Floridi et al. (2018) suggest that the question is not whether AI will have an
impact, but by whom, how, where, and when its positive or negative impact
will be felt.
They identify four chief opportunities for AI:
• who we can become (autonomous self-realisation);
• what we can do (human agency);
• what we can achieve (individual and societal capabilities); and
• how we can interact with each other and the world (societal cohesion).
Risks include:
• failing to realise the benefits of AI
• overuse of AI (accidental)
• misuse of AI (by design)
• disruption
• unfairness
11. 11
FOUR CORE OPPORTUNITIES OFFERED BY AI, FOUR CORRESPONDING RISKS, AND THE
OPPORTUNITY COST OF UNDERUSING AI
Floridi, L., Cowls, J., Beltrametti, M. et al. (2018). AI4People—An Ethical Framework for a Good AI Society:
Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707.
https://doi.org/10.1007/s11023-018-9482-5
ARTIFICIAL INTELLIGENCE
13. 13
AI4PEOPLE ETHICAL FRAMEWORK
AI IN EDUCATION
There is already a wide array of ethical guidance suggested for AI.
Floridi & Cowls (2019) report on the outcomes of the AI4People project, which
reviewed the following high profile guidelines, identifying 47 principles:
1. The Asilomar AI Principles, developed in collaboration with attendees of the high-level Asilomar
conference (2017) https://futureoflife.org/ai-principles/
2. The Montreal Declaration for Responsible AI (2017)
https://nouvelles.umontreal.ca/en/article/2018/12/04/developing-ai-in-a-responsible-way/
3. The expert crowd-sourced Ethically Aligned Design: A Vision for Prioritizing Human Well-being
with Autonomous and Intelligent Systems (v2, 2017) https://standards.ieee.org/content/dam/ieee-
standards/standards/web/documents/other/ead_v2.pdf
4. The Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems published by the
European Commission’s European Group on Ethics in Science and New Technologies (2018)
http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
5. The ‘five overarching principles for an AI code’ offered in UK House of Lords Artificial Intelligence
Committee’s report, AI in the UK: ready, willing and able? (2018)
https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm
6. The Tenets of the Partnership on AI, a multi-stakeholder organization consisting of academics,
researchers, civil society organisations, companies building and utilising AI technology, (2018)
https://www.partnershiponai.org/
14. 14
AI4PEOPLE ETHICAL FRAMEWORK
AI IN EDUCATION
The AI4People initiative synthesizes these guidelines into four traditional ethical
principles and proposes one new AI-specific principle.
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard
Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
15. 15
AI4PEOPLE ETHICAL FRAMEWORK
AI IN EDUCATION
Traditional ethical principles:
• Beneficence – the promotion of well-being, good outcomes, social and
environmental health
• Non-maleficence – protection from harm, ensuring correct use of
technologies, anticipating outcomes
• Autonomy – self-determination, retaining human decision-making and
rights, ensuring delegation is reversible
• Justice – fair distribution of resources, non-discrimination, minimising bias,
sharing benefits, empowering people
A new ethical principle:
• Explicability – enables the preceding four principles through a synthesis of
transparency, accountability, intelligibility (for the layman)
• “how does it work?”
• “who is responsible for the way it works?”
16. Ethical AI in Distance
Education
What can we anticipate?
17.
18. 18
WHAT DOES THE FUTURE HOLD?
AI IN EDUCATION
The application of AI in education is increasing.
According to the AI in Education Market Research Report (2020), the global
market reached $1.1 billion in 2019 and is predicted to reach $25.7 billion
by 2030.
Drivers:
• Demand for personalised learning
• Technological sophistication
• Educational infrastructure
• Specialisation
• AI literacy
• Covid-19
19. 19
EXAMPLES OF AI IN EDUCATION
AI IN EDUCATION
Algorithmic decision making – who to enrol, how to support their learning
Analytics – learning, social, emotional
Automated assessment/feedback – quizzes, writing analytics
Delegation of administrative tasks – freeing up time for learning & teaching
Knowledge management – making better use of data, making connections
Nudge autonomy – prompting stakeholders to take actions at appropriate times
Predictive analytics – modelling different scenarios
Simulations & practical experience – authentic learning experiences
Student support – AI tutoring, chatbots
VLE / UX – personalised interfaces
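The "predictive analytics" item above can be illustrated with a minimal sketch: a logistic regression, trained by plain gradient descent, that flags students at risk of withdrawal from engagement signals. The features, data, and threshold are all invented for illustration and are not drawn from any real VLE.

```python
import math

# Hypothetical predictive-analytics sketch: flag students at risk of
# withdrawal from two engagement signals.
# Features: (logins per week, assignments submitted); label: 1 = withdrew
data = [
    ((0.5, 0), 1), ((1.0, 1), 1), ((0.2, 0), 1),
    ((6.0, 4), 0), ((4.5, 3), 0), ((5.0, 5), 0),
]

w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # probability of withdrawal

# Plain stochastic gradient descent on the logistic loss
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

at_risk = predict((0.8, 0)) > 0.5   # low engagement: flagged
engaged = predict((5.5, 4)) > 0.5   # high engagement: not flagged
```

Even this toy model raises the ethical questions discussed on the following slides: the weights are opaque to the student, the training data encodes past cohorts, and a threshold of 0.5 is a design choice with consequences.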
20. 20
APPLYING THE AI4PEOPLE ETHICAL FRAMEWORK
AI IN EDUCATION
Beneficence
• Education as a common good
• Extending educational opportunity – but are some thereby excluded?
• AI efficiencies may not improve pedagogical quality
Non-maleficence
• Managing risk, avoiding misuse (n.b. YouTube)
• Mistakes happen at scale
• What happens when things go wrong? Who has oversight?
• Algorithms take time/data to calibrate – what happens to the life chances of
those who pass through the system while this is happening?
• Risk of devaluing human labour and contribution
Autonomy
• Balancing the rights and privacies of learners with the potential pedagogical
benefits for them
• Nudged rather than delegated autonomy
• Processes for retrieving decision-making powers
21. 21
APPLYING THE AI4PEOPLE ETHICAL FRAMEWORK
AI IN EDUCATION
Justice
• Algorithmic system bias (Noble, 2018) – how can this be addressed?
• How could we resolve competing claims to justice?
• Preventing new harms
• Can we reduce careful judgements to algorithms or decision trees?
Explicability
• “how does it work?”
• “who is responsible for the way it works?”
The ‘Explicability’ requirement is arguably quite close to openness as a
principle
• Is transparency desirable in the pedagogical process?
• Can you ‘game’ a learning system once you know its key metrics?
Consider the OU policies on the use of student data (2014)
https://help.open.ac.uk/documents/policies/ethical-use-of-student-data
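One concrete reading of the Explicability questions above ("how does it work?") is that a decision system should be able to report each input's contribution to a decision. For a linear scoring model this is direct, as the minimal sketch below shows; the feature names and weights are hypothetical.

```python
# A minimal sketch of explicability for a linear scoring model:
# each feature's contribution to the score can be reported directly.
# Feature names and weights are hypothetical.
weights = {"forum_posts": 0.4, "quiz_average": 1.2, "late_submissions": -0.9}
student = {"forum_posts": 3, "quiz_average": 0.7, "late_submissions": 2}

contributions = {f: weights[f] * student[f] for f in weights}
score = sum(contributions.values())

# An intelligible, per-feature account of the decision,
# largest influence first
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
print(f"total score: {score:.2f}")
```

For non-linear models (deep networks, ensembles) no such direct decomposition exists, which is why Explicability is demanding in practice, and why transparency may also enable the 'gaming' concern raised above.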
23. 23
CRITIQUE OF THE EXPLICABILITY PRINCIPLE
AI IN EDUCATION
Robbins (2019) notes that the Explicability principle has been endorsed by
Microsoft, Google, the World Economic Forum and the European Commission. He
argues that the requirement for Explicability is misplaced:
• Many uses for AI are low risk and don’t require explication; in some cases the
need to provide explication could prevent the advantages of AI from being realised
• Argues that it is not the algorithm (process) or designer/decision maker but the
underlying principle that determines ethical value
• “a principle of explicability for AI makes the use of AI redundant”
Whittlestone et al. (2019) suggest a roadmap for going ‘beyond principles’:
• Uncovering and resolving the ambiguity inherent in commonly used terms, such
as privacy, bias, and explainability
• Identifying and resolving tensions between the ways technology may both
threaten and support different values
• Convenience vs self-actualisation; accurate prediction vs fair treatment;
efficiency vs autonomy; individual benefit vs social solidarity
• Building a more rigorous evidence base for discussion of ethical and societal
issues
24. 24
PRACTICAL STEPS FOR EDUCATION
AI IN EDUCATION
Floridi et al. (2018) recommend action points for education:
• Incentivise (through finance and regulation) zones for testing and developing AI
• Support the creation of educational curricula and public awareness activities
around the societal, legal, and ethical impact of Artificial Intelligence
• School curricula to include computer science
• Qualification programmes to educate employees on societal, legal, &
ethical impact of working with AI
• Include ethics and human rights in scientific and engineering curricula
• Develop educational programmes for the public at large
• Engage with wider initiatives such as UN’s sustainable development goals
25. 25
PRACTICAL STEPS FOR EDUCATION
AI IN EDUCATION
Carman & Rosman (2020) raise the issue of cultural relativism with respect to
ethical expectations. They argue that the Explicability principle provides an
avenue for reaching greater understanding, suggesting that the Explicability
principle needs to be adopted in Global South research contexts.
Here the Explicability principle might be seen to operate as something like a
Discourse Ethics – a procedurally driven dialectic that can be used to identify and
resolve tensions between different morals and norms (Morley et al., 2020;
Habermas, 1991).
26. 26
CONCLUDING THOUGHTS
AI IN EDUCATION
1. The Covid-19 crisis is catalyzing AI adoption and incentivizing higher education
institutions to move towards online learning and automation; private sector
companies are currently moving into this space.
2. There are structural issues with progress towards ‘Strong’ AI: machine
learning has made little progress with representing higher order thoughts,
higher levels of abstraction, being creative with language, or ‘common sense’
(Russell & Norvig, 2009). We should be circumspect about the hype but look
closely at ‘Weak’ AI applications.
3. Nonetheless, AI could effectively replace a large range of functions in
education and is considered attractive for this reason. The OU has a large
learner base to work from but policies need refreshing...
4. We don’t speak much about the demands that AI enhanced systems will
make of learners: data mining, soft skills, self-assessment, reflection, remote
work, etc. How do they develop these skills? Do we risk another ‘digital
divide’?
27. 27
CONCLUDING THOUGHTS
AI IN EDUCATION
5. Similarly, will learners who elect not to share data be penalised in any way?
6. Are we really ready to meet the demands of the ‘Explicability’ principle? Can
transparency cause harms?
7. In 2018 the OpenAIED group heard a very interesting presentation from Mark
Nichols (LTI) on some of the thinking around AI support for students, which
emphasized the need for intersubjectivity: a kind of symmetry in human
relationships expressed as forms of mutual recognition, which machines cannot
genuinely reciprocate even if they can simulate it convincingly. Does this matter?
Weizenbaum (1976) argued that AI should not replace any positions requiring
respect/care/empathy (customer service, therapists, nurses, soldiers, policing,
judges)
8. Chatfield (2020) points out that we can’t think about the ethics of AI distinctly
from the ethics of our society – not all problems can be fixed with code but
Solutionism often presents itself this way.
30. 30
REFERENCES
Carman, M. & Rosman, B. (2020). Applying a principle of explicability to AI research in Africa:
should we do it? Ethics and Information Technology. https://doi.org/10.1007/s10676-020-
09534-2
Floridi, L., Cowls, J., Beltrametti, M. et al. (2018). AI4People—An Ethical Framework for a
Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds &
Machines 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in
Society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
Morley, J., Floridi, L., Kinsey, L. et al. (2020). From What to How: An Initial Review of Publicly
Available AI Ethics Tools, Methods and Research to Translate Principles into Practices.
Science and Engineering Ethics 26, 2141–2168. https://doi.org/10.1007/s11948-019-00165-5
Noble, S. U. (2018). Algorithms of Oppression. NYU Press.
Robbins, S. (2019). A Misdirected Principle with a Catch: Explicability for AI. Minds &
Machines 29, 495–514. https://doi.org/10.1007/s11023-019-09509-3
Russell, S. J. & Norvig, P. (1995, 2002, 2009). Artificial Intelligence: A Modern Approach. 1st
– 3rd ed. Prentice Hall.
31. 31
REFERENCES
Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3 (3): 417–
457
Schwab, K. (2016). The Fourth Industrial Revolution. World Economic Forum.
Statista (2020). Artificial Intelligence (AI) worldwide - Statistics & Facts.
https://www.statista.com/topics/3104/artificial-intelligence-ai-worldwide/
Turing, A. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460.
Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to
Calculation. W. H. Freeman and Company
Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal
implications of algorithms, data, and artificial intelligence: a roadmap for research. London:
Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-
Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf
Speaker notes
The mechanical simulation of human agency, intelligence, and perception
The use of machines to perform tasks that have traditionally been performed by natural intelligence
QUESTION: Are there examples of ‘Strong’ AI?
Virtual assistants are arguably an attempt to simulate what a Strong AI might do
Turing Test and Chinese Room task-based, hence weak AI
Information processing is not equal to understanding
Based on functionalism in philosophy of mind
Key question – does consciousness matter when it comes to intelligence?
Is rationality or behaving like a human the more reliable indicator of intelligence?
Which is the more desirable in an AI?
Should AI be held to a higher standard?
Changing register slightly…
Think about potential opportunities/risks in your field
AI underuse could also see private enterprise filling this space
Any ideas what underlying principles they reduced these guidelines to?
Bioethics because it is ‘applied’ and considered closer to digital, medical ethics than traditional ethics
AI regularly hits the headlines these days – but has the quality of AI improved?
Abu Dhabi has the world’s first AI university
Other examples?
Noble, Safiya Umoja (2018). Algorithms of oppression : how search engines reinforce racism. New York University Press.
Includes data from registration; study record; correspondence; VLE activity; data held by third parties; anonymized data from external sites
Does not include: complaints; enquiries by potential learners; non-formal learning; religious/sexual information
NO CLEAR OVERSIGHT
EXEC is PVC Learning & Teaching – no longer exists
https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-third-even-possible
Citation: Whittlestone, J. Nyrup, R. Alexandrova, A. Dihal, K. Cave, S. (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf