The document discusses governmental transparency and the use of artificial intelligence. It notes that while governments have used AI since the 1980s, recently there has been a shift to more complex, data-driven AI models that are difficult to explain. As AI is increasingly used in governmental decision-making, transparency is important so that citizens understand decisions. The authors developed a framework to assess the quality of explanations for legal decisions provided by public administrations. Through a case study and survey in the Netherlands, they found communication could be improved by providing more interactive explanations of decisions, including details of calculations and laws used. A good explanation led to greater citizen understanding and acceptance of decisions.
1. Governmental Transparency in the Era of Artificial Intelligence
prof. dr. T.M. van Engers
Leibniz Center for Law
University of Amsterdam
T.M.vanEngers@uva.nl
D.M. de Vries
Informatics Institute
University of Amsterdam
dennis.devries@student.uva.nl
JURIX 2019 - 32nd International Conference on Legal Knowledge and Information Systems
December 13, 2019
Venue: ETSI Minas y Energía School, Madrid, Spain
2. GOVERNMENTAL TRANSPARENCY IN THE ERA OF ARTIFICIAL INTELLIGENCE
Abstract: In recent years governments have started to adopt new types of Artificial Intelligence (AI), particularly sub-symbolic, data-driven AI, after having used more traditional types of AI since the mid-1980s. The models generated by such sub-symbolic AI technologies, such as machine learning and deep learning, are generally hard to understand, even by AI experts. In many use contexts it is essential, though, that organisations that apply AI in their decision-making processes produce decisions that are explainable, transparent and compliant with the rules set by law. This study focuses on the current developments of AI within governments and aims to provide citizens with a good motivation of (partly) automated decisions. For this study, a framework to assess the quality of explanations of legal decisions by public administrations was developed. It was found that communication with the citizen can be improved by providing a more interactive way to explain those decisions. Citizens could be offered more insight into the specific components of the decision made, the calculations applied and the sources of law that contain the rules underlying the decision-making process.
Keywords: Artificial Intelligence, XAI, Transparency, Explanations, Government
ABOUT THE JURIX CONFERENCE
JURIX 2019 is the 32nd International Conference on Legal Knowledge and Information Systems organised by the Foundation for Legal Knowledge Based Systems
(JURIX) since 1988. JURIX 2019 is hosted by the Ontology Engineering Group (Artificial Intelligence Department, Universidad Politécnica de Madrid).
3. Table of Contents
The Early Days of Artificial Intelligence (AI)
Research in AI has been ongoing since its inception at the
Dartmouth Conference of 1956, and various new technologies
have emerged since then.
1956
Adoption of AI by Governments
In the late 1980s and early 1990s, governments started to adopt a wide
variety of AI applications to reduce burdensome tasks in public
administration.
1980’s
Potential threats of AI
Over the past decade, the adoption of AI technologies in public
administrations and the digitization of government have been criticized
for several reasons.
2000’s
Renewed Interest in Explainable AI
The increased popularity of AI and its application in fields where
citizens' stakes are high has put the topic of XAI back on the
agenda.
2010’s
Explaining Good Explanation
Several criteria have been described that can be used to
determine a user's satisfaction with an explanation; a quality
assessment framework is presented.
2019
What's next?
Forward
Explanation Optimization
We should not forget that explainability, transparency,
accountability and auditability are essential to governmental
processes, and that they can be included in digital communication.
Now
5. Adoption of AI by Governments
In the late 1980s and early 1990s, governments started to adopt a wide variety of AI applications to reduce
burdensome tasks in public administration:
Symbolic AI
Applications in government:
● Calculating taxes to be paid
● Deciding on student finance
● Determining speeding fines
Techniques such as:
● Rule / Case Based Systems
● Expert Systems
● Automated Decision Systems
An automatic rule-based reasoning
mechanism that uses predetermined rules to
reach a specific decision.
Sub-symbolic AI
Applications in government:
● Recognizing handwriting
● Optimizing traffic flows
● Predicting recidivism
Techniques such as:
● Neural Nets
● Machine Learning
● Deep Learning
Automatic reasoning mechanisms that
rely on patterns inferred from existing data
to reach a specific decision.
METHODS OF AI IN GOVERNMENT
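The symbolic, rule-based style described above (e.g. determining speeding fines) can be illustrated with a minimal sketch; the thresholds and fine amounts below are invented for illustration and do not reflect actual Dutch traffic law.

```python
# Minimal sketch of a symbolic, rule-based decision system: a hypothetical
# speeding-fine calculation. All thresholds and amounts are invented examples.
def speeding_fine(measured_speed: int, speed_limit: int) -> int:
    """Apply predetermined rules to reach a specific, traceable decision."""
    excess = measured_speed - speed_limit
    if excess <= 0:
        return 0                     # rule 1: no violation, no fine
    if excess <= 10:
        return 50                    # rule 2: flat fine for minor violations
    if excess <= 30:
        return 50 + excess * 10      # rule 3: fine grows with excess speed
    return 500                       # rule 4: capped maximum fine

print(speeding_fine(128, 100))  # rule 3 applies: 50 + 28 * 10 = 330
```

Because every rule is explicit, the decision can be traced back to the exact rule that produced it, which is precisely the explainability property that sub-symbolic models lack.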
7. Adoption of AI by Governments
In the late 1980s and early 1990s, governments started to adopt a wide variety of AI applications to reduce
burdensome tasks in public administration, which come with some costs:
Machine Bias: Risk assessments in criminal sentencing (ProPublica, 2016)
Ongevraagd advies over de effecten van de digitalisering voor de rechtsstatelijke verhoudingen
[Unsolicited advisory report on the effects of digitization on constitutional relations] (Raad van State, 2018)
8. Threats of AI-use in Governments
Over the past decade, the adoption of AI technologies such as Automated Decision Systems (ADS) in
public administrations and the digitization of government have been criticized for several reasons.
POTENTIAL BIAS AND INCORRECTNESS IN AUTOMATED SYSTEMS
Computer systems can contain different kinds of human error. They can be incorrect in their reasoning or even biased. In the United States the COMPAS system is used for predicting the likelihood of recidivism for criminals. Researchers showed that this system is biased against African Americans. Introducing a bias against a specific group within society could lead to more segregation and thus decreasing opportunities for that specific group. As a result, the system will produce a self-fulfilling prophecy.
GAINING AN UNDERSTANDABLE INSIGHT INTO SYSTEMS
To be able to trust organizations to take (legally) justified decisions, those decisions, when produced by AI applications, need to be explained and argued for in such a way that the persons subjected to them at least understand what the decision is based upon. Insight into the reasoning mechanisms of AI algorithms in general is needed in order to check their correctness, fairness, normative compliance and sensitivity to potential biases in their judgements and governmental decisions.
CHANGING CONSTITUTIONAL RELATIONS
In the Netherlands, the Council of State aims to protect the citizen against changing constitutional relations. The Council notes that in the current situation, Dutch citizens cannot always determine which decision rules and data are used for a specific decision that is (partly) made with the help of AI. Furthermore, it notes that for some people it might be very challenging to take part in the digital society. Therefore the Council advises the government to pay closer attention to the motivation of automated decisions.
9. Research structure
Situation: Governments started to adopt Artificial Intelligence technologies to improve the efficiency and effectiveness of public administration.
Complication: Applications do not meet the traditional government requirements for explainability, transparency, accountability and auditability.
Key Question: How can governmental agencies improve their digital communication towards citizens concerning (partly) automated decisions?
10. Research approach
INTERVIEWS
Understanding digitizing governments and learning more about the adoption of symbolic and sub-symbolic AI technologies within governments. We spoke to people from various fields, such as governance, artificial intelligence and law.
LITERATURE STUDY
Reviewed research materials from various sources to identify the importance of explanations and the renewed interest in Explainable AI, and to construct a quality assessment framework for explanations from governments.
SURVEY
An instrument for measuring the citizen's attitude towards the adoption of ADS within the Dutch Government. Some A/B testing was done to look for a relation between explanation quality and acceptance of the decision.
11. Renewed Interest in Explainable AI
The need for Explainable AI (XAI) is not new by any means; it has been addressed in many reports,
academic papers and conferences. The increased popularity of sub-symbolic AI, combined with its
application in fields where citizens' stakes are high, has put the topic back on the agenda.
WHAT ARE EXPLANATIONS?
1. A statement or account that makes something clear
2. A reason or justification given for an action or belief
Source: Oxford Dictionaries (2019)
WHY EXPLANATIONS MATTER
EXPLANATION QUALITY IMPROVES: Good explanations can be useful to clarify or justify the behavior of an AI agent.
UNDERSTANDING IMPROVES: This leads to a better understanding of the reasoning of those systems.
ACCEPTANCE OF DECISION IMPROVES: Acceptance of the decisions, conclusions and recommendations of those systems improves.
MORE TRUST IN AI AND GOVERNMENT: Acceptance of decisions improves the user's trust in the accuracy of those systems.
12. Explaining Good Explanation
Based on the vast volume of existing work, we developed an evaluative framework that enables us to
evaluate the quality of XAI. In the literature, several criteria have been described that can be used to
determine satisfaction with an explanation.
CRITERIA FRAMEWORK FOR GOOD EXPLANATION
External coherence: Is the explanation compatible with existing knowledge and beliefs?
Internal coherence: Do the several parts relate to each other soundly?
Simplicity: Is the number of causes used sufficient?
Articulation: Is the text written in an understandable manner?
Contrastiveness: Is the alternative outcome shown as well?
Interaction: Is there a possibility to interact between explainer and explainee?
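As a rough sketch, the six criteria could be operationalized as a simple checklist score; the function below is a hypothetical illustration of that idea, not the authors' actual assessment instrument.

```python
# Hypothetical checklist scoring for the six explanation-quality criteria.
# Criterion names follow the slide; the boolean scoring is our illustration.
CRITERIA = [
    "external_coherence",  # compatible with existing knowledge and beliefs?
    "internal_coherence",  # do the parts relate to each other soundly?
    "simplicity",          # no more causes used than needed?
    "articulation",        # written in an understandable manner?
    "contrastiveness",     # is the alternative outcome shown as well?
    "interaction",         # can explainer and explainee interact?
]

def assess_explanation(checks: dict) -> float:
    """Return the fraction of criteria that an explanation satisfies."""
    return sum(bool(checks.get(c)) for c in CRITERIA) / len(CRITERIA)

# A letter that is clearly written and simple, but meets no other criterion:
print(assess_explanation({"articulation": True, "simplicity": True}))  # 2/6
```

A real instrument would weight the criteria and use graded rather than binary judgements; the sketch only shows how the framework turns qualitative criteria into a comparable score.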
13. Case-study in The Netherlands
A qualitative research approach and a concordant analysis were used to gain insight into the way
governmental authorities currently correspond with citizens, i.e. how they communicate and explain their
decisions, and into how this could be improved in several ways.
SYMBOLIC AI IN THE DUTCH GOVERNMENT
Several governmental authorities in The Netherlands use ADS for executing administrative tasks. One commonality of the ADS-supported
tasks is that they typically contain some normative reasoning, assessing whether something is allowed or disallowed, and in many cases
some form of calculation. Governmental agencies communicate these decisions to citizens on paper, but more and more by digital means.
DECIDING ON STUDENT LOANS
The example case is the application that is used to decide on student loans, deployed by the administrative agency Education Executive
Agency (referred to as DUO in Dutch). The ADS for deciding on student loans uses symbolic AI; more specifically, it is a rule-based system
that contains different rules that are evaluated when deciding on students' entitlement to financial support.
ANALYZING A DECISION LETTER FROM DUO
When an address change is received at the authority, the rule-based system will automatically notice the change in the student's address and
will, when necessary, change the student's loan as well. Students who live with their parents receive a lower loan than those who live on their
own. This decision is sent to the student via an online platform called 'Mijn DUO'.
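The rule described above can be sketched as a tiny rule-based procedure; the loan amounts below are hypothetical placeholders, since the actual DUO amounts are set by law and change yearly.

```python
# Sketch of the DUO address-change rule: students living with their parents
# receive a lower monthly loan than students living on their own.
# The amounts below are hypothetical, not actual DUO figures.
LOAN_AT_HOME = 120.00   # hypothetical monthly amount (euro)
LOAN_AWAY = 290.00      # hypothetical monthly amount (euro)

def monthly_loan(lives_with_parents: bool) -> float:
    return LOAN_AT_HOME if lives_with_parents else LOAN_AWAY

def process_address_change(now_with_parents: bool) -> str:
    """Re-evaluate the loan after an address change, as the rule-based ADS would."""
    return f"New monthly loan: {monthly_loan(now_with_parents):.2f} euro"

print(process_address_change(True))   # moving in with one's parents lowers the loan
```

An explainable decision letter would cite exactly this rule and the amounts used, which is what the conceptual letter discussed later tries to make visible.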
14. Hypotheses on the survey
H1: There is a relation between one's trust in government and trust in computer systems within the government.
H2: The citizen's support for the deployment of AI by the government varies per use case.
H3: Students will be more satisfied with a more interactive letter than with the original letter from DUO.
H4: A clearer explanation will lead to greater acceptance of the decision.
H5: A good explanation of the decision will help to reduce the citizen's willingness to object or appeal.
15. The Original Decision Letter
An original letter retrieved from the
Education Executive Agency about a
student’s address change.
Explanation Framework
External coherence
Internal coherence
Simplicity
Articulation
Contrastiveness
Interaction
16. The Conceptual Decision Letter
An interactive letter based on the original letter from the Education Executive Agency about a student's address change. The Explanation Assessment Framework was used to optimize this letter.
Explanation Framework
External coherence
Internal coherence
Simplicity
Articulation
Contrastiveness
Interaction
18. Respondents
● Sex: 60 (45.1%) female and 73 (54.9%) male
● Age: All were aged between 18 and 30, with an average age of 23.46 (SD = 1.78)
● Education:
133 RESPONDENTS
19. Trust in Computer-Systems within Government
A relation can be found between one's trust in the Dutch Government and one's trust in computer-systems within government:
F(1,131) = 14.137, p < .0005, R² = .097, b = 0.333, t(131) = 3.760, p < .0005
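The reported coefficients come from a simple linear regression. As an illustration of the underlying arithmetic, the slope b and R² can be computed with the textbook formulas; the data below is made-up toy data, not the survey responses.

```python
from statistics import mean

def ols_slope_r2(x: list[float], y: list[float]) -> tuple[float, float]:
    """Slope b and R^2 of a simple linear regression of y on x,
    using b = Sxy / Sxx and R^2 = Sxy^2 / (Sxx * Syy)."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / sxx, sxy ** 2 / (sxx * syy)

# Toy data standing in for (trust in government, trust in computer systems):
b, r2 = ols_slope_r2([1, 2, 3, 4], [2, 3, 5, 6])
print(b, r2)  # 1.4 0.98
```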
Trust in Computer-Systems within Government
1. Confidence in automated computer systems for decision-making.
2. Worries about automated computer systems for decision-making.
3. Pro or anti automated computer systems for decision-making.
4. Demand for transparency of mechanisms of automated computer systems for decision-making.
Cronbach's Alpha = .592.
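The reported scale reliability follows the standard formula α = k/(k-1) · (1 - Σ item variances / variance of total scores). A stdlib sketch of that computation, on made-up item scores (the survey's raw data is not reproduced here):

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` holds one list of respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sum scores
    return k / (k - 1) * (1 - sum(pvariance(s) for s in items) / pvariance(totals))

# Made-up 5-point scores of five respondents on four hypothetical trust items:
items = [
    [3, 4, 2, 5, 4],
    [2, 4, 3, 5, 3],
    [3, 5, 2, 4, 4],
    [2, 3, 3, 5, 3],
]
print(round(cronbach_alpha(items), 3))  # 0.886
```

On the authors' real four-item scale the same formula yields the reported α = .592, which is on the low side for a reliable scale (the usual rule of thumb is α ≥ .7).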
20. Deployment of AI in The Netherlands
Students' attitude - automated computer-systems might be used for: [chart; three comparisons, each p < .0005]
21. Comparing the Two Decision Letters on Clarity
Original decision letter: 68 respondents, average clarity score 3.52 / 5
Conceptual decision letter: 65 respondents, average clarity score 4.14 / 5
(Mann-Whitney U = 1082.5, z = 5.112, p < .0005)
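The U statistic above is a rank-sum comparison of the two clarity-score samples. A stdlib sketch of how Mann-Whitney U is computed, shown on toy data rather than the survey responses:

```python
def mann_whitney_u(a: list[float], b: list[float]) -> float:
    """Mann-Whitney U for sample `a`: U = R_a - n_a*(n_a+1)/2, where R_a is
    the sum of a's ranks in the pooled sample (tied values share mean ranks)."""
    pooled = sorted((x, src) for src, xs in ((0, a), (1, b)) for x in xs)
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                       # find the run of tied values
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2   # mean of ranks i+1 .. j
        i = j
    r_a = sum(r for r, (_, src) in zip(ranks, pooled) if src == 0)
    return r_a - len(a) * (len(a) + 1) / 2

print(mann_whitney_u([2, 4, 6, 8], [1, 3, 5, 7]))  # 10.0
```

The significance test then compares U against its null distribution (via tables or a normal approximation with tie correction), which is where the reported z and p values come from.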
22. Acceptance of the Decision
Respondents agree significantly more with the decision in the conceptual letter than with the same decision in the original letter
(U = 1550, z = 3.331, p = .001).
No significant difference between the two conditions was found in the urge to object to or appeal the decision
(U = 1967, z = 1.186, p = .235).
However, the explanation in the conceptual letter proved to be more beneficial for the support, substantiation and argumentation of a potential objection or appeal
(U = 1577, z = 2.979, p = .003).
Lastly, it can be concluded that a good explanation of the decision will help to reduce the chance of objection or appeal
(t(132) = 13.319, p < .0005).
23. Key takeaways
● There is a relation between one's trust in government and trust in computer systems within the
government.
● The citizen's support for the deployment of AI by the government varies per use case.
● Students are more satisfied with a more interactive letter than with the original letter from DUO.
● A clearer explanation leads to greater acceptance of the decision.
● A good explanation of the decision might help to reduce the citizen's willingness to object or
appeal.
24. Conclusion - Explanation Optimization
1. Improving communication of automated decisions
This means the decision is compatible with existing knowledge, the parts of the letter fit together, it uses as few causes as possible, it is clearly written, it provides contrastive information and it offers the opportunity to interact.
2. Improves the citizen's understanding
Providing a more interactive letter, which delivers more insight into the specific calculations and sources of law, results in better understanding.
3. Improves citizen satisfaction
As a result, the citizen's satisfaction with the decision will increase.
4. Increases acceptance of the decision
Greater satisfaction will mainly lead to greater acceptance of the decision.
5. Diminishes objection and appeal
Greater acceptance of the decision changes the willingness of citizens to object to the decision. This improves the government's efficiency and effectiveness, while explainability, transparency, accountability and auditability are guaranteed.
25. Acknowledgements
We would like to thank the Canadian Research Council for sponsoring the ACT Project and the CyberJustice Lab. Special
appreciation goes to the experts who were involved in this study: Robert van Doesburg, Cees-Jan Visser,
Matthijs van Kempen, Giovanni Sileno, Marlies van Eck, Mark Neerincx, Diederik Dulfer, Vincent Hoek and Koen Smit.
26. References
● Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. ProPublica (2016).
● Derek Doran, Sarah Schulz, and Tarek R Besold. 2017. What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794
● English Oxford Dictionaries. 2019. Definition of ’explanation’ in English. (2019). https://en.oxforddictionaries.com/definition/explanation
● John Haugeland. 1985. Artificial Intelligence: The Very Idea. MIT Press.
● Robert Hoffman, Shane Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for Explainable AI: Challenges and Prospects. XAI Metrics (2018).
● Troy D Kelley. 2003. Symbolic and Sub-Symbolic Representations in Computational Models of Human Cognition: What Can be Learned from Biology? Theory & Psychology
13, 6 (2003), 847–860.
● Henry Lieberman. 2016. Symbolic vs. Subsymbolic AI. MIT Media Lab (2016).
● Peter Lipton. 1990. Contrastive explanation. Royal Institute of Philosophy Supplements 27 (1990), 247–266.
● Tania Lombrozo. 2007. Simplicity and probability in causal explanation. Cognitive Psychology 55, 3 (2007), 232–257. https://doi.org/10.1016/j.cogpsych.2006.09.006
● Yisheng Lv, Yanjie Duan, Wenwen Kang, Zhengxi Li, and Fei-Yue Wang. 2014. Traffic flow prediction with big data: a deep learning approach. IEEE Transactions on Intelligent
Transportation Systems 16, 2 (2014), 865–873.
● Hila Mehr. 2017. Artificial Intelligence for Citizen Services and Government. Harvard Ash Center Technology & Democracy (2017).
● Tim Miller. 2018. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence (2018).
● Wolter Pieters. 2011. Explanation and trust: what to tell the user in security and AI? Ethics and information technology 13, 1 (2011), 53–64.
● Raad van State. 2018. Ongevraagd advies over de effecten van de digitalisering voor de rechtsstatelijke verhoudingen. Kamerstukken II 2017/18, 26643, nr. 557 (2018).
● Stephen J Read and Amy Marcus-Newhall. 1993. Explanatory coherence in social explanations: A parallel distributed processing account. Journal of Personality and Social
Psychology 65, 3 (1993), 429.
● Thomas R Roth-Berghofer and Jorg Cassens. 2005. Mapping goals and kinds of explanations to the knowledge containers of case-based reasoning systems. In
International Conference on Case-Based Reasoning. Springer, 451–464.
● Robert Victor Schuwer. 1993. Het nut van kennissystemen. https://doi.org/10.6100/IR394707
● Paul Thagard. 1989. Explanatory coherence. Behavioral and brain sciences 12, 3 (1989), 435–467.
● Matthijs van Kempen. 2019. Motivering van automatisch genomen besluiten. Knowbility (2019).