ALGORITHMIC
GATEKEEPING FOR
PROFESSIONAL
COMMUNICATORS
Power, Trust, and Legitimacy
Arjen van Dalen
ROUTLEDGE FOCUS
ALGORITHMIC
GATEKEEPING FOR
PROFESSIONAL
COMMUNICATORS
This book provides a critical study of the power, trust, and legitimacy of
algorithmic gatekeepers.
The news and public information which citizens see and hear is no longer
solely determined by journalists, but increasingly by algorithms. Van Dalen
demonstrates the gatekeeping power of social media algorithms by showing
how they affect exposure to diverse information and misinformation
and shape the behavior of professional communicators. Trust and
legitimacy are foregrounded as two crucial antecedents for the
acceptance of this algorithmic power. This study reveals low trust
among the general population in algorithms performing journalistic
tasks and a perceived lack of legitimacy of algorithmic power among
professional communicators. Drawing on case studies from YouTube
and Instagram, this book challenges technological deterministic discourse
around “filter bubbles” and “echo chambers” and shows how algorithmic
power is situated in the interplay between platforms, audiences, and
professional communicators. Ultimately, trustworthy algorithms used by
news organizations and social media platforms as well as algorithm
literacy training are proposed as ways forward toward democratic
algorithmic gatekeeping.
Presenting a nuanced perspective which challenges the deep divide
between techno-optimistic and techno-pessimistic discourse around
algorithms, Algorithmic Gatekeeping is recommended reading for
researchers in journalism, communication, and related fields.
Arjen van Dalen is Professor WSR at the Center for Journalism of the
University of Southern Denmark. He wrote his PhD dissertation on
Political Journalism in Comparative Perspective and is co-author of an
award-winning book on this topic with Cambridge University Press. He
published in journals such as Political Communication, Journalism, the
International Journal of Press/Politics, and Public Opinion Quarterly. His
research on algorithmic gatekeeping is funded by the Independent
Research Fund Denmark and the Carlsberg Foundation.
Disruptions: Studies in Digital Journalism
Series editor: Bob Franklin
Disruptions refers to the radical changes provoked by the affordances of
digital technologies that occur at a pace and on a scale that disrupts settled
understandings and traditional ways of creating value, interacting, and
communicating both socially and professionally. The consequences for
digital journalism involve far-reaching changes to business models, pro­
fessional practices, roles, ethics, products, and even challenges to the
accepted definitions and understandings of journalism. For Digital
Journalism Studies, the field of academic inquiry which explores and ex­
amines digital journalism, disruption results in paradigmatic and tectonic
shifts in scholarly concerns. It prompts reconsideration of research
methods, theoretical analyses, and responses (oppositional and consen­
sual) to such changes, which have been described as being akin to “a
moment of mind-blowing uncertainty.”
Routledge’s book series, Disruptions: Studies in Digital Journalism,
seeks to capture, examine, and analyze these moments of exciting and
explosive professional and scholarly innovation which characterize
developments in the day-to-day practice of journalism in an age of
digital media, which are articulated in the newly emerging academic
discipline of Digital Journalism Studies.
Disrupting Chinese Journalism
Changing Politics, Economics and Journalistic Practices of the Legacy
Newspaper Press
Haiyan Wang
Algorithmic Gatekeeping for Professional Communicators
Power, Trust, and Legitimacy
Arjen van Dalen
For more information about this series, please visit: www.routledge.com/
Disruptions/book-series/DISRUPTDIGJOUR
Algorithmic
Gatekeeping for
Professional
Communicators
Power, Trust, and Legitimacy
Arjen van Dalen
First published 2023
by Routledge
4 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
605 Third Avenue, New York, NY 10158
Routledge is an imprint of the Taylor & Francis Group, an informa
business
© 2023 Arjen van Dalen
The right of Arjen van Dalen to be identified as author of this work
has been asserted in accordance with sections 77 and 78 of the
Copyright, Designs and Patents Act 1988.
The Open Access version of this book, available at www.taylorfrancis.com, has been made available under a Creative
Commons Attribution-Non Commercial-No Derivatives 4.0
license.
Trademark notice: Product or corporate names may be trademarks
or registered trademarks, and are used only for identification and
explanation without intent to infringe.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British
Library
Library of Congress Cataloguing-in-Publication Data
Names: Van Dalen, Arjen, 1980- author.
Title: Algorithmic gatekeeping for professional communicators :
power, trust and legitimacy / Arjen van Dalen.
Description: Abingdon, Oxon ; New York : Routledge, 2023. |
Series: Disruptions : studies in digital journalism | Includes
bibliographical references and index.
Identifiers: LCCN 2023003541 (print) | LCCN 2023003542 (ebook)
| ISBN 9781032450711 (hardback) | ISBN 9781032450728
(paperback) | ISBN 9781003375258 (ebook)
Subjects: LCSH: Social media and journalism. | Algorithms. | Filter
bubbles (Information filtering)
Classification: LCC PN4766.V36 2023 (print) | LCC PN4766
(ebook) | DDC 302.23/1--dc23/eng/20230227
LC record available at https://lccn.loc.gov/2023003541
LC ebook record available at https://lccn.loc.gov/2023003542
ISBN: 978-1-032-45071-1 (hbk)
ISBN: 978-1-032-45072-8 (pbk)
ISBN: 978-1-003-37525-8 (ebk)
DOI: 10.4324/9781003375258
Typeset in Times New Roman
by MPS Limited, Dehradun
Contents
List of Figures vii
List of Table viii
Acknowledgments ix
1 Introduction 1
1.1 Algorithmic power 1
1.2 Conceptualizing algorithmic gatekeeping 2
1.3 Technological determinism: hard and weak 4
1.4 Theoretical considerations 8
1.5 Legitimacy and trust 11
1.6 Outline and content of the book 14
2 Algorithm Aversion 17
2.1 Introduction 17
2.2 Making sense of news algorithms 18
2.3 Research questions 21
2.4 Method 22
2.5 Results 23
2.6 Conclusion 29
3 The Mediating Power of Algorithms 31
3.1 Introduction 31
3.2 Misinformation and algorithms 32
3.3 YouTube and autism spectrum disorder 34
3.4 The mediating power of YouTube’s
algorithms 36
3.5 Research questions 38
3.6 Method 38
3.7 Results 40
3.8 Conclusion 43
4 Structuring Power 46
4.1 Introduction 46
4.2 Influencers: politically relevant strategic
actors 48
4.3 Algorithmic power 49
4.4 Research questions 51
4.5 Method 52
4.6 Results 54
4.7 Conclusion 61
5 Algorithmic Power, Trust, and Legitimacy 63
5.1 Algorithmic power 63
5.2 Algorithms, trust, and legitimacy 64
5.3 Implications 65
References 70
Index 82
Figures
1.1 Annual number of social science publications about news
algorithms, social media algorithms, filter bubbles, and
echo chambers 5
2.1 Who do you think is best at selecting the following type
of content? 23
2.2 Should humans or algorithms make decisions in different
areas of public life? 28
2.3 Should humans or algorithms decide which news you
receive? (Generational differences) 29
3.1 YouTube search results after searching for videos
about autism using six different search terms 40
3.2 YouTube recommendations after five types of videos
about autism 42
4.1 The relation between Influencers and Instagram’s
algorithms as a balance of power 56
Table
2.1 Algorithm aversion experiment 26
Acknowledgments
One of the central arguments in this book is that algorithmic gatekeeping
cannot be understood in isolation, since it is the result of the interaction
between the algorithm and its surroundings. The same can be said for the
writing of this book, which is the result of my interactions with generous
colleagues and institutions.
First of all, I would like to thank the Independent Research Fund
Denmark, which generously supported my research on algorithmic
gatekeepers under grant 7015-0090B and the Carlsberg Foundation
which supported the writing of this book and Open Access Funding with
monograph grant CF19-0544. Kristina Bøhnke, Annette Schmidt, and
research support at the SDU’s Faculty of Business and Social Sciences
provided very professional support during the several rounds of funding applications. Special thanks go to Erik Albæk, Claes de Vreese, and
Michael Latzer for their help in the application process.
The Center for Journalism and the Department of Political Science
and Public Management at the University of Southern Denmark
(SDU) provided a good base to conduct my research. Many thanks to
the following students without whose help it would not have been
possible to conduct the empirical analyses in the book: Anne-Marie
Fabricius Markussen, Camilla Lund Knudsen, Freja Mokkelbost, Julie
Holmegaard Milland, Kasper Viholt Nielsen, Kirsten Vestergaard Prescott,
Kristina Finsen Nielsen, Kristine Tesgaard, Ninna Nygård Jæger, Sasha
Dilling, Selma Marthedal, Sofie Høyer Christensen, and Thor Bech
Schrøder. Tina Guldbrandt Jakobsen managed the employment of these
students.
Some of the results in Chapter 2 were earlier presented in the report
MFI undersøger Synet på nyhedsalgoritmer. Ralf Andersson, Didde
Elnif, and Morten Skovsgaard provided valuable feedback on this report
which also helped shape the argument in Chapter 2.
Drafts of the manuscript were discussed in the Journalism Research
group at SDU. Thanks to research group members Camilla Bjarnøe,
Christina Pontoppidan, David Hopmann, Didde Elnif, Erik Albæk,
Fransiska Marquart, Jakob Ohme, Joe Bordacconi, Jonas Blom, Kim
Andersen, Lene Heiselberg, Lisa Merete Kristensen, Majbritt Christine
Severin, Morten Skovsgaard, Morten Thomsen, Peter Bro, Rasmus
Rønlev, and Sofie Filstrup for their challenging and constructive
feedback. In the last stage of the writing process, I discussed some of
the central ideas with colleagues at the Digital Democracy Centre at
SDU. I am sure this Centre will be a place for continuous conversation
around the central themes of this book.
During the writing of this book, I dove into the emerging research
literature on the topic of algorithmic gatekeeping, which was highly
inspirational. Here, the work of Nick Diakopoulos, Taina Bucher, Axel
Bruns, Tarleton Gillespie, Kevin Munger, and Joseph Phillips deserves
special mention.
Before the pandemic, it was possible for me to visit three research
institutions where I worked on data collection and presented selected
findings from this book. From ASCOR at the University of Amsterdam,
I would like to specially thank Rens Vliegenthart and Mark Boukes. Bert
Bakker generously shared his knowledge on pre-registration. From
Södertörn University, I would like to thank Ester Appelgren, Mikolaj
Dymek, Gunnar Nygren, and Nina Springer. From my stay at the
Leibniz Institute for Media Research, Hans-Bredow-Institut (HBI), and
University of Hamburg, I would like to thank Cornelius Puschmann,
Jonathon Hutchinson, Jessica Kunert, and Michael Brüggemann.
I would also like to thank the conference and workshop organizers
and participants who allowed me to test early versions of the arguments
presented in this book. Here, I would like to highlight in particular the
2018 ECREA pre-conference Information diversity and media pluralism
in the age of algorithms, hosted by Edda Humprecht, Judith Möller, and
Natali Helberger; the ECREA Journalism Studies Section Conference
2019 convened by Folker Hanusch and the Journalism Studies Section at
the University of Vienna; the ECREA Political Communication Section
Conference 2019 hosted by Agnieszka Stępińska and the Faculty of
Political Science and Journalism at the Adam Mickiewicz University in
Poznań; the Knowledge-Resistance-workshop in 2020 upon invitation by
Jesper Strömbäck of the University of Gothenburg; and an Augmented
Journalism network-webinar in 2020, organized by Carl-Gustav Linden,
Mariëlle Wijermans, and Andreas L Opdahl.
A further word of gratitude goes to Routledge. I especially want to
thank Bob Franklin, Editor of Routledge’s Disruptions: Studies in
Digital Journalism-series for his confidence and encouragement, as well
as Hannah McKeating and Elizabeth Cox for their guidance in the
publication process. Three anonymous reviewers provided valuable
input to the book.
A large part of the work on this book took place during the Covid
pandemic. Personal thanks go to Anna Batog, Niels Lachmann, Thomas
Paster, and my family in The Netherlands for their support during this
time. Head of Department Signe Pihl-Thingvad and Vice-Head Morten
Kallestrup are thanked for allowing me to work at the university during
the lockdown. I want to thank Dean Jens Ringsmose for his encouraging
words to the staff of the faculty during this period. A special word of
thanks goes to everyone who reached out to me during the lockdown.
That was very much appreciated!
1 Introduction
1.1 Algorithmic power
The news and public information which citizens see and hear are
increasingly influenced by algorithms. Algorithms are “encoded pro­
cedures for transforming input data into a desired output, based on
specified calculations” (Gillespie, 2014, p. 167). Such automated proce­
dures fulfill a gatekeeping role and automatically select, sort, and prioritize
public information (e.g., Bucher, 2018; Diakopoulos, 2019; Kruikemeier
et al., 2021; Napoli, 2014; Thurman et al., 2019; Wallace, 2018).
News and social media algorithms are powerful forces (see Bucher,
2018; Diakopoulos, 2019; Gillespie, 2014; Just and Latzer, 2017).
Worldwide, people get more and more of their political information
through algorithm-controlled online social media, such as Facebook,
Google, YouTube, or Instagram. This is especially true for younger
generations (Newman et al., 2019). When algorithms act as gate­
keepers, they impact on the information which the audience receives
and on the way people communicate. Prioritization and recommen­
dation algorithms affect exposure to counter-attitudinal views by either
narrowing (Bakshy et al., 2015) or broadening the perspectives which
the audience encounters (Helberger, 2019). Replacing human-curated
content with algorithmically curated news, like in Apple News, affects
exposure to soft news about celebrities (Bandy and Diakopoulos,
2020a). The information which algorithms on social media platforms
show can affect voter turnout (Bond et al., 2012) or the persuasiveness
of misinformation (Bode and Vraga, 2015). Recommendation algo­
rithms on platforms like TikTok play an important role in social
movements, as they influence the reach of calls to action (Bandy and
Diakopoulos, 2020b). On the one hand, algorithms on social media
platforms can enhance the spread of extremist views (e.g., Massanari,
2017). On the other hand, algorithms are used to moderate the online
debate and censor inappropriate speech (e.g., Cobbe, 2021). Changes in
social media algorithms can limit or boost traffic to legacy news outlets
(Trielli and Diakopoulos, 2019) and influence the type of content which
these outlets produce (Diakopoulos, 2019, pp. 178–180). Automation in
news production transforms the news-making process (e.g., Linden, 2017;
Thurman, 2011; Van Dalen, 2012; Wu et al., 2019) and affects the relation
between journalists and the audience (Dörr and Hollnbuchner, 2017).
The power of news and social media algorithms is central to this book
which does not consider algorithms as an autonomous force that ex­
ercises a uni-directional influence on society at large. This book also tries
to steer away from treating algorithms as a scapegoat for negative
societal developments such as radicalization or polarization. Instead, its
purpose is to understand how the power of gatekeeping algorithms
emerges and is exerted in the interaction with other actors. The first aim
of the book is to make the power of algorithmic gatekeepers tangible. It
documents how algorithms affect the content which the audience is ex­
posed to (Chapter 3) and how they shape the behavior of professional
communicators (Chapter 4). This shows how algorithmic power is situated
in the interplay between platforms, audiences, and professional commu­
nicators. To be justified, such power needs to be accepted by the general
population and professional communicators. Therefore, the second aim of
this book is to study two crucial antecedents for the acceptance of algo­
rithmic power: trust and legitimacy. It analyzes trust and the acceptance of
algorithmic gatekeeping among the general population (Chapter 2) and
evaluates the legitimacy of algorithmic power (Chapter 4).
1.2 Conceptualizing algorithmic gatekeeping
Journalistic gatekeeping has been defined as “the process of selecting,
writing, editing, positioning, scheduling, repeating and otherwise mas­
saging information to become news” (Shoemaker et al., 2009, p. 73).
Building on this general definition, algorithmic gatekeeping can be
understood as the influence of automated procedures on the process of
selecting, writing, editing, scheduling, repeating, and otherwise massa­
ging information to become news. These automated procedures are
applied by news organizations, but equally important, they are a defining
feature of social media platforms. This book deals with the influence of
automated procedures on the production of news, as well as on the
selection of news.
Technically, algorithms can be described as programmed procedures
that are followed by a computer system. Bucher (2018) distinguishes
between deterministic and machine learning algorithms. Deterministic
algorithms follow the same predefined steps and do not adapt. Machine
learning algorithms on the other hand continuously adapt the way they
function as they learn to optimize their performance. More broadly, al­
gorithms are a symbol for the increasing automation of decision-making in
public life. The “algorithmic turn” (Napoli, 2014) in gatekeeping which is
central in this book is part of a larger societal trend, where more and more
societal tasks are automated (e.g., Kitchin and Dodge, 2011; Lessig, 1999;
Steiner, 2012; see Chapter 2). While algorithms’ role in gatekeeping is the
main focus, the book ties into the growing journalism research field
studying news automation more broadly (e.g., Diakopoulos, 2019;
Linden, 2017; Thurman et al., 2019). In line with this literature on news
automation, the book does not deal narrowly with technical lines of
computer codes but understands algorithms which influence gatekeeping
as “heterogeneous and diffuse sociotechnical systems, rather than rigidly
constrained and procedural formulas” (Seaver, 2017 in Bandy and
Diakopoulos, 2020a, p. 38).
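To make the distinction between deterministic and machine learning algorithms concrete, consider the following minimal sketch. It is a hypothetical illustration added here, not an example from the book or from any actual news or social media platform; all item names, users, and weights are invented. A deterministic ranker applies the same fixed rule to every user, while a simple learning-style ranker shifts its ordering as it observes simulated click behavior.

```python
# Hypothetical illustration of the deterministic vs. machine learning distinction.
# All topics, users, and weights are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Item:
    topic: str
    recency: float                              # 1.0 = just published, 0.0 = old
    clicks: dict = field(default_factory=dict)  # per-user click counts

def deterministic_rank(items):
    """Follows the same predefined step for every user: newest first."""
    return sorted(items, key=lambda i: i.recency, reverse=True)

def learning_rank(items, user):
    """Adapts its ordering to observed behavior: previously clicked topics get boosted."""
    return sorted(items, key=lambda i: i.recency + 0.5 * i.clicks.get(user, 0), reverse=True)

feed = [Item("politics", 0.9), Item("sports", 0.8), Item("celebrity", 0.7)]
feed[2].clicks["user_a"] = 3  # simulated feedback: this user often clicks celebrity items

print([i.topic for i in deterministic_rank(feed)])       # identical order for every user
print([i.topic for i in learning_rank(feed, "user_a")])  # order shifts with user behavior
```

Real recommendation systems are of course far more complex, but the contrast shows why machine learning algorithms, unlike deterministic ones, can come to prioritize different content for different users as behavioral data accumulate.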
To understand the process and effects of algorithmic gatekeeping,
algorithms need to be studied in their interplay with other actors, most
notably professional communicators, and the general public.
Journalistic gatekeeping research goes back to White’s (1950) study of
how an individual journalist acts as gatekeeper and decides which
information makes it through the gate and becomes news, and which
information is kept out. Building and expanding on this pioneering
research, later studies have identified the influence of the organizational
context and extra-media influences on the decisions made by individual
journalists. Journalistic gatekeeping has come to mean more than only
selecting news, as later definitions also include the writing and editing
of news (Shoemaker et al., 2009). The increasingly important role of
automation in the news-making process and the role of social media
platforms in the distribution of news have made the gatekeeping process
even more complex. Gatekeeping decisions are made by diverse sets of
actors, most notably journalists and other strategic professionals, indi­
vidual amateurs as well as algorithms (see Wallace, 2018). Gatekeeping
has become decentralized and is in many instances the consequence of
the interaction between different actors. This makes it difficult to isolate
the consequences of algorithms in the gatekeeping process or to use the
language of directional causality when discussing their influence.
Where gatekeeping research traditionally studied gatekeeping within
journalistic news organizations, other professional communicators play an
increasingly important role in gatekeeping in today’s media environment.
On algorithmically controlled social media platforms, audiences encounter
diverse kinds of information about public affairs side-by-side, ranging
from entertainment to hard news, and from conspiracy theories to the
correction of misinformation. A clear example of these new sources of
information are Influencers who post about public affairs on social media
(see Chapter 4). They exemplify how technological, societal, and media-
internal developments have blurred the boundaries around the journalistic
profession (e.g., Lewis, 2012). Due to these faded boundaries, studies of
algorithmic gatekeeping need to consider professional communicators in a
broad sense.
This book has a normative starting point as it is interested in the effects
of automated procedures on news, understood as information about
current affairs which allows citizens to participate in public life and which
supports the functioning of the public sphere (e.g., Strömbäck, 2005). As
described in the introduction to this chapter, several concerns have been
raised about the influence of algorithms on the news which reaches the
public. The empirical analyses presented here tie into three broader
questions about the democratic impact of algorithmic gatekeeping.
First, do algorithms foster the spread of misinformation on social
media platforms? Misinformation can negatively affect social cohesion,
can undermine the legitimacy of elections and democratic governance, or
can be directly damaging to people’s health (e.g., Lewandowsky et al.,
2012). Chapter 3 analyzes the role of YouTube’s recommendation algo­
rithm in the spread of misinformation.
Second, do algorithms prioritize information which confirms people’s
worldview rather than presenting them with various perspectives? This would
have negative democratic consequences since it could make people’s beliefs
more polarized and undermine a unified public sphere (Stroud, 2010).
The algorithm audit presented in Chapter 3 studies whether watching a
video with misinformation leads YouTube’s algorithm to recommend
more videos confirming this perspective.
Third, do algorithms undermine deliberation on social media plat­
forms? For a well-functioning public sphere, it is important that people
engage in debate and are willing to express their own point of view.
Research has shown that the general population is less willing to speak out
on algorithmically driven social media platforms than in face-to-face
communication (Neubaum and Kramer, 2018). Chapter 4 analyzes how
Influencers perceive their power vis-à-vis Instagram’s algorithms to shed
light on the effects of these perceptions on their public opinion expression.
1.3 Technological determinism: hard and weak
This book studies how the gatekeeping power of algorithms takes shape in
the interaction of algorithms with other actors. This approach can be
contrasted with a technological deterministic view which often dominates
discourse around the democratic consequences of algorithmic gate­
keeping. Technological determinism sees technological developments as
an autonomous force, which creates negative societal effects (e.g., Shade,
2003). While the benefit of this view is that it raises awareness of the role
of media technology and its consequences, in its purest form (hard tech­
nological determinism) the role of technology gets overemphasized (see
Smith and Marx, 1994). It is a technology-centric approach with limited
attention to social forces which shape the technology, allow for the growth
in its use, and mediate the societal effects of technology. Also, hard
technological determinism pays limited attention to variations across
societal groups or potential positive effects (e.g., Shade, 2003). Technology
often becomes a scapegoat in this discourse.
A review of the literature on filter bubbles and echo chambers illus­
trates the limits of hard technological determinism for the understanding
of algorithmic gatekeeping. Debate about the power of algorithms is
strongly influenced by fears around filter bubbles and echo chambers,
despite empirical evidence challenging these fears (see Figure 1.1). In
2011, Eli Pariser published the book The Filter Bubble: What the Internet
is Hiding from You coining the term filter bubble. According to the filter
bubble argument, (1) people hardly get exposed on social media to
information that challenges their worldview, and (2) this is due to al­
gorithms that prioritize content that is similar to content which people
have previously interacted with, over more diverse content. Concretely,
when people with different interests type in the same search term in Google, Google's search algorithms would return completely different information, reflecting the different types of content they have previously interacted with. A similar effect should be present in
Facebook’s news feed, where people with different political identities see
news and political information with a different political outlook. As
a consequence of these online filter bubbles, the range of information
to which people become exposed would over time get narrower and
[Figure 1.1 appears here: line chart, 2010–2021; y-axis 0–350 publications; four series: news algorithm, social media algorithm, filter bubble, echo chamber]
Figure 1.1 Annual number of social science publications about news algorithms,
social media algorithms, filter bubbles, and echo chambers.
Note: Based on the Social Sciences Citation Index (SSCI); number of articles published annually on these topics.
narrower, which in turn creates self-reinforcing spirals. By prioritizing
information that confirms people’s worldview, algorithms would amplify
confirmation biases: people’s tendency to seek out information in line
with pre-existing beliefs (e.g., Nickerson, 1998), thus making them less
open to and curious about views that challenge their preconceptions.
Filter bubbles are closely related to echo chambers. As Nguyen (2020)
argues, it is important to distinguish between filter bubbles and echo
chambers, although the two terms in practice often are used inter­
changeably (see also Bruns, 2019). While a filter bubble refers to a situ­
ation where individuals are isolated from information that challenges their
worldview, an echo chamber is a group-level phenomenon where the
members of the group share the same worldview and reinforce this
worldview by on the one hand continuously confirming this to one
another and, on the other hand, discrediting voices with another per­
spective. Such groups can be found on social media such as Facebook or
YouTube, for example, around incels, climate change deniers, or anti-
vaxxers. An “echo chamber is a social epistemic structure from which
other relevant voices have been actively excluded and discredited”
(Nguyen, 2020, p. 141). The term echo chamber was popularized by Cass
R. Sunstein in the first decade of the 21st century, focusing primarily on
the internet and the blogosphere. In his 2017 book #Republic, Sunstein
expanded the discussion of echo chambers to social media. As with filter
bubbles, critics have pointed to algorithms behind social media as a villain
and cause of echo chambers. On Facebook, algorithms might drive people
toward groups of like-minded individuals and keep them entrapped in
these groups (e.g., Sunstein, 2017).
In their most extreme form, such concerns about algorithm-driven
filter bubbles and echo chambers reflect the central characteristics of
technological determinism: algorithms are seen as an autonomous force
leading to societal consequences such as radicalization and polarization,
with limited attention to variation in the way different societal groups
use the technology or to the way societal trends interact with the effects
of algorithms.
Empirical studies of filter bubbles and echo chambers challenge this
perspective (e.g., Flaxman et al., 2016; Geiß et al., 2021; Nechushtai and
Lewis, 2019; Puschmann, 2019). A thorough discussion of relevant
studies led Bruns (2019, p. 96) to conclude that “mainly, the debate
around these concepts and their apparent impact on society and
democracy constitutes a moral panic.” Based on their review of the lit­
erature, Zuiderveen Borgesius et al. (2016, p. 1) answered the question
Should we worry about filter bubbles? with the conclusion that “at present
there is little empirical evidence that warrants any worries about filter
bubbles.” They list the following reasons why worries about filter bub­
bles might be overstated.
• Without personalization algorithms, people tend to limit the range of
views they expose themselves to. Selectively exposing oneself to
information in line with one’s preferences is not a new phenomenon.
It has existed long before social media and algorithms.
• Social media and the internet also offer new opportunities for
encountering surprising information which widens one’s worldview.
• At the time of writing, personalization algorithms were not techni­
cally advanced enough to give each audience member information in
line with what they want.
Several empirical studies indeed support the observation that algorithms do
not have the strong personalization effects which critics suggest. Haim et al.
(2018), for example, studied the personalization effect of Google News and
found very limited differences in search results depending on what people
had previously searched for. Möller et al. (2018) studied how different types
of news recommendation systems on a newspaper website affect diversity of
recommended content. They found no evidence that basing recommenda­
tions on user history leads to limited diversity in recommendations.
Two studies by the Reuters Institute for the Study of Journalism not
only challenge the idea that algorithm-driven social media narrow the
range of views to which the audience is exposed (Fletcher and Nielsen,
2018a, 2018b). They even showed evidence suggesting the exact opposite:
on algorithm-driven social media, people get exposed to more diverse
information than people who do not use such social media at all. In one
study, they illustrated empirically that people who use search engines like
Google to find news are exposed to more diverse and more balanced news
sources than people who do not use search engines. They attribute this to
what they call automated serendipity, “forms of algorithmic selection that
expose people to news sources they would not otherwise have used”
(Fletcher and Nielsen, 2018a, p. 977). In another study, Fletcher and
Nielsen (2018b) studied whether people get exposed to news sources
without explicitly looking for them (so-called incidental exposure) when
using social media channels. They concluded that people who use
Facebook, YouTube, and Twitter primarily for entertainment, get ex­
posed to more online news channels than people who do not use social
media at all. Boulianne et al. (2020) likewise challenged the idea that ex­
posure to algorithm-driven news sources leads to partisan polarization
because people are no longer exposed to opposing views.
Some studies illustrate that algorithms can foster selective exposure,
but that these effects cannot be attributed to algorithms alone. A study
into exposure to cross-cutting information on Facebook by Bakshy et al.
(2015) found that Facebook’s algorithms decreased content diversity.
However, this effect of algorithms on content diversity was remarkably
smaller than the effects of people’s own tendency to self-select into
networks with people with similar political preferences (Bakshy et al.,
2015). Dubois and Blank’s (2018) argue that the effect of algorithms
should be studied in the context of the broader media environment, not
focusing on single social media channels. In a representative survey among
the British population, they found that only a minority of the respondents
never or rarely disagree with content they encounter on social media and
never or rarely check diverse political sources online. They estimate that only
8% of the population might be in danger of being caught in an online echo
chamber, and these people might still be exposed to diverse information
offline through personal conversations with friends and family. Bodó et al.
(2019) revealed that there are large parts of the population who prefer
diversity-enhancing automated news recommendations over diversity-
limiting news recommendations. These people are less likely to fall into a
negative spiral where their news preferences combined with news recom­
mendations limit the diversity of the information they are exposed to. On the
other hand, there are parts of the population who are less interested in
diversity and who might get caught in “diversity reducing feedback loops”
(Bodó et al., 2019, p. 206). This research shows that algorithmic gatekeeping
can have distinct effects across various societal groups.
In sum, empirical research challenges the hard technological determi­
nistic view of the negative influence of algorithmic gatekeeping on filter
bubbles and echo chambers. Instead, this research is more supportive of a
softer version of technological determinism, acknowledging that the
influence of algorithmic gatekeeping should be studied in a broader con­
text; that the influence is not necessarily negative; that algorithmic gate­
keeping affects people differently; and that it is the way social actors and
social factors shape and use these algorithms which determines their power.
This soft technological determinism is also supported by other algorithmic
gatekeeping studies beyond research into filter bubbles and echo cham­
bers. Bandy and Diakopoulos (2021) for example showed that Twitter
features more junk news (understood as news which does not help the
audience to fulfill its citizen role) when Twitter’s timeline is driven by al­
gorithms, compared to when it is sorted chronologically. Part of this can
be explained by the specific working of the algorithm, but they conclude
that the role of the algorithm is “a fairly minor, supporting role in shifting
media exposure for end users, especially considering upstream factors that
create the algorithm’s input – factors such as human behavior, platform
incentives, and content creation” (Bandy and Diakopoulos, 2021, p. 1).
1.4 Theoretical considerations
Supply-and-demand framework
One theoretical starting point to understand how algorithmic gatekeeping is
situated in the interplay between platforms, professional communicators,
and audiences is offered by Munger and Phillips’ (2022) supply-and-
demand framework. Munger and Phillips (2022) list factors which explain
the emergence of communities around right-wing content on YouTube, or
more broadly, the emergence of alternative media clusters. This frame­
work helps to see the effects of social media algorithms in context and to
understand how they interact with other technical affordances and audi­
ence desires and behaviors.
On the supply side, Munger and Phillips (2022) highlight the affor­
dances of the YouTube platform which are favorable to the growth of
communities around right-wing content. The possibility to discover con­
tent through video suggestions by YouTube’s algorithms is one of these
supply-side features of the YouTube platform which fosters alternative
media clusters. YouTube’s algorithms determine which videos show up in
a search, which videos are suggested on YouTube’s starting page, and
which videos are suggested to watch next after watching a video. There is a
large degree of similarity between videos which have been watched pre­
viously and what YouTube’s algorithms suggest watching next (e.g.,
Airoldi et al., 2016; O’Callaghan et al., 2015). This could foster the growth
of communities around specific types of content. Central in the supply-
and-demand model is the notion that YouTube’s algorithms are far from
the only factor driving the growth of right-wing communities. There are
other affordances which make the growth of such communities more
likely: YouTubers can actively influence whether their videos will be dis­
covered by adding descriptive keywords as metadata (tagging) or by using
clickbait-style headlines. Furthermore, video is a form of content which is
relatively cheap to produce and can convey emotion more powerfully than
text. The specific monetization structure of YouTube favors the building
of communities and the continuous stream of new content. Popular
YouTubers can sell advertising around their videos, offer channel mem­
bership, or create content specifically for YouTube Premium subscribers.
YouTube can demonetize videos around sensitive topics, but this still
leaves YouTubers with the opportunity to use alternative crowdfunding
sources to get paid.
Isolating YouTube’s algorithms as the driving force behind commu­
nities around right-wing content on the platform would also overlook the
importance of audience demand for such content (Munger and Phillips,
2022). On the demand side, we find factors such as a loss of identity due to
societal changes, or feelings of disconnection and isolation in the offline
world, which might drive people to search for a feeling of community on
social media platforms. They may develop para-social relations with
YouTubers, who substitute for friends and meaningful connections which
the audience lacks in real life. The video format seems to be particularly
attractive to audiences looking for such connections, as it is less de­
manding to process than for example reading long blog posts.
While Munger and Phillips (2022) use the supply-and-demand model
to explain the presence of right-wing communities on YouTube, the
model can likewise be used to understand that algorithms are only one
factor among several factors influencing the spread of content on social
media platforms. This framework is a useful antidote against a hard
technologically deterministic understanding of the power of news and
social media algorithms.
The structuring power of algorithms
Discussions on how to perceive the power of algorithms mirror similar
discussions about the influence of the media on politics (see also Klinger
and Svensson, 2018). Here, two broad perspectives can be distinguished
(e.g., Van Aelst et al., 2008). First, an actor perspective, where power is
seen as “a successful attempt by A to get B to do something that he would
otherwise not do” (Dahl, 1957, p. 203). Here, being in power is seen as
having the upper hand in an asymmetric relation. In the second perspec­
tive, the power of the media is assessed from a structural perspective. Here,
the focus is more on the way the media set the stage on which politics is
performed and the conditions within which politicians have to maneuver.
According to this perspective, the media do not determine specific political
outcomes, but instead determine the unwritten rules which politicians have
to adapt to when they pursue their goals.
In line with a soft technological deterministic view on algorithmic
gatekeeping, the power of algorithms can be better understood from a
structural rather than actor perspective. Following Dahl’s definition of
power, seeing the algorithm–human relation as a power balance would
require the algorithm to purposefully force users to do something against
their will. To purposefully force others to do something would require
agency: the ability to autonomously act out of free will. However, it is
problematic to talk about an algorithm as having agency in the same
way as humans would have agency. Klinger and Svensson (2018) stress
that algorithms lack evaluative and reflective capacities. Even though
algorithms might be self-learning, this does not mean that they make
deliberative decisions on what is desirable or the right trajectory of
action when faced with unexpected situations. Without such evaluative
and reflective capacities, algorithms lack the intentionality that defines
human agency. Bucher (2018, pp. 50–54) underlines the fact that the way
algorithms operate is the result of the interaction of different actors and
forces, including the people who designed and programmed the algo­
rithm, the algorithm itself as well as the users whose behavior also affects
the process and outcomes of the algorithms. The assemblage of actors
which determine the working of the algorithm makes it difficult to see
algorithms as possessing agency.
Instead of studying the power of social media algorithms from an actor
perspective, it makes sense to see these algorithms as a structuring power.
The algorithms help social media users to obtain specific resources (such as
access to followers) or fulfill certain needs (e.g., feeling part of a com­
munity). Consequently, these algorithms structure how social media users
interact with each other and the social media platform. In doing so, the
algorithms do not determine the exact decisions which the social media
users make but set the ground rules which the social media users have to
adhere to if they want to be in the best position to obtain these resources or
fulfill these needs. Bucher (2012, p. 1165) made this concrete by showing
that “algorithmic architectures dynamically constitute certain forms of
social practice around the pursuit of visibility.” Facebook’s algorithm
determines which posts are shown to whom and in which order and thus
whose views and output are visible. In order to be visible, Facebook users
are forced to post regularly and have an incentive to like and comment on
other people’s posts. As we shall see in Chapter 4, Instagram’s algorithms
have a similar shaping power over Influencers.
1.5 Legitimacy and trust
Algorithmic gatekeeping, which positively affects the quality of the
public sphere and the ability of people to fulfill their citizen role, needs to
be legitimate and trusted. This is particularly relevant because algo­
rithms are opaque: their role is often invisible. Even when algorithms
are visible their working remains obscure (Bucher, 2018). Since people
cannot monitor how algorithms work, they have to accept their power
based on a perception that they are just and fair (in other words see them
as legitimate) and they have to be willing to be vulnerable to the actions
of the algorithms (in other words trust them).
Legitimacy can be defined as “a generalized perception or assumption
that the actions of an entity are desirable, proper and appropriate within
some socially constructed system of norms, values, beliefs, and definitions”
(Suchman, 1995, p. 574). When people see the power of algorithmic
gatekeepers as legitimate, they accept the role which algorithms play in the
public sphere and see this role as justified and rightful. A distinction can be
made between objective and subjective approaches to legitimacy (Hurd,
2012). Following the objective approach, we can discuss whether algo­
rithmic gatekeepers support the public sphere according to some norma­
tive standards or specific views on democracy. Following the subjective
approach, we can study whether people perceive algorithmic gatekeeping
as acceptable. From this perspective, legitimacy “exists only in the beliefs
of an individual about the rightfulness of rule” (Hurd, 2012). Ideally, there
is a close link between objective and subjective legitimacy, and the general
public and communication professionals perceive algorithmic gatekeepers
as just when they indeed support the public sphere. If power is not per­
ceived as legitimate, people might still adhere to it, but motivated by
possible personal gains or lack of alternatives rather than because they
believe the power is just and fair. Algorithmic power without legitimacy
might lead to resistance against the algorithms and could ultimately be an
incentive for people to leave a social media platform (Velkova and Kaun,
2021). While sources of legitimacy of media institutions have been ex­
tensively studied (e.g., Skovsgaard and Bro, 2011; Van Dalen, 2019b), less
is known about which criteria people use to assess whether the power of
algorithms is legitimate. Van Dalen (2019b) summarizes five sources of
legitimacy for media institutions: the character of the professionals (in
other words, whether journalists are perceived as good people), connec­
tions to other legitimate institutions (in other words, whether media
institutions can piggyback on its relation with other institutions, which are
seen as legitimate, such as the state or experts), ethical standards (such as
the objectivity norm), constituents (in other words, the target group who
benefit from the actions of media institutions, such as the public at large),
and beneficial outcomes (such as the important democratic roles which the
media fulfill by holding governments accountable or informing the elec­
torate). Chapter 4 will describe similar legitimacy criteria for social media
algorithms.
Trust can be defined as “the willingness of a trustor to be vulnerable to
the actions of a trustee based on the expectations that the trustee will
perform a particular action, irrespective of the ability to monitor or con­
trol that other party” (Mayer et al., 1995, p. 712). Given the enormous
amount of information available online, it would be a challenging task for
the public to individually go through all sources and select relevant
information. Similar to the trusted news media (see Coleman, 2012),
trusted algorithmic gatekeeping can thus reduce complexity and help to
minimize effort, allowing the audience to fulfill their citizen role.
Gatekeeping algorithms are complex and get continuously updated. Thus,
social media users do not have the possibility to monitor or control
the algorithms. When using social media and automated news, the users
are vulnerable to the actions of the algorithms. They are dependent on the
algorithms to, on the one hand, produce, select, and prioritize information which is relevant to them and, on the other hand, to make sure that they do
not miss essential information or get wrongly informed. To trust algo­
rithms thus requires humans to believe that the algorithms do a good job
in selecting, prioritizing, and creating information.
In order to invoke trust, social media platforms often underline the
fact that their algorithms lack subjectivity and personal biases (Gillespie,
2014, p. 179): “The careful articulation of an algorithm as impartial
(even when that characterization is more obfuscation than explanation)
certifies it as a reliable sociotechnical actor, lends its results relevance
and credibility, and maintains the provider’s apparent neutrality in the
face of the millions of evaluations it makes.” However, as we shall see in
Chapter 2, trust in social media algorithms is not only determined by
platforms and the working of their algorithms (the object of trust or
trustee), but as much by the social media users who use them (the
trustors). Trust in algorithms is influenced by people’s understanding of
how these automated computer systems work, which in turn is strongly
shaped by comparisons with how the human mind works (Shariff et al.,
2017). Such comparisons could result either in attributing human characteristics to the algorithmic systems, or in sharply distinguishing between
tasks which are suitable for machines on the one hand, and tasks which
should be left to humans on the other (e.g., Lee, 2018). Research into
algorithmic forecasting has for example shown that people are often
unwilling to rely on the advice of algorithms, even when such advice
outperforms human forecasting (Burton et al., 2020). Thus, the promise
of algorithmic objectivity does not necessarily translate into public trust
in these algorithms. This makes it relevant to study how people perceive
news algorithms and how this is related to trust.
For a deeper understanding of why legitimacy and trust are relevant
for algorithmic gatekeeping, it is helpful to look at why legitimacy and
trust matter for journalistic gatekeeping. The press is often described as
the fourth estate since it performs the important democratic role of in­
forming citizens and holding government accountable. Still, the basis of
authority of journalism is different from the basis of authority of the first
three powers: the workings of the legislative, executive, and judiciary are
guided by clear rules and regulations which are laid down in constitu­
tions and democratic procedures (e.g., Cook, 1998, p. 109). Journalists’
work is rather based on unwritten social patterns of behavior and rou­
tines which are followed across organizations and remain rather stable
over time (Cook, 1998). In the Western world, journalists’ work is
protected by freedom of expression and their editorial independence is
guaranteed in press laws. Other professions like medics or lawyers have
stricter guidelines and inclusion and exclusion mechanisms into the
profession than journalism, which in many Western countries is not a protected title (Skovsgaard and Bro, 2011). On the one hand, this gives
journalists and news organizations freedom to organize their work ac­
cording to their own standards rather than standards and guidelines set
by other institutions. On the other hand, this also comes with the
downside that journalists cannot claim to adhere to written rules and
procedures when they want to justify their power and claim legitimacy.
Trust in the mainstream media is a necessary condition for the legitimacy
of the press. If the press is not trusted by the general public, it is easier
for politicians to ignore criticism from journalists or obstruct efforts by
the media to hold them accountable (e.g., Van Dalen, 2019a).
Like journalistic gatekeeping, algorithmic gatekeeping has long been
shielded from government regulation. European regulation stipulates
that courts or regulatory authorities may not impose on journalists and
news organizations certain ways of conducting their work (Helberger
et al., 2020). This gives news organizations freedom to use algorithms in
their reporting “as long as doing so does not conflict with the enjoyment
of fundamental rights of others” (Helberger et al., 2020). An argument
can be made that automated prioritization and selection of information
on social media platforms are also forms of (quasi-)editorial judgment
(Leerssen, 2020). Still, the rights and freedoms also come with respon­
sibilities, in particular to recognize and deal with the negative conse­
quences of algorithmic gatekeeping on fundamental rights and the
functioning of the public sphere. Examples of such negative conse­
quences include the circulation of hate speech, interference with election
results, or the spread of health-related disinformation.
Concerns about the rapid implementation of algorithms in society
and debate about the negative ethical implications have led to the for­
mulation of several ethical guidelines. Hagendorff’s (2020) analysis of 22
ethical guidelines for AI however showed that such guidelines are gen­
erally non-binding and lack mechanisms of enforcement. This limits the
role of ethical guidelines in making the general public and professional
communicators accept algorithmic power. Concerns about negative
consequences of algorithm-driven social media platforms combined with
the limited effects of ethical guidelines have led to legislative initiatives to
counter democratic risks posed by these platforms. Examples of this are
the European Union’s AI Act, Digital Services Act, and Digital Markets Act (e.g., Helberger and Diakopoulos, 2022). These Acts require platforms to be transparent about the algorithms used for recommen­
dations; oblige large platforms to allow independent audits of their
measures to prevent abuse of their systems; and force them to provide
researchers access to data (European Commission, 2022a). The impli­
cations of such regulation for trust in algorithmic gatekeeping will be
addressed in the final chapter.
1.6 Outline and content of the book
Three empirical chapters further our understanding about the power of
news algorithms by focusing on, respectively, the general public, plat­
forms, and professional communicators.
Chapter 2 analyzes how the general public perceives and approves of
news algorithms. Following an overview of the antecedents of algorithm
approval found in the literature, the chapter reports the results of three
surveys among the general Danish population. The results show that
trust in news algorithms and approval of news algorithms is low. Behind
this seems to lie an implicit comparison with human journalists, who are
seen as more objective and neutral, and thus more trustworthy. Still, the
general population is more approving of algorithms playing a role in
news selection than in other areas in public life. This is especially true for
younger generations and when humans are involved in the process.
Chapter 3 analyzes the mediating power of algorithms on social media
platforms. Mediation refers to “the relaying of second-hand (or third-
party) versions of events and conditions which we cannot directly
observe for ourselves” (McQuail, 2010, p. 83). Media organizations have
always had a mediating role between the audience and broader society.
Social media algorithms play a similar role. When they filter, prioritize,
or censor certain information, they influence which views and perspec­
tives are most likely to reach the audience. This mediating power is
analyzed in a case study of the presence of different types of mis­
information in YouTube search results and recommendations around
autism. The chapter shows that YouTube’s search and recommendation
algorithms do not recommend misinformation which is directly harmful.
Still, some evidence was found that the algorithms suggest and recom­
mend less harmful misinformation. These results cannot be attributed to the algorithms alone, since they are also shaped by other supply-side and demand-side factors on YouTube. Still, they raise important questions
about how social media platforms should handle their responsibility and
what duty and opportunity they have to use algorithms to downplay or
correct misinformation.
Chapter 4 analyzes how professional communicators like Influencers
perceive the power balance between themselves and the algorithms of the
platforms on which they communicate. A study of how professional
communicators perceive and react to the power of Instagram’s algorithms
shows how dependent they are on these non-transparent algorithms for
access to user engagement. The algorithms limit their autonomy. This can
have serious democratic consequences such as a tendency toward main­
stream perspectives and limited room for doubts or nuance. Against this
background, the chapter lists criteria which professional communicators
use to assess the legitimacy of algorithmic power.
The final chapter reflects on the findings of the preceding chapters in
the light of the aims of the book. The chapter highlights lessons learned
about the power, trust, and legitimacy of news algorithms and the
challenges they pose for social media platforms and news organizations.
Ultimately, concrete initiatives are proposed to strengthen the trust­
worthiness and legitimacy of news algorithms.
Empirically, these chapters build on the results of three representative
surveys, two survey-embedded experiments, a quantitative content anal­
ysis of videos recommended by YouTube’s algorithms and a qualitative
analysis of the way professional communicators relate to social media
algorithms. The surveys and experiments are conducted in Denmark,
while the other analyses study English-language content. Thus, the em­
pirical results should be understood in the context of Western media
systems. Instead of analyzing much studied social media platforms like
Twitter or Facebook, the manuscript focuses on YouTube and Instagram.
As described in Chapters 3 and 4, these two platforms are used by large
parts of the population, especially younger segments. They distinguish
themselves from other social media platforms by their visual nature. On
YouTube and Instagram, audiences encounter public information of all
kinds, ranging from entertainment to hard news, and from conspiracy
theories to the correction of misinformation. These are also the platforms
where new actors like celebrities and Influencers compete for attention
with mainstream outlets and expert sources. Thus, these platforms exemplify what the new media environment looks like, making them instructive cases for studying the role of algorithms in disseminating
information and in affecting communication behavior. Furthermore,
YouTube and Instagram actively change and update their algorithms
to counter potential negative consequences. This provides a basis for
discussing how social media platforms should responsibly deal with the
power of their algorithms. Before narrowing the focus to these two
platforms, Chapter 2 first maps generalized trust and approval of news
algorithms among the general population.
2 Algorithm Aversion
2.1 Introduction
The previous chapter discussed the distinctive ways in which algorithms
play an increasingly important role in selecting news and writing news
(see Diakopoulos, 2019). How the public perceives such news algorithms
affects how they interact with the information written and selected by
these automated computer systems (Bucher, 2018; Rader and Gray,
2015). People often lack awareness of the role of algorithms in news
production and selection (Eslami et al., 2015). Those members of the
general public who are aware that news algorithms exist make sense of
them based on simplified mental models of how the technologies func­
tion rather than a deep understanding of their working. These mental
models are strongly influenced by two mental shortcuts. The first
shortcut is the tendency to compare algorithms to humans (Hofstadter,
1995). The second shortcut is the machine heuristic (Sundar, 2008),
the idea that decisions made by algorithms are neutral, since these
automated computer programs follow prescribed rules. As we will see in
this chapter, these heuristics and comparisons affect the way people
perceive, trust, and approve of news algorithms. Based on representative
surveys and two survey-embedded experiments, the chapter addresses
the following questions: How do people perceive the strengths and
weaknesses of news algorithms compared to human journalists? and Do
people trust and approve of news algorithms?
To address these questions, this chapter first presents a review of the
literature on news algorithm perceptions and approval. This review fo­
cuses on the comparisons between humans and algorithms and on the
machine heuristic. Previous research on approval of news algorithms is
placed in context by relating the findings to the broader literature of
algorithm approval. Next, new empirical evidence shows the impact of
machine heuristics on perceptions of strengths and weaknesses of news
algorithms. The data further reveal an alarming lack of trust in news
selected and written by algorithms compared to news selected and written
by humans. People however remain more positive toward algorithms
playing a role in the selection of news than in other areas of public life.
This is especially the case when algorithms and humans work together in
selecting the news, and among younger generations. The final chapter of
this book will return to these findings and point to human-algorithmic
collaboration as a way to build trust toward news algorithms.
2.2 Making sense of news algorithms
A central feature of news algorithms which affects how they are per­
ceived is their opacity: the lack of transparency and clarity around how they operate (Eslami et al., 2019). Many people lack awareness of the
existence of algorithms which prioritize and select information, such as
the Facebook News Feed curation algorithm (Eslami et al., 2015), or
Yelp’s review filtering algorithm (Eslami et al., 2019). Even when people
are aware of the existence of news algorithms, there is a lack of under­
standing about the way such algorithms make decisions and a lack of
awareness of their effects. Many college students who use Google to access news, for example, were not aware that search results are personalized (Powers, 2017).
People who are confronted with new technologies under uncertain conditions, like those created by the opacity of news algorithms, develop their own mental models to make sense of how these technologies function (Bucher, 2018, pp. 96–7). Such mental models are based
on different types of input, ranging from representations in popular
culture, public debate, and media coverage, to so-called folk theories
grounded on personal everyday experiences with news algorithms
(Bucher, 2018). Following DeVito et al. (2017, p. 3165), folk theories can
be defined as “intuitive, informal theories that individuals develop to
explain the outcomes, effects, or consequences of technological systems,
which guide reactions to and behavior towards said systems.” DeVito
et al. (2017) distinguish between operational theories and abstract the­
ories. Operational theories relate to the specific working of algorithms
and concrete rules which the algorithms follow (e.g., popular content will
be prioritized on Twitter). Abstract theories on the other hand represent
a generic idea of the function and broad role of algorithms on the
platform, without specification of concrete mechanisms or specific out­
comes (e.g., Twitter’s algorithm affects a user’s timeline).
These mental models are often based on (implicit) comparisons with
humans. Research in robotics has for example documented that people
often attribute human characteristics to automated systems, such as
moods or emotions. This anthropomorphism helps people to make sense
of the working of these abstract systems (Hofstadter, 1995; Zarouali,
Makhortykh et al., 2021). These implicit comparisons can be more
favorable toward either the automated systems or humans performing
similar tasks. When assessing the credibility of news and information
online, people are faced with more uncertainty than when consuming
news through more traditional channels like newspapers or television
where the sender can clearly be distinguished. Under such uncertain cir­
cumstances, the technical affordances, such as whether there is the pos­
sibility for interactivity, are used as cues to assess the credibility of
information (Sundar, 2008). The cues trigger heuristics or rules of thumb,
which affect credibility assessments. One mental shortcut that affects how
people think about news algorithms is the so-called machine heuristic
(Sundar, 2008). The machine heuristic assumes that because a machine
follows pre-defined rules it is less susceptible to subjectivity or unfair
favoritism than humans are: “if a machine chooses the story, then it must
be objective in its selection and free of ideological bias” (Sundar, 2008,
p. 83). Sundar and Nass (2001) found that people rated the same stories as
of higher quality when they were allegedly selected by computer systems
rather than human journalists. The machine heuristic and presumed lack
of bias might account for this result (see also Nielsen, 2016).
Contrary to what the machine heuristic suggests, news is perceived as
less credible when it is allegedly written by a “robot reporter” compared
to when a human journalist is portrayed as the author (Franklin
Waddell, 2018). This can be explained by expectancy violation theory.
Using the label robot could lead to high expectations about the objec­
tivity of the content. When the actual content then does not live up to
these expectations, people will be disappointed, and the credibility will
be perceived as lower (Franklin Waddell, 2018).
This research by Sundar (2008), Sundar and Nass (2001), and
Franklin Waddell (2018) suggests that the machine heuristic has dif­
ferent effects on the credibility of automatically selected news than on
automatically written news (Franklin Waddell, 2018, p. 249). This could
reflect that people understand news selection as a less creative and thus
less human task than news writing. Lee (2018, p. 4) makes a distinction
between “tasks that require more ‘human’ skills (e.g., subjective judg­
ment and emotional capability) and those that require more ‘mechanical’
skills (e.g., processing quantitative data for objective measures).” People
had equal trust in either algorithms or humans completing mechanical
tasks, such as making a work schedule. When it comes to tasks which are
perceived as requiring a human skill (such as hiring or work evalua­
tions), algorithms were perceived as less fair than humans, since algo­
rithms do not have intuition and cannot make subjective judgments
(Lee, 2018). Other studies confirm this distinction. Bigman and Gray
(2018) showed that for moral decisions, people are algorithm averse: when
it comes to decisions in moral areas, such as life and death decisions in
the military, medicine, or law, humans are preferred as decision-makers
compared to algorithms. Logg et al. (2019), on the other hand, found
algorithm appreciation when it comes to numerical estimates. Here,
algorithmic advice is preferred over human advice. These different pref­
erences of algorithms versus humans reflect how people perceive the rel­
ative strengths and weaknesses of algorithms and how this differs from the
way they think the human mind operates (Logg et al., 2019).
Dietvorst et al. (2015) showed that apart from assumptions about the
general working of algorithms, experience with algorithms matters for
algorithm trust and approval. When people have seen an algorithm
make a mistake, they quickly lose faith and subsequently will avoid al­
gorithms. When a human makes a mistake, this is more easily forgiven
and does not affect trust in other decisions to the same degree, even when
the mistake is more severe than the one made by the algorithm.
Representations of and discussions about algorithms in the main­
stream media might likewise influence the approval of news algorithms.
From research on robotics, it is known that representations of robots in
popular culture can either make people more accepting toward such
forms of automation, or more skeptical. Following the “Hollywood
Robot Syndrome” media depictions of robots affect people’s general
perceptions of robots (Sundar et al., 2016). When they recall friendly
movie robots, people are more supportive of the roles robots play in society more generally. This also makes them more positive toward
automatically generated news stories (Franklin Waddell, 2018). Media
depictions may also affect robot support negatively. Following the
Frankenstein complex, depictions of evil robots might trigger negative
connotations of robots, focusing on their shortcomings and the danger
they pose to humans (Kaplan, 2004; Kim and Kim, 2018). Similar to the
Frankenstein complex, it could be expected that the simple use of the term
“algorithm” might make people skeptical, given the negative attention to
algorithm-driven filter bubbles and echo chambers in public and media
discourse.
Surveys have given insight into how the general public approves of
news algorithms. Zarouali et al. (2021) documented that misconceptions
about media algorithms are common among the general population.
Araujo et al. (2020) showed that news recommendations attributed to AI
were seen as equally fair and useful as news recommendations attributed
to human editors. Fletcher and Nielsen (2019) showed that people across
different media systems even prefer automatically selected news stories
over stories selected by journalists. At the same time, there was broad
concern that such automatically selected news might give a narrow
perspective (Nielsen, 2016). This reflected a generalized skepticism where
respondents are critical toward both journalists and algorithms.
Thurman, Moeller et al. (2019) found that people prefer automatically
selected news when it is based on their own prior news consumption over
news which is automatically selected based on what peers have seen. This
could be related, on the one hand, to overconfidence in one's own ability to select relevant news and, on the other hand, to performance problems
of automatically generated peer recommendations. Younger respondents
are generally more aware of the role of algorithms in their social media
feeds and also take a more active role in trying to influence which
content algorithms show in their social media feeds (e.g., Bucher, 2018).
This might explain why younger generations are more positive toward
news algorithms and algorithm-driven news personalization than older
generations (Bodó et al., 2019; Fletcher and Nielsen, 2019).
2.3 Research questions
Building on the literature described above, two research questions are
posed to contribute to our understanding of news algorithm perceptions
and approval among the general population.
Research question 2.1:
How do people perceive the strengths and weaknesses of news algorithms
compared to human journalists?
As previous research has shown, perceptions of news algorithms are
heavily shaped by comparisons to human journalists. This chapter makes
these comparisons explicit, by asking whether people think that algorithms
or human journalists are better at selecting a specific type of content. In
line with the machine heuristic, it could be expected that algorithms are seen as better than humans at selecting information that is neutral, objective, balanced, and trustworthy. On the other hand, selecting such information could be seen as a “human,” rather than “mechanical,” task and should therefore be left to journalists. Following concerns about filter bubbles and echo chambers, the chapter also explores whether algorithms or humans are perceived as better at selecting information that is surprising, relevant, and offers different perspectives.
Research question 2.2:
Do people trust and approve of news algorithms?
Trust and approval of news algorithms are closely connected to how their
strengths and weaknesses are perceived. Generalized trust in algorithms
selecting and writing news will firstly be compared to generalized trust in
human journalists and news organizations. This analysis of generalized
trust will be supplemented by a more narrowly focused analysis of trust in
news algorithms, studying how people react to scenarios where either a human journalist or an algorithm makes a journalistic mistake. Following
Dietvorst et al. (2015), it could be expected that people are less forgiving
toward algorithms making mistakes than toward human journalists making
mistakes. Next, the approval of news algorithms will be compared to
approval of news algorithms making decisions in other areas of public life,
such as court cases, hiring decisions, or hospital patient prioritization. This
will give insight into whether news selection is seen as a more human or
more mechanical task than other important decisions in public life (Lee,
2018). In the analysis of trust in algorithms and approval, special attention
will be given to age differences. Following Bodó et al. (2019) and Fletcher
and Nielsen (2019), it can be expected that younger generations are more
positive toward news algorithms than older generations.
2.4 Method
To answer these research questions, this chapter reports the results of three
representative surveys among the Danish population. These surveys
included both questions about perceptions of news algorithms as well as
two survey-embedded experiments. As described above, previous surveys
about perceptions of news algorithms have given insight into approval and
trust in news algorithms around the world (e.g., Fletcher and Nielsen, 2019;
Thurman, Moeller et al., 2019). This chapter expands on these surveys
by directly comparing perceptions of news algorithms with perceptions of
human journalists, and by placing the approval of news algorithms in
context by comparing this to approval of algorithms in other areas of
public life. The first survey-embedded experiment tries to establish whether
the simple use of the value-loaded term algorithms affects approval of news
algorithms. The second survey-embedded experiment follows the approach
by Bigman and Gray (2018) and presents the respondents with different
scenarios in order to compare approval of news algorithms with approval
of human journalists and how mistakes affect this approval.
Concretely, the analysis is based on the following three surveys:
• The question whether journalists or algorithms are better at selecting
different types of content (Figure 2.1) was examined in a survey
among 1225 Danish social media users. These data were collected
between 25 March and 7 April 2020.
• The questions about trust in news algorithms compared to trust in
other actors as well as the questions comparing approval of news
algorithms with approval of algorithms in other areas of public life
were studied in a survey among 1210 social media users in the period
between 15 and 30 April 2020.
Both these surveys were conducted by the polling company Epinion,
and participants were recruited from Norstats Online panel. All
respondents were between 18 and 65 years old and use Facebook at
least once a month. Both studies were weighted to be representative of
the Danish population on the parameters of gender, age, and region.
• The survey-embedded experiment comparing approval of news
algorithms with approval of human journalists was part of a survey
completed by 1200 Danes. These data were collected between 17 and
25 November 2021. This survey was conducted by polling company
DMA research, and participants were recruited from Bilendi’s online
panel. Respondents were between 18 and 88 years old and 81.2% of
the respondents were regular Facebook users. The data were weighted
to be representative for the Danish population on the parameters of
gender, age, region, and education.
The final part of the analysis reports differences in approval of news al­
gorithms between four generations. The operationalization of the four
generations is based on a classification by the Pew Research Center (Dimock,
2019): Generation Z: born in 1997 or later; Millennials: 1981–1996;
Generation X: 1965–1980; Boomers: 1946–1964.
The exact question wordings and experimental conditions in the three
surveys are described in the following section together with the results.
2.5 Results
Man vs machine
Respondents were asked whether they thought that algorithms or
human journalists and editors are best at selecting different types of
content (see Figure 2.1).

Figure 2.1 Who do you think is best at selecting the following type of content?
[Bar chart: for each type of content, the share of respondents (0–100%) choosing human editors and journalists, computer algorithms, equally well, or do not know. Content types: trustworthy content; news that offers different perspectives; content that is surprising; balanced content; objective content; neutral content; content that has personal relevance for me.]
Notes: N = 1225 social media users between 18 and 65. Question wording: Every news website, mobile app or social network makes decisions about what news content to show to you. Increasingly, these decisions are made by computer algorithms, which use different criteria to select which news stories to show. Who do you think is better able to select content with the following qualities? 1. Human editors and journalists; 2. Computer algorithms; 3. Equally well, 4. I do not know (% of population choosing each category).

When it comes to content with personal relevance for the user, the majority of respondents think that algorithms
outperform human journalists. Only 16% of respondents have more
faith in human journalists in this respect. Personally relevant content seems to come with a downside. Less than 20% of respondents think
that algorithms are better than humans at selecting content that is
surprising or news that offers different perspectives. Here human
journalists are clearly preferred. This seems to indicate that fears of
filter bubbles influence people’s assessment of news algorithms.
In total, 30% of social media users think that algorithms are better at
selecting neutral content than humans, which is in line with the machine
heuristic described above. This is more than the share of respondents
who think that human journalists are best at this. Even though news
algorithms are seen as more neutral than human journalists, they are at
the same time seen as less able to select objective or balanced content.
Here the share of respondents who prefer human journalists is almost
twice as large. Thus, the perceived technical neutrality of algorithms is
not seen as the same as absence of bias.
When it comes to selecting trustworthy information, respondents
overwhelmingly prefer human journalists (60%) over computer algo­
rithms (9%). Low trust in news algorithms was confirmed in further
survey questions. When asked directly how much social media users
trust news selected by computer algorithms, 72% of respondents
answer that they have no or low trust, 25% have neither high nor low trust, and only 3% have trust or high trust (n = 595). To test whether
this low trust might be due to negative connotations with the term
algorithm rather than an expression of true concerns with their role in
news selection, we randomly divided respondents into two groups.
One group was asked how much trust they have in news selected
by algorithms, and one group was asked how much trust they have in
news selected by automated computer systems. Also, among the
group who was asked about automated computer systems, trust was
very low: only 2% of respondents said they (highly) trust such news
(n = 615).
This low trust in news selected by algorithms stands out even more
clearly when it is compared to trust in the news media and to trust in
automated computer systems. Around half of the respondents have (a
great deal of) trust in the news media (52%) or journalists (44%). Less
than one in six respondents say they trust artificial intelligence (12%),
robots (14%), or computer algorithms (15%), but such trust levels are
still significantly higher than trust in news selected by algorithms.
Thus, when it comes to trust in news selected by computer algorithms,
the whole is smaller than the sum of the parts.
Algorithm aversion
The lack of trust in news algorithms was confirmed in an experiment
studying the approval of news written by news algorithms. Following
the literature on algorithm aversion discussed above, an experiment
was set up to test two hypotheses. Hypothesis 1 predicts that people
approve less of automated computer systems writing articles than of
journalists writing such articles. Hypothesis 2 predicts that approval
decreases more when an automated computer system makes a jour­
nalistic error than when a journalist makes a journalistic error. To test
these hypotheses, 1200 respondents were randomly divided into eight
groups of 150 respondents. Each of the respondents was asked to think
of the following scenario:
A Danish news website publishes articles about each question to a minister
in parliament and the subsequent answer.
These articles are written by a journalist
OR
These articles are written by an automated computer system.
The experiment furthermore varied whether the scenario indicates that a
human journalist checks the automated articles before publication and
whether a mistake was made in one of the published articles by
incorrectly referring to former PM Poul Schlüter instead of current PM
Mette Frederiksen as Prime Minister in a question about Corona
(see Table 2.1). The different scenarios are based on an actual mistake
made in an automated story on the website of the Danish news medium
Altinget (see Andreassen, 2020). After reading one of the scenarios, the
respondents indicated on a scale from 1 (strongly disagree) to 5 (strongly
agree) whether they agree that (a) it is appropriate for a journalist/
automated computer system to write these articles, (b) a journalist/
automated computer system should be forbidden from writing these
articles, and (c) I trust the journalist/automated computer system writing
these articles. The first two statements were adapted from Bigman and
Gray (2018). After reversing the answers to statement b, the three
questions were combined into one scale measuring approval (Cronbach’s
alpha .69, M = 2.44, SD = .97).
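For readers unfamiliar with this step, the scale construction amounts to reverse-coding statement (b) and averaging the three items, with Cronbach's alpha as a check on internal consistency. The following minimal Python sketch illustrates the procedure; the column names and the example answers are hypothetical, not the actual survey data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: columns are items, rows are respondents."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical answers on the 1-5 scale for the three statements.
df = pd.DataFrame({
    "appropriate": [4, 2, 3, 5, 1, 2],   # statement (a)
    "forbid":      [2, 4, 3, 1, 5, 4],   # statement (b), to be reverse-coded
    "trust":       [4, 1, 3, 5, 2, 2],   # statement (c)
})
df["forbid_reversed"] = 6 - df["forbid"]  # reverse a 1-5 item

items = df[["appropriate", "forbid_reversed", "trust"]]
print(round(cronbach_alpha(items), 2))    # internal consistency of the combined scale
df["approval"] = items.mean(axis=1)       # approval score per respondent
```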
The results of the experiment revealed general disapproval of the
idea of automated computer systems writing news stories. Approval
across the six conditions where the automated computer system writes
the news stories is well below the midpoint of the scale (M = 2.21,
SD = .90), and around one point on the five-point scale lower than
approval of journalists writing news stories about ministerial questions
(M = 3.15, SD = .82).

Table 2.1 Algorithm aversion experiment

Condition | Articles are written by | Mentions whether articles are checked? | Mentions mistake¹ | Additional information | Approval² M (SD)
1 | A journalist | No | No | – | 3.33 (.79)
2 | A journalist | No | Yes | – | 2.96 (.82)
3 | An automated computer system | No | No | – | 2.20 (.88)
4 | An automated computer system | Yes: the articles are not checked by a human journalist before publication | No | – | 2.18 (.95)
5 | An automated computer system | Yes: the articles are checked by a human journalist before publication | No | – | 2.33 (.95)
6 | An automated computer system | No | Yes | – | 2.18 (.90)
7 | An automated computer system | Yes: the articles are checked by a human journalist before publication | Yes | – | 2.19 (.83)
8 | An automated computer system | Yes: the articles are not checked by a human journalist before publication | Yes | "The editor of the news website said they take the mistake just as seriously as if it was made by a human." | 2.15 (.90)

Notes: A Danish news website publishes articles about each question to a minister in parliament and the subsequent answer. These articles are written by ….
¹ Recently one of these published articles mistakenly referred to Poul Schlüter instead of Mette Frederiksen as Prime Minister in a question about Corona.
² Scale from 1 (fully disapprove) to 5 (fully approve).

As expected, there is a significant difference in
the approval of news written by journalists (scenario 1; M = 3.33,
SD = .79) and news written by automated computer systems (scenario 3;
M = 2.20, SD = .88, t(295) = 11.72, p < .001). Thus, there is support for
Hypothesis 1. What is more, people are more approving of news written by
a journalist who makes a mistake (scenario 2; M = 2.96, SD = .82) than of
an automated computer system when no mistake is mentioned (scenario 3,
t(293) = 7.69, p < .001).
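The comparisons above are standard independent-samples t-tests between experimental conditions. As a minimal sketch of how such a comparison is computed, assuming simulated approval scores that only roughly mimic the reported group means and standard deviations (the respondent-level data are not reproduced here), one could run:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulated approval scores (1-5) for two conditions, loosely matching the
# reported statistics: journalist scenario (M = 3.33, SD = .79) vs.
# automated-computer-system scenario (M = 2.20, SD = .88).
journalist = np.clip(rng.normal(3.33, 0.79, size=150), 1, 5)
automated = np.clip(rng.normal(2.20, 0.88, size=147), 1, 5)

t, p = stats.ttest_ind(journalist, automated)
df = len(journalist) + len(automated) - 2
print(f"t({df}) = {t:.2f}, p = {p:.4f}")  # will not reproduce the exact reported value
```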
There are no significant differences between the six scenarios in
which the articles are written by automated computer systems. Thus,
contrary to Hypothesis 2, whether the algorithm made a mistake or
not did not affect approval. Nor did approval improve when the sce­
nario mentions that human journalists check the articles written by the
automated system before publication (scenarios 5 and 7), or when the
scenario mentions that the mistake is taken just as seriously as if it was
made by a human.
Algorithms everywhere
Automated news selection is only one example of the growing role of
algorithms in public life. Figure 2.2 shows that social media users are
more positive about the use of algorithms in news selection and dis­
semination than in other areas of public life. News selection is one of the areas of public life where the fewest people think only humans should
make decisions (24%). In line with low trust in news algorithms
described above, only a small minority (11%) thinks that algorithms
alone should select the news one receives. On the other hand, the large
majority (65%) thinks that algorithms should play a role in news
selection, as long as there are also human journalists involved. Also,
for tasks like matching unemployed people with firms, accounting,
speeding tickets, and the allocation of public funds, the majority of the
respondents think that algorithms should play a role, but primarily
with human oversight. For other areas of public life, the public is much
more skeptical about the added values of algorithms, compared to
humans making decisions. This is the case for decisions that have a life-changing impact on people's lives, such as hospital patient prioritization, parole, triage for nursing homes, or removing children from
their families. Also, when it comes to democratic decision-making,
such as elections or court cases, only a handful of respondents see a
role for algorithms and hardly anyone thinks that algorithms alone
should make decisions.
Gen Z
Figure 2.3 shows how different generations think about whether algorithms
should play a role in news selection. For all generations, the majority of
respondents think that it is best that algorithms and humans together select the news, but apart from that there are strong inter-generational differences.
Among the Boomer generation (born between 1946 and 1964), 40% believe
that humans alone should decide which news they receive, and only a small
minority thinks that this should be decided by algorithms alone. The
younger the respondents, the more positive they are toward news algo­
rithms. Among Generation Z (born after 1997), only 6% believe that hu­
mans alone should decide which news they receive. No less than 27% of this
generation believes that algorithms should select news without the inter­
ference of human journalists. A secondary analysis of the other results
presented confirms that Generation Z is the most positive toward news
algorithms. They are more positive than older generations about how
trustworthy, balanced, and objective news selected by algorithms is (results
not shown). Still, when asked directly, only 8% of Generation Z say that
they trust news selected by news algorithms.
Figure 2.2 Should humans or algorithms make decisions in different areas of public life?
[Bar chart: for each area of public life, the share of respondents (0–100%) choosing humans alone, humans and algorithms together, or algorithms alone. Areas: matching unemployed people with firms; accounting; the news I receive; targeted political advertisement; speeding tickets; allocation of public funds; hospital patient prioritization; job hiring decisions; parole (who should get it and when you are eligible); triage for nursing homes; who gets elected to the local city council; court cases; decisions on moral dilemmas (like euthanasia); placing children outside of their family.]
Notes: N = 1210 social media users between 18 and 65. Question wording: Decisions in society are increasingly made by computer algorithms or with the support of computer algorithms. In your view, what is the best way to make decisions in the following areas? 1. Humans alone should make decisions, 2. Humans should make the decisions together with algorithms, 3. Algorithms alone should make decisions (% of respondents choosing each category).
2.6 Conclusion
This chapter presented the results of surveys studying news algorithm per­
ceptions and trust among the general population. The surveys first and
foremost showed a lack of trust in news algorithms. Only a very small
minority said that they trust algorithms selecting news, which stood in marked
contrast with trust in journalists selecting news. When it comes to trust in
algorithms writing news, the results were similar. People even prefer journalists who make mistakes to write the news over algorithms that make no mistakes. These results reveal low generalized trust in algorithms, which
is not dependent on the actual performance of news algorithms. The low
levels of trust in automatically selected and created news were not triggered by
the term “algorithm” alone. When the more neutral term “automated computer systems” was used, people also expressed low levels of trust. This lack of
trust is accompanied by skepticism about the ability of news algorithms to
select content which is objective and presents diverse perspectives.
On the more positive side, the majority of the respondents think that
algorithms are better than journalists at selecting news with personal
relevance. People are more approving of news algorithms than of algo­
rithms playing a role in other areas of public life, such as ethical and
moral decision-making. Across generations, there is general acceptance
of algorithms playing a role in news selection as long as there are also
humans involved in the process.

Figure 2.3 Should humans or algorithms decide which news you receive? (Generational differences).
[Bar chart: for each generation, the share of respondents (0–100%) choosing humans alone, humans and algorithms together, or algorithms alone. Generations: Generation Z (127); Millennials (326); Generation X (473); Boomers (284).]
Notes: Number of respondents per generation in brackets. Question wording: Decisions in society are increasingly made by computer algorithms or with the support of computer algorithms. In your view, what is the best way to make decisions in the following areas? The news I receive. 1. Humans alone should make decisions, 2. Humans should make the decisions together with algorithms, 3. Algorithms alone should make decisions (% of each generation choosing each category).

This suggests that the mental model of
what it means to select and write information is still very much based on
the traditional image of a human journalist. This seems to change from
generation to generation. The Boomer generation which grew up with
newspapers and traditional news outlets like radio and television was less
supportive of algorithmic news selection. There was more support for
algorithmic news among younger generations, in particular Generation Z, who from a young age have used social media for news consumption.
This could indicate that over time the general population will become
more used to and thus more accepting of news algorithms.
The next chapter will shift the focus from broad perceptions of
algorithmic gatekeepers to a more narrow analysis of the mediating
power of algorithms on the social media platform YouTube.
3 The Mediating Power of
Algorithms
3.1 Introduction
From the 2016 US Presidential elections to debates about climate change
or COVID-19 vaccines, concerns have been raised worldwide about the
spread of misinformation on social media. Central in discussions about
misinformation on social media is the mediating power of algorithms. As
this chapter addresses, algorithms are blamed for stimulating the spread
of misinformation, but at the same time seen as potential remedies
against the spread of such information. Thus, algorithms play an
important role in maintaining the boundaries between legitimate and
illegitimate information.
This chapter presents an analysis of the role of social media algo­
rithms in the spread and correction of misinformation. This is analyzed
through a case study of YouTube’s algorithms and (mis)information
concerning the treatment of autism. The treatment of autism spectrum
disorder (ASD) is chosen as a case to analyze the role of algorithms in
spreading and countering misinformation for two reasons. (1) Social
media have become an important platform where people share and
search for information about autism (e.g., Saha and Agarwal, 2015).
(2) Apart from the importance of social media as a channel, the treat­
ment of autism is an interesting case, since it allows distinctions to be
made between truthful information, misinformation, and correcting
misinformation (see below). Thus, the research question which this
chapter addresses is: Do YouTube’s algorithms recommend truthful
information, misinformation, or corrections to misinformation concerning
the treatment of autism?
Before addressing this question, this chapter conceptualizes mis­
information and discusses the double role of social media algorithms in
spreading and limiting misinformation. This is followed by a presenta­
tion of the case and a review of previous research on the mediating
power of YouTube’s algorithms. The empirical analysis of the presence
of misinformation and corrections to misinformation among YouTube’s
search results and recommendations reveals that the algorithms suggest
some misinformation, but at the same time seem to limit the spread of
directly harmful misinformation. The final chapter of this book will
return to these results to discuss how social media platforms should best
handle the mediating power of their algorithms.
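The data collection behind this case study is described later in the chapter. Purely as an illustration of what retrieving YouTube search results around a topic can look like, the sketch below queries the public YouTube Data API (v3) for an autism-related search term. The API key is a placeholder, the query is an example, and results returned by the API are not necessarily identical to what a logged-in user sees in the app; this is a hedged sketch, not the book's procedure.

```python
# pip install google-api-python-client
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # placeholder; requires a Google Cloud project with the YouTube Data API enabled
youtube = build("youtube", "v3", developerKey=API_KEY)

# Top search results for an example autism-treatment query.
response = youtube.search().list(
    q="autism treatment",
    part="snippet",
    type="video",
    maxResults=25,
).execute()

for item in response["items"]:
    print(item["id"]["videoId"], "-", item["snippet"]["title"])
```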
3.2 Misinformation and algorithms
From the perspective of the sender, misinformation can be defined as “false
or inaccurate information, especially that which is deliberately intended
to deceive” (Lexico, 2021). From the perspective of the receiver, mis­
information can be defined as “information that is initially believed to be
valid but is subsequently retracted or corrected” (Ecker et al., 2014, p. 293).
Misinformation can have severe negative consequences, both at the societal
level and at the individual level. On the societal level, the general popula­
tion, electorate, or political elite may make decisions which go against the
best interests of society. On the individual level, misinformation can lead
people to make decisions which are directly harmful for themselves and
others. Misinformation is worse than ignorance, since research has con­
sistently shown that once people have encountered misinformation it is
difficult to correct and even when people know that information is false, it
remains influential (e.g., Ecker et al., 2014 for an overview).
Misinformation is not a new phenomenon, but the social media en­
vironment has made it a highly pressing topic for at least three reasons.
First, it is easy to publish misinformation on social media which can
potentially reach a large audience when it spreads virally. Second, it is
more difficult for social media users to distinguish misinformation from reliable information than it would be on more traditional channels. In newspapers and on television, it is easier for the audience to identify the
sender of a message than on social media. Third, once misinformation
has been published and shared on social media, it is difficult to control
and correct this information. Vosoughi et al. (2018) have shown that on
Twitter, false information reaches more people and is spread faster than
truthful information. They point to the newness of false information and
its emotional appeal as possible explanations for why this type of content
spreads so rapidly.
The presence and spread of misinformation on social media have led
to broad concerns among the general population and public authorities.
A poll in the United States in 2020 showed that two-thirds of Americans
thought that social media have a negative impact on the country and
that the main reason for this was concerns about the spread of mis­
information (Auxier, 2020). Public authorities like the FBI (2020) and
the European Commission (2022b) have warned against the spread of
misinformation on social media.
Algorithms play a double role in this spread of misinformation. On
the one hand, algorithms are blamed for fostering the spread of mis­
information, for example by leading people from more moderate content
toward more extreme content (e.g., Ribeiro et al., 2020) or by priori­
tizing misinformation because the content is emotional, novel, and
quickly gains a good deal of traction and interaction (e.g., Avaaz, 2020).
On the other hand, social media companies use algorithms and
machine learning to counter the spread of misinformation. Social media
companies themselves become increasingly aware of misinformation on
their platforms and take action against it. For example, Facebook states
on its website that it works actively toward “stopping false news from
spreading, removing content that violates our policies, and giving people
more information so they can decide what to read, trust and share”
(Facebook, 2020). Algorithms often play a central role in the detection
of misinforming content and subsequent actions taken to correct it (e.g.,
Gillespie et al., 2020). YouTube for example makes an active effort to
reduce recommendations for videos containing misinformation. It
announced in 2019 that it takes action aimed at “reducing recommen­
dations of borderline content and content that could misinform users in
harmful ways – such as videos promoting a phony miracle cure for a
serious illness, claiming the earth is flat, or making blatantly false claims
about historic events like 9/11” (YouTube, 2019). This misinformation is
identified using a “combination of machine learning and real people,”
involving “human evaluators and experts from all over the United States
to help train the machine learning systems that generate recommenda­
tions” (YouTube, 2019). With such efforts, algorithms take on a pow­
erful role in content moderation, which can be defined as “the detection
of, assessment of, and interventions taken on content or behavior
deemed unacceptable by platforms or other information intermediaries,
including the rules they impose, the human labor and technologies
required, and the institutional mechanisms of adjudication, enforcement,
and appeal that support it” (Gillespie et al., 2020, p. 2).
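To make the idea of algorithmic detection more concrete, the toy sketch below trains a text classifier that scores content for possible misinformation. It is emphatically not YouTube's system, which, as quoted above, combines machine learning with human evaluators; the training examples and labels are invented, and in practice such scores would at most queue content for human review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = flag as possible misinformation, 0 = leave up.
texts = [
    "miracle cure reverses autism in days",
    "new peer-reviewed study on early autism intervention",
    "this one supplement cures any serious illness",
    "public health agency updates its vaccination guidance",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability that a new title resembles the flagged examples.
print(model.predict_proba(["doctors hate this miracle cure"])[0][1])
```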
The intention to correct misinformation by using algorithms is inher­
ently praiseworthy. Given the enormous amount of content uploaded onto
social media platforms, automation can be a way to address problems of
scale. At the same time, scholars warn of the potential risks that this new
approach to content moderation poses (e.g., Cobbe, 2021; Gillespie et al.,
2020). Some of the risks mentioned by Aram Sinnreich (in Gillespie et al.,
2020) are the following. First, there is the danger of false positives and negatives: machine learning might wrongfully identify some correct information as
misinformation, while overlooking other cases of misinformation. Without
human oversight, this will go uncorrected. Second, subtle and nuanced
distinctions and assessments require human judgment. If these decisions are
left to algorithms, the more rigid algorithmic logic will determine between
Introduction to Nonprofit Accounting: The Basics
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 

Van Dalen 2023 Algorithmic gatekeeping for professional communicators Power trust and legitimacy.pdf

  • 1. ALGORITHMIC GATEKEEPING FOR PROFESSIONAL COMMUNICATORS Power, Trust, and Legitimacy Arjen van Dalen R O U T L E D G E F O C U S Focus
  • 2. ALGORITHMIC GATEKEEPING FOR PROFESSIONAL COMMUNICATORS This book provides a critical study of the power, trust, and legitimacy of algorithmic gatekeepers. The news and public information which citizens see and hear is no longer solely determined by journalists, but increasingly by algorithms. Van Dalen demonstrates the gatekeeping power of social media algorithms by showing how they affect exposure to diverse information and misinformation and shape the behavior of professional communicators. Trust and legitimacy are foregrounded as two crucial antecedents for the acceptance of this algorithmic power. This study reveals low trust among the general population in algorithms performing journalistic tasks and a perceived lack of legitimacy of algorithmic power among professional communicators. Drawing on case studies from YouTube and Instagram, this book challenges technological deterministic discourse around “filter bubbles” and “echo chambers” and shows how algorithmic power is situated in the interplay between platforms, audiences, and professional communicators. Ultimately, trustworthy algorithms used by news organizations and social media platforms as well as algorithm literacy training are proposed as ways forward toward democratic algorithmic gatekeeping. Presenting a nuanced perspective which challenges the deep divide between techno-optimistic and techno-pessimistic discourse around algorithms, Algorithmic Gatekeeping is recommended reading for journalism and communication researchers in related fields. Arjen van Dalen is Professor WSR at the Center for Journalism of the University of Southern Denmark. He wrote his PhD dissertation on Political Journalism in Comparative Perspective and is co-author of an award-winning book on this topic with Cambridge University Press. He published in journals such as Political Communication, Journalism, the International Journal of Press/Politics, and Public Opinion Quarterly. His research on algorithmic gatekeeping is funded by the Independent Research Fund Denmark and Carlsberg Foundation.
  • 3. Disruptions: Studies in Digital Journalism Series editor: Bob Franklin Disruptions refers to the radical changes provoked by the affordances of digital technologies that occur at a pace and on a scale that disrupts settled understandings and traditional ways of creating value, interacting, and communicating both socially and professionally. The consequences for digital journalism involve far-reaching changes to business models, pro­ fessional practices, roles, ethics, products, and even challenges to the accepted definitions and understandings of journalism. For Digital Journalism Studies, the field of academic inquiry which explores and ex­ amines digital journalism, disruption results in paradigmatic and tectonic shifts in scholarly concerns. It prompts reconsideration of research methods, theoretical analyses, and responses (oppositional and consen­ sual) to such changes, which have been described as being akin to “a moment of mind-blowing uncertainty.” Routledge’s book series, Disruptions: Studies in Digital Journalism, seeks to capture, examine, and analyze these moments of exciting and explosive professional and scholarly innovation which characterize developments in the day-to-day practice of journalism in an age of digital media, which are articulated in the newly emerging academic discipline of Digital Journalism Studies. Disrupting Chinese Journalism Changing Politics, Economics and Journalistic Practices of the Legacy Newspaper Press Haiyan Wang Algorithmic Gatekeeping for Professional Communicators Power, Trust, and Legitimacy Arjen van Dalen For more information about this series, please visit: www.routledge.com/ Disruptions/book-series/DISRUPTDIGJOUR
  • 5. First published 2023 by Routledge 4 Park Square, Milton Park, Abingdon, Oxon OX14 4RN and by Routledge 605 Third Avenue, New York, NY 10158 Routledge is an imprint of the Taylor & Francis Group, an informa business © 2023 Arjen van Dalen The right of Arjen van Dalen to be identified as author of this work has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. The Open Access version of this book, available at www. taylorfrancis.com, has been made available under a Creative Commons Attribution-Non Commercial-No Derivatives 4.0 license. Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloguing-in-Publication Data Names: Van Dalen, Arjen, 1980- author. Title: Algorithmic gatekeeping for professional communicators : power, trust and legitimac / Arjen van Dalen. Description: Abingdon, Oxon ; New York : Routledge, 2023. | Series: Disruptions : studies in digital journalism | Includes bibliographical references and index. Identifiers: LCCN 2023003541 (print) | LCCN 2023003542 (ebook) | ISBN 9781032450711 (hardback) | ISBN 9781032450728 (paperback) | ISBN 9781003375258 (ebook) Subjects: LCSH: Social media and journalism. | Algorithms. | Filter bubbles (Information filtering) Classification: LCC PN4766.V36 2023 (print) | LCC PN4766 (ebook) | DDC 302.23/1--dc23/eng/20230227 LC record available at https://lccn.loc.gov/2023003541 LC ebook record available at https://lccn.loc.gov/2023003542 ISBN: 978-1-032-45071-1 (hbk) ISBN: 978-1-032-45072-8 (pbk) ISBN: 978-1-003-37525-8 (ebk) DOI: 10.4324/9781003375258 Typeset in Times New Roman by MPS Limited, Dehradun
  • 6. Contents List of Figures vii List of Table viii Acknowledgments ix 1 Introduction 1 1.1 Algorithmic power 1 1.2 Conceptualizing algorithmic gatekeeping 2 1.3 Technological determinism: hard and weak 4 1.4 Theoretical considerations 8 1.5 Legitimacy and trust 11 1.6 Outline and content of the book 14 2 Algorithm Aversion 17 2.1 Introduction 17 2.2 Making sense of news algorithms 18 2.3 Research questions 21 2.4 Method 22 2.5 Results 23 2.6 Conclusion 29 3 The Mediating Power of Algorithms 31 3.1 Introduction 31 3.2 Misinformation and algorithms 32 3.3 YouTube and autism spectrum disorder 34
  • 7. 3.4 The mediating power of YouTube’s algorithms 36 3.5 Research questions 38 3.6 Method 38 3.7 Results 40 3.8 Conclusion 43 4 Structuring Power 46 4.1 Introduction 46 4.2 Influencers: politically relevant strategic actors 48 4.3 Algorithmic power 49 4.4 Research questions 51 4.5 Method 52 4.6 Results 54 4.7 Conclusion 61 5 Algorithmic Power, Trust, and Legitimacy 63 5.1 Algorithmic power 63 5.2 Algorithms, trust, and legitimacy 64 5.3 Implications 65 References 70 Index 82 vi Contents
  • 8. Figures 1.1 Annual number of social science publications about news algorithms, social media algorithms, filter bubbles, and echo chambers 5 2.1 Who do you think is best at selecting the following type of content? 23 2.2 Should humans or algorithms make decisions in different areas of public life? 28 2.3 Should humans or algorithms decide which news you receive? (Generational differences) 29 3.1 YouTube search results after searching for videos about autism using six different search terms 40 3.2 YouTube recommendations after five types of videos about autism 42 4.1 The relation between Influencers and Instagram’s algorithms as a balance of power 56
  • 10. Acknowledgments One of the central arguments in this book is that algorithmic gatekeeping can not be understood in isolation since it is the result of the interaction between the algorithm and its surroundings. The same can be said for the writing of this book, which is the result of my interactions with generous colleagues and institutions. First of all, I would like to thank the Independent Research Fund Denmark, which generously supported my research on algorithmic gatekeepers under grant 7015-0090B and the Carlsberg Foundation which supported the writing of this book and Open Access Funding with monograph grant CF19-0544. Kristina Bøhnke, Annette Schmidt, and research support at the SDU’s Faculty of Business and Social Sciences provided very professional support in the several rounds of funding application. A special thanks go to Erik Albæk, Claes de Vreese, and Michael Latzer for their help in the application process. The Center for Journalism and the Department of Political Science and Public Management at the University of Southern Denmark (SDU) provided a good base to conduct my research. Many thanks to the following students without whose help it would not have been possible to conduct the empirical analyses in the book: Anne-Marie Fabricius Markussen, Camilla Lund Knudsen, Freja Mokkelbost, Julie Holmegaard Milland, Kasper Viholt Nielsen, Kirsten Vestergaard Prescott, Kristina Finsen Nielsen, Kristine Tesgaard, Ninna Nygård Jæger, Sasha Dilling, Selma Marthedal, Sofie Høyer Christensen, and Thor Bech Schrøder. Tina Guldbrandt Jakobsen managed the employment of these students. Some of the results in Chapter 2 were earlier presented in the report MFI undersøger Synet på nyhedsalgoritmer. Ralf Andersson, Didde Elnif, and Morten Skovsgaard provided valuable feedback on this report which also helped shape the argument in Chapter 2. Drafts of the manuscript were discussed in the Journalism Research group at SDU. Thanks to research group members Camilla Bjarnøe, Christina Pontoppidan, David Hopmann, Didde Elnif, Erik Albæk,
  • 11. Fransiska Marquart, Jakob Ohme, Joe Bordacconi, Jonas Blom, Kim Andersen, Lene Heiselberg, Lisa Merete Kristensen, Majbritt Christine Severin, Morten Skovsgaard, Morten Thomsen, Peter Bro, Rasmus Rønlev, and Sofie Filstrup for their challenging and constructive feedback. In the last stage of the writing process, I discussed some of the central ideas with colleagues at the Digital Democracy Centre at SDU. I am sure this Centre will be a place for continuous conversation around the central themes of this book. During the writing of this book, I dove into the emerging research literature on the topic of algorithmic gatekeeping, which was highly inspirational. Here, the work of Nick Diakopoulos, Taina Bucher, Axel Bruns, Tarleton Gillespie, Kevin Munger, and Joseph Philips deserves special mention. Before the pandemic, it was possible for me to visit three research institutions where I worked on data collection and presented selected findings from this book. From ASCOR at the University of Amsterdam, I would like to specially thank Rens Vliegenthart and Mark Boukes. Bert Bakker generously shared his knowledge on pre-registration. From Södertörn University, I would like to thank Ester Appelgren, Mikolaj Dymek, Gunnar Nygren, and Nina Springer. From my stay at the Leibniz Institute for Media Research, Hans-Bredow-Institut (HBI), and University of Hamburg, I would like to thank Cornelius Puschmann, Jonathon Hutchinson, Jessica Kunert, and Michael Brüggemann. I would also like to thank the conference and workshop organizers and participants who allowed me to test early versions of the arguments presented in this book. Here, I would like to highlight in particular the 2018 ECREA pre-conference Information diversity and media pluralism in the age of algorithms, hosted by Edda Humprecht, Judith Möller, and Natali Hellberger; the ECREA Journalism Studies Section Conference 2019 convened by Folker Hanusch and the Journalism Studies Section at the University of Vienna; the ECREA Political Communication Section Conference 2019 hosted by Agnieszka Stępińska and the Faculty of Political Science and Journalism at the Adam Mickiewicz University in Poznań; the Knowledge-Resistance-workshop in 2020 upon invitation by Jesper Strömbäck of the University of Gothenburg; and an Augmented Journalism network-webinar in 2020, organized by Carl-Gustav Linden, Mariëlle Wijermans, and Andreas L Opdahl. A further word of gratitude goes to Routledge. I specially want to thank Bob Franklin, Editor of Routledge’s Disruptions: Studies in Digital Journalism-series for his confidence and encouragement, as well as Hannah McKeating and Elizabeth Cox for their guidance in the publication process. Three anonymous reviewers provided valuable input to the book. x Acknowledgments
  • 12. A large part of the work on this book took place during the Covid pandemic. Personal thanks go to Anna Batog, Niels Lachmann, Thomas Paster, and my family in The Netherlands for their support during this time. Head of Department Signe Pihl-Thingvad and Vice-Head Morten Kallestrup are thanked for allowing me to work at the university during the lockdown. I want to thank Dean Jens Ringsmose for his encouraging words to the staff of the faculty during this period. A special word of thanks goes to everyone who reached out to me during the lockdown. That was very much appreciated! Acknowledgments xi
  • 13.
  • 14. 1 Introduction 1.1 Algorithmic power The news and public information which citizens see and hear are increasingly influenced by algorithms. Algorithms are “encoded pro­ cedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2014, p. 167). Such automated proce­ dures fulfill a gatekeeping role and automatically select, sort, and prioritize public information (e.g., Bucher, 2018; Diakopoulos, 2019; Kruikemeier et al., 2021; Napoli, 2014; Thurman et al., 2019; Wallace, 2018). News and social media algorithms are powerful forces (see Bucher, 2018; Diakopoulos, 2019; Gillespie 2014; Just and Latzer, 2017). Worldwide, people get more and more of their political information through algorithm-controlled online social media, such as Facebook, Google, YouTube, or Instagram. This is especially true for younger generations (Newman et al., 2019). When algorithms act as gate­ keepers, they impact on the information which the audience receives and on the way people communicate. Prioritization and recommen­ dation algorithms affect exposure to counter-attitudinal views by either narrowing (Bakshy et al., 2015) or broadening the perspectives which the audience encounters (Helberger, 2019). Replacing human-curated content with algorithmically curated news, like in Apple News, affects exposure to soft news about celebrities (Bandy and Diakopoulos, 2020a). The information which algorithms on social media platforms show can affect voter turnout (Bond et al., 2012) or the persuasiveness of misinformation (Bode and Vraga, 2015). Recommendation algo­ rithms on platforms like TikTok play an important role in social movements, as they influence the reach of calls to action (Bandy and Diakopoulos, 2020b). On the one hand, algorithms on social media platforms can enhance the spread of extremist views (e.g., Massanari, 2017). On the other hand, algorithms are used to moderate the online debate and censor inappropriate speech (e.g., Cobbe, 2021). Changes in social media algorithms can limit or boost traffic to legacy news outlets DOI: 10.4324/9781003375258-1
  • 15. (Trielli and Diakopoulos, 2019) and influence the type of content which these outlets produce (Diakopoulos, 2019, pp. 178–180). Automation in news production transforms the news-making process (e.g., Linden, 2017; Thurman, 2011; Van Dalen, 2012; Wu et al., 2019) and affects the relation between journalists and the audience (Dörr and Hollnbuchner, 2017). The power of news and social media algorithms is central to this book which does not consider algorithms as an autonomous force that ex­ ercises a uni-directional influence on society at large. This book also tries to steer away from treating algorithms as a scape goat for negative societal developments such as radicalization or polarization. Instead, its purpose is to understand how the power of gatekeeping algorithms emerges and is exerted in the interaction with other actors. The first aim of the book is to make the power of algorithmic gatekeepers tangible. It documents how algorithms affect the content which the audience is ex­ posed to (Chapter 3) and how they shape the behavior of professional communicators (Chapter 4). This shows how algorithmic power is situated in the interplay between platforms, audiences, and professional commu­ nicators. To be justified, such power needs to be accepted by the general population and professional communicators. Therefore, the second aim of this book is to study two crucial antecedents for the acceptance of algo­ rithmic power: trust and legitimacy. It analyzes trust and the acceptance of algorithmic gatekeeping among the general population (Chapter 2) and evaluates the legitimacy of algorithmic power (Chapter 4). 1.2 Conceptualizing algorithmic gatekeeping Journalistic gatekeeping has been defined as “the process of selecting, writing, editing, positioning, scheduling, repeating and otherwise mas­ saging information to become news” (Shoemaker et al., 2009, p. 73). Building on this general definition, algorithmic gatekeeping can be understood as the influence of automated procedures on the process of selecting, writing, editing, scheduling, repeating, and otherwise massa­ ging information to become news. These automated procedures are applied by news organizations, but equally important, they are a defining feature of social media platforms. This book deals with the influence of automated procedures on the production of news, as well as on the selection of news. Technically, algorithms can be described as programmed procedures that are followed by a computer system. Bucher (2018) distinguishes between deterministic and machine learning algorithms. Deterministic algorithms follow the same predefined steps and do not adapt. Machine learning algorithms on the other hand continuously adapt the way they function as they learn to optimize their performance. More broadly, al­ gorithms are a symbol for the increasing automation of decision-making in 2 Introduction
  • 16. public life. The “algorithmic turn” (Napoli, 2014) in gatekeeping which is central in this book is part of a larger societal trend, where more and more societal tasks are automated (e.g., Kitchin and Dodge, 2011; Lessig, 1999; Steiner, 2012; see Chapter 2). While algorithms’ role in gatekeeping is the main focus, the book ties into the growing journalism research field studying news automation more broadly (e.g., Diakopoulos, 2019; Linden, 2017; Thurman et al., 2019). In line with this literature on news automation, the book does not deal narrowly with technical lines of computer codes but understands algorithms which influence gatekeeping as “heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas” (Seaver, 2017 in Bandy and Diakopoulos, 2020a, p. 38). To understand the process and effects of algorithmic gatekeeping, algorithms need to be studied in their interplay with other actors, most notably professional communicators, and the general public. Journalistic gatekeeping research goes back to White’s (1950) study of how an individual journalist acts as gatekeeper and decides which information makes it through the gate and becomes news, and which information is kept out. Building and expanding on this pioneering research, later studies have identified the influence of the organizational context and extra-media influences on the decisions made by individual journalists. Journalistic gatekeeping has come to mean more than only selecting news, as later definitions also include the writing and editing of news (Shoemaker et al., 2009). The increasingly important role of automation in the news-making process and the role of social media platforms in the distribution of news have made the gatekeeping process even more complex. Gatekeeping decisions are made by diverse sets of actors, most notably journalists and other strategic professionals, indi­ vidual amateurs as well as algorithms (see Wallace, 2018). Gatekeeping has become decentralized and is in many instances the consequence of the interaction between different actors. This makes it difficult to isolate the consequences of algorithms in the gatekeeping process or to use the language of directional causality when discussing their influence. Where gatekeeping research traditionally studied gatekeeping within journalistic news organizations, other professional communicators play an increasingly important role in gatekeeping in today’s media environment. On algorithmically controlled social media platforms, audiences encounter diverse kinds of information about public affairs side-by-side, ranging from entertainment to hard news, and from conspiracy theories to the correction of misinformation. A clear example of these new sources of information are Influencers who post about public affairs on social media (see Chapter 4). They exemplify how technological, societal, and media- internal developments have blurred the boundaries around the journalistic profession (e.g., Lewis, 2012). Due to these faded boundaries, studies of Introduction 3
  • 17. algorithmic gatekeeping need to consider professional communicators in a broad sense. This book has a normative starting point as it is interested in the effects of automated procedures on news, understood as information about current affairs which allows citizens to participate in public life and which supports the functioning of the public sphere (e.g., Strömbäck, 2005). As described in the introduction to this chapter, several concerns have been raised about the influence of algorithms on the news which reaches the public. The empirical analyses presented here tie into three broader questions about the democratic impact of algorithmic gatekeeping. First, do algorithms foster the spread of misinformation on social media platforms? Misinformation can negatively affect social cohesion, can undermine the legitimacy of election and democratic governance, or can be directly damaging to people’s health (e.g., Lewandowsky et al., 2012). Chapter 3 analyzes the role of YouTube’s recommendation algo­ rithm in the spread of misinformation. Second, do algorithms prioritize information which confirms people’s worldview rather than present them with various perspectives? This would have negative democratic consequences since it could make people’s beliefs more polarized and undermine a unified public sphere (Stroud, 2010). The algorithm audit presented in Chapter 3 studies whether watching a video with misinformation leads YouTube’s algorithm to recommend more videos confirming this perspective. Third, do algorithms undermine deliberation on social media plat­ forms? For a well-functioning public sphere, it is important that people engage in debate and are willing to express their own point of view. Research has shown that the general population is less willing to speak out on algorithmically driven social media platforms than in face-to-face communication (Neubaum and Kramer, 2018). Chapter 4 analyzes how Influencers perceive their power vis-à-vis Instagram’s algorithms to shed light on the effects of these perceptions on their public opinion expression. 1.3 Technological determinism: hard and weak This book studies how the gatekeeping power of algorithms takes shape in the interaction of algorithms with other actors. This approach can be contrasted with a technological deterministic view which often dominates discourse around the democratic consequences of algorithmic gate­ keeping. Technological determinism sees technological developments as an autonomous force, which creates negative societal effects (e.g., Shade, 2003). While the benefit of this view is that it raises awareness for the role of media technology and its consequences, in its purest form (hard tech­ nological determinism) the role of technology gets overemphasized (see Smith and Marx, 1994). It is a technology-centric approach with limited 4 Introduction
  • 18. attention to social forces which shape the technology, allow for the growth in its use, and mediate the societal effects of technology. Also, hard technological determinism pays limited attention to variations across societal groups or potential positive effects (e.g., Shade, 2003). Technology often becomes a scape goat in this discourse. A review of the literature on filter bubbles and echo chambers illus­ trates the limits of hard technological determinism for the understanding of algorithmic gatekeeping. Debate about the power of algorithms is strongly influenced by fears around filter bubbles and echo chambers, despite empirical evidence challenging these fears (see Figure 1.1). In 2011, Eli Pariser published the book The Filter Bubble: What the Internet is Hiding from You coining the term filter bubble. According to the filter bubble argument, (1) people hardly get exposed on social media to information that challenges their worldview, and (2) this is due to al­ gorithms that prioritize content that is similar to content which people have previously interacted with, over more diverse content. Concretely, when people with different interests type in the same search term in Google, Google’s search algorithms would return completely different information. This should reflect them having interacted with different types of content previously. A similar effect should be present in Facebook’s news feed, where people with different political identities see news and political information with a different political outlook. As a consequence of these online filter bubbles, the range of information to which people become exposed would over time get narrower and 0 50 100 150 200 250 300 350 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 news algorithm social media algorithm filter bubble echo chamber Figure 1.1 Annual number of social science publications about news algorithms, social media algorithms, filter bubbles, and echo chambers. Note: Based Social Sciences Citation Index (SSCI); numbers of annually published articles with these topics. Introduction 5
  • 19. narrower, which in turn creates self-reinforcing spirals. By prioritizing information that confirms people’s worldview, algorithms would amplify confirmation biases: people’s tendency to seek out information in line with pre-existing beliefs (e.g., Nickerson, 1998), thus making them less open to and curious about views that challenge their preconceptions. Filter bubbles are closely related to echo chambers. As Nguyen (2020) argues, it is important to distinguish between filter bubbles and echo chambers, although the two terms in practice often are used inter­ changeably (see also Bruns, 2019). While a filter bubble refers to a situ­ ation where individuals are isolated from information that challenges their worldview, an echo chamber is a group-level phenomenon where the members of the group share the same worldview and reinforce this worldview by on the one hand continuously confirming this to one another and, on the other hand, discrediting voices with another per­ spective. Such groups can be found on social media such as Facebook or YouTube, for example, around incels, climate change deniers, or anti- vaxxers. An “echo chamber is a social epistemic structure from which other relevant voices have been actively excluded and discredited” (Nguyen, 2020, p. 141). The term echo chamber was popularized by Cass R. Sunstein in the first decade of the 21st minutes focusing primarily on the internet and the blogosphere. In his 2017 book #Republic, Sunstein expanded the discussion of echo chambers to social media. As with filter bubbles, critics have pointed to algorithms behind social media as a villain and cause of echo chambers. On Facebook, algorithms might drive people toward groups of like-minded individuals and keep them entrapped in these groups (e.g., Sunstein, 2017). In their most extreme form, such concerns about algorithm-driven filter bubbles and echo chambers reflect the central characteristics of technological determinism: algorithms are seen as an autonomous force leading to societal consequences such as radicalization and polarization, with limited attention to variation in the way different societal groups use the technology or to the way societal trends interact with the effects of algorithms. Empirical studies of filter bubbles and echo chambers challenge this perspective (e.g., Flaxman et al., 2016; Geiß et al., 2021; Nechushtai and Lewis, 2019; Puschmann, 2019). A thorough discussion of relevant studies led Bruns (2019, p. 96) to conclude that “mainly, the debate around these concepts and their apparent impact on society and democracy constitutes a moral panic.” Based on their review of the lit­ erature, Zuiderveen Borgesius et al. (2016, p. 1) answered the question Should we worry about filter bubbles? with the conclusion that “at present there is little empirical evidence that warrants any worries about filter bubbles.” They list the following reasons why worries about filter bub­ bles might be overstated. 6 Introduction
  • 20. • Without personalization algorithms, people tend to limit the range of views they expose themselves to. Selectively exposing oneself to information in line with one’s preferences is not a new phenomenon. It has existed long before social media and algorithms. • Social media and the internet also offer new opportunities for encountering surprising information which widens one’s worldview. • At the time of writing, personalization algorithms were not techni­ cally advanced enough to give each audience member information in line with what they want. Several empirical studies indeed support the observation that algorithms do not have the strong personalization effects which critics suggest. Haim et al. (2018), for example, studied the personalization effect of Google News and found very limited differences in search results depending on what people had previously searched for. Möller et al. (2018) studied how different types of news recommendation systems on a newspaper website affect diversity of recommended content. They found no evidence that basing recommenda­ tions on user history leads to limited diversity in recommendations. Two studies by the Reuters Institute for the Study of Journalism not only challenge the idea that algorithm-driven social media narrow the range of views to which the audience is exposed (Fletcher and Nielsen, 2018a, 2018b). They even showed evidence suggesting the exact opposite: on algorithm-driven social media, people get exposed to more diverse information than people who do not use such social media at all. In one study, they illustrated empirically that people who use search engines like Google to find news are exposed to more diverse and more balanced news sources than people who do not use search engines. They attribute this to what they call automated serendipity, “forms of algorithmic selection that expose people to news sources they would not otherwise have used” (Fletcher and Nielsen, 2018a, p. 977). In another study, Fletcher and Nielsen (2018b) studied whether people get exposed to news sources without explicitly looking for them (so-called incidental exposure) when using social media channels. They concluded that people who use Facebook, YouTube, and Twitter primarily for entertainment, get ex­ posed to more online news channels than people who do not use social media at all. Boulianne et al. (2020) likewise challenged the idea that ex­ posure to algorithm-driven news sources leads to partisan polarization because people are no longer exposed to opposing views. Some studies illustrate that algorithms can foster selective exposure, but that these effects cannot be attributed to algorithms alone. A study into exposure to cross-cutting information on Facebook by Bakshy et al. (2015) found that Facebook’s algorithms decreased content diversity. However, this effect of algorithms on content diversity was remarkably smaller than the effects of people’s own tendency to self-select into Introduction 7
  • 21. networks with people with similar political preferences (Bakshy et al., 2015). Dubois and Blank’s (2018) argue that the effect of algorithms should be studied in the context of the broader media environment, not focusing on single social media channels. In a representative survey among the British population, they found that only a minority of the respondents never or rarely disagree with content they encounter on social media and never or rarely check diverse political sources online. They estimate that only 8% of the population might be in danger of being caught in an online echo chamber, and these people might still be exposed to diverse information offline through personal conversations with friends and family. Bodó et al. (2019) revealed that there are large parts of the population who prefer diversity-enhancing automated news recommendations over diversity- limiting news recommendations. These people are less likely to fall into a negative spiral where their news preferences combined with news recom­ mendations limit the diversity of the information they are exposed to. On the other hand, there are parts of the population who are less interested in diversity and who might get caught in “diversity reducing feedback loops” (Bodó et al., 2019, p. 206). This research shows that algorithmic gatekeeping can have distinct effects across various societal groups. In sum, empirical research challenges the hard technological determi­ nistic view of the negative influence of algorithmic gatekeeping on filter bubbles and echo chambers. Instead, this research is more supportive of a softer version of technological determinism, acknowledging that the influence of algorithmic gatekeeping should be studied in a broader con­ text; that the influence is not necessarily negative; that algorithmic gate­ keeping affects people differently; and that it is the way social actors and social factors shape and use these algorithms which determine their power. This soft technological determinism is also supported by other algorithmic gatekeeping studies beyond research into filter bubbles and echo cham­ bers. Bandy and Diakopoulos (2021) for example showed that Twitter features more junk news (understood as news which does not help the audience to fulfill its citizen role) when Twitter’s timeline is driven by al­ gorithms, compared to when it is sorted chronologically. Part of this can be explained by the specific working of the algorithm, but they conclude that the role of the algorithm is “a fairly minor, supporting role in shifting media exposure for end users, especially considering upstream factors that create the algorithm’s input – factors such as human behavior, platform incentives, and content creation” (Bandy and Diakopoulos, 2021, p. 1). 1.4 Theoretical considerations Supply-and-demand framework One theoretical starting point to understand how algorithmic gatekeeping is situated in the interplay between platforms, professional communicators, 8 Introduction
  • 22. and audiences is offered by Munger and Phillips’ (2022) supply-and- demand framework. Munger and Phillips (2022) list factors which explain the emergence of communities around right-wing content on YouTube, or more broadly, the emergence of alternative media clusters. This frame­ work helps to see the effects of social media algorithms in context and to understand how they interact with other technical affordances and audi­ ence desires and behaviors. On the supply side, Munger and Phillips (2022) highlight the affor­ dances of the YouTube platform which are favorable to the growth of communities around right-wing content. The possibility to discover con­ tent through video suggestions by YouTube’s algorithms is one of these supply-side features of the YouTube platform which fosters alternative media clusters. YouTube’s algorithms determine which videos show up in a search, which videos are suggested on YouTube’s starting page, and which videos are suggested to watch next after watching a video. There is a large degree of similarity between videos which have been watched pre­ viously and what YouTube’s algorithms suggest watching next (e.g., Airoldi et al., 2016; O’Callaghan et al., 2015). This could foster the growth of communities around specific types of content. Central in the supply- and-demand model is the notion that YouTube’s algorithms are far from the only factor driving the growth of right-wing communities. There are other affordances which make the growth of such communities more likely: YouTubers can actively influence whether their videos will be dis­ covered by adding descriptive keywords as metadata (tagging) or by using clickbait-style headlines. Furthermore, video is a form of content which is relatively cheap to produce and can convey more impacting emotions than text. The specific monetization structure of YouTube favors the building of communities and the continuous stream of new content. Popular YouTubers can sell advertising around their videos, offer channel mem­ bership, or create content specifically for YouTube Premium subscribers. YouTube can demonetize videos around sensitive topics, but this still leaves YouTubers with the opportunity to use alternative crowdfunding sources to get paid. Isolating YouTube’s algorithms as the driving force behind commu­ nities around right-wing content on the platform would also overlook the importance of audience demand for such content (Munger and Phillips, 2022). On the demand side, we find factors such as a loss of identity due to societal changes, or feelings of disconnection and isolation in the offline world, which might drive people to search for a feeling of community on social media platforms. They may develop para-social relations with YouTubers, who substitute for friends and meaningful connections which the audience lacks in real life. The video format seems to be particularly attractive to audiences looking for such connections, as it is less de­ manding to process than for example reading long blog posts. Introduction 9
  • 23. While Munger and Phillips (2022) use the supply-and-demand model to explain the presence of right-wing communities on YouTube, the model can likewise be used to understand that algorithms are only one factor among several factors influencing the spread of content on social media platforms. This framework is a useful antidote against a hard technologically deterministic understanding of the power of news and social media algorithms. The structuring power of algorithms Discussions on how to perceive the power of algorithms mirror similar discussions about the influence of the media on politics (see also Klinger and Svensson, 2018). Here, two broad perspectives can be distinguished (e.g., Van Aelst et al., 2008). First, an actor perspective, where power is seen as “a successful attempt by A to get B to do something that he would otherwise not do” (Dahl, 1957, p. 203). Here, being in power is seen as having the upper hand in an asymmetric relation. In the second perspec­ tive, the power of the media is assessed from a structural perspective. Here, the focus is more on the way the media set the stage on which politics is performed and the conditions within which politicians have to maneuver. According to this perspective, the media do not determine specific political outcomes, but instead determine the unwritten rules which politicians have to adapt to when they pursue their goals. In line with a soft technological deterministic view on algorithmic gatekeeping, the power of algorithms can be better understood from a structural rather than actor perspective. Following Dahl’s definition of power, seeing the algorithm–human relation as a power balance would require the algorithm to purposefully force users to do something against their will. To purposefully force others to do something would require agency: the ability to autonomously act out of free will. However, it is problematic to talk about an algorithm as having agency in the same way as humans would have agency. Klinger and Svensson (2018) stress that algorithms lack evaluative and reflective capacities. Even though algorithms might be self-learning, this does not mean that they make deliberative decisions on what is desirable or the right trajectory of action when faced with unexpected situations. Without such evaluative and reflective capacities, algorithms lack the intentionality that defines human agency. Bucher (2018, pp. 50–54) underlines the fact that the way algorithms operate is the result of the interaction of different actors and forces, including the people who designed and programmed the algo­ rithm, the algorithm itself as well as the users whose behavior also affects the process and outcomes of the algorithms. The assemblage of actors which determine the working of the algorithm makes it difficult to see algorithms as possessing agency. 10 Introduction
  • 24. Instead of studying the power of social media algorithms for an actor perspective, it makes sense to see these algorithms as a structuring power. The algorithms help social media users to obtain specific resources (such as access to followers) or fulfill certain needs (e.g., feeling part of a com­ munity). Consequently, these algorithms structure how social media users interact with each other and the social media platform. In doing so, the algorithms do not determine the exact decisions which the social media users make but set the ground rules which the social media users have to adhere to if they want to be in the best position to obtain these resources or fulfill these needs. Bucher (2012, p. 1165) made this concrete by showing that “algorithmic architectures dynamically constitute certain forms of social practice around the pursuit of visibility.” Facebook’s algorithm determines which posts are shown to whom and in which order and thus whose views and output are visible. In order to be visible, Facebook users are forced to post regularly and have an incentive to like and comment on other people’s posts. As we shall see in Chapter 4, Instagram’s algorithms have a similar shaping power over Influencers. 1.5 Legitimacy and trust Algorithmic gatekeeping, which positively affects the quality of the public sphere and the ability of people to fulfill their citizen role, needs to be legitimate and trusted. This is particularly relevant because algo­ rithms are opaque: their role is often invisible. Even when algorithms are visible their working remains obscure (Bucher, 2018). Since people cannot monitor how algorithms work, they have to accept their power based on a perception that they are just and fair (in other words see them as legitimate) and they have to be willing to be vulnerable to the actions of the algorithms (in other words trust them). Legitimacy can be defined as “a generalized perception or assumption that the actions of an entity are desirable, proper and appropriate within some socially constructed system of norms, values, beliefs, and definitions” (Suchman, 1995, p. 574). When people see the power of algorithmic gatekeepers as legitimate, they accept the role which algorithms play in the public sphere and see this role as justified and rightful. A distinction can be made between objective and subjective approaches to legitimacy (Hurd, 2012). Following the objective approach, we can discuss whether algo­ rithmic gatekeepers support the public sphere according to some norma­ tive standards or specific views on democracy. Following the subjective approach, we can study whether people perceive algorithmic gatekeeping as acceptable. From this perspective, legitimacy “exists only in the beliefs of an individual about the rightfulness of rule” (Hurd, 2012). Ideally, there is a close link between objective and subjective legitimacy and the general public and communication professionals perceive algorithmic gatekeepers Introduction 11
  • 25. as just when they indeed support the public sphere. If power is not per­ ceived as legitimate, people might still adhere to it, but motivated by possible personal gains or lack of alternatives rather than because they believe the power is just and fair. Algorithmic power without legitimacy might lead to resistance against the algorithms and could ultimately be an incentive for people to leave a social media platform (Velkova and Kaun, 2021). While sources of legitimacy of media institutions have been ex­ tensively studied (e.g., Skovsgaard and Bro, 2011; Van Dalen, 2019b), less is known about which criteria people use to assess whether the power of algorithms is legitimate. Van Dalen (2019b) summarizes five sources of legitimacy for media institutions: the character of the professionals (in other words, whether journalists are perceived as good people), connec­ tions to other legitimate institutions (in other words, whether media institutions can piggyback on its relation with other institutions, which are seen as legitimate, such as the state or experts), ethical standards (such as the objectivity norm), constituents (in other words, the target group who benefit from the actions of media institutions, such as the public at large), and beneficial outcomes (such as the important democratic roles which the media fulfill by holding governments accountable or informing the elec­ torate). Chapter 4 will describe similar legitimacy criteria for social media algorithms. Trust can be defined as “the willingness of a trustor to be vulnerable to the actions of a trustee based on the expectations that the trustee will perform a particular action, irrespective of the ability to monitor or con­ trol that other party” (Mayer et al., 1995, p. 712). Given the enormous amount of information available online, it would be a challenging task for the public to individually go through all sources and select relevant information. Similar to the trusted news media (see Coleman, 2012), trusted algorithmic gatekeeping can thus reduce complexity and help to minimize efforts which allows the audience to fulfill their citizen role. Gatekeeping algorithms are complex and get continuously updated. Thus, social media users do not have the possibility to monitor or control the algorithms. When using social media and automated news, the users are vulnerable to the actions of the algorithms. They are dependent on the algorithms to on the one hand produced, select, and prioritize information which is relevant for them, and on the other hand, make sure that they do not miss essential information or get wrongly informed. To trust algo­ rithms thus requires humans to believe that the algorithms do a good job in selecting, prioritizing, and creating information. In order to invoke trust, social media platforms often underline the fact that their algorithms lack subjectivity and personal biases (Gillespie, 2014, p. 179): “The careful articulation of an algorithm as impartial (even when that characterization is more obfuscation than explanation) certifies it as a reliable sociotechnical actor, lends its results relevance 12 Introduction
  • 26. and credibility, and maintains the provider’s apparent neutrality in the face of the millions of evaluations it makes.” However, as we shall see in Chapter 2, trust in social media algorithms is not only determined by platforms and the working of their algorithms (the object of trust or trustee), but as much by the social media users who use them (the trustors). Trust in algorithms is influenced by people’s understanding of how these automated computer systems work, which in turn is strongly shaped by comparisons with how the human mind works (Shariff et al., 2017). Such comparisons could either result in attributing human char­ acteristics to the algorithmic systems, or to sharply distinguish between tasks which are suitable for machines on the one hand, and tasks which should be left to humans on the other (e.g., Lee, 2018). Research into algorithmic forecasting has for example shown that people are often unwilling to rely on the advice of algorithms, even when such advice outperforms human forecasting (Burton et al., 2020). Thus, the promise of algorithmic objectivity does not necessarily translate into public trust in these algorithms. This makes it relevant to study how people perceive news algorithms and how this is related to trust. For a deeper understanding of why legitimacy and trust are relevant for algorithmic gatekeeping, it is helpful to look at why legitimacy and trust matter for journalistic gatekeeping. The press is often described as the fourth estate since it performs the important democratic role of in­ forming citizens and holding government accountable. Still, the basis of authority of journalism is different from the basis of authority of the first three powers: the workings of the legislative, executive, and judiciary are guided by clear rules and regulations which are laid down in constitu­ tions and democratic procedures (e.g., Cook, 1998, p. 109). Journalists’ work is rather based on unwritten social patterns of behavior and rou­ tines which are followed across organizations and remain rather stable over time (Cook, 1998). In the Western world, journalists’ work is protected by freedom of expression and their editorial independence is guaranteed in press laws. Other professions like medics or lawyers have stricter guidelines and inclusion and exclusion mechanisms into the profession than journalism which is in many Western countries not a protected title (Skovsgaard and Bro, 2011). On the one hand, this gives journalists and news organizations freedom to organize their work ac­ cording to their own standards rather than standards and guidelines set by other institutions. On the other hand, this also comes with the downside that journalists cannot claim to adhere to written rules and procedures when they want to justify their power and claim legitimacy. Trust in the mainstream media is a necessary condition for the legitimacy of the press. If the press is not trusted by the general public, it is easier for politicians to ignore criticism from journalists or obstruct efforts by the media to hold them accountable (e.g., Van Dalen, 2019a). Introduction 13
  • 27. Like journalistic gatekeeping, algorithmic gatekeeping has long been shielded from government regulation. European regulation stipulates that courts or regulatory authorities may not impose on journalists and news organizations certain ways of conducting their work (Helberger et al., 2020). This gives news organizations freedom to use algorithms in their reporting “as long as doing so does not conflict with the enjoyment of fundamental rights of others” (Helberger et al., 2020). An argument can be made that automated prioritization and selection of information on social media platforms are also forms of (quasi-)editorial judgment (Leerssen, 2020). Still, the rights and freedoms also come with respon­ sibilities, in particular to recognize and deal with the negative conse­ quences of algorithmic gatekeeping on fundamental rights and the functioning of the public sphere. Examples of such negative conse­ quences include the circulation of hate speech, interference with election results, or the spread of health-related disinformation. Concerns about the rapid implementation of algorithms in society and debate about the negative ethical implications have led to the for­ mulation of several ethical guidelines. Hagendorff’s (2020) analysis of 22 ethical guidelines for AI however showed that such guidelines are gen­ erally non-binding and lack mechanisms of enforcement. This limits the role of ethical guidelines in making the general public and professional communicators accept algorithmic power. Concerns about negative consequences of algorithm-driven social media platforms combined with the limited effects of ethical guidelines have led to legislative initiatives to counter democratic risks posed by these platforms. Examples of this are the European Union’s AI Act, Digital Services Act, and Digital Market Acts (e.g., Helberger and Diakopoulos, 2022). These Acts demand platforms to be transparent about the algorithms used for recommen­ dations; oblige large platforms to allow independent audits of their measures to prevent abuse of their systems; and force them to provide researchers access to data (European Commission, 2022a). The impli­ cations of such regulation for trust in algorithmic gatekeeping will be addressed in the final chapter. 1.6 Outline and content of the book Three empirical chapters further our understanding about the power of news algorithms by focusing on, respectively, the general public, plat­ forms, and professional communicators. Chapter 2 analyzes how the general public perceives and approves of news algorithms. Following an overview of the antecedents of algorithm approval found in the literature, the chapter reports the results of three surveys among the general Danish population. The results show that trust in news algorithms and approval of news algorithms is low. Behind 14 Introduction
  • 28. this seems to lie an implicit comparison with human journalists, who are seen as more objective and neutral, and thus more trustworthy. Still, the general population is more approving of algorithms playing a role in news selection than in other areas in public life. This is especially true for younger generations and when humans are involved in the process. Chapter 3 analyzes the mediating power of algorithms on social media platforms. Mediation refers to “the relaying of second-hand (or third- party) versions of events and conditions which we cannot directly observe for ourselves” (McQuail, 2010, p. 83). Media organizations have always had a mediating role between the audience and broader society. Social media algorithms play a similar role. When they filter, prioritize, or censor certain information, they influence which views and perspec­ tives are most likely to reach the audience. This mediating power is analyzed in a case study of the presence of different types of mis­ information in YouTube search results and recommendations around autism. The chapter shows that YouTube’s search and recommendation algorithms do not recommend misinformation which is directly harmful. Still, some evidence was found that the algorithms suggest and recom­ mend less harmful misinformation. These results can not only be at­ tributed to the algorithms since they are also shaped by other supply as well as demand factors on YouTube. Still, they raise important questions about how social media platforms should handle their responsibility and what duty and opportunity they have to use algorithms to downplay or correct misinformation. Chapter 4 analyzes how professional communicators like Influencers perceive the power balance between themselves and the algorithms of the platforms on which they communicate. A study of how professional communicators perceive and react to the power of Instagram’s algorithms shows how dependent they are on these non-transparent algorithms for access to user engagement. The algorithms limit their autonomy. This can have serious democratic consequences such as a tendency toward main­ stream perspectives and limited room for doubts or nuance. Against this background, the chapter lists criteria which professional communicators use to assess the legitimacy of algorithmic power. The final chapter reflects on the findings of the preceding chapters in the light of the aims of the book. The chapter highlights lessons learned about the power, trust, and legitimacy of news algorithms and the challenges they pose for social media platforms and news organizations. Ultimately, concrete initiatives are proposed to strengthen the trust­ worthiness and legitimacy of news algorithms. Empirically, these chapters build on the results of three representative surveys, two survey-embedded experiments, a quantitative content anal­ ysis of videos recommended by YouTube’s algorithms and a qualitative analysis of the way professional communicators relate to social media Introduction 15
  • 29. algorithms. The surveys and experiments are conducted in Denmark, while the other analyses study English-language content. Thus, the em­ pirical results should be understood in the context of Western media systems. Instead of analyzing much studied social media platforms like Twitter or Facebook, the manuscript focuses on YouTube and Instagram. As described in Chapters 3 and 4, these two platforms are used by large parts of the population, especially younger segments. They distinguish themselves from other social media platforms by their visual nature. On YouTube and Instagram, audiences encounter public information of all kinds, ranging from entertainment to hard news, and from conspiracy theories to the correction of misinformation. These are also the platforms where new actors like celebrities and Influencers compete for attention with mainstream outlets and expert sources. Thus, these platforms exemplify what the new media environment looks like, making them exemplary cases to study the role of algorithms in disseminating information and in affecting communication behavior. Furthermore, YouTube and Instagram actively change and update their algorithms to counter potential negative consequences. This provides a basis for discussing how social media platforms should responsibly deal with the power of their algorithms. Before narrowing the focus to these two platforms, Chapter 2 first maps generalized trust and approval of news algorithms among the general population. 16 Introduction
  • 30. 2 Algorithm Aversion 2.1 Introduction The previous chapter discussed the distinctive ways in which algorithms play an increasingly important role in selecting news and writing news (see Diakopoulos, 2019). How the public perceives such news algorithms affects how they interact with the information written and selected by these automated computer systems (Bucher, 2018; Rader and Gray, 2015). People often lack awareness of the role of algorithms in news production and selection (Eslami et al., 2015). Those members of the general public who are aware that news algorithms exist make sense of them based on simplified mental models of how the technologies func­ tion rather than a deep understanding of their working. These mental models are strongly influenced by two mental shortcuts. The first shortcut is the tendency to compare algorithms to humans (Hofstadter, 1995). The second shortcut is the machine heuristic (Sundar, 2008), the idea that decisions made by algorithms are neutral, since these automated computer programs follow prescribed rules. As we will see in this chapter, these heuristics and comparisons affect the way people perceive, trust, and approve of news algorithms. Based on representative surveys and two survey-embedded experiments, the chapter addresses the following questions: How do people perceive the strengths and weaknesses of news algorithms compared to human journalists? and Do people trust and approve of news algorithms? To address these questions, this chapter first presents a review of the literature on news algorithm perceptions and approval. This review fo­ cuses on the comparisons between humans and algorithms and on the machine heuristic. Previous research on approval of news algorithms is placed in context by relating the findings to the broader literature of algorithm approval. Next, new empirical evidence shows the impact of machine heuristics on perceptions of strengths and weaknesses of news algorithms. The data further reveal an alarming lack of trust in news selected and written by algorithms compared to news selected and written DOI: 10.4324/9781003375258-2
  • 31. by humans. People however remain more positive toward algorithms playing a role in the selection of news than in other areas of public life. This is especially the case when algorithms and humans work together in selecting the news, and among younger generations. The final chapter of this book will return to these findings and point to human-algorithmic collaboration as a way to build trust toward news algorithms. 2.2 Making sense of news algorithms A central feature of news algorithms which affects how they are per­ ceived is their opacity; the lack of transparency and clarity around how they operate (Eslami et al.,2019). Many people lack awareness of the existence of algorithms which prioritize and select information, such as the Facebook News Feed curation algorithm (Eslami et al., 2015), or Yelp’s review filtering algorithm (Eslami et al., 2019). Even when people are aware of the existence of news algorithms, there is a lack of under­ standing about the way such algorithms make decisions and a lack of awareness of their effects. Many college students who use Google to access news for example were not aware that search results are person­ alized (Powers, 2017). People who are confronted with new technologies under uncertain conditions like the ones created by the opacity of the news algorithms, develop their own mental models to make sense of how these technol­ ogies function (Bucher, 2018, pp. 96–7). Such mental models are based on different types of input, ranging from representations in popular culture, public debate, and media coverage, to so-called folk theories grounded on personal everyday experiences with news algorithms (Bucher, 2018). Following DeVito et al. (2017, p. 3165), folk theories can be defined as “intuitive, informal theories that individuals develop to explain the outcomes, effects, or consequences of technological systems, which guide reactions to and behavior towards said systems.” DeVito et al. (2017) distinguish between operational theories and abstract the­ ories. Operational theories relate to the specific working of algorithms and concrete rules which the algorithms follow (e.g., popular content will be prioritized on Twitter). Abstract theories on the other hand represent a generic idea of the function and broad role of algorithms on the platform, without specification of concrete mechanisms or specific out­ comes (e.g., Twitter’s algorithm affects a user’s timeline). These mental models are often based on (implicit) comparisons with humans. Research in robotics has for example documented that people often attribute human characteristics to automated systems, such as moods or emotions. This anthropomorphism helps people to make sense of the working of these abstract systems (Hofstadter, 1995; Zarouali, Makhortykh et al., 2021). These implicit comparisons can be more 18 Algorithm Aversion
  • 32. favorable toward either the automated systems or humans performing similar tasks. When assessing the credibility of news and information online, people are faced with more uncertainty than when consuming news through more traditional channels like newspapers or television where the sender can clearly be distinguished. Under such uncertain cir­ cumstances, the technical affordances, such as whether there is the pos­ sibility for interactivity, are used as cues to assess the credibility of information (Sundar, 2008). The cues trigger heuristics or rules of thumb, which affect credibility assessments. One mental shortcut that affects how people think about news algorithms is the so-called machine heuristic (Sundar, 2008). The machine heuristic assumes that because a machine follows pre-defined rules it is less susceptible to subjectivity or unfair favoritism than humans are: “if a machine chooses the story, then it must be objective in its selection and free of ideological bias” (Sundar, 2008, p. 83). Sundar and Nass (2001) found that people rated the same stories as of higher quality when they were allegedly selected by computer systems rather than human journalists. The machine heuristic and presumed lack of bias might account for this result (see also Nielsen, 2016). Contrary to what the machine heuristic suggests, news is perceived as less credible when it is allegedly written by a “robot reporter” compared to when a human journalist is portrayed as the author (Franklin Waddell, 2018). This can be explained by expectancy violation theory. Using the label robot could lead to high expectations about the objec­ tivity of the content. When the actual content then does not live up to these expectations, people will be disappointed, and the credibility will be perceived as lower (Franklin Waddell, 2018). This research by Sundar (2008), Sundar and Nass (2001), and Franklin Waddell (2018) suggests that the machine heuristic has dif­ ferent effects on the credibility of automatically selected news than on automatically written news (Franklin Waddell, 2018, p. 249). This could reflect that people understand news selection as a less creative and thus less human task than news writing. Lee (2018, p. 4) makes a distinction between “tasks that require more ‘human’ skills (e.g., subjective judg­ ment and emotional capability) and those that require more ‘mechanical’ skills (e.g., processing quantitative data for objective measures).” People had equal trust in either algorithms or humans completing mechanical tasks, such as making a work schedule. When it comes to tasks which are perceived as requiring a human skill (such as hiring or work evalua­ tions), algorithms were perceived as less fair than humans, since algo­ rithms do not have intuition and cannot make subjective judgments (Lee, 2018). Other studies confirm this distinction. Bigman and Gray (2018) showed that for moral decisions, people are algorithm averse: when it comes to decisions in moral areas, such as life and death decisions in the military, medicine, or law, humans are preferred as decision-makers Algorithm Aversion 19
  • 33. compared to algorithms. Logg et al. (2019), on the other hand, found algorithm appreciation when it comes to numerical estimates. Here, algorithmic advice is preferred over human advice. These different pref­ erences of algorithms versus humans reflect how people perceive the rel­ ative strengths and weaknesses of algorithms and how this differs from the way they think the human mind operates (Logg et al., 2019). Dietvorst et al. (2015) showed that apart from assumptions about the general working of algorithms, experience with algorithms matters for algorithm trust and approval. When people have seen an algorithm make a mistake, they quickly lose faith and subsequently will avoid al­ gorithms. When a human makes a mistake, this is more easily forgiven and does not affect trust in other decisions to the same degree, even when the mistake is more severe than the one made by the algorithm. Representations of and discussions about algorithms in the main­ stream media might likewise influence the approval of news algorithms. From research on robotics, it is known that representations of robots in popular culture can either make people more accepting toward such forms of automation, or more skeptical. Following the “Hollywood Robot Syndrome” media depictions of robots affect people’s general perceptions of robots (Sundar et al., 2016). When they recall friendly movie robots, people are generally more supportive of robots’ roles in society more generally. This also makes them more positive toward automatically generated news stories (Franklin Waddell, 2018). Media depictions may also affect robot support negatively. Following the Frankenstein complex, depictions of evil robots might trigger negative connotations of robots, focusing on their shortcomings and the danger they pose to humans (Kaplan, 2004; Kim and Kim, 2018). Similar to the Frankenstein complex, it could be expected that the simple use of the term “algorithm” might make people skeptical, given the negative attention to algorithm-driven filter bubbles and echo chambers in public and media discourse. Surveys have given insight into how the general public approves of news algorithms. Zarouali et al. (2021) documented that misconceptions about media algorithms are common among the general population. Araujo et al. (2020) showed that news recommendations attributed to AI were seen as equally fair and useful as news recommendations attributed to human editors. Fletcher and Nielsen (2019) showed that people across different media systems even prefer automatically selected news stories over stories selected by journalists. At the same time, there was broad concern that such automatically selected news might give a narrow perspective (Nielsen, 2016). This reflected a generalized skepticism where respondents are critical toward both journalists and algorithms. Thurman, Moeller et al. (2019) found that people prefer automatically selected news when it is based on their own prior news consumption over 20 Algorithm Aversion
  • 34. news which is automatically selected based on what peers have seen. This could be related to on the one hand overconfidence in one’s own ability to select relevant news, and on the other hand to performance problems of automatically generated peer recommendations. Younger respondents are generally more aware of the role of algorithms in their social media feeds and also take a more active role in trying to influence which content algorithms show in their social media feeds (e.g., Bucher, 2018). This might explain why younger generations are more positive toward news algorithms and algorithm-driven news personalization than older generations (Bodó et al., 2019; Fletcher and Nielsen, 2019). 2.3 Research questions Building on the literature described above, two research questions are posed to contribute to our understanding of news algorithm perceptions and approval among the general population. Research question 2.1: How do people perceive the strengths and weaknesses of news algorithms compared to human journalists? As previous research has shown, perceptions of news algorithms are heavily shaped by comparisons to human journalists. This chapter makes these comparisons explicit, by asking whether people think that algorithms or human journalists are better at selecting a specific type of content. In line with the machine heuristic, it could be expected that algorithms are seen as better than humans at selecting information, which is neutral, objective, balanced, and trustworthy. On the other hand, selecting such information could be seen as a “human,” rather than “mechanical” task and should therefore be left to journalists. Following concerns about filter bubbles and echo chambers, the chapter explores whether algorithms or humans are perceived as better at selecting information, that is surprising, relevant, and offers different perspectives, will be explored. Research question 2.2: Do people trust and approve of news algorithms? Trust and approval of news algorithms are closely connected to how their strengths and weaknesses are perceived. Generalized trust in algorithms selecting and writing news will firstly be compared to generalized trust in human journalists and news organizations. This analysis of generalized trust will be supplemented by a more narrowly focused analysis of trust in news algorithms, studying how people react to scenarios where either human journalists or an algorithm makes a journalistic mistake. Following Algorithm Aversion 21
Dietvorst et al. (2015), it could be expected that people are less forgiving toward algorithms making mistakes than toward human journalists making mistakes. Next, the approval of news algorithms will be compared to approval of algorithms making decisions in other areas of public life, such as court cases, hiring decisions, or hospital patient prioritization. This will give insight into whether news selection is seen as a more human or more mechanical task than other important decisions in public life (Lee, 2018). In the analysis of trust in algorithms and approval, special attention will be given to age differences. Following Bodó et al. (2019) and Fletcher and Nielsen (2019), it can be expected that younger generations are more positive toward news algorithms than older generations.

2.4 Method

To answer these research questions, this chapter reports the results of three representative surveys among the Danish population. These surveys included both questions about perceptions of news algorithms and two survey-embedded experiments. As described above, previous surveys about perceptions of news algorithms have given insight into approval of and trust in news algorithms around the world (e.g., Fletcher and Nielsen, 2019; Thurman, Moeller et al., 2019). This chapter expands on these surveys by directly comparing perceptions of news algorithms with perceptions of human journalists, and by placing the approval of news algorithms in context by comparing it to approval of algorithms in other areas of public life. The first survey-embedded experiment tests whether the simple use of the value-loaded term "algorithm" affects approval of news algorithms. The second survey-embedded experiment follows the approach of Bigman and Gray (2018) and presents respondents with different scenarios in order to compare approval of news algorithms with approval of human journalists, and to examine how mistakes affect this approval. Concretely, the analysis is based on the following three surveys:

• The question whether journalists or algorithms are better at selecting different types of content (Figure 2.1) was examined in a survey among 1225 Danish social media users. These data were collected between 25 March and 7 April 2020.
• The questions about trust in news algorithms compared to trust in other actors, as well as the questions comparing approval of news algorithms with approval of algorithms in other areas of public life, were studied in a survey among 1210 social media users in the period between 15 and 30 April 2020. Both these surveys were conducted by the polling company Epinion, and participants were recruited from Norstat's online panel. All respondents were between 18 and 65 years old and use Facebook at
least once a month. Both studies were weighted to be representative of the Danish population on the parameters of gender, age, and region.
• The survey-embedded experiment comparing approval of news algorithms with approval of human journalists was part of a survey completed by 1200 Danes. These data were collected between 17 and 25 November 2021. This survey was conducted by the polling company DMA research, and participants were recruited from Bilendi's online panel. Respondents were between 18 and 88 years old, and 81.2% of the respondents were regular Facebook users. The data were weighted to be representative of the Danish population on the parameters of gender, age, region, and education.

The final part of the analysis reports differences in approval of news algorithms between four generations. The operationalization of the four generations is based on a classification by Pew Research Centre (Dimock, 2019): Generation Z: born in 1997 or later; Millennials: 1981–1996; Generation X: 1965–1980; Boomers: 1946–1964. A minimal illustrative sketch of this cohort coding is given below. The exact question wordings and experimental conditions in the three surveys are described in the following section together with the results.
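The generational cutoffs above translate directly into a simple recoding of respondents' birth years. The sketch below is purely illustrative and is not the analysis code used for the surveys; the variable name birth_year is an assumption, since the surveys may record age at the time of interview rather than year of birth.

```python
# Illustrative sketch only: recoding birth year into the Pew cohorts used in
# this chapter. "birth_year" is an assumed variable name.

def pew_generation(birth_year: int) -> str:
    """Return the Pew Research Centre cohort label for a given birth year."""
    if birth_year >= 1997:
        return "Generation Z"
    if birth_year >= 1981:
        return "Millennials"
    if birth_year >= 1965:
        return "Generation X"
    if birth_year >= 1946:
        return "Boomers"
    return "Older generations"  # born before 1946; not analyzed separately here


print(pew_generation(1999))  # -> Generation Z
print(pew_generation(1950))  # -> Boomers
```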
2.5 Results

Man vs machine

Respondents were asked whether they thought that algorithms or human journalists and editors are best at selecting different types of content (see Figure 2.1).

Figure 2.1 Who do you think is best at selecting the following type of content? (Bar chart showing the share of respondents choosing computer algorithms, journalists, "equally well," or "do not know" for each of seven content qualities: trustworthy content, news that offers different perspectives, content that is surprising, balanced content, objective content, neutral content, and content that has personal relevance for me.)
Notes: N = 1225 social media users between 18 and 65. Question wording: Every news website, mobile app or social network makes decisions about what news content to show to you. Increasingly, these decisions are made by computer algorithms, which use different criteria to select which news stories to show. Who do you think is better able to select content with the following qualities? 1. Human editors and journalists; 2. Computer algorithms; 3. Equally well; 4. I do not know (% of population choosing each category).

When it comes to content with personal relevance for the user, the majority of respondents think that algorithms outperform human journalists. Only 16% of respondents have more faith in human journalists in this respect. Personally relevant content seems to come with a downside. Less than 20% of respondents think that algorithms are better than humans at selecting content that is surprising or news that offers different perspectives. Here human journalists are clearly preferred. This seems to indicate that fears of filter bubbles influence people's assessment of news algorithms. In total, 30% of social media users think that algorithms are better at selecting neutral content than humans, which is in line with the machine heuristic described above. This is more than the share of respondents who think that human journalists are best at this. Even though news algorithms are seen as more neutral than human journalists, they are at the same time seen as less able to select objective or balanced content. Here the share of respondents who prefer human journalists is almost twice as large. Thus, the perceived technical neutrality of algorithms is not seen as the same as absence of bias.

When it comes to selecting trustworthy information, respondents overwhelmingly prefer human journalists (60%) over computer algorithms (9%). Low trust in news algorithms was confirmed in further survey questions. When asked directly how much social media users trust news selected by computer algorithms, 72% of respondents answered that they have no or low trust, 25% have neither high nor low trust, and only 3% have trust or high trust (n = 595). To test whether this low trust might be due to negative connotations with the term algorithm rather than an expression of true concerns about their role in news selection, we randomly divided respondents into two groups. One group was asked how much trust they have in news selected by algorithms, and one group was asked how much trust they have in news selected by automated computer systems. Also among the group that was asked about automated computer systems, trust was very low: only 2% of respondents said they (highly) trust such news (n = 615).

This low trust in news selected by algorithms stands out even more clearly when it is compared to trust in the news media and to trust in automated computer systems. Around half of the respondents have (a great deal of) trust in the news media (52%) or journalists (44%). Less than one in six respondents say they trust artificial intelligence (12%), robots (14%), or computer algorithms (15%), but such trust levels are still significantly higher than trust in news selected by algorithms. Thus, when it comes to trust in news selected by computer algorithms, the whole is smaller than the sum of the parts.
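The wording comparison described above is a simple split-ballot design: respondents were randomly assigned to one of two question wordings, and the resulting trust distributions can then be compared directly. A minimal sketch of such a comparison follows. The cell counts are hypothetical placeholders that only respect the reported group sizes (n = 595 and n = 615), and the chi-square test is an assumed analysis choice, not the procedure reported for these surveys.

```python
# Illustrative split-ballot comparison; counts are placeholders, not survey data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: question wording; columns: no/low trust, neither, (high) trust.
observed = np.array([
    [430, 147, 18],   # "news selected by algorithms"                 (placeholder)
    [447, 156, 12],   # "news selected by automated computer systems" (placeholder)
])

chi2, p, dof, _expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```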
Algorithm aversion

The lack of trust in news algorithms was confirmed in an experiment studying the approval of news written by news algorithms. Following the literature on algorithm aversion discussed above, an experiment was set up to test two hypotheses. Hypothesis 1 predicts that people approve less of automated computer systems writing articles than of journalists writing such articles. Hypothesis 2 predicts that approval decreases more when an automated computer system makes a journalistic error than when a journalist makes a journalistic error.

To test these hypotheses, 1200 respondents were randomly divided into eight groups of 150 respondents. Each of the respondents was asked to think of the following scenario: "A Danish news website publishes articles about each question to a minister in parliament and the subsequent answer. These articles are written by a journalist" OR "These articles are written by an automated computer system." The experiment furthermore varied whether the scenario indicates that a human journalist checks the automated articles before publication and whether a mistake was made in one of the published articles by incorrectly referring to former PM Poul Schlüter instead of current PM Mette Frederiksen as Prime Minister in a question about Corona (see Table 2.1). The different scenarios are based on an actual mistake made in an automated story on the website of the Danish news medium Altinget (see Andreassen, 2020). After reading one of the scenarios, the respondents indicated on a scale from 1 (strongly disagree) to 5 (strongly agree) whether they agreed that (a) it is appropriate for a journalist/automated computer system to write these articles, (b) a journalist/automated computer system should be forbidden from writing these articles, and (c) I trust the journalist/automated computer system writing these articles. The first two statements were adapted from Bigman and Gray (2018). After reversing the answers to statement b, the three questions were combined into one scale measuring approval (Cronbach's alpha .69, M = 2.44, SD = .97).

The results of the experiment revealed general disapproval of the idea of automated computer systems writing news stories. Approval across the six conditions where the automated computer system writes the news stories is well below the midpoint of the scale (M = 2.21, SD = .90), and around one point on the five-point scale lower than approval of journalists writing news stories about ministerial questions (M = 3.15, SD = .82).
Table 2.1 Algorithm aversion experiment

Condition | Articles are written by | Mentions whether articles are checked? | Mentions mistake¹ | Additional information | Approval² M (SD)
1 | A journalist | No | No | – | 3.33 (.79)
2 | A journalist | No | Yes | – | 2.96 (.82)
3 | An automated computer system | No | No | – | 2.20 (.88)
4 | An automated computer system | Yes: The articles are not checked by a human journalist before publication. | No | – | 2.18 (.95)
5 | An automated computer system | Yes: The articles are checked by a human journalist before publication. | No | – | 2.33 (.95)
6 | An automated computer system | No | Yes | – | 2.18 (.90)
7 | An automated computer system | Yes: The articles are checked by a human journalist before publication. | Yes | – | 2.19 (.83)
8 | An automated computer system | Yes: The articles are not checked by a human journalist before publication. | Yes | "The editor of the news website said they take the mistake just as serious as if it was made by a human." | 2.15 (.90)

Notes: A Danish news website publishes articles about each question to a minister in parliament and the subsequent answer. These articles are written by ….
¹ Recently one of these published articles mistakenly referred to Poul Schlüter instead of Mette Frederiksen as Prime Minister in a question about Corona.
² Scale from 1 (fully disapprove) to 5 (fully approve).
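To make the scale construction and the condition comparisons concrete, the sketch below shows one way the approval score in the rightmost column of Table 2.1 could be built and two conditions compared. It is illustrative only, under assumed column names (condition, appropriate, forbid, trust); it is not the analysis code used for the experiment, and the equal-variance t-test is an assumption.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


def build_approval_scale(df: pd.DataFrame) -> pd.Series:
    """Combine the three 1-5 items into one approval score (item b reverse-coded)."""
    items = pd.DataFrame({
        "appropriate": df["appropriate"],   # (a) it is appropriate ... to write these articles
        "not_forbid": 6 - df["forbid"],     # (b) should be forbidden, reverse-coded
        "trust": df["trust"],               # (c) I trust the ... writing these articles
    })
    print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
    return items.mean(axis=1)


def compare_conditions(df: pd.DataFrame, a: int = 1, b: int = 3) -> None:
    """Independent-samples t-test of approval between two experimental conditions."""
    approval = build_approval_scale(df)
    group_a = approval[df["condition"] == a]
    group_b = approval[df["condition"] == b]
    t, p = ttest_ind(group_a, group_b)      # equal variances assumed
    dof = len(group_a) + len(group_b) - 2
    print(f"t({dof}) = {t:.2f}, p = {p:.4f}")


if __name__ == "__main__":
    # Synthetic demo data only, so the sketch runs end to end.
    rng = np.random.default_rng(1)
    demo = pd.DataFrame({
        "condition": rng.integers(1, 9, 1200),
        "appropriate": rng.integers(1, 6, 1200),
        "forbid": rng.integers(1, 6, 1200),
        "trust": rng.integers(1, 6, 1200),
    })
    compare_conditions(demo)  # journalist (1) vs. automated system (3)
```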
As expected, there is a significant difference in the approval of news written by journalists (scenario 1; M = 3.33, SD = .79) and news written by automated computer systems (scenario 3; M = 2.20, SD = .88, t(295) = 11.72, p < .001). Thus, there is support for Hypothesis 1. What is more, people are more approving of news written by a journalist who makes a mistake (scenario 2; M = 2.96, SD = .82) than of an automated computer system when no mistake is mentioned (scenario 3, t(293) = 7.69, p < .001).

There are no significant differences between the six scenarios in which the articles are written by automated computer systems. Thus, contrary to Hypothesis 2, whether the algorithm made a mistake or not did not affect approval. Nor did approval improve when the scenario mentions that human journalists check the articles written by the automated system before publication (scenarios 5 and 7), or when the scenario mentions that the mistake is taken just as seriously as if it was made by a human.

Algorithms everywhere

Automated news selection is only one example of the growing role of algorithms in public life. Figure 2.2 shows that social media users are more positive about the use of algorithms in news selection and dissemination than in other areas of public life. News selection is one of the areas of public life where the fewest people think only humans should make decisions (24%). In line with the low trust in news algorithms described above, only a small minority (11%) thinks that algorithms alone should select the news one receives. On the other hand, a large majority (65%) thinks that algorithms should play a role in news selection, as long as there are also human journalists involved. Also for tasks like matching unemployed people with firms, accounting, speeding tickets, and the allocation of public funds, the majority of the respondents think that algorithms should play a role, but primarily with human oversight.

For other areas of public life, the public is much more skeptical about the added value of algorithms compared to humans making decisions. This is the case for decisions that have a life-changing impact on people's lives, such as hospital patient prioritization, parole, triage for nursing homes, or removing children from their families. Also, when it comes to democratic decision-making, such as elections or court cases, only a handful of respondents see a role for algorithms, and hardly anyone thinks that algorithms alone should make decisions.
Gen Z

Figure 2.3 shows how different generations think about whether algorithms should play a role in news selection. For all generations, the majority of respondents think that it is best that algorithms and humans together select news, but apart from that there are strong inter-generational differences. Among the Boomer generation (born between 1946 and 1964), 40% believe that humans alone should decide which news they receive, and only a small minority thinks that this should be decided by algorithms alone. The younger the respondents, the more positive they are toward news algorithms. Among Generation Z (born in 1997 or later), only 6% believe that humans alone should decide which news they receive. No less than 27% of this generation believes that algorithms should select news without the interference of human journalists. A secondary analysis of the other results presented confirms that Generation Z is the most positive toward news algorithms. They are more positive than older generations about how trustworthy, balanced, and objective news selected by algorithms is (results not shown). Still, when asked directly, only 8% of Generation Z say that they trust news selected by news algorithms.

Figure 2.2 Should humans or algorithms make decisions in different areas of public life? (Stacked bar chart showing, for each of 14 decision areas, the share of respondents choosing "humans alone," "humans and algorithms together," or "algorithms alone." The areas are: matching unemployed people with firms, accounting, the news I receive, targeted political advertisement, speeding tickets, allocation of public funds, hospital patient prioritization, job hiring decisions, parole (who should get it and when you are eligible), triage for nursing homes, who gets elected to the local city council, court cases, decisions on moral dilemmas (like euthanasia), and placing children outside of their family.)
Notes: N = 1210 social media users between 18 and 65. Question wording: Decisions in society are increasingly made by computer algorithms or with the support of computer algorithms. In your view, what is the best way to make decisions in the following areas? 1. Humans alone should make decisions, 2. Humans should make the decisions together with algorithms, 3. Algorithms alone should make decisions (% of respondents choosing each category).
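The generational percentages reported above, and displayed in Figure 2.3 below, amount to a row-percentage cross-tabulation of cohort by the "news I receive" item. A minimal sketch follows, assuming a respondent-level data frame with the hypothetical columns generation and news_decision; this is illustrative only, not the analysis code behind the figure.

```python
import pandas as pd


def generation_breakdown(df: pd.DataFrame) -> pd.DataFrame:
    """Row percentages per generation, as plotted in Figure 2.3."""
    labels = {1: "Humans alone",
              2: "Humans and algorithms together",
              3: "Algorithms alone"}
    table = pd.crosstab(
        df["generation"],                  # e.g., cohort labels from pew_generation() above
        df["news_decision"].map(labels),   # answers 1-3 to "The news I receive"
        normalize="index",                 # percentages within each generation
    ) * 100
    return table.round(1)
```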
Figure 2.3 Should humans or algorithms decide which news you receive? (Generational differences). (Stacked bar chart showing, for Generation Z (127), Millennials (326), Generation X (473), and Boomers (284), the share choosing "humans alone," "humans and algorithms together," or "algorithms alone.")
Notes: Number of respondents per generation in brackets. Question wording: Decisions in society are increasingly made by computer algorithms or with the support of computer algorithms. In your view, what is the best way to make decisions in the following areas? The news I receive. 1. Humans alone should make decisions, 2. Humans should make the decisions together with algorithms, 3. Algorithms alone should make decisions (% of each generation choosing each category).

2.6 Conclusion

This chapter presented the results of surveys studying news algorithm perceptions and trust among the general population. The surveys first and foremost showed a lack of trust in news algorithms. Only a very small minority said that they trust algorithms selecting news, which stood in marked contrast with trust in journalists selecting news. When it comes to trust in algorithms writing news, the results were similar. People even preferred journalists who make mistakes to write the news over algorithms that do not make mistakes. These results reveal low generalized trust in algorithms, which is not dependent on the actual performance of news algorithms. The low levels of trust in automatically selected and created news were not triggered by the term "algorithm" alone. When the more neutral term "automated computer systems" was used, people also expressed low levels of trust. This lack of trust is accompanied by skepticism about the ability of news algorithms to select content which is objective and presents diverse perspectives.

On the more positive side, the majority of the respondents think that algorithms are better than journalists at selecting news with personal relevance. People are more approving of news algorithms than of algorithms playing a role in other areas of public life, such as ethical and moral decision-making. Across generations, there is general acceptance of algorithms playing a role in news selection as long as there are also humans involved in the process. This suggests that the mental model of
what it means to select and write information is still very much based on the traditional image of a human journalist. This seems to change from generation to generation. The Boomer generation, which grew up with newspapers and traditional news outlets like radio and television, was less supportive of algorithmic news selection. There was more support for algorithmic news among younger generations, in particular Generation Z, who have used social media for news consumption from a young age. This could indicate that over time the general population will become more used to, and thus more accepting of, news algorithms. The next chapter will shift the focus from broad perceptions of algorithmic gatekeepers to a narrower analysis of the mediating power of algorithms on the social media platform YouTube.
  • 44. 3 The Mediating Power of Algorithms 3.1 Introduction From the 2016 US Presidential elections to debates about climate change or COVID-19 vaccines, concerns have been raised worldwide about the spread of misinformation on social media. Central in discussions about misinformation on social media is the mediating power of algorithms. As this chapter addresses, algorithms are blamed for stimulating the spread of misinformation, but at the same time seen as potential remedies against the spread of such information. Thus, algorithms play an important role in maintaining the boundaries between legitimate and illegitimate information. This chapter presents an analysis of the role of social media algo­ rithms in the spread and correction of misinformation. This is analyzed through a case study of YouTube’s algorithms and (mis)information concerning the treatment of autism. The treatment of autism spectrum disorder (ASD) is chosen as a case to analyze the role of algorithms in spreading and countering misinformation for two reasons. (1) Social media have become an important platform where people share and search for information about autism (e.g., Saha and Agarwal, 2015). (2) Apart from the importance of social media as a channel, the treat­ ment of autism is an interesting case, since it allows distinctions to be made between truthful information, misinformation, and correcting misinformation (see below). Thus, the research question which this chapter addresses is: Do YouTube’s algorithms recommend truthful information, misinformation, or corrections to misinformation concerning the treatment of autism? Before addressing this question, this chapter conceptualizes mis­ information and discusses the double role of social media algorithms in spreading and limiting misinformation. This is followed by a presenta­ tion of the case and a review of previous research on the mediating power of YouTube’s algorithms. The empirical analysis of the presence of misinformation and corrections to misinformation among YouTube’s DOI: 10.4324/9781003375258-3
  • 45. search results and recommendations reveals that the algorithms suggest some misinformation, but at the same time seem to limit the spread of directly harmful misinformation. The final chapter of this book will return to these results to discuss how social media platforms should best handle the mediating power of their algorithms. 3.2 Misinformation and algorithms From the perspective of the sender, misinformation can be defined as “false or inaccurate information, especially that which is deliberately intended to deceive” (Lexico, 2021). From the perspective of the receiver, mis­ information can be defined as “information that is initially believed to be valid but is subsequently retracted or corrected” (Ecker et al., 2014, p. 293). Misinformation can have severe negative consequences, both at the societal level and at the individual level. On the societal level, the general popula­ tion, electorate, or political elite may make decisions which go against the best interests of society. On the individual level, misinformation can lead people to make decisions which are directly harmful for themselves and others. Misinformation is worse than ignorance, since research has con­ sistently shown that once people have encountered misinformation it is difficult to correct and even when people know that information is false, it remains influential (e.g., Ecker et al., 2014 for an overview). Misinformation is not a new phenomenon, but the social media en­ vironment has made it a highly pressing topic for at least three reasons. First, it is easy to publish misinformation on social media which can potentially reach a large audience when it spreads virally. Second, it is more difficult for social users to distinguish misinformation from reliable information than it would be on more traditional channels. In news­ papers and on television, it is easier for the audience to identify the sender of a message than on social media. Third, once misinformation has been published and shared on social media, it is difficult to control and correct this information. Vosoughi et al. (2018) have shown that on Twitter, false information reaches more people and is spread faster than truthful information. They point to the newness of false information and its emotional appeal as possible explanations for why this type of content spreads so rapidly. The presence and spread of misinformation on social media have led to broad concerns among the general population and public authorities. A poll in the United States in 2020 showed that two-thirds of Americans thought that social media have a negative impact on the country and that the main reason for this was concerns about the spread of mis­ information (Auxier, 2020). Public authorities like the FBI (2020) and the European Commission (2022b) have warned against the spread of misinformation on social media. 32 The Mediating Power of Algorithms
  • 46. Algorithms play a double role in this spread of misinformation. On the one hand, algorithms are blamed for fostering the spread of mis­ information, for example by leading people from more moderate content toward more extreme content (e.g., Ribeiro et al., 2020) or by priori­ tizing misinformation because the content is emotional, novel, and quickly gains a good deal of traction and interaction (e.g., Avaaz, 2020). On the other hand, social media companies use algorithms and machine learning to counter the spread of misinformation. Social media companies themselves become increasingly aware of misinformation on their platforms and take action against it. For example, Facebook states on its website that it works actively toward “stopping false news from spreading, removing content that violates our policies, and giving people more information so they can decide what to read, trust and share” (Facebook, 2020). Algorithms often play a central role in the detection of misinforming content and subsequent actions taken to correct it (e.g., Gillespie et al., 2020). YouTube for example makes an active effort to reduce recommendations for videos containing misinformation. It announced in 2019 that it takes action aimed at “reducing recommen­ dations of borderline content and content that could misinform users in harmful ways – such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11” (YouTube, 2019). This misinformation is identified using a “combination of machine learning and real people,” involving “human evaluators and experts from all over the United States to help train the machine learning systems that generate recommenda­ tions” (YouTube, 2019). With such efforts, algorithms take on a pow­ erful role in content moderation, which can be defined as “the detection of, assessment of, and interventions taken on content or behavior deemed unacceptable by platforms or other information intermediaries, including the rules they impose, the human labor and technologies required, and the institutional mechanisms of adjudication, enforcement, and appeal that support it” (Gillespie et al., 2020, p. 2). The intention to correct misinformation by using algorithms is inher­ ently praiseworthy. Given the enormous amount of content uploaded onto social media platforms, automation can be a way to address problems of scale. At the same time, scholars warn of the potential risks that this new approach to content moderation poses (e.g., Cobbe, 2021; Gillespie et al., 2020). Some of the risks mentioned by Aram Sinnreich (in Gillespie et al., 2020) are the following. First the danger of false positives and negatives; machine learning might wrongfully identify some correct information as misinformation, while overlooking other cases of misinformation. Without human oversights, this will go uncorrected. Second, subtle and nuanced distinctions and assessments require human judgment. If these decisions are left to algorithms, the more rigid algorithmic logic will determine between The Mediating Power of Algorithms 33