VRA 2022 Teaching Visual Literacy session. Presenter: Molly Schoen
Our everyday lives are more saturated with images and videos than at any other time in human history. This fact alone underscores the need to teach visual literacy skills at all stages of education, from pre-K to post-grad. Learning how to read images with critical, analytical eyes is crucial to understanding the world around us as we see it represented in the news, social media, advertisements, etc. New technologies have exacerbated this already urgent need for visual literacy education. Synthetic media, deepfakes, APIs, bot farms, and other forms of artificial intelligence have many innovative uses, but bad actors also use them to fan the flames of disinformation. We have seen the grave consequences of this age of disinformation, from undermining elections to attempts to delegitimize science and doctors, undoubtedly raising the death toll of the COVID-19 pandemic. What do we need to know about these new forms of altered images made by artificial intelligence? How do we distinguish real, human-made content from computer-generated fakes, which are becoming more and more convincing? This paper aims to raise awareness of how new forms of visual media can manipulate and deceive the viewer. Audience participants will learn how to empower themselves and their peers to become more savvy consumers of visual materials by understanding the basics of AI and recognizing the characteristics of faked media.
Disinformation and Deepfakes: The Urgent Need for Visual Literacy
1. Disinformation and Deepfakes: The Urgent Need for Visual Literacy
Teaching Visual Literacy session
Visual Resources Association 2022 annual conference
Molly Schoen, Visual Resources Curator
Fashion Institute of Technology
2. Visual Literacy: an Overview
Association of College & Research Libraries’ definition
Visual literacy is a set of abilities that enables an individual to effectively find, interpret, evaluate, use, and create images and visual media. Visual literacy skills equip a learner to understand and analyze the contextual, cultural, ethical, aesthetic, intellectual, and technical components involved in the production and use of visual materials. A visually literate individual is both a critical consumer of visual media and a competent contributor to a body of shared knowledge and culture.
3.
4. Definitions
Artificial Intelligence
The theory and development of computer systems able to perform tasks that would normally require human intelligence
Machine learning
A branch of AI based on the idea that computers can learn, automate tasks, and make decisions from data sets and other information.
Remember: despite seeming neutral, AI is not free of bias.
5.
6. Definitions
- Deepfake: synthetic media in which a person in an existing image or video is replaced with someone else’s likeness
- Machine learning and artificial intelligence can manipulate or generate media with a higher potential to deceive
- Uses of deepfakes range from playful and satirical to disturbing and harmful
  - Humorous fake videos of Tom Cruise
  - Fabricated admissions of defeat by world leaders
- As of now, many deepfakes are relatively easy to detect when you know what to look for
- But the technology will improve, and fake videos and visual media may become nearly indistinguishable from the real thing.
7.
8. Definitions
- Disinformation vs. Misinformation
  - Misinformation is incorrect or misleading information presented as fact, regardless of the intent of its author or distributor
  - Disinformation is deceptive: it is created and spread deliberately to deceive or harm others
9.
10.
11. Definitions
- Bots: bot accounts, bot farms
  - Spam accounts, sometimes created by computers in massive numbers using algorithms
  - Example: Oneworld.press, a Russian bot farm aiming to disrupt American elections and spread pandemic-related disinformation
- Troll farms
  - Paid or volunteer individuals coordinated in online attacks to interfere with how others think, act, or vote
  - Their goal: erode trust
- 80% of fake news sources are shared by 0.1% of users, the "super-sharers"
12. The problem
- False information spreads faster, further, deeper, and more broadly than accurate information
  - Why? It is designed to be more scandalous and provocative than actual news, so it gains more attention and is more memorable than real news
- In 2017, 47% of Americans said they get most of their news from social media
- People tend to believe what they see and hear first.
  - Correcting misinformation can have the effect of spreading it, because it keeps the subject on people’s minds
- Misinformation, when conveyed with authority and confidence, is more believable than accurate information that happens to be vague or incomplete
13. The problem
What do we do when mis- and disinformation comes from supposedly reputable sources?
- Fox News
- The 45th president
- Georgia Department of Public Health
14. Combatting disinformation
- Spread awareness to your peers and students
- Require citations for media in addition to written sources in student work
- Only allow citations from reputable sources
- Always cross-check against different sources
15. (Graph x-axis labels, left to right: 28-Apr, 27-Apr, 29-Apr, 1-May, 30-Apr, 4-May, 6-May, 5-May, 2-May, 7-May, etc.; the dates are not in chronological order)
16. Combatting disinformation
Know how to communicate with others who believe in fake news
- Scolding, arguing, or lecturing will only make them dig in their heels more
- Instead, get curious! Ask them where they found this information. Ask them why they believe in it. Ask them if they’d like to hear your reasoning as to why you believe the information is false.
17.
18. Combatting disinformation
For deepfakes, learn how to identify them
- What is the context of the photo?
- Are there any weird glitches? Do the borders surrounding a person’s face look blurry, wavy, or otherwise just not right?
Learn the forensics of photos – FotoForensics.com is a good place to start
- Is this the original photograph?
- Has it been cropped?
- How has it been altered?
- Why was it altered?
  - For cosmetic reasons?
  - To fit a certain size?
- Look at the image’s metadata for clues
19. Combatting disinformation
- Ask yourself:
- What do I see in this image?
- What don’t I see?
- Who created this image?
- Who published it and why?
- How has it been altered?
- Am I reacting emotionally?
20.
21. Reputable sources have standards
“AP pictures must always tell the truth. We do not alter or digitally manipulate the content of a photograph in any way.”
24. Sources (in order of presentation)
Toledo Museum of Art (2019). Why Visual Literacy? Website. Retrieved 2022-03-28. https://www.toledomuseum.org/education/visual-literacy/why-visual-literacy
Galloway, J. (2019). Photo. In the personal collection of the author.
Barber, G. (2019). Deepfakes Are Getting Better but They’re Still Easy to Spot. Wired. Retrieved 2022-03-28. https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot/
Katz, G. C. (2021). Tweet. Retrieved 2022-03-30. https://twitter.com/gwenckatz/status/1381652071695351810?lang=en
Liebieghaus (2020). Gods in Color: Polychromy in Antiquity. Retrieved 2022-03-30. https://buntegoetter.liebieghaus.de/en/
Delgado, M. (2020). The Colorful History of the Troll Doll. Smithsonian Magazine. (Photo). Retrieved 2022-03-30. https://www.smithsonianmag.com/innovation/colorful-history-troll-doll-180974634/
Guarino, B. (2019). Older, right-leaning Twitter users spread the most fake news in 2016, study finds. Washington Post. Retrieved 2022-03-30. https://www.washingtonpost.com/science/2019/01/24/older-right-leaning-twitter-users-spread-most-fake-news-study-finds/
25. Sources (in order of presentation)
Shearer, E.; Gottfried, J. (2017). "News Use Across Social Media Platforms 2017". Pew Research Center's Journalism Project. Retrieved 2022-03-28.
Covid-19 Data Misrepresented by Georgia Health Department (2020). Columbia Law School. Retrieved 2022-03-30. https://climate.law.columbia.edu/content/covid-19-data-misrepresented-georgia-health-department
Zhang, M. (2016). Nikon Awards Prize to Badly ‘Shopped Photo, Hilarity Ensues. Petapixel. Retrieved 2022-03-30. https://petapixel.com/2016/01/29/nikon-awards-prize-to-badly-shopped-photo-hilarity-ensues/
Christina @wocintechchat.com (n.d.). Photo. Retrieved 2022-03-30. https://unsplash.com/photos/eF7HN40WbAQ
Arego, P. (2016). Sears’ Trouser Legs. Photo. Retrieved 2022-03-30. https://phil-are-go.blogspot.com/2016/12/sears-trouser-legs.html
Wright, E. (1917). Frances Griffiths with Fairies. (Photo). Retrieved 2022-03-30. https://en.wikipedia.org/wiki/Cottingley_Fairies#/media/File:Cottingley_Fairies_1.jpg
Editor's notes
Thank you Allan for that introduction and to my fellow panelists, Debra and Michalle. This presentation is called disinformation and deepfakes: the urgent need for visual literacy. My aim for this presentation is to provide an overview of synthetic media and how it can disrupt our way of thinking. This is still a new, emerging area of study and I haven’t found a lot of research on it. I hope to encourage us, as members of the VRA, to be proactive and vigilant in disarming these new forms of disinformation.
I thought I would begin with some definitions. Many of us are familiar with the term visual literacy, but for those who aren’t, let me go over it briefly. There are multiple definitions of visual literacy, but in a nutshell, they all deal with how we read images. For many of us, the Association of College and Research Libraries’ definition of visual literacy is the most relevant, so I have it here on the screen. Visual literacy is a set of abilities that enables an individual to effectively find, interpret, evaluate, use, and create images and visual media. Visual literacy skills equip a learner to understand and analyze the contextual, cultural, ethical, aesthetic, intellectual, and technical components involved in the production and use of visual materials. A visually literate individual is both a critical consumer of visual media and a competent contributor to a body of shared knowledge and culture.
This slide is from the Toledo Museum of Art. They do a lot of work with visual literacy and I like how they’ve summed it up with these key points: Visual Literacy is a form of critical thinking. Visual literacy is important for people in every field. Visual literacy is all about understanding what you see. Well, nowadays we see a lot more visual media than ever before, and we can never be totally sure how much of it is fake or misleading.
Next I want to briefly define artificial intelligence and machine learning. AI is of course just what it says: intelligence that is artificial, i.e., created by machines rather than by humans. Machine learning is a branch of AI based on the idea that computers can learn, automate tasks, and make decisions from data sets and other information fed into them.
It is very important to remember that AI is not neutral. While machines are able to generate their own material, they are only able to do so after being programmed with algorithms written originally by humans. These algorithms often reinforce existing biases in our society. Take for example this photo of my husband and me, which I uploaded to a facial recognition interface. The computer predicted that my husband—a white man—was a highly educated doctor, even a heart surgeon. The same algorithm predicted that I—a white woman—had a less distinguished career of a “critter sitter.” The site that ran this little tool was later taken down because of complaints of it being racist and sexist, with women and people of color being labeled as having less notable careers that required less education.
These are some examples of synthetic media made by computers using artificial intelligence. When trained to identify patterns in images of human faces, the computer can generate surprisingly realistic looking results. These portraits show an ability to generate realistic looking videos from a single image, like a painting.
These are called deepfakes, which is the name given to a synthetic media in which a person in an existing image or video is replaced by someone else’s likeness.
The person shown here on the right side of the screen doesn’t actually exist. This is a computer-generated image. If you want to see more and test your own knowledge, visit whichfaceisreal.com. You’ll see some faked faces are easy to spot—they have obvious glitches or have blurry areas—but others, like this one, are very convincing. The technology will only improve over time.
We see how artificial media can be used to deceive. Like any other technology, these faked images can be used innocently or for more nefarious uses. The problem is we can’t always recognize what is real and what is fake. So when we see a photorealistic image, we assume it is a real photo of an actual person and / or an event that occurred.
Instead of “seeing is believing,” we should remember “looks can be deceiving.”
Finally, I want to go over misinformation and disinformation—terms we’ve seen thrown around a lot in the news lately. It can be easy to mix them up, but there’s an important difference between the two terms.
Misinformation is misleading; disinformation is deceiving. Misinformation is inaccurate, incomplete, or otherwise misleading information presented as fact, regardless of the intent of its author. Disinformation is a kind of misinformation that is created and spread deliberately with the intent to fool others.
The image here on the screen looks ordinary, but it is actually an instance of misinformation. This was an early color photo by Sergey Prokudin-Gorsky from 1909-1915. Or so we may think.
Compare that to this image, which is much more vibrantly hued. This in fact is the faithful rendering of the original photograph. A researcher and author named Gwen C. Katz digitally desaturated this image, rendering it in black and white. She then uploaded the black and white image to a colorization API, a program that uses AI to colorize black and white images. Using this and other examples, she found that the computer routinely failed to recapture the true vividly-colored nature of the originals.
Anyone can easily try this for themselves on deepai.org. These colorized photos can be considered misinformation because we are missing the full color story. The colors of the garments are inaccurate. We tend to think of the past in dusty, faded hues—when in reality this is just not true.
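The desaturation step in Katz's experiment can be sketched in a few lines of Python. This is a minimal illustration, not her actual code: it uses the standard ITU-R BT.601 luma weights (the same weighting common imaging libraries apply when converting to grayscale), and the two sample colors are invented for demonstration.

```python
# Collapsing color to grayscale discards hue and saturation, which is
# why no colorization model can truly recover the original palette.
# Luma weights below are the ITU-R BT.601 coefficients.

def to_grayscale(pixels):
    """Convert a list of (R, G, B) tuples to single luma values (0-255)."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

# Two very different colors can map to the same gray value.
vivid_red = (200, 30, 30)
dull_teal = (30, 103, 103)
print(to_grayscale([vivid_red, dull_teal]))  # → [81, 81]
```

That two quite different colors collapse to the same gray value is exactly why recolorization is guesswork: the hue information is simply gone, and the AI can only supply a plausible, usually muted, substitute.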
The classic white marble statues of Ancient Greece and Rome were actually painted in vibrant colors, but over centuries and centuries, the pigment has flaked off. The sterile white statues we see at museums are original artifacts, of course, but some of their story has been lost. On the right you see a replica of what such a statue may have looked like in its heyday, made by investigating microscopic pieces of pigment left on the original. It goes to show, then, that looks can be deceiving.
The issues surrounding synthetic media and visual literacy are compounded by how rapidly such media spreads. So I want to also identify some key terms related to how misinformation spreads, namely through bots and through trolls.
Bots are a nickname for robots. They refer specifically to spam accounts, often produced by machine in mass quantities known as bot farms. They can post to social media accounts automatically, making it look as though a human has in fact made the post.
Trolls are actual people, and in groups they form what is known as “troll farms.” These trolls will also make fake accounts on social media. They may be paid or volunteer individuals coordinated in online attacks to interfere with how innocent others think, act, or even vote. Often their goal is to erode trust in reputable, authoritative sources, so that fake news sources seem more legitimate in comparison.
A study of Twitter sharing during the 2016 U.S. election found that 80% of fake news sources were shared by just 0.1% of users, who are known as "super-sharers." In this case, sharing is most definitely not caring!
Fake news is designed to be more salacious in nature. The creators of fake news want you to click on their links, either for ad revenue, to phish for your information, or to influence how you vote in an upcoming election. To get those clicks, they need headlines that pop—that teeter on the edge of believability. Considering that in 2017, 47% of Americans said that they get most of their news from social media, this is a huge problem.
People tend to believe what they see and hear first. So even if an item of fake news is later corrected or retracted, the problem remains that the original false information is still in people’s heads. Correcting misinformation can have the unintended consequence of spreading that misinformation. It has also been found that misinformation, when conveyed with authority and confidence, is more believable than accurate information that happens to be vague or incomplete. We’ve seen this recently in the pandemic, when TV pundits made bold but unverifiable claims about the disease, while actual scientists spoke the truth: that the disease was too new to know very much about.
Those of us who have done library instruction on visual, media, or information literacy know that we cannot overstate the importance of telling students to use reputable sources. Reliable sources should have a reputation for accuracy and a lack of bias. But what happens when one becomes corrupted, say for instance in the office of the 45th president? If the leader of the country has been proven a liar, whom can we trust? Unfortunately I don't have a quick or easy answer for that, but I do have more questions that can help students understand these issues, rendering them more visually literate.
Here is how we combat disinformation. Start by spreading awareness of the problem to your peers and students. For student work, require citations not just for text-based sources, but for visual media too. Limit their allowable sources to reputable ones only: peer-reviewed journals, the Associated Press, museum publications, and so on. Wikipedia is preferable to many commercial websites.
Remind students to validate their findings by cross-checking their research. We need to remember to do this as professionals, too.
Showing examples of tricky disinformation can be illuminating too. In this uncorrected graph, actually published by the Georgia Department of Public Health on May 13, 2020, it appears that Covid cases in five Georgia counties were declining over time. However, on closer inspection, we see that the X-axis is not arranged in chronological order, but from the highest number of cases to the lowest.
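The trick behind that graph is easy to reproduce. Below is a minimal sketch with invented dates and counts (not the actual Georgia DPH data): sorting a chart's x-axis by value instead of by date turns any noisy series into an apparent steady decline.

```python
from datetime import date

# Invented daily case counts for illustration.
cases = {date(2020, 4, 27): 92, date(2020, 4, 28): 110,
         date(2020, 4, 29): 85, date(2020, 4, 30): 120,
         date(2020, 5, 1): 70}

# Misleading: order the axis from the highest count to the lowest.
by_value = sorted(cases, key=cases.get, reverse=True)
# Honest: order the axis chronologically.
by_date = sorted(cases)

print([cases[d] for d in by_value])  # always "declines": [120, 110, 92, 85, 70]
print([cases[d] for d in by_date])   # the actual trend: [92, 110, 85, 120, 70]
```

Whatever the underlying data do, the value-sorted axis can only ever show a decline, which is why checking axis order is a basic visual-literacy habit.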
It’s also important to remember that even the most media literate individuals can still be fooled. In this example, the camera maker Nikon announces the winner of a recent photo contest. At first glance, it appears to be a photo of a serendipitous moment, capturing a plane from a unique vantage point. But a second, closer look reveals the airplane looks awfully pixelated, and appears on a solid background color slightly different from the rest of the sky. Turns out even Nikon can be fooled by ridiculously bad Photoshopping.
t’s important to know how to best communicate with someone who may have fallen for a piece of fake news or other misinformation. No one likes to hear that they’re wrong, or worse yet, that they’ve been fooled. So scolding, arguing, or lecturing will only make them dig their heels in more—and will leave you feeling very frustrated. Instead, get curious! Ask them where they found this information. Ask them why they believe in it. Ask them if they might like to hear your reasoning behind why something is fake. Give examples of times when you’ve messed up and believed something that wasn’t real. Here’s one I fell for:
So in other words, Show, don’t tell.
As for deepfakes specifically, learn how to identify them. Does something seem “off” about the image or video? What is its context? Are there any weird glitches? Do the borders surrounding a person’s head look blurry or wavy? Also make sure to look closely at their teeth and the reflections in their eyes—these can be giveaways to faked images.
We as well as our students should also be aware of the forensics of digital images. The website Fotoforensics.com is a good place to start. Ask questions about what it is you’re looking at. Is this a copy of the original photo? Has it been cropped? Has it been otherwise altered? If so, why—for cosmetic reasons, to fit a certain size, or to give you an emotional reaction upon seeing it? Often looking at the image’s metadata will also yield valuable insight.
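Metadata inspection can start even simpler than FotoForensics: editing software usually writes its own name into a JPEG's embedded metadata blocks. The sketch below simply scans a file's raw bytes for a few well-known editor signatures; the signature list and the sample bytes are invented for illustration, and a match only proves the file passed through an editor, not that its content was faked.

```python
# Crude first-pass forensic check: look for editor names left behind
# in a JPEG's embedded metadata. Signature list is illustrative only.
EDITOR_SIGNATURES = [b"Adobe Photoshop", b"GIMP", b"Paint.NET"]

def editing_traces(jpeg_bytes):
    """Return the editor signatures found in the file's raw bytes."""
    return [sig.decode() for sig in EDITOR_SIGNATURES if sig in jpeg_bytes]

# Example: a fabricated in-memory file whose metadata mentions Photoshop.
sample = b"\xff\xd8\xff\xe1...CreatorTool=Adobe Photoshop 2022..."
print(editing_traces(sample))  # → ['Adobe Photoshop']
```

In practice you would pair a quick check like this with a dedicated tool (an EXIF viewer or FotoForensics' error level analysis), since sophisticated fakers can strip or forge metadata entirely.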
There are still more questions one should ask themselves when evaluating visual material. Among these, perhaps the most important is “what don’t I see?”
In order to get the bigger picture, we need to consider not only what we’re looking at, but also what we’re not seeing—what remains outside of the frame, what a photographer chose not to capture, which photos an editor chose not to include with an article.
It’s worth pointing out that reputable sources will have standards for their visual content as well as their text-based information. This is the Associated Press’ standard for its photos: “AP pictures must always tell the truth. We do not alter or digitally manipulate the content of a photograph in any way.”
There are many questions and not so many clear answers when it comes to ethics in emerging technologies. Remaining aware of these new media forms and training ourselves not to immediately believe everything we see is paramount in upholding the truth. Misinformation in visual materials is nothing new. But in order to stay on top of it, we must remain vigilant. Thank you.