Presentation on practical responses to misinformation as part of hybrid warfare, including the use of infosec frameworks to frame attacks and responses.
Nationstates: Qanon campaigns
“Action: continuous barrage of memes. All SM platforms
Hashtags: #HRCvideo #releasethevideo #maga #QAnon
Use top trending hashtags along with your posts. Share and retweet as much as possible”
Individual: report trolls/botnets
“Twitter (reportedly) suspended over 70 million accounts”
“Facebook created a human crisis team after algorithms failed it”
You’ve already heard a lot today about misinformation. I’ll just add a little to that.
Misinformation is deliberately false information. One example is the “fake news” sites above, which carry misinformation that’s used to gain advertising money, with clickbait tweets bringing people to them. Some of these currently carry the typical aliens and health-cure material, but many are political, trading on strong emotions like fear and on useful divisions in society.
Image: screenshot of http://www.sawthis.one/ 2018-07-08
“A type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional "con" in that it is often one of many steps in a more complex fraud scheme.”
Source: Wikipedia
Online misinformation is huge. A few hundred trolls and thousands of bots can affect millions of people at a time.
This is the scale that nationstate-run groups and pages, dedicated to creating division and confusion, typically work at.
Here are some of the Russian-owned Facebook groups shown to Congress: these high volumes of shares and interactions might include a lot of botnet activity, but the reach is still significant.
Misinformation is also moving from online to offline. Several times now, misinformation actors have sent invites to opposing groups to demonstrate at the same time in the same place.
https://twitter.com/JuliaDavisNews/status/994704834577215495
https://twitter.com/donie/status/957246815056908288
Misinformation is information that’s deliberately false (strictly that’s disinformation, but “misinformation” won as the term). The smallest form of online misinformation is ‘joke’ viral content: for example, in every disaster there’s someone who posts an image of a shark swimming in the street.
Image: http://www.politifact.com/truth-o-meter/statements/2017/aug/28/blog-posting/there-are-no-sharks-swimming-streets-houston-or-an/ and pretty much any major US disaster
And then, if you look, you can find organising pages for campaigns. Here are two Qanon “meme war organising pages”. Qanon is a major group, but it’s just one of many. Note that this is from March/April, and has a specific date on it, targeting a specific event.
Familiarity backfire effect
Memory traces
Emotions = stronger traces
Here are some common brain vulnerabilities. My favourites are the familiarity backfire effect, where if you repeat a message with a negative in it, people remember the message without the negative; and the fact that when people read, they take false information in as true before rejecting it - and in that fraction of a second they build other assertions off the false information, even if they *know* the original information is false.
This is targeting groups. This is one of the adverts from the set shown to Congress.
This stuff is everywhere online: the expected places (Facebook, Twitter, Reddit, Eventbrite, Medium, etc.) but also comment streams, and payment and event sites.
Social media buys reach and scale. 100 good bots = long game; 10,000 bad ones = short but effective.
You can also use other advertising techniques, and things like that familiarity backfire effect. Botnets are very useful for this, and very cheap: from about $150 for a difficult-to-find “aged” set down to a few dollars per thousand for recently created Russian bots. Buy the bots, use any of the handy online guides to set them up for messaging, retweeting and so on, or use some simple pattern matching or AI to make them harder to find.
One big weakness for attackers is that they have to tell you about themselves. They leave a lot of “artefacts” - ways to find them.
botsentinel.com
Here are some of them, including hashtags, URLs and adverts. A simple media search with Twitter, TweetDeck etc. will find a lot of these. On the right are the artefacts tracked as part of the Canadian elections.
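To make that concrete, here is a minimal sketch of an artefact search: it pulls recent tweets for a couple of the hashtags seen on the organising pages and tallies which accounts and URLs keep turning up. It assumes the older Tweepy search endpoint and placeholder credentials; the hashtag list is just an example from the slide, not a fixed methodology.

```python
# Minimal artefact-search sketch (illustrative): pull recent tweets for a few
# hashtags seen on misinformation organising pages, then tally the accounts
# and URLs that keep appearing. Credentials and hashtags are placeholders.
from collections import Counter

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

hashtags = ["#HRCvideo", "#releasethevideo"]   # artefacts from the organising page
accounts, urls = Counter(), Counter()

for tag in hashtags:
    # api.search is the older Tweepy name; newer versions call it search_tweets
    for tweet in tweepy.Cursor(api.search, q=tag, count=100).items(500):
        accounts[tweet.user.screen_name] += 1
        for url in tweet.entities.get("urls", []):
            urls[url["expanded_url"]] += 1

# Accounts and URLs that show up again and again are worth a closer look.
print("Most active accounts:", accounts.most_common(20))
print("Most shared URLs:", urls.most_common(20))
```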
There’s also a lot of content on fact-check sites (Snopes etc.); if you have the resources, it’s also possible to pay someone to go and look at an area being discussed.
Sometimes misinformation propagation is more subtle. These are a good place to look for that too.
You *can* report to platforms. So far this has been pretty underwhelming, but if we did it at scale, it could be interesting.
What would be good in an ideal system includes:
Realtime botnet removal
Realtime troll dampening
Etc
But that’s not where we are, so here are some others.
Two things here. Advertising works by putting adverts into slots on pages, so we can track unlabelled political ads, see the fake-news pages and the pages associated with them, and watch botnets visiting pages to drive up their ad revenue. And as communities, we can report the ads that appear on fake pages to the brands behind them.
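As a rough illustration of the ad-tracking idea, the sketch below fetches a suspect page and looks for references to a few well-known ad-network domains, so a community group knows which networks (and, through them, brands) to contact. The page URL is the example from the earlier slide; the ad-network list is an illustrative assumption, not a vetted catalogue.

```python
# Rough sketch: find which ad networks serve a suspected fake-news page,
# so the ads (and the brands behind them) can be reported.
# The URL and the ad-network domains below are illustrative placeholders.
import re

import requests

SUSPECT_PAGE = "http://www.sawthis.one/"           # example page from the talk
AD_NETWORK_DOMAINS = [                              # illustrative, not exhaustive
    "doubleclick.net", "googlesyndication.com",
    "adnxs.com", "taboola.com", "outbrain.com",
]

html = requests.get(SUSPECT_PAGE, timeout=10).text
found = {d for d in AD_NETWORK_DOMAINS if re.search(re.escape(d), html)}

print("Ad networks referenced on the page:", sorted(found) or "none found")
```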
And as an individual, there are still things you can do. One of these is to work with other people to block misinformation sources and channels; many anti-harassment apps can be repurposed for this.
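One low-effort way to do that collectively is a shared blocklist. The sketch below assumes a hypothetical plain-text file, shared_blocklist.txt, with one handle per line that a group maintains together, and applies it to your own account through Tweepy’s block endpoint (credentials are placeholders).

```python
# Sketch: apply a community-maintained blocklist to your own Twitter account.
# "shared_blocklist.txt" (one handle per line) is a hypothetical file a group
# might maintain together; credentials are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

with open("shared_blocklist.txt") as f:
    handles = [line.strip().lstrip("@") for line in f if line.strip()]

for handle in handles:
    try:
        api.create_block(screen_name=handle)   # block the account for this user
        print("blocked", handle)
    except tweepy.TweepError as err:           # already blocked, suspended, etc.
        print("skipped", handle, err)
```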
My favourite community is the Lithuanian “elves”, formed as an anonymous online group. They fight back every day against Russian misinformation, using a combination of humour and facts. It seems to be working.
Other cool things to do include overwhelming misinformation hashtags with other content, and hacking search terms to make disambiguation pages appear above misinformation sites.
Another group that’s got some traction is VOST (Virtual Operations Support Team), a team that supports responders in disasters: VOST Panama also used humour and “fake stamps” to counter misinformation, and helped me run a deployment on this during Hurricane Irma (when people also reported misinformation to FEMA and Buzzfeed).
You can also help in rebuilding damaged communities: this is The Commons Project, which uses a combination of bots, humans and peace techniques for this.
Image: SANS sliding scale of cyber security
This is a mock-up of the Global Disinformation Index
I’m leading a team working on writing a misinformation equivalent to the ATT&CK TTP framework.
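To give a feel for what an ATT&CK-style framework for misinformation means (the real schema isn’t shown in this talk), here is a hypothetical sketch of how a single technique entry might be structured: a campaign stage, a technique name and description, known examples, and candidate counters. The field names and content are illustrative only.

```python
# Hypothetical sketch of a single entry in an ATT&CK-style misinformation
# framework: a technique tied to a campaign stage, with known counters.
# Field names and content are illustrative, not the framework's real schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Technique:
    stage: str                      # where in the campaign this technique sits
    name: str
    description: str
    examples: List[str] = field(default_factory=list)
    counters: List[str] = field(default_factory=list)

hashtag_hijack = Technique(
    stage="Amplification",
    name="Trending hashtag hijack",
    description="Attach campaign memes to top trending hashtags to buy reach.",
    examples=["#HRCvideo / #releasethevideo meme barrage"],
    counters=["Overwhelm the hashtag with other content",
              "Report coordinated accounts to the platform"],
)

print(hashtag_hijack)
```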
There are still a lot of bots out there, but tactics, techniques and procedures are changing rapidly: we’re starting to see an early-infosec-style split into crude, script-kiddie-style botnets and more carefully crafted, responsive bots.
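As a very rough illustration of that crude/crafted split, the sketch below flags “script-kiddie” accounts by two simple tells: near-clockwork posting intervals and heavily duplicated text. The thresholds are arbitrary illustrations; real detection needs far more signals than this.

```python
# Very rough heuristic sketch: crude botnets tend to post on a near-constant
# schedule and repeat the same text; crafted, responsive bots look less regular.
# Thresholds are arbitrary illustrations, not tuned values.
from statistics import mean, pstdev

def looks_crude(posts):
    """posts: list of (unix_timestamp, text) for one account, oldest first."""
    if len(posts) < 5:
        return False
    times = [t for t, _ in posts]
    gaps = [b - a for a, b in zip(times, times[1:])]
    regularity = pstdev(gaps) / mean(gaps) if mean(gaps) else 0   # low = clockwork
    texts = [text for _, text in posts]
    duplication = 1 - len(set(texts)) / len(texts)                # high = copy-paste
    return regularity < 0.1 or duplication > 0.5

# An account posting identical text every 10 minutes trips both tells.
sample = [(i * 600, "Share and retweet! #releasethevideo") for i in range(20)]
print(looks_crude(sample))   # -> True
```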
image: https://medium.com/@MediaManipulation/tracking-disinformation-by-reading-metadata-320ece1ae79b