Wendy M. Grossman
Freelance Writer, Founder of the British Magazine The Skeptic
Close to the end of the Cambridge Disinformation Summit, an audience member posed a conundrum: What about flat earth belief? People arrive at adulthood with it thoroughly “prebunked” at home and in school. It’s been thoroughly debunked, fact-checked, and disproven—yet some people still believe it. What’s up with that?
British skeptic Michael Marshall could have helped answer this question; his talk “Circular Reasoning: The Rise of Flat Earth Belief” explains what he learned over several years of hanging out with those believers. But that’s one case study, not a principle.
In answering the question, Stephan Lewandowsky, chair in cognitive psychology at the University of Bristol, explained it as an under-researched aspect of social media. Flat earth believers are spread so thinly in the general population that they could all go for years without meeting another one. But online they find each other, and suddenly there’s a community of 1,000 who can meet every day.
“The moment you have community, it becomes entrenched, and external debunking is part of the conspiracy,” Lewandowsky said. This, he added, is something the internet has brought that we’ve never had before.
Andrea Liebman, a senior analyst at the Swedish Psychological Defence Agency, added, “People don’t understand how things work.” If you were challenged right now—this second—could you explain how you know the Earth is round?
Liebman’s job doesn’t exist in most countries: her agency was set up in 2022 to increase the resilience of the Swedish population against malign influence directed at the country by foreign actors. Another country pursuing a similar line is nearby Finland. At its National Cyber Security Centre, Jussi Roivanen trains civil servants to deal with campaigns targeting Finns. “I don’t see that we’re winning any war,” Roivanen said. “There was a disinformation problem 200 years ago, and there will be one in the future.” Roivanen placed himself at one end of a spectrum that ran through the conference. The other end was expressed most clearly by former propagandist (his description) Stephen Jolly, Professeur Agrégé de Géopolitique at France’s Rennes School of Business, who called disinformation “an existential threat.”
These two takes sound incompatible, and no consensus emerged in either direction, but the summit’s convener, Alan Jagolinzer, a professor of financial accounting at Cambridge, nonetheless came away with hope. He believes the event made progress toward the collaboration he hoped to foster, first by raising awareness and building connections and second by bringing a business framing to disinformation.
It may seem surprising that a conference on disinformation was called by a professor of financial accounting, but, he says, “When you place disinformation in the context of financial fraud, people don’t like it.” He thinks the financial framing also helps resolve the inevitable questions of censorship and free speech. “In that context, people are very comfortable with accountable communication.” It is, at least, a way of sidestepping the polarization that comes into play as soon as you focus instead on pandemic response, climate change, or attacks on democratic institutions.
Jagolinzer’s opening remarks also indicated that he has a personal connection to these issues: “Not too long ago, my nephew died from pandemic-related misinformation.” To set the tone, Jagolinzer proposed identifying disinformation by six characteristics: (1) there is a malign actor or set of actors who (2) have incentives that might generate financial, social-emotional, and/or physical benefits to (3) produce and disseminate an intentionally misleading, harmful, and typically emotionally triggering message (4) through selected dissemination channels to (5) a specially targeted audience (6) for exploitation.
Jagolinzer believes these characteristics work across many settings beyond his own area of financial fraud: authoritarian politics, blocks on attempts to fix huge problems such as climate risk or the COVID-19 pandemic, and the marginalization of out-groups. Following this statement of the problem, the rest of the summit focused on studying the problem through the lens of different disciplines and considering ways to collaborate. The range of participants was fairly broad, including psychologists, lawyers, political scientists, AI developers, behavioural scientists, a former cult deprogrammer, and experts on the inner workings of adtech.
Two significant communities were missing, however: skeptics and cybersecurity experts, who have long been grappling with state-sponsored attacks that leverage individuals’ vulnerabilities and who began harnessing multidisciplinary approaches twenty years ago. It’s long been clear that countering disinformation requires a similarly multi-pronged approach.
Given Jagolinzer’s criteria, Jolly added context based on his own experience “doing the stuff.” He sees propaganda generally as “part of a spectrum of persuasion” that runs from preaching in church and advertising through to military deception. Disinformation, “the covert strategic use of untruth to effect political ends,” is part of that spectrum.
Many of the panels focused on the view from a particular sector: the psychology of belief so familiar to skeptics, manipulation through microtargeting, the incentives and profits behind disinformation, and the prospective impact of generative AI. The journalism panel was predictable in advocating fact-checking and improved media literacy. Both are frustratingly slow. In the days of the telegraph, a lie might have gotten halfway around the world while the truth was putting its pants on, but today that lie has built communities of adherents while the truth is still ordering the pants from a catalogue. AI can help identify and remove low-hanging fruit, but, as numerous critics have highlighted, there’s too little effort to cover minority languages and cultures, with the result that “AI is largely ineffective in the places where platforms’ markets are smallest and they are least motivated to invest in improvements or even set up an office.” “AI is bad for humans, good for propaganda,” Jolly observed early on.
In a discussion of possible solutions, Sander van der Linden, a professor of social psychology at Cambridge and author of the recent book Foolproof: Why We Fall for Misinformation and How to Build Immunity, advocated “prebunking.” That is, on a vaccine analogy, he believes people can be inoculated against extreme beliefs by being “infected” via simulations and by focusing on techniques rather than claims. Later, Stephan Lewandowsky warned that inoculation theory will never be able to overcome a giant machine such as Facebook, whose profits depend on keeping people engaged and angry.
“There is a striking, strong political asymmetry in the source of misinformation. That’s the elephant in the room, and I don’t think we should ignore it,” Lewandowsky said. In a recent paper, he found a startling difference between the news sources shared by Republicans and Democrats.
Lewandowsky nonetheless has hope: he believes the emotional component of misinformation can be harnessed to sensitize people to manipulation. In work in progress using Brexit as a hot-trigger issue, he is showing that if you remind people of the things they have in common, they become less polarized.
When asked what success looks like, however, the closing panel produced little direct response. As the activist Jillian C. York said in Alex Winter’s recent documentary The YouTube Effect (which the conference screened), it depends on what you’re looking for.
One person’s disinformation is another’s valid viewpoint—which is why Jagolinzer’s approach, identifying disinformation by behaviour rather than content, stands up well to scrutiny. You can, he says, sit down with someone and show how these characteristics worked in a variety of cases of known fraud such as Theranos, Enron, Wirecard, and Carillion, and then show how they apply in other scenarios such as COVID-19 misinformation, religious cults, or even gaslighting relationships.
Jagolinzer himself hopes this event is a beginning, and he’s pleased to see participants already linking up to form projects. “The conference was designed to bring people together,” he says, “and raise awareness that we are collectively thinking about this…. I don’t know what the answers are, but the idea is to start working now in subgroups and think through some of these discussions.” He is beginning work on adding disinformation risks and responsibilities to the curriculum for business leadership: “We don’t teach that at all in business schools.”
Courtesy: The Skeptical Inquirer