Joan Donovan, misinformation expert and research director of the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy. Photo / Alex Burton
By Chris Keall, Business, NZ Herald. 16 June 2023
Misinformation expert Joan Donovan is trying to get a point across to the big social media firms – and it’s not what you’d think.
“One of the things that we’ve tried to do with our research is show tech companies how small the problem really is,” she says.
Donovan is research director of the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, and is currently on the second leg of a two-week speaking tour in June, supported by the US Embassy in New Zealand, in partnership with Tohatoha Aotearoa Commons and Koi Tū, the Centre for Informed Futures.
She said a small number of bad actors were able to exploit social media platforms’ technology to hugely amplify their messages, often using memes to take messages from the fringe to the mainstream.
That’s a take supported by local research. Analysis by The Disinformation Project found that just 12 people were behind the bulk of conspiracy and disinformation posts that inflamed tensions in the build-up to the violent climax of the February and March 2022 protests outside Parliament. (In the world of fake content, “misinformation” is false information shared unwittingly; “disinformation” is fake news that’s deliberately and knowingly planted.)
There’s cross-border incitement, too.
“We know very well that the openness of these tech platforms [is] being exploited by foreign actors to increase polarising rhetoric, hate and harassment,” Donovan said.
Wherever they’re from, creators of fake content have become highly networked, she said. They’re “leaving little breadcrumbs” around the internet, in the form of catchy memes, targeting politicians – or journalists – they know could take the bait.
And the nature of the modern news cycle means misinformation can spread across news services, and countless repeater sites and accounts, before it is debunked (it should be noted that wire services also play a part in calling out conspiracies, such as AP’s regular “A look at what didn’t happen this week”).
Case studies collected by Donovan range from inadequate responses by social media – and sometimes more traditional media – such as carefully seeded fake claims that immigrants at the US’s southern border carried Ebola, to instances where social media unfairly copped the blame.
That was the case with a “media-fuelled” social panic about a “slap a teacher” craze that had allegedly been sparked by a TikTok challenge. It was in fact a hoax – and one largely pushed through Facebook accounts.
There was no such craze but, like other fake news chronicled by Donovan’s group, it was used to push various political agendas regardless.
One certainty about this year’s election is that there will be a blizzard of misinformation on social media. How can we sort the wheat from the chaff?
“Disinformation is effective because it shows up looking like authentic source material,” Donovan said.
“Going to rallies and speeches of politicians, what you want to listen for are strange turns of phrase or slogans that the politicians are using and then go back online and search for those slogans and try to understand what’s happening in the digital media environments and how are people being mobilised.”
Is there any government or regulator that Donovan thinks is doing a good job of wrangling misinformation?
She points to a measure recently passed by the EU. “The Digital Services Act is a good step forward in terms of offering up ways of auditing tech platforms.”
The legislation includes an emergency mechanism to force platforms to disclose what steps they are taking to tackle misinformation or propaganda in light of Covid-19 and the war in Ukraine. It also puts tough new rules in place around marketing to children and bans manipulative design techniques that lead people to unwittingly click on content, known as dark patterns.
Tech companies that break its provisions risk a fine of up to 6 per cent of their global turnover.
British bulldog
Donovan also credits the UK, where John Edwards, formerly NZ’s Privacy Commissioner and now Britain’s privacy czar, recently slapped TikTok with a £12.7 million penalty for mishandling children’s data. (Edwards’ successor is about to embark on an exercise to see if our Privacy Act is fit for purpose in this area, which will run in parallel to the free-ranging consultation over a possible new super-regulator.)
But she qualifies, “Unfortunately if it doesn’t happen in the country where these companies have their home offices like the US, it’s not going to have the kind of teeth to get these companies to think about how their products are being weaponised.”
One of Donovan’s main points of focus, however, is trying to get governments to enforce laws already in place.
She also wants the social media firms to be more assertive in using the tools they already have in place, and to introduce new measures.
But with Twitter and many of its peers culling their misinformation teams this year, we risk heading in the opposite direction.
“What’s sad to see is that these companies that are making billions of dollars in profit are extracting those profits and not reinvesting in their workforce and not reinvesting in improving their products,” Donovan said.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is technology editor and a senior business writer.