Sunday, 27 October 2019

A New Study Tells You How Fake News Is Spread On WhatsApp


Researchers found that the sensationalism of mainstream media formats and genres works very well when edited and used out of context in WhatsApp-based propaganda or misinformation.



PICTURE ALLIANCE VIA GETTY IMAGES
Representative image.

BENGALURU, Karnataka—A new WhatsApp-backed study has found that fake news is mostly spread by users who are prejudiced and ideologically motivated, rather than ignorant or digitally illiterate.
The report, titled WhatsApp vigilantes? WhatsApp messages and mob violence in India, is by a group of researchers from the London School of Economics who were among 20 teams selected by the company for a $50,000 grant to study how fake news spreads, and what the company can do to prevent this.
Facebook-owned WhatsApp is the most popular social media platform in India, with over 400 million users in the country. The app’s innocuous inbox is prime real estate for misinformation and downright hate, ranging from flagrant violations by political parties to communal propaganda. The rapid spread of information has also led to many Indians being lynched over rumours that they were child kidnappers, leading the company to introduce restrictions on forwarding messages. But then, reports revealed that a Rs 1,000 software tool could bypass WhatsApp’s restrictions. And if that was too hard to do, many, many companies were also offering WhatsApp mass-messaging as a service, automating propaganda and undercutting regulation.
The researchers—Dr Shakuntala Banaji and Ramnath Bhat at LSE, and Anushi Agrawal and Nihal Passanha at Maraa, a media and arts collective in Bengaluru—looked specifically at the issue of WhatsApp lynchings, with “a focus on the intersection of disinformation, misinformation, fake news, propaganda, mob violence, socio-political contexts of technology use, technological affordances and infrastructures, user experiences and motivations, media literacy, policy and regulation”.
“Since 2015, there have been more than a hundred instances of lynching,” said the report. “Many of these incidents victimise individuals from discriminated groups (Dalits, Muslims, Christians, Adivasis) based on allegations of cow slaughter, cow trafficking and cattle theft. Although the victims are targeted for different reasons, these incidents have in common mobs of vigilantes who use peer-to-peer messaging applications such as WhatsApp to spread lies about the victims.”
The researchers interviewed experts and conducted focus groups, talking to users across Karnataka, Maharashtra, Madhya Pradesh and Uttar Pradesh this year. They also gathered WhatsApp messages from multiple sources and analysed WhatsApp forwards, including texts, still photos and moving images.
“During focus groups and interviews with working and middle class users, men, women and young people, urban and rural as well as literate and illiterate users with a spectrum of political opinions, and during expert interviews, we examined the daily practices of WhatsApp usage in the contemporary Indian socio-political context,” said the report.
Perhaps unsurprisingly, they found that WhatsApp messages don’t exist in a vacuum. Although new technology platforms can accelerate the spread of certain messages, hate speech is not intrinsic to new platforms. The researchers found, for example, that particular stereotypes or narratives would appear on WhatsApp at the same time that they began circulating on social media, in mainstream news media or even in films.
“The fact that mainstream media has been responsible for broadcasting the hate-speech and stereotypes in the speeches of politicians without much criticism or questioning means that messages on WhatsApp which disparage particular communities or call for action against them (for example: Dalits, Muslims, Adivasis, Kashmiris, Christians) are less likely to be perceived as misinformation,” the researchers noted. “Likewise, the sensationalism of mainstream media formats and genres works very well when edited and used out of context in WhatsApp-based propaganda or misinformation.”

Who’s reading them?

One difference, though, is in who is receiving these messages on WhatsApp versus traditional media. The use of smartphones in India is highly gendered: according to the Internet and Mobile Association of India (IAMAI), there are 451 million Internet users in India, of whom fewer than 150 million are women. This bias is much sharper in rural India, where, barring a few social-good programmes, men still control access to devices, and with that, access to WhatsApp and the rest of the Internet as well.
“This finding needs to be considered in the context of the allied finding that the ready availability of digital technologies has contributed to new forms of physical and virtual violence,” the researchers said. 
They found that such violence is more likely to be directed towards women, especially those from marginalised communities, and can take the form of unsolicited sexts, sex tapes, rape videos and blackmail.
“A key finding is that there exists widespread, simmering distrust, hatred, contempt and suspicion towards Pakistanis, Muslims, Dalits and critical or dissenting citizens amongst a section of rural and urban upper and middle-caste Hindu men and women,” they added. “WhatsApp users in these demographics are predisposed both to believe disinformation and to share misinformation about discriminated groups in face-to-face and WhatsApp networks.”

And who’s sending these?

Although people in India already buy into various prejudices, this doesn’t mean that they all act on them. Some messages are more successful than others, and the researchers also looked at the impact of different messages. 
“Amidst the flow of hundreds of messages, the ones which stand out are those that convey a sense of immediacy, and those that can and do have shock value,” the study showed. “During elections, or during incidents of cross-border military action, simmering sentiments become high-intensity situations where the quality of disinformation and propaganda becomes immediately inflammatory.”
“In these circumstances, the chance of long-term discrimination turning into physical violence against particular demographic groups increases,” the researchers said.
You may have sighed in frustration at a message in your family WhatsApp group, wondering why anyone would send that. As it turns out, the researchers also wanted to answer the same question, and a number of reasons emerged, ranging from the naiveté of older users who choose to believe messages forwarded by known and trusted individuals, to the belief that it is a civic duty to pass along information about (even unverified) suspicious activities, and the need to be seen as a local “expert” by sharing local information.
In some cases, the emotional disturbance felt by users on viewing a clip of spectacular violence or overwhelming content (train or road accidents, harm caused by natural disasters) impelled them to share it with others and/or discuss it within their networks.
In other cases, this kind of content contributed to a sense of emotional fatigue and exhaustion whereby WhatsApp users would forward disinformation without checking the message fully, the research showed.
“We found that for most WhatsApp users in India civic trust follows ideological, family and communal ties far more closely than is reported in other literature on this topic,” the researchers added. “In a majority of instances, misinformation and disinformation which contributes to the formation of mobs that engage in lynching and other discriminatory violence appears to be spread largely for reasons of prejudice and ideology rather than out of ignorance or digital illiteracy.”
What the researchers found is that if a WhatsApp user is a technologically literate Hindu man, then regardless of whether he is “upper or middle caste”, young or middle-aged, rural or urban, he is more likely to share misinformation and hate speech. “Some user narratives in our fieldwork go as far as to suggest that this type of technologically-literate, male, Hindu user is also more likely to create and administer the groups responsible for ideologically charged misinformation, disinformation and hate-speech on WhatsApp in the first place,” they explained.

So what’s WhatsApp doing?

This is an issue that won’t go away easily, and multiple stakeholders are seeking solutions. Citing fake news as a concern, the government is calling for traceability of WhatsApp messages, but there are concerns that this would also enable snooping on all your messages. The Indian government has long argued that WhatsApp’s encrypted messaging service is detrimental to national security and has demanded WhatsApp “digitally fingerprint” all messages in India. Facebook has pushed back thus far, arguing it would undermine user privacy and turn WhatsApp into a different product. 
“We’ve taken a number of steps within our product over the last year to help address the challenge of misinformation,” a WhatsApp spokesperson explained. “To include new labels that identify to users when they have received a forwarded message, or a highly forwarded message (such as a chain message) – as well as limiting how messages can be forwarded to just five chats at once. That limit change reduced the total number of forwarded messages on WhatsApp by 25%. We also launched group permissions to empower users to decide which groups they would like to be part of.”
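To make those two changes more concrete, here is a minimal, purely illustrative sketch of how a client could model a forward limit and the “Forwarded” / “Forwarded many times” labels. WhatsApp has not published its actual implementation; the class, function and field names below, and the threshold used for the “many times” label, are assumptions made only for illustration.

```python
# Illustrative sketch only: not WhatsApp's real code or data model.
# It mirrors the two product changes described above: capping a single
# forward action at five chats, and labelling forwarded messages.

from dataclasses import dataclass

FORWARD_LIMIT = 5        # max chats per forward action, per the spokesperson
MANY_TIMES_THRESHOLD = 5  # hops before the "many times" label (assumed value)


@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many times this message has been forwarded along

    @property
    def label(self) -> str:
        """Label shown to the recipient, based on the forward count."""
        if self.forward_count >= MANY_TIMES_THRESHOLD:
            return "Forwarded many times"
        if self.forward_count > 0:
            return "Forwarded"
        return ""


def forward(message: Message, chats: list[str]) -> Message:
    """Forward a message to at most FORWARD_LIMIT chats in a single action."""
    if len(chats) > FORWARD_LIMIT:
        raise ValueError(f"Can only forward to {FORWARD_LIMIT} chats at once")
    # The copy that recipients receive carries an incremented forward count,
    # which in turn drives the label they see.
    return Message(text=message.text, forward_count=message.forward_count + 1)


if __name__ == "__main__":
    original = Message("An example forwarded text")      # hypothetical content
    copy = forward(original, ["Family group", "Colony group"])
    print(copy.label)  # -> "Forwarded"
```

The point of the sketch is simply that both measures are lightweight client-side checks: neither requires reading message content, which is consistent with the company’s argument that it addresses misinformation without weakening end-to-end encryption.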

RUPAK DE CHOWDHURI / REUTERS
WhatsApp-Reliance Jio representatives perform in a street play during a drive by the two companies to educate users, on the outskirts of Kolkata, India.

On its FAQ page about WhatsApp’s steps against misinformation in India, the company also notes measures such as limiting forwards, its education and advertising campaign about fake news, and growing a local team to work with civil society and the government. It has also worked to train local law enforcement departments on how to work with WhatsApp and make legal requests for information, aside from the 20 research awards to promote studies that will inform product development and safety efforts, the spokesperson noted. 
“WhatsApp cares deeply about the safety of our users and we appreciate the opportunity to learn from these international experts about how we can continue to help address the impact of misinformation,” said Mrinalini Rao, lead researcher at WhatsApp. “We recognise this issue presents a long-term challenge that must be met in partnership with others. These studies will help us build upon recent changes we have made within WhatsApp and support broad education campaigns to help keep people safe.”
Other projects by researchers who received the $50,000 grants from WhatsApp (a total of $1 million) include Social media and everyday life in India, by Philippa Williams of Queen Mary University of London (principal investigator) and Lipika Kamra of OP Jindal Global University, which examines the role of WhatsApp in everyday political conversations in India. Another, titled Misinformation in Diverse Societies, Political Behavior, and Good Governance, by Robert A Johns and Sayan Banerjee of the University of Essex and Srinjoy Bose of the University of New South Wales, uses field experiments with WhatsApp in India and Afghanistan to establish a relationship between misinformation on social networks and public opinion on ethnic relations.
In India, WhatsApp is also partnering with Osama Manzar and the Digital Empowerment Foundation to train community leaders in several states on how to address misinformation. The foundation is also part of a proposal which aims to adapt game-based interventions to “vaccinate” people against misinformation, testing it with field experiments.
