Fake news isn’t just annoying. It can also have tragic, and sometimes deadly, consequences.
Facebook-owned WhatsApp has contributed to the chaos that’s spread across 10 Indian states since May, which is when rumors about child abductions and child traffickers began to circulate on the messaging app via a falsified video. Since then, about a dozen suspected “kidnappers” have been lynched by mobs fueled by these rumors. On July 1, a single attack by villagers, provoked by WhatsApp reports of “child lifters,” resulted in the deaths of five people.
The person behind the fake video is still unknown. Whoever this manipulator may be, they certainly know how to prey on the naiveté of vulnerable people. Many Indian citizens who fell for the video are first-time smartphone users who have never before needed to distinguish real news from "fake news."
Local authorities in India have made an effort to warn citizens against taking everything they see at face value. "Rumor busters" have been sent to villages to educate people, an effort that backfired on June 28, when one such "buster" was murdered by a mob.
The Indian Ministry of Electronics and IT (MeitY) wrote a letter to WhatsApp on July 3, pleading for more of an effort to mitigate “irresponsible and explosive messages,” and noting that WhatsApp “cannot evade accountability and responsibility” for the lynchings.
In a response dated July 4, WhatsApp said that it has been "designed with security in mind" and that it's taking steps to contain the provocative messages and prevent future hoaxes. One of those steps grants group administrators the ability to restrict certain group members from sending messages. Another change labels messages to indicate whether the sender composed the message or forwarded it from a third party. In addition, a new project to be directed by leading Indian academic experts will take a closer look at how misinformation spreads and how it can be slowed or stopped.
WhatsApp divided these measures into three separate categories: digital literacy and fact-checking, proactive action to tackle abuse, and product controls.
“WhatsApp is working to make it clear when users have received forwarded information and provide controls to group administrators to reduce the spread of unwanted messages in private chats,” said WhatsApp spokesperson Carl Woog. “We’ve also seen people use WhatsApp to fight misinformation, including the police in India, news organizations and fact checkers. We are working with a number of organizations to step up our education efforts so that people know how to spot fake news and hoaxes circulating online.”
In a blog post published Tuesday, WhatsApp officially announced the app's new ability to distinguish between original and forwarded text, video, image, and audio messages.
“WhatsApp cares deeply about your safety,” the post reads. “We encourage you to think about sharing messages that were forwarded. As a reminder, you can report spam or block a contact in one tap and can always reach out to WhatsApp directly for help. For more information, please visit our WhatsApp Safety Tips page.”
Do you think WhatsApp has gone far enough to put an end to fake news and the violence it has inspired? What else might be done to help users distinguish between true and false information?