This is the kind of internet we want: how social media turned into a deadly weapon

We have grown used to social networks influencing elections and politics in Western countries. But they can also cost people their lives, and so far there is no solution.









In March 2019, 51 people were killed in two mosques in Christchurch, New Zealand. The perpetrator broadcast the massacre live on social networks to reach as wide an audience as possible. Among the viewers on the other side of the screen was Patrick Crusius, who, under the impression of what he saw, staged a mass shooting at a Walmart store in El Paso on August 4, killing 21 people and injuring 26 more. Investigators say that several months before the tragedy the suspect had posted about his intentions on an extremist online forum, where he also expressed support for the New Zealand shooter. After investigating the causes of the tragedy, France and New Zealand appealed to other countries and technology companies to develop ways of stopping the spread of such content. The initiative, known as the Christchurch Call, was supported by 17 countries, the EU, and 8 major technology companies, including Google and Facebook.



In April 2019, a series of bombings in Sri Lanka killed more than 200 people. The authorities say they had warned Facebook about the potential consequences of violence fueled by social media. No understanding was reached, and after the loss of life the government decided to block access to a number of social networks and messengers, including Facebook and WhatsApp. According to the authorities, misinformation spreading about the events could have escalated the violence in the country.



Blocking social networks and messengers during outbreaks of violence is becoming more common. In the summer of 2018, for example, kidnapping rumors spreading virally on WhatsApp provoked a wave of mob violence in India. And in February 2019, misinformation spreading on Facebook, Instagram, and WhatsApp heightened tensions between India and Pakistan. Indian officials now fear that fake news spread via Facebook could inflame the situation during the upcoming elections. The online world breeds violence in more developed countries too: in Mexico, the United States, and Germany.



On the one hand, when governments block social networks it is perceived as an attempt to restrict freedom of speech. On the other hand, the cases above show that disconnecting social networks is sometimes the only tool a state has to prevent violence and loss of life. How did social networks turn into such an aggressive environment, and is there a way to stop the violence without switching off the Internet?



From ethnic cleansing to terrorist attacks



A UN investigation found that Facebook played a "determining role" in the rise of anti-Muslim sentiment in Myanmar, Sri Lanka's neighbor across the Bay of Bengal. The spread of rumors on social networks led to real casualties: anti-Muslim hysteria on Myanmar's social networks grew into full-scale ethnic cleansing. For several years, government forces used Facebook for propaganda against the country's Rohingya minority. At least 6,700 people were killed, according to a report by Doctors Without Borders. In November 2018, Facebook published an independent assessment of the events in Myanmar and admitted that the company had not done enough to prevent its platform from being used to foment division and incite offline violence.



A year before the terrorist attacks in Sri Lanka, false rumors on social networks had already incited Buddhists against Muslims there, and Facebook's news feed again played a fatal role. At the same time, the country's government values social networks for helping to establish democracy after many years of civil war, so its criticism of them for inciting ethnic hatred can be considered objective. Facebook representatives did not respond to the government's criticism until access to the platform was blocked in the country. Only then did the American company promise to hire more moderators and improve its cooperation with local authorities.



Algorithms of death



The vast majority of people who publish and share extremist content online never commit hate crimes in real life. But the hate crimes that do occur are often linked to the activity of online instigators. A mathematical model of social behavior developed by Russian scientists shows that there is a tipping point past which some people become so obsessed with the distorted version of reality created online that they begin to act on it.



A study by the University of Warwick in Great Britain found a correlation between attacks on refugees in Germany from 2015 to 2017 and areas of high Facebook usage where anti-refugee posts by the far-right populist Alternative for Germany (AfD) circulated. Of 3,335 attacks, 3,171 occurred in municipalities with high online hate activity. And in Chemnitz, Germany, false rumors on Twitter and Facebook about killer migrants drew 6,000 people to mass demonstrations in August 2018.



Gaining access to the technologies and platforms that make content go viral is not hard. A study by the Massachusetts Institute of Technology (MIT) of more than 126,000 stories spread by about 3 million people showed that lies travel across social media much faster and further than the truth in every category of information. The influence of social networks is reinforced by so-called ideological bubbles: to keep people's attention on their sites for as long as possible, companies such as Facebook and YouTube use algorithms that recommend the posts a user is most likely to find interesting. If the system registers that a particular video is gaining popularity, it distributes it more widely; the content gets more views, and YouTube gets more money, because users spend more time on the site and see more advertising.
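
To make that feedback loop concrete, here is a deliberately simplified sketch in Python. It is not YouTube's actual algorithm, which is a closely guarded and far more complex system; the scoring formula, field names, and numbers are all invented for illustration. The point is only that ranking by engagement lets popularity amplify itself:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    views: int
    watch_time_min: float  # total minutes watched across all views
    ctr: float             # click-through rate on the thumbnail

def engagement_score(v: Video) -> float:
    """Toy ranking signal: reward whatever keeps users watching.

    Real recommenders are far more complex, but the incentive is the
    same -- predicted engagement dominates.
    """
    avg_watch = v.watch_time_min / max(v.views, 1)
    return v.ctr * avg_watch * v.views ** 0.5  # popularity amplifies reach

def recommend(candidates: list[Video], k: int = 3) -> list[Video]:
    # The more a video is watched, the higher it ranks; the higher it
    # ranks, the more it is watched -- the feedback loop described above.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

videos = [
    Video("calm explainer", views=1_000, watch_time_min=2_000, ctr=0.02),
    Video("outrage bait", views=1_000, watch_time_min=6_000, ctr=0.08),
]
for v in recommend(videos, k=2):
    print(v.title)  # "outrage bait" ranks first: it holds attention longer
```

Whatever holds viewers longest, outrage and conspiracy content included, floats to the top, with no notion of truth anywhere in the ranking.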



Users inside ideological bubbles see a version of the world in which their views are never challenged and opposing opinions simply do not appear. A YouTube viewer ends up in exactly the conditions described by the "information confrontation" model of Russian sociologists: the user never encounters the irritant of an alternative point of view. This produces a tightly knit, loyal circle of adherents ready to adopt an ever more radical agenda when it appears. Bubbles like these bring closer the tipping points at which violence spills into the real world.



Guillaume Chaslot, an engineer who worked on YouTube's recommendation systems for three years, created the AlgoTransparency website in 2016, where he tries to reverse-engineer the video platform's algorithms: how some content gets promoted over other content. The day before the shooting at the Pittsburgh synagogue in 2018, AlgoTransparency recorded that YouTube's algorithm was distributing a video by conspiracy theorist David Icke in which he accuses George Soros, the Jewish billionaire, philanthropist, and favorite target of the far right, of "manipulating political events in the world." The video had fewer than 1,000 views when the algorithm began promoting it; now it has more than 70,000. This does not prove a causal relationship between the video and the shooting, but it suggests how provocative videos can be found and their distribution countered.
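
One crude way to detect that kind of algorithmic boost from the outside is to track a video's daily view counts and flag sudden spikes, like the jump from under 1,000 views to 70,000 described above. The sketch below only illustrates the idea and is not AlgoTransparency's actual methodology; the video IDs, numbers, and threshold are hypothetical:

```python
def growth_ratio(daily_views: list[int]) -> float:
    """Ratio of the most recent day's views to the average of prior days."""
    *history, latest = daily_views
    baseline = sum(history) / len(history)
    return latest / max(baseline, 1.0)

def flag_amplified(series_by_video: dict[str, list[int]],
                   threshold: float = 10.0) -> list[str]:
    """Return IDs of videos whose latest daily views exceed their
    historical average by `threshold` times -- a crude proxy for a
    recommendation-driven spike."""
    return [vid for vid, views in series_by_video.items()
            if len(views) >= 2 and growth_ratio(views) >= threshold]

# Hypothetical daily view counts for two videos.
series = {
    "conspiracy_clip": [120, 90, 150, 14_000],  # sudden spike
    "cooking_show":    [800, 950, 870, 1_100],  # organic growth
}
print(flag_amplified(series))  # ['conspiracy_clip']
```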



Online violence



Another problem at the intersection of violence and networked media is the uncontrolled spread of videos of massacres and suicides. Four years ago, a video filmed by the killer of two Virginia reporters himself spread across Facebook and Twitter; three years ago, footage of the mass shooting in Dallas went viral. Social media platforms can hand a global audience to people who want to do serious harm to themselves or others. This is probably not what Facebook was counting on when it introduced Facebook Live to the public several years ago.



Because streaming video is a relatively new tool that Facebook, Twitter, Amazon, and other corporations continue to invest in, it is unclear what to do when a terrorist attack is broadcast live. Even if Facebook blocks the violent content quickly, such incidents become known instantly, prompting people to search for the original videos through Google and other search engines.



The aforementioned attack in New Zealand, which killed 51 people, was broadcast live on Facebook, reposted on YouTube, and discussed on Reddit before any of these platforms reacted. Many users managed to repost the video to their own accounts, so deleting every copy required a second pass not only by automated tools but also by human moderators. Almost two months after the Christchurch tragedy, for example, CNN found copies of the broadcast of the massacre still available on many sites.



Rescue of the drowning



In recent years, Internet shutdowns following protests or other anti-government activity have become increasingly common worldwide, especially in Asia and Africa.



However, Internet policy experts doubt that such blackouts stop terrorists from carrying out their intentions. Stanford University researcher Jan Rydzak found that blocking social networks and websites in India was accompanied by an escalation of violence: blocking pushes fake news and rumors into alternative channels, where, without fast mass Internet communications, they are harder to refute.



In any case, bans are only effective up to a point, because users can bypass blocks with VPNs (virtual private networks) or, if the Internet is switched off entirely, fall back on mesh messengers that communicate with nearby devices over Wi-Fi.



In March, an Indian parliamentary commission asked Joel Kaplan, Facebook's head of global policy, to tighten control over WhatsApp and Instagram by requiring the companies to screen user posts for illegal content. This approach is problematic for two reasons: it would force companies such as WhatsApp, which use end-to-end encryption, to radically redesign their platforms, and it raises concerns about privacy and excessive state control.



Manual control



In China, a literal army of censors fights inappropriate content on social networks with hundreds of millions of users. Harvard University researchers put their number at an unprecedented tens of thousands of participants, including government bloggers and the so-called Internet police who filter content on network resources. Thanks to fast-growing Chinese apps, the censors' influence extends far beyond the Middle Kingdom: Facebook founder Mark Zuckerberg accused the social network TikTok of blocking content related to the Hong Kong protests even in the United States.



If the dangerous content sits outside the Celestial Empire, the Great Cannon is deployed to slow down objectionable resources. For example, according to Pavel Durov, the Chinese government ran a DDoS attack to slow Telegram down on June 13, when hundreds of thousands of Hong Kong residents surrounded the government building to protest a bill that would allow extradition to courts in mainland China.



But even such a powerful system and army of moderators cannot manually review every uploaded file in a reasonable amount of time, and the growth of live streaming brings new difficulties in recognizing violent content, since a video can air before a censor even knows it is being broadcast.



Automated technologies are far from perfect and, even then, make it possible to flag and block re-uploads of only the most egregious content on the Web. According to YouTube's own statistics, only 73% of automatically flagged videos are deleted before at least one person sees them.



Over the years, both Facebook and Google have developed and deployed automated tools that detect and delete photos, videos, and text that violate their policies. Facebook, for example, uses Microsoft's PhotoDNA to detect child pornography; Google has developed its own open-source version of such software. The companies have also invested in technology to detect extremist messages, joining a group called the Global Internet Forum to Counter Terrorism to share information about known terrorist content. For detected illegal content, digital fingerprints are generated that allow it to be recognized, and blocked, on repeated upload.
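
The mechanics of that fingerprinting can be sketched in a few lines of Python. One caveat: production systems such as PhotoDNA use perceptual hashes that survive re-encoding, cropping, and watermarking, while this illustration uses a plain SHA-256 digest, which only catches byte-identical copies; all names and data here are invented:

```python
import hashlib

known_banned_hashes: set[str] = set()  # in practice shared industry-wide, e.g. via GIFCT

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_banned(data: bytes) -> None:
    """Called once moderators confirm the content is illegal."""
    known_banned_hashes.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject uploads whose fingerprint matches known banned content."""
    return fingerprint(data) not in known_banned_hashes

original = b"...bytes of a confirmed extremist video..."
register_banned(original)
print(allow_upload(original))            # False: an exact copy is blocked
print(allow_upload(original + b"\x00"))  # True: one changed byte defeats an
                                         # exact hash -- which is why real
                                         # systems use perceptual hashing
```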



But automated moderation becomes much harder when the subject is not photos and videos but live streams. As Google's general counsel Kent Walker wrote on the company blog in 2017, "machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech." Technology corporations have not yet managed to build AI that proactively blocks such content, even though theirs is the richest industry in the world. So Facebook and YouTube maintain teams of censors around the world who watch videos, and those teams increasingly run up against the sheer volume of data. Perhaps both YouTube and Facebook have simply become too large to be moderated.



Mark Zuckerberg himself, it seems, is not sure of what automated systems can do: he has said he is ready to hand control over the most problematic areas, including "killer" content and political advertising, to the state. "Until recently, the internet in almost every country outside China has been defined by American platforms with strong free expression values. There's no guarantee these values will win out," the billionaire said in a speech at Georgetown University. But is there any guarantee that, if control over content is handed to the government of any given country, we will not end up with censorship of the global network on the Chinese model? Is this the kind of internet we want, Mark?






Nikita Tsaplin


