
How social media platforms made the New Zealand massacre go viral

Popular social media apps including YouTube, Facebook, Twitter and Reddit helped spread footage of the Christchurch massacre. Photo by Justin Hsieh.

By Justin Hsieh, Staff Writer

Social media companies have come under heavy scrutiny after their platforms helped spread content related to two mosque shootings that left at least 50 people dead in Christchurch, New Zealand, over a week ago.

The disaster began even before the shooting, when someone posted links to primary suspect Brenton Harrison Tarrant’s 74-page white nationalist conspiracy manifesto on Twitter, Facebook and the anonymous message board 8chan. The forum, known for its extreme and often hateful commentary, played a significant role in the shooting’s online spread before, during and after the attack.

After the manifesto appeared online, Tarrant live-streamed the massacre on Facebook, a broadcast that lasted 17 minutes and included first-person video of him firing hundreds of rounds at worshipers in and around the two mosques. Facebook said that fewer than 200 people viewed the broadcast live, and none of them reported it.

The first person to alert Facebook to the broadcast did so 12 minutes after it ended. Facebook said that it removed the original video within an hour, but by then it was too late. The video had already been viewed 4,000 times before it was removed from Facebook, and some viewers made copies of the video to redistribute online. On 8chan, users reacted to the attack in real time, shared links to copies of the livestream on various sites and encouraged each other to download the clips before they were removed.

In the first 24 hours after the livestream, Facebook removed over 1.5 million videos of the shooting from its site. On YouTube, copies of the video spread rapidly as content moderators struggled to keep up with the pace at which users were reuploading the video.

Similarly, Reddit attempted to remove links posted to the video and banned the “gore” and “watchpeopledie” forums, where the videos had been reposted directly, but users continued to post links to “mirror” sites hosting the video. The video was also posted to Twitter, which said it suspended one of the suspects’ accounts and was working to remove the video from its platform.

These struggles showcased what some have criticized as deep problems in the content policies and practices of several of the largest tech companies in the world, some of which have been grappling with these issues for years. Social media sites are fundamentally structured towards user engagement and mass proliferation of trending content, and the virtually unrestricted ability of any user to reach a large audience has fueled much of their success as platforms.

Yet critics argue that the price of this freedom may be lax policies and inadequate systems for moderating inappropriate content, which have allowed horrifying material to spread through the most powerful information-dissemination machines in the world. The Christchurch shooting was the third time that Facebook has been used to broadcast video of a murder. Video of the shooting joins hateful online conspiracies, violent terrorist recruitment videos and suicide instructions spliced into kids’ videos among the content that has sparked outrage by spreading on YouTube.

Platforms like Facebook and YouTube rely primarily on three means of detecting inappropriate content on their sites and apps: automated flagging, human review, and user reporting.

A common method of preventing problematic content from being reuploaded is “hashing,” whereby a digital fingerprint of the photo or video is added to a list of banned content, allowing identical copies to be automatically blocked when they are posted. Hashing allowed Facebook to block 1.2 million shooting videos at upload on the first day.
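In its simplest form, such a system works like the sketch below, a hypothetical Python illustration using an ordinary SHA-256 file hash; the matching technology the platforms actually deploy is proprietary and more sophisticated.

import hashlib

# Hashes ("fingerprints") of files that have already been banned.
banned_hashes = set()

def fingerprint(data: bytes) -> str:
    # Compute a fixed-length digest of the raw file bytes.
    return hashlib.sha256(data).hexdigest()

def ban(data: bytes) -> None:
    # Record a known-bad file so future copies can be recognized.
    banned_hashes.add(fingerprint(data))

def should_block_upload(data: bytes) -> bool:
    # Reject any upload whose fingerprint matches a banned file.
    return fingerprint(data) in banned_hashes

original_video = b"...raw bytes of the banned video..."
ban(original_video)
print(should_block_upload(original_video))  # True: an identical copy is caught at upload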

This method is limited, however: small alterations such as watermarks or distorted audio can allow a reuploaded video to evade detection. Some of the shooting videos uploaded to YouTube contained these modifications. Facebook said that it “found and blocked over 800 visually-distinct variants of the [shooting] video.” By the afternoon of the shooting, some clips had been edited to include superimposed footage of YouTube personalities as if they were video game livestreams.
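The “visually-distinct variants” figure hints at why exact fingerprints are not enough: change even a single pixel and a cryptographic hash no longer matches. Perceptual hashing addresses this by fingerprinting what an image roughly looks like, so near-duplicates still match. The toy Python example below illustrates the idea with a simple “average hash”; it is a hypothetical simplification, and the systems platforms actually deploy are far more robust.

def average_hash(pixels):
    # Toy perceptual hash: one bit per pixel, set when the pixel is
    # brighter than the image's average brightness.
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming_distance(a, b):
    # Count how many bits differ between two fingerprints.
    return sum(x != y for x, y in zip(a, b))

def is_near_duplicate(a, b, threshold=3):
    # Treat two images as matching if their fingerprints differ by only a few bits.
    return hamming_distance(average_hash(a), average_hash(b)) <= threshold

# A banned frame (a tiny 4x4 grayscale image, flattened to 16 brightness values)
# and a lightly altered copy, e.g. with a small watermark in one corner.
banned_frame = [10, 10, 200, 200, 10, 10, 200, 200, 10, 10, 200, 200, 10, 10, 200, 200]
altered_frame = list(banned_frame)
altered_frame[0] = 60  # the small alteration

print(is_near_duplicate(banned_frame, altered_frame))  # True: still recognized as a match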

To address these shortcomings, Facebook also uses artificial intelligence-based algorithms and human reviewers to detect inappropriate content. On the day of the shooting, however, its AI systems did not flag the livestream because they are trained to recognize patterns from large samples of similar content, and videos of live shootings “are thankfully rare,” according to Facebook. Human reviewers were delayed in responding to the video because it was not reported while it was live, and it fell outside the content categories that are “prioritized for accelerated review.”

“As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review,” said Facebook in a statement released several days after the shooting.

The online spread of the shooting video has had repercussions both in New Zealand and in the United States, the home country of both YouTube and Facebook. Three days after the attack, New Zealand’s chief censor David Shanks officially classified the video as objectionable. At least two people have since been criminally charged for reposting video of the shooting.

“If you have a record of it, you must delete it,” said Shanks. “If you see it, you should report it. Possessing or distributing it is illegal and only supports a criminal agenda.”

In a speech four days after the shooting, New Zealand Prime Minister Jacinda Ardern declared that tech companies needed to work to improve their management of their platforms.

“We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published,” said Ardern. “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.”

This demand was mirrored by U.S. House Homeland Security Committee Chairman Rep. Bennie Thompson, who asked Facebook, Microsoft, YouTube, and Twitter for a briefing on the shooting.

“You must do better,” said Thompson in a note to the companies’ executives. “If you are unwilling to do so, Congress must consider policies to ensure that terrorist content is not distributed on your platforms.”

On the other hand, Oregon Sen. Ron Wyden stressed the need for a careful approach to addressing the issues of social media companies. Wyden was one of the chief architects of Section 230 of the Communications Decency Act, a landmark piece of Internet legislation that protects major tech platforms from liability for user-posted content on their sites.

“So often in the wake of horrible events, politicians grasp for knee-jerk responses that won’t solve real problems and may even make them worse,” said Wyden.

Facebook’s stock fell more than 3% three days after the shooting, its largest drop of the year. On the same day, the Association of New Zealand Advertisers and the Commercial Communications Council released a joint statement expressing their concerns with social media platforms and asking advertisers to consider the implications of their endorsements.

“Advertising funds social media. Businesses are already asking if they wish to be associated with social media platforms unable or unwilling to take responsibility for content on those sites,” read the statement. “ANZA and the Comms Council encourage all advertisers to recognize they have choice where their advertising dollars are spent, and carefully consider, with their agency partners, where their ads appear.”


This article was originally published on www.baronnews.com.
