
A detailed internal Facebook document on the company's response to the Christchurch massacre has been leaked



Photo: Sanka Vidangama (Getty Images)

On March 15, 2019, a heavily armed white man named Brenton Tarrant entered two separate mosques in Christchurch, New Zealand, and opened fire, killing 51 Muslim worshippers and wounding many others. He broadcast nearly 20 minutes of one of the attacks live on Facebook, and when the company tried to remove the video, more than 1 million copies sprang up in its place.

Although the company was able to quickly remove or automatically block hundreds of thousands of copies of the horrific video, it was clear that Facebook had a serious problem: mass shootings weren't going anywhere, and neither were live broadcasts. In fact, by that point Facebook Live had already gained something of a reputation as a place where you could stumble onto streams of violence, including killings.

Christchurch was different.

An internal document dated June 27, 2019 details Facebook's response to the Christchurch massacre, describing the steps taken by the task force the company created after the tragedy to address users who livestream violent acts. It highlights the failures in the company's reporting and detection methods before the shooting, how much Facebook changed its systems in response to those failures, and how far those systems still had to go.

More: Here are all the 'Facebook Papers' we've published so far

The 22-page document was published as part of a growing trove of internal Facebook research, memos, employee comments, and more captured by Frances Haugen, a former employee of the company who filed a whistleblower complaint against Facebook with the Securities and Exchange Commission. Haugen's legal team has released hundreds of documents to select journalists, including Gizmodo, and countless more are expected to arrive in the coming weeks.

Facebook relies heavily on artificial intelligence to moderate its global platform, in addition to the tens of thousands of human moderators who have historically been subjected to traumatic content. However, as the Wall Street Journal recently reported, additional documents released by Haugen and her legal team show that even Facebook's own engineers doubt AI's ability to adequately moderate harmful content.

Facebook has not yet responded to our request for comment.

Arguably, the company's failures began the moment the shooting did. "We did not proactively detect this video as a potential violation," the authors write, adding that the livestream received a relatively low score from a classifier Facebook's algorithms use to flag graphically violent content. "Also, no user reported this video until it had been on the platform for 29 minutes," they added, noting that even after the original was removed, there were already roughly 1.5 million copies to deal with within the first 24 hours.

Furthermore, at the time, its systems were apparently only able to detect this kind of violent terms-of-service violation "after 5 minutes of broadcast," the document states. Five minutes is far too slow, especially when dealing with a mass shooter who starts filming as soon as the violence starts, as Tarrant did. For Facebook to bring that number down, it needed data to train its detection algorithm, just as data is needed to train any algorithm. There was just one grim problem: there wasn't much live footage of shootings from which that data could be drawn.

The solution, according to the document, was to build what sounds like one of the darkest data sets known to man: a compilation of police body-camera footage, "recreational recordings and simulations," and various "military videos" obtained through the company's partnerships with law enforcement agencies. The result was "First Person Shooting (FPS)" detection and an improvement to a tool called XrayOC, according to the internal documents, which allowed the company to flag live footage as probably violent in about 12 seconds. Of course, 12 seconds isn't perfect, but it is vastly better than 5 minutes.
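To make the mechanics concrete, here is a minimal, purely illustrative sketch of how threshold-based flagging on a live stream can work. The classifier, score values, threshold, and function names below are assumptions for illustration only; they are not details of Facebook's actual XrayOC tool.

```python
# Toy illustration of threshold-based flagging for a live stream.
# All names, scores, and thresholds are hypothetical, not Facebook internals.

from dataclasses import dataclass

FLAG_THRESHOLD = 0.85          # assumed score above which a stream gets flagged for review
MAX_DETECTION_SECONDS = 12     # target latency mentioned in the document


@dataclass
class FrameScore:
    timestamp: float       # seconds since the stream started
    violence_score: float  # 0.0 (benign) to 1.0 (graphic violence), from some classifier


def first_flag_time(scores: list[FrameScore]) -> float | None:
    """Return the stream time at which a frame first crosses the threshold, or None."""
    for frame in scores:
        if frame.violence_score >= FLAG_THRESHOLD:
            return frame.timestamp
    return None


if __name__ == "__main__":
    # A hypothetical stream whose classifier scores spike shortly after it starts.
    stream = [FrameScore(t, s) for t, s in [(2, 0.10), (6, 0.30), (10, 0.91), (14, 0.95)]]
    flagged_at = first_flag_time(stream)
    if flagged_at is not None and flagged_at <= MAX_DETECTION_SECONDS:
        print(f"Stream flagged for review at {flagged_at:.0f}s")
    else:
        print("Stream not flagged within the target window")
```

The point of the change described in the document is the latency, not the math: the better the training data, the earlier in a broadcast a score crosses the line.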

The company made other practical fixes as well. Instead of requiring users to jump through multiple hoops to report "violence or terrorism" happening on a stream, Facebook concluded it might be better to let users report it with a single click. It also internally added a "Terrorism" tag to better track these videos once they were reported.

Next on the list of things Facebook probably should have had in place before a massacre aired on its platform, the company put restrictions on who was allowed to go live at all. Before Tarrant, the only way to get banned from livestreaming was to violate a platform rule during a live broadcast. As the research points out, an account internally flagged as, say, a potential terrorist "would not be restricted" from livestreaming on Facebook under those rules. After Christchurch, that changed: the company introduced a "one strike" policy that bars anyone caught posting particularly egregious content from using Facebook Live for 30 days. Facebook's "egregious" umbrella includes terrorism, which would cover Tarrant.
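For illustration, here is a rough sketch of how a "one strike" rule like this could be expressed in code. The terrorism category and the 30-day window come from the document; the function name and data shapes are assumptions.

```python
# Hypothetical sketch of a "one strike" livestream restriction.
# Only the 30-day window and the terrorism category come from the document.

from datetime import datetime, timedelta

EGREGIOUS_CATEGORIES = {"terrorism"}  # the document says terrorism falls under this umbrella
LIVE_BAN_DAYS = 30


def can_go_live(violations: list[dict], now: datetime) -> bool:
    """Return False if the user has an egregious violation inside the ban window."""
    for v in violations:
        if v["category"] in EGREGIOUS_CATEGORIES and now - v["date"] < timedelta(days=LIVE_BAN_DAYS):
            return False
    return True


if __name__ == "__main__":
    history = [{"category": "terrorism", "date": datetime(2019, 6, 1)}]
    print(can_go_live(history, datetime(2019, 6, 15)))  # False: within 30 days of the strike
    print(can_go_live(history, datetime(2019, 8, 1)))   # True: the 30-day window has passed
```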

Of course, content moderation is messy, imperfect work, carried out in part by algorithms that, in Facebook's case, are often as flawed as the company that built them. These systems did not flag the shooting of retired police chief David Dorn when it was caught on Facebook Live last year, nor did they catch the man who livestreamed his girlfriend's shooting just a few months later. And while the hours-long bomb threat that a far-right extremist broadcast live on the platform this past August wasn't as explicitly horrific as either of those examples, it was still a literal bomb threat that was able to stream for hours.

As for the bomb threat, a Facebook spokesperson told Gizmodo: "At the time, we were in contact with law enforcement and removed the suspect's videos and profile from Facebook and Instagram. Our teams worked to identify, remove, and block any other instances of the suspect's videos that did not condemn the incident, discuss it neutrally, or report on it neutrally."

Still, it is clear that the Christchurch disaster had lasting consequences for the company. "Since this event, we have faced international media pressure and seen our legal and regulatory risks increase considerably," the document reads. That is an understatement. Thanks to a new Australian law passed hastily after the shooting, Facebook executives could face steep fines (not to mention jail time) if violent acts like the shooting are livestreamed on their platform again and not removed swiftly.

This story is based on Frances Haugen's disclosures to the Securities and Exchange Commission, which her legal team also provided to Congress in redacted form. The redacted versions received by Congress were obtained by a consortium of news organizations, including Gizmodo, the New York Times, Politico, the Atlantic, Wired, The Verge, CNN, and dozens of other outlets.


