What Facebook should do about its Kenosha problem

Today let’s talk about the controversy around a militia organizing on Facebook, the violence that followed, and where that leaves the company heading into the president’s planned visit this week to Kenosha, Wisconsin, which threatens to stoke more unrest.

Kenosha police shot Jacob Blake seven times in the back last week, leading to protests in the city. Two people were killed and a third was injured in a shooting during one of the protests, and a 17-year-old has been charged in connection with the shootings.

The afternoon before the shootings, a 3,000-member Facebook group calling itself the Kenosha Guard had advertised an event encouraging an armed response to the unrest. Both the page and the event were taken down after the shootings. My colleague Russell Brandom broke the news at The Verge:

In a post Tuesday afternoon, the Kenosha Guard Facebook group encouraged an armed response to the ongoing unrest. “Any patriots willing to take up arms and defend our city tonight from the evil thugs?” the post reads. “No doubt they are currently planning on the next part of the city to burn tonight.”

Facebook said it had not found any digital link between the accused shooter and the Kenosha Guard. Which is to say: his Facebook account did not follow the Kenosha Guard page, and he had not been “invited” to the event. Did the shooter see the post, though? No one at Facebook could tell me today when I asked.

At the same time, Brandom reported that the group had been reported multiple times for violating the company’s policies against militias — but it was nonetheless found to be “non-violating,” in content moderator parlance. Why? That’s still under investigation inside Facebook, a source familiar with the subject told me.

The basic, implicit bargain we have struck with social networks in the United States sounds something like this: platforms agree to remove hate speech, incitements to violence, and other terrible posts, and as long as they do so in a timely fashion they can continue to operate. This bargain has many flaws — it’s more of a gentleman’s agreement than a law, and platforms break it in spirit and letter all the time. (This is one of the main reasons both candidates for president say they want to get rid of Section 230, the part of the law that enables the current bargain.) But it’s the status quo and has been for a long time.

The best way to understand the controversy around the Kenosha Guard page is that Facebook broke this implicit bargain. Facebook users had done their part — and as Ryan Mac reported at BuzzFeed News, they had arguably done more than their part:

The event associated with the Kenosha Guard page, however, was flagged to Facebook at least 455 times after its creation, according to an internal report viewed by BuzzFeed News, and had been cleared by four moderators, all of whom deemed it “non-violating.” The page and event were eventually removed from the platform on Wednesday — several hours after the shooting.

“To put that number into perspective, it made up 66% of all event reports that day,” one Facebook worker wrote in the internal “Violence and Incitement Working Group” to illustrate the number of complaints the company had received about the event.

Ultimately, CEO Mark Zuckerberg publicly posted a portion of his weekly Q&A with employees and said the incident had been an “operational mistake.”

There are a few things to say about this.

The first is that, strange as it may seem, the Kenosha Guard page might not have been found to violate Facebook’s policies at all had the company not changed them quite recently. On August 19th, Facebook banned “US-based militia groups” as part of an effort that made bigger headlines for removing a bunch of QAnon groups. That’s the policy under which the page was removed. It’s possible moderators would have taken it down for inciting violence anyway, but that wasn’t guaranteed.

One question coming out of the Kenosha incident is whether Facebook is attempting to remove these militia groups proactively or relying on user reports instead. A source told me that for the most part, it’s going to be the former. Facebook has better insight into the growth and operations of pages like this on its network than average users do, I’m told. And user reports aren’t always a great signal — people will often mass-report benign posts for malicious reasons.

That may be one reason the Kenosha Guard page wasn’t caught sooner — Facebook is generally less sensitive to a spike in user reports than it is to a spike in views and growth. The Kenosha Guard page wasn’t getting a lot of either, at least not in Facebook terms, I’m told.

That doesn’t explain why the moderators who reviewed the page didn’t take action when they first saw it, though, which leads me to the second thing worth saying about the Kenosha incident.

When Facebook’s policies change — which they do frequently — it often takes time for those policies to be understood, and effectively enforced, by the company’s roughly 15,000 outsourced content moderators. One of the conclusions I came to after spending last year reporting on the lives of Facebook’s content moderators in America is that they often lack the context necessary to enforce the policies with a high degree of accuracy, and that supplemental resources from Facebook and its third-party vendors are often lacking or contain errors themselves.

Moderators also generally give users wide latitude to discuss events that are even faintly political, even when those posts seem like obvious violations on their face, a former Facebook moderator told me Sunday.

“We would get examples like ‘shoot immigrants,’ ‘shoot at immigrants,’ and variations of this,” the moderator said. “People would defend leaving stuff like that up because ‘you aren’t saying you’re going to physically hit them necessarily, they can just be talking about using guns to defend the border/property.’”

The moderator continued: “Essentially, in Facebook’s moderator population, they have tons of people who see no problem with things like ‘bring all your guns.’”

Officially, moderators are not supposed to have any leeway in how they enforce Facebook’s policies. But in practice, of course they do — there’s a lot of gray area in those policies, even well-written policies still require judgment calls, and only a fraction of moderators’ decisions are ever audited to ensure fidelity to the written policy.

Add to all that the fact that a majority of Facebook’s moderators are located in gun-friendly states like Texas, and you begin to understand why the Kenosha Guard page may not have come down immediately.

So what to do about all this?

Facebook is continuing to roll out its ban on militias, and it seems likely that a few months from now it will be more effective at rooting out violent social movements on the network than it is today. The big question, of course, is to what extent that can happen before the election and its immediate aftermath, when tensions will be at their highest. Several reports last week found that Facebook still has a lot of work to do on that front.

Another thing the company could consider is publishing a public report about the incident. The investigation now underway covers whether the alleged shooter saw the page in question, why moderators initially dismissed the reports, and how Facebook will handle similar reports going forward; all are subjects of legitimate public interest. Facebook led the way in publishing quarterly “transparency reports” about its enforcement actions — the company could earn some much-needed goodwill by publishing occasional public reports about its high-profile missteps, too.