Personally, I Prefer Having Fewer Nazis on Twitter
Jordan Weissmann
[Photo: Twitter headquarters in San Francisco. Justin Sullivan/Getty Images]
Like pretty much any journalist who compulsively wastes too much of their life on Twitter, I’ve had my fair share of unpleasant experiences with the site. In part, that’s because I’m Jewish, and I occasionally find my mentions flooded by antisemitic trolls who think it’s the height of wit to throw an echo around my name—as in “we see you (((weissmann)))”—in response to some tweet they don’t like. It’s a hazard of the site that you learn to live with.
Thankfully, Twitter’s Nazi problem has felt a little less severe in recent years. Back during the 2016 election, when Donald Trump’s first run for the White House helped turn social media into a white supremacist jamboree, having some anonymous groyper tell me to jump in an oven was just another day at the office. These days, something I say really has to go viral and reach a large swath of right-wing accounts before the bigoted shitposters start to show up in numbers.
Suffice it to say, I’m feeling a little bit apprehensive about what exactly Elon Musk plans to do with Twitter, assuming he actually closes his deal to buy the company. The Tesla and SpaceX CEO describes himself as a free speech “absolutist” and has regularly criticized the site’s content moderation practices for supposedly trampling on discourse. There’s now a widespread assumption he’ll try to loosen those policies.
“I am against censorship that goes far beyond the law,” he recently tweeted. “If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.” In practice, that would mean allowing all sorts of hate speech and grisly, violent material Twitter currently bans, a prospect that seems to be exciting the hard right. A number of banned neo-Nazis tried to set up new accounts shortly after the news of Musk’s deal broke, though they were quickly kicked off.
How much could Musk undo? Quite a bit. In its early days, Twitter had an anything-goes attitude toward content moderation. But, spurred on by the harassment campaign unleashed by Gamergate and the toxicity of the 2016 election, it began to take a more proactive approach to issues like harassment, hate speech, and misinformation, using a combination of more stringent policies and new platform features that helped make it a leader in the industry.
Some examples: It got more aggressive about banning accounts and booted hard-right figures like Milo Yiannopoulos and Alex Jones well before competitors like Facebook. It partnered with academics to try to measure the health of conversations on the platform, rolled out safety features to prevent harassment, and put in place policies to combat transphobia, such as banning “deadnaming.” In 2020, it expanded its policy against hateful conduct to bar “language that dehumanizes people on the basis of race, ethnicity and national origin” and permanently banned former Ku Klux Klan leader David Duke. It started labeling misinformation and has worked to limit lies about COVID, barring Rep. Marjorie Taylor Greene’s personal account for flouting those policies. And of course, it famously dumped Donald Trump from the platform.
It’s hard to say objectively how much of a difference these steps have made to daily life on the platform. But experts in the field say the changes have been substantial. Heidi Beirich, the co-founder of the Global Project Against Hate and Extremism, told me she’s seen a major drop in the number of hate groups operating on the platform. “There are far, far, far fewer of them on Twitter today than there were before,” she said. “And I think those who have stayed on have been inhibited from saying more extreme things.”
Twitter’s regular transparency reports, which document its rules enforcement actions, also show the company has gotten more aggressive about policing hate speech. In the first half of 2021, it removed more than 1.6 million pieces of content for violating its hateful conduct policies, almost 2.5 times as much as it removed in the first six months of 2018—which is a bigger jump than you’d expect just based on its overall user growth. It’s removing 18 times more material for violating its policies regarding “sensitive media,” a category that includes extreme violence and sexually explicit material, but also hate symbols such as swastikas. The company is also suspending accounts more frequently.
[Chart: Twitter enforcement actions increased from 2018 to 2021. Jordan Weissmann/Slate]
“Twitter today is different from 2016,” Adam Conner, vice president for technology policy at the Center for American Progress, put it to me. “I am not going to stand here and tell you that Twitter is a perfect place, but it is better than it was.”
If Musk actually applied a First Amendment standard to Twitter’s content moderation, the vast majority of this material would stay up. Under the Constitution, Americans are essentially allowed to indulge in all the hate speech they like, and can advocate violence as long as it doesn’t cross the line into direct threats against individuals or incitement to immediate action. (So you can talk about how great you think it is to burn churches and synagogues, as long as you don’t tell a crowd to go burn the Black church or synagogue down the street.) “I would say the vast majority of the speech that is currently banned [on Twitter] could not be banned under the First Amendment,” Catherine Ross, a law professor at George Washington University who studies free speech issues, told me.
With that said, it is difficult to imagine that Musk would actually try to apply a First Amendment standard to Twitter’s content moderation. For starters, it would turn the platform into a cesspool that would alienate a lot of users and scare away advertisers. Aside from letting the racists run wild, it’d mean letting people advocate terrorism, share graphically violent content like videos of beheadings, and even trade digitally fabricated child pornography, all of which count as protected speech under the Constitution.
Second, it would be wildly impractical. Laws about hate speech vary by country, and the majority of Twitter’s users reside outside of the United States. A European Union official has already warned that Musk will have to follow the EU’s new Digital Services Act, which fines companies up to 6 percent of annual sales per violation if they fail to police hate speech and harassment. He’ll also have to deal with the U.K.’s own online safety laws. Unless Musk wants to impose a different set of rules for every nation, his hands are somewhat tied.
Finally, he’s hinted here and there that he doesn’t particularly want to make Twitter an alt-right paradise again. Musk has complained about specific policy decisions by Twitter he saw as discriminating against conservatives, such as how it handled the New York Post’s reporting on Hunter Biden’s laptop, and was angered by the site’s decision to suspend the conservative satire site the Babylon Bee for violating its policies against anti-trans harassment. But in a tweet responding to conservative media mogul Ben Shapiro, he said: “I should be clear that the right will probably be a little unhappy too. My goal is to maximize area under the curve of total human happiness, which means the ~80% of people in the middle.”
The point, however, is that there is plenty of room for Twitter to backslide and return to being a much less pleasant place to have a conversation. As one of its deeply addled regular users, I’d find that pretty unfortunate. Personally, I prefer the place with fewer Nazis around.