An interview with Yoel Roth on Twitter, Elon Musk, and more. By Thor Benson


www.publicnotice.co

6 Oct 2023
Yoel Roth at Code 2023 last month. (Jerod Harris/Getty)


It was recently reported that Elon Musk fired half of the election integrity team at X, the platform formerly known as Twitter, even though the company had promised to expand the team. Musk quickly confirmed the news in a post.

We’re just over a year away from a presidential election. We know bad actors will try to spread election disinformation on X (we’ll still refer to it as Twitter throughout this post) over the next year, and now the platform will be less equipped to deal with it. That being said, there’s little evidence Musk, who at this point is at least fascist-adjacent if not an actual fascist, even wants to address the spread of disinformation.

Yoel Roth understands this situation better than pretty much anybody. Roth was head of trust and safety at Twitter for seven years until he stepped down last November, shortly after Musk took over, citing his new boss’s style of managing by “dictatorial edict.” Musk initially seemed to trust Roth as an expert and called on him to quell people’s concerns about how the platform was changing, but things went south quickly. When Roth left the company, Musk ended up smearing him on Twitter as a pedophile. That, of course, is not true, but it led to relentless harassment that forced Roth to move to a new home.

Roth thinks Twitter isn’t ready to handle the large flow of election disinformation we’re about to see, and he’s worried about what effects that might have at home and abroad. We spoke about the election integrity team, how to handle election disinformation, Musk’s attacks against him, and more.

This free edition of Public Notice is made possible by paid subscribers. If you aren’t one already, please sign up to support our work. We’re offering free trials for all new signups. Just click the button below.

Thor Benson

Musk kept you around when he took over and seemed to point to you as a reliable expert. What happened there?

Yoel Roth

In the days after Elon acquired Twitter, things happened very quickly, including pressure from advertisers and activist groups insisting that Twitter shouldn’t change its moderation policies and that Elon Musk should be wary of going too far down the path of unrestricted free speech. I think keeping me at the company — not firing me the way he fired my boss — was a way to hedge against some of that criticism.

It was a way to demonstrate continuity, to show that the company was still moderating. I’ll say that in a lot of those early interactions, I didn’t find Elon to be as unreasonable as the company’s subsequent actions would suggest. There were lots of cases where he came to me with a question or something that he wanted to do, and I’d explain why that might be a risky option and suggest an alternative, and he understood and agreed with it. Once I left the company, and especially once I started speaking about my concerns with the company’s direction, his treatment of me changed pretty fundamentally. He attacked me publicly.

I’ve written and spoken about what happened after he called me a pedophile on Twitter. It pretty materially upended my life. I believe it was a strategy. I think I became a threat to him and his interests at Twitter, and he attacked me in hopes that it would get me to shut up. That didn’t work, but I think he was quite purposeful in the way that he attacked me publicly, and we’ve seen him employ that same strategy against others who have criticized him. 

Thor Benson

I wanted to see if you’d like to respond to a recent Elon tweet. He says, “I have rarely seen evil in as pure a form as Yoel Roth, and Kara Swisher’s heart is filled with seething hate. I regard their dislike of me as a compliment.”

Yoel Roth

It seems a little disproportionate. [laughs] I’m an academic writing articles that basically nobody reads, and in his mind I’m the most evil creature on the planet. It just seems a little bit out of whack with my actual power in the world, but I suppose that’s somewhat par for the course with Elon.

[Editor’s note: You can watch Yoel’s recent interview with Kara Swisher at the Code Conference below.]

Thor Benson

What do you think about Elon firing these key election integrity workers at Twitter?

Yoel Roth

Twitter has pretty significantly cut back the teams that were supporting elections around the world since Musk bought the company, but through all of it there was one staff member, whom I hired early on as Twitter was building its elections team, who stayed at the company until recently. That person single-handedly supported major global elections. The fact that they are no longer at Twitter, and that their entire team was eliminated, puts the company in a uniquely bad position going into elections for the rest of this year and next.

Thor Benson

Why was the team so crucial?

Yoel Roth

Supporting elections is different from other types of content moderation in the sense that usually you’re not even really dealing with content so much as dealing with malicious behaviors coming from concerted and dedicated bad actors. Over the last five years, Twitter and the rest of the social media industry had to invent different ways of dealing with coordinated behavior and staff up teams of experts to work on this. 

Those are highly specialized jobs. They require technical skills. They require knowledge of what different governments might be trying to do to interfere in elections and how they’re doing it. It’s not work that you can just replace with random new employees. It’s not something you can outsource. It’s not something that you can fake your way through. 

By dismantling the team of people who had been doing this work for years and had built up those skills, Twitter is losing an incredible amount of institutional knowledge and capability. I worry that even if they wanted to, they couldn’t replace it. 

A note from Aaron: Working with brilliant contributors like Thor requires resources. To support this work, please click the button below and sign up for a free trial.


Thor Benson

Some might say we’re just talking about a lot of people expressing political opinions on social media, but that’s not it. Online election interference often involves concerted efforts by larger groups to spread disinformation and muddy the waters. 

Yoel Roth

Yeah, there are layers to this. The first is: A lot of times in the context of elections we’re not even talking about people. I don’t think anyone — Elon Musk included — would defend the First Amendment rights of fake Russian accounts out of a troll farm in Saint Petersburg. That’s not a free speech question. That’s an integrity question.

Even if you assume you’re dealing with real people spreading harmful and dangerous information, it absolutely is a coordinated effort, a strategic effort, and one that you can’t just respond to ad hoc. You have to really deeply understand those dynamics and think about the right solutions and interventions to address them. There’s nobody left at Twitter to do that.

Thor Benson

What would you typically see when it comes to election disinformation when you were at Twitter? Was it mostly just people saying false things about a candidate? Was it people trying to confuse others who might be trying to vote?

Yoel Roth

It’s a bit of everything. In the context of major global elections, you would see, at least in years past on Twitter, heated political debate. For the most part, Twitter would stay out of that. It wouldn’t intervene. An example you brought up is people saying false things about a candidate. We actually never treated that as a form of misinformation. 

If people say, “Candidate X believes thing Y,” and that happens not to be true, that’s part of politics: people lie about candidates’ positions all the time, and sorting that out should be a conversation voters and candidates have so people can accurately understand what those positions are. There are other types of disinformation that can be more directly dangerous.

You could imagine a claim like, “Immigration and Customs Enforcement is going to be patrolling outside of a polling place in Maricopa County, Arizona.” That is a claim that’s known to deter voters — eligible legal voters — from voting. It’s illegal under election laws in most states. Twitter had policies prohibiting that conduct on the service and teams that were enforcing it. Twitter claims that those rules are still enforced, but rules that exist with nobody to enforce them are not really good rules at all.

RELATED FROM PN: Is Elon Musk evil or simply a fool?

Thor Benson

I don’t know how much you’ve been monitoring this since you left Twitter, but what changes have you seen in the spread of disinformation on Twitter since Musk took charge?

Yoel Roth

It’s hard to monitor what’s going on on Twitter beyond anecdotes. I think that’s part of the strategy. Twitter used to be the easiest platform to keep an eye on because it was nearly all public, and the company offered public APIs [Application Programming Interfaces] that let researchers, academics and journalists — anyone — monitor what’s happening on the service. 

One of the first things Musk did following his takeover was to make it prohibitively expensive for people to get the data they need to monitor elections. As a result, the thousands of people around the world who were doing this work, and who understood the trends around election issues on Twitter, can’t do it anymore.

Truthfully, we don’t know what’s going on with Twitter. We can make some assumptions based on what shows up in the replies to politicians and which blue checkmark accounts are getting prioritized, but a systematic, data-driven understanding of elections on Twitter isn’t possible to achieve anymore given the company’s changes to data access and its APIs. 
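To make concrete what that access looked like, here’s a minimal sketch of the kind of query researchers and journalists once ran against Twitter’s public v2 recent-search API. The endpoint, parameters, and bearer-token auth reflect the real v2 interface, but the query string is purely illustrative, and running this today assumes a paid access tier most researchers no longer have.

```python
import os

import requests

# Twitter (X) API v2 recent-search endpoint. The endpoint still exists,
# but since 2023 meaningful read access sits behind paid tiers priced
# far beyond most academic and newsroom budgets.
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"


def search_recent(query: str, max_results: int = 100) -> dict:
    """Fetch one page of recent tweets matching a search query.

    Assumes TWITTER_BEARER_TOKEN is set and belongs to an access tier
    that permits search (an assumption, not a given, post-2023).
    """
    headers = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}
    params = {
        "query": query,              # e.g., an election-related filter
        "max_results": max_results,  # the API caps this at 100 per page
        "tweet.fields": "created_at,public_metrics",
    }
    resp = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Illustrative query only; systematic election monitoring relied on far
    # broader filters plus the full-archive and streaming APIs, which are
    # now heavily restricted.
    data = search_recent('"polling place" lang:en -is:retweet')
    for tweet in data.get("data", []):
        print(tweet["created_at"], tweet["text"][:80])
```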
Elon Musk in New York City last month. (Faith Aktas/Anadolu Agency via Getty)

Thor Benson

What do you think is the best playbook for dealing with election disinformation? How should it be done?

Yoel Roth

A helpful framework for thinking about elections comes from Camille Francois, a Columbia researcher and the head of trust and safety at Niantic. She’s written about what she calls the “ABCs of disinformation” — actors, behavior, and content. In order to protect elections, you have to think about these issues across each of those areas. 

Let’s start with the C. The content of election disinformation is in some ways the most obvious bit of it. You can think of voter suppression narratives. You can think of misinformation about the security of mail-in voting. All of those are claims that platforms can and should intervene against. There are lots of ways you can do that. You can apply fact-checking labels. You can disseminate prebunks, which show people corrections to misinformation before they’ve even seen the misinformation.

Of course, you can remove tweets and posts that actually violate the rules and could lead to harm. But some of the most important strategies for addressing election disinformation focus not on the content of posts but on the groups and actors behind them, and on the deceptive behaviors used to propagate that content. I think platforms have to think and operate across all of those levels: not just fact-check misinformation, but also identify whether there’s coordinated activity or inauthentic accounts responsible for spreading these messages, and then deploy the right technologies to address it.

These are all really straightforward lessons that platforms learned after 2016. None of what I’m saying is groundbreaking, but we’re seeing, especially at Twitter, that lessons the company learned robustly after 2016 are no longer part of its playbook for dealing with elections. That’s a major risk going forward.
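As a rough illustration of the framing Roth describes, here’s a hypothetical sketch of how a platform might route detected signals to layer-appropriate interventions. The layer names follow Francois’s ABC framework; the specific interventions and function names are my own illustration, not Twitter’s actual tooling.

```python
from enum import Enum, auto


class Layer(Enum):
    """Camille Francois's ABC framework: actors, behavior, content."""
    ACTORS = auto()    # who is posting: fake, compromised, or state-linked accounts
    BEHAVIOR = auto()  # how it spreads: coordination, automation, spammy amplification
    CONTENT = auto()   # what is said: voter-suppression claims, election falsehoods


# Hypothetical mapping. Each layer calls for different tooling, which is why
# content-level fact-checking alone can't substitute for actor/behavior work.
INTERVENTIONS = {
    Layer.ACTORS: ["suspend inauthentic accounts", "attribute coordinated networks"],
    Layer.BEHAVIOR: ["rate-limit coordinated amplification", "dismantle botnets"],
    Layer.CONTENT: ["apply fact-check labels", "serve prebunks", "remove violating posts"],
}


def plan_response(layer: Layer) -> list[str]:
    """Return the layer-appropriate playbook for a detected signal."""
    return INTERVENTIONS[layer]


if __name__ == "__main__":
    for layer in Layer:
        print(f"{layer.name}: {', '.join(plan_response(layer))}")
```

The point of separating the layers, as Roth notes, is that a removal-or-label decision on a single post only answers the C; the A and B require network-level detection that dedicated teams, not ad hoc reviewers, have to build.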

RELATED FROM PN: How Twitter became a haven for climate misinformation

Thor Benson

Following the Twitter Files, there’s been a lot of talk about the government reaching out to Twitter to flag posts it thought were problematic, and controversy about whether that’s a form of censorship. When we’re talking about elections, how much is the government involved and how much should it be involved? And how much should Twitter respond to its requests?

Yoel Roth

At the theoretical level, I think the government coercing companies to moderate content would be a bad thing if it was happening. We’ve actually seen some evidence come out in lawsuits and in reports from congressional committees that, to me, seems pretty scary. You see folks at the White House emailing employees at Facebook and yelling at them about their moderation practices around covid. That’s a problem. There should be limits around that. 

But you can’t throw the baby out with the bathwater on these issues. I think it would be to everyone’s detriment if the government weren’t able to communicate with platforms about malign activity that it’s seeing. Tech companies aren’t intelligence agencies, nor should they be. Tech companies build social platforms, and they sell ads, and they’re really good at doing that. They have the ability to shut down malicious activity when they’re aware of it. Spy agencies have a whole lot more information than platforms do.

I think it’s in the government’s interest and the public’s interest for platforms to be made aware of that activity — within reasonable limits. That’s exactly what the relationship between platforms and the FBI and the intelligence community looked like for many years. There was appropriate threat information shared from government to companies, and it enabled companies to react to emerging threats. If you cut off the flow of communication there, platforms are going to be less equipped to respond to these threats. You’re cutting off a critical source of information.

Thor Benson

We’re about 13 months away from the election. What are your main concerns when you think about the spread of election disinformation on Twitter?

Yoel Roth

We’re not just 13 months out from one election. There are 65 national elections around the world next year, and more than 3 billion people are going to be going to the polls. What I worry most about is the ability of every platform — Twitter included, and perhaps most notably of all — to keep pace with the demands of a very, very busy year with some very high-risk elections. You can’t do this work with machine learning. You can’t automate elections.

You can’t say, “AI is going to fact check all of the misinformation.” It doesn’t work that way. No company has built the technology to do that. It needs teams of dedicated people to do this work, to secure election conversations on social media, and we’ve seen nearly every major tech platform cut back on the teams doing it. That really concerns me. 

Thank you for reading Public Notice. This post is public so feel free to share it.


We’ll be back with more Monday. If you appreciate this newsletter, please support our work by signing up for a paid subscription. Just click the button below for a free trial.

Have a great weekend.
