Thursday, July 22, 2021

Facebook and YouTube’s vaccine misinformation problem is simpler than it seems


As the Biden administration struggles to find the words to confront social platforms, a better understanding of their algorithms could help


On Friday, President Biden said Facebook is “killing people” by spreading misinformation about the coronavirus vaccines. On Monday, he changed his tune. “Facebook isn’t killing people,” he amended, instead blaming a handful of disinformation merchants who use the platform.


Whether Facebook is or isn’t killing people depends on your definitions. What’s clear, regardless, is that Facebook, YouTube, and other social media platforms have played a major role in the anti-vaccine movement. And they continue to do so, despite some sincere efforts by the companies to combat the trend.


Untangling exactly who’s at fault, and to what degree, is nigh impossible, especially because the companies carefully guard the data that would help researchers understand the problem. But there is at least one critical element of social media’s misinformation problem that’s quite simple, once you grasp it — and that helps to explain why none of their interventions so far have solved it. It’s that the recommendation and ranking software that decides what people see on social platforms is inherently conducive to spreading falsehoods, propaganda and conspiracy theories.



Before social media, the dominant information sources — newspapers, magazines, TV news shows — had all sorts of flaws and biases. But they also shared, broadly speaking, a concern for the truth. That is, they subscribed to an expectation that factual accuracy was core to their mission, even if they didn’t always get it right. (There are exceptions, of course, including a certain cable news network that has broadcast more than its fair share of vaccine misinformation.)


Facebook and YouTube didn’t set out to become dominant information sources. They set out to connect and amuse people, and to make lots of money. (In fact, both began as dating sites, more or less.) As they grew, they developed algorithms to show each user more of what they wanted to see. To make those decisions, they looked at the data they had readily at hand: which posts and videos were generating the most likes, comments and views, and what types of posts the user had watched or liked before. That’s called engagement data, and it turned out to be pretty effective at keeping users hooked on their feeds. It also proved effective for targeting them with ads. Massive growth ensued.
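
To make that mechanism concrete, here is a minimal sketch, in Python, of what purely engagement-based ranking amounts to. The field names, weights and structure are illustrative assumptions, not any platform’s actual code; the point is that the score is assembled entirely from likes, comments, views and the user’s past behavior, with no term anywhere for accuracy.

```python
# A toy sketch of purely engagement-based feed ranking (illustrative assumptions;
# the fields, weights and names are not any platform's real code).
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str
    likes: int
    comments: int
    views: int


def engagement_score(post: Post, topic_affinity: dict[str, float]) -> float:
    """Score a post from engagement metadata plus the user's past behavior.

    Note what is missing: no term reflects whether the post is true.
    """
    popularity = 1.0 * post.likes + 2.0 * post.comments + 0.1 * post.views
    affinity = topic_affinity.get(post.topic, 0.0)  # how much this user engaged with the topic before
    return popularity * (1.0 + affinity)


def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    """'More of what they want to see': highest engagement score first."""
    return sorted(posts, key=lambda p: engagement_score(p, topic_affinity), reverse=True)
```

By construction, a false but outrage-inducing post that racks up comments will outrank a careful correction that few people share.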


With that massive growth came a role Facebook and YouTube weren’t built for. They became the leading conduits for not just social connections and entertainment, but also online content in general. As people spent more and more of their digital lives on these platforms, they became hubs for news, political discussion and organizing. By 2020, the Pew Research Center found that 36 percent of U.S. adults “regularly” got news from Facebook, and 23 percent from YouTube. And the number of people whose understanding of politics, culture and science is influenced by content they see on those platforms — even if not strictly “news” — is probably much greater.



Using engagement data as the basis for what content to show people works fine as long as the only goal is to, well, keep them engaged. But such metrics are a form of metadata: they describe how people react to a post, not what the post says. Crucially, they tell you nothing about whether the information presented in a given post or video is true.


So now you have two media realms: a traditional media realm in which information must be both true and interesting to reach an audience, and a social media realm in which it must only be interesting. Guess which one is bound to become a magnet for conspiracy theorists, hoaxsters, propagandists, disinformation operatives, grifters and peddlers of false cures?


Indeed, there is evidence that such content tends to thrive on algorithmic social platforms, with lies outrunning the truth and fabricated political stories outperforming real ones. Truth can be messy, complex, confusing. Propaganda and conspiracy theories offer clarity and tidy stories of good and evil, and can be tailored to appeal to any given demographic's existing biases and fears. The algorithms are built to ensure they reach that target demographic.



The companies have had ample time over the years to reconsider that purely engagement-based approach, as their influence over news and politics grew. Instead, they embraced it, at least until quite recently — both because it was good for growth and because it allowed them to claim editorial neutrality, which is convenient for avoiding allegations of bias or discrimination.


It was clear as early as 2014, for instance, that Facebook had become a hotbed for health and medical misinformation, some of it dangerous. Yet when I asked the company back then if it had considered trying to build simple hoax detection signals into its news feed algorithm, a Facebook product manager said it wasn’t on the agenda. “We haven’t tried to do anything around objective truth,” Greg Marra told me, adding that it would be a “complicated topic.” Had Facebook taken the chance to reevaluate its algorithms years ago, it might have been in a far better position to combat vaccine misinformation today.


Over the past few years, Facebook and YouTube have belatedly acknowledged that they play an important role in shaping people’s attitudes about everything from elections to vaccines. And they now admit that misinformation is a serious problem on their platforms. But even as they reluctantly accept some responsibility for addressing certain kinds of particularly dangerous misinformation — usually after the fact, by removing posts or suspending users — they tend to view the problem as not really their fault. After all, they’re not “arbiters of truth,” as Facebook chief executive Mark Zuckerberg put it as recently as 2020. They’re just tools for connecting people. If some portion of their billion-plus users are bent on spreading lies, how can Facebook or YouTube stop them?


How companies such as Facebook and YouTube could possibly eradicate misinformation on their platforms at this point — even one specific type, such as vaccine misinformation — without becoming overbearing censors is a very hard question. Fortunately, it may also be the wrong question.



As the backstory above makes clear, the problem of misinformation on social media has less to do with what gets said by users than what gets amplified — that is, shown widely to others — by platforms’ recommendation software. Researchers such as Renée DiResta of the Stanford Internet Observatory and Joan Donovan of Harvard University’s Shorenstein Center have long called for a greater focus on the role of social media companies’ algorithms and design decisions, as opposed to their decisions about what content to allow or prohibit.


How to reconfigure social media to better incentivize and select for reliable information is a hard question, too. But at a time when the problem of vaccine misinformation appears all but intractable, it may offer a more constructive framework for both diagnosing and addressing the issue at scale. It also has the virtue of skirting at least some of the First Amendment issues raised by government attempts to influence what speech platforms prohibit or allow.


Facebook in particular has taken a few hesitant steps in this direction, as when it tweaked the news feed in 2018 to boost posts from publishers identified as “broadly trusted.” But the company’s top executives have repeatedly scuttled, weakened or deprioritized such projects out of concern that they would stifle growth or fuel claims of liberal bias. If the Biden administration wanted to pressure Facebook over its role in misinformation, reports of those decisions might be a better place to start.
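
For a sense of what such a tweak looks like mechanically, the sketch below layers a hypothetical publisher-trust signal on top of an engagement score. The trust values, weight and names are invented for illustration and are not Facebook’s actual “broadly trusted” system; they simply show how one extra multiplier can boost reliable sources and demote widely distrusted ones without removing any posts.

```python
# Hypothetical illustration of re-ranking with a publisher-trust signal.
# The trust table, weight and names are assumptions, not Facebook's real system.

# Survey-derived trust scores per publisher, on a 0-to-1 scale (invented values).
PUBLISHER_TRUST = {
    "established-newspaper.example": 0.9,
    "miracle-cure-blog.example": 0.2,
}
DEFAULT_TRUST = 0.5   # middling trust for publishers not in the table
TRUST_WEIGHT = 0.5    # how strongly trust is allowed to move the score


def adjusted_score(engagement_score: float, publisher: str) -> float:
    """Boost broadly trusted publishers and demote widely distrusted ones,
    leaving unknown or middling publishers roughly where engagement put them."""
    trust = PUBLISHER_TRUST.get(publisher, DEFAULT_TRUST)
    return engagement_score * (1.0 + TRUST_WEIGHT * (trust - DEFAULT_TRUST))
```

The design choice worth noticing is that this operates on amplification, not on speech: nothing is deleted, but low-trust material reaches fewer feeds.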



Are Facebook, YouTube and other algorithmic platforms capable of adapting their products to prioritize trustworthy content in substantive ways? Or could the answer involve somehow reducing their role and influence in the information ecosystem? Those questions may be no easier to answer than whether Facebook is “killing people.” But at least they would have a chance of getting us somewhere.

