The Community Notes feature was supposed to shut down conspiracy theories on X, formerly Twitter. Instead, Elon Musk's preference for crowdsourced moderation is allowing malicious users to weaponize hatred, lies and disinformation about the October 7 Hamas attack.
The attackers from Gaza burst through the fence into Kibbutz Kfar Azza on October 7 using explosives, and quickly reached the houses where students and young couples, some with babies, lived. Credit: Rami Shllush
Hours after the massacre of Israeli civilians by Hamas terrorists on October 7, many online accounts had already begun to spread doubt about the veracity of the horrors that had taken place.
In a particularly disturbing development, the “Community Notes” feature on X (formerly known as Twitter) was repeatedly employed to give a veneer of legitimacy to those spreading conspiracy theories. Deployed first in the U.S. and then globally earlier this year, Community Notes are intended to provide user-generated fact checks debunking dubious claims made in viral tweets. But in the wake of the October 7 attacks, the feature was abused by malicious users to spread denial of the massacres.
Rebutting these denialists is itself a fraught issue given that the atrocities have only recently occurred and that Jewish religious and cultural sensitivities limit how, when, and to whom human remains ought to be displayed.
Nevertheless, many Israelis felt that it was imperative to show the world the horrors of that weekend. Some made explicit historical comparisons to the hard choices made by then-U.S. General Dwight Eisenhower in 1945 or by Mamie Till (the mother of Emmett Till) in 1955, both of whom recognized the power and danger of the human impulse to deny horrors.
Eisenhower wrote after visiting the recently liberated concentration camp at Ohrdruf, “I made the visit deliberately, in order to be in a position to give first-hand evidence of these things if ever, in the future, there develops a tendency to charge these allegations merely to ‘propaganda.’” Mamie Till had insisted on an open casket after the lynching of her son in Mississippi, saying “Let the people see what they did to my boy.” White newspapers and magazines notably refused to publish the images of Emmett Till’s tortured body; Black newspapers and magazines did not.
Teddy bears covered with soot in a saferoom that was also used as a children's room, photographed on October 22 in Kibbutz Beeri in southern Israel, following a deadly infiltration by Hamas terrorists from the Gaza Strip. About 120 members of the kibbutz were killed.
Within hours of the October 7 attacks, horrifying images were released by the Israeli government on social media, including X. Most of the images avoided showing the victims’ remains, including one showing copious blood splattered all over a young girl’s bedroom. But others did not, including one profoundly disturbing image showing the charred, barely recognizable body of an infant on a medical examiner’s lab table. The intention behind the Israeli government’s decision to release these images was clear: let no one deny what was done here.
But the depravity of certain online communities is not to be underestimated.
Within hours of these images beginning to circulate online, some users of X had weaponized the Community Notes feature to spread conspiracy theories that the images were doctored or staged.
Malicious users posted Community Notes falsely characterizing the images as fake and mobilized other users to rate these notes as “helpful” so that they would appear below the images for all users on the platform.
For example, a Note on the image of the child’s bedroom claimed that the color of the dried blood was implausible because “Deoxygenated blood has a shade of dark red, so it is a staging, the blood is not pink”, willfully conflating the copious blood with a small area of pink in the coloring book.
A Note on the image of the infant’s charred corpse falsely claimed that digital analysis had found the image to have been generated by AI and cited “tell-tale signs such as distorted fingers” despite the fact that the photo is real and that the medical examiner’s fingers appear normal. This represents an example of AI-driven “perception hacking,” in which the fear of being deceived by AI-generated images is weaponized to induce people to disbelieve real images.
The bogus notes were removed by X within a few hours, but by that point, the damage was already done.
Not only had untold thousands of users already seen the Notes falsely “debunking” the images, but atrocity denialists had screenshotted them for further distribution, likely knowing the Notes would be removed.
They subsequently circulated these screenshots to many more users, both on X and elsewhere, instead of linking to the Tweets themselves, which no longer carried the bogus Notes. Because of the insidious nature of antisemitism, denialists were additionally able to claim that Jews had used their wealth or media influence to have the Notes removed; some such claims remain on X even now.
The denial of atrocities online is nothing new. Bashar Assad apologists very successfully cast doubt on war crimes committed by the Syrian regime against Syrian and Palestinian civilians throughout that country’s lengthy civil war, in some instances accusing Syrian civilians of using chemical weapons on themselves, in others suggesting that the attacks simply never occurred. More recently, pro-Russian propagandists have sought to cast the atrocities inflicted on Ukrainian civilians in cities like Bucha—including rape, torture, and murder—as Western propaganda.
But the use of X’s Community Notes to deny the reality of an atrocity—in this case one that was heavily filmed and contemporaneously uploaded to Facebook and Telegram by both the victims and the perpetrators—represents a new development in the toolkit of those spreading hate and disinformation.
It highlights one of the weaknesses of relying on a fundamentally user-moderated system of fact checking: some users are bad actors, and an insufficiently robust system allows them to spread disinformation and atrocity denial while draped in the credibility that the Community Notes feature is supposed to convey. Elon Musk has sought to reduce the burden that professional content moderation imposes on X, but the risks of relying on community-generated moderation are evident. Without a greater degree of professional moderation, bogus analysis masquerading as real fact checking will continue to proliferate.
Propagandists and atrocity denialists of the past would have loved to be able to tell their audiences to simply not believe their own eyes; the age of Photoshop, and now AI, permits their intellectual descendants to do exactly that.
Nathan Kohlenberg is a Research Analyst on the Malign Finance and Corruption Team at the Alliance for Securing Democracy at the German Marshall Fund of the United States, where he focuses on the Middle East. He is a graduate of the Johns Hopkins School of Advanced International Studies (SAIS), and a Fellow at the Truman National Security Project. On X: @nkohlenberg