Monday, August 29, 2022

The case against Meta


Giving people what they "want" isn't necessarily good

by Matthew Yglesias 

In a recent article about TikTok and trends in phone-based entertainment, Ben Thompson observes that “what made Facebook’s News Feed work was the application of ranking: from the very beginning the company tried to present users the content from their network that it thought you might be most interested in, mostly using simple signals and weights.”


The thing I found most interesting was the choice of the word “interested” to describe the quality that Facebook and its competitors’ increasingly sophisticated machine learning tools use to determine what they show users. After all, computers aren’t people, and you can’t just tell them “show Matt some stuff you think he’s likely to be interested in.” Instead, as I understand it, the models are trained with some kind of objective reward function. We don’t know exactly how TikTok works or what its reward function is, but it’s something like “it’s good if people watch the video and keep watching more videos on TikTok.” Internally, Facebook rewards content that is “engaging.”
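To make that concrete, here is a minimal sketch in Python of what a reward function like that amounts to. Everything in it (the signal names, the weights, the structure) is hypothetical rather than Meta’s or TikTok’s actual system; the point is only that the ranker maximizes a literal formula, not the English word “interested.”

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A post the feed could show, with model-predicted engagement signals."""
    post_id: str
    p_click: float         # predicted probability the user clicks
    p_comment: float       # predicted probability the user comments
    expected_dwell: float  # predicted seconds the user spends on the post

# Hypothetical weights. The objective is a weighted sum of engagement
# proxies; nothing in it encodes "did this make the user better off."
WEIGHTS = {"p_click": 1.0, "p_comment": 3.0, "expected_dwell": 0.05}

def engagement_score(c: Candidate) -> float:
    return (WEIGHTS["p_click"] * c.p_click
            + WEIGHTS["p_comment"] * c.p_comment
            + WEIGHTS["expected_dwell"] * c.expected_dwell)

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # "Most interesting" is defined, literally, as whatever maximizes
    # this score. No other notion of interest enters the system.
    return sorted(candidates, key=engagement_score, reverse=True)
```

Whatever the real systems look like, the basic shape is the same: “interested” is whatever the weighted sum says it is.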


And it’s not just that machine learning is, in a sense, better at this than human recommenders — it’s actually doing something different.


When I was in high school, one of my teachers recommended that I read S.E. Finer’s three-volume macrohistory “The History of Government from the Earliest Times” because he knew that I enjoyed books like “Guns, Germs, and Steel” and “Plagues and Peoples.” Similarly, my grandfather once told me I might be interested in Charles & Mary Beard’s “The Rise of American Civilization.”


And I did find those books interesting, interesting enough that decades later, I remember what they said. But this kind of recommendation isn’t just more artisanal than TikTok’s — it’s actually answering a different question. My teachers and family members were essentially saying “given my understanding of your interests, my belief is that reading this book will help you become a better version of yourself.” Not everyone is that good at recommendations, though, and the idea that we could improve them by using data, computers, etc. is incredibly appealing.


But somewhere along the way we run into a serious alignment problem, and the recommendation engines that are increasingly driving our lives aren’t really doing what we truly want.


Revealed preferences are overrated

The fact that all the major Silicon Valley executives seem to be reasonably slim and in pretty good shape always makes me suspicious.


Most Americans are overweight. And while the level of obesity is unusually high in the United States, every country, as far as I’m aware, is trending in the same direction.



Throughout the vast majority of human history, people had to endure significant periods of food shortfalls and calorie deficits, so the optimal way to behave when food was available was to consume surplus calories.


And in most domains of life, this kind of behavior is sensible. I own more pairs of socks than I need to get through a typical laundry cycle because sometimes things go wrong and it’s good to have backups. Developing an instinctual tendency to eat backup food when food is plentiful is going to help you survive periods of food dearth. The problem is that in the modern world, most people never endure a period like that. So to maintain a healthy weight, you need either the iron self-discipline to resist the overeating instinct, the iron self-discipline to practice intermittent fasting and create a simulacrum of dearth, or the luck to be a weirdo mutant who would have starved to death in 1622 but happens to be optimized for the modern world.


And it seems to me that most of the top Silicon Valley people do, in fact, have an iron will.


Which is great for them. But I think it pushes them toward interpretations of the world in which the things that we normal people do (overeat and get fat) represent our genuine desires, while the things we say we want to do (eat prudently and be in good shape) are just hollow virtue signaling. Obviously people lie, even to themselves. But I think this basic model of the world is wrong. When I say that I want to stop eating random snacks, that is my authentic preference — and when I go to a party and see some chips and tell myself “just one won’t hurt,” that is self-deception.


Here in the human world, though, even if we disagree on exactly what to say about overeating, we do all basically agree on what’s happening: a clash between first-order and second-order preferences. But machines are very literal about their optimization functions.
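In the terms of the earlier sketch, the trouble is that the objective contains only what someone wrote into it. A hypothetical illustration: even if a regret signal were somehow available, an optimizer handed the function below would ignore it completely, because its weight is zero.

```python
def session_objective(watch_seconds: float, reported_regret: float) -> float:
    # The machine maximizes exactly this expression, nothing more.
    # The second-order preference ("I wish I'd watched less") is visible
    # in principle, but at weight 0.0 it might as well not exist.
    return 1.0 * watch_seconds + 0.0 * reported_regret
```

An optimizer pointed at that function will happily drive regret up if doing so adds watch time: first-order behavior goes in, first-order behavior comes out.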


Thompson gets at this in a follow-up post, noting that the difference between Instagram Stories (where you see stuff your friends posted) and Instagram Reels (where you see content Instagram is optimizing for) is that Reels makes him feel guilty:


I have to be honest: the more I use Reels, the less interest I have in Stories; they seem so boring in comparison. At the same time, this emphasizes what a high-stakes gamble this all is for Instagram: there is a big difference between “I ought to” versus “I want to”; spending ever more time on content tailored to me is, if you step back and think about it, the exact sort of activity I ought to feel guilty about. Instagram has, thanks to its social network foundation, retained some sort of the old “connecting” DNA that Facebook used to talk a lot about, but the more it tries to ape TikTok the more it is nothing but pure entertainment that, if I’m honest with myself, I feel a bit guilty about indulging.


I think that conveys the point that whatever it is that Meta is optimizing for, it’s not well captured by the English-language word “interested.” If Reels were interesting, you wouldn’t feel guilty about time spent on it; you’d feel satisfied, like a need had been met.


One thing I like about being in the subscription newsletter business is that I don’t think you can make money doing this on a guilty-pleasure basis. Because people need to actually opt in and pay money, it only works if people feel proud to be a Slow Boring subscriber. The fact that Facebook and Instagram work best as businesses when they’re free reveals quite a bit about our preferences — these platforms are temptations that people don’t have particularly nice things to say about in their more reflective moments.


And this is why I’ve tried over the years to get the people who work at Meta/Facebook to be a little bit more reflective about their own work. If the people who watch Reels feel guilty about it, I think you should probably feel guilty about the fact that your job is to get more people to spend more time watching Reels. When researchers pay people to stop using Facebook for a few weeks, the recipients of the money report being happier, and many of them don’t go back to using Facebook when the experiment ends. If Facebook were genuinely showing people content that they are interested in, you wouldn’t see those responses. What it’s doing is exploiting an addiction-type mechanism.


There’s a tension between commercial incentives and doing the right thing in most fields. When I was an intern at Rolling Stone, the staff tended to roll their eyes at the magazine’s cover stories. The imperative for a cover was to find someone famous and popular — and to get someone like that to cooperate for a Rolling Stone cover shoot, the magazine had to implicitly promise a softball piece. The staff mostly had good politics and good taste, and they were not at all enthusiastic about the kind of stories that ended up on the cover. But it was a commercial enterprise that needed to pick covers for the sake of newsstand sales. You could justify the commerce-minded cover stories by saying they created the opportunity for more editorially ambitious work.


And even though the media business changed in a million ways between my internship in the summer of 2000 and when I co-founded Vox in 2014, there was a common thread: commercial considerations played a role in editorial decisions, but they did not absolve the decision-makers of responsibility.


In other words, either the product as a whole was something you could be proud of or it wasn't. You couldn’t just say “we run dumb headlines because people click on them” or “we put bad bands on the cover because it sells issues.” Staying in business is relevant, but it can’t be the only consideration unless you’re a huge asshole.


We see this in the movies, too. Ryan Coogler made “Fruitvale Station,” a great indie debut. Then he made “Creed,” a really cool reimagining of a storied franchise. Then he made “Black Panther,” which I think meant a lot to a lot of kids and got his work in front of a ton of eyeballs, but isn’t nearly as good a movie as the other two. It’s one of the better MCU films, but its political ideas fundamentally don’t make sense — not because Coogler suddenly forgot how to make sense, but because he had to fit the plot into the larger MCU tapestry. Soon he’ll be coming out with “Wakanda Forever,” and I hope he got paid a bajillion dollars for it. But then I hope he leverages that commercial success into more artistic projects. Many of the great filmmakers spend some time walking on the more pop side of the medium. But no one has the best possible career exclusively doing that kind of work.


Part of what I find so off-putting about the corporate culture at Meta is that they don’t deny that what they’re feeding people is kinda garbage. They instead deny that they have any responsibility for the machine they’ve constructed — it’s just showing people what’s interesting to them, and if they’re interested in garbage, that’s their own fault.


But I don’t accept that. Everyone has to navigate the real world, but the people who work at Meta are also responsible for their choices. And a company whose premise is that it actually isn’t responsible for its outputs is dangerous.


The last time I wrote about this, I got a lot of feedback saying I misunderstand what the bulk of the talent at Meta is doing. They’re not fiddling with knobs to increase Facebook and Instagram engagement; they’re working on fundamental issues that power the underlying infrastructure.


And I’m sure that’s true, but so what?


A lot of people have technical expertise that’s relevant to food manufacturing. Some of them work for companies that are trying to make alt-meats that, if brought to a large scale, could prevent billions of animals from suffering while improving environmental outcomes. Others work for companies that are trying to make Doritos slightly more compulsive so that future people will eat bags of Doritos more rapidly. If you are doing basic research on the flavor and texture of food, it makes a big difference to the world which of those companies you work for!


And I don’t think the fact that people have a “revealed preference” for Doritos does much justificatory work here. Everyone’s got to make a living, and there are worse jobs than Frito-Lay. But if you do have options in life, the right option is to make cruelty-free meat alternatives, not chips. It’s no fun to be a scold, but think about all the times you’ve been mad about an article you read somewhere because you think it got something important wrong. If the author said “well, a lot of people read it,” you wouldn’t find that to be an acceptable defense, and you shouldn’t accept it here either.


Markets are good ways of organizing things, but a healthy market involves the individual participants having integrity and ethics.


I’ve been interested to learn recently that among people who care about AI safety, the AI research team at Meta is considered to be both highly skilled and dangerously cavalier about the risks. I’m not nearly technical enough to adjudicate these disputes, but I’m not surprised that Meta has earned this reputation. Its corporate culture is deeply invested in the idea that you can just say your ranking engine is showing people “things they will find interesting” and not worry too much about what that means in practice, or how closely the actual implementation tracks a normal understanding of what it means to recommend interesting things to people.


So far nothing catastrophic has happened because of Meta’s recommendation engines. But the general unwillingness to ask the question “what if it’s bad to create these compulsive apps?” — or even to admit that it’s coherent to wonder if something popular and technically impressive could be bad — speaks to a larger set of blinders. The question of what we really want our machines to do for us is actually fairly subtle and difficult, even in a fairly banal case like auto-playing video. To insist that whatever maximizes time spent on the app is, by definition, tapping into our truest preferences is both absurd and dangerous.

