Friday, June 30, 2023

AI Mailbag: the risks and benefits of AI. By Timothy B. Lee



Wow, you folks asked some great questions! Thanks to everyone who participated. Below are answers to eight reader questions. There were several other questions I wanted to answer but couldn’t because I didn’t know enough about the topic. I’m hoping to research some of these questions and turn them into future articles.

AndrewB asks: “Do you think AI is currently in an overhype phase, the way self-driving cars were several years ago when many were predicting full self-driving was just around the corner? My own experience with co-pilot, for example, is that co-pilot is helpful, but it's certainly not making me 10x more productive (more like 1.1x more productive).”

Yes. I think AI is ultimately going to have big economic and social impacts, but right now I think a lot of people are wildly overestimating how fast it’s going to happen in much the same way they did with self-driving cars six to eight years ago.

AnonymousFactory77 had several questions about the future of work:

    “Are the jobs ‘created’ by AI really going to be available to the average person whose job is lost to AI?”

    “There seems to be a lot of vague talk about retraining people to be able to take on these new AI-created jobs, but if AI is only creating extremely advanced engineering jobs, how is any of this going to work?”

    “When do you personally predict that the majority of current white-collar jobs will be automated?”

    “What advice would you give a young person (who works in a field exposed to automation) to prepare for becoming economically obsolete?”

When a new, potentially job-destroying technology comes along, it’s common for optimists to emphasize that the technology will create new jobs in the process. And often that does happen. But the case for optimism does not depend on new jobs being created by that technology.

Rather, the main mechanism for job creation is more basic: the new technology makes people (investors, employees, customers) wealthier, people spend that increased wealth on various goods and services, and businesses hire workers to meet the increased demand. The new jobs might be—and often are—created in a totally different industry from the one where jobs were lost.

Of course, this process isn’t perfect. During the 2000s, a lot of manufacturing workers in the Rust Belt lost their jobs and had difficulty finding new ones. New jobs did get created, but they were often in different metro areas and required learning new skills.

With that said, I don’t think job prospects for white-collar workers are as grim as AnonymousFactory77 seems to think. Over the next five to 10 years, I expect most uses of AI in the workplace will be based on the “copilot” model where software makes workers more productive rather than replacing them outright.

But if you’re worried about your job prospects over the longer term, my advice is to try to shift into a career that involves interacting with other human beings.

This includes teachers and college professors, nurses and doctors, therapists and counselors, realtors, and so forth. Many desk jobs also involve interacting with people outside of your company, whether that’s salespeople (talking to customers), law firm partners (talking to clients), PR people (talking to reporters), user experience designers, and so forth. While it might be possible to automate most aspects of these jobs in a decade or two, companies are going to be reluctant to replace people who have built up a rapport with important external stakeholders.

Alex wants to know: “Any guidance for helping people in my life understand the extent to which AI is going to impact the future? Some are turned off by the ‘human extinction’ element of the AI discourse, which they see as grandiose or conspiratorial. How would you go about portraying AI’s very certain impacts on humanity to folks who are turned away by the most troubling headlines?”

Honestly, I think there’s a lot of uncertainty about how AI will impact our lives in the coming decades.

I don’t buy the most apocalyptic predictions about AI—either human extinction or mass unemployment. But it could be a bumpy ride, with some serious negative consequences along with the positive ones.

The place where I see the most reason for optimism is in medicine. AI is being used for everything from curing paralysis to discovering new drugs. I also see big potential in transportation, especially self-driving cars.

On the other hand, I do worry that AI could make our political system even more volatile. There’s a good argument to be made that the invention of the printing press led to the Protestant Reformation, which in turn led to the bloody wars of religion. The invention of mass media in the early 20th century probably made it easier for dictators to consolidate power. And I don’t think Donald Trump would have been elected president in 2016 without Twitter and Facebook.

I don’t know how AI will be used in the political arena in the next decade or two but I would not be surprised if it has some big—and not necessarily positive—impacts on American political culture.

Chris writes “I know you're not an economist, but what is your reaction to this tweet (which as an economist I think is correct, and you seem to be on the side of the economists). ‘Economists seem to consistently be the most dismissive of AI existential risk concerns, out of all groups of people who think seriously about the future. Why is this and what can we learn from it?’ What is it we are missing, or conversely what is it we understand that others don't?”

I wasn’t trained as an economist, but my other newsletter is called Full Stack Economics, so I’m pretty familiar with how economists think. And I do think economists have some valuable insights to offer here because they are used to thinking systematically about the large-scale impacts of new technologies.

A lot of people have the impression that computers and the Internet have driven unusually rapid economic change in recent decades. But economists know this isn’t true: the rate of economic growth has actually been slower than was typical in the 20th century.

In a 1990 paper, the economist Paul David drew an analogy to the early years of electrification. During the early 20th century it took decades for businesses to reorganize their factories to take full advantage of the new capabilities of electric power. By the same token, David predicted, it might take decades for businesses to reorganize to take full advantage of the capabilities of computers.

Another reason we haven’t seen rapid productivity growth in the Internet era is that data processing just isn’t that important for much of the economy, including basics like housing, transportation, food, and clothing.

This background makes me instinctively skeptical when I see people predict that AI will lead to an unprecedented pace of economic and social change. And it also affects how I think about existential risk. Just as economic activity mostly happens in the physical world, any realistic plan for taking over the world is going to require gaining control over the physical world. And that seems a lot harder than singularists think (see my piece last month for more about this).

Another relevant economic concept is complementarity. To accomplish almost any ambitious goal, you need a variety of resources and capabilities. To build a building you need an architect to draw up the plans, a lawyer to get the necessary permits, construction workers, tractors, building materials, and so forth. Hiring more architects won’t help the project go faster if you don’t have enough construction workers or building materials.

It’s crucial to bear this principle in mind any time you’re thinking about a hypothetical future with millions of human-level (or superhuman) AIs. No matter how fast an AI can think, most of its ideas will have to be translated into actions in the physical world. And most of those actions will have to happen at the speed of human beings.

An AI might think of a new scientific theory, but it will need human help to set up experiments to confirm it. The AI can think of a new superweapon, but it will need human help to test and manufacture it. So I’m very skeptical of “fast takeoff” scenarios where superintelligent AIs rapidly take over the world. Because at the end of the day AI is going to complement physical human labor more than it will substitute for it.

Daniel asked “How should schools address AI applications, like ChatGPT? What guidance would you give secondary school teachers in terms of what adjustments to policies and procedures they should employ?”

I definitely want to do more reporting on this in the future, but one analogy that might be helpful here is that large language models are to writing what calculators are to math. A good strategy for teaching math is to have students do arithmetic without calculators in the earlier grades. Then once they’ve mastered arithmetic, let them use calculators in more advanced classes like calculus or physics. It’s also helpful to ask students to show their work to verify that they’re not just writing down an answer they got from a calculator.

I think teachers in writing-oriented classes are going to need to adopt similar strategies. In classes that are designed to teach writing, teachers should try to prevent students from turning in work written by a large language model. This could be done by having students do more writing in class under the supervision of the teacher.

Alternatively, teachers could ask students to “show their work” by having them write in an editor like Google Docs with change-tracking turned on. Teachers could then check a document’s edit history to verify that the student composed the essay over a period of hours rather than cutting and pasting a finished essay from some other source.

In classes more focused on specific subjects like history or philosophy, it might make more sense to allow or even require the use of ChatGPT as a research tool, while teaching students how to check the output for errors.

Aaron Strauss asks: “Training-set/feedback/reinforcement learning has proved dominant over rule-based AI. Yet, when it comes to assuming AIs will be super-intelligent *in the real world* (not chess, Go, protein-folding), I haven't heard any of the dystopians explain what feedback mechanism even has the possibility of creating a threatening AI. Instead, to my lay ears, it sounds like ‘well, computers are logical and fast, ergo they must be capable of super-intelligence’—but that completely skirts the issue and seems to rely more on rule-based thinking. Have you heard folks discuss feedback mechanisms for superintelligence? And might those discussions help clarify how to build AIs safely?”

Because the rules of chess are simple and unchanging, chess software is able to grind through a massive number of possible board states to figure out the best move. This works so well that the best chess software today dramatically outperforms the best human players.
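To make the “grinding through board states” idea concrete, here is a minimal sketch of plain minimax search. It uses the python-chess library (assumed to be installed) and a crude material-count evaluation made up purely for illustration. Real engines add alpha-beta pruning, careful move ordering, and far better evaluation functions, but the basic mechanism is the same: enumerate possible continuations and pick the move leading to the best reachable position.

    # Minimal minimax sketch using python-chess, with a crude material-count
    # evaluation invented for illustration. Not a real engine.
    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def evaluate(board):
        """Crude material count: positive favors White, negative favors Black."""
        return sum(PIECE_VALUES[p.piece_type] * (1 if p.color == chess.WHITE else -1)
                   for p in board.piece_map().values())

    def minimax(board, depth):
        """Search every continuation `depth` half-moves deep and return the best score."""
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        scores = []
        for move in board.legal_moves:
            board.push(move)                # try the move...
            scores.append(minimax(board, depth - 1))
            board.pop()                     # ...then undo it
        # White picks the highest score, Black the lowest
        return max(scores) if board.turn == chess.WHITE else min(scores)

    def best_move(board, depth=2):
        """Pick the move whose subtree scores best for the side to move."""
        def score(move):
            board.push(move)
            s = minimax(board, depth - 1)
            board.pop()
            return s
        moves = list(board.legal_moves)
        return (max(moves, key=score) if board.turn == chess.WHITE
                else min(moves, key=score))

    print(best_move(chess.Board()))         # prints some legal opening move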

In contrast, the goal of a large language model is to imitate the speech patterns of a typical human being. So you might think this would cause LLMs to “max out” at the intelligence level of the human beings they’re trying to imitate.

Maybe, but I’m not so sure.
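For concreteness, “imitating speech patterns” means the model is trained to predict the next token of human-written text and is scored on how close its guess comes. Here is a toy, hypothetical sketch of that objective: the “model” is a stand-in (an embedding plus a linear layer, not a real transformer) and the “text” is random token IDs, purely to show the shape of the training signal, which is always whatever a human actually wrote next.

    # Toy sketch of the next-token prediction objective behind LLM training.
    # The model and data are stand-ins; only the objective is the point.
    import torch
    import torch.nn as nn

    vocab_size, d_model = 1000, 64                    # toy sizes
    model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                          nn.Linear(d_model, vocab_size))
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, vocab_size, (1, 32))    # pretend tokenized human text
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # each position predicts the next token
    logits = model(inputs)                            # shape: (1, 31, vocab_size)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()   # training nudges the model toward the human's actual next word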

Interestingly, Aaron mentions protein-folding as a domain where computers have shown superhuman capabilities, but DeepMind’s protein-folding breakthrough was based on the same transformer architecture that powers large language models. So if a transformer can recognize patterns in protein structure that elude human biologists, maybe transformers will eventually be able to recognize interesting patterns in human writing that no human being has noticed.

Chris Cottee asks, “Given that human consciousness arises from being physically present and vulnerable in a natural environment, thus giving rise to emotions and desires (pain, loss, excitement, fear, joy, love, etc) and that our whole mental life extends from those experiences, do we really need to be worried about AI when it can never achieve true consciousness without them?”

Chris is making an astute observation about human nature here. Our ancestors’ struggles for survival profoundly shaped our minds, giving us powerful desires and emotions that digital minds are unlikely to have. Perhaps that means AI will never achieve “true consciousness.”

But I don’t think it necessarily follows that we don’t need to be worried. AI minds might be quite different from our own, but that doesn’t necessarily mean they’ll be less dangerous. The strangeness of AI might mean that it malfunctions in odd ways that we can’t anticipate. And even if you don’t buy the premise of an AI spontaneously going rogue and trying to take over the world, it’s easy to imagine a malicious human being creating an AI and giving it goals that are harmful to other humans.

JPodmore asks: “What do you think are the odds of a widening regulatory approach between Europe and the US? GDPR already seems to be driving a bit of a wedge between the EU and the US and I can't see Europe taking a laissez faire approach to e.g. training datasets or outputs that mimic a specific person's work.”

It seems pretty likely to me. I don’t think the US is going to pass any significant legislation in the next few years. I am not an expert on the EU legislative process, but it seems like they are taking the AI Act pretty seriously. So we’ll probably wind up with strict regulations in Europe and little to no regulation here in the US.

What I expect to happen next is that US companies will largely ignore the EU laws, European citizens will want to use the American services anyway, and EU authorities will then come under pressure to accept a face-saving compromise.

That’s basically what has happened in the privacy arena over the last decade, and frankly it hasn’t been good for the EU’s technology sector. European policymakers might want to think carefully about whether they really want to go down this path again.
