Thursday, September 29, 2022

At last, an AI existential risk policy idea


www.slowboring.com
Matthew Yglesias

By law, computer systems dealing with nuclear weapons and other sensitive military matters need to be “air-gapped”: physically separated from the internet, other computer networks, and other network-connected devices. Getting data onto or off of an air-gapped computer requires direct physical access and a USB drive or similar physical storage medium. That’s why in the original “Mission: Impossible,” Tom Cruise has to break into a specific room in a specific building and directly access the computer — air gapping is very inconvenient for legitimate users, but it provides a high degree of security.

A number of big U.S. labs are currently working on major AI projects that use huge numbers of GPUs to train models on bigger and bigger sets of data. Much of this work is done by the big computer companies you know — Alphabet, Microsoft, Amazon, Meta — but also by a handful of bespoke AI companies, including OpenAI, DeepMind, and Anthropic. The latter three stand out in that their founders and key executives say they agree with the AI risk worriers and believe there is a good chance that ongoing AI experimentation will lead to a terrible outcome involving AI takeover or human extinction.

But while you might think that people undertaking research programs they themselves believe are more dangerous than nuclear weapons research would copy safety practices like air gapping, the reality is that they don’t.

And that’s because of competition. If the more safety-minded labs were to adopt logistically burdensome but safety-enhancing processes, that would increase the odds that the “race” to super-capable AI would be won by a less responsible company. This is traditionally one of the reasons why we have regulations. If there are two ways to make a widget and one of them is slightly cheaper but involves poisoning everyone’s drinking water, the unfortunate reality is that well-meaning executives are likely to be outcompeted by other executives who are not well-meaning. You need a rule that says nobody can poison the drinking water.

The additional wrinkle in the case of AI risk is that while stringent environmental regulation may push widget-making to China, that doesn’t necessarily have a big impact on America — if China wants to pollute its own rivers in order to make widgets while we enjoy clean water, that’s not a threat to us. But if we make the whole American technology industry follow responsible practices for AI development while Chinese companies use the irresponsible ones and win the race instead, that’s bad.

This brings us to an obscure Dutch company.

As a policy columnist, I want someone to tell me what we should actually do about a neglected problem. But the frustrating answer on AI risk has long been “worry more,” without any concrete action attached. The fear was that not only is it hard to specify an appropriate regulatory scheme, there isn’t even an obvious way to regulate the relevant players, because they comprise the entire global AI research community.

Except, there may be a way: ASML.

AI research labs all use lots and lots of GPU chips, primarily designed by AMD and NVIDIA and manufactured in factories called “fabs,” with Taiwan’s TSMC leading the industry. But China has its own semiconductor industry with fabs and fabless designers, so while being cut off from the Western and Taiwanese semiconductor supply chain would hurt it, that’s not an insurmountable obstacle.

But the fabs for modern chips all use something called extreme ultraviolet (EUV) lithography as part of their manufacturing process. I won’t pretend to be able to explain how that works beyond the obvious point that it involves using extreme ultraviolet light to print the chips. Developing the EUV machines that the foundries use was very difficult and expensive. Clive Thompson wrote last fall that the Dutch company ASML spent $9 billion and 17 years developing its EUV technology — and as a result, it is the only supplier of EUV machines in the world.

ASML does face some competition, but its machines are the only ones that can produce the most advanced chips at scale. And the U.S. government is already putting a lot of energy into pressuring the Dutch government to block ASML exports to China. That’s an anti-China foreign policy initiative started under Trump and continued under Biden, and it’s not really about global AI risk. But it does underscore the existence of a potential AI development chokepoint that can be regulated, if not by the U.S. government then at least by Dutch and EU public sector institutions that are accountable to democratic publics and care what the United States thinks. The AI labs depend on chip designers and chip manufacturers, and the manufacturers in turn depend on ASML, so you could theoretically require that ASML sell equipment only to chip manufacturers who agree to abide by certain rules.

Of course, that analysis doesn’t yield specific answers about what the rules should be.

Unfortunately, after several days of chats at the EA Global conference with different folks interested in AI safety, it still seems to be the case that nobody is locked and loaded with an incredibly compelling regulatory plan. But focusing on the supply of EUV machines as a regulatory leverage point is the beginning of a constructive conversation about where policy could go.

One set of possibilities involves firmware with a dead man’s switch and/or remote shutoff features, which would buy time if something does go wrong and reduce the odds that it leads to catastrophe. And because large machine learning models are trained across many GPUs in parallel, another idea is to directly limit how many chips can be chained together. That’s something chipmakers are already interested in for business and price discrimination reasons, but it could have important safety benefits as well.
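To make the dead man’s switch idea a bit more concrete, here is a minimal sketch of the kind of logic that could, in principle, sit in accelerator firmware or a driver stack: the device keeps accepting work only while it receives fresh heartbeat authorizations, and it fails closed if they stop or if an explicit shutoff arrives. This is purely illustrative and every name in it is hypothetical; a real implementation would involve cryptographically signed messages and hardware-level enforcement, not application-level Python.

    # Illustrative dead man's switch: the accelerator keeps executing jobs only
    # while it receives fresh heartbeat authorizations. All names are hypothetical;
    # a real version would be enforced in firmware, not application-level Python.
    import threading
    import time


    class AcceleratorWatchdog:
        def __init__(self, timeout_seconds: float = 30.0):
            self.timeout = timeout_seconds
            self._last_heartbeat = time.monotonic()
            self._halted = False
            self._lock = threading.Lock()

        def heartbeat(self) -> None:
            # Called whenever a valid authorization arrives from the operator/regulator.
            with self._lock:
                self._last_heartbeat = time.monotonic()

        def remote_shutoff(self) -> None:
            # Explicit kill signal: halt immediately rather than waiting for a timeout.
            with self._lock:
                self._halted = True

        def may_run(self) -> bool:
            # The compute loop checks this before launching each new batch of work.
            with self._lock:
                if self._halted:
                    return False
                if time.monotonic() - self._last_heartbeat > self.timeout:
                    self._halted = True  # heartbeats lapsed: fail closed
                    return False
                return True


    if __name__ == "__main__":
        watchdog = AcceleratorWatchdog(timeout_seconds=2.0)
        watchdog.heartbeat()
        print("running:", watchdog.may_run())  # True: heartbeat is fresh
        time.sleep(2.5)
        print("running:", watchdog.may_run())  # False: heartbeats lapsed, device halts

The point of a sketch like this is simply that the enforcement lever sits with whoever issues the heartbeats, which is what would make it a hook for regulation rather than voluntary restraint.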

Nobody I spoke to is incredibly optimistic about any of this, but they are at least slightly optimistic, which is a big step forward.

From my point of view, it’s also an important step in transmuting this set of concerns from the realm of speculation into a legible policy domain. Genuine existential risks are very rare. But governments deal all the time with profit-seeking companies that are incentivized to pursue reckless courses of action that harm others, with the need to regulate them, and with the need to worry about leakage around the boundaries of the regulatory system.

Slowing the most conscientious actors in the name of safety could make the world less safe. But slowing everyone down seems pretty good.

While the technology industry is facing increased antitrust scrutiny in general, the relevant regulators should probably try to create a specific safe harbor for collaboration on AI safety.

For example, suppose a group of the more conscientious AI labs wanted to create a collaborative “safety code” club with meaningful standards, and then tried to get other industry stakeholders to bully the less conscientious labs into joining the club. We know from a variety of contexts that Alphabet, Meta, etc. care a fair amount about what their employees think about social issues, so if word got out that refusal to join the club was a big problem, they’d face pressure to do so. But corporate leaders like to come up with reasons why they “can’t” do things that in reality they basically just don’t want to do. And one such “can’t” floating around is the idea that firm-to-firm collaboration on AI safety would violate antitrust law, especially at a time when Lina Khan and others have the industry under heightened scrutiny.

Is that really true? I am skeptical.

But all legal questions are somewhat subjective, and antitrust enforcement, in practice, has a significant discretionary element. Declaratory statements from the FTC and the Department of Justice saying they would give safe harbor to collaboration on good-faith efforts to create and enforce a safety code would be helpful. Helpful legally, but also helpful in letting the more constructive and conscientious people win internal office politics fights.

And it relates to the broader analysis of the situation. Monopolization is bad because companies or cartels with monopoly power can raise prices above the competitive level while reducing output; that’s why we have antitrust laws. But given the generally very rapid pace of progress in this field and the risks involved, I don’t think somewhat less competition among AI labs should be a huge concern.

I know a lot of people (including many of you!) think this whole subject area is dumb.

That’s fair enough (though maybe read this for a contrary view), but the main thing I’d like everyone to consider is less the galaxy-brained philosophical underpinnings of existential risk than the actual tradeoffs here. These are not especially wild policy ideas. The downside is that even the people pushing for them don’t have incredibly high confidence in their efficacy, but by the same token, we’re not talking about giving up on global development and public health, climate change, or the dozens of other causes normal people care about.

We’re talking about impairing the business prospects of one large-ish Dutch company in a way that would probably require some offsetting concessions to the Netherlands on a different international economics topic, plus maybe making it slightly more expensive to buy AI outputs. To me, these seem like reasonable ideas, and their reasonableness doesn’t particularly hinge on nailing the exact estimate of the risks since fundamentally the costs are low.

AI advances are clearly going to have a larger and larger impact on the world over the next 10 to 20 years. It’s not acceptable, across a whole range of dimensions, to just leave that up to the whims of a handful of software companies. And that means we need to identify feasible focal points for regulation, which in this case turn out to be several levels upstream of the actual AI models.
