
“AI is whatever hasn’t been done yet.” – Larry Tesler

Laws for AI

by Benedict Evans (excerpted)

“The term ‘artificial intelligence’ or ‘AI’ has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” – White House ‘executive order’ on AI, October 2023


First, talking about ‘AI’ has the effect of detaching us from an analytic conversation about tangible engineering, product and policy decisions. It’s a catch-all term for whatever doesn’t work, whatever we don’t understand, and whatever might be scary. In the 1970s we called databases AI, and databases are a powerful technology with some superhuman capabilities that, if misused (China) or misunderstood (the Post Office), can ruin people’s lives.


Now the same happens with machine learning and LLMs: like databases, machine learning is, and LLMs will be, a general-purpose, low-level building block that becomes part of everything we do with computers. Some of those uses have scope to ruin people’s lives and some do not, depending on how they’re used and how people react to them, and trying to regulate ‘LLMs’ at whatever size seems like trying to regulate SQL or HTML. It’s the wrong level of abstraction.


Second, the solution to this is probably not to try to write a list of all the potential problems that might come from this technology and roll them up into one law. I sympathise with regulators and politicians who do not want to wait until it’s all over and too late, with the lessons of social media behind us. I also sympathise with politicians who need to be seen to be doing something, especially when some people in tech are declaring that the future of humanity is in their hands, and when there is a temptation to share stages with the scientists of the Exciting New Thing.


But the result is to muddle tangible, specific engineering and policy questions like misinformation or bias and discrimination together with vague theorising about bioweapons or AI take-off, where the best we can say is that we have no idea what the issue might be (but that it was probably already in Google a decade ago).


The hardest problem, though, is how policy can handle the idea of AI, or rather AGI, as an existential risk in itself. The challenge even in talking about this, let alone regulating it, is that we lack any theoretical model of what our own intelligence is, or of the intelligence of other creatures, or of what machine general intelligence might be. In the absence of this, we tend to stack up logical fallacies and undergraduate theology – appeals to authority (‘this expert is worried!’) or Pascal’s Wager.


Indeed, theology is the best comparison: the attempt to reason from first principles about something you don’t actually know anything about (Descartes’ Fallacy, so to speak). You can write a rule about model size, but you don’t know if you’ve picked the right size, nor what that means: you can try to regulate open source, but you can’t stop the spread of these models any more than the Church could stop Luther’s works spreading, even if he led half of Christendom to damnation.


Of course, if it turns out that these models really will scale to AGI, and that that is Bad, then we will regret not stopping it. But we don’t know what we’re trying to stop, nor how.