Model Culpa

What happens when a self-driving car has an accident? Who is at fault? Is it the driver who surrendered control, or the engineer (or organization) who designed it?

Neither. That is the improbable answer, according to someone who holds an actual position of influence over policy in this space.* If the “AI” is a superintelligence, smarter than the humans who built it and the humans behind the wheel, then it should be considered culpable. “How might an inert matrix of floating-point numbers face repercussions?” I asked. “Shall we jail it, or perhaps demand payment of a fine?” Of course not; that would be nonsensical. Instead, the repercussions a large language model ought to face involve “turning it off”.

While this position on AI culpability may seem attractive at first blush, perhaps it deserves a little scrutiny.

The (already existing) Artificial Super Intelligence

Imagine, for a moment, a tool so powerful that the greatest minds on earth do not even bother competing with it: a narrowly focused domain expert that produces output more reliably accurate, and far faster, than the world’s leading professionals. My friends, I’m here to tell you it already exists. It is called a calculator.

A calculator may be considered a kind of superintelligence (and, while there are important qualifications to be made concerning ASI, the calculator serves as a suitable analogy). Now suppose that an accountant is using a calculator to compute the tax bill of a multi-billion-dollar organization. Unfortunately, at the precise moment the calculator produces its result, gamma rays from the sun flip a bit, introducing an error of multiple orders of magnitude (technically possible, though unlikely). The organization pays only a small fraction of the taxes due and goes on its merry way. Of course, the IRS pays less attention to gamma rays and more attention to who is supposed to be submitting payments, so they notice the anomaly. Who is at fault?
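
Before answering, it’s worth confirming that the premise isn’t fantasy. Here is a minimal Python sketch, with a tax figure of my own invention, showing how a single flipped bit in a float64’s exponent shifts a value by hundreds of orders of magnitude:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE 754 float64 representation flipped."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (result,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return result

tax_due = 4_200_000_000.0          # hypothetical multi-billion-dollar liability
corrupted = flip_bit(tax_due, 62)  # a "gamma ray" clears the top exponent bit

print(f"owed:      {tax_due:.6e}")    # 4.200000e+09
print(f"submitted: {corrupted:.6e}")  # ~2.3e-299 -- a small fraction indeed
```

Exponent bits are the worst case; a flipped mantissa bit would merely nudge the result.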

I would say it’s probably not the engineer who built the calculator and characterized its tolerance for gamma rays. I also don’t think it’s the accountant who submitted the return without checking their work. And I certainly don’t think an organization should ever have to take some sort of responsibility. No, it’s probably the calculator’s fault, and it should be turned off.

The Model of Intelligence

Fortunately for Tesla, because “they don’t use Lidar”, their self-driving intelligence will “never be a super intelligence”. The cameras on a Tesla are, of course, “vastly inferior to human eyesight”. So how could Tesla’s self-driving ever be considered culpable? That’s on the driver. But other cars that use better sensors… they might indeed be candidates for culpability. At least, this is what I’m given to understand by industry experts.

So the sensors are critical to the intelligence. Is the architecture important, too? Apparently yes, but only once it is sophisticated enough to be “super”-intelligent; that’s when it becomes culpable. I suppose next-word predictors are sufficiently sophisticated, since ASI is supposedly knocking on the door. To be fair, that wasn’t said outright. I suspect, however, that if you’re using an RNN, you’re safe, provided you don’t have any attention layers. That’s when culpability starts popping up…

A Note to the Reader

There is no kind of AI that makes decisions. There are engineers who decide to wire probability spaces to kinetic devices that move when numbers get multiplied together. But, just to reiterate, it was an engineer who made that decision. LLMs are quite good at converting natural language into commands you can run in your terminal, but if you decide to wire the two together, that is your decision, and it has nothing to do with any agency of your large language model, irrespective of whether you called it an “agent”. So not only is it bonkers to suggest that AI can ever be assigned culpability; the suggestion betrays a woeful misunderstanding of how machine learning actually works.
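
To make that wiring concrete, here is a minimal Python sketch. The llm_to_shell helper is hypothetical, a stand-in for whatever model API you like; the point is the wiring, not the model:

```python
import subprocess

def llm_to_shell(request: str) -> str:
    # Stand-in for a real model call; a production version would query some
    # LLM API. The canned output below is purely for illustration.
    return "df -h"

command = llm_to_shell("how much disk space do I have left?")

# This next line is the decision that matters. An engineer wrote it, chose
# shell=True, and chose not to review the model's output first. The model
# emitted a string; a human gave that string the power to act.
subprocess.run(command, shell=True)
```

Call the loop an “agent” if you like; the line that executes is still yours.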

Maybe it would be better if AI did make up policies?

* While naming and shaming has merit at times, I don’t want the considerations of civil conversation to inhibit my sense of the appropriate level of sarcasm in engaging these ideas.