A couple of years ago I speculated about the possible trajectories for AI development in terms of social impact. A recent article has reminded me of this but in a rather irritating manner : it claims that LLMs aren't calculators. Well, obviously this is literally true on a superficial level, but the article goes considerably further. It claims that LLMs will have entirely negative impacts in a way that makes the comparison to calculators not just wrong, but invalid.
I strongly dispute this. I think the calculator analogy is an extremely useful one, albeit one that's most helpful only when properly defined and constrained. I think LLMs are indeed calculators in a very meaningful (if strictly analogical rather than literal) sense... if we ask only what life was like before and after the pocket calculator, and do not ask how calculators work, then the scope and intent of the analogy become clearer. I find the article an extremely frustrating read because it badly confounds these two different issues, among other things.
Before I go into this in more detail, let me here briefly revisit my own predictions.
Back To The Future
I claimed :
- The effects of AI wouldn't be extreme. That means no revolutions, no societal collapse, no mass layoffs, and equally not a total non-event. This is self-evidently correct so far, though admittedly it's still early days.
- LLMs were hobbled by censorship. Certainly true at the time, though this has at least been reduced these days. That said, I've moved on from my "generate all the crossover stories !" phase and usually use chatbots for actual work, so I can't really evaluate this one from personal experience. Galaxy evolution has never been sexy or offensive enough to be censored, and I have no interest in a chatbot either sexting or swearing at me to fill some bizarre emotional void in my life.
- Chatbots don't replace search engines. Again, certainly true at the time, when they were rife with inaccuracies and hallucinations, but this is definitely not true any more. Google's usefulness as a search engine is all but dead; AI is incomparably better for complex queries (and even quite a lot of simple ones). A major caveat is that chatbots now act directly as search engines themselves, providing direct links as well as in-context content. So they've replaced search engines in part by becoming them.
- AI is a useful aid, not a replacement for anything. This is still true, I think, and if anything even more true now than it was then. But recent results mean that I'm more prepared to believe it's moving towards a true replacement stage, even if I still don't believe this is on the immediate horizon.
- The most likely trajectories would be a sustained net positive improvement but possibly with a plateau. No exponential growth either in the technology itself or in the changes to society resulting from it : its effects will always be tempered by our innate tendencies to adopt things at a pace most of us can handle. It's still possible that we might hit a plateau soon, though recent improvements (see below) tend to discredit this.
Unfortunately, the author of the article in The Conversation (a perfectly decent website) appears to fall for the classic fallacy of taking the analogy too literally. I view the calculator as a metaphorical comparison for the impact of the technology; they appear to think it needs to function as an exact equivalence, if not be quite literally the same thing. I'm also going to ignore the considerable amount of tiresome invective running through the article : at best it's highly selective and one-sided, at worst some of it is simply wrong.
Not A Calculator ?
Their claims :
- Calculators do not hallucinate or persuade. This is true. Calculators and LLMs don't do the same thing at all. LLMs are certainly not unbiased truth engines and they can and do get things wrong. This is uninteresting; the important thing is whether they are accurate enough to be useful. I claim that they are, and that GPT-5 is a significant development compared to previous models. They are no longer just inspiration machines. They actually produce useable, rather than merely provocative, content.
- Calculators do not pose fundamental ethical dilemmas. True, but this miscasts the situation, conflating the choices of the companies with the technology itself. And the argument that the energy use of LLMs is "killing the planet", as I heard a recent conference attendee assert, is becoming increasingly tiresome. Even under older assumptions that an inquiry used 10x as much energy as a Google search, it was clear that this wasn't an issue* (see the back-of-envelope sketch after this list), and now we know this was an overestimate and efficiency has increased (the graphs here nicely show just how pointless worrying about inquiry energy usage – though not training – actually is).
- Calculators do not undermine autonomy. Well of course they do ! That's their whole point. You no longer have to do tedious things with numbers and can worry about the mathematical operations instead. The same arguments have been raised time and time and time again : television, the printing press, even writing... all of it supposedly undermines critical thinking and turns us into morons. All nonsense. What matters is what you read, what you watch, what questions you ask... all valid concerns, but nothing at all unique to LLMs.
- Calculators do not have social and linguistic bias. Well, no, but this is like saying that we should chuck out all of our history just because we don't like it any more. If this is an argument against LLMs, it's also an argument against reading. I really don't see the point of this one at all.
- Calculators are not ‘everything machines'. Yes, obviously, but this seems plainly unfair and circular. The whole point of an AI is to be able to deal with a broad set of inputs; if you've got something against them on these grounds, you're never going to be happy. Essentially you've defined them to be useless because they're too useful, which is silly. That said, I do very much like the point made in this article that single-purpose devices can be better for creativity; of course, both everything machines and one-trick-ponies have their place.
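As a quick illustration of why inquiry-level energy use isn't the problem, here's a minimal back-of-envelope sketch in Python. Every number in it is an assumption for illustration only – the commonly-quoted ~0.3 Wh per Google search, the older "10x a search" figure for an LLM query, a fairly heavy 30 queries per day, and a round ~3000 kWh per year for a household's electricity – not measurements taken from the article.

```python
# Back-of-envelope : per-query LLM energy versus everyday household usage.
# All figures are illustrative assumptions, not measurements.

GOOGLE_SEARCH_WH = 0.3                    # assumed energy per web search (Wh)
LLM_QUERY_WH = 10 * GOOGLE_SEARCH_WH      # the older "10x a search" estimate
QUERIES_PER_DAY = 30                      # a fairly heavy personal usage, assumed

daily_wh = LLM_QUERY_WH * QUERIES_PER_DAY
yearly_kwh = daily_wh * 365 / 1000

# Assume ~3000 kWh per year for a household's electricity, a round figure.
HOUSEHOLD_KWH_PER_YEAR = 3000
fraction = yearly_kwh / HOUSEHOLD_KWH_PER_YEAR

print(f"Daily chatbot energy : {daily_wh:.0f} Wh")
print(f"Yearly chatbot energy : {yearly_kwh:.1f} kWh")
print(f"Fraction of a household's annual electricity : {fraction:.1%}")
```

Even on the older, pessimistic estimate this comes out at roughly 33 kWh a year, around one percent of a single household's electricity – which is why the per-query worry (as opposed to training) never really gets off the ground.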
The Calculator Moment
So I think the calculator analogy is a great one. LLMs form coherent sentences (with sufficient training) just as calculators accurately manipulate numbers. Granted, coherency is not accuracy, but even inaccurate statements can be useful, sometimes a good deal more so than correct ones ! Moreover, even if accuracy hasn't reached calculator level – it never will until we have infinite knowledge, so this is a foolish expectation – it's already high enough to be useful, even leaving aside significant recent improvements.
And that's where I think the analogy has its greatest value. LLMs have now reached, or are reaching, comparable levels of usefulness, affordability, and accessibility to pocket calculators, so the comparison helps us to consider what we're going to do with them. As The Conversation quotes at the start, they're just tools. Not necessarily always perfect ones, but then, what is ? The analogy is scarcely less valuable because it isn't a direct equivalence.
The old “what if you don’t have a calculator with you” mentality was wrong-headed when I was growing up and it’s wrong now. I will always have a calculator with me. There’s very little use in being able to do accurate mathematics in one’s head for its own sake. It might, I fully concede, be good for mental self-discipline and critical thought more generally, but I could never do long division in school and I still made it in a career which involves no small amount of maths.
So how do we deal with this new reality ? Probably, I think, in largely the same way as we did (or should have done) with calculators. You don't try and put the genie back in the bottle, but you also don't count on the genie to give you infinite wishes. Calculators and LLMs do undermine autonomy, but this can be harnessed positively.
The solution is simple : gradual access. There are some things you just have to know; you don't give five year olds calculators. You shouldn't be letting children loose on LLMs either, but teaching them the basics first. Later, you introduce them into lessons slowly and with monitoring. Even in higher education you still wouldn't replace lecturers with chatbots. You'd continue teaching students both the benefits and the downsides as long as possible, just as we should be teaching people about the media already. Just because we shouldn't fully trust something doesn't mean we either can or should discard it entirely : after all, the front line of research is where results are most tricksy, but it'd be utterly stupid to stop doing research because people made mistakes.
This is not to say that LLMs won't change things. They will. Coursework, in particular, might have to end, because the temptation to ask the AI would likely be irresistible. But the adaptation may not be as difficult as it would seem : examinations in which calculators are forbidden are already a thing, so controlled conditions in which AI access is denied are hardly asking for a sea change.
More difficult may be accommodating LLMs within professional research. "I was overwhelmed by the power of this place... but I didn't have enough respect for that power, and it's out now", to quote Jurassic Park. On the other hand, there are plenty of things I don't want LLMs to do. I see no point in using them for writing text in my own papers because then it's not me expressing myself. I don't want them to write my blog posts for the same reason (here's an experiment in having GPT-5 write an outreach piece on one of my papers – it's quite passable, entirely accurate, but it's not me).
So have a little faith in the future. Undermining the need to do things we don't want to do doesn't mean we'll stop wanting to do other things where we can challenge ourselves productively. If anything, I suspect the opposite is true.
My guess is that a lot of what look like AI-hate pieces actually stem from angst. AI has too many similarities to previous technologies to cause us any really deep, pure horror or fear (or indeed joy) over what's likely to happen next : for those we'd need something genuinely new and unpredictable, and I don't think this version of AI – unmotivated, controllable, emotionless – comes close to fitting the bill.
What we have instead is uncertainty over the specifics of how things will play out – in part from an entirely justifiable cynicism not over the tech itself but over those who are marketing it. That's healthy. Pretending it's something we can't act on, with no recourse but to just hope everyone stops using something at least as useful as a pocket calculator, is not.