Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 23 February 2026

The Truth About Utility

What makes a useful definition ? Originally I had a much more philosophically pretentious post semi-drafted for this, and I may still do that one separately. But various recent discussions have taken me down a very different path, one which might be more, err... useful. So let's start with this one and see if I ever get back to the nature-of-reality version in a future post. 

A good definition, I think, must surely be something which is widely applicable but also specific : describing things which happen frequently but not always. It should be readily distinguishable from similar counter-examples. Crucially, it cannot describe something which either never happens or always happens : it shouldn't forbid the thing entirely, nor make it inevitable and ubiquitous. It should describe a specific thing that actually sometimes happens, or is at least conceptually valid and distinct from other, similar terms. 

What I see people doing is trying to make things true or false by setting their definition up in such a way that it cannot ever fail, and that to me seems like a mistake. This doesn't mean we can't have productive discussions, but it does, I think, impose some extremely unhelpful limitations. 

Let's do this one by example. First I'll look at cases where people define things such that they can't ever happen, and then the reverse, related case of defining them such that they're inevitable. Both in my view are counterproductive mistakes. They are terminology problems but they prevent us from getting at what we really mean, which is usually much more interesting.


1) Defining things out of existence

If we define something with such precision, with such high standards, or in such a way that it involves a logical contradiction, so that it can't ever be true, then I submit that this isn't a useful definition at all. Furthermore, it's likely not what we really mean when we use the term in everyday discourse.


Malevolence : Plato and other ancient philosophers held that nobody would knowingly do evil. I forget who it was who described it explicitly (possibly several people), but the basic idea was that if you knew something was wrong, you couldn't possibly do it. You might still carry out an immoral action, but you'd be misjudging, thinking that the gratification you would get would outweigh any negative consequences there might be for anyone else. Alternatively, you might do so only because you hadn't realised the existence, extent, or nature of those negative consequences.

I think this is a deeply mistaken view of humanity. As per the link, people certainly carry out heinous acts in full knowledge of the consequences, sometimes this being the very reason for their behaviour rather than a side-effect. Or they may know but simply not care. But they aren't, I think, carrying out a mental calculus of where the balance lies. Even if they were, this would still make the word – or the notion of wilful harm – meaningless. The point for most discussion is that people cause each other harm sometimes because they want this to happen, not out of ignorance. Anything more beyond this rapidly leads into such convoluted nuances that the definition collapses into uselessness. 

Or to put it another way : "Sure, he committed the murder, but it wasn't out of malice : that's impossible, so he must've done it because he mistakenly thought his pleasure at the victim's suffering would outweigh their actual suffering". 

To me this renders the word useless, an attempt to define the thing out of existence. Surely that points towards this meaning not being what we truly meant : the important thing is that people inflict harm on others for its own sake, and inferring anything further is best avoided altogether.


Altruism : In the opposite case, my partner likes to say that nobody is really altruistic. Everyone acts, she says, because they believe there will be some benefit to themselves, even if that reward is purely emotional. In the extreme case, someone might give their own life to save others, not because they thought the value of the lives they saved outweighed their own, but because of the fleeting emotional reward they themselves would get from knowing the others will live.

This too I think is surely putting more on the word than it can bear. The point of altruism, I'd say, is that we sometimes value others more than ourselves and act to bring a net benefit to them even at the expense of our own status. Start demanding that we get no emotional reward at all and again the term has been defined out of meaningful existence. This makes it utterly useless, and surely, therefore, this can't be what most of us mean most of the time when we use it. I'll qualify this a bit more later, but that general-case point is the one I want to focus on.


Knowledge and understanding : I've covered the nature of LLM-outputs several times, most recently here but also e.g. here and (tangentially) here. More on those more directly in a minute, but a closely related claim is whether they can be said to truly understand anything. I think they can, in the carefully qualified sense that a) they have access to some form of information; b) they form connections between different pieces of information; c) they act in a logical, coherent way to predict how things behave in novel situations. Not perfectly, it's true, but more interesting by far is that they do it at all. 

Now for sure, this is not the same sort of understanding that humans have. But its qualitative similarities, in my view, outweigh and are far more interesting than the quantitative differences between silicon and neural understanding. I think it's just not at all useful to say that "meaning only comes from humans ascribing this to the output". This is so inevitably necessary that it adds nothing useful to the discussion : well who else was going to be reading the output then ? And if you define "understanding" to be only a human thing, then it's tautologous that no non-human will ever have it. That's cheating.


2) Defining things into existence

We can now see how the reverse is also true : if we define a thing as being completely unavoidable, we won't get anywhere.


Hallucinations : See the links in the previous entry, as this follows directly from the previous definition. I did initially agree that it was sensible to describe all LLM output as a hallucination, but I changed my mind some time ago. Given that they are now able to process complex (and multi-modal) information in a way that closely aligns with human expectations, and can in fact exceed our own predictive capacities at least some of the time, I no longer think describing their output as purely hallucinatory makes much sense. 

It's more useful, I think, to say they're hallucinating (in their own peculiar way) when their output has no connection whatever to the input data or prompt. This is much more analogous to human hallucinations in which we see things which aren't there. I would still agree, provisionally, that LLMs treat all information as having a much more similar level of validity than humans do, and undeniably they have some qualitative as well as quantitative differences from human thought. But they are very clearly not purely fabricating stuff all the time : more often than not, they're processing their inputs quite sensibly.

Importantly, the claim that all LLM output is a hallucination is consistent with the notion that they don't understand anything. I'm not claiming incoherency here : I'm claiming that these definitions should be discarded because they aren't useful, not because there's any inherent problem as such. The alternative definitions I've suggested are, I think, better only because they are more flexible and specific, allowing us to describe things in more detail, not because they eliminate any inconsistency.


God : Don't say I'm not ambitious ! The old argument that god is necessarily perfect and perfection necessarily exists... well, surely this is the ultimate case of trying to assert truth by definition. God is a perfect what, exactly ? A perfect square ? A perfect teapot ? Well, if a perfect teapot exists, where is it ? Could it be Russell's Teapot, somewhere beyond Earth's orbit ? Surely not, because if it were perfect, it would be in my hands whenever I need it. But it isn't, and therefore the perfect teapot clearly does not exist. 

And if even the perfect teapot doesn't exist, I see no reason to say that the abstract concept of perfection itself – a Spinozan notion of God – also has to exist in any sense beyond a mental construct. Clearly, I can imagine what I think a perfect teapot should be like, but that has no further existence outside my head. There's no reason to think that perfection itself is any different.

So here too, "perfect" in the everyday sense does not mean the same thing as St. Anselm would have it mean. Nobody uses perfect to mean "something which must exist" : indeed, we often use it to describe things which can't exist precisely because they're perfect ! "Platonic ideal" might be one of Plato's better ideas here, if only in the concept : we can conceive of better examples of chairs and circles and virtues even if we can't bring them into being. That's generally how we use the word, to describe something specific in aspect, not the singularity-like God of the Upanishads.

As far as the existence of God goes, and very much with my agnostic hat on, I think definitions here are of no help whatever. We can conceive of perfect examples of things we fundamentally do understand, like circles. But perfection itself ? That would require understanding all facets of existence, which as imperfect beings we simply can't do and never will. A general understanding of perfection is beyond our limitations : we can no more say that "god's perfection means he exists" than we can say what a perfect dinosaur would be like. The concept may simply be incomprehensible, or it may not even make sense at all.




That's my idea of a good definition then. It should be specific, flexible, distinct from alternatives, and describe things which occur at a finite rate (even if only conceptually). If a definition forbids its subject from ever existing, or would make it always true, then it has no use cases and should be discarded. Those kinds of definitions usually twist readily-comprehensible everyday meanings into something convoluted, unproductive, and useless.

I'll stress "usually" a little bit though as I don't say that the extremes don't matter at all. For example, what do we mean when we say we're know something or that we're certain of it ? Usually, that our own belief is well-formed and our confidence is beyond reasonable or routine doubt. We don't usually mean that we have found Truth Itself, that we can state our claim with literally zero chance of it being wrong, and that all unbelievers are evil and/or stupid. 

Like the case of purest selflessness, this kind of concept definitely does have value, but more in the philosophy classroom than the real world. The extreme cases let us frame our own actual beliefs and compare them to those of others, rather than providing useful, workable definitions in themselves. For example, we can all agree on what true certainty would actually mean, but to use the word more practically, we have to scale things back. That's where the discussions start to get interesting, trying to figure out the limits of our own underlying reasoning as well as those of the others in the debate. 

To a very large extent, I think the question of how we use a definition is very much the same as what we think it means at the most basic level. But then, others may have a different understanding. I don't always agree with the alternatives, but trying to figure them out is usually the fun bit.
