Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Sunday, 19 April 2026

Navigating the AI Hype And Hysterics

Back in the days when AI development meant working towards something like an artificial human, I had three rules to bear in mind when reading most popular articles on the subject : 

  1. AI does not yet have the same kind of understanding as human intelligence.
  2. There is no guarantee rule 1 will always hold true.
  3. It is not necessary to violate rule 1 for AI to have a massive impact, positive or otherwise, intentional or otherwise.

These were from the earliest days of LLMs, when there were many other types of AI floating around in the popular press. Most AI stories were about whether AI was or could be conscious, a question most serious people have moved on from completely (though not quite all). The "rules" made sense as a way to keep perspective, to independently remind myself that the author might well have gotten carried away or missed their own point.

Given the massive developments in the last few years, I think it's time for an update. There's also a nice piece on Clearer Thinking which I think has a sensible take, especially if you want more practical advice.

Personally, I still have more than a little sense of wonder about the whole thing. Regardless of what AI is used for or who controls it, I just think it's an astonishing feat to essentially teach a rock to think. I find it somewhat dismaying, much as with skeptics of the space program, that the realisation of one of mankind's oldest dreams is being treated so often with cynicism and fear more than wonder and enthusiasm. It's as though everyone is focusing on the corrupt capitalism in Jurassic Park more than they are the reincarnation of freakin' dinosaurs. Still, for this very reason, if I might vainly hope to rekindle some sense of fascination in the more cynical reader, I should also try to temper my own admiration.

Here, then, are my offerings. I'll try to keep them neutral-ish, but you've been duly forewarned as to my own bias.


0) LLMs are not human

This should be a default presumption. When I say LLMs are thinking, reasoning, or understanding, I am not saying they do so in an entirely human-like way. While I think it's legitimate to say they do all of these things, the sense in which this is meant must be very carefully defined or else presumed to be linguistic shorthand. But to be direct, LLMs are not conscious, have no will, no desires of their own, no inner awareness, no coherent long-term memory, no personality, function differently according to their current context window, etc. etc. etc. 

In some narrow but important ways, they probably are doing something closely analogous to human thinking. In the right conditions, those similarities are fascinating, and we probably shouldn't dismiss LLMs as a dead-end in intelligence more broadly. But in more general ways, LLMs are absolutely nothing like humans. I get very frustrated when people dismiss LLMs out of hand because of their differences from human cognition when the similarities actually are interesting, but nevertheless, I completely agree with the basic premise that a net of linguistic probabilities doesn't count in any way as "alive".


1) The imperfect nature of AI does not render it useless

And the useful nature of AI does not render it perfect !

Much the most common flawed argument is rotten cherry-picking : focusing entirely on the mistakes that AI makes, especially the silly ones, and thereby extrapolating that it can't do anything at all – or at the very least that it's completely untrustworthy. Less common among my feeds is the opposite view, that because AI is able to do some incredibly complex analysis very well, it can be completely relied upon in all things, or at least that its silly mistakes in simple problems are just not worth worrying about at all.

Both of these are wrong-headed. A better way to look at it might be for pessimists to say, "just because an AI isn't useful for me, it doesn't follow that it's of no use for anyone else". Conversely, the optimist's take would be, "just because I find AI useful, that doesn't mean that everyone else will necessarily do so as well".

The "jaggedness" of LLM-intelligence seems to cause people no end of strife. Sure, it can't understand some common sense things. So what ? All that should tell you is Rule 0 : that it isn't reasoning like a human. It does not tell you that its answers on more complex topics are therefore wrong. At most, it should act as a reminder to what's best practise in all situations : when something is important, you need to check any proposed solution from any source, rather than assuming blindly that the proffered answer is correct and immediately implementing it into your workflow.

A hilarious example : this case of ChatGPT showing blatant sycophancy in analysing a fart track as a serious musical composition. True, absolutely, it shouldn't do this. But to conclude that "your product sucks" is... I mean, I honestly don't understand this mentality at all*. 

* Although I do understand it as a joke, of course, and I laughed along with the ending. Here I'm criticising people who actually do think like this, whose numbers appear to be legion.

A much better, more nuanced take comes from this article on the use of AI in mathematics. Time was when LLMs couldn't even use a calculator, but that time is no more. Used correctly, they can be hella productive. It's worth reading that one in full – it covers the downsides quite nicely as well as the upsides – but the most interesting bit to me was the following :

The LLMs he spoke with inevitably made lots of mistakes, leading some mathematicians to dismiss them outright. Many researchers, he said, decide that if “everything it says is kind of wrong, I will just not talk to it.” But others — he puts himself in this camp — have a higher tolerance for “the pain of talking to this bullshitting model. They say, I can still get something out of this conversation; even if not every idea is good, I can ignore the bad ones and take the good ones.” And the mistakes, Schmitt noted, are weird ones: There is virtually no way that a person with any training in mathematics would make such a plethora of basic errors while also succeeding in coming up with subtle, original, and correct ideas.

Maybe LLMs annoy certain people because they're still assuming that the models must be human-like to be useful, or are simply not prepared to accept anything but the smallest error rate : either LLMs have to be fully human, or as perfect as a calculator, and anything in between constitutes an unacceptable uncanny valley.

I personally have always preferred to use the AI output as inspirational more than authoritative, and with that sort of mindset, even GPT-3.5 could be quite useful. If you're looking for a Truth Engine, go home, but then... why did you ever believe there was any such authority anyway ? Why would you assume that human experts have an error rate of zero ? 

I think there are a lot of double standards being applied here. Apparently, people can accept that other people might sometimes be wrong without dismissing them entirely, but such errors in LLMs seem to render them as useless junk for some reason. I find it weird. I also find the opposite techbro mentality weird, mind you : just because some mistakes seem trivial doesn't mean they don't matter at all, and just because they're very good in some situations, it doesn't follow they should be shoved into absolutely everything.


2) AI is used by real humans in the real world, including very stupid and very clever people

Following on from that, I think AI-skeptics should approach any AI article from the stance that a vast number of people do find using AI beneficial, and that they're not all deluding themselves. Conversely, those of us who are more optimistic should acknowledge that not every negative study is necessarily flawed, and that some concerns are motivated out of entirely sensible considerations based on human psychology rather than cynical views of the techbro ilk. The scale of AI adoption is vast, and it makes no sense to say that all these hundreds of millions of users aren't seeing any benefit at all, nor to dismiss the possibility of downsides from such a rapid, enormous uptake.

Two contrasting pieces : this one in The Conversation (a usually skeptical website) finds that most students aren't just using AI to do all their homework for them, but actively engage with its output and revise it according to their own needs. This is much my own approach : I almost always reword AI text (on the rare occasions I use it for text, which I dislike doing) to suit my own style, even if the AI version might sound better. Conversely, this piece in Ars Technica gives a detailed description of the problems AI has caused for teachers, with the temptation to simply go to an AI for the answer – even if the student then rewords the thing – being sometimes irresistible.

Quite honestly I don't know what to do about that. In school, we probably want to keep AI and maybe even computer use down to a minimum, with single-use devices like books, pencil and paper etc. being innately better at creating focus. It's always seemed to me that this trajectory is obvious : begin with training on the basics so you have a full, deep understanding of what's going on and can do it on your own, then gradually transition to using more and more learning aids like reference books and calculators and so on. In this way you move slowly into the real world, with a solid grounding in the fundamentals so you can make better use of all the productivity boosters everyone uses when they have to actually get stuff done for real. Keep exams device-free when necessary and that's all there is to it.

The difficulty with this is coursework. In principle, this is the best guide to how to use knowledge and skills in the real world. In the pre-LLM world it was relatively easy to set a task that couldn't be automated, and I personally always preferred this to examinations. Exams carry a weird kind of stress that isn't replicated in real life, whereas coursework can be done more at one's own pace. I preferred it and would have encouraged it to replace examinations as much as possible. But with LLMs in the picture, I honestly don't know what to do.

The only thing I can offer is to acknowledge that coursework is still a chore. I believe quite strongly in a work-life balance, and the need to continue working outside of working hours is something I always found depressing : even when it's something I enjoy doing, I dislike being compelled to do it during what seems like it should be my own personal time, even when I can largely set my own schedule. So I wouldn't want to knee-jerk to "students are cheating" here* : they're doing exactly what the rest of us do, reacting perfectly naturally by avoiding things they'd rather not do.

* Indeed, some of my students would probably benefit from using LLMs a lot more to polish their language.

Maybe the only solution here is, as with multi-functional devices, to take them away. Give students a good working environment where they can go at any time for coursework in which LLM-use is absolutely restricted... I don't know. 

Using AI definitely isn't something we should allow to be completely free-range in all situations for all people, but at the same time, it definitely isn't something we should strangle at birth either. While I think the suicide/murder stories are not something worth taking very seriously – there are hundreds of millions of users, and if AI were a causal factor in this, then violence would already have skyrocketed – there are definite concerns about over-use and the degradation of critical thinking and suchlike. I just think that while it might have negative effects on some, this does not automatically offset the positive benefits for others. Cherry-picking on either side isn't helpful.


3) This is the worst it will ever be

Finally, even quite recently I would have said that AI could never do a whole bunch of things it's now reasonably competent at. How far this is going to progress is a matter of debate, but while I find this video to be largely hyperbole, it has one outstanding point : if we're not good at intuiting exponential progress, we're even worse at understanding S-curve exponentials. That is, development follows an exponential trajectory, but only on average. Sometimes there can be protracted periods – months or more – when development plateaus or increases only slowly, but these are followed by short periods of enormous breakthroughs.
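To make that concrete, here's a minimal sketch (in Python; the step times and sizes are invented purely for illustration) of how a series of S-curves can average out to an exponential. Sample progress mid-plateau and it looks stalled; sample it across a step and it looks explosive :

  import numpy as np

  # Progress modelled as stacked S-curves (logistic steps) whose overall
  # envelope tracks an exponential. All numbers here are invented.
  def logistic(t, midpoint, height, steepness=2.0):
      return height / (1.0 + np.exp(-steepness * (t - midpoint)))

  t = np.linspace(0, 10, 200)
  # Each "breakthrough" roughly doubles capability : heights 1, 2, 4, 8.
  progress = sum(logistic(t, midpoint=m, height=2**i)
                 for i, m in enumerate([2, 4, 6, 8]))
  envelope = 2 ** (t / 2)   # a smooth exponential for comparison

  # Between midpoints progress barely moves; across one, it leaps.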

Thus far the pattern has held every time the nay-sayers have insisted that AI development is hitting a wall. The CEO of Microsoft (for whatever that's worth, which is not much but not nothing) says that there's no sign of this happening in the foreseeable future, while if you follow AI news, you'll know that there are plenty of other avenues under investigation for advancement beyond raw computing power.

The takeaway from this one is simple : to say that "AI can't do this and therefore it's useless to me" is a largely vacuous statement. There is absolutely no guarantee that it won't be able to do what you need in the (very) near future. Some things will likely take longer than others, but realistically, nothing is off the cards. Full automation even for the most complex of jobs looks like a real possibility, and some of the seeming hype is worth taking seriously. To pretend that we're still in the era of GPT-3.5 is not at all sensible, and to "hope" that development will somehow just stop here is scarcely any better.

Whether you think that this will be a good thing or not is another matter. If you think that LLMs thus far have been generally positive, presumably you think that further advances will be more of the same. Conversely, if you think they've been detrimental, you probably don't want to see them continue. Neither is correct : if AI use thus far has been generally positive, it does not follow that further developments cannot be problematic; equally, and conversely, if AI has been harmful thus far, it does not follow that future developments must inevitably be more of the same.

My point is that it's so easy to pick and choose whatever set of stories you want to support your position, whereas reality is likely more complex than either. Even if we can predict how LLMs will advance for at least the next few years, predicting what humans will do with them is another matter entirely : here I would tend to side with the cynics much more than the techbros, even if I think they're hardly going to usher in some kind of apocalypse. Reaching for the heuristic of "I dis/like what's happened so far, so we can expect more of the same in the future" is, however, simply not good enough, especially on this most non-linear of development trajectories.




That's my take, then. That LLMs aren't human doesn't mean they're useless; nor are they perfect, or their mistakes inconsequential. Neither the benefits nor the downsides can be taken to completely offset the other, and cherry-picking from either perspective is a trap which is perilously easy to fall into. Accounting for how people actually use them – both those who do and those who don't understand their operation – already gives a mixed bag, and predicting what comes next is only going to get harder, even though we can make quite a reasonable extrapolation as to future LLM performance. Critiquing LLMs for current shortcomings is valid, but it's worth getting some perspective and realising they have already made truly astonishing gains; insisting that current problems are unsolvable just lacks any common sense.

The future is coming whether we want it to or not, and trying to force everyone who benefits from it to put it back in the bottle – even for the sake of those who are genuinely badly affected by it – is just not a realistic expectation of humanity. LLMs are not truth engines, but they are certainly powerful thinking engines of a sort, which can no more be stopped than the rise of steam power. Whether they will have the same degree of impact I don't know – I still tend towards thinking probably not, or at least not yet – but one thing I am confident of is that dismissing them entirely is just not sensible at all.

Saturday, 18 April 2026

Review : Galileo's Daughter (II)

Welcome back to the concluding part of my review-summary of Dava Sobel's Galileo's Daughter. In part one I covered how Galileo was a prodigious polymath, something which often gets lost in the simplified "he was imprisoned for teaching that the Sun goes around the Earth" version of events. I also looked briefly at Galileo as a man, ending with how for him, the idea of a conflict between natural inquiry and holy doctrine barely made sense. 

In this second part, I'll begin with how these now largely estranged viewpoints didn't pose a problem for Galileo's own philosophy, and then look at how they came into such prominent conflict with the Church. I'll end with a look at whether Sobel's own interpretation – that the problem was not one of a clash of world views – is really correct.


3) Galileo the Philosopher

Galileo, says Sobel, believed that God revealed truths through two books. One of these was Holy Scripture, which could never be wrong, but could be misinterpreted and wasn't intended to be taken literally. The second was nature itself, "this grand book the Universe". Observation was truth and could never be wrong. Where the two appeared in conflict, ultimately nature must prevail, and scripture be recognised as symbolic :

"Holy Scripture and Nature are both emanations from the divine word : the former dictated by the Holy Spirit, the latter the observant executrix of God's commands."

Key here is that while Galileo might be somewhat morally flexible, he really seems to have believed this. Nor was this an unusual or even unorthodox view. While Galileo himself popularised the famous quote that the Bible tells us how to go to heaven, not how the heavens go... the original source was no less than a Cardinal ! Indeed, the book is replete with high-ranking Church officials endorsing Galileo's work with some enthusiasm, and until the moment of the publication of the famous Dialogue, it would be hard to see him as being anything other than on the best of terms with the Church fathers. 

Not everyone, it's true, was a fan, but the overwhelming motivation of his detractors appears to have been envy. There might be a smattering of religious fanaticism here and there, but Galileo himself dismissed this as mere "pretended religion", a means to an end, not a sincere belief at all.

This view of nature as the work of God led to a total rejection of the idea of intelligent design. He offers an argument for accepting uncertainty and rejecting the exalted status of mankind which is at once both theological and scientific :

"When I am told that an immense space interposed between the planetary orbits and the starry sphere would be useless and vain, being idle and devoid of stars, and that any immensity going beyond our comprehension would be superfluous for holding the fixed stars, I say that it is brash for our feebleness to attempt to judge the reason for God's actions, and to call everything in the universe vain and superfluous which does not serve us."

In other words, we might not understand why or how God operates, but that damn well doesn't mean we can't observe the world he's made, or reject it because it disagrees with our ideas about how it should be. He continues :

"I believe that one of the greatest pieces of arrogance, or rather madness, that can be thought of is to say, 'Since I do not know how Jupiter or Saturn is of service to me, they are superfluous, and even do not exist*'. Because, o deluded man, neither do I know how my arteries are of service to me, nor my cartilages, spleen or gall; I should not even know that I had a gall, or a spleen, or kidneys, if they had not been shown to me in many dissected corpses."

* While the majority response seems to have been one of awe, a few people had indeed refused to even look through Galileo's telescope. 

Practically minded as he was, he was also thinking of the philosophical implications of what he was doing and his methodology. While he was clear that experiment and observation had the last word, he used thought experiments as guides. For example, when considering whether heavier objects fall faster, he imagined two objects merging in mid-fall to ask whether their speed would then spontaneously increase. Although the notion of gravity evaded him, he at least considered the basis of relative motion, imagining that if you dropped a ball when pulled along on a moving platform, the ball would still keep moving forwards as it fell. In understanding why 45 degrees was the optimum angle to maximise the range of a projectile, he wrote :

"To understand why this happens far outweighs the mere information obtained by the testimony of others or even by repeated experiment."

I like this argument very much. When you understand the reason for something, you can predict what will happen in new situations. You've gained a fundamental insight into how the world works, even, in a sense, a measure of control over it. Pure data collection and observation are sometimes an underrated part of the scientific process (likened unflatteringly to stamp collecting), but ultimately this kind of understanding is the nugget of shining gold we're all searching for. This is the epitome of the scientific endeavour, not to merely quantify the world, but to understand it at the deepest level the human mind allows.
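As a small illustration of the kind of understanding he prized, here's the 45-degree result in modern notation Galileo never had – a sketch, not anything resembling his own working :

  import math

  # Ignoring air resistance, a projectile launched at speed v and angle
  # theta has range R = v^2 * sin(2*theta) / g. Since sin(2*theta) peaks
  # at theta = 45 degrees, that angle maximises the range : the reason,
  # not just the observation.
  g = 9.81     # gravitational acceleration, m/s^2
  v = 20.0     # launch speed, m/s (arbitrary)

  def projectile_range(theta_deg):
      theta = math.radians(theta_deg)
      return v**2 * math.sin(2 * theta) / g

  for angle in (30, 45, 60):
      print(angle, round(projectile_range(angle), 2))
  # 30 and 60 give the same range; 45 gives the maximum.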

Galileo even came up with an argument that would presage Wigner by several centuries, recognising that the Universe did indeed follow mathematical laws – in sharp contradiction to Aristotle's silly view that the world was just more complicated than that :

"Just as the accountant who wants his calculations to deal with sugar, silk and wool must discount the boxes, bales, and other packings, so the mathematical scientist, when he wants to recognise in the concrete the effects he has proved in the abstract, must deduct any material hinderances; and if he is able to do that, I assure you that things are in no less agreement than are arithmetical computations. The trouble lies, then, not in abstractness or concreteness, but with the accountant who does not know how to balance his books."

Here was a true, practical philosopher*. He saw no contradiction between science and religion and indeed saw his work as revealing the glory of God : "To imagine an infinite universe was merely to grant almighty God his proper due". There were, of course, those who disagreed. Some did so out of envy and jealousy, some out of sheer stupidity. And one, in particular, seems to have done so, if we have to reduce it to a single cause, out of an insistence on authoritarian control. Belief in a deity itself was never the cause of the strife, but the need to assert power over others has an awful lot to answer for.

* What's especially fascinating to me is how some other philosophical arguments of this and earlier eras are incredibly sophisticated, yet the practical, natural philosophy remained stuck in an age of astonishing stupidity. This is quite the reverse of the common tendency to proclaim that our intellect gets ahead of our morality. Maybe this is true today, but for the longest time, our practical understanding was incredibly slight and completely outpaced by moral philosophy – if not by how we actually treated each other in reality.


4) Galileo the Criminal

Pope Urban VIII was a cunt. He wasn't the only cunt, and he didn't start out being such a cunt, and he certainly wasn't one of the stupid cunts like what we have nowadays, but nevertheless, he had a nasty, vindictive streak, was thin-skinned, and didn't like to back down in the face of a challenge. He was, I say again, a cunt.

Galileo had a tendency to sail close to the wind, to push the boundaries of what the Church would allow. He also had a healthy fear of being ostracised, and in the face of censure and censorship, he usually backed down. He consistently tried very hard, however, to pre-empt this, discussing his ideas with high-ranking members of the clergy before publishing them and always (until the very end) doing everything through the proper channels. He was personally friends with many high-ranking members of the Church, most of whom praised him for his wonderful discoveries of all kinds, and on good terms with the Pope himself. So how did it all go so suddenly and catastrophically wrong ?

The initial dispute in 1616 was relatively mild. Heliocentrism wasn't deemed heretical, but the Church required it to be discussed as a hypothesis, not fact. The Church, in the wake of the Reformation, was not feeling terribly secure, but Galileo was given personal assurances that his minor infraction hadn't damaged his high standing : he just had to be a bit more careful in the future, but everyone still thought he was the bee's knees.

More problematic was the business of interpreting the Bible. Not taking Scripture as literal truth meant that both sides argued with each other as to whose interpretation was correct and, worse, over who got to interpret the interpretations. The Church was moving in the direction of declaring itself the sole authority on who was allowed to do this, which made Galileo's personal attempts at theology dangerous. Galileo, in contrast, was seeking to save the Church from itself, knowing that while the evidence for heliocentrism was not yet irrefutable – arguments were genuinely lacking, especially a theory of gravity for how it would all work – the time was coming when it would be proved with certainty, and maintaining denial would cause the Church to be ridiculed. People would, quite literally, lose faith.

But for the next 16 years or so, Galileo kept his work to himself and close colleagues. Only after much effort, seeking assurances from multiple clergy, and sending it through the Papal-approved publication censors and channels, did he submit a much more elaborate discussion. Initially, it was received with all the applause Galileo was accustomed to. Only when Pope Urban got wind of it did things go very quickly south.

What seems to have happened was a confluence of factors all making the Pope more of a cunt than usual. As well as the Reformation, he was feeling personally insecure due to accusations of not defending the faith with sufficient vigour. Galileo's detractors, recognising the early censure* of heliocentrism, sensed a chance to bring him low, and they seized upon it in full force. The Pope never read the 500 pages himself – all he heard was the filtered version from Galileo's enemies.

* With just two very small amendments and insertions required by the Church, the stronger term of "censorship" hardly seems appropriate. This was scarcely more than the actions of modern-day broadcast regulators, albeit applied to an arena which is out of bounds to the Inquisition's modern-day counterparts.

What the Pope heard was partly correct and partly an egregious distortion. It was true that Galileo was openly supporting heliocentrism as fact, not hypothesis. Technically he had fully complied with all the directives, but in practice, nobody reading it would ever come away with the impression that geocentrism remained a valid theory. At his trial, he bowed completely to the pressure – there was never any famous "and yet it moves !" defiance from a by now frail old man – and pretended that he actually fully believed in the Ptolemaic model, saying that maybe he'd once considered heliocentrism an interesting alternative but no more than that. He maintained that his work was not intended to promote the heliocentric model in any way. He presented a document that he believed gave him full license to consider it hypothetically, but none of this was enough to satisfy the more belligerent Inquisitors (although only 7/10 signed the final verdict) or the Pope himself... who was, of course, a cunt.

Why do I labour this point ? Because Urban never even read the document, yet doubled down on his wrath. He eventually had Galileo moved, not out of kindness so he could be closer to his daughter, but because his ostensible gaoler was actually treating him as more of an honoured guest, receiving many visitors for scientific discussions. Far worse, Urban had all of his existing (and future) works banned from further printing : a wholly mean-spirited and petty thing to do, an action of pure spite given that half of Europe had already read them.

The part that seemed to personally wind him up was that Galileo had used some of his favourite arguments to favour heliocentrism... or at least, this is how things are usually described, but really this is a bit too simple. The fuller version is more subtle.

In an earlier essay, Galileo had written an analogy that was pleasing to both scientist and theologian alike, a story of a man who goes around looking to understand the various ways in which animals produce sounds. He learns a great deal, but eventually finds one which doesn't fit any of his theories, and so realises that the omnipotence of God is greater than he can imagine. Man can learn much, the fictional investigator realises, but the full wonders of the Universe are beyond human comprehension.

This argument "delighted Urban", and so Galileo used it in the dialogue (as he was instructed to do) for the debater who prefers geocentrism. At the end, he concedes that the magnificence of God is such that he might be wrong, and that God might indeed allow a heliocentric universe. Together with the others conceding that they cannot offer certain proof of heliocentrism, this was enough for the document to pass muster for publication. It all feels very delicate and respectable, and not at all the case of "Galileo made fun of the Pope" which even today is how it's written in some popular descriptions. Rather it seems thoroughly intellectual and nuanced.

What marks Urban as a complete cunt is that he swallowed the argument that Galileo was mocking him without ever bothering to check. He was willing to find him guilty and sentence him to life imprisonment – Galileo was even threatened with torture – all on the basis of what other people said, despite having been on such personally good terms with the man until almost that very moment. He would have undone all his life's works without ever even trying to check if he was being fed the correct interpretation of what Galileo had actually written. Why he did this is, sadly, the missing piece of the argument, but the result, of course, is history.


Conclusions

The censorship of the Catholic Church was indefensible by modern standards, but its importance has been massively exaggerated by New Atheists. Neither Galileo nor Copernicus (who was never branded a heretic) was seeking any conflict with the Church or to undermine religion, but they did have a different approach to theological inquiry. For them, studying the natural world revealed the handiwork of God and was literally incapable of contradicting holy writ, with the Scriptures subject to human error and therefore subservient to observation.

For those like Pope Urban, it was not quite that the situation was reversed. It was more that he believed that God was not bound even by human imagination, that God could, if he wanted, commit something logically inconsistent like creating a rock so heavy he couldn't lift it. Whereas Galileo might see God as bound to create things which were ordered and harmonious, Urban imposed no such restrictions.

Galileo also offered an argument of cosmic economics. The Ptolemaic system offered Earth an exalted central status but at the same time made it degenerate and changeable; Galileo said that its rarity in the heavens – not its immutability or position – was what made it valuable. Morality and cosmology were linked, and so Galileo made clever arguments to overturn the Ptolemaic framework on this front too. But the weight of history was perhaps against him here, and worse was the Church's increasing zeal to control information.

And this, I think, is the really interesting discrepancy between Galileo and his detractors. It's not at all about science versus religion in the highest, abstract sense : belief in deities is orthogonal to belief in what those deities happen to do or what they require of humans, and says nothing in itself about how they chose to order the world. It might be fairer to compare science to academia and religion to the Church, human institutions pitted against one another rather than their ideals. And at the heart of it seems to be a strongly illiberal desire within the Church – not the religion – to dictate what is true.

I always remember hearing a well-known scientific contrarian declare, for no real reason at all, that "at some point you've got to make a decision", that is, to declare what the answer is so that everyone else can move forward. For the scientific consensus this is exactly what you mustn't ever do. Consensus is only successfully forged from independent findings, and only achieves its strength precisely because everyone has worked so hard to pull it down and failed. The idea that anyone gets to decide what's True, to tell everyone else what they have to believe, is anathema to the whole project. 

As scientists, we can only offer evidence and arguments; as humans, we can certainly try and persuade because we think that believing the truth as we see it will be of benefit to others. What we cannot ever do is to Declare Truth. Once we decide on truth by authority, we have lost the path of righteous truth-seeking and fallen into a need for power. The need to control others, for them to believe specific things even when the alternatives are completely harmless... that's what was at work with Urban. That's, I think, the "one trial of Galileo" that was really going on beneath the surface accusations.

It's what we see today among many fanatics, both of the religious ilk and also among the weirder segments of the political ideologues. To brand all the religious as unthinking dolts, as the New Atheists would have it, is to catastrophically miss the point : what we should be fighting against is not spiritual belief, but the illiberal idea that we must impose our views on everyone else – to control rather than convince, to declare rather than discuss. In this age where one political side appears to be degenerating into ever more incoherent fascist farce, maintaining this liberal view is increasingly difficult. But if we can't accept that other people are perfectly entitled to hold stupid but harmless beliefs, then what are we even struggling for ?

Thursday, 16 April 2026

Review : Galileo's Daughter (I)

There was only one trial of Galileo, although legends often speak of two... There was only one trial of Galileo, and yet it seems there were a thousand – the suppression of science by religion, the defence of individualism against authority, the clash between revolutionary and establishment, the challenge of radical new discoveries to ancient beliefs, the struggle against intolerance for freedom of thought and freedom of speech. No other process in the annals of canon or common law had ricocheted through history with more meanings, more consequences, more conjectures, more regrets.

Today, a look at Dava Sobel's magisterial Galileo's Daughter. First published in 1999, this was one of the books from an unread stockpile I brought back with me from my last trip home.

I can only find two real weaknesses with this book, both minor. It's a little weirdly undefined in its goals in the first couple of chapters or so, making the pace feel just a little bit off. It would have helped to spell out what the book was going to be about, especially because of the title. We hear almost nothing of the eponymous daughter until a hundred pages in, and even thereafter, she's not really the focus at all.

Not that this matters. What the book actually is, is a biography of Galileo, focusing heavily on his conflict with the Inquisition. The central thesis is that while this might have been a conflict between an academic and the Church, by no means was it a clash between science and religion. Along the way we get an in-depth look at Galileo the man, particularly drawing on letters from his daughter, but also plenty of theological and philosophical insights.

If she fails to give a clear mission statement of what the book is supposed to be, Sobel nevertheless manages to balance things perfectly. The book is not one word too long or too short. We get enough background to Galileo's life to understand the trial in its full context without losing focus; enough detail on the trial itself to understand how things proceeded without getting into unnecessary minutiae; enough of the aftermath to follow the consequences without losing sight of Galileo the man. We get all the core philosophical arguments, all the essential subtle differences between the views of Galileo and his detractors, presented in a clear and unbroken narrative flow. Overall, I think I have to give this one 9/10.

I cannot possibly give a summary that retains Sobel's narrative without still being many thousands of words long. So instead, let's try the usual thematic approach, gradually building up to the all-important trial which has become so (arguably) spuriously emblematic of the conflict between science and religion. In this first part, I'll look at Galileo's other achievements outside astronomy and his character as a person. In part two, I'll cover his philosophical approach to scientific inquiry and his conflict with the Church.


1) Galileo the Polymath

Perhaps the first thing that becomes apparent is that Galileo was no one-hit-wonder. To be honest this is something I should have been more aware of, but while I knew something of Galileo's astronomy, I had only vague impressions of his experiments on motion, and I knew nothing at all about his more practical skills.

Here Galileo is revealed to be a true renaissance man, a veritable polymath to rival Da Vinci except for his lack of artistic achievements. He possibly could have gone down this route – his father was a musician and his own drawings of the Moon are of extremely high quality – but he seems to have preferred to concentrate firmly on science and engineering. His more creative tendencies were reserved for his public outreach activities and tending to his garden. And this is no bad thing, since, unlike Da Vinci, he left little unfinished. His interests were wide-ranging, and yet he seemed to have no issue with taking years to complete his projects : not to say that he wanted things to take this long, but he wouldn't get distracted along the way. If he was ever derailed, then it wasn't by choice, and he'd almost always eventually pick up where he left off.

Galileo's most famous non-astronomy work is surely dropping cannonballs off the Leaning Tower of Pisa. Sobel is careful in her framing of this "legendary" experiment, but in the text as written, it sounds very much as though he actually did do this, and much earlier in his career than the more carefully-documented case of rolling balls down an inclined plane. Still, he wrote with some bitterness that while the larger ball did beat the smaller, the point was that the two fall at very nearly the same speed, not the wildly different speeds Aristotle had claimed : his detractors were completely missing the point. "Speaking of my tiny error", he wrote, they "remain silent about his enormous mistake".

He would tackle more minor but important problems throughout his life. In Venice he was granted a patent on an irrigation device, and while under house arrest much later in life, he prepared a practical demonstration to explain why a recent bell-casting had failed in spectacular fashion. For income, he was partly supported by the sale of his own geometric calculating compass : a sort of elaborate slide-rule for a wide variety of mathematical calculations. Whilst conducting his early observations of the heavens, he also showed, as a side project, that objects float because of their density and not their shape as the prevailing wisdom dictated*, thus overturning centuries of conventional wisdom by having some bloody common sense.

* There was an odd belief that ice was actually heavier than water, which is something so easy to test that it just seems bizarre that nobody ever checked it.

Perhaps the most impressive Galilean spin-off came later. During his experiments on acceleration with inclined planes, he developed standardised measurements to ensure he was making fair comparisons between data points. This was in itself a monumental breakthrough : first, because he came up with a practical method to reach the precision and accuracy needed, and second, because the concept of standardised measurements simply didn't exist. This was a profoundly non-mathematical world, and while the mentality of those in the distant past can be startlingly similar to our own, in other ways it can be shocking in its most fundamental differences. Galileo played no small role in changing that. And finally, he did this in his old age while in chronic ill health. So much for the idea that scientific revolutions are the province of the young... In his rigorously quantitative approach, Galileo would surely appreciate the cliché that age really is just a number.

Oh yes, and he also invented the microscope. Janus-like, he looked both to the world above and the world below, and if he spent more time on the firmament than terra firma, this seems to have been merely a matter of happenstance. Galileo appears to have been very much the right man at the right time : he had the practical skills needed to develop the instruments and the mindset to appreciate just how radical his discoveries really were. As he himself wrote with no false modesty :

"I render infinite thanks to God for being so kind as to make me alone the first observer of marvels kept hidden in obscurity for all previous centuries... four planets never seen from the beginning of the world right up to our day."

Through some admittedly blurry images from crude glass, Galileo's discoveries would change the world. Small wonder that not everyone approved. With the simplest of devices and an image quality that still left much to be desired, he was proposing nothing less than a total restructuring of reality.


2) Galileo the Man

But before we get carried away, it's worth a brief look at the character of Galileo. The figure that emerges from Sobel's telling is a genius and generally a good egg by the standards of his day. He seems to have been extremely generous to his family, sparing what little income he had (in his early days) to support them with enthusiasm rather than reluctance. 

But he was not much of a social radical. True, he wrote a rather bawdy poem bemoaning the fact that his scholarly toga wouldn't let him visit brothels (we've all had that problem), but he also put his daughters in convents and didn't marry his partner as this was just not the done thing for scholars at the time. Which also reveals that even this most Catholic of countries could, and did, easily find ways around the rules when it suited them. It was never a case of the Church having absolute control over people's lives, which is purely the stuff of myth.

The advantage of Sobel's biography is to set Galileo's work in the context of his personal life. In many scientific histories, it's easy to think of scientists as pathologically obsessed with research : this Galileo is, instead, a real person with real person things to do. We follow his surprisingly mobile career through institutes in towns across Italy, his chronic ill health (making it all the more impressive that he reached the age of 77), his love for his family along with his (very) occasional chastisements. We see him being delighted to send them fruit and exasperated when they turn up en masse for a while in his rather modest accommodations; the embarrassment of his daughter on learning that "buffalo eggs" were actually a type of cheese is still palpable after more than three centuries.

What emerges is, as you might expect, a complex character. He was famously entertaining and flamboyant, and could use this in the most obsequious terms of flattery when seeking a patron. He seems to have had a real need to be liked by people as well as to persuade them of his ideas, but also (like many exuberant promoters) was quite willing to offend. Although generally quite careful in what he wrote, he would sometimes deliberately tread on people's toes if they disagreed with him. He definitely seems to have had an arrogant streak that was perhaps the source of his rhetorical flair, but it was hardly unjustified : he really had seen things that nobody had seen before and contemplated them in ways undreamt of.

But he also seems to have been something of an egalitarian and favoured a meritocratic approach to education. He worked under the patronage system and adopted university dress codes only insofar as he could not avoid them, preferring more casual attire whenever possible. He wrote his outreach dialogues in Italian, not Latin, having a genuine belief that this would be of benefit to the common man and not just to the scholarly elite. If he does seem to nonetheless have enjoyed being one of the elite, he definitely valued learning wherever he found it. He does not appear to have been especially concerned with fighting culture wars or calling for any sort of social reform. He would help people when he could, but doesn't seem to have contemplated any structural changes to society – though he did have some very definite ideas about where the Church was going wrong theologically.

Perhaps his biggest contradiction was that he was willing to bend the rules and outright lie in the interests of the greater truth. In this we must allow two things : first, that most of us would probably try to save our own skins ahead of sticking to our principles; and second, that parts of the Church were most certainly corrupt. "No-one has spoken with more piety or with greater zeal for the Church than I", he wrote. Yet when necessary, he also called his own work, "merely a poetical conceit, or a dream... this fancy of mine... this chimera." His beliefs were sincere, but his approach was flexible.

Galileo knew he was right and the Church was wrong, and in his mind, suppression of his ideas was harmful to everyone – including the Church. If he had to lie about what he really believed in order to publish, then so be it. If he had to take the desperate measure of publishing in the Protestant Netherlands, and then proclaiming in transparently ridiculous terms that he had no idea how that happened, then he was surely justified in doing so. Sticking rigidly to his principles and telling the Church where to shove it might have been a heroic stance to take, but Galileo was no hero... and it would have been profoundly unwise. Galileo was right and the Church was wrong, but as we shall see, this is not at all the same as the claim that science and religion were at odds. In Galileo's own mind, such a thing was not even possible.




In part two, we'll move on to see how Galileo had no issues in reconciling science and religion, and indeed the very idea of a conflict was barely imagined. Nor was he alone in his theological happiness, with his views being widely shared amongst the upper echelons of the Church. Naturally, then, we'll conclude with a look at how, despite everyone being one big happy god-fearing family, everything went so catastrophically wrong for a man so used to widespread acclaim and admiration. 

Monday, 13 April 2026

Review : The Damned

In the midst of reading Neil Price's PhD thesis-book The Viking Way, I happened to stumble upon this 2024 movie by someone I've never heard of. I'm going to review this one for no particular reason, and there's every possibility this will be among the least consequential things I ever write.

Even so, fair warning : this review will contain detailed spoilers. There's just no way I can go on the rant I want to go on without giving the whole game away, so I won't try. Be further advised that this isn't a movie I actually care about in the slightest, and while it may appear otherwise in what follows below, this is only for the sake of hyperbole. In fact, despite everything, the movie left a lingering sense of "that was rather good, actually."


SPOILER-FREE BIT : In terms of production quality this is downright slick. Set somewhere in the icy north in the 19th century, the cinematography is excellent. The costumes are top quality, the acting is on point, and the pacing is just right – likely a bit slow for some, but right for the story in my opinion. The extraneous dialogue is enough to give the characters some depth without feeling tacked-on, and generally everything fits together pretty harmoniously. It's a solid setup for a horror movie : good atmosphere, sensible enough characters, a decent premise. Good job, team !

The plot revolves around a small group of fishermen led by a woman who inherited her husband's business. One day a ship goes down and they find a group of apparently desperate, almost rabid survivors. In their immediate rush to escape (fearing their own small boat will capsize), our protagonists are forced to kill one of them, who sinks into the blackness.

Throughout the rest of the movie, our characters are increasingly concerned and divided over the possibility that the dead man has become a draugr bent on revenge. Full marks for bringing this under-used bit of folklore to the world of cinema, and the resulting character conflicts are well done. Some are convinced from the off that it's a draugr, some initially aren't, but rarely does disagreement become anything unbelievable. Character interactions remain fundamentally normal, rather than the usual "let's immediately set everyone at each other's throats" approach which many movies are apt to take. No, here things might get argumentative, but there are no challenges to authority, no breaches of trust, no disagreements that go beyond the bounds of reconciliation.

It's a welcome change from the usual stock movie script. About the only minor thing I could pick up on in the preliminary stages of the film is that though it's certainly creepy, it's not really very scary. It's well-executed, but somehow needs to induce just a bit more fear in the audience.

What I haven't mentioned are the reasons our plucky band are concerned about the draugr. For that, I'm afraid I have to go into spoilerville. Stop reading now to avoid disappointment, unless you don't care about such things.








SPOILERS GALORE : We get a host of clues that there's a draugr afoot, not all of them definite. Actually the movie builds this theme up quite nicely, with initially just one older lady (Helga) convinced they're being haunted. Most of the rest disagree, but they respect her, and don't treat her with disdain. She puts up charms to guard against both spiritual and physical attacks, saying that the draugr can invade the mind as well as being a creature of flesh and blood that can bash your head in. People argue about this, but nobody does the obviously-stupid movie trope of removing the harmless bundles of wood that she thinks will provide a supernatural defence.

And we get some nice bits of draugr-lore and a slow, well-constructed development : one of the fishermen chants a creepy draugr-rhyme in his sleep, and the leader (Eva) begins to see visions of the draugr in her room and around the camp. In most cases, it's clear that these are only visions, with the draugr disappearing when interrupted. Only in the first sighting are things more ambiguous, with the creature appearing outside hunched in the snow, coming towards her but then disappearing from view once she finds someone else : here we can't tell if the draugr has literally vanished or has just run out of view.

There are much more concrete signs of the draugr though. After nearly exhausting their food supply, our unfortunate band finally have a good catch and their larder is resupplied – enough to feed all half-dozen of them for some considerable time. But the next morning it's all gone. Bits of fish, especially the heads, lie strewn about the landscape. Helga goes missing and is eventually found dead, frozen upright in a kneeling position. One of the coffins of the men they buried from the ship is found opened and empty (the lid placed back with the nails on top), with the others still containing bodies. And two of the crew go insane, one almost murdering another until he's forcibly removed by means of a hammer (resulting in his death) while another gets stabby and then slits his own throat. These men have been worried, but quite sane and amicable right up until their final moments.

At last we have the Final Showdown between Eva and the draugr. He confronts her in the house in a rather good creepy sequence where he slowly comes down the stairs and she hides under a table, and all we see is that his clothing is surprisingly smart. But then we see his hideous and ruined face, which she blasts with a shotgun and then burns the whole place to the ground with alcohol, knowing that only fire can stop a draugr.

Except... then we see a new sequence. It turns out it wasn't a draugr at all, just one of the Basque men from the ship that went down. She's actually just murdered an innocent man, who explains (in language translated for the audience only) that he's very sorry for stealing all the fish. All his horrifying attributes were only in her mind. Instead of a terrifying supernatural corpse sustained by hate and a love of cruelty, the real monsters turn out to be work-related stress and casual racism.

Look, this is a good idea. It's a clever twist on the more typical "rational people turn out to be dogmatic and wrong" plot, but the execution makes no sense. The ending, in the space of a few seconds, instantly undoes all the hard work the rest of the movie has done in bringing us to the final confrontation.

If this had been an episode of Uncanny, I'd have found myself on Team Believer. True, not everything here requires a supernatural explanation, and some events can indeed be explained by stress, hunger, and fear of the unknown. The problem is that there are massive holes which the movie never explains : tell me it's a magical monster and I'll believe it, but ask me to accept that the explanation is rational and I'm damn well going to go looking for rational explanations. Without suitably clever explanations as to how everyone could be so mistaken about things which seemed completely inexplicable, it feels like the writer is being extremely lazy. It would be a bit like a Scooby Doo episode ending with, "nope it definitely wasn't a ghost" and nothing else.

For example, how has this single man escaped the shipwreck ? Actually back up a bit, why were he and all his comrades lurking in a tiny, sinister, damp cave on the island rather than being on the surface trying to attract attention ? Why, after just one day on the island, did they rush the small boat by jumping into the icy water ? Sure, they're desperate, but this is asking a bit much.

Worse is how that one guy gets off the island to the mainland, a distance which appears to be several miles. If they had their own boat they'd have already used it, so this by itself shoots a massive hole in the subsequent plot. There is absolutely no reason to expect that any survivors could possibly have made it to shore.

Then there's the fish. Sure, he's hungry, but we see enough fish to feed a small group for many days at once. This guy has stolen all of them, single handed, and scattered their filleted remains across the land, has he ? Not bloody likely. And are any of the leader's visions of him real ? If so, then he did a pisspoor job of trying to appear non-threatening, even if hunger and fear are working in her mind to make him appear worse than he is.

Of course we also have the frozen-solid Helga. Sure, someone might wander off and get lost. But frozen in a kneeling position looks a lot more like "work of demonic entity" than "unfortunate case of misdirection". Similarly, I believe that people in this situation could go insane and turn against each other, but not in the way it's portrayed here. You aren't going to go into murder-suicide mode almost instantly : again, that feels far easier to explain as possession or malign influence than the flawed nature of humanity.

Worst of all is that missing corpse. Where's it gone ? Who's stolen it ? Not Helga, presumably, being an old woman. So the Basque sailor ? Why, what's he going to do with it ? Why has he un-nailed a coffin and not re-sealed it ? Where did he put the body ? Why just that one and not the others ? Did he have to open all the others first, and where did he get the hammer ? Did he politely put it back after his inexplicable grave-robbery, or did he just happen to have one with him anyway ?

I think what winds me up about this is that there's very little clue that it might not be a draugr at all. Not that there's none : there's a nice quote about how the living are always more dangerous than the dead, and it's clear that some of the draugr-sightings might just be hallucinations. But the overall trajectory is very clear, going from "maybe it's a monster" to "yes, it's definitely a monster, run like hell".

To be fair, there could be explanations for all of the misinterpretations. But what we needed was a few minutes (a montage flashback) to show how all this had come about : say, showing him disappearing in the snow when she saw him outside; his grief over his comrades and his need to exhume... well no, probably not that bit. Maybe there's an explanation for that, but it's not at all obvious what it is.

Without the movie even trying to explain the parts which made it clear there was a supernatural entity at work, it feels like the audience is cheated. The premise is clever and the setup is well done, but if the explanation is supposed to be rational, it's deeply frustrating when half the clues seem to pretty much exclude a logical interpretation. A draugr explains everything easily and naturally, whereas a lost foreigner really doesn't.

My compromise : he wasn't a draugr but a necromancer. Then we get a more satisfying supernatural element without an actual monster (so still giving us a plot twist) which yet explains everything a hundredfold more easily than what we're expected to believe. The stolen fish, murder, grave-robbing and insanity are all related to sinister rituals... It undermines the moral message, to be sure, but it makes the whole thing a heck of a lot more fun.

I get the intention to show that xenophobia is bad, that fear is literally the mind-killer, but I like monsters to be monsters. If you want to explore morality by way of the supernatural, or play mind games with the audience, then you need to have things fully thought through. As it is, it feels like trying to have it both ways, to say, "actually, there was nothing weird going on at all, hahah, how silly to think so" while at the same time claiming "these people acted in a perfectly sensible way given the evidence". These are plot holes that need to be seen to be sealed off, otherwise the movie's declaration of what happened feels incredibly forced and unbelievable. Ironically, the movie's important message that we're vulnerable to misinterpretations is undermined by its own wholly unbelievable interpretation of its own events.

Wednesday, 8 April 2026

Sam Altman Isn't A Nice Chap

Today, an excellent piece of investigative journalism from The New Yorker. 

Anyone remember those brief few days in 2023 when Sam Altman was fired by the board of OpenAI ? Of course you do. But now, more than two years later, we finally get an inside look into what the hell that was all about. Months of investigation have revealed in extreme detail just what was going on behind closed doors. If you're at all interested in these kinds of companies, then this is well worth your time.

In the most compact form possible, this amounts to : "Sam Altman isn't a paedophile, but he's definitely a lying scumbag". As far as a character assessment goes, this is devastating. With regards to OpenAI the company, the picture is considerably more nuanced, especially in terms of the actual researchers involved (Altman himself is only a businessman). To their great credit, the author here interviews everyone involved, including Altman on multiple occasions. This is by no means a hit piece, but what emerges is abundantly clear. Virtually all those involved say that Altman is an inveterate liar, and while Altman may rightly dispute the interpretation of some recollected conversations, the idea that everyone else but him is misremembering beggars belief. 

In PDF form this runs to 39 pages. No doubt other media outlets will produce a more distilled version, but in the meantime, what follows below is my own 2,200-word summary in pure quotation form. To keep the narrative coherent, some of these are rearranged from their original sequence, but no paraphrasing is used. Highlights are my own.


But first, there are two comments I should add to give my own perspective. First, the main concerns of the other employees and board members revolved around the issue of AI safety, especially whether it would be aligned with human values and suchlike. I personally believe that these concerns are – provisionally – not sensible. I've said many times that while we can legitimately call LLMs intelligent in a carefully-defined way, I don't believe they show the merest glimmer of consciousness. Safety in terms of "killer robots will be the death of us all" is, I believe, complete hogwash; LLMs have absolutely no will of their own, and if putting them in certain situations is a bad idea, that is entirely the result of human error : we can simply choose not to do that, and nothing stops us from pulling the plug. I said almost from the very start that the whole "this is too dangerous" thing was a genius bit of marketing spin, not a genuine concern on Altman's part.

The provision I make here is that safety and alignment may still matter in the lesser regard of AI not behaving as expected. This certainly does pose problems, and it clearly is important to have it behave in a basically-predictable way, as a useful tool rather than as an independent entity. But given a lack of any consciousness, I don't worry at all about a rogue AI deliberately trying to murder everyone of its own volition – something many of the top researchers apparently do genuinely worry about.

What matters here in terms of Altman's character is, then, not who's right about safety, but what he said he'd do about it. He lied about this over and over again, and while absolutely everyone tells lies from time to time, the degree to which Altman does this, routinely, is inexcusable. He's also a bullshit artist. It's fine to change your mind, but you should acknowledge that you've done so, as well as explaining how and why. Altman simply doesn't bother.

This leads me to my second point : people are being awfully naïve about this. The subreddits exploding with people saying they were ditching OpenAI for Anthropic never made any sense; the idea that the US government – a bunch of deranged fascist fuckwits – wouldn't be using LLMs was beyond belief, never mind that Anthropic were already deeply in bed with the US military (the article goes into Anthropic's misbehaviour as well, though to a lesser extent). Trying to lionise Anthropic for pushing back on a couple of points is, in my view, a classic example of naivety curving back on itself in an ouroboros of cynicism. And again, to pretend that a CEO would never lie is an absurdity, but nor should "CEO tells lies" be the take-home message of this piece. Rather it should be the more realistic and far more important point that Altman lies so much you can never trust him about anything. That's what should concern everyone, not the non-story that "CEO is a scumbag". That's par for the course.

That's more than enough from me. On, then, to the summary.




In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?

Most of Altman’s employees at Loopt liked him, but some said that they were struck by his tendency to exaggerate, even about trivial things. One recalled Altman bragging widely that he was a champion Ping-Pong player—“like, Missouri high-school Ping-Pong champ”—and then proving to be one of the worst players in the office. (Altman says that he was probably joking.) As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman’s “babysitter,” later told Keach Hagey, for “The Optimist,” a biography of Altman, “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraudulent startup.

If everything went right, the OpenAI founders believed, artificial intelligence could usher in a post-scarcity utopia, automating grunt work, curing cancer, and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an A.I. model could outmaneuver its overseers, replicating itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal. 

Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. He wrote on his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal . . . wipes us out.” OpenAI’s founders vowed not to privilege speed over safety, and the organization’s articles of incorporation made benefitting humanity a legally binding duty. If A.I. was going to be the most powerful technology in history, it followed that any individual with sole control over it stood to become uniquely powerful—a scenario that the founders referred to as an “AGI dictatorship.”

By September, 2017, though, Musk had grown impatient. During discussions about whether to reconstitute OpenAI as a for-profit company, he demanded majority control. Altman’s replies varied depending on the context. His main consistent demand seems to have been that if OpenAI were reorganized under the control of a C.E.O. that job should go to him. Sutskever seemed uncomfortable with this idea. He sent Musk and Altman a long, plaintive e-mail on behalf of himself and Brockman, with the subject line “Honest Thoughts.” He wrote, “The goal of OpenAI is to make the future good and to avoid an AGI dictatorship.” He continued, addressing Musk, “So it is a bad idea to create a structure where you could become a dictator.” He relayed similar concerns to Altman: “We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.”

By 2018, Amodei had started questioning the founders’ motives more openly. “Everything was a rotating set of schemes to raise money,” he later wrote in his notes. In early 2018, Amodei has said, he started drafting a charter for the company and, in weeks of conversations with Altman and Brockman, advocated for its most radical clause: if a “value-aligned, safety-conscious project” came close to building an A.G.I. before OpenAI did, the company would “stop competing with and start assisting this project.” According to the “merge and assist” clause, as it was called, if, say, Google’s researchers figured out how to build a safe A.G.I. first, then OpenAI could wind itself down and donate its resources to Google. By any normal corporate logic, this was an insane thing to promise. But OpenAI was not supposed to be a normal company.

That premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who was leading the company’s safety team, had helped to pitch the deal to Bill Gates, many people on the team were anxious about it, fearing that Microsoft would insert provisions that overrode OpenAI’s ethical commitments. Amodei presented Altman with a ranked list of safety demands, placing the preservation of the merge-and-assist clause at the very top. Altman agreed to that demand, but in June, as the deal was closing, Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) 

In the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize. Instead, he advocated for establishing an in-house “superalignment team.” An official announcement, referring to the company’s reserves of computing power, pledged that the team would get “20% of the compute we’ve secured to date”—a resource potentially worth more than a billion dollars. The effort was necessary, according to the announcement, because, if alignment remained unsolved, A.G.I. might “lead to the disempowerment of humanity or even human extinction.” 

The twenty-per-cent commitment evaporated, however. Four people who worked on or closely with the team said that the actual resources were between one and two per cent of the company’s compute. Furthermore, a researcher on the team said, “most of the superalignment compute was actually on the oldest cluster with the worst chips.” The researchers believed that superior hardware was being reserved for profit-generating activities. (OpenAI disputes this.) Leike complained to Murati, then the company’s chief technology officer, but she told him to stop pressing the point—the commitment had never been realistic... the superalignment team was dissolved the following year, without completing its mission.

By then, internal messages show, executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to “fine-tune” the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. 

Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, “We are past the event horizon; the takeoff has started.” This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. But in that post, called “The Gentle Singularity,” he adopted a new tone, replacing existential terror with ebullient optimism. “We’ll all get better stuff,” he wrote. “We will build ever-more-wonderful things for each other.” He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram.

Some people defended Altman’s business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical “doomers,” gripped by the delusion that the software they were building would somehow come alive and kill them. Yoon, the former board member, argued that Altman was “not this Machiavellian villain” but merely, to the point of “fecklessness,” able to convince himself of the shifting realities of his sales pitches. “He’s too caught up in his own self-belief,” she said. “So he does things that, if you live in the real world, make no sense. But he doesn’t live in the real world.”

Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”


Six people close to the inquiry alleged that it seemed designed to limit transparency. Some of them said that the investigators initially did not contact important figures at the company. Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity. “Everything pointed to the fact that they wanted to find the outcome, which is to acquit him,” the employee said. 

Given OpenAI’s 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that it would clear Altman but released no report. The company provided, on its website, some eight hundred words acknowledging a “breakdown in trust.” People involved in the investigation said that no report was released because none was written. Instead, the findings were limited to oral briefings.

Many former and current OpenAI employees told us that they were shocked by the lack of disclosure. Altman said he believed that all the board members who joined in the aftermath of his reinstatement received the oral briefings. “That’s an absolute, outright lie,” a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, “a need for another investigation.”

In a meeting with U.S. intelligence officials in the summer of 2017, he [Altman] claimed that China had launched an “A.G.I. Manhattan Project,” and that OpenAI needed billions of dollars of government funding to keep pace. When pressed for evidence, Altman said, “I’ve heard things.” It was the first of several meetings in which he made the claim. After one of them, he told an intelligence official that he would follow up with evidence. He never did.

“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”


Altman is not a technical savant—according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people’s money and technical talent. This doesn’t make him unique. It makes him a businessman. More remarkable is his ability to convince skittish engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities. When such people have tried to hinder his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he’s got what he needs. “He sets up structures that, on paper, constrain him in the future,” Wainwright, the former OpenAI researcher, said. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”

Even people close to Altman find it difficult to know where his “hope for humanity” ends and his ambition begins. His greatest strength has always been his ability to convince disparate groups that what he wants and what they need are one and the same. He made use of a unique historical juncture, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were terrified of bringing it into existence. Altman responded with a move that no other pitchman had perfected: he used apocalyptic rhetoric to explain how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Maybe this was a premeditated masterstroke. Maybe he was fumbling for an advantage. Either way, it worked.

Monday, 6 April 2026

The Wisdom of Russell Howard's Grandmother

Today's post resumes my unusual habit of amateur epistemology.

I've explored the definition of understanding many times, concluding that it's the knowledge of how things connect and interrelate. The more we understand something, the better we can predict how it behaves in novel situations. Fair enough as far as it goes, but I've always found the main issue with this is what happens when we reach some seemingly irreducible fact that we can't understand. We can always memorise a mathematical operator, however complex it might be, but that's not at all the same as being able to apply it in anger.

Here I can offer two explanations for why we reach such limits. The first I can suggest immediately, while the second will take a bit longer and be developed over the rest of the post.

The first explanation is simply hardware. It may be that we just can't hold more than a certain number of connections to some mental objects, just as we can't process things at arbitrarily high speeds. Perhaps the brain allocates only a certain volume to each topic, and when we reach its limit, we simply can't add anything more into that particular bucket. It might be that either we just cannot absorb anything more on a broad subject*, or that an apparently singular individual item just requires too many connections to too many others for us to properly understand it – in essence, a straw that breaks a camel's back**.

* Like when Homer Simpson forgets how to drive because he takes a wine-making course.
** Perhaps somewhat literally. I had several lecture courses which imparted negative knowledge, meaning I had less understanding of the subject than when I went in.

This, I think, gets us a long way, but still doesn't fully address the more fundamental limits of understanding. For that, I'm going to pursue the more philosophical issue of wisdom. Even Plato never really came up with a convincing definition of this, so you can't accuse me of lacking ambition.

It's probably helpful here to recap the last time I examined such issues. In that post I ventured four main ideas :

  • Analytical thinking asks, "what if this is true ?", exploring the full consequences of a proposition.
  • Critical thinking asks, "is this actually true ?", being a concern for accuracy rather than with exploring any consequences.
Intimately connected with these two main points were two other slightly more amorphous concepts :
  • Curiosity is the yearning for more knowledge. It can take different forms, such as the desire to learn about more and more topics (e.g. consuming endless Buzzfeed lists) versus the desire to verify existing claims, but the essence of it is the same.
  • Multi-level thinking is the ability to consider a position on different scales, e.g. whether each line of code is syntactically correct versus whether the underlying method is doing what it's supposed to be doing. Grammar Nazis versus fact-checkers, I guess.
All of these are closely related, and separating them like this is somewhat artificial... but, as we shall see, useful.

Which all leads to my proposed definition of wisdom : knowing the best thing to do

Hmm. That seems a bit trivial to bother with.

A slightly less compressed form might help : knowing how to carry out the best solution to a problem. But this is probably still too compressed to seem of any use, so let's deconstruct this more fully. It's been carefully phrased to include two key aspects. First, the wise thinker must be able to assess a proposed solution and realise if there's a better approach. But secondly, the alternative they suggest must be something they actually know how to enact. After all, there's no wisdom in realising that everyone would be happier (say) if you gave them all more money unless you have a workable scheme to raise the necessary funding.

This definition works, I think, for both moral and purely logical problems. To give a recent example of the latter : I gave ChatGPT a coding question, saying I'd found a particular method to solve my problem which should work, but it explained that there was a much better approach so it went off and implemented that instead (in this case it worked perfectly – and this was a problem I'd previously spent some weeks trying to figure out from first principles*). 

* For the interested reader : I wanted to use binary_fill_holes to fill in meshes in Blender with an integer grid of vertices. ChatGPT realised that there was a much better, though badly-documented, Blender-internal solution that was the ideal way to meet my objective. With hindsight, my own solution was actually pretty darn close, but the specific implementation was full of holes... pun intended, sorry not sorry.
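For the extra-curious, here's a minimal sketch of the scipy route I originally had in mind, with a toy boolean grid standing in for a real voxelised mesh – I won't attempt to reproduce the badly-documented Blender-internal method here :

    # Toy example : fill the internal cavity of a hollow voxel cube.
    # In the real problem the grid would come from voxelising a Blender mesh.
    import numpy as np
    from scipy.ndimage import binary_fill_holes

    # A hollow 5x5x5 cube : True on all six faces, False everywhere inside.
    grid = np.zeros((5, 5, 5), dtype=bool)
    grid[0, :, :] = grid[-1, :, :] = True
    grid[:, 0, :] = grid[:, -1, :] = True
    grid[:, :, 0] = grid[:, :, -1] = True

    filled = binary_fill_holes(grid)

    print(grid.sum())    # 98 : voxels on the shell only
    print(filled.sum())  # 125 : the enclosed cavity is now solid

Simple enough on a toy grid; as ever, the devil was in the details of the real thing.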

Morally, the obvious case is Jurassic Park. Sure, you could bring back dinosaurs, but that famously doesn't mean that you should. Or as Russell Howard says, sure, legally you can wake up your gran while dressed as Hitler, but that doesn't mean it's a good idea.

In some sense this could be described as supercritical thinking. It's concerned not just with all scales of the problem itself, but goes beyond that to consider its full consequences in context, to address whether the proposed solution would be a good approach or even whether the problem is one that needs solving at all. It's critical, analytical, and multi-level thinking all combined and expanded. Rather than knowing what sort of thinking to apply, as I tentatively suggested previously, wisdom might be better described as turning rational thought up to 11.

And this takes us back to the limits of understanding. Wisdom here might be in recognising that these limits simply cannot be broken, that trying to probe any deeper won't result in anything useful – that we should stop when we have a definition that's actionable and have shown that further investigation won't bring any more improvements*. Likewise, I could go further with this post to better define what I mean by "best solution". But this would open up an enormous can of worms that probably wouldn't help and would make everything much longer. The wiser course of action seems to be to stop here as far as the definition goes. What remains is only a few clarifications and some practical consequences.

* In this particular case, we end up trying to define words by using other words, and hit the limit of what pure language can convey. I have some further musings on this which I may or may not get around to writing up eventually.

Wisdom, like intelligence, is sometimes used as a synonym for raw knowledge. If instead we say it means knowing what's the best thing to do, then clearly this requires knowledge, just as it needs both critical and analytical thinking skills. But it's not the same as any of these. We can immediately see that the correlation could be imperfect, that someone might have a huge breadth and depth of knowledge but be unable to see the relevant similarities across different fields.

This strongly suggests that wisdom is a thing that can be taught... at least, to the same extent that knowledge can be taught. The behaviour of LLMs, as per the earlier example, might offer some clues here. For these, I strongly suspect that knowledge and wisdom are strongly coupled, that all you need to make them wiser is a larger training data set and a bigger context window – they'll consider more information from more fields of expertise at more and more scales automatically*. That said, you can't really "teach" an LLM anything except by fully retraining it, which in effect gives you a whole new model. All you can really do is instruct them.

* Though of course, the quality of the output still depends on the quality of the input training data as well as the prompt.

This is not much like humans, who can certainly be taught and indeed are (mostly) capable of learning from their mistakes. Indeed, wisdom requires knowing what to avoid just as much as when to proceed. Whereas in an LLM wisdom may emerge naturally as a function of size of training data, this is (I think) likely only a trivial result of that training data containing more and more wise behaviour. It's much harder to gauge whether this happens for humans through absorbing sheer volume of knowledge, though I suspect if we're only ever told, "these things are true, learn these parrot-fashion" instead of, "here's how to evaluate knowledge", the result tends to be someone who's neither wise nor critical. Our own training certainly does matter.

Of course, LLMs don't really have beliefs and opinions in the same way that humans do. An LLM is a mass of statistical information and probabilistic weights, with no real fixed ideas at all – certainly not between conversations in the same way that humans have some ideas that they hold as almost permanently fixed. But likewise, updating our own world view in response to new information is seldom easy, just as including it in an LLM isn't as simple as telling it something in a single conversation. The analogies are interesting just as much as for their differences as their similarities.

In any case, the ability to update one's world view is not the definition of wisdom, but it does follow directly from it. The wise thinker knows when a single fact is of limited consequence and when it may necessitate a paradigm shift. They are able to judge when the new information is itself likely wrong and when it's their own existing ideas which are at fault. They consider also the metadata of who said it and why – they do not evaluate it purely on its own merits*. So the ancient Greeks were right to value self-knowledge, as understanding one's own biases is essential in understanding how we respond to new data, but this isn't wisdom in itself, just as the ability to learn from experience is part of it, but not itself the definition of wisdom.

* The idea that ignoring the source of information is somehow actually the correct, rational approach is a curiously persistent and incredibly widespread error. 

Finally, it's obvious how the Jurassic Park scientists were intelligent but unwise. Russell Howard, by contrast, is much less intelligent but much wiser : he has a really dumb idea but realises that it would be a Bad Thing To Do. What of his grandmother ? Well, if she wakes up convinced that Hitler has returned, she's not very wise at all, but if she realises that this is wholly incompatible with her well-established knowledge that Hitler is long dead, then probably she's a lot wiser than her grandson. So c'mon Russell, put it to the test. For SCIENCE !

Friday, 27 March 2026

Review : Vital Organs

This book has been sat on my shelf for quite some time. A friend got it for me for Christmas and I just plain forgot about it.

I'll cut to the chase : this is the most 6/10 book that ever did 6/10. It is the very epitome of 6/10. If there was a Platonic form of 6/10, this book would be it.

It isn't bad exactly. But it isn't good either. It has enough in there to keep it interesting, but for a fairly large-print, double-spaced book with short chapters, less than 300 pages, and plenty of jokes, it's far more of a slog than it has any right to be.

I honestly don't quite know how the author did it. It's like she has the literary equivalent of poor comic timing : some of the jokes work well, but most fall flat. And all the sentences are short. Much too short. It becomes quite awkward. Longer sentences would help. It's like being out of breath the whole time. Constantly pausing is no good. It become exhausting. A real struggle to deal with. I wish she wouldn't do that.

Phew ! Seriously though, it's not a fun read. There's really no flow to the text at all, no development to anything, no structure. The author has a tendency to veer wildly and lazily into technical jargon with no or minimal explanation, the kind of writing I find very suspicious... it's like the author wants to say, "look, I'm not dumbing down !" and everyone else is far too polite – or scared of appearing ignorant – to ever say anything. It's not the right way to do outreach.

The main problem is that there's no obvious reason to write this book besides "collections of assorted incidents generally sell well", like an internet list that got massively out of hand. The author appears to be trying to do two main things : to give an unexpected history of body parts, and to give the reader a crash course in biology. The first part is okay – she has some interesting points about how we do more weird stuff with bits of bodies than we like to think – whereas the second is absolutely hopeless, and the combination is just as messy as any of the anecdotes. Not a single incident stood out as anything so memorable that I feel the need to recount it here.

Could it have been better ? Honestly, I think not by much. There just doesn't seem to be enough of a premise to the whole "people do strange things with bodies" idea to warrant a book, and no hint that there's any common underlying reason to it. If there were, things might have been more interesting. But as it is, the easy explanation here appears to be entirely correct : people are weird and like doing weird things. There's no pattern to any of it, nothing to generalise. It would have been far better not to collect these kinds of things into a single book at all, and to leave them as isolated incidents scattered throughout the vast corpus of historical records.

There isn't much else to say about a book that I don't think needs to exist, so I'll leave this as one of the shortest posts I've written in years.

Monday, 16 March 2026

The Logician's Swindle

What makes a puzzle annoying ? When is solving a problem rewarding, and when is finding out the answer just frustrating ? If we could answer this, we might get a long way towards making the world a happier place. Getting people to actually enjoy solving problems, rather than being pissed off at their opponents for discovering a flaw in their arguments, would surely benefit political discourse enormously.

I don't propose to try and answer all of this today. Instead, what I can do is address one particular aspect of the problem. I say that at least one major cause of puzzles being annoying rather than enjoyable is that you've been outright cheated, and that this happens far more often than it should.

Specifically, consider Newcomb's Paradox as described on Veritasium. The video begins :

You walk into a room, and there's a supercomputer and two boxes on the table. One box is open, and it's got $1,000 in it. There's no trick. You know it's $1,000. The other box is a mystery box, you can't see inside.

Now, the supercomputer says you can either take both boxes, that is the mystery box and the $1,000, or you can just take the mystery box.

So, what's in that mystery box?

Well, the supercomputer tells you that before you walked into the room, it made a prediction about your choice. If the supercomputer predicted you would just take the mystery box and you'd leave the $1,000 on the table, well, then it put $1 million into the mystery box. But if the supercomputer predicted that you would take both boxes, then it put nothing in the mystery box.

The supercomputer made its prediction before you knew about the problem and it has already set up the boxes. It's not trying to trick you, it's not trying to deprive you of any money. Its only goal is to make the correct prediction.

So, what do you do? Do you take both boxes or do you just take the mystery box?

Don't worry about how the supercomputer is making its prediction. Instead of a computer, you could think of it as a super intelligent alien, a cunning demon, or even a team of the world's best psychologists. It really doesn't matter who or what is making the prediction. All you need to know is that they are extremely accurate and that they made that prediction before you walked into the room.

I highlight certain parts because they feel crucial. To me, this is saying very explicitly, "don't think about this aspect of the problem, it's not important at all". Were this not so, I would object to how such a thing could be possible, and the details would certainly matter : was the machine tested over a diverse sample of people, or was there something particular about its subjects that helped its accuracy ? But no, this apparently isn't important, so whatever misgivings I have about free will and suchlike, I willingly surrender them for the purpose of the test. I put them aside, still fully expecting to be fooled (I suck at logical puzzles) but in some other way.

Having made that assumption, the answer is obvious. If the machine is essentially always accurate, I take one box. It knows, magically, that this box will contain a million dollars, and I walk out happy and rich and in search of a bank offering a good exchange rate to a proper currency.

But later in the video we get :

Here's how I think about the problem in a way that actually makes sense. You know that the supercomputer has already set up the boxes, so whatever I decide to do now, it doesn't change whether there's zero or $1 million in that mystery box, and that gives us four possible options that I've written down here.

If there is $0 in a mystery box, then I could one-box and get $0 or I could two-box and get $1,000, but there could also be $1 million in a mystery box. And in that case, I would get $1 million if I one-box or I would get $1,001,000 if I two-box. So, I'm always better off by picking both boxes.

Rubbish. Complete twaddle. You just told us that the machine is accurate and that we shouldn't factor how it works into our calculations, but in this way of thinking you cannot possibly ignore how the machine works. This is not even self-consistent ! By saying that the machine is essentially perfectly accurate, you've eliminated the very possibility of $1,001,000. That can only happen if the machine actually is inaccurate in some cases, which to my mind you've all but told us directly to discount.
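To spell out my objection with a bit of arithmetic (a sketch of my own, with p standing for the predictor's accuracy; the function names are just illustrative) : one-boxing pays p × $1,000,000, while two-boxing pays $1,000 plus (1 − p) × $1,000,000.

    # Expected payoffs in Newcomb's problem for a predictor of accuracy p.
    def one_box(p):
        # The million appears only if the predictor correctly foresaw one-boxing.
        return p * 1_000_000

    def two_box(p):
        # You always get the $1,000, plus the million only if the predictor
        # wrongly expected you to one-box.
        return 1_000 + (1 - p) * 1_000_000

    for p in (0.5, 0.9, 0.99, 1.0):
        print(f"p = {p} : one-box {one_box(p):,.0f}, two-box {two_box(p):,.0f}")

One-boxing wins for any accuracy above about 50 %, and at p = 1 the $1,001,000 outcome simply doesn't exist – the dominance argument quietly requires a fallible machine.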

This, then, is a swindle, and one common to various logical puzzles. "Don't think about this aspect of the problem", they say, only later to say, "Hah ! You should have thought about this aspect of the problem after all, you fool !". Right, so you expect me to think you're a liar ? How is that a fair test ?

The rest of the video is a perfectly decent discussion of free will etc. (Veritasium is one of my favourite YouTube channels), but the poor description at the outset makes the whole thing a mess. Having been told that accuracy was not an issue, I expected something else I'd overlooked to come into play. Naturally I overlooked determinism and all that because you told me to overlook it. The pettiness of it all annoys me quite intensely.

Don't worry, I'm not going down the free will avenue with this post. Rather, I just want to briefly outline that this kind of swindle is common to logic problems, and is itself one particular expression of a more general reason they're so often very irritating.


The closest similarity is surely the Monty Hall problem (the one with the prize goats). That one always confused the heck out of me, because people never properly explained that I should have been paying crucial attention to the host's knowledge, not to how many goats there are or how many doors. But any logic puzzle can suffer if you're not properly informed about what the key aspect of the problem is, or worse, if you're actively told to ignore it.
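Here's the sort of quick simulation I wish someone had shown me at the time (an illustrative sketch of my own, not taken from anyone's actual explanation) : a host who knows where the car is, versus one who opens a remaining door at random.

    import random

    def switch_wins(host_knows):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        others = [d for d in doors if d != pick]
        if host_knows:
            # The host deliberately opens a door hiding a goat.
            opened = random.choice([d for d in others if d != car])
        else:
            # An ignorant host opens either remaining door; if he accidentally
            # reveals the car, the round is void.
            opened = random.choice(others)
            if opened == car:
                return None
        switched = next(d for d in others if d != opened)
        return switched == car

    for knows in (True, False):
        runs = [r for r in (switch_wins(knows) for _ in range(100_000)) if r is not None]
        print("host knows :", knows, "switching wins :", round(sum(runs) / len(runs), 3))

With a knowing host, switching wins about two thirds of the time; with an ignorant one, only about half. The host's knowledge really is the whole game.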

Not that framing doesn't sometimes reveal something very interesting. Wason's selection task is fascinating in showing how the same people can have much more difficulty solving the same task if it's described slightly differently – especially when the alternative form is something perfectly familiar : asking which cards to flip to test "if a card has a vowel on one side, it has an even number on the other" trips most people up, while the logically identical "if someone is drinking beer, they must be over the legal age" does not. But there, the whole point is to study psychology. No deception is employed, no swindle pulls the solution out from beneath the solver's feet. The facts are laid bare and it presents a straightforward yet surprising challenge to many people who take it. No, framing is only annoying when it's done to deliberately thwart the participant.

There's also a common tendency for the puzzle-setter to declare the rational solution from authority, saying "this is obviously the correct solution because the alternative doesn't make sense to me". A classic example concerns people refusing small amounts of compensation when they would normally expect a much bigger payout. Time and time again we hear people declaring that accepting the small offer would be rational since they come out with a net cash gain. But to any sensible person there are a multitude of reasons why this would be an extremely foolish thing to do : accepting the initial offer may deny them any chance at the larger amount, and they may simply feel insulted and disrespected – going along with such behaviour is essentially letting the bully get away with it. It is only rational in an incredibly narrow and naive economic sense, and more broadly simply isn't rational at all*.

* Veritasium does this with a peculiar twist, openly acknowledging that the "irrational" decision of choosing one box is the more profitable. I find this goes deep into "what's wrong with you ?" territory.

Again, this is a sort of swindle, denying the opposing argument by forbidding debate rather than engaging with it on an equal footing. You thought things were going to be fair and above-board only to find out that they were anything but, that the answer had already been decided without you.

Another similarity is the pettiness. Veritasium didn't have to pull the rug out from under the viewer's feet any more than anyone has to accept that getting a smaller payout is somehow rational. 

Very occasionally, I've run public surveys to help me with my own research. I've tried to ensure the wording was extremely careful, including omitting details when this would bias the result. For example I once ran a public poll on how many groups of points – galaxies – people could see in a plot, deliberately not telling them what they were looking at. Some people objected that there wasn't enough information (e.g. what sort of scales they should be considering), and I sympathise that they might find this annoying. But for me this was the whole point, to gauge what people's natural reactions were : I wanted to know if they would instinctively identify the same groupings that appeared natural and obvious to me (most of them did, as it turned out). I needed to know if my additional knowledge was biasing me, or if the groupings I identified would be readily visible without this extra information. 

The point here is that there's absolutely no reason for misdirection. It's perfectly possible to account for this in a way that will give you a meaningful result to the question you're asking. Sometimes, this can only become apparent after the fact, but in those cases the participant should feel relieved, not annoyed. Annoyance only happens when the misdirection was unnecessary. 

A second personal example : group meetings back in my PhD days. These served the valuable purpose of getting the students used to dealing with tough questions. But they also turned the experience into a weekly grilling that made the whole thing quite intensely annoying... instead of having an enjoyable, low-stakes discussion about science, we had to deal with supervisors being deliberately over-critical. That we all knew full well what was going on didn't help in the least. It would have been fine if such sessions had been clearly demarcated and set aside as such, with regular meetings more about science for its own sake. Trying to pretend this was how scientific discussion should happen, though, was just unfair.

Again, there was no reason for the misdirection. This too was a sort of swindle. Oh, you think you're here to discuss your work ? You thought I was being harsh because I wanted to be ? Hah hah, fooled you ! The idea that maybe they could have just not done that was never raised.

On a grander scale, consider the problems with the alternatives to dark matter. This too feels like something of a swindle : proponents often raise objections to dark matter which are based entirely on the properties of the ordinary matter we can see. They make highly dubious inferences about how that visible matter must connect to the dark matter whose existence they're trying to disprove, then declare that the lack of a naively-expected correlation proves it can't exist. Some of these problems are obvious enough, but it's worth spelling them out at the high level because it's all too easy to lose sight of the forest for the trees. Once you start questioning the underlying assumption, and realising that maybe the connection isn't so direct after all, the whole thing often falls apart.

And in other arenas too we find possible swindles. As I've covered before, thought experiments become extremely annoying when changing a small detail would profoundly alter the result but the instigator refuses to consider any variation : no, you must focus on this aspect of the problem because I said so, even if my scenario is actually bunk. Just like insisting someone should accept a minuscule payout, it's disrespectful not to think the other person's opinion might have some value.

Likewise with analogies. An indirect analogy can be extremely powerful when the relevant aspect is sufficiently similar to its comparison subject, becoming thought-provoking in both its similarities and in its minor, extraneous differences. When an analogy is intended to be direct, though, the seemingly-extraneous details can become crucial, so expecting people to shut up and ignore them is not realistic. It's extremely difficult to focus on the "relevant" bit (usually declared by authority) when there are obvious deficiencies in the whole thing. Conversely, it does no good to pretend similarities don't exist when they do, or to overlook them on grounds which are actually minor details or only quantitative differences.


All this sets out some conditions for when puzzles become annoying, and gives us a rough working definition : The Logician's Swindle is the use of unnecessary misdirection from a position of unjustified authority.

This is similar to but not quite the same as the Magician's Choice. In the latter, we know we're being denied crucial information, misdirected, and otherwise deceived. We go in with eyes open knowing we'll almost certainly be tricked and often paying for the privilege of suspension of disbelief. We know we won't be able to solve the problem and we enjoy our failed attempts to work out what's going on.

The Logician's Swindle is altogether nastier. Here, we're supposed to have all the information we need to reach the "correct" conclusion, only to find afterwards that actually we don't – with the swindler often denying this for the sake of making us look foolish. And the conclusion itself may be open to dispute, but the proponent argues from a completely artificial authority that it isn't. Worst of all, "mistakes" can (though do not always) carry real-world consequences. In short, it's a scam : a discussion that should be in good faith but actually isn't.

And that's why I hate logic puzzles.
