Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Thursday, 3 July 2025

Review : The Brain

I interrupt my mythology book reviews to turn to the completely different matter of neuroscience.

David Eagleman's Livewired was one of my favourite reads of recent years. He put forward a genuinely technologically optimistic study that has absolutely sod-all to do with that stupid manifesto, and felt like a much-needed counterweight to the equally stupid "all technological innovation is crap" cynicism which seems to pervade social media. He also explored things philosophically, claiming that it is actually possible for us to experience new qualia. 

I've mentioned Livewired a few times over the years and it's my shame that I never did a review of it. I think I'm going to have to re-read it to do it justice.

Anyway, his earlier, shorter book The Brain was an obvious read. The major theme of Livewired was the brain's tremendous adaptability, how it could repurpose itself to accomplish new tasks with old hardware. Eagleman did a very thorough job of describing just how far this could go, setting out both when surprising levels of flexibility were possible and when limits would be reached. 

There are plenty of obvious overlaps between these two books. Since The Brain covers these much more concisely, perhaps by covering them in brief here, I can shorten my eventual review of Livewired... which would otherwise risk just rewriting the whole damn book, 'cos it was bloody good.

The Brain is a very good, very short read. I give it 8/10 for being such an excellent little compendium on how the brain works. It doesn't tie itself down in unnecessary caveats or tangents but doesn't skip the uncertainties either. It just gets on with things, and so will I.


Semi-permanent plasticity

Brain development, says Eagleman, continues roughly up to the age of 25 or so. Sure, the earliest years are important (in Livewired he makes a bit more of this, noting that certain very basic skills are almost impossible to develop once out of childhood, e.g. feral children will never fully integrate into society), but much happens throughout adolescence and early adulthood too. New connections are formed and discarded as they prove useful or unhelpful (I believe in Livewired he describes this as a process of babbling).

Teenagers' brains, he says, are literally different from adults', just as children's are. They're socially awkward risk-takers not just because of lack of experience, but also because of their neurology (though of course, presumably experience plays a large role in shaping their neural structures). They are, paradoxically, emotionally hypersensitive but also prone to seeking out highly emotional activities, and unable to control their emotional responses as much as adults are. In one experiment they got physically anxious when asked to sit in a shop window, yet at the same time they'll send each other naked pictures of themselves*. They are quite literally immature.

* Not literally at the same time, you understand.

But that's not the end of the story at all. Even if all the major wiring is in place by around 25, substantial redevelopment is still possible indefinitely afterwards, at least for re-using old networks in new ways. Sufficient practice at a skill can cause macroscopic changes in the brain basically at any age; in Livewired he says this can occur in a matter of hours.

This plasticity, claims Eagleman, is unique to humans. Most animals rely much more on hardwired instincts*. Human adaptability gives us the tremendous advantage of immense versatility, which far exceeds the penalty of our very long development period. Continuous practice may even help prevent the onset of dementias such as Alzheimer's, not by fortifying the connections so much as by creating redundant ones as backups. As some connections fail, even the ageing brain is able to repurpose old ones to do different jobs. The degradation can't be stopped but its effects can be reduced.

* I wonder, though, how much research has been done on animal brains in this regard. After all, animal intelligence has continuously exceeded our expectations.


Decision time 

Plasticity also plays a role when the brain has to make a choice. The brain acts, says Eagleman, as a series of competing networks each vying for supremacy, although I rather prefer his other analogy of a parliament. The idea that we have many voices inside of us is quite real, with different networks continuously firing until, finally, the brain acts to "crush ambiguity into choices" and makes a decision.

Exactly how this happens is still somewhat unclear. There's a reward system, in that the brain favours networks which have previously made predictions that have been validated : if one mode of thinking has successfully predicted a good outcome in the past, that network will be favoured. Conversely, those networks which don't give predictions in agreement with reality are downvoted. The brain, then, acts as a sort of prediction engine, continually checking its findings against different models*. Even abstract concepts can act as a reward or punishment, but immediate, tangible effects tend to override all these. Hence it's easy to avoid doing the things we should do in favour of something else (eating ice cream instead of studying). 

* This makes it all the more mysterious that the brain is generally able to do this very well for low-level activities (few people continuously stab themselves with forks) but is often shite at higher reasoning (like believing in the Jewish Space Laser).
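
As a toy illustration of the "competing networks with a reward system" picture above – purely my own sketch in Python, not anything Eagleman sets out, with all the names and numbers invented for the example – imagine a few crude predictors, let whichever has the best track record make the choice, and then adjust everyone's credibility by how well they predicted the actual outcome :

import random

# Three crude "networks", each predicting how rewarding an action will be.
predictors = {
    "impulsive": lambda action: 1.0 if action == "ice cream" else 0.2,
    "prudent":   lambda action: 0.9 if action == "study" else 0.3,
    "random":    lambda action: random.random(),
}
credibility = {name: 1.0 for name in predictors}  # each network's track record

def actual_reward(action):
    # Immediate, tangible payoffs dominate: ice cream now beats studying now.
    return 0.8 if action == "ice cream" else 0.4

def decide(actions):
    # The network with the best track record gets to crush ambiguity into a choice...
    winner = max(credibility, key=credibility.get)
    choice = max(actions, key=predictors[winner])
    # ...and afterwards every network is up- or down-weighted by how well
    # its prediction matched what actually happened.
    for name, predict in predictors.items():
        error = abs(predict(choice) - actual_reward(choice))
        credibility[name] *= (1.0 - 0.5 * error)
    return choice

random.seed(1)
for _ in range(5):
    print(decide(["ice cream", "study"]), credibility)

Run it and the impulsive network rapidly ends up owning the decision – ice cream every time – which is rather the point about tangible rewards overriding abstract ones.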

Eagleman's suggested self-help solution is to be like Odysseus and lash ourselves to the mast : ahead of time, ensure that the actual situation we'll find ourselves in is more tangibly rewarding/punishing than it otherwise would be. Don't give the brain the option to be distracted by temptation. Perhaps more interestingly, watching our neural responses (when we've got the option to do so) has also proven helpful in getting participants to learn self-control. Maybe one day we'll have phone-accessible EEG-hats and can cultivate our own desired responses using an app...

But exactly how the brain decides "this network is the winner", that this is the right approach, is nowhere made clear. Emotions, says Eagleman, act as a heuristic for decision-making, combining all the possible effects into a simple sensation we can respond to, which explains that uncomfortable feeling when we haven't yet committed to anything. This, though, is still more description than explanation, albeit a useful one. 

Finally, it's also interesting to me that neural activity is more coherent and correlated while we're asleep than when we're awake. Is the brain better able to get on with things without that pesky conscious mind sticking its nose in ? Maybe.


The nature of reality

This will have to be either very brief or extraordinarily long, so I'm going for the former. The world we experience is not the outside world itself, says Eagleman. Not only is there a delay in processing different sensory signals, but there's even a different delay for each type of perception. Yet somehow, the brain combines all of this into a unified experiential whole. 

And this unified approach appears to be crucial : to assign meaning to a sensory input, it must correlate to something else. If you only have visual data, you won't be able to see. While I had the impression from some of the case studies of Oliver Sacks that we learn to read the world around us, it's more subtle than that – interactivity is crucial, an idea developed further by Peter Godfrey-Smith (another one whose books I really must blog up sometime). Interestingly, when this multisensory approach is denied, the result is hallucinations... eerily similar to LLMs.

Perhaps one of the most compassionate parts of the book describes schizophrenia as a sort of waking dream. In this condition, says Eagleman, sufferers experience hallucinations without any kind of distinction between them and real, external sensory inputs. This naturally explains their behaviour, which may be perfectly rational but in response to a reality all of their own. I wonder if it doesn't go even further than that : in my dreams I rarely respond rationally to anything, yet at the time it feels coherent and logical. Maybe the logical reasoning centres are also impaired, without affecting the sensation that things have been done correctly... the reward networks might be all messed up.

More philosophically, Eagleman stresses that our perceptual reality is not reality itself : "the real world is not full of rich sensory events; instead our brains light up the world with their own sensuality". Perceptual time is also not experiential time (let alone real time). Experiments have shown that while high-stress situations make us feel like time has slowed, they don't affect our ability to think or perceive more quickly. 

Here I think I'd need to clarify with Eagleman himself exactly what he's getting at. I agree, our perceptions don't correspond directly to reality, in that our sensory experience has no special claim on validity (compared to, say, a bat or a whale), and that we have to learn how to create our own mental worlds*. But all the same, surely our experiences do have some equivalence with external reality. There is something outside that induces an experience; redness is not totally meaningless, nor is pain or sound or heat. We can verify these sufficiently well so as to reliably identify exceptions (like the responses of Schizophrenics) as being disconnected from that external reality. So in that sense, I don't agree that our perceptions are purely fabrications; since we have access to nothing else, it doesn't feel at all useful to me to claim that perception is not reality. Perceptual reality is the only thing we will ever have any access to.

* Eagleman mentions an interesting case of synesthesia in which letters are associated with colours. I wonder how this works, given that the letter symbols themselves have no meaning until they're learned.

Finally, I also don't agree that we don't have free will. Eagleman says (as do many others) that consciousness does appear to have some uses, particularly when learning a task for the first time or when things need to be done in an especially careful, controlled way, or when big-picture thinking is required – integrating one line of thinking from one subject with something seemingly unrelated. Seems fine to me, though why conscious experience should be needed for this is anyone's guess. And as for those experiments where brain activity can be predicted ahead of the conscious sensation of having made a choice... nah, I've covered that umpteen times already. Far more likely, in my view, that conscious sensation actually does involve being in control rather than being a weird way in which the brain constructs a narrative* of deliberation for no good reason.

* Though I do also agree that constructing a narrative is something the brain does a great deal of, as well as creating a theory of mind for us to predict and respond to other people. Interactivity plays an interesting part here, too : while those with too much Botox are hard to read, they themselves have difficulty reading others as they're not able to be so expressive.


You will all become one with the Borg

Well, maybe. The brain's adaptability is not unlimited but it's tremendously powerful all the same; Eagleman gives the case of a girl with literally half a brain who lived a perfectly normal life (and see that recent post about the notorious but misreported Phineas Gage). A key concept Eagleman develops here is the "plug and play" model of sensory inputs. Essentially, our sensory peripherals – eyes, ears etc. – can be replaced and the brain will still find a way to deal with the information. It takes a while for the brain to learn how to process it, but it works (Kevin Warwick came to much the same conclusion in "I, Cyborg").

Personally I think this is the most interesting stuff in Eagleman's repertoire. I love the idea of being able to, quite literally, reject your reality and substitute my own. And this actually works. Experiments have enabled people to "see" using tactile sensors on their backs, foreheads and tongues, hooked up to cameras. They can identify objects and accurately judge distances. It works equally well if the tactile interface is connected to an audio sensor instead of a visual one. And this can be done wirelessly, letting volunteers experience sensory input from other places on the planet – and in reverse, to control mechanical arms remotely. 

Real, it turns out, really is just an electrochemical reaction in your brain. 

Of course, things are still limited. The possibility of digitally scanning the brain is a stupendous, unrealisable challenge. Transhumanist dreams of fully uploading our consciousness are also likely futile even from a materialistic perspective, even if we did manage to scan our entire brain and its neural patterns. If consciousness is indeed just those patterns, then we could simulate the brain inside a computer, yes... but, Eagleman says, while it might well be something (or someone) that experiences something, it still wouldn't be us. Even from his perspective of consciousness being emergent, which I don't really agree with*, a transfer just doesn't work. A copy ? Sure, if it's really the patterns that are conscious, and not something to do with the physical substance. Even then, it would have to be capable of change, of forming new memories and responding to new sensations.

* He attempts to refute the argument of Leibniz's Mill, saying that yes, you can't find perception in any single mechanical part of the brain, but maybe you could in the emergent whole. I just don't see it. To me, the gap between experience and physicality is just too large, too fundamental to ever be bridged. The only way I can see this working is if mind and matter are, somehow, of essentially the same stuff (neutral monism, idealism, etc. etc. etc., you know the drill by now). Otherwise, I have no clue how a gigantic abacus could ever be conscious, or how this would happen if we moved the balls around but only in a very particular way.


In short, a fascinating and worthwhile read. I'm going to have to try and get hold of his TV documentaries and read his other books (of which I'm glad to see there are several) at some point. One day I should try a fuller write-up against materialism, but what's really interesting is that the disagreement with my own, somewhat more dualist perspective often isn't really that stark after all.

Tuesday, 1 July 2025

Single Female Autarky

Sorry. The headline is purely contrived, based on Bender's little parody song in Futurama :

Single female lawyer !
Fighting for her client
Wearing sexy miniskirts and
Being self-reliant !

What in the world am I on about ?

Xenophobia.

Ooo...kaaay....

Right. There was another piece I read recently on Aeon that's entitled : The allure of autarky : Liberal thinkers are shocked that nations are once again isolating from the world. The real surprise would be if they didn’t. 

Hopefully things are starting to make a bit more sense now. If not, they will soon.

The title is certainly interesting enough. The perceived decline in liberalism is a definite cause for concern, although I stress "perceived" : certain electoral results would strongly dispute the notion that the West is doomed to fall into authoritarianism. We can't allow ourselves to think that way, and we shouldn't, because it isn't true. Nevertheless, the return of authoritarian, anti-liberal ideologies is worrying. 

But to me it's also deeply surprising. Why would anyone object to a social system that allows people to live their own lives how they choose, insofar as they don't interfere with anyone else ? Who actually wants other people sticking their noses in and meddling in their own lifestyle choices ? Not me !

That, though, is a very big topic indeed, and best left for another time. Still, one of the main aspects is the one covered in the essay : isolationism, or, to give it its tactful window-dressing... self-reliance.

I want to start with a quote from near the end of the article. Without this, it would be all too easy to conclude that the author has a very different take on the matter than they actually do.

Rousseau was mistaken to believe that primitive man was a self-sufficient loner. And the evidence is unambiguous that globalisation – greater trading links, the transmission of know-how and technology, more cross-border investment, the migration of people – has delivered spectacular material benefits for humankind and higher living standards over recent centuries, and especially in the era since the Second World War... Trade and interconnection, whether or not we realise it or like it, is part of who we are – and always has been.

Right, good. We've established that we're in the same moral ballpark and can proceed accordingly. Unfortunately, I still don't think it's a very good essay. You'll get little argument from me that isolationist tendencies, a.k.a. the desire to live in a self-sufficient, self-reliant "autarky", are very old ways of thinking indeed. The problem is that the author tries quite hard to excuse this as something more respectable than xenophobia, but doesn't really present any convincing argument for why this should be. They also state several times that this "autarky" has been widely touted as a goal of the progressive left. This I don't get at all : it seems neither progressive nor left, but reactionary and conservative through and through.

From those earliest days of Western civilisation, the moral virtue of autarky was not just a goal for an individual, but an aspiration for the collective too. According to Aristotle, a contemporary of Diogenes, the ideal city state in the ancient world was also self-sufficient, and those inside the polity would have everything they needed to pursue a good philosophical life – unlike those outside it.

Yes, but in Plato at least, everyone would be dependent on each other. The point was very much to live in a harmonious community, which could and would interact with outsiders. The boundaries of Magnesia and the Republic feel to me more like a form of intellectual self-control than anything else, a way to mark the limits of the thought experiment and stop it getting out of hand. They're not a key feature of these early world-building efforts, so far as I can tell.

Aquinas was an advocate for economic, not just spiritual, autarky. He noted that there are two ways a city can feed itself: by growing food on its own surrounding fields, or through trade. ‘It is quite clear that the first means is better,’ Aquinas concluded in De Regno (1265), his book on kingship. ‘The more dignified a thing is, the more self-sufficient it is, since whatever needs another’s help is by that fact proven to be deficient.’ Aquinas also proffered a moral case for autarky when he noted that ‘greed is awakened in the hearts of the citizens through the pursuit of trade.’

Yikes ! That sounds... ghastly ? I think the idea we can all coexist in glorious isolation is utterly wrong-headed. We're all interdependent whether we like or realise it or not; we might try and minimise our interactions with others (I generally do) but pretending we can all manage by ourselves is the utmost folly.

A more extreme version is a classic from Japan :

The policy of sakoku or ‘closed country’ was imposed on the islands of Japan in the 17th century by the Tokugawa shogunate, a form of feudal military dictatorship. Western Christian missionaries were banned and those that were already in the country were persecuted. Emigration was forbidden and foreign trade was reduced almost to nothing. ‘The Christians have come to Japan … to propagate an evil creed and subvert the true doctrine,’ proclaimed an edict from the shogunate in 1614.

That's just simple xenophobia though. "Our culture is better and special, but cannot withstand interaction with others" is an age-old attitude that points to a deep-seated psychosis in human nature : our way of life is the best, but everyone will abandon it the moment any of those pesky foreigners show up. Righto then, if people are abandoning your ideal of their own free will, then how the blazes is your approach better ? If your way is so good, why don't people want to keep to it ?

(That's a problem for liberalism, too of course.)

I should probably add that I don't think self-reliance is a bad thing in itself. It's good to be able to do what you can without bothering others. It's especially good to be able to think for oneself. But not every cry for help points to subservience, and interactions with foreign parts are good for broadening the mind, not for corrupting the soul.

Rousseau conjectured in his Discourse on Inequality (1754) that primitive man had been naturally ‘solitary’, coming together with others only for mating, and was much happier for it. ‘No one who depends on others, and lacks resources of his own, can ever be free,’ he warned the Corsicans in 1765. ‘[P]ay little attention to foreign countries, give little heed to commerce; but multiply as far as possible your domestic production and consumption of foodstuffs,’ was Rousseau’s advice to the Poles in 1772.

This just sounds like he didn't like people at all, really. I can sympathise – many of them are awful. But the thing about interdependency is that it's supposed to be a two-way street : if you're dependent on them then they're also dependent on you. E.g. Europe is dependent on cheap Chinese goods, but China is dependent on Europe buying said goods. 

And the ever-unspoken question here is just how far are we supposed to take this ? Should each man be an island unto himself ? Impossible, unless he scrapes a living from rocks*. For virtually everyone that would be a profound level of suffering and I don't see that as "freedom" in any meaningful sense. A hamlet, then ? A village, city, nation ? A continent ? No, no, this is all wrong-headed. Cooperation affords us more opportunities, not fewer, and so long as we can opt out of them (e.g. we don't all have to buy the latest fashionable goods if we don't want to) then we can fairly be called free. 

* On a related point, this article discusses a possible organism that might be sort of "half alive" in that it depends on others to sustain some of its basic biological processes. This illustrates the weirdness of taking this kind of reasoning to its ultimate extreme. Viruses, say some, aren't alive because they can't reproduce without a host, but plenty of macroscopic parasites can't even survive at all without a host. We need our external environment to survive (and we ourselves host our own microbiome), but clearly we're alive. I think this points to a terrible flaw in the whole thing : interdependency is part of our nature even at the biological level, and the only real escape from this is death. There are strict limits as to how far self-reliance is even possible.

Incidentally, this all reminds me of Private Eye's regular agricultural column, which insists that Britain must make as much of its own food as possible and any environmental consequences be damned. I'm continuously wondering why, but despite many letters from other readers asking similar questions, answers have come there none.

‘In a nation which has closed in this way, whose members live only among themselves and very little with foreigners … a higher degree of national honour and a sharply determined national character will develop very quickly,’ claimed Fichte.

Yeah, right. It's just simple and barely-disguised xenophobia. Living in a self-contained country won't lead to a sense of "national honour". Instead it will just foster closed-minded stupidity.

Jerry Mander, an ecologist... believed that one of the problems with globalisation was that it encouraged ‘voracious consumerism’... Like Fichte before them, today’s Left-wing anti-globalisation activists often argue that free trade disproportionately benefits wealthy nations while harming poorer ones. In this view, autarky becomes the natural path to both domestic and international social justice.... Progressive thought has long carried a current of economic isolationism. Yet, as recent history makes clear, the drive toward self-sufficiency is by no means confined to the Left or to environmentalist movements.

These sound more like crazy people than anyone I'd normally associate with left-wing activists (excepting the possible "let's all go and live in the trees" types of loonies), let alone progressives. Saying it's "by no means confined to the left" is a mad statement, like claiming that empire-building is by no means confined to the British. Which segues nicely to the next segments.

Then there are some examples of when autarky is at least understandable, but there are heavily extenuating circumstances :

Swadeshi was Gandhi’s antidote to what he saw as the predatory imperial capitalism of the British. And that mindset of India needing self-sufficiency remained long after independence was achieved. 
‘Independence means self-reliance,’ stated Nyerere’s 1967 Arusha declaration. For him, autarky and his distinctive vision of African socialism were inseparable. 
In January 1790, George Washington, the first president of the United States, rose to deliver his first message to the US Congress: ‘A free people ought not only to be armed, but disciplined,’ he declared, ‘and their safety and interest require that they should promote such manufactories as tend to render them independent of others for essential, particularly military, supplies.’

Okay, yes, the British have a lot to answer for. Fair enough. Absolutely fair enough. But these seem like such specific examples that they're of no help in understanding the present mood at all. Sure, if you've just thrown off an oppressor, of course you're going to want to be self-reliant ! That's obvious. The problem is that this is happening in countries which have not been subject to any sort of oppression whatsoever, yet the right have, for example, managed to somewhat successfully paint the European Union as some sort of modern-day German (read : Nazi) hegemony. And this difference between true oppression, the legitimate need for freedom, and manufactured oppression – that's what really matters.

To return to the empire-builder's own perspective :

The road to national self-preservation for the former lance-corporal would have to run through a radical programme of building national self-sufficiency. And he believed that Germany’s salvation lay in conquering and exploiting the rural bounty of lands to the east, thus gaining the notorious Lebensraum (‘living space’).

I mean, come on. In what world does "self-reliance" mean "conquest and enslavement of others" ? That's crazy, and I have a very hard time seeing why the author included it. All it does is reinforce the message that it's no more than a presentable version of xenophobia. No noble desire for self-control, just a paranoid dislike of foreigners. It's that simple.

Here, though, is a stranger puzzle :

Mencius Moldbug, the blogging alias of the US computer scientist Curtis Yarvin, is a prominent tech-authoritarian theorist whose influence extends to some Trump-aligned politicians and wealthy backers. Yarvin advocates dismantling US democracy in favour of a monarchy or a national ‘CEO’-like figure. As international travel collapsed at the onset of the COVID-19 pandemic in 2020, Yarvin saw his moment, not just tolerating isolationism ‘but promoting it’. 

The right-wing tech CEO is something I don't get. Technological breakthroughs require scientific advances, which are built on cooperation at all scales of society. So just why so many techbros fall prey to this cognitive dissonance is something I struggle with. To me, cool new tech gadgets are the easy selling point for scientific research. That anyone would seek to pervert this into dismantling democracy and promoting isolationism is both absolutely bonkers and viscerally disgusting.

So what makes the autarkic urge so persistent? That protean ability to be moulded and affixed to a seemingly endless host of ideologies is surely key. But perhaps it’s also that umbilical link in autarky – evident since the days of ancient Greece – between personal morality and the question of how we should relate to each other within communities and between communities. To be successful, political movements have to appeal to something fundamental in everyone’s nature. Our innate sense of the virtue of self-reliance is often the foundation stone on which they build.

Nah mate, it's xenophobia. Self-reliance as meaning self-improvement is something everyone endorses (especially if of the sexy miniskirt variety*), but I'm not convinced that the author has at all demonstrated that this is really what's appealing about modern authoritarians. Nor am I convinced that autarky isn't (by far) dominated by the conservative economic right, with any appeal to the progressive left being firmly limited to a few tail-end-of-the-Gaussian weirdos. The author has, I think, both over-thought and under-sold their case.

* Do they count as self-improvement ? Shut up, they do.

This is a shame. It remains difficult to explain how the right has succeeded in the face of a liberal society that is/was generally prosperous (however imperfect); we can venture xenophobia, media manipulation, unrealistic expectations, and various other factors that seem more plausible than an appeal to "self-reliance". Still, something unsatisfying remains about the situation in which people keep voting for options which are so, so obviously against their own interests.

I prefer to end with a much-used and much-needed quote from Tolkien :

The wide world is all about you. You can fence yourselves in, but you cannot forever fence it out.

Or possibly, to bring this back to the somewhat anarchic introduction with which this post began, a quote instead from Marge Simpson :

There's no shame in being pariahs.

Autarky is a daft notion. Just look at North Korea.

Monday, 30 June 2025

Black holes belay bird Bentleys

Following that marvellous Aeon piece on how Phineas Gage didn't suffer a horrible personality disorder after taking a bolt to the brain (see previous post), here's a couple more from them.

.... what, TWO ? Yes, two ! On related topics, obviously. But one of them isn't much good at all, so I'm going to reduce it to the bits I found interesting and ignore the rest.


This first piece is a bit of a meandering rant about how humans learned culture from animals. I mean, yeah, sure, maybe, but how do we know we wouldn't have figured it out anyway ? Being animals ourselves, surely we evolved with a much greater tendency to act on instinct rather than reasoning and therefore didn't start off as such "blank slates" as modern babies are. So we'd have had animal-like tendencies because we were, well, actually animals.

The main problem with the essay is that the author seems hung up on an earlier bias against the idea of animal cultures. I don't think this is nearly such an issue today; it's obvious to anyone watching nature documentaries that many animals possess something akin to culture (even if we might not usually use the word). But there are a couple of interesting points. One is highly specific :

In several caves in France, such as Bara-Bahau, Baume-Latrone and Margot, human-made finger flutings or ‘meanders’ follow earlier cave bear scratches. Some of these long lines of finger-combed grooves are superimposed directly over claw marks. Others are located near the bear-made traces, echoing their orientation. 
In Aldène cave in the south of France, human artists ‘completed’ earlier animal markings. More than 35,000 years ago, a single engraved line added above the gouges left by a cave bear created the outline of a mammoth from trunk to tail – the claw marks were used to suggest a shaggy coat and limbs. In Pech-Merle, the same cave where Lemozi mistook cave bear claw marks as human carvings of a wounded shaman, a niche within a narrow crawlway is marked by four cave bear claw marks. These marks are associated with five human handprints, rubbed in red ochre, that date to the Gravettian period, about 30,000 years ago. 
For Lorblanchet and Bahn, the association between the traces of cave bear paws and human hands is no accident: ‘It is remarkable (and the Gravettians doubtless noticed it),’ they wrote, ‘that a rubbed adult hand, with fingers slightly apart, leaves a trace identical in size to that of an adult cave bear clawmark.’

And that's a very interesting case in which humans may indeed have been directly influenced by animals. Still, it doesn't mean that the claw markings were the bears attempting to do line drawings* – the innovation here is all human (unlike, say, the bower bird). I don't find the other examples nearly as persuasive, however.

* If you go down to the woods today....

The second interesting point is more conceptual. When the author talks of "culture", I think they're really referring to history. That is, remembered culture, a transmitted knowledge of how things used to be in the past : specific events, old traditions, different mindsets of what the ancestors believed. The author does imply this sort of definition, but more by accident than anything else :

This is based on a well-established belief: humans make and do things because only we have culture, and when those things we make and do change over time, we call it history. When animals make and do things, we call it instinct, not culture. When the things they make and do change over time, we call it evolution, not history. Anthropologists have pointed out that this is an unusual way of thinking: at what point did we stop merely evolving from our long line of hominid ancestors, cross an irreversible threshold from nature to culture, and kickstart history ?

... as the British anthropologist Tim Ingold argues, we never speak of ‘anatomically modern chimpanzees’ or ‘anatomically modern elephants’ because the assumption is that those species have remained entirely unchanged in their behaviours since they first took on the physical forms we see today. The difference, we assume, is that they have no culture.

And surely it's interesting to think of animals as having this kind of historical culture. So far as I know, no animal shows evidence for this kind of thinking – episodic memories and old knowledge, sure, but nothing approaching human history.


The second article is very much better. It asks the provocative question as to why other animals don't possess a culture as rich as humans – that is, in the Pratchettian sense of extelligence, of highly complex physical objects like Bentleys and nuclear reactors. Specifically, birds. Plenty of animals are intelligent and even have culture, but many have other factors which readily explain why they don't have sophisticated devices (short lifespans in the case of cephalopods). But birds, quoth the author, appear to have an awful lot of the basic necessities :

  • Large brains with strong intelligence and learning capabilities
  • Long lives, so that understanding of the world can be learned and refined over time
  • Highly overlapped generations, so that information learned can be transmitted through time
  • Meticulous rearing of offspring, for the same reason
  • Refined intra-species communication, so that information can be transmitted with high fidelity.

So why no Bentley-driving birds ? For this the author develops a very nice metaphor for evolution. They imagine a 2D landscape where the x, y position represents the general parameters of a species and the height is how well it's adapted to current conditions. Then we can think of evolution as a blind mountaineer, only able to react to the local gradient and never to make any long-term plans at all :

This is why evolution can get ‘stuck’ at the local maximum – the blind mountaineer will never go ‘down’ in the service of getting to a higher peak on the other side of a valley, because he doesn’t know the peak is there; he senses only that in that direction lies a valley of death.

Now the neat trick is to invert this and make the troughs, not the peaks, represent local suitability. Instead of a blind mountaineer, imagine a ball rolling under gravity – something like the old physics classic of a rubber sheet deformed by a heavy ball, but with the sheet much more complex. The ball can deform the sheet, but the sheet can also change in response to external stimuli : for example, a cheetah evolving to run faster than its prey sinks the trough a little deeper, while a changing climate may either deepen the trough or fill it in. And since most local minima won't be all that deep, random mutations or external changes (both of which happen continuously) usually ensure that animals do keep evolving; only a few get stuck in minima so extreme that they're not likely to change any further.
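
Going back to the un-inverted version for a second, the blind mountaineer is easy to see in action with a toy hill-climber. This is purely my own sketch in Python – the two-peak landscape and all the numbers are invented for illustration, nothing from the essay :

import math
import random

# Toy fitness landscape: a modest peak near x = 2 and a much higher one
# near x = 8, separated by a valley.
def fitness(x):
    return 3 * math.exp(-(x - 2) ** 2) + 10 * math.exp(-(x - 8) ** 2)

def blind_mountaineer(x, steps=10_000, step_size=0.1):
    """Only ever accepts uphill moves; it will never cross a valley."""
    for _ in range(steps):
        candidate = x + random.choice([-step_size, step_size])
        if fitness(candidate) > fitness(x):  # refuses to go 'down'
            x = candidate
    return x

random.seed(0)
end = blind_mountaineer(1.0)  # start on the slopes of the small peak
print(f"stuck at x = {end:.2f}, fitness = {fitness(end):.2f}")

Start it at x = 1 and it climbs dutifully to the little peak at x = 2 and stays there forever; the far higher peak at x = 8 might as well not exist, because every route to it begins by going downhill.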

The useful bit about this for the lack of bird culture is that it helps to show that birds aren't a case of a cultural near miss :

Unlike objects flying through space, an evolving species isn’t rapidly whizzing by in a straight line – it is wibbling and wobbling about in evolutionary space, taking a drunkard’s walk in the general vicinity of its current landscape. If birds were simply almost-cultural, with all the predispositions they have, they wouldn’t ‘miss’ cultural evolution and sail off into the distance, they would continue to wobble about the rim, always one small deviation away from falling in to join us in the valley of cultural plenty [i.e. they'd fall in and evolve culture eventually].

Instead, they fell into a black hole called flight. Once birds as a group had flight, their future was sealed – flight would be their defining trait, and would delimit the futures available to them. Flight resolved a huge amount of the pressure that drove humans to seek cultural efficiencies in feeding themselves, fighting predators, hunting prey, transmission of information, and all of the other complicated things that we do to be successful. Birds, by comparison, live on Easy Street – they fly away. They fly away from predators, they fly away from food shortage, they fly away from environmental change. There is no pressure on them to evolve the means to establish agriculture. They can fly away instead.

Flight, says the author (developing this in more detail in the essay), is such a powerful advantage that it removes most of the selection pressure. A bird with culture wouldn't have a significant advantage over one without, because the solution to all its problems would still be the same : fly. It would even have a substantial disadvantage, because the extra intelligence would be biologically expensive. And of course, it would need grasping hands, which birds' wings are ill-suited to providing.

Conversely :

It is probably the case that our cultural abilities are also a black hole in evolution. Everything that is true about flight’s incredible selection benefit is true about human culture. We have also fallen down an impossibly steep slope of selection to arrive at the incredible complexity of human life today. I cannot fathom what set of circumstances would cause us to evolve away from this complexity. But at the bottom of our two black holes, we and the birds are separated.

We will both meander about the infinite space of our black holes but will not leave them. They will not play canasta, and we will not fly. Our futures are expansive, but point interminably to our respective singularities – theirs to flight, and ours to culture. For each to have the other would be splendid, but evolution doesn’t aim at splendid. It rolls, unthinkingly, away from pressure, and our respective pressures have been released by our respective all-defining traits.

I think this is an extremely clever and well-thought-out piece, but I still think the picture is incomplete. The author mentions flightless birds as evolving due to exceptional circumstances (e.g. lack of local predators), but what's prevented flightless birds from developing culture ? After all, flightlessness has been a thing for tens of millions of years, possibly longer. I can't believe the wing structure is that big of a deal-breaker.

As for the other apes, I suppose the situation is more complex – clearly some of them did develop human culture ! And even human-like culture, considering we know of Neanderthal art and the like. I think we still don't know enough about how our own thought processes really got going to say exactly why the other great apes aren't driving motorcycles, let alone why birds aren't reading Shakespeare. 

Still, it's thought-provoking stuff. I remember many short stories by Stephen Baxter in which humans of the far future, trapped in specific circumstances, lose intelligence as a result of selection pressure and biological expense (often, amusingly, while retaining a level of advanced technology). One thing he never explored, though, is the evolutionary consequences of jetpacks. Give everyone a cheap, simple, reliable jetpack, let them simmer for about half a million years... what would the result be ? A whole new expanded way of thinking, or just the opposite – idiocracy but in the clouds ? If the Aeon logic is correct, giving everyone a jetpack should mean the end of history.

... nah, I still want a jetpack, dammit.

Head injuries won't necessarily turn Jekyll into Hyde

I already shared this on my social media feed, but it turned out to be rather long. And I find it so interesting I want to keep it here as well as a go-to reference. This version has some very minor editing and additions. It's a lot more "quotes from the article" and a lot less of my own commentary than usual, but if you're already familiar with Phineas Gage, it'll save you a long read on Aeon (if you're not, I suggest you just use the link immediately below instead).

This is an absolute belter of a piece from Aeon ! I haven't kept up with them much of late (there's just not enough time) but I immediately went to the site and downloaded a few more articles to get back into the swing of things. Aeon do have the occasional utter dud of an article, but when they're right, they're right.


You've probably heard of Phineas Gage, the 19th century railway worker who survived a horrific brain injury only to come out with a changed, ill-mannered personality. Well, he did survive, but the evidence for a changed personality is next to non-existent.

There are three primary sources written about Gage by people who met him. Harlow wrote his first report in 1848, limited to an account of the recovery, and a second in 1868, which reproduced the first and added further interpretations. Another physician, Henry Bigelow, wrote a paper after meeting Gage the year following the accident. 
Of the two, only Harlow mentions Gage’s personality change. His total word count on the subject is just over 300. [My emphasis] At no point does he ask Gage about it or offer any quotes from him – Gage was dead by the time Harlow wrote the second paper. Nothing written by anyone who knew Gage before his injury survives, only paraphrases noted down by the two doctors. 
So, the sources are scant to begin with. And, according to the psychologist and historian Malcolm Macmillan in 2008, most of those who have written about Gage since the primary sources were published do not appear to have checked them, promoting some aspects of the story, ignoring others and embellishing liberally. Macmillan’s conclusions were echoed in a separate 2022 review of more recent Gage literature, which found that half of the 25 papers analysed gave negative descriptions of Gage that were not based on the primary sources.

It really is quite concerning how often widely known stories turn out to be based on almost nothing. The unfortunate part is that because everyone repeats it, it's easy to assume that this is because it's well-verified. But this, unless the reports are all independent, is circular : more repetition sounds like ever-greater credibility, even when it's just literally repeating what someone else said. Now a lot of independent verifications do constitute powerful evidence, to be sure, but a lot of people repeating the result does not. The difficulty is that the brain doesn't instinctively understand this.
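
To put toy numbers on that distinction – my own made-up figures, nothing from the article – here's the arithmetic in a few lines of Python. Each genuinely independent report multiplies up the odds; one report repeated five times leaves the evidence exactly where a single report would :

# Invented numbers: suppose each independent source is right 80% of the time,
# so one report shifts the odds in favour of the claim by a factor of 4.
prior_odds = 1.0                       # 50/50 before hearing anything
likelihood_ratio = 0.8 / 0.2           # evidential weight of ONE independent report

five_independent_sources = prior_odds * likelihood_ratio ** 5
one_source_quoted_five_times = prior_odds * likelihood_ratio ** 1

print(five_independent_sources)        # 1024.0 : genuinely strong evidence
print(one_source_quoted_five_times)    # 4.0    : no better than hearing it once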

Hanna Damasio’s claim that Gage ‘began a new life of wandering’ seems to be extrapolated from a single use of the word ‘wanderings’ by Harlow, in context of the difficulty he had in tracking down Gage in later life. Nowhere did Harlow or Bigelow suggest that ‘wandering’ was typical of Gage’s post-injury existence.

Of the various falsehoods written about Gage, perhaps the one most clearly contradicted by the primary sources is the Damasios’ claim that Gage ‘could not be trusted to honour his commitments’ and that he ‘never returned to a fully independent existence’. In fact, the sources make no mention of Gage being dependent on anyone from the time he recovered from the injury until the last year of his life when he succumbed to illness. They also clearly state that, after he gave up his public appearances, he worked at a livery stable for 18 months and then he moved to Valparaíso in Chile where, according to Harlow, he worked for nearly eight years, ‘caring for horses, and often driving a coach heavily laden and drawn by six horses’.

As Macmillan pointed out, the available facts about Gage fly in the face of claims made about his transformation and reduced capacities. Macmillan gave a carefully sourced description of the demanding nature of Gage’s job in Chile: the dependability required of him in rising in the small hours, loading passengers’ luggage and possibly handling fares; the high level of dexterity and sustained attention necessary for driving six horses; the foresight and self-control involved in navigating the unwieldy coach along the crowded and sometimes treacherous Valparaíso-Santiago road. He also pointed out that Gage, at first a stranger to Chile, would have had to learn something of its language and customs and ‘deal with political upheavals that frequently spilled into everyday life’.

So really, little or no credible evidence for his supposed changed personality from mild-mannered worker to whoremongering brute. It just didn't happen. He became an expat working in the hospitality industry, for all intents and purposes.

The other case is that of Eadweard Muybridge, the photographer famous for those early time series photographs of running horses and the like. I'd never heard that there was a claim he had a changed personality, even though he did something much worse than Gage after surviving a stagecoach accident :

...he discovered that his wife, Flora, had become romantically involved with a local theatre critic and, by one biographer’s description, ‘swindler’ called Harry Larkyns. Soon after that, Muybridge came to suspect that his infant son was in fact Larkyns’s child. Muybridge travelled to the town of Calistoga, 75 miles north of his home in San Francisco, and knocked on Larkyns’s door. When Larkyns appeared, Muybridge shot him in the chest with a pistol. Larkyns died at the scene. Muybridge did not resist arrest, freely admitting to the murder, and was held in custody until his trial the following February.

If the testimonies of his friends are to be believed, it was the instability brought on by the brain injury that led Muybridge to kill Larkyns. ‘The killing would have surprised me much before his accident but not much after it,’ said one defence witness, according to the Sacramento Daily Union. The brain scientists take it at face value – labelling the murder as Exhibit A in their argument for Muybridge’s disinhibition.

Why, then, don’t we find Muybridge alongside Gage in neuropsychology textbooks or Wikipedia articles about frontal lobe function? The simplest answer is that neither the testimonies about his personality change nor the retrospective assessments of the clinicians are believable. Some of them amount to very little even on paper – the mention by his friends of Muybridge’s hair changing colour and his idiosyncratic approach to business, for example. But, more pressingly, the testimonies were given in support of an insanity plea submitted by Muybridge’s defence council, in the hope of saving him from the death penalty. How sure can we be that his friends would have said the same things without the urgent stakes of the trial hovering in their minds?

Even if we accept the testimonies as the whole truth, it’s notable that the jury present at the trial roundly rejected the insanity plea. In violation of the law and of the judge’s instructions, when they acquitted Muybridge, it was on the grounds that Larkyns’s death was not a murder committed by a mad person but a justifiable homicide committed by a sane one. Perhaps they’d read the interview with Muybridge published by the San Francisco Chronicle while he was in prison awaiting trial. He is quoted as saying:

"I objected to the plea of insanity because I thought a man to be crazy must not know what he was doing, and I knew what I was doing. I was beside myself with rage and indignation, and resolved to avenge my dishonour.I objected to the plea of insanity because I thought a man to be crazy must not know what he was doing, and I knew what I was doing. I was beside myself with rage and indignation, and resolved to avenge my dishonour."

Fortunately for him, his photography had made him famous. And so the results were quite different from Gage's : Gage was blamed despite doing nothing, whereas Muybridge was excused murder :

The forces of social status and uncritical trust in professionals that worked so well in Muybridge’s favour did the opposite for Gage. During his life, he may well have had friends. People were willing to employ him. He was cared for by his family towards the end. And there’s no suggestion in the primary sources that he ever became isolated or spurned from any community. However, without powerful or wealthy friends to defend his reputation, the story told about Gage by Harlow operated in isolation after his death. In the absence of dissenting voices, it was treated as fair game by commentators with their own professional agendas.

Finally, the main point of the essay is against the notion of "disinhibition", or at least that this shouldn't be used as an easy explanation for all psychological changes or disorders :

Among those who have accepted the revision of the Gage story, the consensus seems to be that his character probably did change but that any disinhibition that did occur was temporary. But it’s important we subject even this claim to counter-proposals, that we ask what else might have provoked such a temporary change, assuming we believe in it. Some have suggested that psychological trauma might have played a role... But maybe Gage was just pissed off.

On a modern case :

Callum was clear that both examples were out of character but that they still held meaning. ‘In the first instance, even though my family were a bit surprised at what I was saying, it also reassured them that I was still the same person. I was suggesting doing things that demonstrated I remembered my friends, even if my family didn’t want to hear about those things.’

This persistence of self was something Callum also emphasised about the apparent disinhibition. Disobeying doctors’ orders by wheeling himself out of the hospital may have been atypical for his pre-injury character, but this behaviour still belonged to him rather than to some mysterious new identity ‘unveiled’ by the injury. ‘I’m not a different person,’ he said. ‘I’m me after something traumatic has happened to my brain. It was still me doing those things. It was me who was disinhibited.’ And he also insisted that the behaviour held value.

Personalities vary all the time. The difficulty, I guess, is in deciding when a change is legitimate or valid. One person can be excused murder because they lost their temper; another can become a textbook classic because maybe they felt a bit grumpy one morning after taking a bolt to the head. As Cicero said (In Defence Of Milo) :

If our lives are endangered by plots or violence or armed robbers or enemies, any and every method of protecting ourselves is morally right. When weapons reduce them to silence, the laws no longer expect one to await their pronouncement. For people who decide to wait for these will have to wait for justice, too – and meanwhile they will suffer injustice first.  

And motivation does matter. But Cicero was explicitly referring to self-defence, not premeditated murder (though he had no qualms about this when it came to tyrants and the like : "to end the life of a man who is a bandit and a brigand can never be a sin"). 

Yet when someone respected does something abhorrent, we're all too apt to assume they must have had a good, valid excuse and it doesn't represent their true character. When someone of lowlier standing does something considerably less problematic, we assume that they must have had some horrible psychological problem; we view their character and agency as somehow less valid than that of others – even their trauma must reflect some deep flaw rather than their making a real choice. The tale grows in the telling indeed.

Sunday, 29 June 2025

Liberalism And The Seat Belt Problem

Every once in a while, I come across a meme that's more provocative than the usual self-righteous bullshit that pervades my social media feed. Now, this one certainly does feel like the typical holier-than-thou variety...

... but that last one got me thinking. The others are straightforwardly due to self-interest. But it's maybe less obvious why the media should be uninterested in providing the truth. 

One answer comes from the underrated Bond flick Tomorrow Never Dies, in which a newspaper editor tries to instigate World War III (through careful misinformation and manipulation) to drum up sales. It's very silly, but good lord has its main theme aged well. Still, surely in reality plenty of interesting stuff actually happens which ought to sell copy easily enough. "Lack of interesting stories" doesn't seem like a credible motivation for lying, still less in starting a war.

A key component of the meme, though, is "a world run by". This is different from the real-world situation in which the various components of society are roughly in some sort of balance; it suggests instead a weird, top-heavy power structure in which one group has become uniquely privileged – a big difference from reality. If the media is not unduly powerful (and is independent of other power groups), it's in its own interest to provide the truth, because there's more than enough corruption for it to report on anyway.

Or at least, we might naively assume so, at least for the purposes of what I want to explore here.

Clearly if the media themselves were in charge, or more pertinently if they had undue influence, it would not be in their own interests to report the truth (not the whole of it, at any rate). As the movers and shakers, reporting on their own corruption would do them little good – it would commit the cardinal sin of harming sales and ruining their bottom line.

So a media-heavy power structure would indeed "never know the truth", in a sense. I think it would more accurately be described as being drenched in perpetual bullshit : a media-ocracy would have no problem reporting the truth when it served the journalists' interests, but would equally have no scruples about barefaced lying when that was more to their advantage.

The optimal structure of society is much too big an ask for this post, but I would also briefly refer to this meme as well :

True, the specific economic realities aren't really directly comparable, but the point of the meme is surely the moral intention rather than fiscal policy...

STOP ! I don't want to go down that road. Instead, what the first meme mainly got me thinking about was.... seatbelts.

Yeah, seatbelts. See, this to me has always seemed like a possible weakness in classical liberal theory. If the idea is that we should allow everyone to behave as they wish so long as it doesn't interfere with others*, doesn't that mean we should allow people to ignore their own seatbelts ? How can we fine people or take away their driving licenses for only putting themselves at risk ?

*Excepting that we're allowed to discuss with them and to try and persuade them of right action when we disagree with their choices.

Well, if they really do only put themselves at risk, maybe we can't. Similarly, we don't generally suggest outright bans on smoking, only on smoking in public areas where others are affected. The restraint afforded by a seatbelt, on the other hand, can also help drivers maintain control of their vehicles and thus make their own driving safer, thus helping protect other people as well.

That's where the media comes back in. Here the consumer has to bear a measure of responsibility as well as the producer. Sure, we can rant and rave about the manufacture (and it often is manufacture, not reporting) of clickbait and ragebait, but we can also choose not to consume it. The problem is that we don't. Lots of people actually enjoy this kind of content : if they didn't, the market would have self-corrected by now. Markets are far from perfect at this, but it does happen.

Excessive consumption of garbage media by a lone individual does little direct harm to anyone except themselves. If they want to be an idiotic dumbass, one might think, then that's on them. The problem is that true hermits are nearly non-existent, and one stupid person has to interact with everyone else. The old adage that bullshit takes more than an order of magnitude more effort to refute than to produce is correct, so one person sinking into the addictive clickswarm of useless prattling articles is doing more than just harming themselves. Like a virus, they afflict everyone they come into contact with : to bring them back to sanity requires a protracted effort of their community, if it's even possible.

The tragedy is that the media production of these articles is not entirely the result of cynical profiteering. It's because we really do enjoy them. It's not all corruption and exploitation – it's also just human nature.

So, maybe, what we have to do is treat this deleterious effect on mental health in the same way we treat physically harmful activities. Suppose that someone is in a situation where removing the seatbelt really would only risk themselves and nobody else. Even there, we could argue that allowing rare exceptions would do more harm than good. It's easier, and better overall, to make wearing seatbelts a blanket rule : it inflicts only the most minor of inconveniences on a tiny minority for the sake of the (much more valuable) effect of protecting a large number of lives.

And we might even go further. We might allow ourselves to say, "this is for your own good – we don't want you to die, so we won't allow you to take this pointless risk". People simply don't always know what's best for themselves; we might also take the angle that an injury suffered as a result of ill judgement consumes resources we aren't prepared to expend when we could have prevented the injury instead.

Managing the media as a component of mental health would also allow us to control a public health crisis. A few smokers are something we can handle; an epidemic of lung cancer is clearly something we want to avoid. A few stupid people are entertaining; a horde of them is one of the most dangerous forces on the planet.

This is perhaps an illiberal position. The problem is that relatively few people are interested in, or have the time for, detailed reports and the full context of a story. Bad journalism thrives in part because people don't want good journalism : they want the emotion-inducing nonsense instead. And just as the odd cigarette or two won't do anyone much harm, yet the risk of addiction may be deemed too great (certainly we accept this for some drugs, at any rate), so too may it be for crappy journalism. There's a sort-of "tragedy of the commons" about the whole thing, in the sense that one person reading one bit of celebrity gossip causes zero harm, but the cumulative effect can be seriously hazardous to community health.

If we are prepared to ever say, "we can't allow you to do this enjoyable thing – we've found the consequences for society outweigh the benefits for the individual", then surely the ability to think clearly and rationally is something to which we should apply this reasoning. 

Obviously, we cannot regulate intelligence, nor prevent stupidity. But we can take action to stop it becoming worse than it otherwise would be. And following another liberal principle, we can also minimise these interventions, making the smallest restrictions for the greatest gain. So this doesn't mean running wild with regulations or banning everything left, right, and centre, which would just replace one problem with another. We proceed on a case-by-case basis : this is problematic, so apply whatever regulations are required to restrict it (using outright bans only as a last resort – the goal being a realistic minimising of bad behaviour rather than utter prevention); this is truly harmless, so let everyone go nuts.

Finally, in order to improve any ability, it must be operated at its limits. Consuming and analysing every piece of media clickbait, or more to the point, every bit of verbal effluence that spews from Trump's anus – sorry, mouth – is unhealthy. It is, I suggest, literally weaponised incompetence, designed to keep people stupid by having them exercise their brainpower in utterly unproductive engagement. Even if it doesn't actually degrade critical thinking skills, at the very least it becomes a pointless distraction. Endlessly proving that the latest thing that Trump said is wrong is counterproductive. We know he's a moron ! Saying it a thousand times for a thousand different reasons is unnecessary, fucking exhausting, and worse – every moment spent analysing the latest bit of garbage is a wasted opportunity to work out how to overthrow this deplorable fascist monster.

Rant over. I do not believe in maintaining a liberal society through illiberal means, but I trust that this is not what I'm suggesting. Rather, if we know that an activity leads away from our goal, we have to take steps to avoid it. If an activity is seemingly harmless and only affects an individual, but actually turns out to have provably harmful effects on the whole of society, then we don't ignore it. We rarely ban but frequently regulate to keep things from spiralling out of control. Sometimes, we surrender our own judgement as being inferior to that of experts, and allow them – under proper accountability – to save us from ourselves.

Saturday, 28 June 2025

Review : Impossible Monsters (II)

Time to conclude my breakdown of Michael Taylor's Impossible Monsters, which looks at the discovery of dinosaurs from the perspective of science and religion. In part one I covered how the boundaries between the two were often fuzzy, especially in the early part of the Victorian period. Individuals, too, were the proverbial rich tapestry : some clergy were tolerant and liberal, some agnostics were nasty little bastards. Here I'll look more at how the theory of evolution itself developed, and some of the conflicts that occurred between theory and theology.

Sometimes, a harmonious coexistence of science and religion prevailed. But there were lines which, if crossed, could lead to psychological violence with serious consequences for those involved.

At the start of the period this was almost entirely in favour of the religious, who held all the cards and pulled all the strings of society. By the end, the roles were in no sense reversed, but a critical tipping point had been passed. No longer would every atheist routinely fear for their job security, let alone their social standing, still less their basic freedoms. That's not to say disagreement had vanished, by any means. But it was now possible for the outwardly atheist and agnostic to receive the highest honours society could bestow, even to get away with deliberate provocation against the more hardline religious elements.


Philosoraptor Continues

Britain had advanced considerably along the road to liberalism. Like the scientific discoveries that accompanied, and perhaps assisted, the progress of society, it was sometimes slow and grinding work but occasionally punctuated by moments of sudden and decisive change. As a case study, Taylor shows how the theory of evolution came about largely due to decades of painstaking work, building from smaller parts of the main idea, rather than due to lone geniuses or a flash of inspiration. 

Far from making this end result less dramatic, the exact reverse is true – but it's only clearly visible when we take a step back. For the kind of change of thinking this necessitated affected nothing less than our understanding of time itself, a reshaping of thought so profound that it simply couldn't have happened overnight.

Such radical ideas met with no small amount of outright hostility, and if the supremacy of religion was waning, it wouldn't go down without a fight. Continuing a theme from the first post, some of this was in fact a result of perfectly legitimate scientific doubts about the strength of the evidence, which initially had literally dinosaur-sized holes in it. In no way did evolution, at first, meet the standards of "extraordinary evidence" required of extraordinary claims. But some of it was due to purely human, emotional fallacies. Ideologies could run rampant on both sides, but ultimately, this was a fight in which there could be only one winner.


Incremental revolutions

But let's begin with a gentle look at how the theory of evolution itself evolved. A variety of social factors shaped how people accepted or rejected the same evidence, or interpreted it differently to each other. Just as the sociological situation was nuanced, so too was the state of the pure evidence itself. The final conclusion of evolution by natural selection was, undeniably, a scientific revolution, but it was not the work of a singular genius. 

One of the most famous building blocks is that of Thomas Malthus, who thought that population growth would eventually outstrip finite resources and so lead to catastrophe. Malthus crops up a few times in Impossible Monsters, not for his economic gloom-mongering, but for the more basic principle of natural change implicit in his theory. Keith Thomas' view that medieval paintings show classical antiquity in then-contemporary styles becomes easier to understand when you remember that people genuinely thought the world was just a few millennia old : there just hadn't been enough time for significant changes.

Malthus' extrapolation to the future was a small but important contribution to reshaping our view of time and thus our own place in the world. Rather than placing us at the summit of creation, we were now at the threshold of a deep and fearful pit – with no obvious mode of escape.

A second, more direct component came from Charles Lyell and James Hutton. Hutton had proposed the idea of slow, continuous geological development and change – deep time and uniformitarianism ("the present is the key to the past", as my geology teacher used to say... because he stole it from Lyell) – as far back as the 1790s. This was certainly religiously shocking and provoked no small amount of harsh invective, but intellectually it was something of a damp squib. It lay all but ignored and forgotten until Lyell more successfully resurrected it in the 1830s, in a larger, more careful work with a greater body of evidence.

Biologically, too, the idea of change was not unprecedented. Already in 1809, Lamarck had proposed that animals could change their individual characteristics over time in response to their environment. The implication that man could have been the descendant of monkeys had not gone unnoticed, but there was little evidence for Lamarck's idea of transmutation at the time.

Overall, the development of Darwin and Wallace's theory of evolution was the end result of a series of incremental advances that went largely unregarded by the general public. It was punctuated by moments of sound and fury signifying breakthroughs, but even those emerged from work of the most extraordinarily painstaking tedium. Darwin himself conducted some pretty horrific experiments (especially on pigeons), and spent so many years studying barnacles that he hated them "as no man ever did before". He applied equally diligent efforts to a study of plants.

This is not to say that the Origin of Species didn't mark a watershed moment in scientific progress : it did*. But it did not spring forth from the head of Darwin fully formed in a flash of genius. It came from a stupendous amount of specialist work by both himself and a great many others – not just Hutton, Lyell, and Lamarck, but a whole host of lesser-known figures as well. Essentially all of the major concepts were already in place for Darwin's big moment, just as so many pieces of the puzzle were ready for Einstein to assemble his theories of relativity.

* Interestingly though, an early talk shortly before the more famous debates failed to gain much attention from anyone. Taylor makes it clear that the subsequent debates, however, did play out much as in the lively fashion which popular history records.

Darwin, though, gave a plausible mechanism* for how Lamarckian-style changes might come about – one species changing into another – together with a wealth of meticulously-detailed evidence for it actually happening. Both Wallace and Darwin formulated their ideas only after global voyages lasting years, collecting their own data by hand because nothing else suitable existed. It was physically exhausting, gruelling work to come up with an idea we now take for granted. Not that it was by any means complete at this stage : the lack of a theory of genetics would remain a difficulty long afterwards. But the force of the arguments and evidence was, slowly, becoming irresistible.

* Oddly, initially he seemed a bit confused as to how natural selection related to animal husbandry, only later realising that it provided solid support for the idea.

In short, scientific advancement isn't without moments of important breakthroughs or paradigm shifts. But those owe at least as much to dedication, specialisation, and an awful lot of hard graft as they ever do to innate genius.


One side can simply be wrong

The distinction between early science and religion was nowhere near as stark as it appears today. But a difference was emerging, gradatim ferociter, into a form that many would eventually come to see as irreconcilable. 

Conflict was present right from the earliest days of modern geological investigations. Hutton's work was described as "contrary to reason and the tenor of Mosaic history, thus leading to an abyss". But disagreement was by no means unrelenting. It was, for a while, possible to frame geological discoveries so as to support scripture, and when this happened there was no problem. This did not even necessarily mean acquiescing to Biblical literalism. Catastrophism was popular not just because everyone knew that local catastrophes – even extinctions – did happen from time to time, but also because it allowed a metaphorical but acceptable interpretation of the Bible.

The basic idea was that there could have been multiple catastrophes on a par with Noah's Flood, with God remaking the world multiple times over. If any of the clergy objected to this, it wasn't on a significant scale : there seemed no problem interpreting Biblical "days" of creation as firmly metaphorical. And on the academic side, Buckland, Mantell, Anning, and very nearly all the great and the good of the early fossil hunters and geologists were devout Christians, with Buckland in particular keen to emphasise the harmony of geology and scripture. 

More interesting is the case of Charles Lyell. Whereas the others were actively trying to support scripture, Lyell explicitly declared he wished to "free the science from Moses". Yet his epic, 1,400-page Principles of Geology, though he regarded it as a "deliberate strike against religious dogma", did little to provoke anything nastier than mild criticism. Unfortunately Taylor doesn't go into why this should be : religious institutions were largely full of praise for the Principles, sometimes profusely so; specialists gave it really no more than the most modest of rebukes. How Lyell managed to avoid a harsher reception, despite resurrecting Hutton's shocking ideas, isn't clear.

EDIT : On reflection, part of the reason might be the following. Hutton explicitly drew attention to the extreme conclusions of his theory, i.e. infinite time, which was believed to be contrary to scripture. He also lacked evidence and didn't present his arguments well. Lyell was more accessible, there had been at least some movement towards considering longer timespans by the time he published, and he didn't rub anyone's nose in it. That is, he didn't make any specific claims with regards to the age of the Earth, but instead tried to let this flow naturally out of his prodigious and very carefully argued body of evidence.

Things did eventually turn ugly though. When geology was used to support scripture there was no problem, but when the roles were reversed and the Bible made to be subservient, it was another matter entirely. The clergy had no problem with science giving way before faith, but, especially in America, they had no truck with the opposite. Notably, the same could not be said in Europe, and it was a paradox of the age that while Britain initially led the world in dinosaur research, it did so from a much more conservative position. The whole idea of liberalism and European-style rationalism was openly regarded with something approaching horror.

To cut a long story short(er), Darwin was afraid of publishing his ideas for very good reasons. True, sometimes radical positions had gone largely unpunished save for criticism, but the consequences could be serious. Manuscripts were burned, authors fined, imprisoned, and socially ostracised – no small punishment for academics who weren't in the Old Boys' club. Deviants could be publicly humiliated, and some police officers were recorded as wishing they could still burn heretics at the stake. Frederick Maurice was hounded from academia merely for debating the meaning of the words "endless" and "eternity"; only through a protracted struggle did he manage to return. Others, such as Charles Bradlaugh and Annie Besant, would suffer prosecution and – in Bradlaugh's case – lengthy imprisonment for the right to atheism and socially progressive views.

The conflict between science and religion was, then, real and it did happen. And while in some ways many of the scientists were equally belligerent in their approach to debate (Huxley in particular must count as the most aggressive agnostic of his age), ultimately it wasn't a fair fight. Scientists were arguing from a position of evidence : imperfect and incomplete, but progressively improving. The faithful, increasingly, clung to their views only out of... well, faith.

Religious literalism was ultimately dealt a mortal wound with the discovery that reptiles preceded birds, contrary to Genesis; even the most metaphorical interpretation of "day" simply couldn't make the ordering work. Attempts to compromise were, in the end, futile. 

To be fair, many changed their minds in response. The majority of religious believers today aren't Creationists. But a scientific world view and one which adheres to the literal truth of the Bible are inevitably at odds. Taylor's major point here (backed up by Keith Thomas where he strays into this period) is that literalist beliefs persisted far longer than we like to think – they were absolutely normal in Victorian Britain, and hadn't been overturned by previous findings.

There are many ways in which science and religion can and do happily coexist. I personally find some of the Biblical metaphors useful : the loss of innocence of Adam and Eve, the magnitude of the cosmos expressed to Job, and of course the markedly socialist streak in some of the teachings of Jesus. In no way does this mean I believe in any of it, but for those who do, there is no reason for any sort of anti-science attitude. To me, to believe that an ancient text of any kind must be an unimpeachable Truth is just not sensible, but to deny that it contains any truths at all is equally crazy. There are innumerable straw man arguments raised by New Atheists (and indeed by Victorian atheists, as Taylor shows !) that I have no sympathy with, the sort that tar all believers as Six Day Creationists – as though you could reduce the awesome complexity of the human condition to a box-ticking exercise. You can't, and it's silly.

But, Richard Dawkins was dead right when he said that one side can be simply wrong. Even a couple of centuries ago, there was no good scientific evidence for an Earth billions of years old. Early objections did not proceed entirely from religious devotion, though that was part of it. But nowadays any position against the scientifically-determined age of the Earth or evolution is untenable. Despite protestations to the contrary, that debate has been long since won.




How exactly did religious literalism develop ? Even Greek mythology alone contains multiple versions of the same thing : different creation stories, different explanations for the same phenomena, different actions by the same heroes, different moral interpretations. It would be difficult indeed to have any sort of devotion to any one particular story in that system, which is innately flexible and versatile.

The Biblical stories are much simpler. They state unequivocally what happened with no room for alternatives. Yet even the earliest Church grandees realised that contradictions pointed to metaphorical interpretations, and theological debates raged incessantly (a very nice SMBC illustrates how taking things literally is itself fraught with difficulty). From Keith Thomas I never had the impression that a Creationist view was especially widespread in the late medieval/early modern period – and he gives pretty good direct evidence that many ordinary people actively disliked religion. They neither understood it nor saw it as especially important; the general understanding was that the practice of going to Church was what mattered, not what one believed.

Something had clearly changed by the Victorian period, but how this came about would surely take another book to explain. Whether this represents an error or only an incompleteness on the part of Keith Thomas, I don't care to speculate. I note, though, that Taylor doesn't restrict himself to the academics, with ordinary people also being a bunch of literalist zealots in his account.

On that point, the difference between literalists and zealots is also something not often discussed. I tend to think of them as synonymous, but in the Victorian period this seems not to have been the case : people accepted the literal truth of the Bible as their default position, but most of them changed their opinions when enough new data was presented. A hardcore of fanatics were unconvinced, of course, but the point is that type and strength of belief don't necessarily correlate. Presumably there's a selection effect at work here though : today, religious literalism demands irrational devotion, in a way it simply didn't back in the era of early Victorian science.

If there were some stark differences between the beliefs of the era and those of today, there were also some interesting parallels. One of them, muscular Christianity, placed an emphasis on male strength and patriotic duty that is suspiciously familiar from certain modern-day movements. How depressing that some of the stupider Victorian beliefs appear to be making such a resurgence !

There's nothing wrong with setting ambitious goals for oneself, of course. What becomes problematic is when one applies those same standards to others who might have entirely different life goals, and of course how one responds when they don't meet those expectations. This is why I harp on about the myth of lone geniuses so much (which I go into in a bit more detail on Quora; see also the comments). Such myths give the impression that all science is done by a handful of supermen and all the work by the rest of us counts for nothing. They also pander to the crazies who believe themselves misunderstood only because they're so far above the ordinary scientists.

What Impossible Monsters shows is just how much the incremental work really matters. Yes, there were moments of genius here and there (radical free-thinking did play a part), and no small amount of luck was involved too. But far more important was dedication to hard, tedious work by people with all the same flaws and virtues as the rest of us. Decades of their patient efforts ultimately achieved something stupendous : they changed the way we think.

Thursday, 26 June 2025

Review : Impossible Monsters (I)

Michael Taylor's Impossible Monsters fits in extremely well with my recent reads of mythological animals. Here are the real monstrous creatures of the past... but Taylor's focus is not on the dinosaurs themselves but how they changed our understanding of the world. A seismic shift that had begun centuries earlier was about to come to its heady climax : from a world of magic and mystics we had shifted to intercessory relationships with an omnipotent singular God. Now we were about to take another dramatic lurch, replacing scriptural literalism with scientific materialism. The last vestiges of a medieval mindset were, at last, giving way to modernity.

Here, then, is a philosophical, theological, and sociological history of dinosaurs – and of those who found them.


The Review Bit

There's so little to criticise about the book that I can keep this brief. It has an excellent balance of breadth and depth, presented in an engrossing, accessible way without dumbing down. Taylor gives mini-biographies of each of the major characters that greatly add to our understanding of how the discoveries came about, very carefully curated to balance the purely human, amusing side of things with informing the reader of just how different the academic process was in Victorian Britain. He covers a range of topics with sensitivity and impartiality, from the sociological situation (such as the treatment of the poor and destitute) to the theological disputes (especially the big one, over whether the Bible needed to be taken as literal truth).

Rarely, if ever, does he get on a hobby-horse and rant at the reader about the absurdity of religious literalism. At the same time, he's abundantly clear that it's Wrong With A Capital W. He balances the attempts to harmonise science and religion with the painful conflicts that occurred between the two. The latter, he says, happened loudly and frequently. This is in stark contrast to an earlier documentary which claimed (as I remember it) that Creationism and the like never really got going until decades later, and that the initial reaction to evolution was actually fairly muted. 

Not so, says Taylor, putting forth a wealth of evidence to the contrary, and always careful to set everything in context. For example he compares the sales figures of Darwin's works with those of other (religiously moderate) texts*, which initially far exceeded them; he notes that when Paine's The Age Of Reason was censored, this wasn't because of the work's particular nature but part of a prevailing approach to anything deemed offensive at the time. Even so, there was no shortage of incidents of harsh invective from both sides, sometimes with severe and serious consequences for those caught up in the debates.

* Of which Darwin himself approved, despite having all but lost his own personal religious faith by this point.

In Taylor's history, science ultimately wins – but there's no unnecessary triumphalism here. He tries quite earnestly to get into the literalist mindset rather than pronouncing believers as simply idiots, and isn't afraid to criticise the scientists either. What emerges is, if you'll forgive the clichés, an extremely rich and nuanced picture, colourful characters rather than any monochromatic sort of good-versus-evil struggle. These were real people with all their foibles : Darwin's staunch social conservatism persisted despite his loss of faith; Richard Owen was highly intelligent but capable of being a Right Stupid Bastard; Huxley's agnosticism was often viciously aggressive. And to Taylor's great credit, what could have been a complex, messy narrative is always kept clear and on-point.

There is one glaring omission, however. Taylor chronicles in detail both the geological discoveries and the changing social and scientific reaction to them. But what drove those changed responses ? At the start of the 19th century, someone speaking out against scripture would certainly be socially ostracised and likely fined or even imprisoned. By the end of the period, atheist scientists were being awarded high honours. So was it the changing geological findings that drove a loss of faith, or was it the loss of faith that allowed for new interpretations of the evidence ? What specifically was different in Victorian Britain that allowed for such radically different understandings of the same sort of dinosaur bones that had undoubtedly been unearthed for centuries past ?

To be fair, this is really the only difficulty of any substance I have with the book – but it's a pretty big one. Overall, I'm giving this one 8/10. I suspect it's a book that will age well and become a lasting influence.


The World According To Philosoraptor

Taylor references Jurassic Park a couple of times, but surprisingly never uses any philosoraptor memes. Oh well, a missed opportunity. On the other hand, most of the ideas discussed in the book don't lend themselves easily to meme-format, so perhaps it's for the best. So here's my summary of the most interesting themes from the book, delivered as good-old-fashioned text rather than the considerably more popular captioned images.

In this first part, I'll look at the complexities of distinguishing science from religion at the start of the period, as well as just how weird some of the major characters were. In part two I'll cover how the theory of evolution was itself, in part, the result of an evolutionary rather than revolutionary process, and how, while science and religion can sometimes live together happily, sometimes they can't help but fight it out.


Blurred boundaries

Taylor promises not to go on a polemic against religious literalism and he does indeed manage to steer clear of this. He clearly views such ideas with disdain, but he makes a pretty good attempt at sympathising with the believers as people. Often they were, at the time, genuinely doing the best they could with the evidence they had at hand (the same cannot be said for modern lunatics, of course).

While most of the book covers the period of the 19th century, the prologue examines the famous estimate of the age of the Earth by Bishop Ussher. It's worth remembering that this now obviously nonsensical result was obtained a mere two centuries before the Victorian era, and while there had been other similar calculations, this was one of exceptionally diligent effort*. And it was a calculation. This was no mere quick reading of the Bible and trying to make things add up on the back of an envelope. This was a slow, tedious examination of a multitude of historical documents that took years to complete. It was a meticulous, scholarly effort – you could even call it scientific, in its way.

* Though other estimates had reached similar values. Ussher's would also likely have faded into similar obscurity, had he not been fortunate enough to have it included in the commentary printed with certain editions of the Bible.

Without the careful, fully scientific results that would take thousands of people centuries longer to obtain, Ussher simply had no better methods available. The conceptual leaps needed to arrive at the modern value were monumental : atomic theory, thermodynamics, whole forms of mathematics... he had none of this. It wasn't his fault, the poor sod.

And if doubts about the age of the Earth were creeping in by the start of the Victorian era, they were weakly founded, and nowhere near strong enough to challenge the prevailing literalist wisdom. At most there was uncertainty and concern rather than any genuine rival theories; no alternative, self-consistent world view had yet been presented. Again, the conceptual leap needed to go from an Earth thousands of years old to billions is vast, and can't – shouldn't – be done without extraordinary evidence. Scientifically, Ussher and the early fossil hunters were on surprisingly solid ground, given what was actually known at the time.

It's a similar story with the development of the central theory in the field : the theory of evolution, the climax of Taylor's book. If the boundaries between science and religion were not always well-delineated, then evolution was by no means the self-evident notion it appears today. At least, not at first. One of the major problems was, ironically, with the fossil record. To arrive at evolution proper required a series of advances more than a singular inspiration (as we'll see next time), but one of the problems was that animals in the past should, naively, have been simpler than modern ones. Dinosaurs, being huge and complex beasts, did not fit that pattern at all.

Nor, with the fossil record still being hugely incomplete, were there any signs of clear progressions. It all just seemed too chaotic, too haphazard. This fit neatly with the widely-held idea of occasional catastrophes, which nobody took issue with since these demonstrably happened. But there was a huge implicit bias that any sort of slow, incremental change must necessarily be progressive. The idea of speciation, and in particular adaptation to the current circumstances (probably Darwin's biggest and most unique contribution to the field), was of an altogether different order to the kind of changes that were accepted to occur. Everyone could see disasters for themselves, but you couldn't see speciation (or even lesser developments) unless you were extremely dedicated and carefully looking for it.

So in essence, just as in the very distant past the idea of a spherical Earth would have contradicted all the evidence any sane person could see, so too was evolution on a much more rickety pedestal than it is today. You could very credibly argue that its early advocates were making nothing less than a leap of faith in adopting it, because some of its findings went against the available evidence rather than being shaped by it. Not all of evolution's (early) detractors were skeptical simply because of religious devotion : just as with Galileo, some doubts were scientifically legitimate.

The human factors should not be neglected either. Richard Owen, who coined the word "dinosaur" and founded the Natural History Museum, could be a horrible elitist snob, as could much of the Geological Society. Indeed Owen coined "dinosaur" as a deliberate way to highlight the apparent absurdity of the complexity of earlier creatures. Remember, this era was not far removed from a period where even the notion of extinction implied the heretical idea that God could make mistakes : prevailing wisdom is difficult to overcome because sometimes we're not even aware of the assumptions behind it. But try and flip it around and things become easier to understand. If today an academic were to try and claim a divine origin for all things, then the difficulties they would encounter become clear.

A few final points. Materialism (the idea that the physical world is all there is) was a problem both socially as well as theologically. Not only would it flatly contradict the Bible by leaving little or no room for God, but it would also give the lower orders... agency. Suddenly everyone's brain would become, after a fashion, equally capable of understanding and reason, or at least having equal potential. Any idea of nobility, any hint of being one of the elect*, would be gone. This was the dramatic restructuring of thought that the Victorians were being demanded to make.

* On a tangential note, the idea of God's Chosen People baffles me. Whenever this crops up, it's never clear why God made that choice – if he did it just on whimsy, then he's clearly a moron. Theologically this seems bonkers.

To give this some context, Richard Owen was progressive by the standards of his day in that while he regarded certain races as far inferior to others, he did at least accept them as human. The more conservative elements of his day... didn't. They literally believed that some people weren't actually people. So the idea of allowing them freedom of opinions was a tough selling point, to say the least.

It's worth stressing the character complexities a little more. Owen could be an academic thug, but he was also a dedicated champion of science and public outreach. He also had some experiences which were truly bizarre, involving decapitated heads and ghost stories (I give a proper quotation here). And of course, he was religiously conservative : he could accept evolution if it was pre-ordained, but the idea of natural selection was to him a step much too far. 

Other religious figures responded differently to scientific progress. While the various crises that afflicted Mary Anning made her cling all the more deeply to her faith, the loss of Darwin's daughter saw him all but lose his. The great fossil hunter Gideon Mantell refused to be swayed away from his faith, while Charles Lyell was sympathetic to the principles of evolution but couldn't stomach the idea of mankind arising from apes. And, completely independently of geology, Bishop Colenso decided that the Bible was actually riddled with contradictions* and was excommunicated for saying so (he was later reinstated despite his highly unorthodox views). 

* For example, he was required to deny the existence of witchcraft despite witches very much existing in the Bible.

Finally, the famous Alfred Russel Wallace later became skeptical of natural selection, but vigorously defended the scientific method : he was, confusingly, staunchly opposed to the Flat Earth movement, anti-colonial, pro-phrenology and pro-supernatural. Characterising anyone as simply religious or atheist or agnostic is, then, a calamitous oversimplification of the extremely complex factors that shaped their highly individual beliefs.





The boundaries between science and religion, the faithful and the heretical, are awfully confused. Equally, the transformation from a religious to a secular society can't be described as any single process. Sometimes there were incremental changes, one small idea leading inexorably to another small idea, with differences only becoming clear after a long series of such developments. But at other times there were sudden, lurching realisations and discoveries that brought about revolutions far more quickly and with much more aggressive debate. And while in those slow, progressive periods compromise and debate are often the order of the day, at the point of revolution there are usually clear winners and losers. That's what I'll look at in part two.
