Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Tuesday, 31 May 2022

Review : Athelstan

When I reviewed Marc Morris' excellent The Anglo-Saxons, I said I'd refrain from commenting on Athelstan until I'd completed Sarah Foot's book of the same name. Time to make good on my promise.

I'm going to start by saying that Athelstan is a hugely overlooked figure in British history. He easily ranks as the most powerful of all the Anglo-Saxon monarchs, bar none, and one of the greatest British kings of any era. Noted for his piety and power even on the international stage, he brought his grandfather Alfred's achievement to its ultimate fruition : where Alfred had saved his country from the Vikings, Athelstan advanced it against them. Outright conquering the whole of England, he subdued the Welsh and Scots (leading an army to the very northernmost fringes of Scotland), so that he has a fair claim to be the first king of all Britain. This is not someone we should have forgotten about. Morris does an excellent job of describing Athelstan's achievements, so I was hoping Foot would give even more insight into this little-known but crucial figure from history.

She doesn't. I have to say that the book is just not very good. Foot begins well enough by saying she wants to write a biography more than a history, but she ends up doing neither. Virtually the entire book is ponderously swamped by an excess of tedious minutiae that conveys little historical information and absolutely no biographic detail whatever. The history of Athelstan is confined almost entirely to the first summary chapter, with all the rest of the book but the epilogue being largely pointless. The character of Athelstan is scarcely evident here at all.

Whereas Morris has a knack for recasting uncertainty into curiosity, Foot just comes across as pathologically indecisive. Instead of trying to present a flowing narrative of Athelstan's life, she considers every possible interpretation of every bit of unimportant evidence, committing to absolutely nothing on even the smallest detail. It's a truly torturous and frustrating experience for what should have been at the very least an intriguing read.

Now to my mind the right approach when writing a book on a subject about which little specific evidence survives is to present the general context : what is known from other sources about how Anglo-Saxon kings usually behaved, how they ran their court, how religion affected them, what life in general was like for their subjects. Where specific evidence is lacking, paint me a picture of the most likely version, make the uncertainty clear, and we're all good. But Foot just doesn't do any of this.

Foot is not even any good at headings. In the chapter on "Family", the sub-section "Aunt" contains almost nothing about Athelstan's aunt at all. The chapter "Church" is concerned mainly with Athelstan's patronage of poets. And the entire chapter "Court" is basically a long list of places and dates, with nearly zero information on how the court was run. The best interpretation I can think of is that this is a scholarly tome, and if you're looking for a dry and lifeless list of primary and secondary sources about Athelstan, this might be suitable - as a popular history, it's a miserable failure. The whole book is a series of disconnected, tedious, inconclusive statements which... no. I just don't like it at all.

Even a stopped clock is right twice a day, and there are some interesting details scattered throughout*. For example, that we find such clerks in Anglo-Saxon England as Israel the Grammarian : the name being biblical rather than indicating Jewish origins. Or how trading was allowed on Sundays - very much against my expectations, but relegated to a throwaway statement. And perhaps more importantly, how English was used as the official language of government, in marked contrast to the post-Norman era (I had assumed it would have been Latin). Foot even becomes dangerously risqué at one point, making an actual joke (!) about how Athelstan was "tough on theft, tough on the causes of theft".

* If I really wanted to, I am quite confident I could extract the best bits from this and Morris and produce a quite interesting promotional pamphlet of 50 pages or so about how awesome Athelstan was. Unfortunately I have no desire to read this book again in a hurry, so I won't.

There are two things from the book I find interesting, besides some minor details about how Athelstan was genuinely pious, concerned for the welfare of the poor, and cultivated a rather cosmopolitan court. The first is Athelstan's use of poetry as propaganda instead of conventional prose descriptions. Foot speculates that even Beowulf might have been composed during this era, though since she doesn't give any quotations from the other poems, it's hard to judge whether this fits the style of the era or not. The whole thing is flavourless... she doesn't use the word "propaganda", but probably should have. There is however at least a quote from another scholar about how awful Anglo-Saxon rhetoric tended to be :

The object of the compilers of these charters was to express their meaning by the use of the greatest possible number of words and by the choice of the most grandiloquent, bombastic words that they could find. Every sentence is so overloaded by the heaping up of unnecessary words that the meaning is almost buried out of sight. The invocation with its appended clauses, opening with pompous and partly alliterative words, will proceed amongst a blaze of verbal fireworks throughout twenty lines of smallish type, and the pyrotechnic display will be maintained with equal magnificence throughout the whole charter, leaving the reader, dazzled by the glare and blinded by the smoke, in a state of uncertainty as to the meaning of these frequently untranslatable and usually interminable sentences.

Well, gosh ! Sir, that is some commendable hyperbole about hyperbole right there. If only Foot herself could write like this, the book would be a masterpiece. And to be fair she does mention how the formulaic structure of the charters meant they could be rendered accessible to the commoners - that the high language intended for the elites was eventually followed by a clear statement of what was actually supposed to happen. It's not that Foot doesn't have interesting source material to work with, it's just that she does a lousy job of curating that material - or even of giving any quotes from the sources, so the reader could at least get the gist of things. Give me a few of these "interminable sentences", at least.

The second thing I do like about the book very much is the epilogue. Here Foot does a genuinely excellent job of describing the history of how Athelstan came to be forgotten. As late as the 18th century, he was still in the public consciousness as a great king from the heroic past. His decline was slow, with no clear single cause, but there are several factors. First, there were allegations (likely erroneous) that he was illegitimate and murdered his brother to ascend the throne. Second, he had no heirs, apparently a political choice in order to bolster his claims to kingship over multiple rival peoples. This meant he had no direct descendants with any motivation to sing his praises. Then, his achievements were quickly undone after his death, and his fixation on poetry rather than straightforward record-keeping may have limited the amount of biographic detail available for later scholars to draw on.

The biggest factor appears to have been a Victorian obsession with his grandfather, Alfred. By depicting Alfred, completely wrongly, as the first king of Britain, the Victorians rendered Athelstan's main achievement impotent. As Alfred's star rose to the ludicrous height of "the most perfect character in history", so Athelstan's was doomed to sink into obscurity. This is a tragedy. Only here, at the end, does Foot's passion for her subject become evident. Elsewhere the miraculous survival of artefacts directly related to Athelstan elicits no emotion whatsoever, but it's clear that she truly wants Athelstan back in the public consciousness. It's just a shame that this book isn't going to be the one that does it.

Overall, I can't give this more than 2/10. There's interesting material in here, but it's arranged badly and described with all the enthusiasm of a dead moth. Maybe one day Athelstan will get a more deserving accolade, like his own movie, but I won't hold my breath.

Monday, 30 May 2022

Materialism is wrong but useful

I'm slowly trying to resume a long-delayed attempt to blog about Peter Godfrey-Smith's excellent Metazoa. This is a wonderful book which looks at the materialist perspective on animal minds. Regular readers will know that the notion that mind and matter are the same thing is not one I have any truck with, and it's just lucky for Godfrey-Smith that I like his writing so much. Otherwise I'd have to get jolly cross with him for being so silly.

I realised that in my early draft of the post I'd gone off on a long tangent, so I've decided to spin this off into its own post here. Since I think the whole concept is manifestly daft, it might surprise you to learn that I nevertheless have some strong materialist sympathies, and of course this warrants an explanation.

Specifically, Godfrey-Smith contends that at the very least, life, though not necessarily mind, can be explained through materialism. Here I tend to agree. I will even go much further. I will generalise and say that materialism is an outstanding and necessary premise for all science. For science (note the strong emphasis there !), observable, physical components of reality are all there are, and there is no need to invoke anything extra in order to construct accurate models with strong predictive powers.

Long-term readers will now be hanging their heads in despair. "Hang on a minute, Rhys", you might say. "You said you were an agnostic !* That post took me six days to read, you bastard ! And didn't you also say that you really like this gif as an analogy to science ?"

* I have great respect for my small but thoughtful readership, so much so that I assume they can actually convey hyperlinks through the spoken word.

Indeed so, attentive reader.  Let me address the second point first. 

(Incidentally, if you haven't read the giant post about atheism then don't bother yourself with that right now, but the link to Ian Wardell's piece about the gif is worth your time before proceeding here.)

I do think the analogy of unobservable mechanisms is extremely useful. But it is by no means complete. When early man looked at the horizon, he surely thought, "mmm, mammoth steaks tonight, me likey." Afterwards, he probably wondered, "me tired. me keep walking forever, or will me fall off edge of world ?". He would have had little way to know for sure. Slightly less early man would start to speculate that the Earth was round, and would have been able to keep walking to take a short cut home, but he too would have had little way to test this directly.

For those early thinkers, the shape of the Earth was like the triangles in the gif. It couldn't be observed directly, but the different ideas made different predictions. Initially, both round and flat models gave equally good results. As more and more observations were accumulated, divergences between the theories and observations became greater and greater. Eventually, long before Magellan's expedition actually circumnavigated the globe, the discrepancies became so great that one model was all but directly observable. The triangles had become visible.

So it is with most (though perhaps not all) science. We can make predictions for which testing requires instrumentation we may not access for many decades, centuries, even millennia. But eventually, good science brings the results into view, such that the findings explain themselves. Once you see the triangles, it's game over.

There are major caveats though. If you see the triangles but not, say, the circles, that doesn't necessarily mean the circles don't exist as well or even that the triangles are definitely the cause of motion. But this can be established : atoms aren't illusions or convenient models any more than the shape of the Earth is.

That's the superficial interpretation, which holds up reasonably well in a limited way. But of course, the gif is better than that. It works as an analogy because it shows how a whole multitude of geometrical patterns work equally well as an explanation. The original artist may have used any one of them when creating the animation, or something else entirely. It was generated by some mechanism, but we can't know from the gif alone exactly how this was done. So even "seeing the triangles" is not necessarily enough, although the basic point I made still stands : it is theoretically possible to go to the artist and ask exactly how it was made. We can peep behind the curtain, if we have sufficient data. There is no need to keep digging to infinity.

It's not necessary to dive too deeply into the nature of reality or existence here, but we do need to at least go paddling. The "applicability conditions" are relevant. That is, within the everyday macroscopic realm, chairs definitely do exist - but at the subatomic level, there is no particle or property of chair-ness. Chairs do exist within a certain domain, but not within others. Likewise, perhaps at some deeper, underlying level of reality atoms don't exist either, but that doesn't make atomic physics somehow wrong, let alone lend any credence at all to a Flat Earth. Creationism remains monumentally stupid.

So I take materialism to be entirely correct, as a scientist. That doesn't mean there isn't something going on behind the scenes, it just means it's not important in understanding the observables. The universe, I quite happily take it on faith, can ultimately - or more probably only to a very large degree - become self-explaining with sufficient observations. If you want to believe in something else going on as well, or even instead, then that's fine with me. It's only if you think you need a divine influence to explain observable phenomena, if you not merely ignore but actually reject the materialist explanation, that we're going to have a problem... but more on this when I eventually tackle Berkeley.

The point is that the scientific, materialist perspective says that you can explain the observables using themselves, and don't need to invoke anything else. And this is something you probably have to assume while you're doing science. You do not have to make this assumption - indeed, should not make it - while doing philosophy. This is why I earlier placed a strong emphasis on science and not on scientists.

What, though, of the notion that the data doesn't speak for itself ? This is important. Usually in research, data is scant. A host of theories are equally valid; which one you prefer and which data you select is indeed down to personal preference. But all the same, clearly the Earth is round. Clearly viruses do exist. Bricks are demonstrably painful if you drop one on your toe. You can't argue your way out of a punch to the gut.

As I mentioned already, sometimes fact and theory are interchangeable. The observation of a round Earth inevitably constrains predictions for navigation - there is no way a model which uses a flat Earth can be made to work : the data does have direct implications for itself. But how do we reconcile this with the notion that interpretation is invariably a subjective mental process ? Is it just because there are limitations specific to certain cases, or is something more fundamental at work ?

A possible solution to this apparent paradox may lie again with the applicability conditions. Within our perceptive domain, within the self-consistent nature of the data, with all the senses and natural interpretations available to us (I mean the most fundamental ones of all, like space and time, concepts which we appear to be hard-wired to believe in), a round Earth is a direct result of the data. You can't arrange things in a circle and say there isn't a circle. But to an entity which perceived even these apparently most basic aspects differently, perhaps even "roundness" wouldn't make sense as a concept. They would have to employ some other, utterly different model.

A rough analogy might be how if you assume a theory is true, then anything that contradicts it must be taken to be false. If and when you're working within the paradigm of, say, the planets moving on solid crystal spheres, then anyone saying that comets are icy bodies that cross the orbits of the planets must be mad - great lumps of ice ought to shatter the spheres, but clearly this doesn't happen.

This doesn't mean your paradigm can't change, of course. But while you accept it, just as while you accept the notions of space, time, matter, solidity, etc., then you can't possibly accept certain statements as anything other than insane. So the data can speak for itself, albeit in a limited and provisional fashion. And, importantly, the fact that what you understand by space, time, solidity and all the rest might not be the Absolute Truth of Reality, does not mean that your observations are meaningless. Contrary to a recent argument, when I say I'm "certain", I don't mean it as a mere approximation, but that I am literally certain - within my own paradigms, at least. More on this when I get around to Hume, I suppose.

In short, I am quite prepared to accept that the true, deepest nature of reality is inaccessible, and might even feature something one could legitimately call divine. When I consider philosophy, I reject materialism as absurd. But when I practise science, I embrace it as necessary. I have little truck with the notion that we actually need this deeper aspect of reality to explain anything observable, saving, importantly, our own inner awareness. The intersection, where conscious beings make choices based on purely subjective qualia, is absolutely fascinating, but fortunately it does not appear to have any bearing on galaxy evolution or gas dynamics. If it did, I'd be in a right pickle.

The illusion of qualia : why the sky is not blue

A recent online discussion did its usual thing of degenerating into philosophy about qualia and suchlike, so I want to record some of the main points that seem important to me. In particular, it felt like there was an attempt to define colour to be something explicitly and exclusively materialistic, e.g. blueness is a certain wavelength range of a photon, which I think is absolutely impossible. This has similar vibes to an earlier discussion in which colours were outright claimed to not be qualia at all, which felt really bizarre, so even if I misinterpreted something, this post should still have some value. I will try to set forth why colours are indeed qualia, and why we may meaningfully speak of objective and subjective reality despite only ever having direct access to the subjective version.

Here is how the colour situation seems to me.

Objects reflect or emit photons depending on their material properties. The photons are received by my eye and then my befuddled, beer-addled brain does its best to form an image out of them.

This corresponds to stuff that occurs “out there”. In this regard we do have some limited form of knowledge of the external world. We can only know it through perception, which is error-prone and always incomplete, but rarely wholly misleading. Repeat observation under different circumstances lets us establish things with arbitrarily high confidence (I would even argue for a kind of true certainty, but that can of worms can be left safely closed).

What this means is that I can establish that there is something out there that causes blueness in here. Under the same circumstances, the sky will always look blue. Does this mean that I can say that the sky itself, therefore, is blue ?

No.

Well… maybe. Sort-of. Not really.

That is, we can say, “the sky is blue” as a convenient shorthand. I don’t need to stipulate the exact functionality of the eye or the precise meteorological conditions. We all know that what I really mean is that, under typical conditions and with typical human eyes, I will see a colour corresponding closely to that of other objects prone to inducing a similar experience (albeit themselves sometimes under different conditions, with the experience itself being qualitatively similar). So as a shorthand there is no problem with saying “the sky is blue”; indeed, it would be monstrously stupid not to use this in everyday speech.

But strictly speaking, blueness itself, in my opinion, is very much “in here” and never to be found “out there” at all. If I never experienced anything but the light from sodium lamps, my whole colour experience would be profoundly, utterly different. I would have no way to know that there could be such a thing as “blueness”. Knowledge that photons could have wavelengths unfamiliar to me would not help me imagine blueness any more than I can presently imagine the colour of 21cm radio waves. I would know that experiencing other colours might be possible, but have absolutely no knowledge whatsoever of the experience itself. Mary is trapped in her room forever…

(Leaving aside the minor detail that we can perceive colours without photons present at all, e.g. phosphene vision from tactile stimulation, so perhaps some level of blueness is always present in the noise.)

And even photons purely of the specified wavelength are of no help here in defining colour : not, at any rate, the qualia of colour. True, if we receive nothing but those photons and we all have similar eyes and brains we should all experience much the same thing, but that does not mean the photon itself is blue. It only means that it induces the same qualitative experience given our similarities. It doesn’t cause blueness in other animals - or the severely colour-blind - any more than the 21 cm waves cause colour to be induced in me.

Likewise the famous "no red pixels" illusion : I do not think it is correct to say that the strawberries appear to be relatively red; they appear actually to be red to me. Hence photons can only be said to induce colour, not to have colour in and of themselves. Colour is not something they have independently of being observed. It makes no difference at all if one person perceives red while another sees green : the photons themselves will have the same energy and wavelength.

To further emphasise the point : if we take only a small section of the image, we will see blue, whereas if we see the whole image, we see red strawberries on a blue background even though the wavelength being received from the original section has not changed. Hence colour is not wavelength. The same wavelengths are capable of inducing different colours in different situations.
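This is easy enough to verify for yourself. Below is a minimal sketch in Python of the sort of check I mean ; the filename and the pixel coordinates are my own illustrative assumptions (you'd adjust them for whichever copy of the illusion you have), not values from any particular version of the image.

```python
from PIL import Image  # pip install Pillow

# Hypothetical local copy of the "no red pixels" strawberry illusion.
img = Image.open("strawberries.png").convert("RGB")

# Illustrative coordinates assumed to land on the "red-looking" berries ;
# adjust them for whichever copy of the image you use.
for x, y in [(120, 200), (240, 310), (400, 150)]:
    r, g, b = img.getpixel((x, y))
    print(f"({x}, {y}) -> R={r} G={g} B={b}")

# In the illusion the red channel never dominates : the sampled pixels
# come out grey or cyan-tinted, yet in context the berries look red.
```

The instrument-level numbers and the perceived colour simply come apart : the pixels report one thing, the visual system insists on another.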

This means there is no such thing as absolute, objective colour. Certainly blueness is something I do indeed perceive, though. A multitude of objects are capable of inducing blueness in a plethora of conditions, but there is no need except (importantly !) convenience to say that the objects themselves are blue. The properties of the photons “out there” are invariant; the experience of colour is entirely internal and subject to a host of both external and internal influences. It is only because conditions (both internal and external) are so frequently similar that we feel we can say that blueness is a property of the objects themselves. Aliens living on the Planet of the Sodium Lamps* would not see this at all, and there is no reason my claim on reality should be greater than theirs.

* Worst Doctor Who episode ever.

Hence, it does not really matter if my blue is the same as your blue. Blueness is something the brain ascribes, not something that is found externally. Someone not perceiving blue when others do is in no way “wrong” or “mistaken”, in the way they would be if they measured the wavelength incorrectly. They just have a different, equally valid perspective.

At least that’s my opinion anyway : we can surely say “objects are blue” when analysing scientific properties, because we all understand the contextual convention, but we would be extremely foolish to keep saying it once we go beyond this and start discussing the nature of reality.

This is how I try to square the common-sense notion that "bananas are yellow" with the more rigorous examinations that show that colour is a purely internal, photon-independent phenomenon. It depends on the context in which we speak : in everyday life we all implicitly assume common viewing conditions, but this does not apply when we're discussing things at a much more fundamental level. We can legitimately say that objects "have" colour... but the meaning of this statement is categorically different from the subjective experience of colour itself, which is an internal matter we can never communicate to anyone.

As also pointed out, dictionary definitions are no help without common experience. Once you've experienced blueness, you can define it in relation to those conditions that induce it - but knowledge of the wavelength range of a photon still tells you nothing about the experience of it. Yet that definition may very well still be extremely useful : knowing I need photons of a certain wavelength range for an experiment, I can set things up completely independently and don't need reference to my own prior internal experience at all. So saying that a wavelength range corresponds to a certain colour has genuine value. 

This, then, is the sense by which I mean we can speak of objective and subjective reality. Strictly speaking, the number I read off an instrument is experienced subjectively, but if I give you that number to reproduce a "blue" photon, you can do so. In contrast if I just tell you I saw something blue, and you have never seen something blue before, I have told you absolutely nothing; you have no way at all to recreate what I experienced. The objective and subjective clearly are qualitatively different.
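To make the contrast concrete, here's a trivial worked example ; the 470 nm figure is my own illustrative choice from the conventional blue range, not a number taken from the discussion above.

```python
# The objective, communicable side of "blue" : nothing but numbers.
wavelength = 470e-9          # metres ; an illustrative blue-range value
c = 299_792_458              # speed of light, m/s
h = 6.62607015e-34           # Planck constant, J s

frequency = c / wavelength   # ~ 6.4e14 Hz
energy = h * frequency       # ~ 4.2e-19 J
print(f"frequency ~ {frequency:.3e} Hz, photon energy ~ {energy:.3e} J")

# None of these numbers conveys the experience of blueness, but they are
# sufficient for anyone, anywhere, to reproduce the photon I measured.
```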


To sum up : photons induce an internal subjective experience of colour, depending on their own wavelength, our viewing apparatus (our eyes and brain), and, crucially, the surrounding context of other photons. We can say that objects have colour in that they reflect a certain fraction of photons of a certain wavelength range, but this is not at all the same as subjective experience : seeing red is different from knowing the numerical values of the corresponding photons. Indeed, if we define colour purely as a wavelength range, when we objectively measure the strawberry image, we will find the entire image is blue - which it clearly isn't !

Colours, then, are indeed qualia, in a particular definition : they are subjective experience and nothing else. Our brains may well be assigning completely different experiential qualities to different wavelengths in different situations (certainly wavelength does not uniquely define the experience); we have no idea if my red is the same as your red. Colour could also be said to "be" wavelength, but this is in a categorically different sense to that of actual perceived colour. It is not unreasonable to use these two radically different definitions, but the context needs to be rigorously understood. The former is crucial for certain philosophy discussions; the latter infinitely more useful for scientific analysis.

In short, the titular claim that the sky is not blue is correct, but if I don't specify the context, I can rightly expect to be viewed as mad.

(We need not even approach the kettle of fish that is astronomical colour, which is something else again.)

Saturday, 28 May 2022

Could hypnotoad really take over the world ?

An important question, one which this article unwittingly hints at...

But first, though I've had a passing interest in hypnosis since Paul McKenna's TV stage show (I even saw it live once), I've never read a description of what hypnotic suggestions feel like to experience :

"I want you to pay very close attention to your hand – how it feels, what is going on in it. Notice whether or not your hand is a little numb or tingling. The slight effort it takes to keep from bending your wrist. Pay very close attention to your hand. I want you to imagine you are holding something very heavy in your hand, such as a heavy book. Something very, very heavy. Hold the book in your hand. Now your hand and arm feel very heavy with the weight of the book pressing down."

Out of nowhere, there it is in my hand. Eyes still closed, I marvel at the weight of it. It feels just as though there really is a substantial volume in my outstretched hand – the only way I can tell it's not a real book is that I can't feel the touch of its cover in my palm.

"As it gets heavier and heavier, your arm moves down more and more, getting heavier, heavier, heavier, heavier, your hand goes down, down, all the way down…"

And it does. Terhune hardly has time to finish the suggestion before my hand hits the sofa. From the direction of his desk, I hear the scratching of pencil on paper. I still feel calm and relaxed, but somewhere in my head a small voice is saying, "Wow!"

Then another test – Terhune tells me to hold my arm out straight ahead. "This time what I want you to do is to think of your arm becoming incredibly stiff and rigid," he says.

And it's as if my elbow is made of dry, splintery wood. The sensation isn't as strong as the heavy book, but there is certainly a resistance there as I try to bend at the elbow. After a moment, I'm able to push through it and the sensation eases. But it's an effort.

Which suggests that perhaps the Matrix is in principle possible : you do not need sensory input to experience sensation. You don't need photons to perceive colour... but the mind's eye tends to be a pale shadow of real sensation. This hypnotic state, however, sounds a lot stronger. Interestingly the reporter was not susceptible to other suggestions like hearing music or having a dream, though some people are. Why some suggestions work and others don't is unclear.

This brings in hypnotoad. A few of the most susceptible people can be persuaded to forget the name of objects, but could you persuade them to do something more dangerous, even malevolent ? The difficulty seems to be that ordinary people are actually willing to do dangerous and malevolent things anyway (much more than you might suspect), so establishing whether hypnosis is responsible is complicated :

Barnier also used a control group – people who hadn't been hypnotised, but simply asked to send her a postcard every day. "I said, 'I'm a PhD student and I'm just trying to write up my thesis. Here's some postcards, will you just send me one every single day?'"

Perhaps surprisingly, this group also obliged. When Barnier called them up to talk about their experience, they were more prosaic. "They said, 'Well, you seemed desperate.'"

In 1939, one alarming experiment gave deeply hypnotised participants the suggestion to grab a large diamondback rattlesnake. The participants were told the snake was just a coil of rope. One participant made to grab it – but was prevented from doing so by a pane of glass.  Another came out of hypnosis and refused. Two other hypnotised participants weren't even told the snake was a coil of rope, and both went to grab it anyway. Two of the participants were then given the suggestion that they were angry with an experimental assistant for putting them in such a dangerous situation. They were told they would be unable to resist throwing a flask of concentrated acid in the assistant's face – both did (in a sleight of hand, the real flask of acid had been switched with a harmless liquid the same colour).

A control group of unhypnotised people were also asked to take part – but most didn't get far as they were terrified of the snake and wouldn't go near it. The findings were replicated in another study in 1952, but later investigations criticised that the controls hadn't been put under the same pressure as the hypnotised group, making the comparison unfair.

An experiment in 1973 sought to address the question more robustly, putting hypnotised and non-hypnotised participants on an equal footing. One group of university students was hypnotised and given the suggestion to go out on campus and sell what they were told was heroin, the other group was simply asked – both went out and did it. The experimenters got into trouble though, because the father of one of the participants was a professor on campus. He was "less than delighted" to find his daughter had been attempting to peddle heroin to her peers.

"The conclusion is, undergrad students are willing to do some crazy things," says Terhune. "It's nothing to do with hypnosis."

Nevertheless, since you can make people think they're perceiving something which isn't real, surely that has at least all the problems associated with ordinary manipulation. So I would be very surprised if hypnosis couldn't be used with ill intent; the scene in Doctor Who in which the hypnotised population of Earth refuses to commit suicide may be wishful thinking. Conclusion : hypnotoad probably could take over the world.

Is this yet another sign of how awful we all are ? Are we all just highly suggestible sheeple, craving demagogues to lead us to our own destruction ?

Actually, I don't think so. There's an awful lot of (quite understandable) cynicism around lately, of which more in a future post, about how we're just such a shitty species. I refuse to agree. I think it is more a case that our capacity for great achievement, that is, for great creativity, automatically entails a capacity for great destruction. For as someone once said, every act of creation is also an act of destruction. Even painting a picture destroys the blankness of the canvas. Everything is change.

I think our problem lies not in our nature but in our failure to manage ourselves. Almost all of our worst impulses - greed, anger, selfishness, even hatred - can be harnessed to good effect. The problem is that we don't have a system anywhere near sufficiently robust to ensure these tendencies are used appropriately, to use our anger to correct injustice, our selfishness to demand higher standards for all, our hatred to tackle the outright villainous. We have ended up in an inequality trap, one that is very difficult to break out of and frighteningly easy to perpetuate.

It might be a step too far to say that how susceptible someone is to being hypnotised depends on how creative they are. But it would also be too simplistic to put it down to an overall tendency towards suggestibility. It's more interesting than that :

There are also some indicators of personality traits linked to hypnotisability – but not at the level of the "Big Five" traits: highs and lows alike can be extroverts or introverts, agreeable or disagreeable, neurotic or emotionally stable, open or closed to new experiences, conscientious or highly disorganised. However, some subtler characteristics are more commonly found in highs – such as being more imaginatively engaged, responsive to environmental cues or predisposed to self-transcendence, says Terhune.

Anecdotally, the hypnotism researchers I spoke to describe a few traits they often see in highs. They're the people who get so engrossed in a book they lose track of their surroundings, or who scream out loud at jump scares in films.

High or low, research shows that you are stuck with your level of hypnotisability throughout life. A 1989 study at Stanford University tested 50 psychology freshmen students for hypnotisability and retested them 25 years later. The former classmates had remarkably stable scores over the years, more stable even than other individual differences such as intelligence.

This capacity for, let us call it... empathy, is certainly one that hypnotoad could exploit. But it's also one of vital importance. Rather than being a fundamentally flawed species, I think we're one whose capabilities form a very sharp but crucially double-edged sword. We have yet to learn how to wield it safely.

Moving on to what ? Fascism ?

I've read the much-awaited Sue Gray report in full. As you may imagine, I am not happy.

First let me start by mentioning a number of oddities about the report itself. By her own admission, it's incomplete :

Given the piecemeal manner in which events were brought to my attention, it is possible that events took place which were not the subject of investigation.

Which is understandable, but given the goal of only establishing a broad narrative of what happened and not assigning individual responsibility, it does mean the report is extremely limited. This would not be so bad were the independent investigation by the Metropolitan Police itself not subject to deep concerns about impartiality and transparency, e.g. by not naming who was fined and by only fining select individuals at the same events. This means there isn't all that much scope for accountability for those right at the heart of government :

It is not my role to make a judgment on whether or not the criminal law has been broken; my focus has been on establishing the nature and purpose of the events and whether those events were appropriate in light of the Government's own guidance.

I also find the relationship between the two investigations to be a bit suspicious. Gray says she didn't think it necessary or appropriate to investigate matters the police were dealing with, but doesn't say why. Surely two independent investigations are better than one ? And she provided materials to the police, but this flow of information seems to have been only one-way. While there are legitimate grounds not to go through every detail with a fine-tooth comb (and I agree with Gray that there is no need to release every photo examined*), this just seems far too weak. We should have the absolute right to know when those in government commit illegalities. It does not make any sense to me not to release the names.

* Total transparency would be a mistake. Insofar as one-on-one speech is concerned, I have some very strong sympathies for free speech absolutism. In private settings we should have the right to say things we don't really mean without being dragged over the coals, and we should not make private discussions public retroactively. It is only when we enter explicitly public, open venues that I would take a very different stance.

And yet, even given its highly limited scope and somewhat suspicious context (especially the meeting with Gray apparently initiated by Downing Street), the report itself to me still seems to be damning.

I will not do a line-by-line dissection of the report, as there doesn't seem any need. But in the interests of fairness, I will say that a few events do seem to have been in broad agreement with government guidance. The presence of food and even alcohol is not what the controversy is about. Of course you're going to need food at a long meeting, that is unavoidable. And if you want a glass of wine or a beer, I don't have a problem with that. Even if you take a moment to do some non-productive work activity, e.g. thanking colleagues, wishing them well if they're leaving, that sort of thing... that's basically an unavoidable necessity, in my view. Having a drink outside does not automatically constitute a garden party. A few minutes to thank colleagues does not transform a work meeting into a social event.

This, however, is to expose the ridiculousness of "beergate". Because it was never about whether people had food (even cake !) or drank alcohol. It was about whether they gathered together exclusively for socialising (in contravention of the rules) and whether they lied about it afterwards. It has very little to do with whether the events in question actually posed a significant public health risk, but with whether those at the top of government were a bunch of lying hypocritical scumbags. On what grounds should they be allowed to have shindigs when other people couldn't have funerals ? How could the Prime Minister possibly be confused about whether he was at a party or a work event ?

A few of the events in Gray's report do seem to fit into the non-productive work category. But the vast majority, including some the Prime Minister attended, don't. Some highlights :

..."would like to do speeches tomorrow when we have our drinks which aren't drinks"...

Helen MacNamara, Deputy Cabinet Secretary, attended for part of the evening and provided a karaoke machine which was set up in an adjoining office to the waiting room.

The event lasted for a number of hours. There was excessive alcohol consumption by some individuals. One individual was sick. There was a minor altercation between two other individuals.

The event broke up in stages with a few members of staff leaving from around 21.00 and the last member of staff, who stayed to tidy up, leaving at 03.13.

Alcohol and food was available in Downing Street and at Whitehall, supplied and paid for by staff attending. The quiz and prize-giving lasted approximately three and a half hours.

A small number of individuals (three or four) remained in the Pillared Room for a while longer and then went to the Private Office area, where they continued to drink alcohol until approximately 01.00.

There are plenty more like that in the report. The pattern is clear : most of the events were pre-planned, explicitly social events where the whole purpose was to drink alcohol and consume food, not to get any work done - sometimes until the small hours of the morning. Even if some of these were arranged in relation to colleagues leaving (and not exclusively social occasions like Christmas), this still goes well beyond the pale. Yes, you might normally have a special, outside-of-work-hours evening for this, but you can't do this during a pandemic - you just can't. So forcing it in during work hours instead is really rubbing the noses of everyone who actually did follow the rules in it. Sorry, but in these situations, you have to limit yourself to a brief thank-you, maybe a present-giving : not a whole evening and a party, for crying out loud.

And sure, during work hours you need breaks. That's fine. But you don't organise social activities for those breaks. And the fact that the messages sometimes show the attendees were actively trying to keep the events covid-safe only makes it worse : how could they possibly be aware of the dangers of the virus but not the rules around it ? How could it possibly not have occurred to them that these openly social events were in violation of rules to protect everyone from a virus that they themselves were in fact trying to avoid spreading ? That the Prime Minister himself says it "never occurred to him" I find deeply insulting.

Even if BoJo had never attended any of the events, to my mind it would still be a resigning matter. He presided over a culture of excess at the heart of government during a prolonged national emergency. If he wasn't aware of what his own staff were up to, he would clearly be too incompetent for government himself. But he did attend. He knew, inevitably, that there were explicitly social events happening in his own house. And I see no other interpretation of his denial of this than a barefaced lie, especially given that he routinely repeats factually incorrect statements in the House of Commons as a matter of course.

That the events took place at all is bad enough. We should expect better than hypocrisy from those running the country. That they repeatedly lied about the events afterwards, that the report shows they were aware of rule-breaking ("we seem to have gotten away with it"), is truly damning. If we can't trust the Prime Minister of the country with such trivialities as cake, if he is either so incompetent that he doesn't realise that parties until the early hours of the morning were going on beneath his very nose, or felt so devoted to the need to socialise that he felt compelled to lie about them... how can we trust such a man with anything ?

And the icing on the cake... now we find out the Prime Minister is marking his own homework again, changing the ministerial code so he won't have to resign. That's an outrage. Telling us all to move on is, not for the first time, deplorable. This is openly fascistic : not actually fascism, but very definitely in that direction. If you allow the Prime Minister to lie and change the rules about removing ministers from office, where exactly do you draw the line ? Just what accountability to standards is there ? Doing it entirely by general election ? That is pure populism, and altogether too close to true fascism for comfort. I honestly don't care that the situation is worse in other countries : all of us deserve better. And it's only by complaining, by getting angry at the small things, that we prevent the political offences from becoming real, personal offences.

I've done enough rhetorical pieces against Boris that I've run out of things to say, so I'll conclude this one very simply : fuck you, sir. Fuck you.

Friday, 20 May 2022

Review : The Anglo-Saxons

Let's follow up immediately on the last history post (about James Hawes' The Shortest History of England) with a review of Marc Morris' The Anglo-Saxons.

Morris is one of my favourite history writers, and he's definitely on fine form with this one. For starters, I'm going to give him enormous brownie points for the absolutely superb, common-bloody-sense structure of the physical book itself. There are figures within the main text as well as photographic plates. Colour figures on the plates are cited in the text by number, instead of just throwing in a random assortment of related pictures like practically everyone else does. Maps are included at the start of each chapter, not randomly scattered without rhyme or reason (which, again, is what everyone else does). And though the notes at the back do have some additional commentary as well as just pure references, these are few enough that there's no need to keep flicking back and forth. All this instantly makes the reading experience ten times more pleasant than most popular history books, which don't do any of this. Well done Marc, well done.

Personally, I would love to get Marc Morris and Francis Pryor to have a beer together. Their views are similar but different enough, and they're both just generally entertaining people, that there's no way it couldn't be an interesting conversation. Throughout the book, I kept wanting to re-read Pryor's Britain B.C. and Britain A.D. to examine their contrasting views properly. Unfortunately I don't have my copies of those here, so any comparisons I make will have to be done from memory.

Morris' biggest difference from Pryor is probably his views on the end of Roman Britain. Morris views this in traditionally cataclysmic style, not for nothing entitling the first chapter, "The Ruin of Britain" :

The archaeological record, previously so abundant, becomes almost undetectably thin.... within a generation the villas and towns of Roman Britain had been almost completely abandoned. The implication of this data is unavoidable : society had collapsed... huge numbers of people must have been on the move in search of food and shelter. People must have perished in huge numbers, through famine, disease and violence.

Pryor disagrees completely, pointing to the construction of large wooden buildings replacing the new-fangled stone monstrosities of Rome, as well as actual improvements in written Latin in some places. True, there was a seismic shift - a collapse, even - in government. But overall, the coming of the Romans didn't change things nearly as much as is popularly portrayed, with Britons having roads and sewers all of their own long before Rome did. And presumably some sort of national-scale "government", at least at times, as evidenced by Stonehenge. Celtic society was very different to Roman, but it would be an over-simplification to view it as necessarily less sophisticated. Rome hadn't introduced all that much that the Celts actually wanted or couldn't sustain for themselves if they did, so its departure, though sudden, needn't have been that much of a shock.

It would surely be a step too far to proclaim the departure of the legions as nothing of any import, still less that it might have been an actual good thing. The international reach of Rome, its efficient administration and record-keeping, was never imitated anywhere in the Celtic world. Still, perhaps it's not so outlandish for Pryor to hold to a view of the time as one more of unpleasant transition than true apocalypse.

Beyond this Pryor and Morris are in better agreement. Pryor argues that the Anglo-Saxon invasion flat-out never happened, with at most a handful of soldiers or raiders coming across from the continent. He, an archaeologist, views the historical record as prone to wild exaggeration by over-imaginative monks, with no archaeological evidence indicating any substantial change of the kind seen with the Roman invasion.

Morris, a historian, takes more of a compromise view. He doesn't subscribe to the wholesale replacement theory*, but thinks this may have happened in some areas, while in others there was a much more modest "elite transfer" - much more in line with Pryor's view that the transfer was primarily one of culture, not people. Overall, says Morris, the number of immigrants was indeed substantial, but it took decades and they never outnumbered, let alone replaced, the native Britons.

* And unlike Hawes, definitely doesn't think the sea is what enabled mass migration, thankfully.

So there we have a broad agreement between historian and archaeologist, but another disagreement arises immediately. Accepting that the Britons remained numerically superior, why was their culture replaced ? Pryor's answer is that their deepest values did endure and continue to this very day, but these are not the sort of bricks-and-mortar aspects of hard culture, which did indeed change. Morris' view is that the Romano-British culture was, frankly, shite (or at the very least in a state of total disarray), so the Anglo-Saxons saw nothing worth keeping (Pryor would probably spin this as the wise native Britons deciding that they liked these new ideas and cunningly adopted them voluntarily). And whereas Pryor views Christianity as having survived the fall - sorry, end - of Roman Britain in a generally healthy state, Morris says it was confined, ironically enough, to the Celtic margins of Wales and Cornwall. It's a romantic, captivating view of a doomed society clinging to the last vestiges of its culture, set against the backdrop of Tintagel with a horde of hairy barbarians pounding at the gates, slowly fading into history, then legend, and finally myth.

Blimey ! But this seems rather extreme. I wouldn't like to venture if and how Pryor and Morris could reconcile their views, but it's difficult to believe it could be as dramatic as that.

From this point on, though, I believe Pryor and Morris would probably get along very well. In fact, they'd probably join forces to tackle James Hawes' North-South divide theory. This is never explicit in Morris' book, and barely evident at all (even if one is looking) in the first few chapters at most. Actually, if anything Morris has quite the opposite view to Hawes in a couple of ways. First, for at least the early period of Anglo-Saxon Britain, it was Northumbria that was the dominant power, not the lowly southerners. Secondly, the rise of Wessex emerges as a story dependent in no small part on sheer blind luck.

This raises a bigger, more philosophical disagreement with Hawes. Whereas Hawes is looking for the big-picture reasons why history happened in the way it did, Morris is seemingly happier to ascribe the outcomes to pure happenstance. But actually, I think Morris has thought about this more deeply than Hawes. Rather than looking only for materialistic reasons driving history (which are important), he also considers the nature of different political structures. During the early phase dominated by rival warlords :

Power such as this, based on personal charisma and continuous military success, would always be volatile, and liable to challenge.

And later, after the heroic successes of Alfred and the stunning achievements (unfairly forgotten) of Athelstan :

War was avoided by a great council... but this development was clearly a blow to the notion that there was a single "kingdom of the English", and raised the prospect that Wessex, Mercia and Northumbria might once again go their separate ways.

For all their many successes, the dark age kings of Britain had failed to form a unified state in the way that Rome did. The problem of how to get disparate peoples to agree on such a system was one that wouldn't really be solved until the Norman Conquest. Until then it was largely - by no means entirely - a case of power based on strength and common consent as to who the ruler was, not on any deeper, more ideological notion defining kingship.

Yet this failure should not disguise the enormously significant progress which did happen throughout this era. Morris may be a historian but he doesn't neglect the archaeology, noting that the two are contradictory when it comes to the development of early kings - the former claiming a much earlier development than the latter attests. We do see an early egalitarianism as Pryor claims, but there is also a rapid development of an elite in the late sixth century. 

We should also remember that enormous amounts of information have been lost, with even the colossal construction of Offa's Dyke now being something which is at best poorly understood. Morris favours an interesting conclusion : it was partly military, being too large to be against small-scale raiders, partly a sheer symbol of political power, and perhaps most interestingly it was partly racial. The grand narrative in Morris' work is one of the development of the concept of Englishness, being largely about a people, not a place. Hence the Dyke is built not against the Northumbrians or the peoples of Wessex, but a group who were much more distinctly foreign - the Welsh.

This development was slow and gradual. The early kingdoms didn't exactly see eye to eye, so proclaiming the notion of "Englishness" was not about fellowship and brotherhood, but more about a power-grab from Wessex. Of course, it was eventually successful - today the idea of Wessex as a regional power in its own right has completely vanished. It's the English who have power, and yes within that there is a north-south divide, but it's the whole of England which is set against the other nations of Britain, not individual segments.

The book is a large and enthralling read, and I will skip over the details. There is a subtle, ineffable change as the book progresses. It's hard to describe, but the early period of mead halls and Viking raids feels like a different world from the later period of monasteries and pan-national kingship. Without trying to describe the differences (or their causes) explicitly, they nonetheless come across. Full marks to Morris on that score.

Sometimes he is more explicit about cultural developments. For example, he notes how ideas about divine retribution developed in monasteries and only later spread into secular culture - they were not an inherent factor in the Christian faith. Likewise the development of Alfred's fortified burghs only later and accidentally led to technological progress and the spread of villages. And though he notes that the importance of the famous witans (which could in principle meritocratically dictate who was king, but in practice invariably selected the same family over and over again) was exaggerated, he does chart how restrictions came to be placed on royal power.

The final section of the book is perhaps its weakest, though to be fair this is covered much more in-depth in his other book The Norman Conquest. His chapters on Alfred and Athelstan are outstanding. He is careful to describe which of Alfred's achievements have been exaggerated or wrongly ascribed, but Alfred nevertheless emerges as a figure truly deserving of his epithet. Here is a book both critical and engaging, sacrificing neither narrative for the sake of rigorous scepticism nor the other way around :

The important and incontrovertible point remains that the scheme to turn Latin works into English was Alfred's own initiative. He selected the texts he thought were "most necessary for all men to know" and discussed their contents with the scholars... Without Alfred directing their labours, none of this would have happened.

Not all of Alfred's schemes were so successful... [his] ships were vaingloriously large affairs, and hence less effective than they might otherwise have been. When they were sent out to confront a small Danish fleet that same summer, all the king's new vessels ran aground.

He was clearly not the superhero of Georgian and Victorian myth, [not] the founder of the Royal Navy, let alone "the most perfect character in history", as one nineteenth-century scholar hyperbolically insisted. But he was courageous, clever, innovative, pious, resolute, and far-sighted : qualities which, taken together, more than justify the later decision to honour him with the word "great".

As for Athelstan, I shall leave that for another post when I review Sarah Foot's Athelstan, which is, to be honest, pretty awful. Instead I shall close with another lengthy quote from Morris - I would have liked a longer, more general conclusion, but his summary is still excellent (I note also that he again agrees with Pryor that feudalism, serfdom and slavery pre-date the Conquest). Factoring in the brownie points for a well-organised text, I'm giving this one 9/10.

Much of that England is now gone forever. The Anglo-Saxons never truly believed, as the Romans did, that they were building for eternity. Their timber halls and hunting lodges burned long ago, as their owners anticipated... With one or two notable exceptions, the physical legacy of the Anglo-Saxons is thin, their surviving monuments few.

A lot of what is often touted as the enduring legacy of the Anglo-Saxons proves on closer inspection to be mythological. The claim that they invented representative government because their kings held large assemblies ignores the fact that other rulers in contemporary Europe did the same. The belief that they were pioneering in their love of freedom requires us to forget that their nearest continental neighbours called themselves the Franks - that is, the free people. Their laws and legal concepts were mostly gone by the twelfth century, replaced with newly-drafted Norman ones. The notion that they considered themselves to be uniquely favoured by God has lately been discredited on the grounds that no surviving document actually claims that distinction on their behalf.

And yet, though their buildings are mostly gone, and their myths have been dispelled, a great deal of the Anglo-Saxon inheritance remains. The head of the English Church is still based at Canterbury... Westminster is [still] the political heart of the kingdom... The shires of England, though tinkered with, are essentially the same as they were at the time of their creation more than 1,000 years ago. Most English villages can boast that they are first mentioned in the Domesday Book, but their names often indicate a history that began centuries earlier. The fact that so much remains is remarkable. Roman Britannia, despite the grandeur of its ruins, lasted barely 400 years, and was over by the mid-fifth century. England is still a work in progress.

Tuesday, 17 May 2022

Review : The Shortest History

I saw James Hawes' The Shortest History of England in an airport bookshop. I almost picked it up there and then, but something stayed my hand so it took several more months before I finally read this.

It's an interesting little book, and not at all what I was expecting. It's not actually that short, and anyone claiming to have actually followed the tagline and read it in a day is certainly a liar. Would they manage the tagline's other claim, of remembering it for a lifetime ? Probably not. It's good, but it's not that good.

These descriptions, plus the copious use of illustrations, gave me the impression that this might be designed as an entertaining, humorous read. It isn't. It's light reading, and I suppose it's entertaining in that it's interesting, but it's not funny and isn't supposed to be. While the illustrations look superficially like they're dumbing things down, I actually found them, in the main, to be neat, concise ways of summarising important points memorably. It's almost like reading a set of revision notes for a history exam.

Early on I was dismayed by a profoundly stupid remark that the sea enabled mass migration to Anglo-Saxon-era Britain in a way that wasn't possible elsewhere due to the harshness of overland routes. This doesn't make any sense to me at all. Early sea travel was profoundly dangerous, whereas the mass land migrations of the Huns and other Asiatic tribes centuries earlier didn't seem all that problematic. I still have difficulty wrapping my head around this whole weird notion.

But, one or two other oddities aside, this is an exception to the general trend. The book is a good history in itself, though necessarily thrifty with details. The real goal, though, is not to provide a history per se, but to give the reader a distilled set of conclusions the author has obviously spent a good deal more time formulating (citations litter the work like sand on a beach). Sometimes I wished it had been considerably longer, although I did find that reading it quickly was advantageous. This is not a deep analytic history where you can pore over every page, but one that you should read rapidly to get swept along by the general current. Go over it with a fine-toothed comb and you'll have a nasty time of it.

Throughout, Hawes is keen to emphasise what in his view has made England unique - both in good and bad ways. Not all his conclusions are convincing and it would benefit a lot from a more detailed look at the other countries he occasionally contrasts England with - without these, the reader just has to take it all on trust. He also seems to fall victim to a sort-of mythical golden age fallacy, where we are currently in an inevitable era of decline and fall. Making the history deliberately short has some considerable virtues, but it also makes its sweeping, grand narrative feel rather less carefully considered than it ought to. 

And yet, while there are innumerable details one could quibble over, no small number of which deserve to be shot down in flames (the claim that the practice of grave goods ended within a generation is scarcely credible - we still do this today !), this would be to do the work a disservice. Instead I will withhold excessive pedantry and concentrate on the major conclusions.

Colonialism is a running theme throughout the book. England itself was born as a Norman colony, which enabled a kind of meritocratic social mobility - at least for a while. While the native English were replaced wholesale as political powers, many were still rich. Whereas previously the established system meant that everyone's place was assigned through genealogy, the desire of young pro-Norman English to stay rich meant there was now a route to reclaiming something of their family losses through marriage. The colonial system meant that the primary means of social stratification occurred not through birthright, but through a sort of cultural meritocracy : learn French language and culture, and you too could join the new elite.

This led, according to Hawes, directly to Parliament. He says :

If England had been a normal country with its own ruling class, Edward [I] might have tried to tame the aristocracy by giving the peasants (who could be taxed more easily) rights over their own property. This is exactly what happened in France in this era. But Edward, as a French-speaking king, could hardly side with English-speaking peasants against his own, French-speaking nobility. So he admitted that he needed to negotiate - parley - with them.

There's an awful lot to unpack there, but Hawes doesn't, because then it wouldn't be the shortest history.

Hawes is also a bit schizophrenic when it comes to the importance of the English populace at large, at various times insisting their entire way of life was "dead", only for the English to come out on top of the current elites again a few pages later. It's a bit odd.

Similarly, in later years (the Roman-medieval period taking up a disproportionately small section of the book) this led to a meritocratic Empire on England's part. Since England's wealth came from taxing businessmen, this led to a virtuous circle in which the rich elites were those who taxed themselves : their wealth fuelling the state, which was represented by the businessmen themselves, who fed the taxes back into investments in business. And the meritocratic, colonial nature of England itself held firm. The language of Gibbon "wasn't anybody's natural language; it was the property of no ethnic group. It was something that you had to learn - that anybody could learn if they had the right sort of education."

As for the Empire, Hawes subscribes to the popular notion that it happened only because life in England was so bad that everyone was desperately trying to escape it. Which is a bit at odds with the notion of such a meritocratic state. It also feels somewhat beside the point to claim that the rulers were loyal to a fictitious vision of England and so, by extension, couldn't be considered truly nationalist. Or indeed when he goes on to claim that Brexit wasn't about racial attitudes but against a foreign elite imposing their will on the down-and-out English. He's not wrong in that that's what Farage et al. claimed (Hawes, I should stress, does not have any real sympathy for the Brexiteers), but it seems strange in the extreme to claim that this explicitly anti-foreigner attitude isn't racism.

Hawes' other major theme - indeed the major theme of the book - is the north-south divide. In Hawes' view, this has been the dominating factor in British society since time immemorial. He has innumerable maps illustrating clear differences between north and south, but he makes his case too strongly. His view is one in which the south of England is just innately better than the north, and has always been clearly dominant. This, frankly, is just nonsense, which I'll have more to say about in the next post when I review a very different book (Marc Morris : The Anglo-Saxons).

What I think he's doing here is a little bit of the Texas Sharpshooter fallacy : he cherry-picks maps that support the notion of a divide, and ignores any commonalities that exist or any trends which are more complex than this. This doesn't mean he's wrong to say there is a difference; it's just that he heavily overstates the case. Every aspect of modern culture and politics, he says, is inherently dominated by this. For example, on the formation of the BBC :

The new BBC, founded in 1922, adopted RP and by 1926, 2.25 million licensed radios were pumping out the accent of the public schools across the land. Yet again, the ambitious of England were told : come, talk like us, and set yourself apart from your own folk.

And politics :

At the 1924 election, former Southern Liberal voters went Tory, and stayed there; former Northern Liberal voters went with Labour, and stayed there. This finally locked down the political North-South divide... The Conservatives were no longer facing off against a genuine rival English party. The opposition now was the Party of Outer Britain (Northern English + Celts) a.k.a. Labour. This hardened the age-old suspicion among Southerners that the North was somehow not properly English.

This is too much ! Of course it ignores any other demographic factors that could be at work : the Tories dominate not the south but the countryside in general; Labour's heartlands are not any particular latitudes but the cosmopolitan cities dominated by working people. And a historian should most definitely know better than to say that a mere century ago things were "finally locked down".

Up until this point, Hawes is overall optimistic about Britain in general (some notable exceptions aside). But now he paints a picture of nothing but decline and fall. The North-South divide forms a perfectly vicious circle of polarising politics, while financial bailouts from the Americans after WWII came at the price of such huge defence spending commitments that we inevitably remained financially crippled, little more than an American satellite state.

As the world thaws, these islands will resume the course charted in 1885. Soon, the UK will end. No doubt it will happen as suddenly and unexpectedly as the Eastern Bloc in 1989. With luck, it will be peaceful. The English will emerge from the empires of their elites, to find themselves alone, wondering who they really are after all this time - and as divided as they were when the North-South split was first noted by Bede 1,300 years ago.

As taxes rise, as the rhetoric of levelling up yields to reality, as Brexit goes wrong - and it is going very wrong indeed in Ulster -... is it time to admit that England on its own has rarely, if ever, been a functioning nation-state ?

All of which feels understandably but ludicrously pessimistic, even cynical - and worse, the kind of cynical pessimism that's curiously loved by every single generation that's ever lived. I don't buy it.

I commend the author for trying to paint a broad-brush picture of history and trying to get at the underlying causes of what happened. We need more books that try to do this, to connect small-scale events to big-picture patterns rather than simply giving us the raw facts. The book I'm currently reading, Sarah Foot's Athelstan, is a plodding scholarly tome that comprises essentially nothing but useless dates and events. Hawes' work is far, far better than that.

But while his conclusions are provocative, and surely not wholly wrong, I have to consider this ultimately a failure. It would have been a lot stronger if he'd picked some cases which transcend the North-South divide and then explained exactly what causes this, and therefore the general conditions under which we can expect to see North-South influences and when other factors ought to dominate. Likewise, it needs a lot more contrasting examples to illustrate exactly what was unique to England.

Finally, other historians definitely don't agree with Hawes' conclusions. No hint of an inherent North-South divide is evident in Marc Morris; Francis Pryor argues that many British traditions date back even before the Romans, a period Hawes ignores completely. Full marks for effort, Hawes, but you need to show your working. 7/10 from me.

Antitheists have it backwards

I would suggest that there is a profound misunderstanding among the antitheists about whether and how religion and stupidity are causally connected.

A stupid person may become religious, or they may not, but they will still be stupid. An intelligent person may or may not become religious, and they will still be intelligent. True, some beliefs are ideas that possess the mind, not ideas the mind possesses, but this is not a property of the ideas in and of themselves. They are a function at least as much of the person holding them. So yes, under some circumstances, some ideas can actually make you stupider, and some ideas (cough cough FLAT EARTH) it's just not possible to hold without being an idiot. Believing in the literal truth of parts of many religious texts is insane. But assuming that all religious people believe in the literal truth of their texts is equally bonkers.

In short, the causal relation probably goes largely in the direction opposite to what’s assumed. People who are religious nutters would probably have just become some other type of nutter if there wasn’t a religion to follow.

Or perhaps, to refine this a little... it's not exactly that stupid people become religious or that religion makes you stupid. A stupid religious person will believe stupid religious things. An intelligent religious person will believe intelligent religious things. The stupid one may follow their beliefs rigidly, because they've been told they're correct; but equally, the intelligent one may believe what they do because their beliefs genuinely seem to them to be correct - and they will understand not to apply them inflexibly or unquestioningly.

There is also a nice atheist take on Pascal's Wager which boils down to : you can't choose what to believe. Hence pretending to believe in God makes no sense. This is entirely fair. But surely, by the same token, on what grounds should it be acceptable to discriminate against the religious ? They no more choose to believe than they choose their skin colour. You could instead try treating people as individuals, examining what they actually do instead of what they say, and stop assuming they must adhere to a very childish view of what religion is supposed to be… my grandmother was a racist old bat who self-identified as a Christian but didn't believe in an afterlife; my most valuable mentor in astronomy is a Catholic. People are, in short, complicated, and treating them as simple tends not to accomplish very much.

Or to finish with an unpopular opinion... it's not that there are so many religious idiots in America because religion has been allowed to freely dominate there, as opposed to secular Europe... it's because American culture in general promotes idiocy. Tendency towards religious nuttery is a consequence of a root cause, not the cause itself.

Monday, 16 May 2022

Review : Doctor Strange And The Limited Multiverse Of Mild Befuddlement

This is a very preliminary review, because I have to confess I fell asleep during several parts of the film. I'll revisit this in due course. The following contains information about things that happen in the movie, but no spoilers.

I generally do like Marvel movies but I have a particular fondness for Doctor Strange. I felt that its philosophical leanings were a lot more interesting than one can generally expect for a comic book movie and the artistry of the special effects reflected that very well. "Not everything makes sense. Not everything has to." Plus the humour and Benedict Cumberbatch fit together perfectly.

I also like Sam Raimi, with Drag Me to Hell being absolutely hilariously superb. So when everyone said that this is much more of a Sam Raimi movie than a Marvel one, I naturally had high expectations.

Unfortunately I have to be somewhat of a lone voice - friends and critics alike tell me this is a great movie, but I was rather disappointed. It's not that I don't like it, it's just that I thought it could and should have been an awful lot better. There are some good moments, but nothing like the truly unhinged wildness I was expecting. Although the movie does go more Raimi as it goes on, for about two thirds at least it's very much mediocre Marvel with nothing very distinctive about it at all - it's rather humdrum. I'm forced to give it 6/10, maybe even 5/10 if I'm feeling harsh.

The first Doctor Strange was about as self-contained as a Marvel movie can be. This one really, really isn't. Fair enough that it expects you to have a working knowledge of the Marvel universe and to have seen at least the first Doctor Strange movie. Less reasonably, you also really need to have seen WandaVision, which I haven't. From the bits I have seen, it looked like a weird, experimental, extremely slow-burning piece which couldn't possibly be important in the grand scheme of things. And Wanda herself, to me at least, is never portrayed as a character of great importance despite being extremely powerful.

In this movie, though, Wanda is crucial, and worse, has changed drastically since the last Avengers movie. The reasons for this are just not sufficiently well-explained for me to ever fully accept it - it keeps feeling very forced. Just being told the reason for the change is nowhere near as effective as having actually experienced that development, so being asked to take it on faith isn't really good enough. The whole main plot of the movie is thus left feeling extremely hollow.

It could be argued that the Marvel universe is now just too darn big. To follow future movies, viewers will need to have watched not just the horde of films but also the swelling plethora of TV series, and that's just too much for most of the audience. There's probably some merit in that, but I'm not convinced it's unavoidable.

For instance here, you might remember that the end of the first movie sets up Mordo as an obvious future antagonist. This is referenced in the second film but this plotline just doesn't happen. To me that doesn't feel like a good narrative choice : you've given yourself an opening for a future self-contained villain but then ignored it. Now if you want to make a movie that's about Wanda instead, go full Wanda. Give her her own movie. Call it the Wanderverse or something. Don't merge it with Doctor Strange, who already has the next villain to fight... or at least, wait until the third film before you do that. Skipping the development from the end of the first film is almost like retconning. I can't think of any reason Mordo couldn't have been the villain for this piece while doing essentially the same plotline.

And if this movie might have been better titled with "Wanda" in it somewhere, it definitely shouldn't have been called the Multiverse of Madness. The existence of the multiverse is almost incidental to the plot, which is especially disappointing because Doctor Strange exploring the philosophical implications of a multiverse is something I would absolutely love to see. We don't get that. None of the character interactions depend on the existence of parallel realities in a way that couldn't otherwise have happened by some different mechanism*. As for madness, there's precious little of that. Rather than a gloriously anarchic ride through a myriad of different possibilities, we get a single, too-fast sequence of seeing a few different worlds (admittedly quite nicely done apart from being too rapid - not too short exactly, but too fast to see anything clearly) and that's about it. There's a lot of missed opportunities here... infinite, in fact.

* See the Pratchett/Baxter Long Earth series for a case where parallel realities are genuinely crucial to the plot.

I also felt that some of the effects sequences were unnecessary - yes, it's a Marvel movie, but even so, they should serve some purpose. It all feels a bit ad-hoc. And none of them had the artistic flair of the first movie - they're nice enough, but very generic.

This isn't to say the movie is without its moments of brilliance. I did enjoy the finale very much, but there needs to be much, much more like this in the rest of the film. Generally the pacing doesn't feel quite right, as if the movie is focusing on the wrong things at the wrong time, introducing too much of the wrong things while neglecting what it's already got to work with.

Oh well. I guess my homework is to watch WandaVision and then re-watch this movie and not fall asleep. Apparently I missed some of the best bits, so maybe I'll yet change my mind about this completely. We'll see.


EDIT : I finally watched WandaVision (which, though a very slow burn, ultimately does ignite rather ferociously and is excellent) and re-watched Doctor Strange 2, so I have to revise my opinion considerably. DS2 is a good movie which I'll happily upgrade to 7-8/10. I still think it could have used Mordo instead of Wanda, and there's not as much use of the multiverse as there could have been. But it's fun, clever, spectacular, and develops nicely from WandaVision (see also Spider-Man : No Way Home, which is great, but definitely don't watch without WV first). 

But though it does deserve to be called "Multiverse of Madness", it doesn't have the mind-warping vibe that the first movie did. Doctor Strange felt like an accessible way to explore different concepts of reality, presenting the starkly (no pun intended) materialistic neurosurgeon in contrast with the idealistic (in the philosophical sense) mystics. DS2 doesn't quite manage this. Yes, the multiverse is integral to the plot, but the film never really tries to use it to get the viewer to question their own world view. What it does, it does well. I just think it could have done that little bit more.

Sunday, 8 May 2022

It's not the post office

I've mentioned many times previously that social media requires regulation. I've explained my position regarding the ideals of free speech often enough that I don't feel the need to do so again in any detail. Suffice to say that the 'marketplace of ideas' metaphor is if anything too successful : real markets are full of people buying and selling utter crap. By itself, the market is just not a valid solution for optimising anything very much; with proper oversight, it's a different story. Nobody seriously believes in a totally free market. Nobody.

Today I just want to briefly cover a different angle from a recent discussion. Sometimes a counter-argument to regulation is made by way of comparison to more traditional media : TV, radio, telephone, the postal service, etc. We don't routinely eavesdrop on phone calls (at least, we're not supposed to), so why should social media be different ?

My previous answer has been that social media is just not fairly comparable to traditional media. I stand by this assertion : it offers mixed media, audience reach and participation, and longevity of content in a way so unique that it deserves to be treated as its own format. To compare it to a phone call is a bit like comparing a train to a horse : yes, you can use both of them to get around, but they're really not the same thing at all - even from just a transport perspective.

But it's not really very satisfactory to just point out that social media is different. After all, if we were asked why trains were important in the Industrial Revolution, we could hardly be content by saying "because they were better than horses". Far better to define the exact differences, the relevant rules that apply in all conditions to decide when regulation of content is legitimate. Otherwise we risk it becoming a special case, with the alleged need for monitoring being little better than "because I want to". If we can state the general rules, however, then no-one can accuse us of unfairness. 

And anyway, only pointing out that social media is different to other media is like saying that it's only the consequences which matter, i.e. that the ends justify the means. That's something I'd rather avoid.

Andreas Geisler's (yes, him again) suggestion in the discussion is that it's about promotion. Television and radio are clearly publishers, but the telephone and postal services are merely delivery agencies. They explicitly have no knowledge of the content they convey, nor anything of the sender and receiver besides names and addresses. This lack of knowledge is an explicit feature of the service, and is why we largely feel able to trust such services with confidential information. Without it, these services would be unusable.

Additionally, but secondarily to this, they don't host content either. They don't store anything a minute longer than they need to deliver it. They don't make the content available to anyone besides the intended recipients. True, they don't actively prevent resharing content either, but their very nature makes this difficult enough to deter almost everyone besides the truly obsessed. The responsibility for what happens to the content they deliver lies entirely with the recipient : they themselves accept responsibility for delivery only - nothing else.

Compare this with television, radio, or book publishers. These generate, host, and disseminate content, and as such are definitely responsible for the content they distribute. They are therefore subject to very different expectations and regulations.

What about social media ? Well, knowledge of the content posted is intrinsic to the business model, as is providing tools for resharing. You create the content, yes, but they facilitate both its delivery and availability. In stark contrast, the phone service would very much like you to talk to lots of people, and the postal service would be a lot happier if everyone started writing letters again, but they don't provide any direct means to keep you engaged. You don't want your phone company automatically interjecting when you're about to hang up with a voice saying, "did you also want to talk about the geopolitical situation in North Korea ?". It'd be like if the Matrix was real but run by Clippy.

Likewise if you hire a delivery van, they don't go out of their way to find you more stuff to move from point A to point B. That's just not what delivery companies are for. Nobody would want them to be like that, because it wouldn't work.

(There are plenty of specialist exceptions, e.g. delivering medical products, but we can probably safely ignore these here.)

And that basically is what social media outlets typically do. They actively recommend you particular communities, people, topics or other ways to find content you're interested in. Not even the worst cold callers will phone you up and give you a list of ten other people interested in donkeys because you happened to go to a farm zoo one morning, but that's quite a reasonable analogy for how social media works. What would be unbearable for the delivery sector is not merely desirable but to a large degree actually essential in the social media arena.

Incidentally, note how weird social media is compared to other outlets. TV is definitely a publisher, but it doesn't encourage resharing content. It likes you to promote references, yes - telling your friends about a good show boosts ratings, and TV likes that. But that's not sharing the content itself at all, any more than sharing the address of a museum is equivalent to giving everyone a physical copy of Tutankhamun's death mask. Nor does the phone service make it easy to record and reshare. One-off, user-initiated delivery is just not comparable to prolonged hosting and sharing within a wide community.

These basic hosting and sharing features are common to all the major social media outlets. And it's this intrinsic awareness of content which, it's suggested, is what makes them publishers. A choice is made based on content to form links between people. Indeed, creating and promoting a recommendation is itself a form of content creation and publishing.

And it's also a tacit endorsement of the content itself, if only to a limited degree. Just as a publisher authorising a cookbook doesn't have to think every meal is delicious but does have to ensure it won't actually poison anyone, so does a social media company recommending white supremacist groups to non-members bear some measure of responsibility for this. They're not wholly responsible for the content, but they can't be given carte blanche either. They didn't create the original content, but they do create recommendation links - which are essentially derivative works.

Crucially, this is by choice. No-one is forcing social media to exist, nor forcing it to recommend anything to anyone. Instant messaging services don't do this - they can still be fairly called delivery services. But in choosing to host, share, and recommend, social media companies make knowledge of content fundamental to their success. It's this aspect of choice which means that even if it's done algorithmically, their responsibility is at most only slightly diminished. This makes them publishers. Perhaps they are not 100% responsible for all content published on their site, but they are absolutely, 100% responsible for all content generated by themselves, even algorithmically.

To be fair, not all of it works the same way. There's no reason you can't have a system where the user alone is responsible for what they sign up to. You could in principle have a system where there are absolutely no recommendations whatever. Would this still be a publisher, or would it instead be reverting to a delivery agency ?

A better analogy here might be a bookshop. If the shop decides to label and organise the books to help with sales, they are responsible for the choices they make. They too are generating content (however limited) and making it available, i.e. they are themselves publishers. Even this can carry consequences, such as putting the Bible in the "history" section instead of the "religious studies" area.

But if the bookshop decides not to do this, to have a great big heap of books and allow anyone at all to come in and buy anything... that's probably not a publisher. It's also extremely stupid, like selling random items from an uncatalogued storage locker. What's this, an automatic rifle ? Sure ! A smallpox sample ? That'll be £3.50 sir, there you are... okay, such a shop would no longer be a publisher, but it would also be stupid and unworkable.

Yet the social media outlets I use are actually somewhat close to that. They don't provide any recommendation systems themselves, but let users set their own hashtags to label their own content for others to find. Needless to say, this is open to abuse, and using the wrong hashtags is one of the major annoyances. But actually it mostly works.

But does it alleviate the responsibility of the host ? At least partially, this is surely the case. They did not create the hashtags the users generate, so cannot be directly or entirely culpable for those : they aren't publishing anything. If they have no control over who joins the service, then we might even extend this to a total mitigation of responsibility - just as a phone company cannot be held responsible if a terrorist uses a public phone, but could be if it installed a private line for someone who was under an order barring them from having one.

So there are still no good direct analogues for social media. But there are at least a variety of comparable situations we can use for broad guidance, which I think goes quite some way towards settling the question. Anything you generate and make available counts as publication, hence a service which recommends contacts is a publisher. If you don't, and don't set restrictions on who can join, you're probably closer to being a delivery agency.

The model of a purely user-driven service is a grey area. The wider question, of course, is whether this is a good idea. While we should have means-driven, not ends-driven, rules, we still should consider the consequences. This power to infinitely generate and duplicate multimedia content, to share it globally with strangers, to directly interact with people in prolonged, recorded conversations, is unprecedented. True, for now mainstream media still dominates most people's thinking. But we would be fools to assume this will last indefinitely, or to think that the regulations which we accept for more familiar delivery services are perfect analogies. Evading the label of "publisher" somehow feels hollow if it still allows for the viral spread of misinformation.

For now, the only conclusion here is that yes, social media services are indeed largely publishers, and at a minimum must accept responsibility for anything they themselves create in relation to users' own content. How much further we should take this I leave for now as an open question.

Review : Three Mile Island

You may recall that I'm a big fan of HBO's Chernobyl, so it seems only fair that I should give Netflix's own version a go.

Meltdown : Three Mile Island clearly tries to compete/not-compete by being a documentary rather than a drama. Ironically, given that Chernobyl was entirely at liberty to bend the facts, Three Mile Island falls short in the important regard of explaining what actually happened. It does not have Jared Harris explaining the technical details using a series of flash cards, which is a sequence it badly needs. But then, so does pretty much everything.

For this reason the first couple of episodes are honestly a bit lame. It just doesn't cover the technical details nearly well enough to explain what went wrong, which makes the focus on the humanitarian side of things feel strangely hollow. Their stories would have been a lot more powerful, not less, had they properly explained how the (arguably near) disaster happened. I do feel it would have worked better as a trilogy, cutting out a lot of baggage in the first two episodes and replacing it with some much-needed science.

But stick with it, because in the second two episodes the show is easily as good as anything Chernobyl has to offer.

While the first half covers the incident itself and the reactions of those involved (the townspeople, the government, the engineers), the second half branches out to cover the repair operation - and the truly Chernobyl-scale disaster that was avoided by only the narrowest of margins therein. The main focus shifts from the ordinary public to two prominent figures : one who is clearly unaware of how much of a colossally stupid tit he sounds like, and the other a more straightforward tragic-hero type.

Recently I postulated that maybe one reason for such poor decision-making in the Soviet era was its intensely hierarchical structure. When you can't have any discussions without consequences, when everything you do is held to account and you have no choice but to rise or fall, you get a system where everyone punches downwards. In such a system, lying and self-interest are virtually one and the same. In contrast, a more egalitarian framework, where discussions with peers are largely consequence-free, allows far more respect for the truth, because most of the time no-one is punching anyone in any direction.

The second half of Three Mile Island thus presents an excellent counterpoint to Chernobyl. To oversimplify somewhat, the Soviet system gave the government far too much power. In capitalist America, the government did not have enough power - at least, not enough power independent of its corporate overlords. The unfettered drive for wealth leads to extreme inequality, and to corporations having no real, effective oversight. Indeed the show makes it clear that they actually murdered at least one prominent anti-nuclear activist, and that in this system profit comes before truth every time.

(It would be a very interesting study to compare if and how extreme wealth inequality leads not just to corruption but also the hierarchical structure characteristic of mismanagement, or if incompetence here occurs in a different structure.)

What makes the show at least feel like it has a high degree of credibility is that we hear from a wide variety of sources. When accusations of corruption are levelled, we get to hear the response from those in charge. Predictably, their responses are lame beyond belief, with the main respondent from the Nuclear Regulatory Commission dismissing everything as "drama" for some reason. And you really have the urge to shout at the guy that he's being absolutely pathetically arrogant. It's the sort of "I'm sorry you were offended" non-apology that characterises the corrupt, just as murdering the messenger is all but openly admitting culpability.

Best of all, the main protagonist remains avowedly pro-nuclear to this day. It's not the technology that's the problem, he says. It's the management structure, the profit motive. A nuclear industry run like this will never be viable.

I do have to wonder if this is a lesson that applies more generally. Honestly, transparency and accountability would have likely prevented the disaster in the first place. An independent system for monitoring the radiation release would have indicated just how bad things were, rather than - FFS ! - letting them mark their own homework by having the NRC trust the energy company's own reports. And it probably would have made this cheaper, by avoiding the need for the hugely expensive clean-up operations.

Likewise, American healthcare is preposterously expensive. I daresay that a free and fair competition does, under some circumstances, lead to lower costs for consumers. But it seems utterly mad that anyone is still insisting that the free market is always the answer. Ironically, sometimes it seems that it's profit that drives costs up, not down. 

It's not even that you can't have profits. You can. You can have a safe, profitable nuclear industry, for sure. But you can't have profit be the overriding motivation in any industry where the consequences of mistakes are people's lives.

Anyway, great show. 6/10 for the first two episodes, 9/10 for the second two. Well worth a watch.

Review : Pagan Britain

Having read a good chunk of the original stories, I turn away slightly from mythological themes and back to something more academical : the ...